{"_id":"59a8775b2bdf3600193d53ae","project":"545137a814af501a00b50cf9","initVersion":{"_id":"545137a814af501a00b50cfc","version":"1.0"},"user":{"_id":"5638f69b22afbc0d001f23c1","username":"","name":"Yammer Platform DL"},"__v":0,"createdAt":"2017-08-31T20:53:47.607Z","changelog":[],"body":"**Update: The Suggestions endpoint was successfully fixed on September 9, 2017.**\nAn issue has been reported which affects the REST API endpoint that provides suggested users to follow. The Yammer Engineering team has identified a fix, and has planned a release for next week. [The Suggestions endpoint](https://developer.yammer.com/docs/suggestionsjson) should be operational again on September 9, 2017. At that time, this post will be updated with the technical details.  \nThank you for your patience, \nThe Yammer Platform Team\n\n***Update #2: Technical details of the fix are added below:***\n\nFor a given user, the api/v1/suggestions endpoint compiles a list of suggested peers for the user to follow. This list is compiled from the results of a selection of algorithms that query for users in different categories (such as active users, recently created pending users, etc.). For certain algorithms, the results are intended to be consistent for any user in that network, so the query result is cached for one user and subsequent retrievals for other users in the network will draw from the cache. However, the query fetching user ids is slow even with an index due to it needing to scan an entire table. Because any user in a network can trigger a cache refresh, and cache keys tend to expire around the same time for many networks, we were vulnerable to slow queries which negatively impacted the database. The most recent update to this endpoint ensures that:\n\n *   ***** For each network, the cached user id set gets a random TTL between 18 - 36 hours so that the cache refreshes are spread out more over time.\n\n *   ***** The retrieval and caching of the user ids is done in an asynchronous job queue to avoid slowing down web requests.\n\n *   ***** Cache refresh attempts are started up to one hour before cache expiry with increasing probability of refresh over time (i.e. 0% probability at 3600s left, increasing linearly to 100% probability at 0s left). This handles the situation where if the cache is cold and many users log in at around the same time, we don’t enqueue a large number of redundant jobs.","slug":"suggestions-endpoint-rest-api-temporarily-unavailable","title":"Suggestions Endpoint (REST API) Temporarily Unavailable"}

Suggestions Endpoint (REST API) Temporarily Unavailable


**Update: The Suggestions endpoint was successfully fixed on September 9, 2017.**

An issue has been reported that affects the REST API endpoint that provides suggested users to follow. The Yammer Engineering team has identified a fix and planned a release for next week. [The Suggestions endpoint](https://developer.yammer.com/docs/suggestionsjson) should be operational again on September 9, 2017. At that time, this post will be updated with the technical details.

Thank you for your patience,
The Yammer Platform Team

***Update #2: Technical details of the fix are added below.***

For a given user, the api/v1/suggestions endpoint compiles a list of suggested peers for that user to follow. The list is built from the results of a selection of algorithms that query for users in different categories (such as active users, recently created pending users, etc.). For certain algorithms, the results are intended to be consistent for every user in a network, so the query result is cached for one user and subsequent retrievals for other users in that network draw from the cache. However, the query that fetches the user IDs is slow even with an index, because it needs to scan an entire table. Because any user in a network can trigger a cache refresh, and cache keys tend to expire around the same time for many networks, we were vulnerable to bursts of slow queries that negatively impacted the database. The most recent update to this endpoint ensures that:

* For each network, the cached user ID set gets a random TTL between 18 and 36 hours, so cache refreshes are spread out over time.

* The retrieval and caching of the user IDs is done in an asynchronous job queue, so the slow query does not slow down web requests.

* Cache refresh attempts start up to one hour before cache expiry, with the probability of a refresh increasing over time (0% probability with 3600 seconds left, rising linearly to 100% at 0 seconds left). This way, when many users log in at around the same time just as the cache is about to expire, we do not enqueue a large number of redundant jobs. A rough sketch of this refresh logic follows the list.
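
Below is a minimal Python sketch of the jittered TTL and probabilistic refresh-ahead behavior described above. The helper names (`cache`, `enqueue_refresh_job`, `fetch_suggestion_user_ids`) and the cache interface are assumptions for illustration only and are not Yammer's actual implementation.

```python
import random

# Start attempting refreshes up to one hour before the cached set expires.
REFRESH_WINDOW_SECONDS = 3600


def jittered_ttl_seconds():
    """Random TTL between 18 and 36 hours so that per-network cache
    refreshes are spread out over time rather than expiring together."""
    return random.randint(18 * 3600, 36 * 3600)


def maybe_enqueue_refresh(network_id, cache, enqueue_refresh_job):
    """Probabilistically enqueue an async refresh job as expiry approaches.

    The probability rises linearly from 0% with 3600 seconds left to 100%
    at expiry, so many simultaneous logins near expiry do not all enqueue
    redundant refresh jobs.
    """
    key = f"suggestions:user_ids:{network_id}"  # hypothetical cache key
    ttl_remaining = cache.ttl(key)  # seconds until expiry; <= 0 if cold

    if ttl_remaining <= 0:
        # Cache is cold: enqueue a refresh (a real system would presumably
        # also deduplicate in-flight jobs for the same network).
        enqueue_refresh_job(network_id)
        return

    if ttl_remaining <= REFRESH_WINDOW_SECONDS:
        refresh_probability = 1.0 - (ttl_remaining / REFRESH_WINDOW_SECONDS)
        if random.random() < refresh_probability:
            enqueue_refresh_job(network_id)


def refresh_job(network_id, cache, fetch_suggestion_user_ids):
    """Runs in a background job queue so the slow user-ID query never
    blocks a web request; stores the result with a jittered TTL."""
    user_ids = fetch_suggestion_user_ids(network_id)  # the slow, table-scanning query
    cache.set(
        f"suggestions:user_ids:{network_id}",
        user_ids,
        ttl=jittered_ttl_seconds(),
    )
```

In this sketch, web requests only call `maybe_enqueue_refresh` and read whatever is already cached; the expensive query runs exclusively inside `refresh_job` on the job queue, which matches the intent of the three changes listed above.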