Jupiter is a high-performance service that matches tests and other automated jobs to the machines best equipped to handle them. By removing this scheduling bottleneck, Jupiter cuts the time a job waits for a machine from minutes to milliseconds, saving hours of engineering time every week.
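As an illustrative sketch only (not Jupiter's actual design), the core idea of matching a job to the best-equipped machine can be expressed as a capability check: pick an idle machine whose capabilities cover the job's requirements, preferring the least-capable machine that fits so specialized hardware stays free. All names here are hypothetical.

```python
# Hypothetical capability-based matcher, sketching the job-to-machine
# matching problem the blurb describes (not Jupiter's real algorithm).

def match_job(job_requirements, machines):
    """machines: dict of name -> set of capability tags.
    Returns the name of the least-capable machine that satisfies all
    of the job's requirements, or None if no machine fits."""
    candidates = [
        (len(caps), name)
        for name, caps in machines.items()
        if job_requirements <= caps  # set containment: machine covers all requirements
    ]
    return min(candidates)[1] if candidates else None

machines = {
    "linux-box": {"linux"},
    "gpu-rig": {"linux", "gpu"},
    "mac-mini": {"macos", "xcode"},
}
match_job({"linux"}, machines)         # -> "linux-box" (gpu-rig also fits but is kept free)
match_job({"linux", "gpu"}, machines)  # -> "gpu-rig"
match_job({"windows"}, machines)       # -> None
```

Preferring the smallest capability set is one simple heuristic for keeping scarce machines available; a production scheduler would also weigh load, locality, and queue depth.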
At Facebook, every change made to our mobile code is checked by our open source static analyzer Infer. Despite Infer's advantages, one of its limitations has been its extensibility: adding a checker for a new type of bug was a complex task that required deep static analysis expertise as well as knowledge of Infer's internals. For this reason, we have introduced a new language called AL for easily designing new checkers that detect bugs. It requires no knowledge of Infer's internals — writing a new checker can normally be done in a few lines of code.
As more people across the world connect on Facebook, we want to make sure our apps and services work well in a myriad of scenarios. At Facebook's scale, this means testing hundreds of important interactions across numerous types of devices and operating systems for both correctness and speed before we ship new code. Today we introduced One World, a unified resource management system that gives engineers access to thousands of test devices, web browsers, and emulators in our data centers through a single API.
On Facebook, people share billions of photos every day, making it challenging to scroll backward in time to find photos posted a few days ago, let alone months or years ago. To help people find the photos they're looking for more easily, Facebook's Photo Search team applied machine learning techniques to better understand what's in an image and to improve the search and retrieval process.
One of the long-term goals in AI is to develop intelligent chat bots that can converse with people in a natural way. Since human dialog is so varied, chat bots must be skilled at many related tasks. Today the Facebook AI Research (FAIR) team announced a new, open source platform for training and testing dialog models across multiple tasks at once. ParlAI is a one-stop shop for dialog research, where researchers can submit new tasks and training algorithms to a single, shared repository, changing the way dialog research is done.
Today, the Facebook AI Research team released pre-trained vectors in 294 languages, accompanied by two quick-start tutorials, to increase fastText’s accessibility to the large community of students, software developers, and researchers interested in machine learning. In addition, fastText’s models now fit on smartphones and small computers like Raspberry Pi devices thanks to a new functionality that reduces memory usage.
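The pre-trained vectors ship in fastText's plain-text format: each line holds a word followed by the components of its vector (real `.vec` files begin with a `count dimension` header line, omitted in this toy input). A minimal sketch, using made-up 3-dimensional vectors in place of real 300-dimensional fastText embeddings, of parsing that format and finding a word's nearest neighbor by cosine similarity:

```python
import math

def parse_vec_lines(lines):
    """Parse fastText's text format: 'word v1 v2 ... vd' per line.
    (Real .vec files start with a 'count dim' header line, not included here.)"""
    vectors = {}
    for line in lines:
        word, *vals = line.split()
        vectors[word] = [float(v) for v in vals]
    return vectors

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-d vectors standing in for real 300-d fastText embeddings.
vecs = parse_vec_lines([
    "king 0.9 0.1 0.0",
    "queen 0.85 0.2 0.0",
    "banana 0.0 0.1 0.9",
])
nearest = max((w for w in vecs if w != "king"),
              key=lambda w: cosine(vecs["king"], vecs[w]))
# nearest == "queen"
```

The same loop over a real 294-language `.vec` file works unchanged once the header line is skipped; fastText's own tutorials cover loading the full binary models.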
Facebook's global data center infrastructure carries both egress and internal server-to-server traffic. As our bandwidth needs increased, we realized the need to split cross-data center traffic and internet-facing traffic into separate networks and optimize each individually. In less than a year, we built the first version of our new cross-data center network, called the Express Backbone.
This week we introduced Relay Modern, a new version of Relay, our JavaScript framework for building data-driven applications. Relay Modern is designed from the ground up to be easier to use, more extensible and, most of all, able to improve performance on mobile devices.
This week at F8 we open-sourced Litho, a declarative framework for efficient UIs on Android. Litho lays out components ahead of time in a background thread, and renders incrementally to deliver best-in-class performance and free developers from painstakingly hand-optimizing their UIs.
Read more about React VR, a new library that will allow developers everywhere to build compelling experiences for VR. Expanding on the declarative programming style of React and React Native, React VR allows anyone with an understanding of JavaScript to rapidly build and deploy VR experiences using standard web tools.
At F8 we shared our work on three new technologies that make 360 video more accessible under difficult network conditions: a gravitational predictor, AI-powered saliency maps, and a content-dependent streaming model.
Today at F8 we released the 360 Capture SDK. VR experiences can be captured in the form of 360 photos and videos instantly and then uploaded to be viewed in News Feed or a VR headset. Now, people no longer need the power of a supercomputer to capture their VR experiences. The SDK is compatible with multiple game engines and works even on baseline recommended hardware for VR without compromising quality or speed.
Facebook AI Similarity Search, or Faiss, is an open source library for efficient large-scale nearest neighbor search. Faiss is optimized for memory usage and speed and offers a state-of-the-art GPU implementation.
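To make the problem concrete, here is the operation Faiss accelerates, written as a brute-force sketch in plain NumPy: exact k-nearest-neighbor search by squared L2 distance. This is the baseline that Faiss's `IndexFlatL2` implements with heavy optimization, and that its approximate indexes beat by trading a little accuracy for far less work; the code below is illustrative, not Faiss's implementation.

```python
import numpy as np

def knn_brute_force(database, queries, k):
    """Exact k-nearest-neighbor search by squared L2 distance.
    A plain-NumPy baseline for what Faiss's IndexFlatL2 computes."""
    # Pairwise squared distances via ||q - x||^2 = ||q||^2 - 2 q.x + ||x||^2
    d2 = (
        (queries ** 2).sum(axis=1, keepdims=True)
        - 2.0 * queries @ database.T
        + (database ** 2).sum(axis=1)
    )
    idx = np.argsort(d2, axis=1)[:, :k]  # indices of the k closest database vectors
    return idx, np.take_along_axis(d2, idx, axis=1)

rng = np.random.default_rng(0)
xb = rng.standard_normal((1000, 64)).astype("float32")  # database vectors
xq = xb[:5]                                             # queries with known answers
ids, dists = knn_brute_force(xb, xq, k=3)
# Each query vector is its own nearest neighbor, at distance ~0.
```

This baseline scans every database vector per query; Faiss's contribution is making the same query answerable over billions of vectors within practical memory and time budgets.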
Modern web applications contain complex and dense user interface patterns — infinitely scrolling lists of content, menu bars, and complex data tables with interactive controls in cells, to name a few components. With a mouse pointer, a person can easily traverse the controls and items of an application. For a keyboard user, traversing a page via the Tab key becomes more cumbersome as the number of controls and items increases.
At Facebook, we are experimenting with a user interface pattern for traversing a page with a keyboard that we call a logical grid, which we hope will become a recognizable and expected pattern of traversing through UI components on the web.
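The navigation logic behind a logical grid can be sketched as a small pure function: arrow keys move focus between neighboring cells rather than Tab stepping through every control, and movement is clamped at the grid's edges. The function and key names below are illustrative, not Facebook's actual API.

```python
# Hypothetical sketch of logical-grid focus movement: arrow keys move
# focus one cell at a time, clamped at the grid boundaries.

def move_focus(row, col, key, n_rows, n_cols):
    """Return the (row, col) of the cell that should receive focus next."""
    if key == "ArrowUp":
        row = max(row - 1, 0)
    elif key == "ArrowDown":
        row = min(row + 1, n_rows - 1)
    elif key == "ArrowLeft":
        col = max(col - 1, 0)
    elif key == "ArrowRight":
        col = min(col + 1, n_cols - 1)
    return row, col

# In a 3x4 grid, pressing ArrowRight at the right edge keeps focus in place:
move_focus(0, 3, "ArrowRight", 3, 4)  # -> (0, 3)
move_focus(1, 1, "ArrowUp", 3, 4)     # -> (0, 1)
```

Because only the currently focused cell is a Tab stop, Tab exits the whole grid in one keystroke while arrow keys handle movement within it — the same division of labor used by established composite-widget keyboard patterns.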
Facebook uses machine learning and ranking models to deliver the best experiences across many different parts of the app, such as which notifications to send, which stories you see in News Feed, or which recommendations you get for Pages you might want to follow. To surface the most relevant content, it’s important to have high-quality machine learning models. More complex models can help improve the precision of our predictions and show more relevant content, but the trade-off is that they require more CPU cycles and can take longer to return results. With a type of predictive model called a gradient-boosted decision tree, we were able to evaluate more inventory in the same time frame and with the same computing resources, for up to a 5x improvement over plain compiled models.
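The evaluation step of a gradient-boosted decision tree model is simple at its core: each tree routes the input to a leaf, and the prediction is the sum of the leaf values across all trees. The sketch below illustrates that structure with two toy trees; it is a minimal illustration, not Facebook's ranking implementation (whose speedup came from how this traversal is compiled and laid out in memory).

```python
# Minimal sketch of gradient-boosted decision tree evaluation:
# the model's score is the sum of one leaf value per tree.

class Node:
    def __init__(self, feature=None, threshold=None, left=None, right=None, value=0.0):
        self.feature, self.threshold = feature, threshold
        self.left, self.right, self.value = left, right, value

    def is_leaf(self):
        return self.left is None

def eval_tree(node, x):
    """Walk one decision tree from root to leaf for input vector x."""
    while not node.is_leaf():
        node = node.left if x[node.feature] < node.threshold else node.right
    return node.value

def predict(ensemble, x):
    """Boosted prediction: sum of the per-tree leaf scores."""
    return sum(eval_tree(tree, x) for tree in ensemble)

# Two toy trees over a 2-feature input.
t1 = Node(feature=0, threshold=0.5,
          left=Node(value=-1.0), right=Node(value=1.0))
t2 = Node(feature=1, threshold=2.0,
          left=Node(value=0.25), right=Node(value=0.75))
score = predict([t1, t2], [0.9, 1.0])  # 1.0 + 0.25 = 1.25
```

Because evaluation is just repeated compare-and-branch, its cost is dominated by memory access patterns — which is why flattening trees into cache-friendly layouts, rather than chasing pointers as above, is where large CPU wins come from.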
Over the past few years, we've been working to upgrade our data centers to run at 100 gigabits per second. To do so, we needed to deploy 100G optical connections to connect the switch fabric at higher data rates and allow for future upgradability — all while keeping power consumption low and increasing efficiency. We created a 100G single-mode optical transceiver solution, which we've shared through the Open Compute Project.
Bryce Canyon, our next-generation high-density storage server, is designed to support more powerful processors and more memory, and improves thermal and power efficiency by taking in air underneath the chassis. Our goal was to build a platform that would not only meet our storage needs today, but also scale to accommodate new modules for future growth.