Technology Updates From F8 Day 2: Connectivity, VR Video and Brain-controlled Devices
The second day of the Facebook Developer Conference (F8) was dedicated to the future of technology. Facebook Chief Technology Officer Mike Schroepfer announced three main pillars the company will be working on over the next ten years: connectivity, VR video, and artificial intelligence, which we have written about before.
Project ARIES and Other Wireless Systems Focused on Bringing High-speed Internet Access
Facebook's primary objective is to figure out how to get internet access to the 4.1 billion people who are currently not connected. To do that, the company is investing in new technologies that dramatically reduce the cost of delivering internet access across the world. Facebook's Connectivity Lab is developing three technologies to connect the world.
Facebook introduced its new 60 GHz terrestrial connectivity system, Terragraph. The multi-gigabit wireless system is designed to bring high-speed internet connectivity to dense urban areas. Because the 60 GHz signal has a limited range, nodes are placed across a city at 200–250 meter intervals.
The company is currently testing Terragraph in the city of San Jose, California.
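The quoted 200–250 meter node spacing gives a feel for how dense a Terragraph deployment has to be. The back-of-envelope sketch below is our own illustration, not Facebook's planning tool: it estimates how many nodes a square district would need on a simple grid at each end of that spacing range.

```python
import math

def estimate_node_count(area_side_m: float, spacing_m: float) -> int:
    """Nodes on a square grid covering an area_side_m x area_side_m district.

    Illustrative only: real deployments follow street layouts and line-of-sight
    constraints, not a uniform grid.
    """
    # Fencepost count: endpoints of each side get a node too.
    nodes_per_side = math.ceil(area_side_m / spacing_m) + 1
    return nodes_per_side ** 2

# A hypothetical 2 km x 2 km downtown district at both ends of the quoted range:
dense = estimate_node_count(2000, 200)   # 11 x 11 grid -> 121 nodes
sparse = estimate_node_count(2000, 250)  # 9 x 9 grid -> 81 nodes
```

Even this rough estimate shows why the system targets dense urban areas: the economics depend on many short millimeter-wave hops rather than a few long-range towers.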
Facebook also presented ARIES, a prototype antenna array with 96 transmit antennas that provides 10x gains in spectral and energy efficiency.
Facebook is going to connect rural regions by air and millimeter-wave radio. Facebook's Connectivity Lab is developing MMW, an aircraft-based technology that can beam connectivity through the stratosphere.
Earlier this year the company tested a terrestrial point-to-point link in Southern California. The MMW technology demonstrated a record-breaking data rate of nearly 20 Gbps over 13 km.
Another approach to delivering wireless internet access is Aquila, a solar-powered unmanned aerial vehicle (UAV). It had its first test flight last year, and a prototype of the aircraft is currently in scale-model testing.
Last year Facebook also announced the Telecom Infra Project (TIP), an engineering-focused initiative that drives the development of next-generation technologies to make it easier and cheaper to deploy internet connections all over the world.
Mobile 360 Giveaway to Kickstart Posting 360 Videos
Video has been a huge part of VR. Right now, self-created spherical video content, which anyone can livestream to viewers, is favored by Facebook's ranking algorithm, EdgeRank. Capturing 360 photos and videos is something the company would like to make possible for everyone. To encourage users to share more spherical videos, Facebook gave each F8 attendee a Giroptic iO, a 360 camera that connects to smartphones.
New 360 Cameras That Capture VR Video
Last year Facebook launched the ability to post 3D-360 content and presented a high-quality 3D-360 camera system, Facebook Surround 360.
The company wanted to take the immersive experience further. One of the challenges of 3D-360 video was that, when viewing it in VR, the world stayed locked to wherever the camera happened to be placed, even if you moved. To fix that, Facebook designed the second generation of Surround 360 cameras: the x24 and the x6.
The camera layout is specifically designed to maximize pixel overlap at every single point on the sphere.
Predicting Where a Viewer Will Look Improves the 360 Video Experience
Deep learning has changed what computers are capable of. Joaquin Quinonero, Facebook's Director of Applied Machine Learning, described a new architecture the company built to improve the viewing experience of 360 videos.
The format is challenging to deliver because of its sheer size. Facebook is using machine learning to reduce the number of pixels that have to be rendered at any given time: the algorithm predicts where a viewer will look next and gives that location rendering priority, which is particularly helpful for users with lower-quality internet access.
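The idea of prioritizing rendering around a predicted gaze point can be sketched in a few lines. This toy is not Facebook's actual architecture (which uses a learned predictor); here the predicted gaze direction is simply given, and each tile of the sphere gets a quality level based on its angular distance from it.

```python
# Toy sketch of gaze-driven tile prioritization for 360 video.
# Assumption: the sphere is split into tiles by yaw angle, and a predictor
# has already produced `predicted_yaw`; tile angles and quality levels are
# hypothetical values for illustration.

def angular_distance(yaw_a: float, yaw_b: float) -> float:
    """Smallest absolute difference between two yaw angles, in degrees."""
    d = abs(yaw_a - yaw_b) % 360
    return min(d, 360 - d)

def tile_quality(tile_yaws, predicted_yaw, levels=(1080, 720, 360)):
    """Assign higher-resolution variants to tiles near the predicted gaze."""
    qualities = {}
    for yaw in tile_yaws:
        d = angular_distance(yaw, predicted_yaw)
        if d <= 45:
            qualities[yaw] = levels[0]   # likely in view: full quality
        elif d <= 90:
            qualities[yaw] = levels[1]   # periphery: medium quality
        else:
            qualities[yaw] = levels[2]   # behind the viewer: low quality
    return qualities

# Eight 45-degree tiles around the equator, gaze predicted at 10 degrees:
plan = tile_quality(range(0, 360, 45), predicted_yaw=10)
```

Most of the bandwidth then goes to the tiles the viewer is most likely to see, which is exactly why the technique helps on slow connections.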
Facebook’s Vision of Brain-controlled Devices
Regina Dugan, head of Facebook's Building 8 lab, shared with the F8 audience two of the projects her team is working on.
One of them is a device that uses implanted electrodes to record firing neurons, letting a person type with their mind. In testing, a woman typed around eight words per minute, but advances in this technology could raise that to around a hundred words per minute. That is the goal Dugan's team is working toward.
The team is also developing "skin hearing". The hardware used in tests allowed deaf users to "hear" sound through their skin: the software converts sound waves into frequency components and transmits them through the skin to the brain.
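The article only says the software "converts sound waves into frequency components"; a discrete Fourier transform is one standard way to do that, so the naive DFT below is purely an illustration of the concept, not Building 8's actual pipeline.

```python
import cmath
import math

def dft_magnitudes(samples):
    """Magnitude of each frequency bin of a signal (naive O(n^2) DFT).

    Illustrative only: a real system would use an FFT and further processing.
    """
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# A pure tone with 4 cycles across 32 samples should concentrate its energy
# in frequency bin 4 (and its mirror, bin 28), leaving other bins near zero.
tone = [math.sin(2 * math.pi * 4 * t / 32) for t in range(32)]
mags = dft_magnitudes(tone)
```

Once a sound is decomposed this way, each frequency component can in principle be mapped to a separate tactile actuator on the skin.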
The future of interacting with computers and other devices isn’t in keyboards or even voice interfaces, but in thought. One day it will be enough to think an action to input it into a device.
Originally published at Master of Code Global.