
Facebook Developer Conference: FB Surround 360 Virtual Reality Camera revealed


Facebook has announced the Facebook Surround 360 virtual reality video camera. Surround 360 is an open, high-quality, production-ready 3D-360 hardware and software video capture system.

Surround 360 is a professional grade end-to-end system to capture, edit and render high-quality 3D-360 video. The system includes a design for camera hardware and the accompanying stitching code, and Facebook will make both available on GitHub this summer.


Facebook Surround 360 exports 4K, 6K, and 8K video for each eye. The 8K videos double the industry-standard output and can be played on Gear VR with Facebook’s custom Dynamic Streaming technology.

  • Facebook has designed and built a durable, high-quality 3D-360 video capture system.
  • The system includes a design for the camera hardware and the accompanying stitching code, and Facebook will make both available on GitHub this summer. FB is open-sourcing both the camera and the software to accelerate the growth of the 3D-360 ecosystem: developers can leverage the designs and code, and content creators can use the camera in their productions.
  • Building on an optical flow algorithm is a mathematically rigorous approach that produces superior results. FB’s code uses optical flow to compute left-right eye stereo disparity, and leverages this to generate seamless stereoscopic 360 panoramas with little to no hand intervention.
  • The stitching code drastically reduces post-production time. What is usually done by hand can now be done by algorithm, taking the stitching time from weeks to overnight.
  • The system exports 4K, 6K, and 8K video for each eye. The 8K videos double the industry-standard output and can be played on Gear VR with Facebook’s custom Dynamic Streaming technology.
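The optical-flow-based stereo step mentioned above can be illustrated with a toy disparity search. The following is a minimal numpy sketch, not Facebook's actual code: it uses simple block matching (sum of absolute differences) as a crude stand-in for optical flow, and all function names and parameters are illustrative.

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=8):
    """Toy horizontal disparity estimate between two grayscale views.

    A stand-in for the optical-flow step: for each pixel, find the
    horizontal shift that best aligns a small block in the left image
    with the right image (minimum sum of absolute differences).
    """
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            best_cost, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Shift a random texture right by 3 px to fake a stereo pair;
# the recovered disparity in the interior should be 3.
rng = np.random.default_rng(0)
right_img = rng.random((32, 48))
left_img = np.roll(right_img, 3, axis=1)
d = block_match_disparity(left_img, right_img)
print(int(np.median(d[8:-8, 12:-12])))  # median interior disparity: 3
```

A real system would compute dense sub-pixel flow rather than integer block matches, but the principle, matching corresponding pixels between adjacent cameras to recover depth cues, is the same.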

FB had to address three key challenges in building Surround 360:

  • The hardware (the camera and control computer)
    • The cameras must be globally synchronized. All the frames must capture the scene at the same time within less than 1 ms of one another. If the frames are not synchronized, it can become quite hard to stitch them together into a single coherent image.
    • Each camera must have a global shutter: all the pixels must see the scene at the same time. Cell phone cameras, for example, don’t do this; they have a rolling shutter. Without a global shutter, fast-moving objects will smear diagonally across the frame, from top to bottom.
    • The cameras themselves can’t overheat, and they need to be able to run reliably over many hours of on-and-off shooting.
    • The rig and cameras must be rigid and rugged. Processing later becomes much easier and higher quality if the cameras stay in one position.
    • The rig should be relatively simple to construct from off-the-shelf parts so that others can replicate, repair, and replace parts.
  • The camera control software (for synchronized capture)
  • The stitching and rendering software
    • Convert raw Bayer input images to gamma-corrected RGB.
      • Mutual camera color correction
      • Anti-vignetting
      • Gamma and tone curve
      • Sharpening (deconvolution)
      • Pixel demosaicing
    • Perform intrinsic image correction to remove the lens distortion and reproject the image into a polar coordinate system.
    • Bundle adjusted mutual extrinsic camera correction to compensate for slight misalignments in camera orientation.
    • Perform optical flow between pairs of cameras to compute left-right eye stereo disparity.
    • Synthesize novel views of virtual cameras for each view direction separately for the left and right eye view based on the optical flow.
    • Composite final pixels of left and right flows.
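Two of the raw-image steps listed above, anti-vignetting and gamma correction, can be sketched in a few lines of numpy. These are simplified illustrations under assumed models (a quadratic radial gain and a plain power-law gamma), not the calibrated per-lens corrections the released code will use.

```python
import numpy as np

def anti_vignette(img, strength=0.4):
    """Illustrative radial gain: lift pixels toward the corners to undo
    lens vignetting. The quadratic falloff model and `strength` value
    are assumptions; real pipelines use per-lens calibration."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r2 = ((yy - cy) / cy) ** 2 + ((xx - cx) / cx) ** 2
    gain = 1.0 + strength * r2
    out = img * gain[..., None] if img.ndim == 3 else img * gain
    return np.clip(out, 0.0, 1.0)

def gamma_encode(linear, gamma=2.2):
    """Map linear sensor values to display-referred values with a
    simple power-law tone curve."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

# A flat gray frame: the center stays at 0.5, the corners are lifted.
frame = np.full((5, 5), 0.5)
corrected = anti_vignette(frame)
print(corrected[2, 2], corrected[0, 0])
print(gamma_encode(np.array([0.5])))
```

In the actual pipeline these corrections run per camera before the extrinsic alignment and optical-flow stages, so that all inputs to the stitcher share a consistent color and brightness response.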

Source: Facebook

About the author

Rakesh Bhatia