
March 6, 2018

Using Fog Computing & Networking for Drone Camera Efficiencies



By Dr. Aakanksha Chowdhery, Associate Research Scholar, Princeton University

 

This is part two of a two-part blog series on this topic.

 

In my first blog, I highlighted how fog computing works with drones to make them more efficient and to help them process necessary information locally. Fog-computing-based architectures can provide two key benefits to emerging drone applications:

  1. Drone applications can leverage contextual and historical knowledge by adaptively processing or compressing video data at the local edge node. The applications can then transmit only the video segments relevant to the application, or transmit them at lower resolution, depending on the network bandwidth available to the ground station (a small sketch of this bandwidth-adaptive selection follows the list). The ground station can establish global context from the shared video segments and provide timely responses to coordinate a fleet of drones without taxing the network.
  2. With fog, drone applications can compute approximate solutions locally and rely on ground stations (or the cloud) for more compute-intensive functions that improve the solution's accuracy and reliability. This lets drones trade off compute latency against accuracy or reliability by combining the best of local and cloud computing under network bandwidth constraints. Further, a fleet of drones can improve the accuracy or reliability of its local decisions to accomplish a successful target mission.
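
As a rough illustration of the first benefit, here is a minimal sketch of how an edge node might decide which captured segments to forward, and at what resolution, given the bandwidth currently available toward the ground station. The relevance scores, resolution ladder, and function names are hypothetical placeholders for illustration, not part of our published systems.

```python
# Sketch: bandwidth-adaptive selection of video segments at the edge node.
# (Hypothetical example; relevance scores and bitrates are illustrative only.)

RESOLUTION_LADDER = [(1080, 8.0), (720, 4.0), (480, 2.0)]  # (height, Mbps)

def select_segments(segments, available_mbps):
    """Return (segment_id, resolution) pairs that fit the bandwidth budget.

    `segments` is a list of (segment_id, relevance) pairs, where relevance is
    an application-specific score computed locally (e.g. detected motion).
    """
    chosen, budget = [], available_mbps
    # Forward the most relevant segments first, stepping down in resolution
    # as the remaining bandwidth budget shrinks.
    for seg_id, _relevance in sorted(segments, key=lambda s: -s[1]):
        for height, mbps in RESOLUTION_LADDER:
            if mbps <= budget:
                chosen.append((seg_id, height))
                budget -= mbps
                break
    return chosen

if __name__ == "__main__":
    segs = [("seg-01", 0.9), ("seg-02", 0.2), ("seg-03", 0.7)]
    # With a 10 Mbps budget: seg-01 at 1080p, seg-03 at 480p, seg-02 dropped.
    print(select_segments(segs, available_mbps=10.0))
```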

In particular, consider disaster-response applications such as search-and-rescue or surveillance. Here, fog computing can select the video segments a human operator needs to see, so that the operator can intervene and guide the drone's trajectory if needed.

Consider another example: live-event streaming and immersive gaming applications. Here, fog computing can adapt the video stream to channel quality and refine multiple-angle 3D views of the captured scene by sending more images as network bandwidth becomes available, enabling a high-quality video experience. Further, fog computing can effectively coordinate a fleet of drones without sharing all sensor data.

In our research at Princeton University, we have focused on the following key innovations toward a fog-computing-based architecture for networked drones deployed on missions in the air:

  • Adaptive video streaming. Drones stream the captured video data to the ground station or the cloud over a wireless channel whose link capacity varies with the drone's speed and with wireless interference. Conventional adaptive streaming algorithms fail to keep up with these fast channel variations and perform poorly.

Our lab has designed novel adaptive algorithms to enable reliable and stable streaming [1]. Our approach leverages a unique aspect of drones: their location trajectories are more predictable than those of other mobile platforms, so we can predict future link throughputs over the aerial channel with high accuracy. Our proposed algorithms adapt video quality to the wireless channel, drone speed, and number of ground stations. We also ensure that video-encoding complexity adapts to the available compute capability of the drone's processor.
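
As a loose sketch of the idea (not the calibrated model or algorithm in [1]), the example below predicts throughput along a planned trajectory using a hypothetical distance-to-throughput mapping, then picks the highest encoding bitrate the predicted channel can sustain at each waypoint. All constants and function names are illustrative assumptions.

```python
# Sketch: trajectory-aware bitrate planning for a drone video stream.
# The distance-throughput model and bitrate ladder below are hypothetical.
import math

def predicted_throughput_mbps(drone_pos, ground_station, max_mbps=20.0):
    """Crude path-loss-style model: throughput falls off with distance."""
    d = math.dist(drone_pos, ground_station)
    return max_mbps / (1.0 + (d / 100.0) ** 2)

def plan_bitrates(waypoints, ground_station, ladder=(8.0, 4.0, 2.0, 1.0)):
    """Pick the highest bitrate below the predicted throughput per waypoint."""
    plan = []
    for wp in waypoints:
        tput = predicted_throughput_mbps(wp, ground_station)
        rate = next((r for r in ladder if r <= tput), ladder[-1])
        plan.append((wp, round(tput, 1), rate))
    return plan

if __name__ == "__main__":
    path = [(0, 0, 30), (50, 0, 30), (150, 0, 30), (300, 0, 30)]  # x, y, z in meters
    for wp, tput, rate in plan_bitrates(path, ground_station=(0, 0, 0)):
        print(f"waypoint {wp}: predicted {tput} Mbps -> encode at {rate} Mbps")
```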

 

  • Parallel successive-refinement-based streaming. Complex processing tasks, such as mosaicking and multi-target detection and tracking, require compute-intensive processing and incur high latencies even on the latest drone processors. Their accuracy requires high-resolution video frames, but network constraints limit such streaming.

We have designed parallel successive-refinement-based algorithms that enable drones to adaptively stream the most important image frames to the ground station, which renders an approximate solution and then refines it in parallel by fetching more images to improve accuracy over time. We further extend this approach to fuse multiple drones' views when they cover overlapping areas [2].
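
The toy example below conveys the general shape of successive refinement, though not the actual algorithm in [2]: the most important frames are streamed first, a placeholder `analyze` routine stands in for the compute-intensive task run at the ground station, and each round of additional frames refines the previous result.

```python
# Sketch: successive-refinement streaming with hypothetical importance scores.
# `analyze` is a placeholder for a compute-intensive task such as mosaicking.

def refine_in_rounds(frames, frames_per_round, analyze):
    """Yield (round_index, result) as progressively more frames arrive.

    `frames` is a list of (frame_id, importance); the most important frames
    are streamed first so early rounds already give a usable approximation.
    """
    ordered = sorted(frames, key=lambda f: -f[1])
    received = []
    for i in range(0, len(ordered), frames_per_round):
        received.extend(fid for fid, _ in ordered[i:i + frames_per_round])
        yield i // frames_per_round, analyze(received)

if __name__ == "__main__":
    frames = [("f1", 0.9), ("f2", 0.1), ("f3", 0.8), ("f4", 0.5)]
    # Placeholder analysis: "accuracy" simply grows with the frames fused so far.
    analyze = lambda ids: {"frames_used": len(ids), "est_accuracy": len(ids) / len(frames)}
    for rnd, result in refine_in_rounds(frames, frames_per_round=2, analyze=analyze):
        print(f"round {rnd}: {result}")
```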

 

  • Networked drone cameras. The potential of drone fleets has so far been exploited mainly by deploying them to cover disjoint areas. Live-event streaming applications, however, require a high-definition video experience for their users, and in such scenarios it does not suffice to simply cover larger areas. Much like multiple-antenna wireless technology, a fleet of drones can cover the same scene from multiple angles to improve coverage, or some drones can act as relays on the drone-to-server wireless link to boost the video bitrate.

We have developed a fog-computing-based framework that dynamically assigns a fleet of drones to optimize this tradeoff between spatial coverage, video resolution, and video-stream bitrate as a function of both drone speed and fleet size [3].
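
A toy version of this assignment problem is sketched below, assuming a hypothetical per-relay bitrate gain and per-camera coverage gain: given a fleet size and a minimum acceptable bitrate, it picks the split between camera drones and relay drones that maximizes coverage. The constants are illustrative only and do not reflect the optimization model of [3].

```python
# Sketch: splitting a drone fleet between filming and relaying.
# All gains and bitrates below are hypothetical placeholders.

def assign_fleet(fleet_size, min_bitrate_mbps, base_bitrate_mbps=2.0,
                 relay_gain_mbps=1.5, coverage_per_camera=1.0):
    """Return (num_cameras, num_relays, bitrate, coverage) for the best split."""
    best = None
    for relays in range(fleet_size):
        cameras = fleet_size - relays
        bitrate = base_bitrate_mbps + relays * relay_gain_mbps
        coverage = cameras * coverage_per_camera
        if bitrate < min_bitrate_mbps:
            continue  # too few relays to sustain the required video quality
        if best is None or coverage > best[3]:
            best = (cameras, relays, bitrate, coverage)
    return best

if __name__ == "__main__":
    # With 5 drones and a 5 Mbps floor: 3 cameras, 2 relays, 5.0 Mbps, coverage 3.0.
    print(assign_fleet(fleet_size=5, min_bitrate_mbps=5.0))
```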

By taking advantage of the hierarchical nature of the fog computing and networking architecture, we can address these challenges and deliver real-time processing of video captured and delivered by drones. For additional insights into my work at Princeton, please view the video where I discuss low-latency video in fog environments.

 

Dr. Aakanksha Chowdhery has been an Associate Research Scholar at Princeton University since 2015. Her research work is at the intersection of mobile systems and machine learning, focusing on fog computing architectures that optimize the tradeoff between bandwidth, energy, latency, and accuracy for video analytics. Her work has contributed to industry standards and consortia, such as DSL standards and the OpenFog Consortium. She completed her PhD in Electrical Engineering at Stanford University in 2013 and was a postdoctoral researcher in the Mobility and Networking Group at Microsoft Research until 2015. In 2012, she became the first woman to win the Paul Baran Marconi Young Scholar Award, given for scientific contributions in the field of communications and the Internet. She also received the Stanford School of Engineering Fellowship and Stanford's Diversifying Academia Recruiting Excellence (DARE) fellowship. Prior to joining Stanford, she completed her Bachelor's degree at IIT Delhi, where she received the President's Silver Medal Award.

 

References

[1] X. Wang, A. Chowdhery, and M. Chiang, "SkyEyes: Adaptive Video Streaming from UAVs," Third Workshop on Hot Topics in Wireless (HotWireless'16), invited paper, New York, USA, 2016.

[2] A. Chowdhery and M. Chiang, "Model Predictive Compression for Drone Video Analytics," under submission.

[3] X. Wang, A. Chowdhery, and M. Chiang, "Networked Drone Cameras for Sports Streaming," IEEE International Conference on Distributed Computing Systems (ICDCS), 2017.

 

