Foxglove

Software Development

San Francisco, CA 8,739 followers

Visualize and manage multimodal data in one purpose-built robotics development platform to build better robots, faster.

About us

Foxglove is pioneering a new era for robotics development and embodied AI. Our powerful interactive visualization and data management capabilities empower robotics developers to understand how their robots sense, think, and act in dynamic and unpredictable environments. All with the performance and scalability needed to create autonomy and build better robots, faster.

Website: https://foxglove.dev
Industry: Software Development
Company size: 11-50 employees
Headquarters: San Francisco, CA
Type: Privately Held
Specialties: Multimodal Data Visualization, Multimodal Data Management, Robotics Development, and Autonomous Robotics Development

Updates

  • Understanding why autonomous robots do what they do can be very difficult. At its core, Foxglove is a visual analysis and debugging tool for multimodal data: the data a robot produces as it perceives the world. Examples include lidar scans, depth images, RGB camera feeds, sonar, 3D maps (indoor and outdoor), bounding boxes for the static or dynamic objects the robot has encountered, segmentation images, radar, time series, and much more.

    Understanding robots requires far more than the text logs and time series we've grown accustomed to over years of building cloud infrastructure and applications. Text and time series alone won't tell you why a robot believes something about its environment that isn't actually true. You also need to visualize live and recorded streams of data over time to triage incidents, improve and test your models, and repeat.

    Many factors can lead a robot to misdetect an object or take the wrong path: environmental conditions, timing, or even a bug introduced in the latest software update. Visualizing all the data and traversing it over time lets you debug quickly and better understand how your robots sense, think, and act. It also sparks ideas for innovation and improvement, and gives you the means to make those ideas real.

    The nuScenes dataset is a public large-scale dataset for autonomous driving developed by the team at Motional. Link to the dataset and to learn more about nuScenes in the comments. 👇 #DataViz #Analytics #Robotics
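    As a hedged illustration of the recording workflow the post describes (not something from the post itself), the sketch below logs a few timestamped JSON messages to an MCAP file, the open log format Foxglove plays back natively. It assumes the open-source mcap Python package; the topic name, schema, and message contents are hypothetical.

    # A minimal sketch, assuming the open-source mcap Python package
    # (pip install mcap): record timestamped JSON messages to an MCAP file
    # that Foxglove can open for playback. Topic and schema are hypothetical.
    import json
    import time

    from mcap.writer import Writer

    with open("drive_log.mcap", "wb") as f:
        writer = Writer(f)
        writer.start()

        # Describe the (hypothetical) detection message with a JSON schema.
        schema_id = writer.register_schema(
            name="Detection",
            encoding="jsonschema",
            data=json.dumps({
                "type": "object",
                "properties": {
                    "label": {"type": "string"},
                    "confidence": {"type": "number"},
                },
            }).encode(),
        )

        # A channel binds the schema to a topic and a message encoding.
        channel_id = writer.register_channel(
            topic="/perception/detections",
            message_encoding="json",
            schema_id=schema_id,
        )

        # Nanosecond timestamps drive the playback timeline.
        for label in ("car", "pedestrian", "cyclist"):
            now = time.time_ns()
            writer.add_message(
                channel_id=channel_id,
                log_time=now,
                publish_time=now,
                data=json.dumps({"label": label, "confidence": 0.9}).encode(),
            )

        writer.finish()

    Opening drive_log.mcap in Foxglove then lets you scrub through these messages over time alongside any other recorded streams.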

  • Striking a balance among maintaining optimal sensor performance, minimizing operational downtime, and preserving onboard resources has become trickier than ever for perception engineers working on autonomous vehicles. 🚘 Come to #Actuate2024 to hear Jeremy Steward, Senior Software Engineer at Tangram Vision, discuss the technical aspects of both "offline" and "online" (fully autonomous) calibration configurations, and how each provides different technical, performance, and operational advantages (or disadvantages) on both the production line and in the field. Get your tickets now 👉 https://buff.ly/3Rnfteb 🎟️

  • ADAS engineering seamlessly integrates diagnostics, planning, control, and perception. Core to this field is the synthesis of electrical and mechanical diagnostics with advanced planning and motion-control algorithms, allowing vehicles to navigate dynamic environments effectively. 🚘

    Key technologies such as 3D deep learning and computer vision are essential for optimizing trajectories and predicting road-user behavior; they transform strategic driving plans into adaptable, executable actions. Robust perception systems rest on spatio-temporal machine learning models for tasks like object detection, sensor fusion, and semantic segmentation. Engineers leverage advanced video processing, predictive analytics, and comprehensive sensor-data integration to build systems that accurately interpret and respond to environmental cues.

    Managing and visualizing all of the data needed to properly understand these systems is streamlined with Foxglove. Customizable layouts bring the data together for the real-time analysis needed to develop performant ADAS.

    The nuScenes dataset is a public large-scale dataset for autonomous driving developed by the team at Motional. Link to the dataset and to learn more about nuScenes in the comments. 👇 #DataViz #Analytics #Robotics
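    As a companion sketch (again an assumption, not part of the post), reading a recording back with the same mcap package shows the offline-analysis side of this workflow. The file name and topic carry over from the hypothetical example after the first post above.

    # A minimal sketch, assuming the mcap Python package: iterate over the
    # messages recorded on one (hypothetical) topic for offline analysis.
    import json

    from mcap.reader import make_reader

    with open("drive_log.mcap", "rb") as f:
        reader = make_reader(f)

        # iter_messages yields (schema, channel, message) tuples; filtering
        # by topic narrows the scan to the stream we care about.
        for schema, channel, message in reader.iter_messages(
            topics=["/perception/detections"]
        ):
            detection = json.loads(message.data)
            print(message.log_time, channel.topic, detection["label"])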

Funding

Foxglove: 2 total rounds
Last round: Series A, US$ 15.0M
See more info on Crunchbase