
Introducing the Pigweed SDK: A modern embedded development suite

Thursday, August 8, 2024

Back in 2020, Google announced Pigweed, an open-source collection of embedded libraries to enable a faster and more reliable development experience for 32-bit microcontrollers. Since then, Pigweed’s extensive collection of middleware libraries has continuously evolved and now includes RTOS abstractions and a powerful RPC interface. These components have shipped in millions of devices, including Google’s own Pixel suite of devices, Nest thermostats, DeepMind robots, as well as satellites and autonomous aerial drones.

Today, we introduce the first developer preview of the Pigweed SDK, making it even easier to leverage Pigweed’s libraries to develop, debug, test, and deploy embedded C++ applications. Using the included sample applications and comprehensive tutorial, you can easily get started prototyping simple programs and build up to more complex applications that leverage advanced Pigweed functionalities. Pigweed’s modern and modular approach makes it easy to design applications with significantly reduced debugging and maintenance overhead, thus making it a perfect choice for medium to large product teams.

We are also thrilled to contribute to the Raspberry Pi Pico 2 and RP2350 launch, providing official Pigweed support for the RP2350 and its predecessor, the RP2040. Building on the success of the Pico 1 and RP2040, the Pico 2 introduces the RP2350 microcontroller, bringing more performance and an exciting set of new capabilities at a much lower power profile. We’ve worked closely with the Raspberry Pi team not only to provide a great experience on Pigweed, but also to upstream a new Bazel-based build system for Raspberry Pi’s own Pico SDK.

Raspberry Pi Pico 2 (RP2350) with Enviro+ pack hat.

What's in the SDK

The Pigweed SDK aims to be the best way to develop for the Pico family of devices. The SDK includes the Sense showcase project, which demonstrates much of our vision for the future of sustainable, robust, and rapid embedded system development, including:

  • Hermetic building, flashing, testing, and cross-platform toolchain integration through Bazel
  • Fully open-source Clang/LLVM toolchain for embedded that includes a compiler, linker, and C/C++ libraries with modern performance, features, and standards compliance
  • Efficient and robust device communication over RPC
  • An interactive REPL for viewing device logs and sending commands via command-line and web interfaces
  • Visual Studio Code integration with full C++ code intelligence
  • GitHub Actions support for continuous building and testing
  • Access to pico-sdk APIs when you need to drop down to hardware-specific functionality

Utilize the Pigweed CLI console to communicate with your device through interactive Remote Procedure Calls (RPCs).
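
To give a feel for what this looks like in practice, here is a minimal sketch of an interaction in the console REPL. It assumes the device exposes Pigweed's standard pw.rpc EchoService and that the console provides a `device` proxy object, as in the Sense tutorial; the exact names may differ from what your project ships.

# Inside the interactive Pigweed console (a Python REPL connected to the
# device). `device.rpcs...` is the console's RPC proxy; the EchoService
# path is an assumption based on Pigweed's standard echo example.
status, response = device.rpcs.pw.rpc.EchoService.Echo(msg="Hello, Pico!")
print(status)        # OK if the round trip succeeded
print(response.msg)  # prints back "Hello, Pico!"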

By building your project with the Pigweed SDK (using the Sense showcase as your guide), you can start on readily available hardware like the Pico 1 or 2 today. Then when you’re ready to start prototyping your own custom hardware, you can target your Pigweed SDK project to your custom hardware without the need for a major rewrite.

Try Sense now

Bazel for embedded

Pigweed is all-in on Bazel for embedded development. We believe Bazel has great potential to improve the productivity (and mental wellbeing) of embedded development teams. We made the "all-in" decision last September and the Raspberry Pi collaboration was a great motivator to really flesh out our Bazel strategy:

  • We contributed to an entirely new Bazel-based build for the Pico SDK to make it easy for the RP2 ecosystem to use Bazel and demonstrate how Bazel takes care of complex toolchains and dependencies for you.
  • The new Sense showcase demonstrates Bazel-based building, testing, and flashing.
  • Our GitHub Actions guide shows you how to build and test your Bazel-based repo when pull requests are opened, updated, or merged.

Head over to our Bazel launch blog post to learn more about the benefits of Bazel for embedded.


Clang/LLVM for embedded

The Pigweed SDK fully leverages the modern Clang/LLVM toolchain. We are especially excited to include LLVM libc, a fully compliant libc implementation that can easily be decomposed and scaled down for smaller systems. The team spent many months developing and contributing patches to the upstream project. Their collaboration with teams across Google and the upstream LLVM team was instrumental in making this new version of libc available for embedded use cases.

The sample applications, Pigweed modules, host-side unit tests, and showcase examples already use Clang, LLD, LLVM libc and libc++. Thus, developers can take advantage of Clang’s diagnostics and large tooling ecosystem, LLD’s fast linking times, and modern C and C++ standard library implementations which support features such as Thread Safety Analysis and Hardening.


IDE integration

With full Visual Studio Code support through pw_ide, you can build and test the Sense showcase from the comfort of a modern IDE and extend the IDE integration to meet your needs. Full target-aware code intelligence makes the experience smooth even for complicated embedded products. Automatic linting, formatting, and code quality analysis integrations are coming soon.


Parallel on-device testing with PicoPico

As you would expect from a team whose mission is to make embedded development more sustainable, robust, and rapid for large teams, we are of course obsessed with testing. We have hundreds of on-device unit tests running all the time on Picos. The existing options were a bit slow, so we whipped up PicoPico in a week (literally) to make it easier to run all these tests in parallel.

One “PicoPico” node for running parallel on-device tests

RP2 support

The goal behind our extensive catalog of modules is to make it easy to fully leverage C++ in your embedded system codebases. We aim to provide sensible, reusable, hardware-agnostic abstractions that you can build entire systems on top of. Most of our modules work with any hardware, and we have RP2 drivers for I2C, SPI, GPIO, exception handling, and chrono. When Pigweed's modules don't meet your needs, you can still fall back to using pico-sdk APIs directly.


Get started

Clone our Sense showcase repo and follow along with our tutorial. The showcase is a suitable starting point for learning what a fully featured embedded system built on top of the Pigweed SDK looks like.


What’s next

The Pigweed team will continue to make regular, ongoing preview releases, with new features, bug fixes, and improvements based on your feedback. The team is working on a comms stack for all your end-to-end networking needs, factory-at-your-desk scripting, and much, much more. Stay tuned on the Pigweed blog for updates!


Learn more

Questions? Feedback? Talk to us on Discord or email us at [email protected].

We have a Pigweed Live session scheduled for August 26th at 13:00 PST, where the Pigweed team will talk more about the Pigweed SDK and answer any questions you have. Join [email protected] to get an invite to the meetings.


Acknowledgements

We are profoundly grateful for our passionate community of customers, partners, and contributors. We honor all the time and energy you've given us over the years. Thank you!

By Amit Uttamchandani – Product Manager, Keir Mierle – Software Engineer, and the Pigweed team.

Google CQL: From Clinical Measurements to Action

Wednesday, July 31, 2024


Today, many institutions are building custom solutions for understanding their medical data, as well as tools for acting on that data. A major pain point with the current approach is that these tools can be error prone and lack built-in medical context and representations of medical data structures. Enter Clinical Quality Language (CQL), a portable, computable, and open HL7 language specification for expressing computable clinical logic over healthcare data. We believe that CQL has the power to radically improve the future of data-driven workflows in healthcare. Over the past year at Google Health, our team has been hard at work building foundational tools for healthcare data analytics. Today we’re announcing the release of an experimental open source toolkit for Clinical Quality Language execution.

The Google CQL engine is an experimental open source toolkit that includes a CQL execution engine built from scratch in Go. We built this engine with horizontal scalability, ease of use, and high test coverage in mind. We wanted to make it easy to experiment with our engine, so we’ve included an easy-to-use CLI, a REPL, and a two-click-setup web playground! The toolkit is still a work in progress and we very much welcome input, contributions, and ideas from the community.


Why CQL

CQL represents a major shift away from the precedent of distributing clinical logic as free-text guidelines, which each institution implements in custom and often error-prone ways. Now, CQL allows clinical logic to be written once, distributed, and run anywhere in a single framework. Major standards bodies like Medicare, NCQA, and the World Health Organization (WHO) have already started to adopt and distribute clinical measures in CQL! (Check out these antenatal care measures from the WHO as an example.) We believe that CQL lowers the burden of writing, sharing, and computing complex clinical content.

CQL supports multiple common healthcare data models (such as FHIR and QDM) and is designed with common clinical concepts, tasks, and nested data structures in mind. For example, consider this comparison:

A side-by-side comparison of FHIR SQL (BigQuery) to CQL. This logic extracts CHD encounters with statins prescribed during the visit.

The FHIR SQL requires more boilerplate, unnesting, and custom value set handling; for this example, the CQL is clearly more readable, concise, and easier to understand than the SQL implementation.

If you’d like to see a more in depth CQL example with an explanation, see Appendix A.

As the healthcare industry has matured, so have the representations of Clinical Quality Measures. Previously, clinical quality mandates were provided as free-text guidelines, leaving each medical institution to implement them on its own; this was error prone and repetitive across the industry. Today there is a shift, with institutions like the WHO, CMS, and NCQA increasingly writing clinical measures in CQL.

Transition to standards-based Clinical Quality Measures diagram

Projects like the WHO Antenatal Care Guidelines exemplify the shift to openly distributed and executable measures. We believe that computable and shareable measures like these WHO SMART Guidelines are the future for expressing and sharing medical knowledge.


Our CQL Toolkit

We would love others excited about this work to check out our experimental CQL tools at https://github.com/google/cql. We continue to be very interested in welcoming external contributors, so we strongly encourage you to check out the repository, give it a try, and consider helping with any open issues. If you’re not sure where to start, reach out to us! We’d also like to hear from others about what they’re working on and how the Google CQL engine may fit into their toolchain; feel free to reach out at [email protected] or open an issue on the repository.

If you want to learn more about CQL see https://github.com/cqframework/clinical_quality_language and https://cql.hl7.org/index.html.


Appendix A: Simplified Diabetes CQL Example

library ExampleCQLLibrary version '1.2.3'
using FHIR version '4.0.1'

// Value sets identify the clinical codes that define each concept.
valueset Diabetes: 'diabetes-valueset-url' version '1.0'
valueset GlucoseLevels: 'glucose-levels-valueset-url' version '1.0'

// All expressions below are evaluated against a single patient record.
context Patient

define PatientMeetsAgeRequirement: AgeInYearsAt(Now()) < 20

// True if the patient has any diabetes condition with onset before now.
define HasDiabetes:
       exists ([Condition: Diabetes] c where c.onset before Now())

// The most recent glucose observation, ordered by effective time.
define LatestGlucoseReading:
       Last([Observation: GlucoseLevels] obs sort by effective desc)

define LatestGlucoseAbove200: LatestGlucoseReading.value > 200

define Denominator: PatientMeetsAgeRequirement and HasDiabetes

define Numerator: Denominator and LatestGlucoseAbove200

In this example, for a given patient record, the code selects individuals under 20 whose most recent glucose reading was above 200. Although this is a simple example, it is made simple because CQL provides a solid foundation on which to define and act on medical information and concepts.

By Evan Gordon and Suyash Kumar – Software Engineers 
Health AI Team: Ryan Brush, Kai Bailey, Ed Nanale, Chris Grenz

DAGify: Accelerate Your Journey from Control-M to Apache Airflow

Friday, July 26, 2024


In the dynamic world of data engineering and workflow orchestration, organizations are increasingly migrating from legacy enterprise schedulers like Control-M to the open-source powerhouse, Apache Airflow. However, this transition often involves a complex and time-consuming process of converting existing job definitions. DAGify emerges as a beacon of efficiency in this scenario, offering an open-source solution to automate the conversion of Control-M XML files into Airflow's native DAG format.

DAGify isn't just a simple conversion tool; it's a migration accelerator, designed to significantly reduce the manual effort and potential errors associated with transitioning to Airflow. While it might not provide a perfect 1:1 migration in every case, its primary goal is to expedite the process, allowing developers to focus on optimizing their workflows in the new environment.


Introduction

Control-M has served as a reliable workhorse for many organizations, but its proprietary nature and limitations can become roadblocks in today's cloud-centric and agile data landscape. Apache Airflow, with its flexibility, scalability, and thriving community, presents a compelling alternative. However, the migration journey can be daunting, especially when dealing with intricate Control-M job definitions.

DAGify steps in to bridge this gap, offering an intuitive and extensible solution. By automating the conversion process, it empowers organizations to embrace Airflow's capabilities without the burden of manual translation. This translates to faster migrations, reduced errors, and a smoother transition overall.


Technical Details

Under the hood, DAGify employs a template-driven approach, making it adaptable to various Control-M configurations and Airflow requirements. It parses Control-M XML files, extracting crucial information about jobs, dependencies, and schedules. This data is then intelligently mapped to Airflow's operators, tasks, and dependencies, preserving the essence of the original workflow. While still under active development, DAGify already supports key Control-M features like job and dependency mapping. The project roadmap includes further enhancements, such as handling custom calendars and expanding support for other enterprise schedulers.


Template-driven conversion

DAGify employs a flexible template system that empowers you to define the mapping between Control-M jobs and Airflow operators. These user-defined YAML templates specify how Control-M attributes translate into Airflow operator parameters. For instance, the control-m-command-to-airflow-ssh template maps Control-M's "Command" task type to Airflow's SSHOperator, outlining how attributes like JOBNAME and CMDLINE are incorporated into the generated DAG.

The template's structure field utilizes Jinja2 templating to dynamically construct the Airflow operator code, seamlessly integrating Control-M job attributes.
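
To make this mechanism concrete, here is a minimal, self-contained Python sketch of how a Jinja2 string can render Control-M job attributes into Airflow operator code. The template string below is purely illustrative, not DAGify's actual template; it assumes only the jinja2 library.

# Illustrative sketch of template-driven rendering (not DAGify's actual
# template): a Jinja2 string turns Control-M attributes into operator code.
from jinja2 import Template

structure = Template(
    '{{ JOBNAME }} = SSHOperator(\n'
    '    task_id="x_{{ JOBNAME }}",\n'
    '    command="{{ CMDLINE }}",\n'
    '    dag=dag,\n'
    ')'
)

print(structure.render(JOBNAME="job_1", CMDLINE="./hello_world.sh"))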

Example:

A Control-M task like:

<JOB 
  APPLICATION="my_application" 
  SUB_APPLICATION="my_sub_application" 
  JOBNAME="job_1" 
  DESCRIPTION="job_1_reports"  
  TASKTYPE="Command" 
  CMDLINE="./hello_world.sh" 
  PARENT_FOLDER="my_folder">
  <OUTCOND NAME="job_1_completed" ODATE="ODAT" SIGN="+" />
</JOB>

is converted to an Airflow operator using the control-m-command-to-airflow-ssh-gce template:

job_1 = SSHOperator(
    task_id="x_job_1",
    command="./hello_world.sh",
    dag=dag,
)
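
In context, the generated operator lives inside an ordinary Airflow DAG file. Below is a hedged sketch of what such a file might look like; the DAG id, schedule, connection id, and surrounding boilerplate are illustrative assumptions, and DAGify's actual output layout may differ.

# Hedged sketch of a complete generated DAG file; dag_id, schedule, and
# ssh_conn_id are illustrative assumptions, not DAGify's exact output.
from datetime import datetime

from airflow import DAG
from airflow.providers.ssh.operators.ssh import SSHOperator

dag = DAG(
    dag_id="my_folder",
    start_date=datetime(2024, 1, 1),
    schedule=None,  # a converted Control-M schedule would replace this
    catchup=False,
)

job_1 = SSHOperator(
    task_id="x_job_1",
    ssh_conn_id="my_ssh_conn",  # hypothetical Airflow connection
    command="./hello_world.sh",
    dag=dag,
)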

The repository includes several pre-defined templates for common Control-M task types. The config.yaml file at the project's root allows you to customize which templates are applied during the conversion process.


Leveraging Google Cloud Composer

For organizations seeking a fully managed Airflow experience, Google Cloud Composer provides a compelling solution. It eliminates the complexities of managing Airflow infrastructure, allowing you to focus on building and orchestrating your data pipelines. DAGify seamlessly integrates with Google Cloud Composer, making it even easier to migrate your Control-M workflows to a cloud-native environment.


Try it yourself

Eager to experience the power of DAGify? It's readily available as an open-source project on GitHub: https://github.com/GoogleCloudPlatform/dagify. The repository provides detailed instructions on setting up and running DAGify locally or within a Docker container.

Key steps to get started:
  1. Clone the repository: git clone https://github.com/GoogleCloudPlatform/dagify.git
  2. Install dependencies: make clean (This sets up a virtual environment and installs required packages)
  3. Run DAGify: python3 DAGify.py --source-path=[YOUR-SOURCE-XML-FILE]

Remember, DAGify is an ongoing project, and community contributions are welcome! If you encounter any issues or have feature requests, feel free to open an issue on GitHub.


Conclusion

DAGify represents a significant leap forward in simplifying enterprise scheduler migrations to Apache Airflow. By automating the conversion process and seamlessly integrating with Google Cloud Composer, it empowers organizations to embrace the benefits of Airflow more rapidly and efficiently. Whether you're a seasoned Airflow developer or just starting your migration journey, DAGify is a valuable tool to explore.

Remember:

  • Thorough testing is crucial: Always test your converted DAGs in a staging environment before deploying them to production.
  • Leverage Airflow's ecosystem: Explore the vast array of Airflow plugins and integrations to further enhance your workflows.
  • Stay engaged with the community: Keep an eye on DAGify's development and contribute to its growth if you can!

Happy migrating!

By Konrad Schieban and Tim Hiatt – Google Cloud


Acknowledgments

Thank you to the following team members who made this solution possible: Shreya Prabhu, Harish S, Slava Guzanov and Joanna Rajaseharan from Google Cloud.
