The release, dubbed Prophecy 3.0, expands the platform beyond low-code Spark for data engineers and gives business data users a visual drag-and-drop canvas to build data pipelines natively on cloud data platforms.
Low-code SQL: How does it help with data pipelines?
Building a data pipeline is essentially preparing data for analytics. This means writing SQL code for tasks like extracting the data from source databases, transforming and cleaning it, and loading it into the target data platform. It's routine work for data engineers, but when business data users try to prepare the data themselves (often to meet a particular business need), the process can become a bottleneck, and the planned analyses may not receive timely, accurate data.
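The extract-transform-load flow described above can be sketched in a few lines of plain Python. This is only an illustration of the pattern, not Prophecy's implementation; the table and column names are hypothetical.

```python
import sqlite3

# Hypothetical source and target tables, for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_orders (id INTEGER, amount TEXT, region TEXT);
    INSERT INTO raw_orders VALUES (1, ' 120.50 ', 'west'),
                                  (2, NULL, 'east'),
                                  (3, '80', 'west');
    CREATE TABLE clean_orders (id INTEGER, amount REAL, region TEXT);
""")

# Extract the raw rows, transform them (drop nulls, trim and cast
# the amount strings to numbers), then load them into the target table.
rows = conn.execute("SELECT id, amount, region FROM raw_orders").fetchall()
cleaned = [(i, float(a.strip()), r) for (i, a, r) in rows if a is not None]
conn.executemany("INSERT INTO clean_orders VALUES (?, ?, ?)", cleaned)
conn.commit()

print(conn.execute("SELECT COUNT(*), SUM(amount) FROM clean_orders").fetchone())
# → (2, 200.5)
```

In practice each of these steps is expressed as SQL against a cloud data platform rather than in application code, which is exactly the code a low-code tool generates on the user's behalf.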
After all, most business users are experts in data, not expert data engineers.
With the addition of low-code SQL to its platform, Prophecy bridges this gap, giving business data users a visual drag-and-drop interface to build the data pipelines they need. As users work on the canvas, the platform's technology turns the visual representation into working SQL code (as open-source dbt Core projects) and readies the pipeline for analytics.
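The article doesn't show what the generated output looks like, but dbt Core models are plain SQL files with Jinja references wiring the steps into a dependency graph, so a pipeline exported from a visual canvas might resemble the following (model and column names hypothetical):

```sql
-- models/clean_orders.sql — one node of a dbt Core project.
-- {{ ref(...) }} is how dbt links models into a DAG, so each
-- visual step on the canvas can map to one model file.
SELECT
    id,
    CAST(TRIM(amount) AS DECIMAL(10, 2)) AS amount,
    region
FROM {{ ref('raw_orders') }}
WHERE amount IS NOT NULL
```

Because dbt Core projects are just files like this in a Git repository, a tool that emits them stays compatible with the rest of the dbt ecosystem.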
“Business teams can prepare data themselves for analytics, quickly and simply. This enables them to deliver analytics faster, adapting quickly to the changing business needs. More importantly, this also frees up the often oversubscribed central data platform teams,” Raj Bains, the cofounder and CEO of Prophecy, told VentureBeat.
Notably, the technology also works in reverse: Users can open existing dbt Core projects in Prophecy and edit the SQL code as visual pipelines, with the changes saved back as SQL.
“Early users [are] also very happy that low-code developers and SQL coders can now work in the same environment — since Prophecy turns low-code pipelines into SQL code and SQL code into low-code pipelines instantaneously,” Bains added.
A unified platform
While low-code SQL is new, it is not the platform's first visual tooling. The company already provides low-code support for Spark, Spark Streaming and Apache Airflow (for data workflow orchestration), adding up to a unified offering for users with varying expertise and needs.
“When different data teams use different tools to build pipelines, it adds to costs, timelines and risks … [The] Prophecy 3.0 release helps data analysts using SQL, data engineers using Spark and DevOps engineers using Apache Airflow to collaborate through a unified low-code platform,” Sanjeev Mohan, former Gartner research VP for big data and advanced analytics, told VentureBeat. “This opens up new possibilities, such as consistently applying data quality checks irrespective of the language, and enabling a self-service framework to create data products.”
Kevin Petrie, VP of research at Eckerson Group, indicated the same, noting that the addition of low-code SQL to Prophecy’s portfolio gives SQL-oriented data engineers and dbt-oriented analytics engineers new options for building, managing and orchestrating the pipelines that support modern analytics projects.
“Enterprises continue to adopt lakehouse platforms that apply SQL-based queries and transformations to cloud-native object stores. By adding SQL pipelining capabilities, Prophecy significantly increases its addressable market,” he said.