The lab

Discover. Learn. Innovate.

A collection of valuable insights, real success stories, and expert-led events to keep you informed.

Insights & ideas

Stay ahead with expert articles, industry trends, and actionable insights to help you grow.

There's a new way to turn business ideas into app frameworks
April 21, 2025
10 mins read

Imagine describing an app you need in your own words and getting a basic app framework in minutes. With Plan Designer in Power Apps, that’s already becoming possible.

What is the Plan Designer?

Plan Designer is a new Copilot experience within Power Apps. It allows users to describe their app in natural language and receive a structured starting point.  

This is part of Microsoft's broader push to bring generative AI into everyday business tools. While it doesn't yet deliver complete or production-ready applications, it offers a strong foundation that helps teams move faster, validate ideas earlier, and collaborate more effectively with dev teams when it's time to build.

Important to know: It’s still in preview

Plan Designer is currently available as a public preview feature. That means it’s not production-ready yet, and it’s not recommended for complex or business-critical use cases.

It’s a promising direction, and there are many more improvements in the pipeline. But for now, think of it as a way to jumpstart your ideas, not as a full replacement for expert-built solutions. Let’s see how:  

From idea to app structure, without coding

Some of the best ideas for internal apps come from the people who work closest to the process.  

You’ve likely experienced it yourself: you know exactly what your team needs, whether it’s a simple PTO planning tool or a way to track field tasks. You understand the workflow, the challenges, and the users. But when it comes to turning that insight into a working app, you’re not sure how to get started.

That’s been the reality for many business users.

Historically, Power Apps has been aimed at non-developers: people in HR, customer service, field operations, and sales. These users know their business inside and out but often lack the technical or systems-thinking skills to design a well-structured, scalable app. As a result, many apps were either overly simple or hard to maintain and improve.

That’s where Plan Designer comes in.

It offers a more guided way to get started. Instead of starting from scratch, you describe what you need in natural language, for example, “I need a tool to assign jobs to field technicians.” You can even upload visuals, like a screenshot of an old tool or a process diagram.  

Based on your input, Copilot generates a structured draft of your app.  

What you get is a smart skeleton, with suggested tables, screens, user roles, and basic logic. It proposes a data model and automation ideas using Power Automate, all based on your prompts. You can then review, adjust, or approve what Copilot gives you before it builds out the logic.

It won’t give you a finished app, but it gives you a strong starting point, one that reflects your intent and helps you think through how your app should be structured. That’s a big step forward for anyone who understands the business problem but not the development process.

What can you currently do with Plan Designer?

To access the Plan Designer, you need a preview environment with early feature access enabled. Once set up, you can start designing solutions directly from the Power Apps homepage by toggling on the new experience.

It's still early days, so it's important to set the right expectations. As of April 2025, Plan Designer has the following capabilities:

Natural language input

Based on natural language input, the Plan Designer will generate a solution tailored to your needs. This includes creating user roles, user stories, and data schemas.

Solution generation

The tool can create basic end-to-end solutions, including:

  • Dataverse tables
  • Canvas apps
  • Model-driven apps

Iterative development

You can refine your plans by providing feedback during the design process to make sure that the generated solution aligns with your specific needs.

Collaboration and documentation

The generated plan serves as both a blueprint for development and documentation for future reference to help teams align on business goals and technical execution.

Integration with Power Platform tools

While still in preview, the tool integrates with other Power Platform components like Dataverse and Power Apps. However, some features (e.g., Power Pages support and advanced data modeling) are not yet available.

Limitations in the preview

The tool currently does not support generating full Power Automate flows or using common data model tables like accounts or contacts. Features like analytics integration, Azure DevOps compatibility, and document uploads (e.g., RFPs) are not yet implemented.

The feature set is evolving rapidly, with updates rolling out every few days. One recent improvement: Copilot now explains which AI agents are working on different tasks, for example, the requirement agent, data agent, or solution agent.

To sum up, Plan Designer helps you get the core pieces in place in just a few minutes. It’s especially useful for:

  • Prototyping apps without waiting for a developer
  • Practicing prompt-writing to refine app design
  • Getting a better understanding of how systems and logic fit together

It’s great for playing around, testing out concepts, and learning how to approach app development with systems thinking. Let’s see how this might change in the coming years.  

How you’ll use Plan Designer in the near future

Let’s say there’s a process in your team that’s manual, slow, or inconsistent, and you know exactly how it should work. Maybe it’s tracking field work, collecting customer data, or planning PTO.  

You have the knowledge to solve it. What you don’t always have is the time, tools, or technical background to build the solution yourself.

That's the direction Plan Designer is moving in. It will help you translate your ideas into something concrete: a data model, screens, and suggested relationships. It will give you a head start, so you won't have to start from scratch.

Here’s what that might look like in practice:

  • You’re a field manager who needs to track technician assignments and jobs.

You describe your idea to Copilot, and it creates basic tables like “Jobs” and “Technicians,” with suggested relationships between them. The logic and visuals still need work, but you now have a structure to build on.

Looking for inspiration to improve efficiency in Field Service? Check out our use cases here.

  • You’re in sales and want to explore upsell recommendations for client visits.

Copilot sets up a rough draft with placeholders for customer info and past purchases. It doesn’t connect to CRM data yet, but it helps you map out the concept before looping in technical teams.

  • You’re on a support team and want to build a customer intake form.

You describe the form and basic routing needs, and Copilot creates a simple layout with suggested fields and logic. You’ll need to tweak it, but it’s a much faster way to get started.

While these examples are simple, they give you an idea of where things are heading. Plan Designer isn't here to replace software engineers but to help business teams move faster and speak the same language as their dev teams.

Turning your starting point into a real solution

At VisualLabs, we follow every development in the Microsoft ecosystem closely and we’re excited about Plan Designer’s progress. It’s already a powerful tool for creating skeleton apps, exploring ideas, and learning how data models and logic come together.

But when you need more than just a starting point, when performance, integration, scalability, and usability matter, our team is here to help. We bring the expertise to take your idea and turn it into a reliable, well-designed app that fits your organisation’s needs.

AI is changing how we build apps, but human insight still makes the difference.  

Interested in what use cases our customers are prioritising? Check out our case studies here.

What it used to be and what it has become: visualLabs through my own eyes
December 18, 2024
10 mins read

I first joined VisualLabs in the summer of 2020 as a junior business analyst. As you can see from the timeline, I was part of the mass junior recruitment: with the three of us, the company grew to eight people at the time.

In the year and a bit I worked here from 2020 to 2021, I was involved in quite a variety of tasks: building and improving Power BI reports, working a lot on a contract management application I built using the Power Platform, and also gaining insight into the beauty of Business Central. The latter also gave rise to some comical memories, such as the painstaking work involved in recording and subtitling training videos for clients, and how I, then an undergraduate student, was on 'duty' for Christmas because I had no holidays left for the year. But I got a lot of support from my senior colleagues in these things; they didn't let me get lost in the shuffle.

Three years later, in the summer of 2024, I rejoined VL, but now I work specifically with ERP. One thing that was very nice and new to me was the company timeline. Whereas last time I was one of the mass junior hires, I'm now part of the company's life.

An amazing amount has happened in my time away, and it's great to see these events being shared by my colleagues, creating a stronger sense of belonging.

What has actually changed in these 3 years? I haven't had the chance to go through everything since I rejoined, and there's not enough space to go into it all here, so I'll just give you a few snippets.

Office

The first of these is probably the new office: the move from Zsigmond Square to Montevideo Street had already happened while I was still here as a junior. But while I couldn't enjoy it then, and I wasn't part of the "moving in", when I returned three years later I still felt like I had helped shape it. By that I mean that the ethos that makes VisualLabs VisualLabs has, I think, changed very little, and the homeliness of the office reflects that.

Specialisation

The company has made huge progress in terms of specialisation and staff numbers while I was away: the team has grown to 35 people, and there are now separate business units for all the tasks I had the opportunity to join on a rotational basis as a junior. These are the CE team, who build business applications for clients; the data team, who deliver data analytics and visualisation solutions; and the ERP team - of which I became part - where we introduce Microsoft's enterprise management solutions (Dynamics 365 Finance and Operations and Business Central) to clients.

What I would perhaps highlight is that even though these specialisations have evolved, it has not brought with it a siloed operation. To deliver work in our own area, we have access to the knowledge of other areas, and we mutually help each other across teams to deliver the highest quality service possible. From this perspective, what has changed in three years? I would say nothing: what worked then on a small scale works now on a bigger scale.

Agile operation

We have had a solid vision of how we deliver solutions since I was a junior employee here: the agile methodology. What was in its infancy is now mature. If not fully agile, we use the agile elements so well that they support our day-to-day work to a great extent.

It helps us communicate internally and with our customers by allowing them to post issues in DevOps that we help them resolve; we write features, user stories, and test cases that help us with needs assessment and implementation. We have daily stand-up meetings with the team in the mornings where we discuss our stumbling blocks, at the end of the week we have sprint rounds where we always plan the next week's sprint, and monthly we have a retro where we pay special attention to feedback for each other, looking back on the past month.

Team and all the fun

Unfortunately, I didn't get to experience much of this in person during my first stint because of Covid, but even then there were those short conversations at the beginning of a call or at the morning "all-people" DSMs that reinforced the sense of belonging to the team and the good atmosphere. Fortunately, we have kept this habit ever since, so no call is ever dull. And once the epidemic subsided, these community events only grew stronger, with regular team-building events, VL retreats, and co-hosted Christmas and Halloween parties.

It's also good to be in the office. Although it varies from day to day, we have little rituals that colour the days and take the focus off work: the daily lunch together in the office, chit-chat while making coffee, passing a funny comment to each other at the next desk, or the monthly office day when we all go in and look back over the past month. In short, you never get bored here. 😊

Coming back to a place where I've worked before is a special experience - especially when so much has changed in the meantime. VisualLabs has retained the supportive community and vibrancy that I grew to love, while reaching new levels of development and professionalism. This journey has been a learning experience not only for the company, but also for me, as the old and new experiences have given me a stronger, more mature perspective. I look forward to being a part of the next chapter and seeing where the company goes in the future!

Recap: Budapest BI Forum
December 6, 2024
10 mins read

Hey everyone! Here’s a summary of the Budapest BI Forum 2024, where I had the chance to dive into some intriguing topics and engage in inspiring conversations.

The first day was a full-day Tabular Editor workshop, where we covered the basics and discussed topics such as controlling perspectives, writing macros, and refreshing partitions. The other two days of the conference were packed with learning, and here are my key takeaways from my favorite sessions.

Keynote Speech: BI Trends

The day kicked off with a keynote that explored current and future BI trends.

Bence, the main organizer and host of the event, supported his key points with insights from Gartner research and similar studies. A few highlights that caught my attention:

  • By 2025, data security and data governance are expected to top the list of priorities for executives.
  • The rapid rise of AI introduces scenarios where users export data from dashboards to Excel, feed it into tools like ChatGPT, and generate their own insights. While exciting, this raises concerns about security and "shadow reporting," issues companies have tried to curb for years.

As a contractor and consultant, I find this especially ironic. Large companies often hesitate to share data, even when it's crucial for project development. They implement robust policies like VPNs and restricted searches to prevent leaks. But, at the same time, they struggle to monitor and control employees' behaviors, such as inadvertently sharing sensitive data.

This evolving dynamic between AI, data security, and governance will definitely be a space to watch closely.

Read more about Gartner’s 2024 BI trends here.

PBIR: Report Development in Code

This technical session introduced the PBIR format, a preview feature that allows Power BI reports to be stored as individual JSON files for each visual and page, instead of a monolithic file.

The feature’s potential for bulk modifications was the most exciting part. The presenter showed how Python scripts could iterate through the JSON files to apply changes (e.g., adding shadows to all KPI cards) across the report.

While still in preview and somewhat buggy, it’s a promising direction. I’m also intrigued by the integration possibilities with VS Code and GitHub Copilot, which could simplify automation for non-coders.

However, it seems TMDL language won’t be integrated into PBIR anytime soon—a bit disappointing, but I’m optimistic this will eventually happen.

TMDL Enhancements in Power BI & VS Code

One of the most exciting parts of the forum was exploring updates to TMDL (Tabular Model Definition Language), designed to make Power BI model development more efficient.

TMDL View in Power BI

This might be the feature I’m most excited about! The ability to edit your semantic model as code directly inside Power BI is a massive leap forward. Combining drag-and-drop, Copilot, and coding will make development smarter and faster.

Immediate Code Updates in Power BI (Planned for Next Year)

A handy feature to look forward to is real-time synchronization between modified TMDL code and Power BI. Changes to the model will reflect instantly in Power BI without reopening the file, saving tons of time during development.

VS Code TMDL Extension

The TMDL extension in VS Code offers:

  • Formatting: Automatically organizes TMDL syntax.
  • IntelliSense and Autocomplete: Speeds up coding with intelligent suggestions.
  • Expand/Collapse Functionality: Makes navigating larger TMDL files easier.

Get the extension here.

 

Copilot Integration in VS Code

Copilot lets you generate measures, calculations, and scripts with AI assistance. For example, as you type "Profit," Copilot suggests a complete formula based on the context. It’s a productivity boost I can’t wait to leverage more!

Online Editing with VSCode.dev

You can now edit repositories directly in your browser using the vscode.dev prefix for your repository URL. It’s perfect for quick edits without setting up a local environment.
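For a GitHub-hosted repository, that typically looks something like this (owner and repository name are placeholders): https://vscode.dev/github/&lt;owner&gt;/&lt;repo&gt;.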

These updates are poised to make model development faster, smarter, and more collaborative for teams using GitHub and VS Code.

Lunch Break with Insights from Microsoft

Lunch turned into one of the highlights of the day when Tamás Polner, a key figure at Microsoft, joined our table. Tamás shared some fascinating insights about the current direction of Microsoft’s data ecosystem and upcoming trends:

  • Fabric focus: Microsoft is heavily prioritizing Fabric over tools like ADF and Synapse, which are expected to receive basically no new feature updates as development resources shift toward Fabric. While this has been an industry assumption for a while, it was great to have this firsthand confirmation. The message is clear: Fabric is the future of Microsoft’s data ecosystem.
  • Data security: Reflecting on the keynote’s emphasis on data security, Tamás explained that this aligns with what he’s seeing at Microsoft. The number of developers in the security team is increasing significantly, and this trend doesn’t seem to be slowing down.
  • Optimized compute consumption: We also discussed CU (Compute Unit) optimization in Fabric. Tamás reaffirmed something I’d heard in Fabric training sessions: notebooks are far more powerful and efficient than UI-powered features like Dataflow Gen2. They use significantly less compute capacity, making them the better choice for many workflows.
  • DP-600 exam: Tamás mentioned that the DP-600 exam has become one of the most successful certifications in Microsoft's history, with a record-high number of certifications achieved in a short time.
  • Copilot and AI: Copilot is a major focus for Microsoft, but its rollout faces challenges due to the high resource intensity of AI models. Tamás noted that, like other companies deploying built-in AI solutions, Microsoft needs to continue investing heavily in CAPEX for computing power to make these solutions broadly accessible.

 

This conversation provided valuable context and insight into Microsoft’s strategic priorities and was a great opportunity to discuss industry trends and technical strategies in detail.

 

Storytelling with Power BI

This session revisited a topic close to my heart: how to create Power BI reports that truly connect with their audiences. The presenter broke it down into three key phases:

  1. Research: Start by understanding the report’s purpose. Who will use the report? What decisions should it support? Can the goal be summarized in one clear, concise sentence?
  2. Create: Develop the report based on your research. Ensure that the visuals, design, and structure align with the user’s needs and the intended outcomes.
  3. Deliver: It’s not just about handing over the report and documentation, then walking away. True success lies in monitoring how the report is used and gathering user feedback. This feedback often reveals both strengths and weaknesses you didn’t anticipate, providing opportunities to refine and enhance the report further.

While much of this was a confirmation of what I already practice, it underscored an essential point: The discovery phase and follow-ups are just as critical as the actual development process.

It also reinforced for me that educating clients about the value of these stages is crucial. When clients understand that investing time and resources into proper research and post-delivery follow-ups leads to better reports and happier users, they're much more likely to embrace these processes.

 

Final Thoughts

The day was packed with insights, but what truly stood out was the seamless blend of technical innovation and strategic foresight. Whether it was exploring new options like TMDL and PBIR, or gaining a deeper understanding of the big-picture trends shaping the future of BI, the forum offered something valuable for everyone.

Of course, the lunch chat with Tamás was a treasure trove of insider knowledge—easily one of the event’s highlights for me. Another personal highlight was a heartfelt conversation with Valerie and Elena, who encouraged me to take the next step in my professional journey: becoming a conference speaker.

If any of these topics piqued your interest or you’d like me to dive deeper into specific sessions, just let me know—I’d be happy to share more!

Handling REST API Data in ADF: A Guide to Pagination and JSON Aggregation
November 8, 2024
10 mins read

When working with data from REST APIs, it's common to encounter limitations on how much data can be retrieved in a single call. Recently, I faced a challenge where the API limited responses to 1000 rows per call and lacked the usual pagination mechanism, such as a "next page URL" parameter in the response. This absence makes it difficult for developers to automate the data retrieval process, as there's no clear way to determine when all the data has been retrieved.

In scenarios like data backup, migration, or reporting, this limitation can become an obstacle. Take a real-life scenario: if your company manages its HR-related processes in SAP SuccessFactors, you'll encounter this same challenge. Without a native connection between SuccessFactors and Power BI, one of the viable options for pulling data for reporting or building a data warehouse (DWH) is through REST API calls. While Power BI offers a quick solution via Power Query, it's not always the best tool—particularly when dealing with larger datasets or when corporate policies limit your options. This is where Azure Data Factory (ADF) becomes a more appropriate choice. However, ADF presents its own challenges, such as handling API responses larger than 4 MB or managing more than 5,000 rows per Lookup activity.

This article will walk you through overcoming these limitations when working with JSON files and API responses in ADF. By the end, you'll learn how to append multiple API responses into a single result and handle API calls with unknown pagination parameters.

While alternative solutions like Python or Power BI / Fabric Dataflow Gen2 may offer quicker implementations, there are cases where ADF is necessary due to restrictions or specific use cases. This guide is tailored for ADF developers or anyone interested in experimenting with new approaches.

If you're familiar with working with APIs, you're likely aware that limitations on the number of rows per call are a common practice. Typically, APIs will let users know if there’s more data to retrieve by including a "next page URL" or similar parameter at the end of the response. This indicates that additional API calls are necessary to retrieve the full dataset.

However, in the scenario we faced, this part of the URL was missing, leaving no clear indication of whether more data remained in the system. Without this pagination rule, it's challenging to determine from a single successful API call whether more API requests are required to retrieve the complete dataset. This makes automating the data retrieval process more complicated, as you have to implement a method to check whether further calls are necessary.

This is the first issue we’ll solve using ADF.

The second issue involves handling API responses in JSON format and merging different JSON files into a single result.

If the requirement is to store all the data in one single JSON file, there are several approaches you can take:

  1. Create one single JSON file from all the responses and store or process it later.
  2. Generate multiple JSON files, one for each API call, and then flatten them into one file at a later stage in the ADF pipeline.

Alternatively: write the data directly to a SQL table, if the final destination is a database.

In this article, we’ll focus on Option 1—creating one single JSON file from multiple responses. While this may not be the ideal solution, it presents a unique challenge when working with JSON arrays in ADF.

While the solution itself is not overly complicated, there are a few important points where developers should proceed with caution.

First, be aware of the limitations ADF imposes on each Lookup or Web activity: the output size is restricted to 4 MB or 5,000 rows. If your response size slightly exceeds this limit, you can adjust the settings—lowering the number of rows per call from 1000 to, say, 800. However, keep in mind that this adjustment could significantly increase the overall runtime, especially if you're dealing with many columns of data. In such cases, consider an alternative approach, such as using the Copy activity to write the data directly into a SQL database, or generating multiple JSON files and merging them into one later.

Another critical point is the use of loops. The solution involves two loops, so it’s essential to carefully handle scenarios that could result in endless loops. Proper checks and conditions must be implemented to avoid such issues and ensure smooth execution.
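Before building this in ADF, it may help to see the loop logic in plain code. Here is a minimal Python sketch of the same idea, assuming a hypothetical endpoint that accepts a skip parameter and returns an empty result once all rows have been served:

import requests

base_url = "https://example.com/api/records"   # hypothetical endpoint
page_size = 1000                               # rows returned per call
max_pages = 500                                # guard against endless loops

all_rows = []
skip = 0
for _ in range(max_pages):
    response = requests.get(base_url, params={"$skip": skip, "$top": page_size}, timeout=60)
    response.raise_for_status()                # stop on errors such as a 404
    rows = response.json().get("results", [])  # assumed response shape
    if not rows:                               # empty response: all data retrieved
        break
    all_rows.extend(rows)
    skip += page_size                          # advance the "skip X rows" counter

The ADF pipeline below implements exactly this pattern, using variables, an UNTIL activity, and Lookup/Web activities instead of Python.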

Implementation - ADF

Here is the logic of the entire pipeline:

To manage API pagination and build a single valid JSON file in ADF, you will need to define several variables.

Variables setup:

  1. Skip X Rows (Integer): stores the number of rows to skip in each REST API call, which is the dynamic part of the URL.
  2. Skip X Rows - Temporary (Integer): needed because ADF doesn't support self-referencing for variables. You can't directly update Skip X Rows using itself, so this temporary variable helps track progress.
  3. REST API Response is empty? (Boolean): this flag indicates whether the last API response was empty (i.e., no more data), triggering the loop to stop.
  4. API Response Array (Array): used to store each individual API response during the loop. This allows you to gather all responses one by one before processing them.
  5. All API Response Array (Array) [optional]: can be used to store all responses combined after the loop finishes.
  6. Current JSON (String): stores one individual API response in JSON format as a string.
  7. Interim Combined (String): stores the concatenated JSON responses as you append them together in the loop.
  8. Combined JSON (String): holds the final, complete JSON result after all responses have been processed and combined.

Step-by-Step Execution

1) Initialize Variables:

  • Set Skip X Rows to 0. This represents the starting point for the API.
  • Set Skip X Rows - Temporary to 0. This is a temporary counter that helps update the primary skip-rows variable.
  • Set REST API Response is empty? to false. This Boolean will control when to stop the loop.

2) Add an UNTIL activity: set it up with the condition @equals(variables('REST API Response is empty?'), true), so the loop keeps running as long as there is data to retrieve.

3) Inside the UNTIL loop:

  a) Lookup Activity (initial API call): perform a Lookup Activity calling the REST API, but limit the returned data to only one column (e.g., just the ID, which should never be empty if a record exists). This keeps the response light and lets you check whether more data exists.

  b) IF Condition (check response): if the response is empty, set REST API Response is empty? to true to end the loop; if not, proceed to the next step.

  c) Full API call: if the response is not empty, perform the full REST API call to retrieve the desired data, and append the response to the API Response Array variable.

  d) Update variables: increase Skip X Rows - Temporary by the number of rows retrieved (e.g., 1000), then set Skip X Rows to the value of Skip X Rows - Temporary to update the dynamic part of the API URL.

4) Handle Failure Scenarios: optionally, but highly recommended, add a fail condition or a timeout check that breaks the loop if there is a problem with the API response (e.g., a 404 error).

After gathering all the API responses, you'll have a list containing multiple JSON arrays. You'll need to remove the unnecessary brackets, commas, or other JSON elements. To do that, you'll need a for loop which iterates over all the JSON arrays in the array variable and modifies them accordingly (a minimal Python sketch of this trimming-and-concatenation idea follows the steps below).

The steps followed inside the for loop:

           
  1. Save the currently iterated JSON into a variable as a string (string format is needed so you can manipulate the response as text).
  2. Modify the JSON: to flatten the JSON files into a single file, remove the leading “[“ or trailing “]” character so that concatenating the pieces results in a valid file.
  3. Save it to another variable which will store the final JSON: due to the lack of self-referencing in ADF, you need to update the combined JSON variable every time a new JSON piece is added.
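As a plain-code illustration of the same trimming-and-concatenation idea (string handling only; the ADF version does this with string functions on the variables above, and the sample data here is hypothetical):

import json

# Each API response is a JSON array serialized as a string.
responses = [
    '[{"id": 1}, {"id": 2}]',
    '[{"id": 3}, {"id": 4}]',
]

combined = ""
for chunk in responses:
    inner = chunk.strip()[1:-1]                 # drop the leading "[" and the trailing "]"
    combined = inner if not combined else combined + "," + inner

combined_json = "[" + combined + "]"            # wrap once to get a single valid JSON array
print(json.loads(combined_json))                # [{'id': 1}, {'id': 2}, {'id': 3}, {'id': 4}]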

And that’s it! You have successfully addressed both the pagination and JSON file handling challenges using ADF.

Implementation – Power BI

Compared to this, the solution in Power Query is much more straightforward. You need one function where you can control the number of rows you want to skip, which essentially calls the API 1000 rows at a time. And you need another query which starts with a while-style loop that calls the API until it returns an empty response. Once it's ready, you can combine the list of tables into one table; by expanding it, you'll end up with the complete dataset. Here is the code of the function:

let
    Source = (rows_to_skip as number) =>
let
    Base_URL = "https://Your_API_URL",
    Relative_URL = "The relative URL part of your API call",
    Source = Json.Document(
        Web.Contents(
            Base_URL,
            [RelativePath = Relative_URL & Number.ToText(rows_to_skip)]
        )
    ),
    //Additionally, you can convert the data directly to a table with this function
    Convert_to_Table = Table.FromRecords({Source})
in
    Convert_to_Table
in
    Source




Here is the query which will invoke the function and act as a while loop.  
let
    Source =
    //Create a list of tables
    List.Generate( () =>
    // Call the function and set the input parameter to 0 for the first call.
        [Result = try Function_by_1000(0) otherwise null, Page = 0],
    // Check whether the first row (referenced by the "{0}" part) is empty. Due to the logic of this particular API, it checks the "results" list inside the "d" parameter of the response.
        each not List.IsEmpty([Result]{0}[d.results]),
    // Call the function again and increase its input parameter by 1000 (the maximum rows per API call)
        each [Result = try Function_by_1000([Page]+1000) otherwise null, Page = [Page]+1000],
        each [Result])
in
    Source

How to Trigger Power BI Refresh from Azure Data Factory
October 18, 2024
10 mins read

In this article, I will show you how to connect Azure Data Factory (ADF) with Power BI to automate report refreshes. This integration keeps your Power BI dashboards up-to-date automatically, saving time and ensuring your data is always current for better decision-making.

Idea & Concept

The main challenge I've faced in Power BI is that I need to refresh my data hourly, but the unpredictability of the data refresh schedule can lead to inconsistencies in my reports. If the refresh process for a data source takes longer than expected, a new hourly refresh might start before the previous one finishes. This could cause Power BI to pull some data from the new refresh while the old refresh is still in progress, resulting in a mix of data from different cycles, which causes inconsistencies in the report or even breaks it, as it can create invalid relationships between tables.

Most organizations rely on ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) processes to load and process data within their data warehouse systems, ensuring that data is structured and accessible for analysis. In our case, we leverage Azure Data Factory (ADF) as our standard ETL tool. ADF is a powerful and versatile service that offers a wide range of built-in connectors as linked services, enabling integration with various data sources and destinations. Additionally, ADF has the capability to trigger HTTP requests, expanding its utility beyond simple data transfers.

This feature is advantageous when working with Power BI, especially given that Power BI’s online service offers numerous built-in APIs. These APIs can be leveraged to automate tasks such as dataset refreshes. By integrating ADF with Power BI through these APIs, we can synchronize ETL processes with Power BI report refreshes, ensuring that the most up-to-date data is available for analysis as soon as it’s loaded.

For instance, if there is a requirement to refresh datasets hourly, ADF can be configured to initiate the refresh process automatically after each data load operation. This not only optimizes the workflow but also guarantees that reports are consistently updated with the latest data, enhancing the accuracy and timeliness of insights. In the following example, we will demonstrate how to configure ADF to load data on a daily schedule and subsequently trigger a Power BI report refresh (in the case study we use a Power BI Dataflow which supplies data to a Power BI Semantic Model), ensuring that your reports are always up to date with the latest data.

In terms of pricing, we must pay for Azure Key Vault and Azure Data Factory; both depend on your configuration and usage. You can find their pricing here: Azure Data Factory, Azure Key Vault.

In this solution, we will rely on Microsoft’s trusted network for connectivity and will not configure a virtual network. However, we recommend using a virtual network across the entire solution to enhance security, provide better control over network traffic, and protect your resources more effectively.

Prerequisites

To implement the integration between Azure Data Factory (ADF) and Power BI, several configurations are required. These steps will ensure secure access, manage permissions, and enable the necessary APIs to support the workflow.

  1. App Registration in Azure Entra ID (formerly known as Azure Active Directory or AAD)

This registered application will serve as the identity for the process, enabling secure communication between ADF and Power BI.

  • Generate a client secret for the App Registration. This secret will be used for authentication when making API calls to Power BI services.
  • Assign the appropriate API permissions to the App Registration. Specifically, grant permissions to access Power BI online services, allowing the application to manage dataset refreshes.
  2. Security Group in Azure Entra ID

This group will be used to control and restrict access to the Fabric API, ensuring that only authorized users and applications can interact with Power BI resources.

  • Add the App Registration to this security group. This step is crucial for enforcing security policies and limiting API access.
  3. Azure Key Vault

An Azure Key Vault is required to securely store sensitive information, such as the client secret created in the App Registration process.

  • Upload the client secret to the Azure Key Vault. This ensures that the client secret is securely stored and can be retrieved by ADF during the execution of ETL processes.
  4. Enable Fabric APIs in Power BI Admin Portal

Finally, Fabric APIs must be enabled in the Power BI Admin Portal. This is a crucial step as it allows the registered application to interact with Power BI, including the ability to trigger dataset refreshes.

  • Fabric Administrator permission is required to enable these APIs in the Power BI Admin Portal. This involves granting the necessary permissions and configuring the environment to support API interactions.

Each of these steps requires precise configuration within Azure. The Fabric Administrator plays a vital role, especially in enabling the Fabric APIs within Power BI Admin Portal and ensuring that all settings align with organizational security policies. Throughout the setup, we will specify and configure these services in Azure.

Implement the solution

The first step in integrating Azure Data Factory (ADF) with Power BI is to create an App Registration in Azure Entra ID. This registered application will serve as the identity through which your ADF process can securely interact with Power BI.

Setting up the Environment

Step 1: Create an App Registration

  1. Navigate to Azure Entra ID (Azure Active Directory):
  • Sign in to the Azure portal.
  • In the left-hand navigation pane, select “Azure Entra ID” (formerly Azure Active Directory).
  2. Create a New App Registration:
  • In the Azure Entra ID blade, select “App registrations” from the menu.
  • Click on the “New registration” button to start the process.
  3. Specify the App Registration Details:
  • Name: Enter a name for the application. This name can be changed later, so you can choose something descriptive like “ADF-PowerBI-Integration.”
  • Supported Account Types: For this scenario, select “Accounts in this organizational directory only (Single tenant).” This option restricts access to users within your organization’s Azure AD tenant.
  • Redirect URI (optional): You can leave this blank for now, as it’s not required for the initial setup.
  4. Register the Application:
  • Once you have filled in the necessary details, click on the “Register” button to create the App Registration.
  5. Default Permissions:
  • By default, the App Registration will have some basic permissions. Note that in Azure, every user has the ability to register an application, but the required API permissions and other advanced configurations will need to be set up afterward.

Step 2: Create a Client Secret for Your App Registration

  1. Navigate to the “Certificates & Secrets” Section:
  • In your App Registration, locate the “Manage” section on the left-hand menu.
  • Click on “Certificates & secrets” under the “Manage” heading.
  2. Select the “Client Secrets” Tab:
  • In the main panel, you will see two tabs: “Certificates” and “Client secrets.”
  • Ensure that the “Client secrets” tab is selected.
  3. Create a New Client Secret:
  • Click on the “+ New client secret” button to create a new client secret. This will be the secret key that your application will use to authenticate itself.
  4. Configure the Client Secret:
  • A panel will open, prompting you to provide a description and an expiration period for the client secret.
  • Description: Provide a name or description for the client secret to help you identify it later (e.g., “ADF Integration Key”).
  • Expires: Choose the expiration duration for the client secret. Options typically include 6 months, 12 months, or 24 months. Select the duration that suits your security policies.
  • Once configured, click on the “Add” button.
  5. Save the Client Secret Value:
  • After creating the client secret, it will be displayed only once in the “Value” column. Make sure to copy this value and store it securely, as you will not be able to retrieve it again once you leave the page.
  • You will use this client secret value later when configuring your Azure Data Factory to authenticate with Power BI.

With the client secret created, the next steps will involve storing it securely in Azure Key Vault and assigning the necessary API permissions for Power BI.

Step 3: Assign Delegated Permissions for Power BI Dataset and Dataflow Refresh

Delegated permissions allow your application to act on behalf of a signed-in user, meaning the app will perform actions based on the user’s privileges. This is useful when the application needs to perform tasks within the context of a user’s session.

3.1. Navigate to API Permissions

  1. After configuring your client secret, go back to the App Registration overview page in the Azure portal.
  2. In the left-hand menu under the “Manage” section, select API permissions.

3.2. Add Power BI API Permissions

  1. On the API permissions page, click the + Add a permission button.
  2. In the “Request API permissions” pane, select Power BI Service.

3.3. Choose Delegated Permissions

  1. In the Power BI Service section, choose Delegated Permissions.
  2. From the list of available permissions, select the following:
    • Dataset.ReadWrite.All: This permission allows the signed-in user to read and write all datasets, including the ability to refresh datasets.
    • Dataflow.ReadWrite.All: This permission allows the signed-in user to read and write all dataflows, including the ability to refresh dataflows.
  3. After selecting these permissions, click the Add permissions button.

Scope of Permissions: The app will only be able to act within the scope of the permissions granted to the user. If a user does not have permissions to refresh certain datasets or dataflows, the app won’t be able to perform those actions.

It's important to note that the ReadWrite.All permission is only effective in workspaces where the security group has been added as a member. Attempting to access datasets in workspaces outside of this scope will result in an error, such as: {"error":{"code":"ItemNotFound","message":"Dataset XY is not found! Please verify the datasetId is correct and that the user has sufficient permissions."}}

Step 4: Create a Security Group and Add the App Registration as a Member

To further secure your application and manage access to Power BI resources, you can create a Security Group in Azure Entra ID (formerly Azure Active Directory) and add the App Registration as a member. This step helps you control and restrict access to the Power BI APIs and resources more effectively.

4.1. Create a Security Group in Azure Entra ID

  1. Navigate to Azure Entra ID:
    • In the Azure portal, go to Azure Entra ID from the left-hand menu.
  2. Create a New Security Group:
    • In the Azure Entra ID blade, select Groups from the menu on the left.
    • Click on the + New group button to create a new security group.
  3. Configure the Security Group:
    • Group Type: Select Security as the group type.
    • Group Name: Enter a name for the security group, such as “PowerBI-ADF-Access”.
    • Group Description: Optionally, provide a description for the group, such as “Security group for ADF to access Power BI”.
    • Membership Type: Leave this as “Assigned” to manually add members.
  4. Create the Group:
    • Once all the details are filled in, click on the Create button to create the security group.

4.2. Add the App Registration as a Member

  1. Locate the Newly Created Security Group:
    • After the group is created, it will appear in the list of groups. Click on the group name to open its settings.
  2. Add Members to the Security Group:
    • In the group settings, go to the Members tab.
    • Click on + Add members to open the member selection pane.
  3. Select the App Registration:
    • In the “Add members” pane, search for the App Registration name you created earlier (e.g., “ADF-PowerBI-Integration”).
    • Select the App Registration from the search results and click Select.
  4. Confirm Membership:
    • The selected App Registration should now appear in the members list of the security group.
    • Confirm that the App Registration has been added as a member.

Step 5: Create an Azure Key Vault and Store the Client Secret

To securely store sensitive information like the client secret generated during your App Registration, you should use Azure Key Vault. This ensures that your secrets are stored in a secure, centralized location and can be accessed safely by authorized applications.

5.1. Create an Azure Key Vault

  1. Navigate to the Azure Portal:
    • Sign in to the Azure portal and use the search bar to find “Key Vaults.”
    • Click on “Key Vaults” in the search results.
  2. Create a New Key Vault:
    • In the Key Vaults pane, click on the + Create button.
    • You will be guided through the process to set up your new Key Vault.
  3. Configure the Key Vault:
    • Subscription: Select the subscription in which you want to create the Key Vault.
    • Resource Group: Choose an existing resource group or create a new one.
    • Key Vault Name: Provide a globally unique name for your Key Vault (e.g., “kv-powerbiappreg-prod”).
    • Region: Select the region where you want the Key Vault to be hosted.
    • Pricing Tier: Choose the standard pricing tier unless you require premium features.
  4. Review and Create:
    • After filling in all the required fields, click on the Review + create button.
    • Review your configurations, and if everything is correct, click on Create to deploy your Key Vault.

5.2. Store the Client Secret in Azure Key Vault

  1. Navigate to the Newly Created Key Vault:
    • Once the Key Vault is deployed, navigate to it from the Key Vaults section in the Azure portal.
  2. Add a New Secret:
    • In the Key Vault’s left-hand menu, select Secrets under the “Settings” section.
    • Click on the + Generate/Import button to create a new secret.
  3. Configure the Secret:
    • Upload Options: Select “Manual” to manually enter the secret value.
    • Name: Enter a name for the secret (e.g., “ADF-PowerBI-ClientSecret”).
    • Value: Paste the client secret value you copied when you created it during the App Registration setup.
    • Content Type: Optionally, you can enter a content type like “Client Secret.”
    • Activation and Expiration: You can optionally set activation and expiration dates for the secret.
  4. Create the Secret:
    • After configuring all the necessary fields, click on the Create button to store the client secret in Azure Key Vault.

5.3. Accessing the Secret in Your Application

  1. Grant Access to the Key Vault:
  • You will need to grant your Azure Data Factory (or any other application that will use the secret) access to this Key Vault. This is typically done by assigning appropriate roles or setting up access policies for the service principal (App Registration) or the ADF managed identity.
  • If you use RBAC (Role Based Access Control), assign the Key Vault Secrets User role to the Azure Data Factory.
  • If you want to use Access Policies, then give List and Get access for the Azure Data Factory.
  2. Use the Secret in Azure Data Factory:
  • In Azure Data Factory, when setting up linked services or activities that require the client secret, use the Key Vault integration to fetch the secret. This ensures that the secret is retrieved securely at runtime.
  3. Networking Settings:
  • To enhance the security of your Azure Key Vault, it’s important to configure the networking settings to restrict access. This ensures that only specific networks or trusted services can access the secrets stored in the Key Vault.
  • In the Firewalls and virtual networks section, choose the option Allow public access from specific virtual networks and IP addresses. This setting restricts access to the Key Vault from only the specified virtual networks or IP addresses.
  • In the Firewall section, you have the option to allow trusted Microsoft services to bypass the firewall. This is typically recommended, as it ensures that essential Azure services, like Azure Data Factory, can access the Key Vault even if network restrictions are in place.
  • You can also specify individual IP addresses or CIDR ranges that are allowed to access the Key Vault. Click on + Add your client IP address to add the current IP, or manually enter an IP address or range. You need to add your client IP address to create or manage a secret.

Step 6: Enable the Fabric API for a Specific Security Group in Power BI Admin Portal and Grant Access to a Specific Workspace

To allow your application to interact with Power BI using the Fabric API, you need to enable this API for a specific security group in the Power BI Admin Portal and grant the necessary access to the desired Power BI workspace.

6.1. Enable the Fabric API for the Security Group

  1. Access the Power BI Admin Portal:
  • Sign in to the Power BI service with an account that has Power BI administrator privileges.
  • Navigate to the Power BI Admin Portal by selecting the settings gear icon in the upper right corner and choosing Admin portal.
  2. Navigate to Tenant Settings:
  • In the Power BI Admin Portal, find and select Tenant settings from the left-hand menu.
  3. Enable the Fabric API:
  • Scroll down to the Developer settings section.
  • Locate the Power BI service settings for APIs or Fabric API settings (the exact name might vary).
  • Expand the section and ensure that the Fabric API is enabled.
  • Under Apply to, select Specific security groups.
  • In the Select security groups box, add the security group you created earlier (e.g., “PowerBI-ADF-Access”).
  • As mentioned earlier, it’s important to specify a security group to grant access exclusively to the application, rather than to the entire organization.
  • Click Apply to save your changes.

6.2. Grant Access to a Specific Power BI Workspace

  1. Navigate to the Power BI Service:
  • In the Power BI service, go to the Workspaces section to locate the workspace where the app will need access.
  2. Assign Workspace Roles:
  • Select the specific workspace and go to the Settings of that workspace.
  • Navigate to the Permissions tab within the workspace settings.
  • Click on Add members or Add users.
  • Search for and add the security group you configured earlier (e.g., “PowerBI-ADF-Access”).
  • Assign an appropriate role to the group:
    • Member: Allows the group to edit content within the workspace, including refreshing datasets and dataflows. We recommend this option.
    • Contributor: Allows the group to publish reports and refresh datasets.
    • Admin: Grants full control over the workspace.
  3. Save Changes:
  • After assigning the appropriate role, click Save to apply the permissions to the workspace.

Creating the Azure Data Factory Pipeline

With all the foundational elements in place, we’re now ready to create the pipeline that will orchestrate our data processes and automate key tasks. Specifically, this pipeline will trigger the refresh of Power BI datasets and dataflows by leveraging the secure connections and permissions we’ve configured. Keep in mind that we need to refresh the dataflow first and the dataset afterwards, since the dataflow is the predecessor of the dataset.

Step 1: Creating a Linked Service in Azure Data Factory for the Azure Key Vault

Creating a linked service in Azure Data Factory (ADF) allows you to connect ADF to various data sources, including Azure Key Vault, databases, storage accounts, and more. Below is a step-by-step guide for creating a linked service to Azure Key Vault.

  1. Open Azure Data Factory
  • In the Azure portal, navigate to your Azure Data Factory instance.
  • Open the Data Factory UI by clicking on Author & Monitor.
  2. Create a New Linked Service
  • In the ADF UI, navigate to the Manage tab on the left-hand menu.
  • Under the Connections section, select Linked services.
  • Click on the + New button to create a new linked service.
  3. Choose the Linked Service Type
  • In the New Linked Service pane, search for and select Azure Key Vault from the list of available services.
  • Click Continue.
  4. Configure the Linked Service

You will now see the configuration pane for the Azure Key Vault linked service.

  1. Name:
  • Enter a name for your linked service (e.g., AzureKeyVault1).
  2. Description:
  • Optionally, provide a description to identify the purpose of this linked service.
  3. Azure Key Vault Selection Method:
  • From Azure subscription: Choose this option if your Key Vault is within your current subscription.
  • Enter manually: Select this option if you need to manually enter the Key Vault details (not used in this case).
  4. Azure Subscription:
  • Select your Azure subscription from the dropdown list.
  5. Azure Key Vault Name:
  • Choose the Azure Key Vault you created earlier from the dropdown list.
  6. Authentication Method:
  • System-assigned managed identity: This is the recommended method for authentication, as it securely uses the managed identity associated with the Data Factory.
  • The Managed identity name and Managed identity object ID will be automatically populated.

 

  5. Test the Connection
  • Before creating the linked service, it’s a good practice to test the connection.
  • Select Test connection to ensure that ADF can successfully connect to the Azure Key Vault.
  6. Create the Linked Service
  • Once the connection test is successful, click on the Create button to finalize and create the linked service.

Step 2: Set Up a Web Activity for Refreshing a Power BI Dataflow in Azure Data Factory

To automate the refresh of a Power BI dataflow using Azure Data Factory, you can use a Web Activity. The following guide will walk you through setting up the Web Activity to trigger a Power BI dataflow refresh.

  1. Access the Pipeline Editor
    1. Navigate to Azure Data Factory:
      • Open your Azure Data Factory instance in the Azure portal.
      • Go to Author > Pipelines > + New pipeline to create a new pipeline or select an existing pipeline.
    2. Add a Web Activity:
      • In the Activities pane, expand the General section.
      • Drag and drop the Web activity onto the pipeline canvas.
  2. Configure the Web Activity

Here’s how you can configure the Web Activity to refresh your Power BI dataflow:

1. Set the Basic Properties:

  • Name: Set the name of the activity to something descriptive, such as Refresh Power BI Dataflow.

2. Settings Tab:

  • URL: Enter the URL to trigger the dataflow refresh. Use the expression format to keep the URL dynamic if needed: https://api.powerbi.com/v1.0/myorg/groups/{GroupID}/dataflows/{DataflowID}/refreshes. Replace {GroupID} with your Workspace ID and {DataflowID} with your Dataflow ID. (The Group ID corresponds to the Workspace ID, which you can find in the URL of the Power BI Service when you open your specific workspace. Group ID example: https://app.powerbi.com/groups/89d2f27a-a923-44c2-8f1f-8fdebd865c9e/list?experience=power-bi)
  • Method: Set the method to POST, since you are sending a request to initiate a refresh.
  • Body: Set the body to the following JSON so that no notification is sent in case of a failure - Power BI's built-in alerts will notify us instead, and from an Azure Data Factory point of view the dataflow/dataset refresh request itself was successful:

{
   "notifyOption": "NoNotification"
}

3. Authentication:

  • Authentication Method: Select Service principal.
  • Authentication Reference Method:
  • Choose Inline.
  • Tenant: Enter your tenant ID.
  • Service Principal ID: Enter your service principal (client) ID. This is your App registration client ID.
  • Service Principal Credential Type: Select Service principal key.
  • Service Principal Key:
  • Reference the key stored in Azure Key Vault by specifying the linked service and the secret name. Use the latest version option so a renewed secret is picked up automatically.
  • Resource: Set the resource to https://analysis.windows.net/powerbi/api.

4. Advanced Settings (Optional):
  • Retry: Set the number of retries to 0 if you do not want the activity to retry automatically on failure.
  • Retry Interval: Leave as default or set a custom retry interval.
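
Under the hood, this Web Activity simply acquires an Azure AD token with the service principal and then POSTs the body above to the dataflow refresh endpoint. If you want to verify the IDs and permissions outside ADF first, a rough Python equivalent is sketched below; all IDs and the secret are placeholders you would replace with your own values.

# Minimal sketch of the call the Web Activity performs: get a token with the
# service principal, then trigger the Power BI dataflow refresh.
# All IDs and the secret below are placeholders.
# Requires: pip install requests
import requests

tenant_id = "<your-tenant-id>"
client_id = "<your-app-registration-client-id>"
client_secret = "<your-client-secret>"   # in ADF this is pulled from Key Vault
group_id = "<your-workspace-id>"
dataflow_id = "<your-dataflow-id>"

# 1. Acquire an access token for the Power BI resource (client credentials flow).
token_response = requests.post(
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://analysis.windows.net/powerbi/api/.default",
    },
)
token_response.raise_for_status()
access_token = token_response.json()["access_token"]

# 2. Trigger the dataflow refresh with the same body used in the Web Activity.
refresh_response = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}/dataflows/{dataflow_id}/refreshes",
    headers={"Authorization": f"Bearer {access_token}"},
    json={"notifyOption": "NoNotification"},
)
refresh_response.raise_for_status()
print("Dataflow refresh requested, status:", refresh_response.status_code)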

Step 3: Set Up a Web Activity for Refreshing a Power BI Dataset in Azure Data Factory

  1. Add a new Web Activity

Here’s how you can configure the Web Activity to refresh your Power BI dataset:

A) Set the Basic Properties:

Name: Set the name of the activity to something more descriptive, such as Refresh Power BI Dataset.

B) Settings Tab:

  • URL:
  • Enter the URL to trigger the dataset refresh.
  • Use the expression format to ensure the URL is dynamic if needed: https://api.powerbi.com/v1.0/myorg/groups/{GroupID}/datasets/{DatasetID}/refreshes.
  • Replace {GroupID} with your Workspace ID and {DatasetID} with your Dataset ID. (The Group ID corresponds to the Workspace ID, which you can find in the URL of the Power BI Service when you open your specific workspace.)
  • Method:
  • Set the method to POST since you are sending a request to initiate a refresh.
  • Body:
  • Set the body to the following JSON to avoid receiving notifications in case of failure:

{
   "notifyOption": "NoNotification"
}

C) Authentication:

  • Authentication Method: Select Service principal.
  • Authentication Reference Method:
  • Choose Inline.
  • Tenant: Enter your tenant ID.
  • Service Principal ID: Enter your service principal (client) ID. This is your App registration client ID.
  • Service Principal Credential Type: Select Service principal key.
  • Service Principal Key:
  • Reference the key stored in Azure Key Vault by specifying the linked service and the secret name. Use the latest secret option to dynamically handle keys if they are renewed.
  • Resource: Set the resource to https://analysis.windows.net/powerbi/api.

D) Advanced Settings (Optional):
  • Retry: Set the number of retries to 0 if you do not want the activity to retry automatically on failure.
  • Retry Interval: Leave as default or set a custom retry interval.
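
The dataset call differs from the dataflow one only in its endpoint. Assuming you already hold an access token (acquired as in the earlier sketch), a rough equivalent of this Web Activity looks like the following, with placeholder IDs.

# Minimal sketch of the dataset refresh request; only the endpoint differs
# from the dataflow example. IDs and token are placeholders.
import requests

group_id = "<your-workspace-id>"
dataset_id = "<your-dataset-id>"
access_token = "<token-acquired-as-in-the-earlier-sketch>"

response = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}/datasets/{dataset_id}/refreshes",
    headers={"Authorization": f"Bearer {access_token}"},
    json={"notifyOption": "NoNotification"},
)
response.raise_for_status()  # the service returns 202 Accepted once the refresh is queued
print("Dataset refresh queued, status:", response.status_code)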

  2. Add Dependencies (Optional)
  • Example: This dataset refresh activity might depend on the success of two previous Web Activities (Refresh Power BI Dataflow SQL and Refresh Power BI Dataflow BC). Ensure you configure these dependencies accordingly.

  3. Save and Trigger the Pipeline

Now that the pipeline is configured, save it and run a debug to ensure that the Power BI dataset refreshes as expected. If it refreshes successfully, publish the changes in the Data Factory.

After publishing, set an automatic trigger on your main pipeline if there isn't one already. You can do that by opening your pipeline, clicking Add Trigger, and then New/Edit. This opens a pane on the right side, which you can configure to your needs.

We recommend always creating a main pipeline from which you execute all your sub-pipelines. This lets you control your whole solution through a centralized pipeline, enforces a clear sequence, and makes the solution easier to monitor. For more information, see the documentation: Execute Pipeline Activity – Azure Data Factory & Azure Synapse | Microsoft Learn
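
If you occasionally need to start that main pipeline on demand, outside the scheduled trigger, one option is the Azure SDK for Python. The sketch below assumes placeholder names for the subscription, resource group, factory, and pipeline.

# Minimal sketch: start the main pipeline on demand with the Azure SDK for Python.
# Subscription, resource group, factory, and pipeline names are placeholders.
# Requires: pip install azure-identity azure-mgmt-datafactory
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

subscription_id = "<your-subscription-id>"
resource_group = "<your-resource-group>"
factory_name = "<your-data-factory-name>"
pipeline_name = "MainPipeline"   # hypothetical main pipeline name

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

# Trigger a run of the main pipeline and print its run ID for monitoring.
run_response = adf_client.pipelines.create_run(
    resource_group, factory_name, pipeline_name, parameters={}
)
print("Pipeline run started, run ID:", run_response.run_id)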

Other Considerations

If you have multiple datasets that need refreshing, consider calling the Get Datasets In Group API first to retrieve the dataset IDs. You can then use the response to trigger each refresh within a single pipeline by leveraging the ForEach activity in Azure Data Factory. Documentation for the API can be found here: Datasets – Get Datasets In Group – REST API (Power BI Power BI REST APIs) | Microsoft Learn, and for the ForEach activity here: ForEach activity – Azure Data Factory & Azure Synapse | Microsoft Learn.
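
If you'd like to prototype that pattern before building the ForEach activity, the sketch below lists the datasets in a workspace with the Get Datasets In Group API and queues a refresh for each one. The workspace ID and token are placeholders, and note that some dataset types (for example, push datasets) may not support this kind of refresh.

# Minimal sketch: list datasets in a workspace and queue a refresh for each,
# the same pattern a ForEach activity implements inside the pipeline.
# The workspace ID and access token are placeholders.
import requests

group_id = "<your-workspace-id>"
access_token = "<token-acquired-as-in-the-earlier-sketches>"
headers = {"Authorization": f"Bearer {access_token}"}

# 1. Get Datasets In Group: the response body is {"value": [{"id": ..., "name": ...}, ...]}
list_response = requests.get(
    f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}/datasets",
    headers=headers,
)
list_response.raise_for_status()

# 2. Queue a refresh for every dataset returned.
for dataset in list_response.json()["value"]:
    refresh = requests.post(
        f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}/datasets/{dataset['id']}/refreshes",
        headers=headers,
        json={"notifyOption": "NoNotification"},
    )
    refresh.raise_for_status()
    print(f"Refresh queued for dataset '{dataset['name']}' ({dataset['id']})")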

Summary

In this blog post, we’ve walked through the complete process of integrating Azure Data Factory (ADF) with Power BI to automate the refresh of datasets and dataflows, ensuring your reports are always up to date. Here’s a recap of the key steps:

  1. Setup and Preparation: We started by configuring essential components such as App Registrations, security groups, and Azure Key Vault to securely manage access and credentials.
  2. Enabling APIs and Permissions: We enabled the necessary Fabric API in the Power BI Admin Portal for a specific security group, ensuring that only authorized applications can trigger dataset and dataflow refreshes.
  3. Creating Linked Services in ADF: We created linked services in Azure Data Factory to securely connect to Azure Key Vault, allowing us to store and retrieve sensitive credentials like client secrets.
  4. Building the Pipeline: With everything in place, we built the ADF pipeline, adding Web Activities to trigger the refresh of Power BI dataflows and datasets using the Power BI REST API. We also covered how to handle multiple datasets by using the Get Datasets In Group API and the ForEach activity in ADF.

By following these steps, you can automate the data refresh process in Power BI, ensuring that your business intelligence insights are based on the most current data. This integration not only streamlines your workflow but also enhances the security and manageability of your data processes.

Whether you’re working with a single dataset or multiple dataflows, Azure Data Factory provides a robust platform for orchestrating your data integration and refresh needs.

————————————————————————————————————————————————————————————–

Microsoft Documentation

Power BI REST APIs for embedded analytics and automation – Power BI REST API | Microsoft Learn

Dataflows – Refresh Dataflow – REST API (Power BI Power BI REST APIs) | Microsoft Learn

Datasets – Refresh Dataset – REST API (Power BI Power BI REST APIs) | Microsoft Learn

What is Azure Key Vault? | Microsoft Learn

Authentication and authorization basics – Microsoft Graph | Microsoft Learn

Introduction to Azure Data Factory – Azure Data Factory | Microsoft Learn


Recent events

Stay in the loop with industry-leading events designed to connect, inspire, and drive meaningful conversations.

View all
The Unified Enterprise
April 1, 2025

The Unified Enterprise

View event
The Copilot Blueprint: Best Practices for Business AI
February 25, 2025

The Copilot Blueprint: Best Practices for Business AI

View event

Expert-led webinars

Gain valuable insights from top professionals through live and on-demand webinars covering the latest trends and innovations.

View all
Leveraging Generative AI and Copilots in Dynamics CRM
Apr 1, 2025
49 min 15 sec

Leveraging Generative AI and Copilots in Dynamics CRM

Read more
Accelerating sustainability with AI - from reporting to green value creation
Apr 1, 2025
46 min 23 sec

Accelerating sustainability with AI - from reporting to green value creation

Read more
How Dallmayr Hungary became the digital blueprint of the Group
Apr 2, 2025
38 min 32 sec

How Dallmayr Hungary became the digital blueprint of the Group

Read more
The future of digital transformation - key trends and themes for leadership
Apr 2, 2025
41 min 45 sec

The future of digital transformation - key trends and themes for leadership

Read more
Intelligent Business Transformation with Power Platform in the age of AI
Apr 1, 2025
39 min 23 sec

Intelligent Business Transformation with Power Platform in the age of AI

Read more

Real results

See how companies like you have succeeded using our solutions and get inspired for your own journey.

View all
Dallmayr Hungary
Apr 10, 2025

How Dallmayr Hungary increased operational efficiency by 30% and became the digital blueprint of the group

Read more

Tools & Guides

Access helpful eBooks, templates, and reports to support your learning and decision-making.

View all

Improve First-Time Fix Rates In The Fields

Where and How to Get Started When Digitising Field Operations

Get the guide

Ready to talk about your use cases?

Request your free audit by filling out this form. Our team will get back to you to discuss how we can support you.

Stay ahead with the latest insights
Subscribe to our newsletter for expert insights, industry updates, and exclusive content delivered straight to your inbox.