Showing posts with label MS365. Show all posts

Monday, January 12, 2026

Quick Checklist: Where to Set All Tenant‑Level Contacts Now

 

1. Technical Contact

admin.microsoft.com → Org settings → Organization profile

2. Privacy Contact

Same page as above

3. Security Contact Email

entra.microsoft.com → Identity → Protection → Notifications

4. Billing Contact

admin.microsoft.com → Billing → Billing accounts

5. Break‑glass accounts (recommended)

Entra → Users → Create → excluded from MFA/Conditional Access, long random password, Global Administrator role
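For admins who prefer scripting, most of these contacts can also be set through the Microsoft Graph organization resource (`PATCH /v1.0/organization/{org-id}` with `Organization.ReadWrite.All`); the Entra ID Protection security contact in step 3 is configured separately in the portal. A minimal Python sketch that only builds the PATCH body, using made-up example addresses, with auth and the org id left out:

```python
import json

# Builds the PATCH body for the Microsoft Graph organization resource.
# The property names below (technicalNotificationMails,
# securityComplianceNotificationMails, privacyProfile) come from the
# Graph organization schema; sending the request is out of scope here.
def build_contact_patch(technical_email, security_email, privacy_email):
    return {
        "technicalNotificationMails": [technical_email],
        "securityComplianceNotificationMails": [security_email],
        "privacyProfile": {"contactEmail": privacy_email},
    }

# Example addresses are placeholders for illustration only.
body = json.dumps(build_contact_patch(
    "it@contoso.com", "soc@contoso.com", "privacy@contoso.com"), indent=2)
```

You would then send `body` with an authenticated `PATCH` request to the organization endpoint for your tenant.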

Tuesday, July 1, 2025

MS Fabric Demo Sandbox: Contoso Energy


Contoso Energy and Microsoft Fabric Demo Sandbox Options

No "Contoso Energy" dataset exists for Microsoft Fabric, but the Contoso retail dataset is widely used for training. Below are options for demo data and sandboxes for personal learning, with relevant Microsoft links.

Contoso Demo Data

Overview: The Contoso dataset, focused on retail, supports Fabric’s lakehouse and BI workloads with SQL, CSV, or Delta formats.  

Access: Download from the Microsoft Download Center or use the Contoso Data Generator for custom datasets.

Energy Data: No energy-specific dataset; source public energy data (e.g., Kaggle) for custom scenarios.

Fabric Sandbox for Personal Use

Trial Capacity: 60-day free trial (64 capacity units) at app.fabric.microsoft.com. Ideal for testing Contoso or custom datasets.

Sandbox: Trial acts as a sandbox for building lakehouses and reports. Convert to a Power Platform sandbox via Power Platform Admin Center for advanced needs.

Training: Free Fabric Analyst in a Day workshop on Microsoft Learn uses Contoso data for hands-on practice.

Energy-Specific Alternatives

Custom Datasets: Import energy data (CSV/Parquet) into Fabric trial for analytics practice.

Azure Sandbox: Use Azure’s free sandbox for data integration with Fabric at Azure Free Account, but monitor costs.

Community Samples: Check Microsoft Fabric Samples on GitHub for adaptable scenarios.
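For the custom-dataset route, a quick way to get energy-flavoured practice data is to generate it yourself. A minimal sketch using only the Python standard library (the schema and values are made up) that produces hourly smart-meter readings as CSV, ready to upload to a lakehouse Files area:

```python
import csv
import datetime as dt
import io
import random

def generate_energy_readings(n_meters=3, hours=24, seed=42):
    """Generate hourly smart-meter readings (kWh) as CSV text.

    The meter IDs, date range, and kWh range are arbitrary demo values.
    """
    random.seed(seed)
    buf = io.StringIO()
    writer = csv.writer(buf, lineterminator="\n")
    writer.writerow(["meter_id", "timestamp", "kwh"])
    start = dt.datetime(2025, 1, 1)
    for m in range(1, n_meters + 1):
        for h in range(hours):
            ts = start + dt.timedelta(hours=h)
            kwh = round(random.uniform(0.2, 3.5), 3)
            writer.writerow([f"MTR-{m:03d}", ts.isoformat(), kwh])
    return buf.getvalue()
```

Write the result to a `.csv` file, upload it to your trial lakehouse, and load it into a Delta table for Direct Lake practice.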

Recommendations

Start with Fabric’s trial for Contoso-based practice.  

Use Microsoft Learn’s Fabric modules.

Import energy datasets for relevant training.

Monitor costs if using Azure services!


New for June 2025!

 The [Digital Twin Builder (preview) tutorial introduction](https://learn.microsoft.com/en-us/fabric/real-time-intelligence/digital-twin-builder/tutorial-0-introduction) in Microsoft Fabric provides a hands-on guide to creating operational analytics scenarios using digital twins. It introduces a low-code/no-code tool that allows users to model and contextualize data from various sources—like sensors and control systems—within Microsoft Fabric’s unified analytics platform.


The tutorial centers on a fictional company, Contoso Energy, which uses the tool to improve efficiency, reduce energy consumption, and enhance product quality across its distillation sites. Users are guided through building a scenario ontology and visualizing insights with Power BI.


**Key prerequisites** include:

- A Microsoft Fabric-enabled workspace

- Digital Twin Builder (preview) enabled in the tenant settings

- Power BI Desktop installed (not just the web version)


This feature is currently in preview and is designed to help organizations drive operational improvements such as reducing waste, improving yield, and achieving sustainability goals.

Friday, May 2, 2025

MS Fabric Parquet File Query Modes Cost Case Studies? Compute, Capacity, and Storage Billing Line Items


General Microsoft Fabric Billing Model

Microsoft Fabric uses a capacity-based pricing model. You purchase a Fabric Capacity (an F SKU) with a certain number of Capacity Units (CUs). These CUs are a pool of compute resources shared across all Fabric workloads (Data Factory, Synapse Data Engineering/Data Science, Data Warehouse, Data Lakehouse, Power BI, etc.).  

Your bill will primarily consist of:

  1. Capacity Compute: Based on the CU consumption of all workloads running within your Fabric capacity.
  2. OneLake Storage: Based on the amount of data stored in OneLake. Transaction costs (reads, writes) to OneLake are generally included in the capacity compute.
  3. Optional Add-ons: Such as reserved capacity for discounted rates.
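A back-of-envelope sketch of the two main line items, with illustrative placeholder prices (check the Azure pricing page for your region and SKU before relying on any numbers):

```python
def estimate_monthly_cost(cu_count, price_per_cu_hour, hours=730,
                          onelake_gb=0, price_per_gb_month=0.023):
    """Rough Fabric bill: pay-as-you-go capacity compute plus OneLake
    storage. Both prices here are illustrative placeholders, not
    published rates; 730 is the average number of hours in a month.
    """
    compute = cu_count * price_per_cu_hour * hours
    storage = onelake_gb * price_per_gb_month
    return {"compute": round(compute, 2),
            "storage": round(storage, 2),
            "total": round(compute + storage, 2)}
```

For example, an always-on F64 at a hypothetical $0.18 per CU-hour with 1 TB in OneLake would be dominated by the compute line, which is why pausing or scaling capacity matters far more than trimming storage.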

Impact of Direct Lake vs. Direct Query on Billing

While direct case studies are lacking, we can infer the potential billing implications:

1. Capacity Compute (CU Consumption):

  • Direct Lake:
    • Queries against Direct Lake models are processed by the VertiPaq engine (the same in-memory engine used by Power BI Import mode).  
    • Since the data is directly read from OneLake in Parquet format and loaded into memory on demand for querying, the compute consumption is expected to be efficient and potentially lower compared to Direct Query for analytical workloads.
    • The refresh operation for Direct Lake models is metadata-only, making it a low-cost operation in terms of compute.  
    • However, if a Direct Lake model falls back to Direct Query (e.g., due to unsupported features or row-level security on the SQL endpoint), the compute consumption would be similar to a standard Direct Query.
  • Direct Query:
    • DAX queries are translated into the query language of the underlying data source (e.g., T-SQL for Fabric Data Warehouse or SQL endpoint of Lakehouse) and executed on that source.  
    • The compute cost here depends heavily on the efficiency and scale of the underlying data source. If the source is not optimized for BI-style queries, it can lead to higher CU consumption as Fabric waits for the source to return results.
    • Each query execution against the source will contribute to the overall capacity consumption.

2. OneLake Storage:

  • Direct Lake: The underlying data resides in OneLake in Delta Parquet format. The storage cost will be based on the volume of data stored in OneLake. Direct Lake itself doesn't incur additional storage costs beyond the base OneLake storage.
  • Direct Query: The data also resides in some form of storage (e.g., OneLake, Data Warehouse). Similar to Direct Lake, the storage cost is tied to the underlying data storage.

3. Specific Billing Line Items:

Without specific case studies, it's challenging to provide exact billing line items. However, when you analyze your Fabric capacity consumption in the Microsoft Fabric Capacity Metrics app, you would likely see:

  • For Direct Lake: Operations related to "Semantic Model Queries" or similar, reflecting the CU consumption of the VertiPaq engine processing the queries. OneLake read operations might also be visible.
  • For Direct Query: Operations related to the specific workload being queried (e.g., "Data Warehouse Query", "Lakehouse SQL Query"), indicating the CU consumption while Fabric interacts with that engine.

Key Considerations:

  • Optimization of Underlying Sources: For Direct Query, ensure your Data Warehouse or Lakehouse SQL endpoint is well-optimized with appropriate indexing and partitioning to minimize query execution time and thus CU consumption.
  • Data Volume and Complexity: Larger data volumes and more complex queries will generally consume more CUs regardless of the query mode. However, Direct Lake is designed to handle large volumes efficiently.  
  • Fallback to Direct Query: Be aware of scenarios where Direct Lake might fall back to Direct Query, as this could impact performance and potentially increase CU consumption.
  • Monitoring: Regularly monitor your Fabric capacity consumption using the Metrics app to understand how different workloads and query types contribute to your bill. Filter by workload and operation type to get a better understanding.

In summary, while direct billing case studies for Direct Lake vs. Direct Query are not readily available, the expectation is that Direct Lake, being optimized for large-scale analytics on OneLake data, would generally lead to more efficient compute utilization for BI workloads compared to Direct Query against potentially less optimized data sources. Storage costs are primarily tied to the volume of data in OneLake, irrespective of the query mode used for consumption.

To get a clearer understanding of your specific billing, it's recommended to:

  1. Experiment with both Direct Lake and Direct Query scenarios with your data.
  2. Monitor the CU consumption in the Microsoft Fabric Capacity Metrics app for each scenario.
  3. Analyze the operation-level details to see how each query type contributes to your overall capacity usage.
  4. Review your Azure bill for the Fabric capacity to see the total CU consumption and storage costs.
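If you export the Metrics app data to CSV for step 3, a small script can total CU consumption per operation type. The column names below are hypothetical; adjust them to match your actual export:

```python
import csv
import io
from collections import defaultdict

def cu_by_operation(csv_text):
    """Sum CU-seconds per operation from a metrics CSV export.

    Assumes (hypothetically) columns named 'Operation' and
    'CU_seconds'; rename to match the real export's headers.
    """
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["Operation"]] += float(row["CU_seconds"])
    return dict(totals)
```

Comparing the totals for semantic-model query operations against warehouse/SQL query operations gives a first-order view of how Direct Lake and Direct Query each contribute to your capacity usage.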

This hands-on approach with your specific data and workloads will provide the most accurate insights into the billing implications of Direct Lake vs. Direct Query in your environment.

Tuesday, December 5, 2023

Best Extensions and Tools to assist PowerAutomate Development

 

Best Extensions and Tools for PowerAutomate Developers 

We frequently don't consider what tools we have available as Robotic Process Automation developers, especially as we're often low-code developers. Yet using the right tools can have a big impact on your productivity, so check out my suggestions below.

Pre-Build


Design
Primary: Visio, Secondary: Whimsical

Although similar, these two actually exist in parallel in my workflow. The breadth of icons/templates in Visio is hard to beat, but Whimsical is excellent for speed. When drafting I love how quickly I can create a flow of steps. The UI is modern and simple, there is a great free allowance, and key features like exporting and sharing are included.

Whimsical

https://www.microsoft.com/visio
https://whimsical.com/




Data

Primary: Power BI, Secondary: Flourish

Again these have slightly different use cases for me, so I use both. Power BI is all about data connections and live dashboards, whereas Flourish is for one-off visuals for a presentation. So when building a business case for an RPA project I like to use Flourish, but for post-deployment monitoring I like Power BI. But they each can do either. Flourish has a great free tier, with only selected templates/chart styles unavailable.

Flourish

https://powerbi.microsoft.com/en-gb/
https://flourish.studio/



API

Primary: Postman, Secondary: Hoppscotch

Postman has always been the go-to API tool; with environments and variables it has all the features. Additionally, quite a few platforms accept Postman exports. Hoppscotch is almost a mirror of Postman but online (and yes, I know there is now a Postman web version, but it's not quite there yet). The UI is more modern and in my view better, and in a few cases I've had better compatibility. If you like Postman, you'll like this too, just without the overhead of installing it.

Hoppscotch

https://www.postman.com/
https://hoppscotch.io/


Coding

Primary: VS Code, Secondary: Notepad++

There is nothing that really comes close to VS Code; its breadth of add-ins, slick interface, and modern UI make it hard to beat. It even has a great web version too. But there is an old favourite, Notepad++. Its simple (if somewhat dated) UI just works, and its lightweight design makes it perfect for quick edits.

https://vscode.dev/
https://notepad-plus-plus.org/


Emails

Primary: VS Code, Secondary: Topol

An often crucial part of RPA is sending automated emails, and creating content-rich emails can often be a challenge. I currently fall back on VS Code to write my HTML email templates. But I have just started using Topol.io; it's not as quick as VS Code, but it's great for really complicated templates, it has a fine WYSIWYG UI, and the free tier has everything you need.

Topol

https://vscode.dev/
https://topol.io/


JSONs

Primary: JSON Crack, Secondary: JSON Formatter

Reading JSON outputs is not the easiest, but with JSON Crack it's easy. This site not only lets you view JSONs in a clean way, it has a unique node view that creates an interactive map (great for presentations). Though when working in the browser (particularly with Power Automate), JSON Formatter is king. It's a Chrome add-in (I expect it's available for Firefox too) which auto-formats JSONs. So whenever you open an API response or a Power Automate output download, you get the response in a clean, pretty view.

JSON Crack

https://jsoncrack.com/
https://chrome.google.com/webstore/detail/json-formatter
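When neither tool is to hand, Python's standard library can do the basic formatting step these extensions automate:

```python
import json

def pretty(raw):
    """Format a compact JSON string (e.g. a Power Automate run output)
    with 2-space indentation and sorted keys for easier reading."""
    return json.dumps(json.loads(raw), indent=2, sort_keys=True)
```

Paste a raw API response into `pretty()` (or pipe a file through `python -m json.tool`) and you get the same clean, readable view.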

Regexes

Primary: Regex101, Secondary: Regoio

Regex101 is the most flexible and feature-rich regex builder, and the one I keep going back to. It has a great UI that allows beginners to build regexes, and the error checking/validation covers everything. If you need something simple with a good cheat sheet, then my second choice is Regoio.

Regex101

Regoio

https://regex101.com/
https://regoio.herokuapp.com/
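Once a pattern is built and tested in Regex101, it drops straight into an automation script. A small example (the invoice reference format is made up for illustration):

```python
import re

# Hypothetical pattern, drafted and tested in Regex101 first:
# match invoice references like "INV-2023-0042" in an email body.
INVOICE = re.compile(r"\bINV-(\d{4})-(\d{4})\b")

def find_invoices(text):
    """Return every invoice reference found in the given text."""
    return [m.group(0) for m in INVOICE.finditer(text)]
```

The same pattern string works unchanged in a Power Automate condition or a VS Code search, which is exactly why prototyping it in Regex101 first pays off.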

Post Build

Documentation

Primary: Confluence, Secondary: Tettra

With great integration with Jira (Atlassian owns both) and superb search indexing, Confluence is excellent for documenting your automations. Tettra is the new kid on the block, with a modern UI and plenty of new features. It's still not as strong as Confluence, but it offers something different.

https://www.atlassian.com/software/confluence
https://tettra.com/

Guides

Primary: Articulate 360, Secondary: Scribehow

Articulate is an all-singing, all-dancing LMS system that can create in-depth training courses. Very much overkill for RPA, but a great tool. However, I have recently discovered Scribehow, which takes a very different approach. Instead of complicated training documents that can take a long time to create, Scribehow is a screen recorder that uses basic AI to automatically create a guide. You can then edit the output to get exactly what you want. For technical documents it's great. The free tier is fine, with basic features and the ability to record in a web browser, though for desktop recording and some premium features you'll need to pay.
