• Basic SKU Public IP addresses will be retired on 30 September 2025

    Microsoft has announced that all Basic SKU Public IP addresses in Azure will be retired on 30 September 2025. If you’re currently using Basic SKU IPs, it’s time to start planning your upgrade to the Standard SKU to avoid service disruptions. This change affects virtual machines, load balancers, and other resources relying on Basic IPs—so early action is key.

    Basic vs Standard Public IPs (Source: Microsoft Learn)

    | Aspect | Standard SKU Public IP | Basic SKU Public IP |
    | --- | --- | --- |
    | Allocation Method | Static | IPv4: Dynamic or Static; IPv6: Dynamic |
    | Security Model | Secure by default (closed to inbound traffic unless explicitly allowed via NSG) | Open by default (NSG optional) |
    | Availability Zones | Supported (non-zonal, zonal, or zone-redundant) | Not supported |
    | Routing Preference | Supported | Not supported |
    | Global Tier Support | Supported (via cross-region load balancers) | Not supported |
    | Standard Load Balancer | Supported | Not supported |
    | NAT Gateway Support | Supported | Not supported |
    | Azure Firewall Support | Supported | Not supported |

    Basic vs Standard Load Balancer (Source: Azure Docs)

    | Feature | Standard Load Balancer | Basic Load Balancer |
    | --- | --- | --- |
    | Scenario | High performance, ultra-low latency, zone-aware, cross-region | Small-scale apps, no zone support |
    | Backend Pool Type | IP-based, NIC-based | NIC-based |
    | Protocol Support | TCP, UDP | TCP, UDP |
    | Health Probes | TCP, HTTP, HTTPS | TCP, HTTP |
    | Availability Zones | Zone-redundant, zonal, non-zonal | Not available |
    | Diagnostics | Azure Monitor multi-dimensional metrics | Not supported |
    | HA Ports | Available | Not available |
    | Secure by Default | Closed to inbound flows unless allowed via NSG | Open by default |
    | Outbound Rules | Declarative outbound NAT configuration | Not available |
    | TCP Reset on Idle | Available | Not available |
    | Multiple Frontends | Inbound and outbound | Inbound only |
    | Management Operations | Most < 30 seconds | 60–90+ seconds |
    | SLA | 99.99% | Not available |
    | Global VNet Peering Support | Supported | Not supported |
    | NAT Gateway & Private Link | Supported | Not supported |



    For your VMs to continue to function after 30 September 2025, you need to upgrade these resources.

    How Do I Know?

    To see which VMs have the old IP SKU, go to the Azure Portal and search for "Public IP" in the top search bar. You will get a list of all Public IP addresses.
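    If you have many environments, a quicker way to find the affected addresses is from the command line. This is a minimal sketch, assuming the Az PowerShell module (available in Cloud Shell) and an active sign-in:

```powershell
# List every Public IP in the current subscription that still uses the Basic SKU.
Get-AzPublicIpAddress |
    Where-Object { $_.Sku.Name -eq 'Basic' } |
    Select-Object Name, ResourceGroupName, IpAddress, PublicIpAllocationMethod
```

    Any IP that shows up here needs to be upgraded (or its VM redeployed) before the retirement date.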

    What do I need to do?

    If the VM is old and only used for development, it is fairly easily replaced. Either you just redeploy it from LCS (the new VM will have the correct IP SKU) or, if you are up for it, you can deploy one of Microsoft's fancy new Universal Development Environments, and then you will not have to worry about the infrastructure again.

    If you need to keep the VM, there is an option to upgrade the IP address. However, all of the VMs that I have needed to update were created from LCS, and all of those have a load balancer that also needs to be upgraded.

    Upgrading the LoadBalancer

    All of the VMs deployed from LCS have a load balancer, and if you have an older VM with a legacy IP, chances are that you also have a Basic LoadBalancer. Since we cannot have a Standard IP connected to a Basic LoadBalancer, we need to upgrade them both. Fortunately, there is a nice script to do this.

    Start a Cloud Shell (PowerShell) in the Azure portal and install the module:

    Install-Module -Name AzureBasicLoadBalancerUpgrade -Scope CurrentUser -Repository PSGallery -Force

    Run the script in validation mode to verify that all prerequisites are in place (issues that I ran into are documented below):

    Start-AzBasicLoadBalancerUpgrade -ResourceGroupName [ResourceGroupName] -BasicLoadBalancerName [LoadBalancerName] -validateScenarioOnly:$true

    If that goes through without a hitch, run the same command but without the -validateScenarioOnly switch:

    Start-AzBasicLoadBalancerUpgrade -ResourceGroupName [ResourceGroupName] -BasicLoadBalancerName [LoadBalancerName]

    When you run the upgrade script for the LoadBalancer, it will also upgrade the Public IP to a Standard SKU.

    Missing Backend Pool

    On some of the VMs I have upgraded, I found that the LoadBalancer configuration was missing a Backend Pool, so the VM was not in any pool.


    To add the pool and the VM, go to the LoadBalancer, open Settings – Backend pools and click Add. Give the pool a name (e.g. vm-backend-pool) and add the VM to it by clicking Add.
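    If you prefer to script this part, the pool itself can be created with a short sketch like the one below (the bracketed names are placeholders, and adding the VM's network interface to the new pool is still a separate step, e.g. via the portal):

```powershell
# Add a backend pool named 'vm-backend-pool' to an existing load balancer.
# [ResourceGroupName] and [LoadBalancerName] are placeholders.
$lb = Get-AzLoadBalancer -ResourceGroupName "[ResourceGroupName]" -Name "[LoadBalancerName]"
$lb | Add-AzLoadBalancerBackendAddressPoolConfig -Name "vm-backend-pool"
$lb | Set-AzLoadBalancer   # persist the change to Azure
```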

    Updating the IP

    To upgrade the IP you will need to temporarily disassociate it from the VM and then reconnect it after the upgrade. These are the steps to do it:

    1. Disassociate the IP from the resources it is connected to
    2. Upgrade the IP address to the Standard SKU
    3. Reassociate the IP with the resources again
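    A minimal PowerShell sketch of the upgrade step (assuming the Az module in Cloud Shell; the bracketed resource names are placeholders) could look like this:

```powershell
# Upgrade a disassociated Basic SKU Public IP to the Standard SKU.
# [ResourceGroupName] and [PublicIpName] are placeholders.
$pip = Get-AzPublicIpAddress -ResourceGroupName "[ResourceGroupName]" -Name "[PublicIpName]"
$pip.PublicIpAllocationMethod = "Static"   # Standard SKU requires static allocation
$pip.Sku.Name = "Standard"
Set-AzPublicIpAddress -PublicIpAddress $pip
```

    After the command completes, reassociate the IP with the VM's network interface.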

    Links
    Upgrade a public IP address | Microsoft Learn

    Upgrade from Basic to Standard with PowerShell – Azure Load Balancer | Microsoft Learn

    Upgrade Basic Public IP Address to Standard SKU in Azure | Microsoft Learn

    Azure Load Balancer SKUs | Azure Docs

  • Using Open Source Software in your Dynamics Implementation

    The idea of this article started for a couple of reasons. The first thing that happened was that Alex Meyer released his D365FO Admin Toolkit on GitHub. The second thing was that I read the brilliant article Scary dangerous creepy tools by Jonas Rapp, and these two things made me think about the benefits and challenges of using Open Source in Dynamics 365 for Finance and Supply Chain. (Since then there have been others, such as Jonas Feddersen Melgaard's D365FinanceToolbox.)

    First of all, I will dive into some background… About 10 years ago, I left the "IT infrastructure" world and ventured into the Dynamics world. On the infrastructure side, Open Source Software is a big thing. The majority of web servers on the internet run on Linux, a lot of our internet appliances (such as firewalls and routers) run on Linux, more than half the world's cell phones have a base in Open Source, and components such as curl, OpenSSH and Mermaid are used by millions of business users every day, since they are baked into commercial products. In fact, 60% of all compute cores in Microsoft Azure run some version of Linux.

    So the question is: why is it OK to use Open Source Software, built by the community, everywhere else but not in Dynamics? "Because it is our super-duper critical ERP system, of course!!!!" Well, I would argue that there are more important systems in your organization (not to belittle your ERP system) that are using at least a couple of Open Source components. That means that the actual issue is not the code itself… it is something else. In this article we will try to understand the blockers and why they might not be relevant.

    Mentioning the fact that a lot of commercial closed-source software uses Open Source components is, I agree, a bit dishonest. However, it makes a good point, and I think it puts a finger on the real issue here: responsibility. When a company embeds a 3rd-party component in its software, it assumes responsibility for it. It agrees to patch it (which is not always done) and to take responsibility for the end product towards the customer. We will come back to the question of responsibility later in this text.

    The Benefits

    There are a couple of benefits of using software made by someone else in our solution. The most obvious one is the same as the argument for ISV software: we do not need to build something from scratch, which frees up time for our project and for our developers to do other things.

    There is also the quality argument. If someone has built it and many others use it, the risk of it being broken is smaller than if everyone builds their own solution to a common problem. Working iteratively on a common software base over time will hopefully make it less likely to break. Another argument here will be "But we are solving our own, unique issues", and that might be so… but not all of your challenges are unique. If you have an issue in Dynamics, it might be that others have the same issue and that they need to solve it too.

    There is also the question of missing features in the product. When Microsoft adds new features to Dynamics, they need to prioritize them, and if not enough organizations want a feature, it will not bubble to the top of the priority list and thus it will not be built. Having community-developed features helps bridge the gap and also acts as an indicator to Microsoft of what customers want in the product. On the consumer side of tech, there is a concept called Sherlocking: that is when (in this case) Apple implements a function, that a 3rd-party software developer has built, into iOS or macOS.

    The Challenges

    Earlier I compared Open Source Software to ISV solutions, and to be honest there is one big difference… responsibility. When you buy an ISV solution from a vendor, or let a partner build and implement your solution for you, there is always someone else who assumes responsibility for the code written. But as with all contracts, there are always disclaimers… this applies even to the license agreement of Dynamics. There are some things not covered by the agreement. The responsibility always lands on the end customer in the end, and you as an end customer need to be ready to assume that responsibility. If we assume that the responsibility falls on the customer in the end, there is basically no difference between Microsoft-, ISV-, partner-, Open Source- or customer-created code. We still need to test it and make sure that it is maintained and updated. The main difference is that we (as an organization) cannot affect Microsoft's, the ISV's or (in some cases) the partner's code. We are, however, able to make changes to an Open Source solution.

    I know that there are ISVs out there that supply the source code for their solution… Is that the same as using Open Source? Well, not exactly, but sort of… There are upsides and downsides to this. If we buy the solution from the vendor and we have a support agreement, we should try to stay away from editing the code; it blurs the lines of responsibility. With that being said, there is still a benefit in that we can speed up the troubleshooting process, because we are able to debug and to help provide the solution to the vendor. The real benefit of getting access to the codebase of an ISV solution is, however, if something happens to the vendor and they go out of business. In that case we can choose to continue to support the solution ourselves.

    The Commitments

    As we have seen in this article, there are some things we need to think about when we start using Open Source Software. As always, we need to make sure the software holds up to the level of quality we need, and we need to keep it updated (and with that comes, of course, testing and code review, in the same way as with our own code). But I also think there is another level of commitment here: if we find a bug in the code, we should be a "good citizen" of the community and at least report the bug (maybe even with a proposed solution), or even write a fix and submit a pull request back to the project to get the fix into the code base… if it benefits us, it will probably benefit others.

    “So that means that we should use our precious time writing code for others?” Yes, with the time we save having others write code and test it for us, we absolutely should pitch in and help. I am absolutely convinced that we will spend less time in the long run while at the same time helping others do the same.

    The Conclusions

    Using 3rd-party tools always needs to be a deliberate choice, and going with an Open Source solution comes with its own challenges. But we also need to understand that building all of our customizations from scratch means that we will be using a "one-off" solution that does not always adhere to best practices. In that case we are on our own, but when we use a solution built by the community, at least we can figure out the solution together.

  • Troubleshooting Dualwrite Microsoft.Dynamics.Integrator.Exceptions.IntegratorClientException

    Currently I am in the middle of installing Project Operations for a customer. In order to provide data to Project Operations we need to use Dualwrite to move data from Dynamics 365 for Finance and Supply Chain into Dataverse, which Project Operations uses as its database.

    Yesterday I found a weird Dualwrite issue. In order to sync customers, we also need to sync the entity CDS Contacts V2 (contacts). I started the initial sync… after running for around 6 (!!) hours it failed with the following error:

    Type=Microsoft.Dynamics.Integrator.Exceptions.IntegratorClientException, Msg=Type=Microsoft.Dynamics.Integrator.Exceptions.IntegratorClientException, Msg=FinOps export encountered an error.(Type=Microsoft.Dynamics.Integrator.Exceptions.IntegratorClientException, Msg=Export failed, Please check execution for project DWM-d91cec93-bcf1-4a8e-a7fa6b0615e195c45fb9bb52d8690665b9c_0 and execution id ExportPackage-9/2/2025 7:23:39 AM-a41d84f0-e619-4a43-83db-d4f6a4855b97 in FnO. Error details Type=Microsoft.Dynamics.Integrator.Exceptions.IntegratorClientException, Msg=F&O export encountered an error. Please check project and execution ExportPackage-9/2/2025 7:23:39 AM-a41d84f0-e619-4a43-83db-d4f6a4855b97 in F&O)
    

    I updated mappings and refreshed the entity list (as you do) and reran it, with the same issue.

    The initial sync uses the Data Management Framework (DMF) to move the data to Dataverse, so I thought I should look at the execution logs for the DMF project. But when I filtered the list, the project did not exist (!?!?!?!)

    As the next step, I tried to manually export CDS Contacts V2 (contacts) from DMF. I finally got a useful error!!

    That led me to the Entity List in DMF… there I found something strange:

    Normally the status for the entity should be Enabled… it was not. I then went to the License page in FnO.

    It turns out the customer had disabled a lot (!) of configuration keys, and one of them was CDS Integrations… After entering maintenance mode and enabling the key, the entities were still disabled. To see the correct status, you need to do an Entity List Refresh from DMF – Framework parameters.

    After that, the sync went through just fine.

    Today's learning is around configuration keys… do not disable them if you are not able to oversee the full consequences of doing so.

  • Red Flags

    How should you handle an incident? Should you build something new, or should you fix what is broken? That is the topic of today's episode.

    Markus Lassfolk  @lassfolk  http://isolation.se/

    Mikael Nyström @mikael_nystrom  https://deploymentbunny.com/

    Viktor Hedberg @headburgh https://hedbergtech.se/

    Johan Persson @JoPe72 https://blog.johanpersson.se

     

    Jingle by David Lilja (cutpaste.org)

  • Sovereign Cloud

    In this episode of The Nerd Herd, we talk about the Microsoft blog posts released this week about Sovereign Cloud and how European companies should be handled in data centers owned by American companies.

     

    The blog posts:
    https://blogs.microsoft.com/blog/2025/06/16/announcing-comprehensive-sovereign-solutions-empowering-european-organizations/

    https://blogs.microsoft.com/on-the-issues/2025/04/30/european-digital-commitments/

     

    Markus Lassfolk  @lassfolk  http://isolation.se/

    Mikael Nyström @mikael_nystrom  https://deploymentbunny.com/

    Viktor Hedberg @headburgh https://hedbergtech.se/

    Johan Persson @JoPe72 https://blog.johanpersson.se

     

    Jingle by David Lilja (cutpaste.org)

  • Accessing DualWrite Settings

    Today a colleague of mine reached out. He was having issues with DualWrite at a customer. When he entered the management page from D365 Finance and Supply Chain, he saw this:

    The easiest way to get access to Dataverse is to make sure you access it with an account in the customer's own Entra ID tenant; otherwise Dataverse gets all confused.

    1. Start Microsoft Edge in Inprivate mode or create a new profile.
    2. Go to http://make.powerapps.com/ and make sure that you have access to the correct environment. This step also has another benefit:
      sometimes the Single Sign-On between Power Platform and Dynamics 365 for Finance and Supply Chain does not work correctly, and logging in to the Maker Portal ensures that you already have a valid authentication cookie before you go to the DualWrite Management page in FnO.
    3. Go to the DualWrite Management page in D365FO and verify.
    4. If it is still not visible: ask a person with System Administrator permissions in Dataverse to give you the System Administrator and System Customizer roles, and then go to https://aka.ms/ppac to verify that you have access.

    Disclaimer: there are other security aspects to take into account when you log into a customer account on your computer… those are not in focus here, and I am not a security specialist. My considerations here are purely functional.

    Good Luck

  • Dualwrite – Beware of the reverse

    I am setting up Dualwrite at a customer, and I ran into an issue the other day. The customer wants to be able to create customers in Dynamics 365 CE and sync them to Dynamics 365 for FO. We had done all of the initial syncs and multiple tests of creating Accounts in CE, and they synced perfectly to FO.

    When I was trying to figure out a way to manage Financial Dimension population while creating accounts I tried creating the customer directly in FO, just to test a thing… and it failed!!

    I tried again from CE and it worked… but not from FO. We had been so focused on testing one direction but not the other… Doh!

    So, what was the issue? I got this error:

    Unable to write data to entity accounts.Writes to CustCustomerV3Entity failed with error message Request failed with status code BadRequest and CDS error code : 0x80048d19 response message: Error identified in Payload provided by the user for Entity :'accounts', For more information on this error please follow this help link https://go.microsoft.com/fwlink/?linkid=2195293 ----> InnerException : Microsoft.OData.ODataException: Cannot convert the literal '' to the expected type 'Edm.Int32'. ---> System.FormatException: Input string was not in a correct format.
    

    and then it continued with a stack trace… Hmmm…

    So I started brainstorming with a colleague: it is obviously a data type mismatch, and when I turn off the mapping for Account, it works. Going through the mapping, we had added a few transform mappings, so we started there. It turns out that all of these were 1-to-1 mappings. The problem was that three of the fields were not on the initial "customer creation sidebar". In CE these were made mandatory, but in FO they were not.

    I took a look at the mappings for these three fields and one stood out: it had no mapping for the empty value.

    This meant that when going from FO to CE, Dualwrite tried to convert an empty string to an integer, and since there was no transform for the empty value and no default, it could not be written to CE.

    The easiest way to add a mapping from the empty value to null is to edit the JSON version of the mapping (I did not know this was possible until a short while ago):

    [
    	{
    		"transformType": "ValueMap",
    		"valueMap": {
    			"Dealer": "787980000",
    			"End user": "787980001",
    			"Nat OEM": "787980002",
    			"Reg OEM": "787980003",
    			"Int OEM": "787980004",
    			"Integrator": "787980005",
    			"Contractor": "787980006",
    			"Other": "787980007",
    			"": null
    		}
    	}
    ]

    Add the last line, and do not forget the comma at the end of the previous line.
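    To make the behavior concrete, here is the same value map sketched as a PowerShell hashtable (just an illustration of the lookup, not actual Dualwrite code):

```powershell
# The ValueMap as a hashtable; the last entry maps the empty value to null.
$valueMap = @{
    "Dealer"   = "787980000"
    "End user" = "787980001"
    "Other"    = "787980007"
    ""         = $null
}

$valueMap["Dealer"]   # -> "787980000"
$valueMap[""]         # -> null, instead of trying to convert "" to an integer
```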

    That was it… I saved it and restarted the mapping. We verified it… Worked!!!

    That was it for today… see you around

  • The Arctic Hackathon experience

    When I first received the invitation to be a judge at the Arctic Cloud Developer Challenge in Oslo, my initial reaction was one of hesitation: “I’ve never done this, this is a bit scary.” But then I re-framed my thinking: “I’ve never done this… Cool!” That shift in perspective led me to one of the most memorable experiences of my career.

    The hackathon took place in beautiful Holmenkollen, just outside Oslo, where 15 teams gathered to compete in what would prove to be an extraordinary event. The teams, ranging from solo developers to groups of six, competed across six different categories with the chance to earn 33 different badges. While Dynamics 365 and Power Platform formed the foundation, teams weren’t limited to just these technologies – they incorporated everything from IoT and ProCode to AI in their solutions.

    This year’s theme transported us into the magical world of Harry Potter, and the teams certainly rose to the occasion. The creativity was astounding: we saw everything from a digital Sorting Hat to a comprehensive intranet for Hogwarts. Some teams even ventured to the darker side of magic, creating a digitized assassin portal for dark wizards and a Howler service complete with a public API. The Weasley twins would have been proud to see their mischievous spirit living on in a Facebook-style clone designed just for wizards.

    The energy throughout the event was electric. Even after our first day wrapped up and dinner was served at 21:30, many participants returned to their stations, continuing to build and refine their projects. The dedication was inspiring – teams worked around the clock, but what struck me most was how they supported one another despite the competition. The spirit of collaboration and mutual learning created an atmosphere that was truly magical.

    The competition was incredibly close – in the end, just nine points separated the second-place team from the winners. But beyond the competitive aspect, what I witnessed was the pure joy of creation and innovation. Teams were fully immersed in their projects, pushing boundaries, and most importantly, learning from each other.

    As I sit here at Oslo S station, waiting for my train back to Sweden, I’m exhausted but filled with an overwhelming sense of happiness. Someone asked me at the closing party (and what a party it was!) how I felt about stepping out of my comfort zone. I can honestly say I haven’t been this happy or had this much fun in a long time.

    ## Key Takeaways from the Event

    One of the most significant lessons from this hackathon was the importance of flexibility in the creative process. While having a basic framework is essential, the most innovative solutions emerged when teams were given the freedom to think outside the box. Rigid scope definitions can often stifle creativity and limit potential solutions. By allowing teams to explore and adapt their approaches as they worked, we witnessed truly unique and innovative solutions that might never have materialized under stricter constraints. I think adopting this mindset in a delivery scenario would be of great benefit: it would create happier employees and customers when the focus is on making the solution "something extraordinary".

    The strict time constraints of the hackathon, rather than being a limitation, proved to be a catalyst for excellence. When teams knew they had just three days to deliver, it forced them to focus on what truly mattered, make quick decisions, and maintain a steady momentum. The pressure of the deadline seemed to sharpen their creativity rather than hinder it, leading to more focused and innovative solutions. Sometimes in a delivery to a customer, decisions tend to extend into a whole series of meetings and the urgency gets a bit lost. Having a clear deadline for a decision, and then applying an agile mindset that allows us to go back and revisit it rather than requiring every decision to be perfect from the start, would not only create efficiency but would also encourage a culture of psychological safety.

    Another fascinating observation was the effectiveness of smaller teams. Throughout the event, we noticed that smaller, tighter-knit teams often demonstrated stronger cohesion and more efficient collaboration. With fewer team members, each person naturally took on more responsibility and felt a greater sense of ownership in the project’s success. This heightened sense of personal investment and belonging seemed to fuel their motivation and drive better outcomes. The intimate team dynamic also appeared to facilitate quicker decision-making and more agile problem-solving.

    Sometimes the best experiences come from saying “yes” to things that initially seem intimidating. This weekend proved that stepping out of your comfort zone can lead to extraordinary adventures and meaningful connections. As I head home, I’m taking with me not just memories, but also new friendships and a renewed appreciation for the magic that happens when passionate developers come together to create.

  • AADSTS50011: The redirect URI ‘https://D365FOenv.operations.eu.dynamics.com/’ specified in the request does not match the redirect URIs

    Last week I needed to set up a new Dynamics 365 for Finance and Supply Chain environment, and I got a strange error message which took some time to figure out.

    AADSTS50011: The redirect URI 'https://enadvdemo01.operations.eu.dynamics.com/' specified in the request does not match the redirect URIs configured for the application '00000015-0000-0000-c000-000000000000'. Make sure the redirect URI sent in the request matches one added to your application in the Azure portal. Navigate to https://aka.ms/redirectUriMismatchError to learn more about how to fix this.

    (Since I am not an Entra ID expert, I might get some details wrong in the explanation, but this is what I think the issue is.)

    The issue here is that when you are working with D365FO, which is a Microsoft SaaS-ish service, there is a Service Principal created for Microsoft's application in your Entra ID tenant. When you set up a new environment, the URL for that environment is added to that Service Principal as two ReplyUrls: one for the base URL and one for the OAuth endpoint.

    Apparently there is a limit (255) on how many of these URLs you can have on the Service Principal. This means that when you have deployed enough environments, the property fills up. I am guessing that there might be a clean-up routine for these, but that it sometimes fails.

    The solution is to remove a couple of old ones and manually add the new ones.

    1. Log into the Azure Portal

    2. Start the Cloud Shell

    3. In the Cloud Shell, run the following commands

    Connect-AzureAD
    
    $AADRealm = "00000015-0000-0000-c000-000000000000"
    Get-AzureADServicePrincipal -Filter "AppId eq '$AADRealm'"

    Find old, retired URLs here and run the following

    $EnvironmentUrl = "https://newenv.operations.eu.dynamics.com"
    
    $OLDEnvironmentUrl = "https://retiredenv.operations.eu.dynamics.com"
    
    $SP = Get-AzureADServicePrincipal -Filter "AppId eq '$AADRealm'"
    
    $SP.ReplyUrls.Remove("$OLDEnvironmentUrl")
    $SP.ReplyUrls.Remove("$OLDEnvironmentUrl/oauth")
    
    $SP.ReplyUrls.Add("$EnvironmentUrl")
    $SP.ReplyUrls.Add("$EnvironmentUrl/oauth")
    
    Set-AzureADServicePrincipal -ObjectId $SP.ObjectId -ReplyUrls $SP.ReplyUrls

    This will remove the retired URLs and add the ones for the new environment.
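    Note that the AzureAD PowerShell module used above has been deprecated by Microsoft. If it is no longer available in your Cloud Shell, a rough sketch of the same operation using the Microsoft Graph PowerShell module (untested against your tenant; environment URLs are placeholders) could look like this:

```powershell
Connect-MgGraph -Scopes "Application.ReadWrite.All"

$AADRealm = "00000015-0000-0000-c000-000000000000"
$sp = Get-MgServicePrincipal -Filter "appId eq '$AADRealm'"

# Build a new list of reply URLs: drop the retired ones, add the new ones.
$urls = [System.Collections.Generic.List[string]]$sp.ReplyUrls
$urls.Remove("https://retiredenv.operations.eu.dynamics.com") | Out-Null
$urls.Remove("https://retiredenv.operations.eu.dynamics.com/oauth") | Out-Null
$urls.Add("https://newenv.operations.eu.dynamics.com")
$urls.Add("https://newenv.operations.eu.dynamics.com/oauth")

Update-MgServicePrincipal -ServicePrincipalId $sp.Id -ReplyUrls $urls
```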

    Links:
    Error AADSTS50011 – The reply URL specified in the request does not match the reply URLs configured for the application <GUID>. | Microsoft Learn
    Solved: AADSTS50011: The reply URL specified in the request does not match the reply URLs configured for the application: ‘00000015-0000-0000-c000-000000000000’.

  • Windows Server 2025

    In this episode we take a look at some of the new features in Windows Server 2025. Since we do not have time to cover everything new, you can find the rest in the release notes.

    Markus Lassfolk  @lassfolk  http://isolation.se/
    Mikael Nyström @mikael_nystrom  https://deploymentbunny.com/
    Viktor Hedberg @headburgh https://hedbergtech.se/
    Johan Persson @JoPe72 https://blog.johanpersson.se
    Jingle by David Lilja (cutpaste.org)