
Now Available: Update 1606 for System Center Configuration Manager


Once again, we’re pleased to announce that we’ve released a new version of our System Center Configuration Manager current branch (1606) that includes some great new features and product enhancements.

Looking back at the last 7 months, we’re encouraged by the positive response and momentum we’ve seen with our new current branch model; we now have over 16,000 organizations managing 30 million devices with Configuration Manager version 1511 or later. While we’re thrilled about the adoption we’ve seen, the real point of pride for our team rests in the fact that our quality and reliability have remained so high through this monumental shift. Incredibly, we haven’t seen any increase in the number of support incidents since launching our current branch model! Read more about the reasons behind our current branch success here.

As we release 1606, we’re optimistic that this winning streak will continue. Thanks to our active technical preview community, the 1606 update takes into account feedback and usage data we’ve gathered from customers who have installed and road tested our monthly technical previews over the last few months. It’s also been tested at scale — by real customers, in real production environments — with great success. As of today we have over 1 million devices being managed by the Configuration Manager 1606 update!

1606 is full of new features and enhancements in security and data protection, application management, content distribution, deployment and provisioning, and the end-user experience, and it includes loads of new functionality for customers using Configuration Manager in hybrid mode with Microsoft Intune. This is also the version that brings support for the Windows 10 Anniversary Update. Here’s a small sample of what you’ll get when you upgrade:

  • Windows Information Protection (formerly EDP) features allow you to create and deploy an information protection policy, including the ability to choose your protected apps and define your EDP protection level.
  • Windows Defender Advanced Threat Protection features enable you to onboard and offboard Windows 10 clients to the cloud service and view agent health in the monitoring dashboard (requires a Windows Defender ATP tenant in Azure).
  • Windows Store for Business Integration allows you to manage and deploy applications purchased through the Windows Store for Business portal for both online and offline licensed apps.
  • Windows Hello for Business policies for domain-joined Windows 10 devices managed by the Configuration Manager client.

We’ve also added a number of popular User Voice items, including:

  • The addition of content status links in the admin console
  • The option of list view for applications in the Software Center
  • The ability to select multiple updates and simultaneously install them with the new Install Selected Updates button in the Software Center

For more details and to view the full list of new features in this update check out our What’s new in version 1606 of System Center Configuration Manager documentation on TechNet.

Note: As the update is rolled out globally in the coming weeks, it will be automatically downloaded and you will be notified when it is ready to install from the “Updates and Servicing” node in your Configuration Manager console. If you can’t wait to try these new features, this PowerShell script can be used to ensure that you are in the first wave of customers getting the update. By running this script on your central administration site or standalone primary site, you will see the update available in your console right away.

For assistance with the upgrade process please post your questions in the Site and Client Deployment forum. To provide feedback or report any issues with the functionality included in this release, please use Connect.  If there’s a new feature or enhancement you want us to consider including in future updates, please use the Configuration Manager UserVoice site.

Thank you,

The System Center Configuration Manager team

Additional resources:


Kovter becomes almost file-less, creates a new file type, and gets some new certificates


Trojan:Win32/Kovter is a well-known click-fraud malware which is challenging to detect and remove because of its file-less persistence on infected PCs. In this blog, we will share some technical details about the latest changes we have seen in Kovter’s persistence method and some updates on their latest malvertising campaigns.

New persistence method

Since June 2016, Kovter has changed its persistence method to make remediation harder for antivirus software.

Upon installation, Kovter will generate and register a new random file extension (for example, .bbf5590fd) and define a new shell open verb to handle this specific extension by setting the following registry keys:


Figure 1: Registry setup for Kovter

With this setup, every time a file with the custom file extension (.bbf5590fd) is opened, the malicious Kovter command contained in the registry key is executed via the shell open verb.

Therefore, all Kovter needs to do to run on infected machines is open a file with its custom file extension .bbf5590fd, causing the malicious shell open command to run. This in turn runs a command using mshta.

Mshta (mshta.exe) is a legitimate Windows tool that Kovter abuses to execute malicious JavaScript. This JavaScript then loads the main payload from another registry location, HKCU\software\67f1a6b24c\d0db239. To trigger this shell open command on a regular basis, Kovter drops several garbage files with its custom file extension in different locations, for example:

The contents of these files are not important, since the malicious code is contained within the shell open verb registry key. The last step in the installation process is setting up the auto-start mechanism to automatically open the above files. Kovter uses both a shortcut file and a batch (.bat) file for this:

Using a shortcut file

Kovter drops a shortcut file (.lnk) in the Windows startup folder which points to the garbage files. We have seen it drop the following shortcut file:

  • %APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup\28dd1e3d.lnk

The target command of the shortcut file is the following:

C:\Windows\System32\cmd.exe /C start “” “C:\Users\Admin\AppData\Roaming\33e58839\3ad319e6.bbf5590fd”

Once executed at startup, this command will open the file, causing the malicious shell open verb to run the malicious mshta command previously set up in the registry system (see Figure 1).

Using a batch script file

Kovter will drop a batch script file (.bat) and set a registry run key to execute the .bat file. The .bat file will be dropped in a randomly generated folder, such as:

The .bat file has the following content:


Figure 2: Content of the .bat file setup in run key

 

Once executed, this batch file also opens the dropped file, which then triggers the malicious shell open verb.

Instead of adding the mshta script directly to a registry run key as in the old variant, Kovter now uses this shell open trick to start itself. Although Kovter is technically no longer fully file-less after this latest update, the majority of the malicious code is still held only within the registry. To remove Kovter completely from an infected computer, antivirus software needs to remove all of these dropped files as well as the registry changes.
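As an illustration (a hunting sketch of ours, not part of the original analysis), you can enumerate HKCU class registrations whose shell open verb invokes mshta, the indicator described above; remember that the extension and class names are random per infection:

# List HKCU file-type classes whose shell open verb launches mshta
Get-ChildItem 'HKCU:\Software\Classes' -ErrorAction SilentlyContinue |
    ForEach-Object {
        $cmdKey = Join-Path $_.PSPath 'shell\open\command'
        if (Test-Path $cmdKey) {
            $cmd = (Get-Item $cmdKey).GetValue('')
            if ($cmd -match 'mshta') {
                [pscustomobject]@{ Class = $_.PSChildName; Command = $cmd }
            }
        }
    }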

Windows Defender is able to successfully clean up and remove these new versions of this threat.

Kovter malvertising updates

Since our last blog on Kovter spreading through malicious advertisements as a fake Adobe Flash update, we have observed some changes.

On top of the fake Adobe Flash updates, Kovter is now also pretending to be a Firefox update. Kovter has also rotated through a series of new digital certificates, including the following:

Certificate signer hash                  | Valid from  | Valid until
7e93cc85ed87ddfb31ac84154f28ae9d6bee0116 | Apr 21 2016 | Apr 21 2017
78d98ccccc41e0dea1791d24595c2e90f796fd48 | May 13 2016 | May 13 2017
c6305ea8aba8b095d31a7798f957d9c91fc17cf6 | Jun 22 2016 | Jun 22 2017
b780af39e1bf684b7d2579edfff4ed26519b05f6 | May 12 2016 | May 12 2017
a286affc5f6e92bdc93374646676ebc49e21bcae | May 13 2016 | May 13 2017
ac4325c9837cd8fa72d6bcaf4b00186957713414 | Nov 18 2015 | Nov 17 2016
ce75af3b8be1ecef9d0eb51f2f3281b846add3fc | Dec 28 2015 | Dec 27 2016

Table 1: List of certificates used by Kovter

 

We’ve noticed that every time the Kovter actors release a new wave of samples signed with a new certificate, they hit a lot of machines. This can be seen in our telemetry for the past three months, with spikes on May 21, June 14, and the first week of July.


Figure 3: Kovter’s prevalence for the past two months

 

Besides fake Adobe Flash and Firefox updates, Kovter also pretends to be a Chrome update (chrome-update.exe).

We have seen Kovter downloaded from a large list of URLs, including:

  • hxxps://eepheverseoftheday.org/2811826639187/2811826639187/146819749948281/FlashPlayer.exe
  • hxxps://deequglutenfreeclub.org/8961166952189/8961166952189/146809673281840/FlashPlayer.exe
  • hxxps://zaixovinmonopolet.net/5261173544131/5261173544131/146785099939564/FlashPlayer.exe
  • hxxps://feehacitysocialising.net/7561659755159/1468089713424429/firefox-patch.exe
  • hxxps://eepheverseoftheday.org/1851760268603/1851760268603/1468192094476645/firefox-patch.exe
  • hxxps://uchuhfsbox.net/8031143191240/8031143191240/1467996389305283/firefox-patch.exe
  • hxxps://ierairosihanari.org/1461656983266/1461656983266/1467987174641688/firefox-patch.exe
  • hxxps://anayimovilyeuros.net/7601143032510/7601143032510/1465468888898207/chrome-patch.exe

For reference, here are some SHA1s corresponding to each certificate used by Kovter:

Certificate signer hash                  | SHA1
7e93cc85ed87ddfb31ac84154f28ae9d6bee0116 | 7177811e2f7be8db2a7d9b1f690dc9e764fdc8a2
78d98ccccc41e0dea1791d24595c2e90f796fd48 | da3261ceff37a56797b47b998dafe6e0376f8446
c6305ea8aba8b095d31a7798f957d9c91fc17cf6 | c3f3ecf24b6d39b0e4ff51af31002f3d37677476
b780af39e1bf684b7d2579edfff4ed26519b05f6 | c49febe1e240e47364a649b4cd19e37bb14534d0
a286affc5f6e92bdc93374646676ebc49e21bcae | 3689ff2ef2aceb9dc0877b38edf5cb4e1bd86f39
ac4325c9837cd8fa72d6bcaf4b00186957713414 | e428de0899cb13de47ac16618a53c5831337c5e6
ce75af3b8be1ecef9d0eb51f2f3281b846add3fc | b8cace9f517bad05d8dc89d7f76f79aae8717a24

Table 2: List of Kovter SHA1 for each certificate

 

To protect yourself from this type of attack, we encourage you to only download and install applications and their updates from the original, trusted websites.

Using an up-to-date version of an antimalware scanner like Windows Defender will also help you to stay protected from Kovter.

Duc Nguyen
MMPC

Nemucod dot dot..WSF


The latest Nemucod campaign shows the malware distributing a spam email attachment with a .wsf extension, specifically a ..wsf (double-dot) extension.

It is a variation of what has been observed since 2015: the TrojanDownloader:JS/Nemucod malware downloader using JScript. It still spreads through spam email attachments, typically inside a .zip file, using a file name of interest with a .js or .jse extension.

The following screenshots show what the malicious file attachment looks like in the recent campaign:


Figure 1: Example of a spam email containing the latest version of Nemucod

 


Figure 2: The Nemucod malware extracted and opened with an archive viewer

What the double dots mean: Social engineering for unsuspecting eyes

As seen in the following sample file names, the double dot paired with the uncommon .wsf extension creates the illusion that the file name was abbreviated, truncated, or shortened by the system because it was too long:

  • profile-d39a..wsf
  • profile-e3de..wsf
  • profile-e7dc..wsf
  • profile-f8d..wsf
  • profile-fb50..wsf
  • spreadsheet_07a..wsf
  • spreadsheet_1529..wsf
  • spreadsheet_2c3b..wsf
  • spreadsheet_36ff..wsf
  • spreadsheet_3a8..wsf

Some might look at the sample file names and assume they were originally a long unique string of random letters and numbers, such as a transaction ID, receipt number, or even a user ID:

  • profile-d39as1u3e8k9i3m4wsf
  • profile-e3dee1uwl8s10f3m4wsf
  • profile-e7dc4d1u3e83m4wsf
  • profile-f8dsdwsfe8k4i38wsf
  • profile-fb50s1u3l8k9i3m4wsf
  • spreadsheet_07as133e3k9i3e4wsf
  • spreadsheet_1529s15se8f9i3o6wsf
  • spreadsheet_2c3bs1u5dfk9i3m6wsf
  • spreadsheet_36ffs1ure8koei3d5ws
  • spreadsheet_3a8s1udwsf8s9i323wsf

However, this is not the case. These are script files containing malicious code that can harm your system.

Underneath the WSF

A Windows Script File (WSF) is a text document containing Extensible Markup Language (XML) code. It incorporates several features that offer increased scripting flexibility. Because Windows script files are not specific to a script language, the underlying code can be either JScript or VBScript, depending on the language declaration in the file. The WSF acts as a container.
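For illustration, here is a minimal, benign .wsf skeleton (our example, not Nemucod code); the language attribute on the script element determines whether the body is treated as JScript or VBScript:

<job id="Example">
  <script language="JScript">
    WScript.Echo("running as JScript inside a WSF container");
  </script>
</job>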

Underneath the WSF is the same typical Nemucod JScript code.


Figure 3: Nemucod code inside WSF: has encrypted code and the decryption is written under @cc_on (conditional compilation)

 

This Nemucod version leverages the @cc_on (conditional compilation) statement, which can help it evade AV scanner detection. It tricks AV scanners into treating the code as part of a comment, preventing them from interpreting it as executable code.

Upon code decryption, the following URLs – where the malware payload is being hosted – are revealed:

  • hxxp://right-livelihoods.org/rpvch
  • hxxp://nmfabb.com/rgrna1gc
  • hxxp://www.fabricemontoyo.com/v8li8

Recent spam campaign and trends

The latest Nemucod telemetry for the past 15 days shows that it has constantly been active, although there haven’t been any huge spikes.


Figure 4: Daily detection trend for Nemucod. These are the unique machine encounters per day

 


Figure 5: Geographic distribution of Nemucod. Data taken from July 3 to July 18, 2016

 

Other than the ..wsf and @cc_on techniques, we’ve also seen different and old tricks used as part of its social engineering tactics. These include, but are not limited to:

  • Double extensions (for example, .pdf.js)
  • Invoice, receipt, and delivery related file names such as DHL, FedEx delivery, and so forth

Nemucod infection chain

Nemucod infection chain showing spam email distributing WSF which downloads and runs malware

Just like the Nemucod campaigns before this, the malware downloader payload includes ransomware, such as:

Mitigation and prevention

To avoid falling prey to this new Nemucod malware campaign:

Francis Tan Seng and Alden Pornasdoro
MMPC

Updated inbox component in Windows Server 2012 R2 Essentials for client connector


[This post comes to us courtesy of Schumann GE from Product Group and Sandeep Biswas from Global Business Support]

We are happy to announce the release of the fix for the client-side issues caused by the Windows 10 feature upgrade that were discussed in the following SBS blog post:

https://blogs.technet.microsoft.com/sbs/2016/01/22/windows-10-feature-upgrade-breaks-client-connector-for-window-server-2012-r2-essentials-windows-server-2012-essentials-and-windows-small-business-server-2011-essentials/

The inbox fix for Windows Server 2012 R2 has been included with the following update rollup:

https://support.microsoft.com/en-in/kb/3172614

Note: This is an optional update and will be promoted to a mandatory one in the next update cycle.

Microsoft Authenticator – Coming August 15th!


Howdy folks,

Today we’re trying something different and sharing news about an upcoming release. I really prefer to announce new capabilities when you can actually try them out for yourself! But in this case, many of our largest enterprise customers need some time to plan for this, so we’re sharing the news early.

On August 15th, we will start releasing the new “Microsoft Authenticator” apps in all mobile app stores. This new app combines the best parts of our previous authenticator apps into a new app which works with both Microsoft accounts and Azure AD accounts.

As many of you know, we’ve had separate authenticator apps for Microsoft account and Azure AD for quite a while – the Azure Authenticator for enterprise customers and the Microsoft account app for consumers. With the new Microsoft Authenticator, we’ve combined the best of both into a single app that supports enterprise and consumer scenarios.

Here are some of the new benefits you will see in the app updates:

  • User experience refresh. We’ve made the app experience incredibly simple while maintaining the highest level of security.
  • Best-in-breed MFA experience through one-click push notifications. You only need to click the “approve” button in the notification to complete your login. (And in most cases, you won’t even need to open the app to complete the approval.)
  • Support for wearables. You can use an Apple Watch or Samsung Gear device to approve MFA challenges.
  • Fingerprints instead of passcodes. We’ve added support for fingerprint-based approvals on both iPhone and Android.
  • Certificate-based authentication. Support for enterprise customers to sign in through certificates instead of passwords.

This new app will be delivered as an update to Azure Authenticator. Existing accounts you already have in your Azure Authenticator app will be automatically upgraded. And users of our Microsoft account Android app will get a message prompting them to download the new app.

We’re just getting started on this new app! Now that we’ve finished consolidating into a single code base, we’re expecting to deliver new improvements at a very rapid pace. So, stay on the lookout for this cool new app, and let us know what you think. If you are an enterprise customer, this is a great time to start updating your documentation to direct employees to the new app!

And as always, we’d love to receive any feedback or suggestions you have!

Best Regards,

Alex Simons (Twitter: @Alex_A_Simons)

Director of Program Management

Microsoft Identity Division

Top 3 sessions on IT management & security at Microsoft Ignite


Microsoft Ignite, which is set to take place in Atlanta on September 26-30, gives you five days of hands-on learning, industry insights, and direct access to product experts. It promises to be a fun and action-packed five days. If you focus on hybrid cloud IT management, here are three sessions we recommend you don’t miss:

  • Take your management and security strategy to the cloud
    In this session we will explore how you can tackle multiple challenges facing IT operations in the cloud-first world. Hear experts at Microsoft talk about how you can rethink your management and security strategy to handle these challenges across a hybrid cloud. Armed with vision and strategy, you will be able to bridge the hybrid divide by adopting a set of tools for management and security designed for the cloud-first era.
  • Take advantage of new capabilities in System Center 2016
    System Center makes it possible for you to run your IT operations at higher scale and drive more value for your business. System Center 2016, which brings a new set of capabilities that integrate with our cloud management tools to help you manage your IT operations in the cloud-first era, will be launched at Microsoft Ignite! Come learn about how you can take advantage of System Center 2016 and the new business value that it promises to unlock.
  • Assess security posture of your datacenter in under one hour
    Join this session for an exciting live demo where we show you how to go from zero to security hero in just under one hour. Using familiar and new tools included in Microsoft Operations Management Suite Security, learn how you can detect threats, perform security investigations, and protect workloads, servers, and users without any prior security knowledge.

We hope to see you at all of these sessions and more at Microsoft Ignite. If you haven’t already done so, please register to attend. Don’t forget to join us for the IT management pre-day training! Pre-day trainings are available as an add-on to your Ignite conference pass.

Don’t miss the launch of Windows Server and System Center 2016 at Ignite


Recently we announced our plans to launch Windows Server 2016 and System Center 2016 at the Ignite Conference in Atlanta on September 26-30. We hope you can join us for the fun! We are working on a full set of sessions, including guest appearances by customers and MVPs who have been working closely with our engineering team to test and refine a ton of new capabilities and innovations. The event is nearly sold out, so if you plan to join us, please register for Ignite soon!

In the meantime, check out the What’s new in Windows Server 2016 video on the Microsoft Virtual Academy. This is a free training resource to help customers and partners get an overview of the new capabilities. And when you are ready to try them out, download the technical preview.

We also have a lot of great content on our Windows Server 2016 page, including the on-demand Ten reasons you’ll love Windows Server 2016 webcast with Jeff Woolsey.

For an update on System Center 2016, watch this Microsoft Mechanics video and check out our System Center 2016 webpage. You can also download and try out the System Center 2016 technical preview.

We look forward to celebrating our launch with everyone at Ignite!

ADFS: Excluding a Specific User Group from MFA


Hi there, JJ Streicher-Bremer back again, this time talking about ADFS and multi-factor authentication. I needed to configure an environment where everyone was required to use multi-factor authentication _except_ for folks in a specific AD group.

The ADFS 3.0 MFA configuration GUI offers a simple way to add users and groups in order to enforce the use of multi-factor authentication for them.

So if I configured my ADFS to require MFA for all domain users, how might I exclude a set of users from this requirement?

What we don’t have via the GUI is an easy way to _exclude_ a user or group of users from the requirement of MFA.

The good news is there is a fairly simple way to make this happen, and we get to use PowerShell to do it!

First step is to define a few things:

  1. The default group that holds users who will _get_ MFA. In my case I’m using “Domain Users” because I want to have everyone using MFA for authentication
  2. The group that will specify the users who will be excluded from using MFA. In my case I created a group called “No MFA for these users” in my AD

When I started down this path I was pointed to an amazing blog post by Ramiro Calderon here:

https://blogs.msdn.microsoft.com/ramical/2014/01/30/under-the-hood-tour-on-multi-factor-authentication-in-adfs-part-1-policy/

Ramiro does a really great job of describing how the claims engine and pipeline work along with good descriptions on the claims themselves. This really jumpstarted the whole process. Thanks Ramiro!

After reading (and re-reading) the above blog post I determined I needed to come up with a rule that said something like this:

If a user is a member of "Domain Users" but _not_ a member of the group "No MFA for these users", enforce the use of multi-factor authentication. Since the ADFS claims rule engine does not understand group (or user) names directly, we have to convert those into SIDs. That can be accomplished via ADUC or through PowerShell (Get-ADGroup "Domain Users").

When we break down the pieces we have

  • A member of "Domain Users" becomes: [Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value == "S-1-5-21-3755518198-905394505-1163020732-513"]
  • A member of "No MFA for these users" becomes: [Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value == "S-1-5-21-3755518198-905394505-1163020732-15115"]
  • Enforce the use of MFA becomes: Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod", Value = "http://schemas.microsoft.com/claims/multipleauthn"

Putting it all together our rule would look like:

exists([Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value == "S-1-5-21-3755518198-905394505-1163020732-513"]) && NOT exists([Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value == "S-1-5-21-3755518198-905394505-1163020732-15115"]) => issue (Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod", Value = "http://schemas.microsoft.com/claims/multipleauthn");

Now that I knew the claims rules I wanted to use I just had to figure out how to get them into ADFS. In the ADFS GUI there is no way to directly edit the rules so PowerShell to the rescue! Since I was using PowerShell I started using variables. My rule then looked like this:

$GroupAddMFA = Get-ADGroup "Domain Users"

$GroupAddNoMFA = Get-ADGroup "No MFA for these users"

$GroupMfaClaimTriggerRule = 'exists([Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value == "' + $GroupAddMFA.SID.Value + '"]) && NOT exists([Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value == "' + $GroupAddNoMFA.SID.Value + '"]) => issue (Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod", Value = "http://schemas.microsoft.com/claims/multipleauthn");'

I wanted to add these rules to the relying party trust for O365, so I put that object into a variable too.

$rp = Get-AdfsRelyingPartyTrust -Name "Microsoft Office 365 Identity Platform"

Before I actually modified the rules I had to remove the groups from the “Global Multi-Factor Authentication Policy”, but leave the checkbox enabling the MFA provider.

Now I had to add my ruleset to the “Additional Authentication Rules” for my relying party trust:

Set-AdfsRelyingPartyTrust -TargetRelyingParty $rp -AdditionalAuthenticationRules $GroupMfaClaimTriggerRule
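As a quick sanity check, you can read the rule back from the relying party trust to confirm it landed:

(Get-AdfsRelyingPartyTrust -Name "Microsoft Office 365 Identity Platform").AdditionalAuthenticationRules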

Once I set this up, all the users in my lab domain were forced to use MFA, except for the users in the "No MFA for these users" group.

Even better news is that in Server 2016 we should be able to accomplish all this via the “Access Control Policies” GUI directly:

I hope this blog post helps explain, by example, some simple ADFS claims rules and how to load those rules into ADFS 3.0, as well as giving you something to look forward to in Server 2016.


Storage IOPS update with Storage Spaces Direct


Hello, Claus here again. I played one of my best rounds of golf in a while at the beautiful TPC Snoqualmie Ridge yesterday. While golf is about how low you can go, I want to give an update on how high you can go with Storage Spaces Direct.

Once again, Dan and I used a 16-node rig attached to a 32 port Cisco 3132 switch. Each node was equipped with the following hardware:

 

  • 2x Xeon E5-2699 v4, 2.3 GHz (22 cores / 44 threads)
  • 128GB DRAM
  • 4x 800GB Intel P3700 NVMe (PCIe 3.0 x4)
  • 1x LSI 9300 8i
  • 20x 1.2TB Intel S3610 SATA SSD
  • 1x Chelsio 40GbE iWARP T580-CR (Dual Port 40Gb PCIe 3.0 x8)

Using VMFleet we stood up 44 virtual machines per node, for a total of 704 virtual machines. Each virtual machine was configured with 1vCPU. We then used VMFleet to run DISKSPD in each of the virtual machines with 1 thread, 4KiB random read with 32 outstanding IO.
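For reference, here is a rough sketch of a DISKSPD command line matching that workload (ours, not the exact invocation from the run; the test file path is illustrative). -b4K sets 4KiB blocks, -r selects random I/O, -t1 uses one thread, -o32 keeps 32 I/Os outstanding, -w0 makes it 100% reads, -Sh disables software and hardware caching, and -d60 runs for 60 seconds:

diskspd.exe -b4K -r -t1 -o32 -w0 -Sh -d60 C:\run\testfile.dat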

IOPS

As you can see from the above screenshot, we were able to hit ~5M IOPS in aggregate into the virtual machines. This is ~7,000 IOPS per virtual machine!

We are not done yet! If you are attending Microsoft Ignite, please stop by my session “BRK3088: Discover Storage Spaces Direct, the ultimate software-defined storage for Hyper-V” and say hello.

Let us know what you think.

Dan & Claus

Announcing Windows Management Framework (WMF) 5.1 Preview


Today we are pleased to announce that the Windows Management Framework (WMF) 5.1 Preview release is now available on the Download Center.

WMF provides users with the ability to update previous releases of Windows Server and Windows Client to the management platform elements released in the most current version of Windows. This enables a consistent management platform to be used across the organization, and eases adoption of the latest Windows release.

WMF 5.1 Preview includes the PowerShell, WMI, WinRM, and Software Inventory and Licensing (SIL) components that are being released with Windows Server 2016. WMF 5.1 can be installed on Windows 7, Windows 8.1, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2, and provides a number of improvements over WMF 5.0 RTM, including:

  • New cmdlets: local users and groups; Get-ComputerInfo
  • PowerShellGet improvements include enforcing signed modules, and installing JEA modules
  • PackageManagement added support for Containers, CBS Setup, EXE-based setup, CAB packages
  • Debugging improvements for DSC and PowerShell classes
  • Security enhancements including enforcement of catalog-signed modules coming from the Pull Server and when using PowerShellGet cmdlets
  • Responses to a number of user requests and issues

Detailed information on all the new WMF 5.1 features and updates, along with installation instructions, is available in the WMF 5.1 release notes.
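For example, once the preview is installed you can confirm the version and exercise a couple of the new cmdlets (a quick illustrative check):

$PSVersionTable.PSVersion    # should report 5.1
Get-ComputerInfo | Select-Object WindowsProductName, OsVersion    # new in WMF 5.1
Get-LocalUser    # one of the new local users and groups cmdlets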

Please note:

  • WMF 5.1 Preview requires the .NET Framework 4.6, which must be installed separately. Instructions are available in the WMF 5.1 Release Notes Install and Configure topic.
  • WMF 5.1 Preview is intended to provide early information about what is in the release, and to give you the opportunity to provide feedback to the PowerShell team, but is not supported for production deployments at this time.
  • WMF 5.1 Preview may be installed directly over WMF 5.0.
  • It is a known issue that WMF 4.0 is currently required in order to install WMF 5.1 Preview on Windows 7 and Windows Server 2008 R2. This requirement is expected to be removed before the final release.
  • Installing future versions of WMF 5.1, including the RTM version, will require uninstalling the WMF 5.1 Preview.

We welcome and appreciate any reports of issues you encounter, particularly those that are new to this release. Please file them in UserVoice at https://windowsserver.uservoice.com/forums/301869-powershell, and include “WMF5.1” in the title or description.

Thank you

PowerShell Team

New monitoring features for network performance, backup


Monitoring the health of your systems and operations is an essential component of effective IT management. IT environments today are more distributed and hybrid than ever, and business applications often run across multiple points in the datacenter or even across multiple locations. With such a distributed environment, network connectivity is critical to keep the business functioning smoothly. Additionally, efficient enterprise-quality backup solutions are necessary to avoid business interruptions in this hyper-connected world. Having the right level of visibility and control into your hybrid environment to monitor performance and latency becomes a must-have capability. Today, two new features in the Operations Management Suite will be available for public preview that will enhance functionality for monitoring network performance and backup deployments to help you meet business needs.

Insights for network performance and connectivity

Monitoring the quality of network connectivity between your datacenters, remote office sites, or even critical workloads running line of business applications is a challenge in most IT environments. Conventional network monitoring solutions provide very little information about performance of the network as these are generally designed to monitor health of individual network devices. The Network Performance Monitor technology, part of the Operations Management Suite, offers near real-time monitoring of network performance parameters such as loss and latency. It enables timely detection of network performance issues and localizes the source of the problem to a particular network segment or device. Using historical trend graphs, you can easily detect transient network performance issues. An interactive topology graph allows you to visualize the hop-by-hop network routes and determine the source of the problem. Now you can confidently tell if a network issue is affecting your application performance without having to rely on your network team.

NPM topology

>>> Try it now: Create a free Operations Management Suite account

Consolidated view of backup status

Many companies today are taking advantage of cloud backup solutions to securely protect enterprise data and applications without the hassle and complexity of on-premises solutions. Azure Backup, part of Operations Management Suite, can further reduce the time you spend managing and maintaining backup deployments with this latest feature, which centralizes monitoring and alerting into a single dashboard. Now, rather than having to go to each server to check the status of backups, you can see backup jobs and alerts across cloud and on-premises deployments in one place. You can also configure alerts for backup failures and view the alerts or subscribe to email notifications to keep yourself informed as issues arise.

Increased agility for your IT operations

These two new monitoring features support the goal of the Microsoft Operations Management Suite (OMS) to increase agility for IT operations. OMS leverages Microsoft’s deep operations management experience to streamline IT management across any hybrid cloud. With capabilities spanning analytics, automation, configuration, security, backup and disaster recovery, OMS is a cloud-based management platform designed for speed and agility. As a SaaS solution, OMS can be set up within minutes, giving you immediate visibility across your environment. Now you can confidently tell if a network issue is affecting your service so you can immediately provide insight to your network team.

For more information on these new features, please visit the Operations Management Suite documentation webpage or sign up for a free trial. Follow us on Twitter @MSCloudMgmt.

Azure Active Directory B2C is now generally available in North America


Today, I’m very excited to announce the General Availability of Azure AD B2C in North America.

Azure AD B2C is a cloud identity service that enables you to stay connected with your consumers in a more secure, reliable and cost-effective manner compared to on-premises systems. Azure AD B2C is built on Azure Active Directory, the highly secure cloud identity platform that handles billions of authentications per day. Azure Active Directory B2C can be easily integrated across mobile and web platforms, so your consumers can log on to all your applications through fully customizable experiences, by using their existing social accounts, or by creating new credentials.

To walk you through what’s available today, Swaroop Krishnamurthy from our Program Management team wrote an excellent summary here.

We’re looking forward to receiving any feedback or suggestions you have!

Regards,

John Justice,

Director of Program Management – Microsoft Identity Division

#AzureAD Conditional Access: Per app MFA and Network Location based policies are GA!


Howdy,

Great news today! The Azure AD Conditional Access per-app MFA and network location policies are GA! We have seen incredible demand for these capabilities from customers, so I’m completely stoked that they are ready for broad production use!

Of note, quite a few of the customers we’ve been working with directly in the public preview are already using these policies in production and getting a ton of value from them. The Conditional Access policy engine is built to allow admins to maintain control in a cloud-first, mobile-first world. Conditional Access policy evaluation can be based on device health, MFA, location, and detected risk. You can learn more about Conditional Access here.

Today’s announcement moves the features currently in the Conditional Access public preview to GA, enabling the following policies to be set per-application:

  • Always require MFA
  • Require MFA when not at work
  • Block access when not at work.

The admin experience for configuring conditional access policies for an application is super simple. With only a few clicks you can configure your policy and select which users you want it to apply to:

Once a policy is configured, it will be automatically applied when a user attempts to sign in to an application. For example, let’s say an admin has configured a conditional access policy requiring MFA for Exchange Online. When the user goes to the Office 365 portal, they will be seamlessly signed in:

But when they click on the “Mail” tile to access their email, the user will be challenged to complete an MFA challenge:

The MFA and network location policies are applied across all devices. For example, admins can create a Conditional Access policy for SharePoint that requires users to be on their corporate network to access the service. If a user tries to access SharePoint from their iPhone while off the corporate network, authorization fails and they get blocked like this:

And best of all, conditional access works for browser apps, rich client apps, phone apps and even on-premises apps being accessed using our Azure AD Application Proxy!

Teams across Microsoft have worked together on Conditional Access and to enable these policies across all the apps and services listed here. Most notably, per-app access can be set on the following services:

  • Microsoft Office 365 Exchange Online
  • Microsoft Office 365 SharePoint Online
  • Dynamics CRM
  • Microsoft Office 365 Yammer
  • All of the 2,600+ SaaS applications from the Azure AD application gallery
  • On-premises apps registered with Azure AD Application Proxy
  • LOB apps registered with Azure AD

Many customers are already using MFA and location rules

Over the last few months, we’ve been working closely with our early adopter customers and Microsoft’s own IT department to help them deploy Conditional Access in production. We’ve received a ton of positive feedback from them on how the extra security provided by these policies gave them the confidence to accelerate their adoption of cloud services:

Using Azure AD conditional access policies for OneDrive, SharePoint, and Exchange Online, we were able to adopt Office 365 while protecting critical company data, choosing which groups of users would have access to which applications and from which locations.

-Obotech

Conditional access gave Microsoft IT the granularity we needed to tightly control our rollout of MFA for email.  Being able to tightly coordinate the technical deployment with our internal communication/education program was key to delivering a great user experience and more security.

Microsoft IT

We love to see the value this is bringing to organizations, and are excited to make it available to all our customers!

Licensing

Conditional Access is an Azure AD Premium feature, requiring per-user licenses for users accessing apps that have policies applied. To help you discover which users are accessing such apps, we’ve added an unlicensed user report, which you can learn about here. The report lets you see any unlicensed usage, showing the username and the applications being accessed, to help you assign and make the best use of your licenses.

Try it out

If you haven’t already tried the Conditional Access preview, now is the time to dive in and learn more about this important capability. It really is the secret ingredient in Azure AD, and you’ll see us make some huge additions to this area in the next 120 days!

To get you started ASAP, we’ve prepared a set of guides for you here. And it really is easy: if you already have Azure AD Premium, you can have your first set of policies ready to pilot within 5-10 minutes of reading this article!

Looking forward to any feedback or suggestions you have!

Best regards,

Alex Simons (Twitter: @Alex_A_Simons)

Director of Program Management

Microsoft Identity Division

Improvements to Microsoft Azure Management Pack


Hello folks

We have an updated version of the System Center Management Pack for Microsoft Azure (Technical Preview). Version 1.3.22.0 can be downloaded from https://www.microsoft.com/en-us/download/details.aspx?id=50013 and can be installed as an upgrade on top of the previous version, 1.3.18.0.

This version of the MP pulls in Azure Application Insights alerts for web tests and supports Azure China subscriptions. In addition, several bugs are now fixed, such as performance and stability issues when monitoring very large subscriptions (1,000+ objects), issues related to Azure storage account state views and performance collection rules, and the Service Type page not displaying all instances in the Authoring wizard. The full list of updates is in the management pack guide available from the link above. Please try out the MP and let us know your feedback on our UserVoice website.

 

Ravi Chivukula | SCOM Program Manager | Microsoft
Get the latest System Center news on Facebook and Twitter
Main System Center blog: http://blogs.technet.com/b/systemcenter/
Operations Manager Team blog: http://blogs.technet.com/momteam/

Bare-Metal Machine using a WIM and WinPE


There are several ways to deploy Nano Server to a physical machine:

  1. Dual-boot a Nano Server .vhd or .vhdx
  2. PXE-boot a bare-metal machine and install Nano Server from WDS using a .vhd or .vhdx
  3. PXE-boot a bare-metal machine and install Nano Server from WDS using a .wim file
  4. Boot a bare-metal machine into WinPE and deploy Nano Server using a .wim file

In this blog, we’ll talk about the last one. For the first two, check out Getting Started with Nano Server and How to use WDS to PXE Boot a Nano Server VHD.

To build the .wim image, please follow the instructions in Getting Started with Nano Server and use New-NanoServerImage. Make sure to use a .wim extension with the -TargetPath parameter.
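For example, a sketch of building the .wim (the parameter set follows the Technical Preview 5-era Getting Started guide and may differ in your build; the media path and computer name are illustrative):

New-NanoServerImage -MediaPath D:\ -BasePath .\Base -TargetPath .\ServerDatacenterNano.wim -DeploymentType Host -Edition Datacenter -ComputerName NANO-01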

Boot into WinPE and make sure that the .wim file you just created is accessible from WinPE. (I copied the .wim file to a bootable WinPE image on a USB thumb drive).

Once WinPE boots, use diskpart.exe to prepare the target machine’s hard drive. Enter the following diskpart commands (modify accordingly, if you’re not using UEFI & GPT):

Note that these commands will wipe out all data from the hard drive!!!

Diskpart.exe
Select disk 0
Clean
Convert GPT
Create partition efi size=100
Format quick FS=FAT32 label="System"
Assign letter="s"
Create partition msr size=128
Create partition primary
Format quick FS=NTFS label="NanoServer"
Assign letter="n"
List volume
Exit

Apply the Nano Server image (adjust the path of the .wim file):

Dism.exe /apply-image /imagefile:.\ServerDatacenterNano.wim /index:1 /applydir:n:\
Bcdboot.exe n:\Windows /s s:

Remove the DVD media or USB drive and reboot your system using:

Wpeutil.exe reboot

That’s it. Looking forward to your feedback.

 


 


Update 1607 for Configuration Manager Technical Preview – Available Now!


Hello everyone! Update 1607 for Configuration Manager Technical Preview has been released. Technical Preview releases give you an opportunity to try out new Configuration Manager features in a test environment before they are made generally available. We’re looking forward to hearing what you have to say about this month’s new preview features, which include:

  • Customizable branding for end-user dialogs: End-user dialogs that are opened from Software Center or taskbar notifications now show the same organization name, color, and icon branding as Software Center. The administrator workflow for specifying branding settings remains unchanged.
  • Manage duplicate hardware identifiers: Add known duplicate MAC addresses or SMBIOS IDs to be ignored hierarchy-wide for PXE boot and client registration.
  • Microsoft Operations Management Suite (OMS) connector: Sync data such as collections from ConfigMgr to OMS.
  • Windows 10 Edition Upgrade: Upgrade PC clients running Windows 10 Professional edition to Windows 10 Enterprise edition with just a product key; no reimaging required.

Update 1607 for Technical Preview is available directly in the Configuration Manager console. If you want to install Configuration Manager Technical Preview for the first time, the installation bits (currently based on Technical Preview 1603) are available on TechNet Evaluation Center.

We would love to get your thoughts about the latest Technical Preview! To provide feedback or report any issues with the functionality included in this Technical Preview, please use Connect. If there’s a new feature or enhancement you want us to consider including in future updates, please use the Configuration Manager UserVoice site.

Thanks,

The System Center Configuration Manager team

Configuration Manager Resources:

Documentation for System Center Configuration Manager Technical Previews
Documentation for System Center Configuration Manager
System Center Configuration Manager Forums
System Center Configuration Manager Support
System Center Configuration Manager Technical Preview 5 (v1603)

#Azure AD Mailbag: Hybrid Identity and ADFS Part 2


Hey there, Ramiro Calderon back for another post on Hybrid Identity and ADFS. This is part 2 of last week’s post, which can be found here. Let’s pick up right where we left off!

 

Question 1:

Both AD FS and Azure AD give me SaaS App SSO. What should I use when?

 

Answer 1:

Azure AD is an Identity and Access Management (IAM) platform that brings additional capabilities for Software as a Service (SaaS) applications beyond Single Sign-On (SSO). In general, we recommend using Azure AD for SaaS app SSO configuration because it provides:

  1. Automated provisioning for SaaS applications that support it (Salesforce, ServiceNow, etc.). This provides a simple and secure way to manage identities in various SaaS applications.
  2. User friendly portals https://myapps.microsoft.com/ for information workers (IW) to discover and launch applications they have access to. Similarly, SaaS apps are available for users from the office portal (https://portal.office.com), which is a great option for users who are already used to it.
  3. The application gallery enables simpler configuration wizards optimized for each application. You can also “bring your own” federated applications that support SAML protocol or WS-Federation.
  4. Azure AD provides built-in reports for SaaS applications, including (a) logins to the applications, and (b) anomalous logins detected using machine learning powered by the cloud backend. Without Azure AD, you need additional logic and complex parsing of the IdP audits to replicate (a). The machine learning models and techniques behind (b) are not cost effective to reproduce on premises.
  5. Azure AD Identity Protection provides risk-based conditional access, evaluating login risk from multiple data sources using machine learning at cloud scale, in addition to supporting conditional access based on location and MFA.
  6. Common management control plane with:
    1. Microsoft services such as Office 365
    2. Password SSO application, in addition to federation
    3. Internal applications using Azure AD Application proxy
    4. Custom Azure AD line of business (LOB) applications
  7. Independent token signing certificates per app, reducing the rollover impact.
  8. When combined with group management features, Azure AD provides more options to assign access to the SaaS apps:
    1. Delegate a business owner to manage user assignment
    2. Allow users to self-service requests for access with optional approval process
    3. Provides attribute based control using groups with dynamic membership

In an on-premises deployment, the capabilities above are usually only available through 3rd-party solutions.

One more benefit: setting up Azure AD as the trust decouples the application from the on-premises credential approach in your tenant, which gives you flexibility to move from federation to Azure AD password hash sync and reduce on-premises infrastructure in the future.

In addition to the functional reasons above, there are also some practical considerations:

  1. As a cloud service, Azure AD is continuously releasing fixes, new features and enabling new scenarios for administrators, developers, and end users.
  2. When apps in the gallery change or break their APIs, Azure AD engineers wake up in the middle of the night to fix it, not you.
  3. The SaaS application gallery is constantly updated with new applications. As a customer, you can submit requests to the  Azure AD team to add new applications.
  4. You can keep adding applications without worrying about upgrading your on-prem capacity.

 

So, when should you keep the SaaS application RP trusts in AD FS? Simply, when Azure AD does not support your scenario. There are some advanced use cases that can only be implemented with AD FS, such as:

  1. Advanced claim transformations such as transformation of attributes, regular expressions, or claim extractions from LDAP, SQL Server, or custom attribute stores
  2. Token customizations such as SHA256 signatures, specific NameID policies, etc.
  3. Support for SAML 1.1 tokens for WS-Federation applications.
  4. Custom triggering of multifactor authentication rules that are not supported by conditional access.
  5. Custom authorization logic that can’t be modeled as a security group or conditional access policies.

As mentioned above, Azure AD is constantly releasing more capabilities to close the SSO functional gaps with AD FS, especially around conditional access capabilities.

 

Question 2:

I am confused. On premises Web Application Proxy (WAP) and Azure AD Application Proxy give me internal application publishing. What should I use?

Answer 2:

In general, we recommend using Azure AD Application Proxy to publish on-premises applications. As I mentioned in my previous answers, Azure AD provides a lot of capabilities beyond endpoint publishing:

  1. User friendly portals https://myapps.microsoft.com/ for information workers (IW) to discover and launch applications they have access to. Similarly, SaaS apps are available for users from the office portal (https://portal.office.com), which is a great option for users who are already used to it.
  2. Azure AD Provides built-in reports for applications including: Logins to the applications, and anomalous logins using machine learning powered by the cloud backend.
  3. Common management control plane with:
    1. Microsoft services such as Office 365
    2. SaaS applications
    3. Custom Azure AD line of business (LOB) applications
  4. When combined with group management features, Azure AD provides more options to assign access to the internal apps:
    1. Delegate a business owner to manage user assignment
    2. Allow users to self-service requests for access with optional approval process
    3. Provides attribute based control using groups with dynamic membership

In an on-premises deployment, the capabilities above are usually only available through 3rd-party solutions.

Azure AD also offers conditional access policies based on network location, device state (in preview), and risk (in preview). The latter is uniquely possible in the cloud, given the scale of the risk evaluation and signal processing engine, which applies machine learning to multiple data sources.

Additional capabilities specific to internal applications include:

1. Azure AD Application Proxy connectors simplify the on-premises footprint. No DMZ or inbound ports are needed, since there is only outbound traffic.

2. Azure AD provides SSL termination, which reduces exposure to vulnerabilities like Heartbleed.

3. Azure AD ensures that your on-premises servers are not exposed directly to the internet, so port scanners and other similar tools are not a threat.

4. With pre-authentication, Azure AD ensures that only authenticated requests are sent to your on-premises servers, mitigating DDoS attacks.

5. Alternate login ID is very straightforward to configure for Kerberos applications.

So, when should you publish your internal applications with WAP? Simply, when Azure AD does not support your scenario. There are some advanced use cases where WAP is a better fit, such as:

  1. If you already have hundreds or thousands of applications published, WAP provides better scripting capabilities.
  2. Public or consumer facing websites
  3. WAP in Windows Server 2016 will support additional pre-authentication methods such as Basic and Client Certificate Authentication
  4. WAP in Windows Server 2016 will support publishing endpoints based on wildcards (for example: https://*.contoso.com)

 

Question 3:

So, if I move my SaaS apps and my internal apps to Azure AD, can I shut AD FS down? What are the tradeoffs if I move from federation to password hash sync?

 

Answer 3:

AD FS has a very important place in an overall identity solution to meet specific use cases that might be part of your requirements:

  1. Support for authentication methods other than username/password. This includes:
     a. Integrated Windows authentication (IWA) for information workers using domain-joined computers on the internal network.
     b. Support for smart card user authentication.
     c. Use of 3rd-party MFA providers such as RSA SecurID, Vasco, YubiKey, etc.
  2. Support for auto-registration of Windows 7 and 8.1 domain-joined devices for device-based conditional access.

 

Question 4:

I have MFA on premises with AD FS, but Azure AD attempts to do MFA in the cloud again. How can I fix that?

 

Answer 4:

Azure AD recognizes MFA performed on premises based on claims from the identity provider (AD FS in your case). To set this up with AD FS, you need to do the following:

  1. Configure the domain authentication properties to indicate MFA is supported on premises:

image
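With the MSOnline PowerShell module, that setting corresponds to the following (the domain name is illustrative):

Set-MsolDomainFederationSettings -DomainName contoso.com -SupportsMFA $true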

2. Then, configure AD FS to pass through the authentication method references claim in the issuance transform rules:

image

The underlying claims language is:

image
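Since the screenshot is not legible here, this is the standard pass-through rule for the authentication method references claim (a reconstruction, not a copy of the screenshot):

c:[Type == "http://schemas.microsoft.com/claims/authnmethodsreferences"] => issue(claim = c);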

3. Then, any request that Azure AD deems to require MFA (for example, due to conditional access policies) will result in a request like this when using WS-Federation. Note the wauth parameter:

image

 

4. Azure AD will then look for the authentication method references claim among the highlighted claim types and values, which is sent to Azure AD by the rule created in step 2.

image

 

Note that steps 1, 3, and 4 apply to other identity providers as well. If you have the same requirements with a 3rd-party identity provider, consult with your provider to produce the claims described here.

 

Question 5:

I have MFA on prem with AD FS, and want to move to Azure MFA. How can I phase in my users?

 

Answer 5:

This is a tricky one, because "supports MFA on premises" is a per-domain setting, and you can’t slice it by groups of users. However, here’s the trick:


* The "SupportsMFA" flag tells Azure AD to send the request back to AD FS with the MFA request parameter.

* As a decoupled behavior, Azure AD will honor the authentication method references claim from on prem if it sees it.

 

In other words, if you issue the MFA claim on premises then Azure AD will use it.

Here’s an example. Let’s say your MFA requirement is to prompt for multifactor authentication when users are outside the corporate network. It is then possible to phase users from on-premises MFA to Azure MFA as follows:

1. Create an on-premises security group called AzureMFAUsers

2. In AD FS, create a rule to request MFA for requests coming from outside the corporate network for users who are not members of AzureMFAUsers

image

3. In Azure AD, disable the “SupportsMFA” property of the federated domain:

image

4. In Azure AD, set the MFA service to skip MFA for on-premises requests for federated users:

image

5. Then, make sure the members of the “AzureMFAUsers” group have Azure MFA set to “Enforced” in the cloud. You can do this in the management portal, or at scale with a quick PowerShell function as follows:

 

function Set-AzureADEnforcedMFAToGroup
{
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory=$true)]
        [string]
        $GroupName
    )

    # Resolve the group and enumerate its members (MSOnline module)
    $groupId = Get-MsolGroup -SearchString $GroupName | Select-Object -ExpandProperty ObjectId
    $members = Get-MsolGroupMember -GroupObjectId $groupId

    # Build the strong authentication requirement; valid State values are "Enabled" and "Enforced"
    $st = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
    $st.RelyingParty = "*"
    $st.State = "Enforced"
    $sta = @($st)

    # Stamp the requirement on every member of the group
    foreach ($member in $members)
    {
        Set-MsolUser -ObjectId $member.ObjectId -StrongAuthenticationRequirements $sta
    }
}
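For example, called against the group created in step 1:

Set-AzureADEnforcedMFAToGroup -GroupName "AzureMFAUsers"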

 

So, with the trick above you can play with the AD FS rules to phase users from on-premises MFA to Azure MFA.

 

We hope you’ve found this post and this series to be helpful. For any questions you can reach us at
AskAzureADBlog@microsoft.com, the Microsoft Forums and on Twitter @AzureAD, @MarkMorow and @Alex_A_Simons

-Ramiro Calderon and Mark Morowczynski

DSC Resource Kit Community Call August 3


We will be hosting a community call for the DSC Resource Kit 1-2PM on Wednesday, August 3 (PDT).
Call in to ask questions or give feedback about the DSC Resource Kit!

How to Join

Skype for Business

Join Skype Meeting
This is an online meeting for Skype for Business, the professional meetings and communications app formerly known as Lync.

Phone

+14257063500 (USA – Redmond Campus) English (United States)
+18883203585 (USA – Redmond Campus) English (United States)
Find a local number

Conference ID: 88745041

Agenda

The community call agenda is posted on GitHub here.

Windows Failover Cluster Troubleshooter Data Grab


This blog post is brought to you by eighteen-year veteran Microsoft Premier Field Engineer David Morgan.

Goal of this Post

Over the years my customers have asked what they should do first when they get a trouble ticket for a misbehaving Windows failover cluster. There are some fairly simple steps one can take first that provide a host of benefits during the troubleshooting process, such as:

  • Faster problem resolution
  • A successful and faster root cause analysis
  • Faster service response times from vendor support personnel
  • Data about the event and the surrounding environment, helpful in post-mortems, that can help prevent the same and other problems in the future
  • And more

This particular post isn’t about doing actual troubleshooting. Here I’m only going to go into the primary steps one should take before undertaking in-depth troubleshooting activities. Actual troubleshooting scenarios and details will follow in future posts where you’ll see why having captured these resources in the beginning can make your IT life a bit better.

Summary

  1. Immediately Capture all Cluster Logs
  2. Write a Very Detailed Description of the Problem
  3. Capture Microsoft Cluster Diagnostics Outputs
  4. Create a Cluster Validation Report

Detail

  • The most important task – immediately gather the cluster logs from all nodes.

If this is not done within ~72 hours (this varies), the data logged about your problem event will be overwritten when the log wraps. In almost all cases, if the cluster log is not available for the time of the event, a reliable root cause cannot be provided.

  • To capture a cluster log from each machine in the cluster and place all the files in a specific location execute either of the following commands:
    • PowerShell (recommended for 2012 & 2012 R2)
      • Get-ClusterLog -Destination "target-folder"
    • Cluster.exe (recommended for 2008 & 2008 R2)
      • Cluster.exe log /gen /copy:"target-folder"
        • Note: If you are using 2012 or 2012 R2, cluster.exe is a feature tool and must be added through the Add Roles & Features functions. Cluster.exe is planned to be deprecated in future releases.
  • At this time consider setting the cluster log level higher to gain more insight to the issue if it reoccurs:
    • Considerations:
      • Increasing the log level may affect overall system performance.
      • Increasing the log level will cause the log to wrap more frequently.
      • If the problem is one you can reproduce then:
        • Recommended for 2008 & 2008 R2
          • Determine the current cluster logging level
            • Cluster /prop:clusterloglevel
          • Increase the log level to 5
            • Cluster log /loglevel:5
          • Reproduce the issue
          • Capture the cluster logs
          • Reset the cluster log level to its default of 3
            • Cluster log /loglevel:3
        • Recommended for 2012 & 2012 R2 (see the consolidated sketch after this list)
          • Determine the current cluster logging level
            • Get-Cluster | FL ClusterLogLevel
          • Increase the log level to 5
            • Set-ClusterLog -Level 5
          • Reproduce the issue
          • Capture the cluster logs
          • Reset the cluster log level to its default of 3
            • Set-ClusterLog -Level 3
  • As soon as possible collect the following diagnostic results from the cluster.
      • You will need to log in using a Microsoft account such as live.com, outlook.com, Hotmail.com, etc.
    • Once you are logged in enter “Failover Cluster” in the search field.
    • Your search results should provide a link to the Windows FailoverCluster Diagnostic.
    • Click on the link Windows FailoverCluster Diagnostic and choose create.
    • Next, choose Download and save the file to some location or you can choose Run.
    • After executing the download choose:
      • Run now on this PC
        if the desktop you are on is one of the cluster nodes.
      • Save to run later on another PC
        if the desktop you are on is not a cluster node.
    • After executing the diagnostics package, you will be taken to a screen allowing you to select which nodes you wish to collect information from.
      • It is best to have diagnostics for all the nodes in the cluster. However, there may be reasons for you to choose only a subset and run the diagnostics tool more than once with different nodes in the collection.
        • The primary reason for this is that the tool will compress no more than 2GB of collected data. With very large clusters, it is easy to reach or surpass this threshold. If you run the tool against a large number of nodes, the collection is finished when the screen titled
          “Review the diagnostic results before you send the item”
          appears. Before choosing Next and compressing the data, check the temporary location where the captured files are kept and determine their total size (a quick sketch for this follows the list). If it is greater than 2GB, copy all the files to another location first, because when the tool fails due to the size limitation, the files in the temp location are deleted.

          The temporary file location is:

          • %WINDIR%\TEMP\SDIAG_{GUID} (where GUID represents a diagnostic execution)
      • Next choose a location to save the diagnostics output
      • A folder named Upload Results will be created that contains a compressed file with a .cab extension. Save the file Results….cab and delete the remaining files in the folder.
      • If you run into other issues this FAQ is extensive:
        • KB 2598970: Information about the Microsoft Automated Troubleshooting Services and Support Diagnostic Platform
  • Capture a Failover Cluster Validation Report
    • From within Failover Cluster Manager run the Cluster Validation Wizard and collect the results of all tests.
      • If there is no storage in your cluster that can be taken offline, the storage tests will not run. If your issue is likely a storage problem, the storage test data is important. The simplest way around this is to introduce a single free volume from the SAN to the cluster. The storage validation tests can then be run against this single disk and will return all storage test information, with the exception of specifics about the disks that are online when the storage tests run.
      • When completed the final .mht format report will be found in the following directory:
        • %windir%\cluster\reports
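As referenced in the list above, here is a consolidated sketch of the 2012/2012 R2 log-level workflow; the destination folder is an assumption, so adjust it to taste:

# Check the current cluster log level (the default is 3)
Get-Cluster | Format-List ClusterLogLevel

# Raise it around a repro; more verbose also means the log wraps faster
Set-ClusterLog -Level 5

# ...reproduce the issue, then pull the logs from every node...
Get-ClusterLog -Destination "C:\ClusterLogs" -UseLocalTime

# Restore the default
Set-ClusterLog -Level 3

And, relating to the 2GB compression limit discussed above, a quick way to total the size of the diagnostic temp folders before the tool compresses them:

# Sum the size of the SDIAG_{GUID} capture folders
Get-ChildItem "$env:WINDIR\TEMP\SDIAG_*" -Recurse -File |
    Measure-Object -Property Length -Sum |
    ForEach-Object { "{0:N1} GB" -f ($_.Sum / 1GB) }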

Finally, store all these files in case you later work with a Microsoft support engineer. You’ll be amazed at how much faster your support call can go if you already have this data collected and ready to upload to your support vendor.

Azure Stack APIs – Working directly with the Resource Manager API Layer (Technical Preview 1)


Introduction

To work with Azure’s Resource Manager, you have a number of options; for example, there are several SDKs that simplify development.

Note: I can’t stress enough that this is simply a model I use, mostly for validation, and there certainly are other choices depending on the project and its requirements. Again, this is for MAS TP1, which uses Azure AD, so this may also be different for you moving forward.

The beauty of cloud consistency is that these tools and patterns are also the way you develop against Azure Stack’s Resource Manager, keeping in mind that Azure Stack is in Technical Preview and not all APIs are available. This blog post offers another way, one that takes away the complexity of SDKs and PowerShell modules and lets you understand and learn the APIs as a RESTful service in their most foundational form.

Why work directly with APIs?

Besides letting you learn what’s happening under the covers, so to speak, working directly with the APIs can make debugging and translation to other languages easier. In PowerShell the obvious and easiest option is to use the Azure PowerShell cmdlets like Get-AzureRmSubscription or Get-AzureRmVM, but those are hard to translate into other scripting or coding languages, and if trouble arises they are a little more complicated to debug. As the note above highlights, this should not be considered a best-practice recommendation, but for the purposes of testing it should suffice.

To Begin

No matter which method you choose to develop with (SDKs, PowerShell, CLI, or directly with the APIs), there are some basic requirements for a project that need to be met, some of which are managed for you by PowerShell or the CLI. These are:

  • Authentication and authorization – Click Here for information on authenticating a service principal with ARM.
  • The application needs to be registered with Azure Active Directory and given permission – Click Here for more details.
  • RESTful requests can then be made to the correct URIs, with the above information used to create a bearer token – Click Here for Azure’s documentation for the ARM APIs. (Note: this is for the Azure public cloud, so API versions and available resources will differ from Azure Stack’s APIs, but it is a great place to start in a cloud-consistent world.)

In the Technical Preview of Azure Stack (which uses Azure Active Directory), the first step is to create an application in the Azure portal. Next, give that application delegated permission to the Azure Stack API application that was created during the install of the technical preview.

Step 1

Create an application in the Classic Azure Portal https://manage.windowsazure.com within the Active Directory you used to install Azure Stack.

Add application:

  1. Within the directory used for Azure Stack, select the Applications tab and then select Add+ in the bottom menu to add an application.
  2. When asked ‘What do you want to do?’ select the “Add an application my organization is developing” option.
  3. Under the ‘Tell us about your application’ screen, enter a name for your application and select the ‘Web application’ type.
  4. Then enter a sign-on URL; here you can enter http://localhost
  5. Under APP ID URI enter a unique App ID URI, for example https://localhost/appname/

Step 2

Configure the application’s authorization to Azure Stack’s APIs.

Configure Application:

  1. Select the Configure tab and take note of the Client ID GUID (you’ll need this later).
  2. Under Keys, select the drop-down called ‘Select duration’ and pick 1 year.
  3. Click Save, then go back to the Keys section and copy the newly created key (this is your only chance to get it!).
  4. Under the Permissions section, click Add Application, and in the dialog box drop down the Show menu and select All Apps. Click the check button.
  5. In the list of applications, find the AzureStack.local-Api application and select it, then click the plus sign that is now in the name column. Then click the check.
  6. It’s now added to the permissions list, so drop down the Delegated Permissions, select Access AzureStack.local-Api, and then click Save again.
  7. As a last piece of required data, select ‘VIEW ENDPOINTS’; in the new dialog box you’ll see several choices. The important piece here is the GUID you can see in each of the endpoints listed. This is an easy way to get your Tenant ID in GUID format.

 

Act on Azure Stack’s APIs

These next steps require their own sections. I’ll show some examples in PowerShell and Python, but essentially all we are doing is sending a correctly formed HTTPS request to the Azure Stack API for the action we wish to perform. The request can be a GET request, just like when you request a web page from a site, which in API terms is a request for information. Or it can be another type, such as a DELETE request or a PUT request, which will delete or create a resource. All requests made to a service, even to web sites, require a header, but the ARM APIs require some specific information in it.

Getting the Tenant’s Authorization Token

To retrieve the AAD token for the tenant’s authorization to access Azure Stack’s APIs, we’ll make a POST request to the OAuth token endpoint https://login.microsoftonline.com/{0}/oauth2/token, where {0} is your tenant ID from the earlier steps in creating and authorizing applications. We also need to set the grant type and scope.

Using PowerShell, let’s set some parameters:

$ClientID = ""
$ClientKey = ""
$TenantID = ""
$User = ""       # the tenant user name, for example: Tenant1@shawngibbs.onmicrosoft.com
$Password = ""
$AppIdUri = "https://azurestack.local-api/"
$AADURI = "https://login.microsoftonline.com/{0}/oauth2/token" -f $TenantID

 

In Python, setting parameters looks like this:

import json
import requests

CLIENT_ID = ""
CLIENT_SECRET = ""
CLIENT_TENANTID = ""
CLIENT_USERNAME = "Tenant1@shawngibbs.onmicrosoft.com"
CLIENT_PASSWORD = ""
CLIENT_RESOURCE = "https://azurestack.local-api/"
CLIENT_LOGIN_URL = "https://login.microsoftonline.com/"
URL = CLIENT_LOGIN_URL + CLIENT_TENANTID + "/oauth2/token"

 

Now let’s make the request for a token. Since PowerShell and Python differ in how they deal with the object model, which in turn changes the returned data, we’ll handle it in a way that is easy to deal with, although there may certainly be better ways to do the next step. For Python, we’ll set the acceptable response type to JSON with a request header: headers = {"Accept": "application/json"}. For PowerShell, this can be set as the content type in the command parameters. The request will require a body that represents the grant request information and type.

For PowerShell, the request body:

$GrantBody = "grant_type=password&scope=openid&resource={0}&client_id={1}&client_secret={2}&username={3}&password={4}" -f $AppIdUri, $ClientID, $ClientKey, $User, $Password
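One caveat with the raw format string: if the password or secret contains reserved characters such as & or +, the body will be corrupted. A defensive sketch that URL-encodes each value first:

$GrantBody = "grant_type=password&scope=openid&resource={0}&client_id={1}&client_secret={2}&username={3}&password={4}" -f `
    [uri]::EscapeDataString($AppIdUri), [uri]::EscapeDataString($ClientID), [uri]::EscapeDataString($ClientKey), `
    [uri]::EscapeDataString($User), [uri]::EscapeDataString($Password)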

 

For Python, the request body:

params = {"grant_type": "password",
          "scope": "openid",
          "resource": CLIENT_RESOURCE,
          "client_id": CLIENT_ID,
          "client_secret": CLIENT_SECRET,
          "username": CLIENT_USERNAME,
          "password": CLIENT_PASSWORD}

 

Now let’s make the calls to the authorization API and parse out the token we will use to make additional requests to the Resource Manager APIs.

For PowerShell:

$AADTokenResponse = Invoke-RestMethod -Uri $AADURI -ContentType "application/x-www-form-urlencoded" -Body $GrantBody -Method Post -Verbose
$AADtoken = $AADTokenResponse.access_token

For Python:

response = requests.post(URL, data = params, headers=headers)
response_json = response.json()
token = response_json['access_token']

The end result is that we have the JWT (JSON Web Token) saved as a variable in PowerShell and Python. This now gets attached to future requests as the ‘Authorization’ part of the request header. We’ll also set some other header values for language and response type.

In PowerShell:

$Headers = @{
    "Authorization" = "Bearer $AADtoken"
    "Accept" = "application/json"
    "x-ms-effective-locale" = "en.en-us"
}

In Python:

headers = {"Authorization": "Bearer "+ token, "Accept": "application/json", "x-ms-effective-locale":"en.en-us"}

At this point, we set the URI of the specific resource API we wish to get or set, and make additional calls with the above headers that include the authorization token. For brevity, we’ll simply request the subscriptions for the specific tenant, parse the response, and, since multiple subscriptions may exist, walk through each.

In PowerShell:

$GetSubscriptionsURI = "https://api.azurestack.local/subscriptions?api-version=1.0&includeDetails=true"
$Subscriptions = (Invoke-RestMethod -Uri $GetSubscriptionsURI -ContentType "application/json" -Headers $Headers -Method Get -Debug -Verbose).value
$Subscriptions

In Python:

SUBURL = "https://api.azurestack.local/subscriptions?api-version=1.0&includeDetails=true"
response = requests.get(SUBURL, headers=headers, verify=False)
response_json = json.loads(response.text)
response_value = response_json['value']
for sub in response_value:
    print("My subscription ID is: " + sub['subscriptionId'])
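From here the same pattern extends to any other resource. As a hedged PowerShell example, the following lists the resource groups in each subscription returned above; the route and api-version are assumptions based on the public Azure ARM API and may differ in Azure Stack TP1:

foreach ($sub in $Subscriptions) {
    # Same bearer-token headers as before, different resource URI
    $rgUri = "https://api.azurestack.local/subscriptions/{0}/resourcegroups?api-version=1.0" -f $sub.subscriptionId
    (Invoke-RestMethod -Uri $rgUri -ContentType "application/json" -Headers $Headers -Method Get).value |
        Select-Object name, location
}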

 

Result

At this point, you have the basics needed to communicate directly with the Azure Stack Resource Manager APIs. Even if this is not the model you choose for your development moving forward, it should at least shed light on what is happening underneath. These APIs are everything you need to perform exactly the same tasks that you do via the portal, PowerShell, or CLI.
