
Establishing Network Connectivity to a Share in the Windows Recovery Environment


Hi there! My name is Neil Dsouza and I’m a Support Escalation Engineer with the Windows Core team.

Today I’m going to cover a scenario where you have a server that fails to boot and all you want to do is copy the data off the machine to a network share.  In most cases connecting a USB flash drive/hard drive is the easiest solution to copy off the data.  However, if you don’t have physical access to the server, but you do have remote console access, then you can copy the data to a network share. These steps will also help gather logs or data when troubleshooting Windows in a no-boot scenario.

On Windows 7 and newer operating systems, the Windows Recovery Environment (WinRE) is installed by default, unless this was changed during deployment/installation of Windows. The steps below should work for most operating systems, Windows 7 and newer.

When the operating system fails to boot, by default it will take you to a boot menu with an option to boot into WinRE, labeled ‘Repair your computer’ or ‘Launch Startup Repair’.


Image 1: Boot menu to go to WinRE in Windows 7 or Windows Server 2008 R2


Image 2: Boot Menu to go to WinRE in Windows 8 / 2012 / 2012 R2 / 8.1 / 10

Choosing the ‘Startup Repair’ option will run the ‘Startup Repair Wizard’ and attempt to fix the most common issues that cause operating system boot failures.


Image 3: Startup Repair running in Windows 8 and newer OS


Image 4: Startup Repair running in Windows versions before Windows 8

When it finishes, a report lists the tests that were run to detect issues and their results. This information can be useful for understanding why Windows failed to boot.


Image 5: Startup repair Results

If you miss this in the wizard, you can always go to the Command Prompt in WinRE and open the file where this information is logged: %WINDIR%\System32\LogFiles\SrtTrail.txt
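For example, from the WinRE command prompt you can page through the log with the command below. Note that inside WinRE the offline Windows installation may be mapped to a drive letter other than C:, so adjust the path if needed.

type C:\Windows\System32\LogFiles\SrtTrail.txt | more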

If you do not see the ‘Repair Your Computer’ or ‘Launch Startup Repair’ option, it means that WinRE was not installed when the OS was installed. In such cases you can still boot to WinRE by using the operating system disk and selecting ‘Repair your computer’ at the install screen.


Image 6: Boot from CD/DVD/ISO screen for Windows 7


Image 7: Boot from CD/DVD/ISO screens for Windows 8 / 2012 / 8.1 / 2012 R2 / 10

 

On Windows 8 and newer operating systems, you have to navigate further through the options as shown below:

Select ‘Troubleshoot’


Select ‘Advanced Options’


Select ‘Command Prompt’, or you could run the ‘Startup Repair’ from here


For OS versions from Windows Vista through Windows Server 2008 R2:

Click ‘Next’


Select ‘Command Prompt’, or you could run the ‘Startup Repair’ from here


Once we are at the command prompt, we can do our magic.

The first thing we want to do is see what drive letter is assigned to each partition. DISKPART is our friend here.

Run the following commands:

diskpart
list volume


Now we come to the most interesting part.

To establish network connectivity to a file share on another machine (where you need to copy your data, log files, or that memory dump Microsoft Support is asking for when the machine blue screens on startup), run ‘wpeinit’ from the command prompt. This program is built into WinPE, the environment from which WinRE is created.

Now you can run ‘ipconfig’ and you will see that an IP address is assigned to the WinRE session. This will work only if you have a DHCP server assigning IP addresses.
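The minimal sequence at the WinRE command prompt is simply:

wpeinit
ipconfig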

In certain cases, ‘wpeinit’ runs but does not initialize the NIC or does not assign an IP address. There are a couple of reasons why that happens:

1. NIC driver is not loaded

In this scenario you can manually load the NIC driver. First, identify the right driver, which may already reside on the machine. All drivers that were installed on the machine are kept, unless explicitly removed, under %WINDIR%\System32\DriverStore\FileRepository, in folders whose names start with the driver INF file name followed by a GUID. You may see multiple folders starting with the same INF file name if multiple versions of the same driver have been installed. Alternatively, you can download the driver and extract it onto a USB stick. The .sys, .inf, and related files must be uncompressed to be able to load the driver manually.
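For example, you can list the candidate folders in the driver store from the WinRE command prompt (this assumes the offline Windows installation is on C:; many network driver INF names start with ‘net’, so filtering on net* is a convenient shortcut, and you can fall back to listing everything if that turns up nothing):

dir /b C:\Windows\System32\DriverStore\FileRepository
dir /s /b C:\Windows\System32\DriverStore\FileRepository\net*.inf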

For example, if the driver files in FileRepository live in a folder named netwew01.inf_amd64_9963f911be06feae, run the below command to load that NIC driver:

drvload c:\Windows\system32\DriverStore\FileRepository\netwew01.inf_amd64_9963f911be06feae\netwew01.inf

2. There’s no DHCP Server in the environment that could automatically assign an IP address


What do you do if there isn’t a DHCP server assigning IP addresses? Well, you can assign a static IP address using the netsh command below. You may use the server’s own IP address; however, if you have trouble with that, use a different IP address.

netsh int ipv4 set address "<Connection Name>" static <IP Address> <Subnet Mask> <Default Gateway>

The connection name can be obtained by running the ‘ipconfig /all’ command; it is the adapter/connection name shown in that output.

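For example, with made-up values for a connection named ‘Ethernet’ (substitute your own connection name, IP address, subnet mask and default gateway):

netsh int ipv4 set address "Ethernet" static 192.168.1.50 255.255.255.0 192.168.1.1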

Once you have an IP address, you can map a network drive to a file server or a simple share on another machine using the command below.

net use y: \\ServerName\ShareName

ServerName is the computer name of the server (or its IP address if name resolution is not working) and ShareName is the name of the share. You will be asked for credentials to access the network share.

You can also run the command below to have it pick the next available drive letter and display it.

net use * \\ServerName\ShareName

Now you can copy files and folders from the non-booting machine to the network share using copy, xcopy or, better still, robocopy.
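As a hedged illustration, the command below mirrors a data folder to the mapped drive and writes a log; the source and destination paths are placeholders to adjust for your environment:

robocopy C:\ImportantData Y:\ServerBackup /E /COPY:DAT /R:1 /W:1 /LOG:Y:\copylog.txt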

I hope this helps you save some time when you have a machine that is not booting, whether it’s a server or a client, and helps you copy or back up important data or logs to investigate the issue.

Neil Dsouza
Support Escalation Engineer
Windows Core Team


Where’s the Macro? Malware authors are now using OLE embedding to deliver malicious files


Recently, we’ve seen reports of malicious files that misuse the legitimate Office object linking and embedding (OLE) capability to trick users into enabling and downloading malicious content. Previously, we’ve seen macros used in a similar manner, and this use of OLE might indicate a shift in behavior as administrators and enterprises mitigate against this infection vector with better security and new options in Office.

In these new cases, we’re seeing OLE-embedded objects and content surrounded by well-formatted text and images to encourage users to enable the object or content, and thus run the malicious code. So far, we’ve seen these files use malicious Visual Basic (VB) and JavaScript (JS) scripts embedded in a document.

The script or object is surrounded by text that encourages the user to click or interact with the script (which is usually represented with a script-like icon). When the user interacts with the object, a warning asks the user whether to proceed or not. If the user chooses to proceed (by clicking Open), the malicious script runs and any form of infection can occur.


Figure 1: Warning message prompts the users to check whether they should open the script or not.

It’s important to note that user interaction and consent is still required to execute the malicious payload. If the user doesn’t enable or click the object, the code will not run and an infection will not occur.

Education is therefore an important part of mitigation – as with spam emails, suspicious websites, and unverified apps. Don’t click the link, enable the content, or run the program unless you absolutely trust it and can verify its source.

In late May 2016, we came across the following Word document (Figure 2) that used VB script and language similar to that used in CAPTCHA and other human-verification tools.

 


Figure 2: Invitation to unlock contents

 

It’s relatively easy for the malware author to replace the contents of the file (the OLE or embedded object that the user is invited to double-click or activate). We can see this in Figure 3, which indicates the control or script is a JS script.


Figure 3: Possible JavaScript variant

 

The icon used to indicate the object or content can be just about anything. It can be a completely different icon that has nothing to do with the scripting language being used, as the authors can use any picture and any file type.


Figure 4: Embedded object variant

 

It’s helpful to be aware of what this kind of threat looks like, what it can look like, and to educate users to not enable, double-click, or activate embedded content in any file without first verifying its source.

Technical details – downloading and decrypting a binary

In the sample we investigated, the content of the social engineering document is a malicious VB script, which we detect as TrojanDownloader:VBS/Vibrio and TrojanDownloader:VBS/Donvibs. This sample also distinguishes itself from the typical download-and-execute routine common to this type of infection vector: it has a “decryption function”.

This malicious VB script downloads an encrypted binary (bypassing any network-based protection designed to recognize malicious formats and block them), decrypts the binary, and then runs it. Figure 5 illustrates the encrypted binary we saw in this sample.


Figure 5: The encrypted binary

 

The embedded object or script downloads the encrypted file to %appdata% with a random file name, and proceeds to decrypt it using the script’s decryption function (Figure 6).


Figure 6: Decryption process

Lastly, it executes the now-decrypted binary, which in this example was Ransom:Win32/Cerber.


Figure 7: Decrypted Win32 executable

Prevalence

Our data shows these threats (TrojanDownloader:VBS/Vibrio and TrojanDownloader:VBS/Donvibs) are not particularly prevalent, with the greatest concentration in the United States.

We’ve also seen a steady decline since we first discovered it in late May 2016.


Figure 8: Worldwide prevalence


Figure 9: Daily prevalence

 

Prevention and recovery recommendations

Administrators can prevent activation of OLE packages by modifying the registry key HKCU\Software\Microsoft\Office\<Office version>\<Office application>\Security\PackagerPrompt.

The Office version values should be:

  • 16.0 (Office 2016)
  • 15.0 (Office 2013)
  • 14.0 (Office 2010)
  • 12.0 (Office 2007)

 

Setting the value to 2 causes the Office application to disable packages; they won’t be activated if a user tries to interact with or double-click them.

The value options for the key are:

  • 0 – No prompt from Office when user clicks, object executes
  • 1 – Prompt from Office when user clicks, object executes
  • 2 – No prompt, Object does not execute
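As a hedged example, the command below sets the value to 2 for Word in Office 2016; the version (16.0) and application subkey (Word) are placeholders that you should adjust, per the key path and version list above, for each Office version and application in your environment:

reg add "HKCU\Software\Microsoft\Office\16.0\Word\Security" /v PackagerPrompt /t REG_DWORD /d 2 /f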

You can find details about this registry key in the Microsoft Support article https://support.microsoft.com/en-us/kb/926530.

 

See our other blogs and our ransomware help page for further guidance on preventing and recovering from these types of attacks.

 

 

Alden Pornasdoro

MMPC

 

The Version Store Called, and They’re All Out of Buckets


Hello, Ryan Ries back at it again with another exciting installment of esoteric Active Directory and ESE database details!

I think we need to have another little chat about something called the version store.

The version store is an inherent mechanism of the Extensible Storage Engine and a commonly seen concept among databases in general. (ESE is sometimes referred to as Jet Blue. Sometimes old codenames are so catchy that they just won’t die.) Therefore, the following information should be relevant to any application or service that uses an ESE database (such as Exchange,) but today I’m specifically focused on its usage as it pertains to Active Directory.

The version store is one of those details that the majority of customers will never need to think about. The stock configuration of the version store for Active Directory will be sufficient to handle any situation encountered by 99% of AD administrators. But for that 1% out there with exceptionally large and/or busy Active Directory deployments, (or for those who make “interesting” administrative choices,) the monitoring and tuning of the version store can become a very important topic. And quite suddenly too, as replication throughout your environment grinds to a halt because of version store exhaustion and you scramble to figure out why.

The purpose of this blog post is to provide up-to-date (as of the year 2016) information and guidance on the version store, and to do it in a format that may be more palatable to many readers than sifting through reams of old MSDN and TechNet documentation that may or may not be accurate or up to date. I can also offer more practical examples than you would probably get from straight technical documentation. There has been quite an uptick lately in the number of cases we’re seeing here in Support that center around version store exhaustion. While the job security for us is nice, knowing this stuff ahead of time can save you from having to call us and spend lots of costly support hours.

Version Store: What is it?

As mentioned earlier, the version store is an integral part of the ESE database engine. It’s an area of temporary storage in memory that holds copies of objects that are in the process of being modified, for the sake of providing atomic transactions. This allows the database to roll back transactions in case it can’t commit them, and it allows other threads to read from a copy of the data while it’s in the process of being modified. All applications and services that utilize an ESE database use version store to some extent. The article “How the Data Store Works” describes it well:

“ESE provides transactional views of the database. The cost of providing these views is that any object that is modified in a transaction has to be temporarily copied so that two views of the object can be provided: one to the thread inside that transaction and one to threads in other transactions. This copy must remain as long as any two transactions in the process have different views of the object. The repository that holds these temporary copies is called the version store. Because the version store requires contiguous virtual address space, it has a size limit. If a transaction is open for a long time while changes are being made (either in that transaction or in others), eventually the version store can be exhausted. At this point, no further database updates are possible.”

When Active Directory was first introduced, it was deployed on machines with a single x86 processor with less than 4 GB of RAM supporting NTDS.DIT files that ranged between 2MB and a few hundred MB. Most of the documentation you’ll find on the internet regarding the version store still has its roots in that era and was written with the aforementioned hardware in mind. Today, things like hardware refreshes, OS version upgrades, cloud adoption and an improved understanding of AD architecture are driving massive consolidation in the number of forests, domains and domain controllers in them, DIT sizes are getting bigger… all while still relying on default configuration values from the Windows 2000 era.

The number-one killer of version store is long-running transactions. Transactions that tend to be long-running include, but are not limited to:

– Deleting a group with 100,000 members
– Deleting any object, not just a group, with 100,000 or more forward/back links to clean
– Modifying ACLs in Active Directory on a parent container that propagate down to many thousands of inheriting child objects
– Creating new database indices
– Having underpowered or overtaxed domain controllers, causing transactions to take longer in general
– Anything that requires boat-loads of database modification
– Large SDProp and garbage collection tasks
– Any combination thereof

I will show some examples of the errors that you would see in your event logs when you experience version store exhaustion in the next section.

Monitoring Version Store Usage

To monitor version store usage, leverage the Performance Monitor (perfmon) counter:

‘\\dc01\Database ==> Instances(lsass/NTDSA)\Version buckets allocated’

(Figure 1: The ‘Version buckets allocated’ perfmon counter.)
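If you prefer PowerShell to the perfmon UI, here is a quick sketch that samples the same counter remotely (dc01 is a placeholder for your domain controller name):

Get-Counter -ComputerName dc01 -Counter '\Database ==> Instances(lsass/NTDSA)\Version buckets allocated' -SampleInterval 5 -MaxSamples 12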

The version store divides the amount of memory that it has been given into “buckets,” or “pages.” Version store pages need not (and in AD, they do not) equal the size of database pages elsewhere in the database. We’ll get into the exact size of these buckets in a minute.

During typical operation, when the database is not busy, this counter will be low. It may even be zero if the database really just isn’t doing anything. But when you perform one of those actions that I mentioned above that qualify as “long-running transactions,” you will trigger a spike in the version store usage. Here is an example of me deleting a group that contains 200,000 members, on a DC running 2012 R2 with 1 64bit CPU:

(Figure 2: Deleting a group containing 200k members on a 2012 R2 DC with 1 64bit CPU.)

The version store spikes to 5332 buckets allocated here, seconds after I deleted the group, but as long as the DC recovers and falls back down to nominal levels, you’ll be alright. If it stays high or even maxed out for extended periods of time, then no more database transactions for you. This includes no more replication. This is just an example using the common member/memberOf relationship, but any linked-value attribute relationship can cause this behavior. (I’ve talked a little about linked value attributes before here.) There are plenty of other types of objects that may invoke this same kind of behavior, such as deleting an RODC computer object, and then its msDs-RevealedUsers links must be processed, etc..

I’m not saying that deleting a group with fewer than 200K members couldn’t also trigger version store exhaustion if there are other transactions taking place on your domain controller simultaneously or other extenuating circumstances. I’ve seen transactions involving as few as 70K linked values cause major problems.

After you delete an object in AD, and the domain controller turns it into a tombstone, each domain controller has to process the linked-value attributes of that object to maintain the referential integrity of the database. It does this in “batches,” usually 1000 or 10,000 depending on Windows version and configuration. This was only very recently documented here. Since each “batch” of 1000 or 10,000 is considered a single transaction, a smaller batch size will tend to complete faster and thus require less version store usage. (But the overall job will take longer.)

An interesting curveball here is that having the AD Recycle Bin enabled will defer this action by an msDs-DeletedObjectLifetime number of days after an object is deleted, since that’s the appeal behind the AD Recycle Bin – it allows you to easily restore deleted objects with all their links intact. (More detail on the AD Recycle Bin here.)

When you run out of version storage, no other database transactions can be committed until the transaction or transactions that are causing the version store exhaustion are completed or rolled back. At this point, most people start rebooting their domain controllers, and this may or may not resolve the immediate issue for them depending on exactly what’s going on. Another thing that may alleviate this issue is offline defragmentation of the database. (Or reducing the links batch size, or increasing the version store size – more on that later.) Again, we’re usually looking at 100+ gigabyte DITs when we see this kind of issue, so we’re essentially talking about pushing the limits of AD. And we’re also talking about hours of downtime for a domain controller while we do that offline defrag and semantic database analysis.

Here, Active Directory is completely tapping out the version store. Notice the plateau once it has reached its max:

(Figure 3: Version store being maxed out at 13078 buckets on a 2012 R2 DC with 1 64bit CPU.)

So it has maxed out at 13,078 buckets.

When you hit this wall, you will see events such as these in your event logs:

Log Name: Directory Service
Source: Microsoft-Windows-ActiveDirectory_DomainService
Date: 5/16/2016 5:54:52 PM
Event ID: 1519
Task Category: Internal Processing
Level: Error
Keywords: Classic
User: S-1-5-21-4276753195-2149800008-4148487879-500
Computer: DC01.contoso.com
Description:
Internal Error: Active Directory Domain Services could not perform an operation because the database has run out of version storage.

And also:

Log Name: Directory Service
Source: NTDS ISAM
Date: 5/16/2016 5:54:52 PM
Event ID: 623
Task Category: (14)
Level: Error
Keywords: Classic
User: N/A
Computer: DC01.contoso.com
Description:
NTDS (480) NTDSA: The version store for this instance (0) has reached its maximum size of 408Mb. It is likely that a long-running transaction is preventing cleanup of the version store and causing it to build up in size. Updates will be rejected until the long-running transaction has been completely committed or rolled back.

The peculiar “408Mb” figure that comes along with that last event leads us into the next section…

How big is the Version Store by default?

The “How the Data Store Works” article that I linked to earlier says:

“The version store has a size limit that is the lesser of the following: one-fourth of total random access memory (RAM) or 100 MB. Because most domain controllers have more than 400 MB of RAM, the most common version store size is the maximum size of 100 MB.”

Incorrect.

And then you have other articles that have even gone to print, such as this one, that say:

“Typically, the version store is 25 percent of the physical RAM.”

Extremely incorrect.

What about my earlier question about the bucket size? Well if you consulted this KB article you would read:

“The value for the setting is the number of 16KB memory chunks that will be reserved.”

Nope, that’s wrong.

Or if I go to the MSDN documentation for ESE:

“JET_paramMaxVerPages
This parameter reserves the requested number of version store pages for use by an instance.

Each version store page as configured by this parameter is 16KB in size.”

Not true.

The pages are not 16KB anymore on 64bit DCs. And the only time that the “100MB” figure was ever even close to accurate was when domain controllers were 32bit and had 1 CPU. But today, domain controllers are 64bit and have lots of CPUs. The version store bucket size and the default number of version store buckets allocated both double when your domain controller is 64bit rather than 32bit. And the figure also scales a little bit based on how many CPUs are in your domain controller.

So without further ado, here is how to calculate the actual number of buckets that Active Directory will allocate by default:

(2 * (3 * (15 + 4 + 4 * #CPUs)) + 6400) * PointerSize / 4

Pointer size is 4 if you’re using a 32bit processor, and 8 if you’re using a 64bit processor.

And secondly, version store pages are 16KB if you’re on a 32bit processor, and 32KB if you’re on a 64bit processor. So using a 64bit processor effectively quadruples the default size of your AD version store. To convert the number of buckets allocated into megabytes for a 32bit processor:

(((2 * (3 * (15 + 4 + 4 * 1)) + 6400) * 4 / 4) * 16KB) / 1MB

And for a 64bit processor:

(((2 * (3 * (15 + 4 + 4 * 1)) + 6400) * 8 / 4) * 32KB) / 1MB

So using the above formulae, the version store size for a single-core, 64bit DC would be ~408MB, which matches that event ID 623 we got from ESE earlier. It also conveniently matches 13078 * 32KB buckets, which is where we plateaued with our perfmon counter earlier.
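Here is a small PowerShell sketch of the same arithmetic, so you can plug in your own CPU count and architecture:

$cpus        = 1
$pointerSize = 8                                          # 8 for 64bit, 4 for 32bit
$bucketKB    = if ($pointerSize -eq 8) { 32 } else { 16 } # bucket size in KB

$buckets = (2 * (3 * (15 + 4 + 4 * $cpus)) + 6400) * $pointerSize / 4
$sizeMB  = ($buckets * $bucketKB) / 1024

"{0} buckets, approximately {1:N1} MB of version store" -f $buckets, $sizeMB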

If you had a 4-core, 64bit domain controller, the formula would come out to ~412MB, and you will see this line up with the event log event ID 623 on that machine. When a 4-core, Windows 2008 R2 domain controller with default configuration runs out of version store:

Log Name:      Directory Service
Source:        NTDS ISAM
Date:          5/15/2016 1:18:25 PM
Event ID:      623
Task Category: (14)
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      dc02.fabrikam.com
Description:
NTDS (476) NTDSA: The version store for this instance (0) has reached its maximum size of 412Mb. It is likely that a long-running transaction is preventing cleanup of the version store and causing it to build up in size. Updates will be rejected until the long-running transaction has been completely committed or rolled back.

The version store size for a single-core, 32bit DC is ~102MB. This must be where the original “100MB” adage came from. But as you can see now, that information is woefully outdated.

The 6400 number in the equation comes from the fact that 6400 is the absolute, hard-coded minimum number of version store pages/buckets that AD will give you. Turns out that’s about 100MB, if you assumed 16KB pages, or 200MB if you assume 32KB pages. The interesting side-effect from this is that the documented “EDB max ver pages (increment over the minimum)” registry entry, which is the supported way of increasing your version store size, doesn’t actually have any effect unless you set it to some value greater than 6400 decimal. If you set that registry key to something less than 6400, then it will just get overridden to 6400 when AD starts. But if you set that registry entry to, say, 9600 decimal, then your version store size calculation will be:

(((2 * (3 * (15 + 4 + 4 * 1)) + 9600) * 8 / 4) * 32KB) / 1MB = 608.6MB

For a 64bit, 1-core domain controller.
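If you do need to change it, the “EDB max ver pages (increment over the minimum)” value is a REG_DWORD that is read when AD starts. A hedged sketch follows; the key path below is the commonly used NTDS Parameters location, so verify it against the support KB before relying on it, and remember the data is a decimal bucket count:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters" /v "EDB max ver pages (increment over the minimum)" /t REG_DWORD /d 9600 /f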

So let’s set those values on a DC, then run up the version store, and let’s get empirical up in here:

(Figure 4: Version store exhaustion at 19478 buckets on a 2012 R2 DC with 1 64bit CPU.)

(19478 * 32KB) / 1MB = 608.7MB

And wouldn’t you know it, the event log now reads:

(Figure 5: The event log from the previous version store exhaustion, showing the effect of setting the “EDB max ver pages (increment over the minimum)” registry value to 9600.)

Here’s a table that shows version store sizes based on the “EDB max ver pages (increment over the minimum)” value and common CPU counts:

Buckets              1 CPU          2 CPUs         4 CPUs         8 CPUs         16 CPUs

6400 (the default)   x64: 410 MB    x64: 412 MB    x64: 415 MB    x64: 421 MB    x64: 433 MB
                     x86: 103 MB    x86: 103 MB    x86: 104 MB    x86: 105 MB    x86: 108 MB

9600                 x64: 608 MB    x64: 610 MB    x64: 613 MB    x64: 619 MB    x64: 631 MB
                     x86: 152 MB    x86: 153 MB    x86: 153 MB    x86: 155 MB    x86: 158 MB

12800                x64: 808 MB    x64: 810 MB    x64: 813 MB    x64: 819 MB    x64: 831 MB
                     x86: 202 MB    x86: 203 MB    x86: 203 MB    x86: 205 MB    x86: 208 MB

16000                x64: 1008 MB   x64: 1010 MB   x64: 1013 MB   x64: 1019 MB   x64: 1031 MB
                     x86: 252 MB    x86: 253 MB    x86: 253 MB    x86: 255 MB    x86: 258 MB

19200                x64: 1208 MB   x64: 1210 MB   x64: 1213 MB   x64: 1219 MB   x64: 1231 MB
                     x86: 302 MB    x86: 303 MB    x86: 303 MB    x86: 305 MB    x86: 308 MB

Sorry for the slight rounding errors – I just didn’t want to deal with decimals. As you can see, the number of CPUs in your domain controller only has a slight effect on the version store size. The processor architecture, however, makes all the difference. Good thing absolutely no one uses x86 DCs anymore, right?

Now I want to add a final word of caution.

I want to make it clear that we recommend changing the “EDB max ver pages (increment over the minimum)” value only when necessary; that is, when the event ID 623s start appearing. (If it ain’t broke, don’t fix it.) I also want to reiterate the warnings that appear in the support KB: you must not set this value arbitrarily high, you should increase the setting in small (50MB or 100MB) increments, and if setting the value to 19200 buckets still does not resolve your issue, you should contact Microsoft Support. If you are going to change this value, it is advisable to change it consistently across all domain controllers, but you must also carefully consider the processor architecture and available memory on each DC before you change this setting. The version store requires a contiguous allocation of memory – precious real estate – and raising the value too high can prevent lsass from being able to perform other work. Once the problem has subsided, you should return this setting to its default value.

In my next post on this topic, I plan on going into more detail on how one might actually troubleshoot the issue and track down the reason behind why the version store exhaustion is happening.

Conclusions

There is a lot of old documentation out there that has misled many an AD administrator on this topic. It was essentially accurate at the time it was written, but AD has evolved since then. I hope that with this post I was able to shed more light on the topic than you probably ever thought was necessary. It’s an undeniable truth that more and more of our customers continue to push the limits of AD beyond that which was originally conceived. I also want to remind the reader that the majority of the information in this article is AD-specific. If you’re thinking about Exchange or Certificate Services or Windows Update or DFSR or anything else that uses an ESE database, then you need to go figure out your own application-specific details, because we don’t use the same page sizes or algorithms as those guys.

I hope this will be valuable to those who find themselves asking questions about the ESE version store in Active Directory.

With love,

Ryan “Buckets of Fun” Ries


Trusted cloud considerations and financial services


A discussion with Bill Fearnley, research director for compliance, fraud, and risk analytics at IDC Financial Insights.


This is the third in a guest blog post series by IDC on trusted cloud. This post details a Q&A discussion between Bonnie Kearney, director, trusted cloud marketing at Microsoft and IDC financial insights research director, Bill Fearnley.

Bonnie: Hi, Bill. As an analyst focused on financial services, can you give some perspective on the current role of cloud in financial institutions in a cloud first world? What is driving demand?

Bill: Certainly. A couple of key thoughts come to mind immediately. The shift to cloud is happening now. While not all financial services workloads will move to the cloud, IDC Financial Insights believes that financial firms will continue to increase their investments in cloud architectures to control costs, move to a services delivery model for employee applications and data, and leverage the enterprise datacenter architectures of leading providers.

  • Varied deployment for cost control. Because customer records and the firm’s IP and transaction data are very sensitive, financial firms are using the cloud to help lower costs and provide “elastic capacity,” as well as deploying a wide variety of clouds — including private, public, and hybrid clouds — in a variety of cloud deployment options: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS).
  • Relying on a trusted cloud. Key things that financial services customers and regulators care about when moving workloads into the cloud include operational security, protecting customer and employee privacy, and meeting compliance requirements. Also, cloud service providers must be transparent about what they do with the data, who has access to it, and where it resides. Trust is fundamental to relying on a cloud vendor as both employees and customers of financial services firms increase their use of personal mobile devices and expect anytime, anywhere remote access to information and their accounts.
  • Data resiliency, availability and reliability are important factors. Cloud providers have built enterprise class datacenters at a global scale to ensure systems, services and operations are functioning at all times. Architected to maintain data availability, data resiliency, business continuance and disaster recovery planning (BC/DR) are critical operational features that customers must look to when choosing a cloud vendor. Ability to choose vendors that offer choice of where data is stored at a regional level is another factor to consider.
  • Risk assurance and accountability remain the same. Cloud computing offers substantial innovative benefits, yet financial services customers remain accountable for managing and supervising cloud vendors from a risk assurance perspective. Financial services firms need to evaluate the benefits as well as the trust related investments of cloud service providers as they navigate the journey.

Bonnie: At Microsoft, we place a great deal of importance on our long-term investment in trusted cloud including principled approach to security, privacy, control, compliance, reliability and transparency. Can you give some insight into the relevancy these key investments are likely to have for financial services investigating and investing in cloud service?

Bill: Happy to. In terms of realizing the promise of cloud computing, an area that comes immediately to mind is compliance benefits in the cloud, where scalable capacity to meet the demands of the business and BI analytics show great promise for helping financial services firms ensure compliance and avoid financial penalties from regulators.

  • In compliance and fraud detection and prevention, financial services firms are increasing their use of risk-based analytics models for risk scoring, transaction monitoring and investigations of compliance and fraud alerts. The cloud provides scalable access to the huge amounts of data needed by analysts and data scientists. We are now seeing leading firms make this move to cloud, especially for access to market events and pricing data.
  • In addition, firms are aggregating large amounts of internal and external data to help comply with regulations such as Anti-Money Laundering (AML) and Know Your Customer (KYC) analysis.
  • Analysts use huge data sets to develop and test statistical and analytics models to help detect financial crime and compliance violations; often these data sets are also accessed in the cloud.
  • When developing investment and portfolio strategies, analysts build portfolios that they believe will provide investment upside for their clients. To test how their strategies might perform in a variety of market conditions, they will “back test” their portfolios over a long time period with large amounts of historical transaction and market data. To meet the changing data and investment research needs of investment professionals, many firms are looking at providing access to cloud-based market and transaction data that analysts can download as needed to build investment and portfolio back-testing models.

Bonnie: The cloud has the potential to help reduce the compliance burdens for banks. As financial institutions evaluate cloud service providers, what other trust principles are key in their considerations?

Bill: Financial services firms must keep assets and information safe. This includes a continued focus on some key elements:

  • Security. Financial firms have a lot of intellectual property (IP) that must be kept secure and protected. In addition to customer records and transaction data, firms have billions invested in proprietary financial models, investment research and development. Firms must protect their transaction data and customer information from threats that could come from customers, employees, contractors or counterparties. For financial firms, a breach can hurt their brand and erode trust in all of their relationships. Cloud service providers with proven experience in data security can help financial institutions combat (and stay ahead of) the growing number and increasing sophistication of cyber attacks.
  • Reliability/availability. In addition to security, firms must provide continuous system and data availability, business continuance and disaster recovery (BC/DR) planning to monitor events (e.g., weather and other events) to keep data secure and out of harm’s way whenever possible. Cloud service providers with multiple datacenters can help firms stay out of trouble by moving customer and firm data and workloads away from threats (e.g., natural disasters or political unrest) producing more reliable/resilient service.
    • Banks and financial firms with international operations must make data available at the speed of the business and accessible to those that need it, so master data management is recommended to help keep data accessible (and protected); how it is managed and configured internally is also important.
  • Privacy and control. The control of information and privacy are paramount to maintaining trust in financial services. Financial firms know how and where money is spent and invested, which gives them “privileged access to information,” so they are responsible (and liable) for maintaining customer privacy. Firms must have control of and access to their data at all times, especially in these times when financial regulations and data security in financial services are making headlines.
  • Transparency. It’s important that cloud vendors are transparent about where data resides at a regional level, and what they do to keep it safe and secure. Increasingly, regulators are inspecting data, analytical models and financial firms’ Governance, Risk and Compliance (GRC) policies and procedures around data and data management, especially accessibility and security of customer and compliance information.
  • Compliance. International firms have to comply with data usage rules and regulations, and cloud providers can help firms make sure they are compliant with data management and data security rules and regulations that often vary from country to country. Access to data is key to innovative analytics, especially for data scientists and analysts to do queries, build models and run tests on analytical models.
  • Choice. For some customers, it is important to choose a cloud vendor that offers flexibility and choice across deployment models (private hosted cloud, hybrid and multi-tenant), depending upon the types of data they put in the cloud and their risk appetite for managing more critical workloads and data in a multi-tenant environment.

Bonnie: Thanks for your insights Bill. Any final thoughts for our readers?

Bill: Thank you Bonnie. I would like to end with a few concluding points:

  • Financial services firms are increasing their investments in the cloud to provide new applications and data as a service to their employees and partners.
  • Trust is paramount to relationships that firms have with customers, employees, counterparties, partners and investors. Firms are being very selective in their network, software, services and applications vendors when making cloud investments and successful cloud service providers will need to continue to invest in security, data privacy and data management, regulatory compliance, and transparency to establish and maintain that trust.
  • As financial firms move to cloud, they must continue to adhere to strong risk assurance programs and maintain appropriate oversight and control of the cloud vendor.
  • Regulations have not kept pace with the speed of innovation in the cloud, yet so long as customers meet their risk assurance obligations with cloud vendors, financial services regulators will watch carefully but are unlikely to stop cloud adoption, given that the market has already moved in this direction.

To learn more, visit the Trusted Cloud website.

Details on the June 2016 Microsoft security update release


Yesterday we released security updates to provide additional protection against malicious attackers. As a best practice, we encourage customers to apply security updates as soon as they are released. More information about June’s security updates and advisories can be found in the Security TechNet Library.

J.C. Hornbeck, Solution Asset PM
Microsoft Enterprise Cloud Group

Reading a pixel on a VM Screen


Two weeks ago I provided a code sample that allowed you to capture a Hyper-V virtual machine screen to a bitmap.  As part of this script – the virtual machine screen is stored in a Windows bitmap object.  There are actually a number of interesting things you can do with this object.

One such thing is to get individual pixel data from the screen. You can do this as follows:
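A minimal sketch, assuming $VMScreenBitmap holds the System.Drawing.Bitmap produced by the earlier screen-capture script (the variable name and coordinates are just for illustration):

# Read the color of the pixel at x = 100, y = 200 on the captured VM screen
$pixel = $VMScreenBitmap.GetPixel(100, 200)
$pixel | Format-List R, G, B, A, IsKnownColor, IsEmpty, IsNamedColor, IsSystemColor, Name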

This will produce results like this:

R             : 57
G             : 81
B             : 82
A             : 255
IsKnownColor  : False
IsEmpty       : False
IsNamedColor  : False
IsSystemColor : False
Name          : ff395152

You can then use this to test for what is happening inside the virtual machine in a non-intrusive manner.

Cheers,
Ben

KB: Data Warehouse jobs fail and event ID 33502 is logged in Microsoft System Center 2012 Service Manager


We recently published a new Knowledge Base article that discusses an issue where Data Warehouse jobs fail in SCSM 2012. When this problem occurs the following event is logged in the Operations Manager event log on the Data Warehouse server:

Log Name: Operations Manager
Source: Data Warehouse
Event ID: 33502
Level: Error
Description:
ETL Module Execution failed:
ETL process type: Transform
Batch ID: ######
Module name: TransformEntityRelatesToEntityFact
Message: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

Also, when you run certain Data Warehouse related cmdlets, you may see a timeout error recorded for the TransformEntityRelatesToEntityFact module that resembles the following:

Get-SCDWJobModule -JobName transform.common
. . .
1952 TransformEntityRelatesToEntityFact Failed
. . .

For all the details regarding why this problem might occur and a couple of options to resolve it, please see the following:

KB3137611 – Data Warehouse jobs fail and event ID 33502 is logged (https://support.microsoft.com/en-us/kb/3137611)

J.C. Hornbeck, Solution Asset PM
Microsoft Enterprise Cloud Group

Find a bitmap on the VM screen


Continuing with my series on how to do interesting automation stuff with Hyper-V – today I want to show you how to use PowerShell and the AForge.NET library to locate a bitmap on a virtual machine screen.

For example – imagine that I had a test running in a Windows Server Core 2016 virtual machine.


And I wanted to be able to tell if PowerShell was running interactively – but I did not want to interfere with the guest operating system in any way.  What could I do?  Well – I could get the screen and look for a small bitmap that told me that PowerShell was running, such as the PowerShell icon.


Fortunately – this is quite easy to do thanks to the handy AForge.NET libraries.  The result looks something like this:
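A minimal sketch of the approach, assuming the AForge.NET imaging assemblies are available locally, $VMScreenBitmap holds the captured VM screen and $templateBitmap holds the small reference image to search for (the paths and variable names are illustrative):

# Load System.Drawing and the AForge.NET assemblies (adjust the paths to your AForge.NET install)
Add-Type -AssemblyName System.Drawing
Add-Type -Path 'C:\AForge\AForge.dll', 'C:\AForge\AForge.Imaging.dll'

# Hyper-V screen captures come back as 16bit RGB565, which AForge.NET does not accept,
# so clone both bitmaps up to 24bit RGB first
$screen   = [AForge.Imaging.Image]::Clone($VMScreenBitmap, [System.Drawing.Imaging.PixelFormat]::Format24bppRgb)
$template = [AForge.Imaging.Image]::Clone($templateBitmap, [System.Drawing.Imaging.PixelFormat]::Format24bppRgb)

# Exhaustive template matching; 0.99 is the minimum similarity to report
$matcher = New-Object AForge.Imaging.ExhaustiveTemplateMatching(0.99)
$results = $matcher.ProcessImage($screen, $template)

# Each match reports the rectangle where the template was found and how similar it was
$results | Select-Object Rectangle, Similarity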

If I run this script with a single PowerShell Window open I get this result:

Rectangle                        Similarity
---------                        ----------
{X=381,Y=371,Width=13,Height=12}          1

One perfect match at X=381, Y=371

If I open up two more PowerShell Windows and run this script I get this result:

Rectangle                        Similarity
---------                        ----------
{X=303,Y=522,Width=13,Height=12}          1
{X=535,Y=355,Width=13,Height=12}   0.993087
{X=74,Y=700,Width=13,Height=12}    0.993087

One perfect match – and two close matches.  The reason for the difference here is that the close matches are PowerShell Windows that do not have focus – so the graphic is slightly different.

One final point to make about this code sample: Hyper-V produces a 16bit RGB565 image for a screen capture.  Unfortunately the AForge.NET libraries do not accept this format.  So in my sample code you will see that I upsample into 24bit RGB in order to make everything work.

Cheers,
Ben


What’s new in failover clustering: #3 Stretched Clusters


This post was authored by Ned Pyle, Principal Program Manager, Windows Server

Why should you care about clustered storage? Everyone’s talking about apps, mobile, DevOps, containers, platforms. That’s cutting edge stuff in the IT world. Storage is boring, right?

Well, they’re all wrong. Storage is the key. You care about storage because it contains the only irreplaceable part of your IT environment: your data. That data is what makes your company run, what makes the money, what keeps the lights on. And that data usage is ever increasing.

Your datacenter could burn to the ground, all your servers could flood, your network could be shut down by a malicious attack, but if your data is safely protected, you can always get back to business.

Windows Server 2016 stretch clustering is here to protect that data and run those workloads so that your business stays in business.

Stretching clusters with Storage Replica in Windows Server 2016

Storage Replica offers new disaster recovery and preparedness capabilities to the already robust Failover Cluster in Windows Server 2016 Technical Preview. For the first time, Windows Server offers the peace of mind of a zero data loss recovery point objective, with the ability to synchronously protect data on different racks, floors, buildings, campuses, counties and cities. After a disaster strikes, all data will exist elsewhere, without any possibility of loss. The same applies before a disaster strikes; Storage Replica offers you the ability to switch workloads to safe locations prior to catastrophes when granted a few moments warning – again, with no data loss.

Storage Replica allows more efficient use of multiple datacenters. By stretching clusters or replicating clusters, workloads can be run in multiple datacenters for quicker data access by local proximity users and applications, as well as for better load distribution and use of compute resources. If a disaster takes one datacenter offline, you can move its typical workloads to the other site temporarily. It is also workload agnostic – you can replicate Hyper-V VMs, MS SQL Server databases, unstructured data or third party application workloads.

Stretch Cluster allows configuration of computers and storage in a single cluster, where some nodes share one set of asymmetric storage and some nodes share another, then synchronously or asynchronously replicate with site awareness. This scenario can utilize shared Storage Spaces on JBOD, SAN and iSCSI-attached LUNs. It is managed with PowerShell and the Failover Cluster Manager graphical tool, and allows for automated workload failover.

Besides synchronous replication, Storage Replica can utilize asynchronous replication for higher latency networks or for lower bandwidth networks.

Ease of Deployment and Management

You deploy and manage stretch clusters using familiar and mature tools like Failover Cluster Manager, which means reduced training time for staff. Wizard-based setup allows administrators to quickly deploy new replication groups and protect their data and workloads.

To ensure successful deployment and operational guidance, Storage Replica and the Failover Cluster both provide validation mechanisms with detailed reports. For instance, prior to deploying a stretch cluster, you can test the topology for requirements, estimate sync times, log size recommendations and write IO performance.
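For example, the Storage Replica side of that validation can be run ahead of a deployment with the Test-SRTopology cmdlet; the server names, volumes and result path below are placeholders:

Test-SRTopology -SourceComputerName SR-NODE01 -SourceVolumeName D: -SourceLogVolumeName E: `
    -DestinationComputerName SR-NODE02 -DestinationVolumeName D: -DestinationLogVolumeName E: `
    -DurationInMinutes 30 -ResultPath C:\Temp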

Windows Server 2016 also implements site fault domains, allowing you to specify the location of nodes in your cluster and set preferences. For instance, you could specify New York and New Jersey sites, then ensure that all nodes in New York must be offline for the workloads and storage replication to automatically switch over to the New Jersey site. All of this is implemented through a simple PowerShell cmdlet, but can also be automated with XML files for larger private cloud environments.

Summary

Windows Server 2016 stretch clustering was designed with your data’s safety in mind. The mature, robust failover clustering combined with synchronous replication offer peace of mind at commodity pricing. Try this new feature in Windows Server 2016 and download the Technical Preview. For additional details, see the feature Cluster blog here.

Check out the series:

Windows – Read me that virtual machine


After a couple of weeks of playing around with Hyper-V APIs for reading virtual machine screens and sending keystrokes – I hit upon an interesting idea.  What would it take to make a “virtual machine screen reader”?

You see, Windows itself has great support for a number of accessibility options.  And these work both in the host operating system environment – and inside the virtual machine when you are running Windows as a guest.  But what if you are not running Windows as a guest?  What if the guest OS is not actually running (e.g. BIOS screens, fatal errors, etc…)?

Well – with a little work I now have a sample script that will:

  1. Scrape the graphical content of a virtual machine screen
  2. Feed it into the Tesseract OCR library
  3. Feed the results of that into the Windows Speech Synthesis engine
  4. And read the screen to you

The results look like this:

And the code needed to do this is as follows:
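A condensed sketch of the flow (not the full original script that the line numbers below refer to), assuming $VMScreenBitmap holds the captured VM screen and Convert-ImageToText is a hypothetical stand-in for the OCR wrapper you use:

Add-Type -AssemblyName System.Drawing, System.Speech

$speakItToMe = $true

# Stretch the captured bitmap before running OCR - this noticeably improves accuracy
$stretched = New-Object -TypeName System.Drawing.Bitmap -ArgumentList $VMScreenBitmap, ($VMScreenBitmap.Width * 2), ($VMScreenBitmap.Height * 2)

# Placeholder for the OCR step (for example, a Tesseract wrapper function)
$screenText = Convert-ImageToText -Image $stretched

if ($speakItToMe) {
    $speak = New-Object System.Speech.Synthesis.SpeechSynthesizer
    $speak.SelectVoiceByHints('Female')
    $speak.Speak($screenText)
}
else {
    Write-Output $screenText
}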

A couple of things to call out here:

  • To pull this off I am using the Tesseract Open Source OCR Engine and the PowerShell wrapper for it written by Jourdan Templeton
  • In order to get the best level of accuracy in OCR – I made two specific changes:
    • I stretch the VM screen bitmap before performing an OCR (I do not know why this matters – but it does make a difference)
    • I edited tesseractlib.psm1 from Jourdan’s wrapper to specify [Tesseract.EngineMode]::TesseractandCube instead of [Tesseract.EngineMode]::default.  This makes it slower – but more accurate
  • The sample above will capture the whole screen by default – and read it to you in a female voice.  There are a number of changes that you can make here:
    • If you specify a crop rectangle on line 4 – the script will only read a portion of the screen.
    • If you set $speakItToMe = $false on line 5 – the script will output text, instead of speaking.
    • If you change line 60 to $speak.SelectVoiceByHints(‘Male’) – you will get a male speaker instead.

Cheers,
Ben

What’s new in failover clustering: #04 Workgroup and multi-domain clusters


This post was authored by Subhasish Bhattacharya, Program Manager, Windows Server

Introduction: Active Directory integration with your private cloud

Active Directory integration provides significant value for most private cloud deployments. However, for a subset of scenarios, it is desirable to be able to decouple your deployment from Active Directory. In prior Windows Server releases, we introduced a number of features to minimize the dependence of your private cloud on Active Directory. Some of these features include:

Bootstrapping without Active Directory: Introduced in Windows Server 2012, this allows you to boot your private cloud without Active Directory dependencies. This is especially useful when you have lost power to your entire datacenter and have to bootstrap it. It therefore enables you to virtualize your entire datacenter, including domain controllers.

Cluster Shared Volumes independent of Active Directory: Cluster Shared Volumes, in Windows Server 2012 and beyond, have no dependence on Active Directory. This is especially advantageous in deployments in branch offices and off-site deployments.

Active Directory-detached clusters: In Windows Server 2012 R2, Failover Clusters can be created without computer objects in Active Directory, thereby decreasing your deployment and maintenance complexity. However, this deployment model still requires all the nodes in your private cloud to be joined to a single domain.

Flexible private cloud – the need for domain independence

In our discussions with you over the last few years, we learned about why you wanted domain independence (freedom from domain requirements for your clusters)!

SQL Server workload

1.    You wish to have AlwaysOn Availability Groups (AG) span across multiple domains and on workgroup nodes. This in some cases is motivated by your desire to move away from database mirroring. You have described how:

  • Your enterprise needs to operate with multiple domains due to events such as mergers and acquisitions.
  • You would like to consolidate multiple replicas from multiple sources to a single destination.
  • You have AG replicas not in a domain.
  • You wish to address deployment complexity and the dependence of your DBA administrators on the owners of your Active Directory infrastructure.

2.    Today there are thousands of SQL Server production deployments on Azure IaaS Virtual Machine (VM) environments. You love the flexibility of Azure, but these deployments require you to deploy two additional VMs for redundant DCs. You would like to avoid this requirement and reduce the deployment cost of your solution.

3.    You love Hybrid deployments, where some replicas are running in Azure VMs and other replicas are running on-premises for cross-site disaster recovery. However, all replicas are required to be in the same domain. This is a deployment burden for you. More details about this deployment model can be found here.

Hyper-V and File Server workloads

You would like to be able to deploy Hyper-V and File Server clusters without the cost and complexity of configuring a domain infrastructure for the following scenarios:

  • Small deployments
  • Branch office
  • DMZ deployment outside firewall
  • Highly secure deployments (domain-joined is considered a security weakness in highly secure environments)
  • Test and development environments

In Windows Server 2016, we have addressed your SQL Server workload scenarios, end-to-end! We continue to strive to light up your Hyper-V and File Server workload scenarios for subsequent Windows Server releases. Hyper-V live migration and File Server have a dependency on Kerberos, which currently remains unaddressed in Windows Server 2016.

Domain-independent clusters

Windows Server 2016 breaks down domain barriers and introduces the ability to create a Failover Cluster without domain dependencies. Failover Clusters can now therefore be created in the following configurations:

  • Single-domain clusters: Clusters with all nodes joined to the same domain.

  • Workgroup clusters: Clusters with nodes that are member servers/workgroup (not domain-joined).

  • Multi-domain clusters: Clusters with nodes that are members of different domains.

  • Workgroup and domain clusters: Clusters with nodes that are members of domains and nodes that are member servers/workgroup.

Creating domain-independent clusters

All the options to create “traditional” clusters are still applicable for domain-independent clusters in Windows Server 2016. To try this new feature in Windows Server 2016, download the Technical Preview. For additional details, see the feature Cluster blog here. Some of the options to create domain-independent clusters are:

1. Using Failover Cluster Manager

2. Using Microsoft PowerShell©

New-Cluster -Name <Cluster Name> -Node <Node1, Node2, ...> -AdministrativeAccessPoint DNS
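For example, a minimal sketch with placeholder node names, assuming the prerequisites for a workgroup cluster (a common local administrator account and a primary DNS suffix on each node) are already in place:

New-Cluster -Name MyWorkgroupCluster -Node Node1, Node2 -AdministrativeAccessPoint DNS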

Check out the series:

Reverse-engineering DUBNIUM’s Flash-targeting exploit

$
0
0

The DUBNIUM campaign in December involved one exploit in-the-wild that affected Adobe Flash Player. In this blog, we’re going to examine the technical details of the exploit that targeted vulnerability CVE-2015-8651. For more details on this vulnerability, see Adobe Security Bulletin APSB16-01.

Note that Microsoft Edge on Windows 10 was protected from this attack due to the mitigations introduced into the browser.

 

Vulnerability exploitation

Adobe Flash Player version checks

The nature of the vulnerability is an integer overflow, and the exploit code has quite extensive subroutines in it. It tries to cover versions of the player from 11.x to the most recent version at the time of the campaign, 20.0.0.235.

The earliest version of Adobe Flash Player 11.x was released in October 2011 (11.0.1.152) and the last version of Adobe Flash Player 10.x was released in June 2013 (10.3.183.90). This doesn’t necessarily mean the exploit existed since 2011 or 2013, but it again demonstrates the broad range of versions the exploit tries to cover.

Figure 1 Version check for oldest Flash Player the exploit targets

We focused our analysis mainly on the function named qeiofdsa, as that routine covers every Adobe Flash Player version since 19.0.0.185 (released on September 21, 2015).

Figure 2 Version check for latest Flash Player the exploit supports

Why is this version of Flash Player so important? Because it is the release that had the latest Vector length corruption hardening applied at the time of the incident. The original Vector length hardening came with 18.0.0.209, and it is well explained in the Security @ Adobe blog https://blogs.adobe.com/security/2015/12/community-collaboration-enhances-flash.html.

The Vector object from Adobe Flash Player can be used as a corruption target to acquire read or write (RW) primitives.

This object has a very simple structure and predictable allocation patterns, without any sanity checks on the object itself, which made it a very popular exploitation target in recent years. A few more bypasses were found after that hardening, and 19.0.0.185 added another bypass hardening. Starting with this version of Adobe Flash Player, the exploit uses a new exploitation method: ByteArray length corruption.

Note, however, that with new mitigation from Adobe released after this incident, the ByteArray length corruption method no longer works.

To better understand the impact of the mitigations on attacker patterns, we compared exploit code line counts for the pdfsajoe routine, which exploits Adobe Flash Player versions earlier than 19.0.0.185, to the qeiofdsa routine, which exploits versions after 19.0.0.185. We learned that pdfsajoe has 139 lines of code versus qeiofdsa with 5,021.

While there is no absolute way to measure the impact, and line count alone is not a standard measurement, we know that in order to target the newer versions of Adobe Flash Player, the attacker had to write roughly 36 times as many lines of code (5,021 / 139 ≈ 36).

Subroutine name                   pdfsajoe                       qeiofdsa
Vulnerable Flash Player version   Below 19.0.0.185               19.0.0.185 and up
Mitigations                       No latest Vector mitigations   Latest Vector mitigations applied
Lines of attack code              139 lines                      5,021 lines
Ratio                             1                              36

Table 1 Before and after Vector mitigation

 

This tells us a lot about the importance of mitigation and the increasing cost of exploit code development. Mitigation in itself doesn’t fix existing vulnerabilities, but it definitely raises the bar for exploits.

 

Heap spraying and vulnerability triggering

The exploit relies heavily on heap spraying. Among the various objects that are sprayed, Figure 3 shows the code where the ByteArray objects are sprayed. Each of these ByteArrays has a length of 0x10. The sprayed objects are the corruption targets.

Figure 3 Heap-spraying code

The vulnerability lies in the implementation of the fast memory opcodes. More detailed information on the usage of fast memory opcodes is available in the Faster byte array operations with ASC2 article at the Adobe Developer Center.

After setting up application domain memory, the code can use avm2.intrinsics.memory. The package provides various methods, including the li32 and si32 instructions. li32 can be used to load 32-bit integer values from fast memory, and si32 can be used to store 32-bit integer values to fast memory. These functions are used as methods, but at the AVM2 bytecode level they are opcodes themselves.

Figure 4 Setting up application domain memory

Due to the way these instructions are implemented, an out-of-bounds access vulnerability arises (Figure 5). The key to this vulnerability is the second li32 statement immediately following the first one in each IF statement. For example, in the li32((_local_4+0x7FEDFFD8)) statement, the _local_4+0x7FEDFFD8 value ends up as 4 after integer overflow. At the just-in-time (JIT) level, the range check is generated only for this li32 statement, skipping the range-check JIT code for the first li32 statement.

Figure 5 Out-of-bounds access code using li32 instructions

We compared the bytecode-level AVM2 instructions with the low-level x86 JIT instructions. Figure 6 shows the comparison and our findings. Essentially, two li32 accesses are made, and the JIT compiler optimizes the length check for both li32 instructions into a single check. The problem is that an integer overflow occurs, the length check becomes faulty, and the ByteArray length restriction can be bypassed. This directly results in out-of-bounds RW access to the process memory. Historically, the fast memory implementation has suffered range-check vulnerabilities (CVE-2013-5330, CVE-2014-0497). The Virus Bulletin 2014 paper by Chun Feng and Elia Florio, Ubiquitous Flash, ubiquitous exploits, ubiquitous mitigation (PDF download), provides more details on other old but similar vulnerabilities.

Figure 6 Length check confusion

Using this out-of-bounds vulnerability, the exploit tries to locate heap-sprayed objects.

Figure 7 shows the last part of the memory-sweeping code. We counted 95 IF/ELSE statements that sweep through the memory range from ba+0x121028 to ba+0x17F028 (where ba is the base address of fast memory), which is 0x5E000 (385,024) bytes. These memory ranges are therefore critical to a successful run of the exploit.

Figure 7 End of memory sweeping code

Figure 8 shows a crash point where the heap spraying fails. The exploit heavily relies on a specific heap layout for successful exploitation, and the need for heap spraying is one element that makes this exploit unreliable.

Figure 8 Out-of-bounds memory access

The exploit corrupts a ByteArray.length field and uses the corrupted ByteArray as its RW primitive (Figure 9).

Figure 9 Instruction si32 is used to corrupt ByteArray.length field

After the ByteArray.length corruption, the exploit needs to determine which of the sprayed ByteArrays is the corrupted one (Figure 10).

 

Figure 10 Determining corrupt ByteArray

(This blog is continued on the next page)

Comparing Windows Server 2016 Nano TP5 Provisioning Time to GUI/Core Installation Options


Foreword

Hello again from Prague! Jaromir here, and in this post I’ll show you how you can play with ws2016lab to measure the I/O consumption of each server installation option.

I decided to compare the Nano Server IO footprint to the other, traditional GUI/Core installations. Why? I just wanted to demonstrate that having unnecessary components is something you really don’t want, because it means not only a bigger security surface, but also significantly higher IO overhead.

Why should you care? Let’s say you would like to deploy a new application – therefore you need to provision a new server. It’s not only the provisioning time you should be concerned about, but also how much you will slow down production while the new server is spinning up.

In this blog post I’ll be comparing first-boot IO of domain-joined and offline-patched servers (one Cumulative Update for Windows Server 2016 TP5 and two Cumulative Updates for Windows Server 2012 R2). You will see how many megabytes each installation option consumes from your storage during its very first boot.

Scenario

I’ll be using my ws2016lab (https://github.com/Microsoft/ws2016lab) with the following configuration.

$LabConfig=@{ AdminPassword='LS1setup!'; DomainAdminName='Jaromirk'; Prefix='PerfTest-'; SecureBoot='ON'; CreateClientParent='No'; DCEdition='ServerDataCenter' }

$NetworkConfig=@{ SwitchName='LabSwitch'; StorageNet1='172.16.1.'; StorageNet2='172.16.2.' }

$LAbVMs = @()

$LAbVMs += @{ VMName='Win2016'      ; Configuration='Simple' ; ParentVHD='Win2016_G2.vhdx'       ; MemoryStartupBytes=1GB }

$LAbVMs += @{ VMName='Win2016_Core' ; Configuration='Simple' ; ParentVHD='Win2016Core_G2.vhdx'   ; MemoryStartupBytes=1GB }

$LAbVMs += @{ VMName='Win2016_Nano' ; Configuration='Simple' ; ParentVHD='Win2016Nano_G2.vhdx'   ; MemoryStartupBytes=1GB }

$LAbVMs += @{ VMName='Win2012'      ; Configuration='Simple' ; ParentVHD='Win2012r2_G2.vhdx'     ; MemoryStartupBytes=1GB ; Win2012Djoin='Yes' }

$LAbVMs += @{ VMName='Win2012_Core' ; Configuration='Simple' ; ParentVHD='Win2012r2Core_G2.vhdx' ; MemoryStartupBytes=1GB ; Win2012Djoin='Yes' }

ws2016lab does not create Windows Server 2016 Full, Windows Server 2012 R2, or Windows Server 2012 R2 Core images, so I’ll create them manually using the following commands and the convert-windowsimage.ps1 tool (https://github.com/Microsoft/Virtualization-Documentation/tree/master/hyperv-tools/Convert-WindowsImage).

#loading convert-windowsimage into the memory (notice . in front of command)

. .\Tools\convert-windowsimage.ps1

#win 2012r2

Convert-WindowsImage -SourcePath X:\sources\install.wim -DiskLayout UEFI -VHDPath .\ParentDisks\win2012r2_G2.vhdx `
-Edition datacenter -SizeBytes 40GB -Package .\Packages\Windows8.1-KB2919355-x64.msu,.\Packages\Windows8.1-KB3156418-x64.msu

#win 2012r2 core

Convert-WindowsImage -SourcePath X:\sources\install.wim -DiskLayout UEFI -VHDPath .\ParentDisks\win2012r2Core_G2.vhdx `
-Edition datacentercore -SizeBytes 40GB -Package .\Packages\Windows8.1-KB2919355-x64.msu,.\Packages\Windows8.1-KB3156418-x64.msu

#win 2016

Convert-WindowsImage -SourcePath x:\sources\install.wim -DiskLayout UEFI -VHDPath .\ParentDisks\win2016_G2.vhdx `
-Edition datacenter -SizeBytes 40GB -Package .\Packages\AMD64-all-windows10.0-kb3158987-x64_6b363d8ecc6ac98ca26396daf231017a258bfc94.msu

You can see that each image is using the following update rollups.

Windows 2012R2: https://support.microsoft.com/en-us/kb/2919355 + https://support.microsoft.com/en-us/kb/3156418

Windows 2016: http://support.microsoft.com/kb/3158987

In this screenshot you can see all vhdx files in the parent disks folder. Big size differences, right? And yes, I was writing this blog for you late at night.

Note: Nano server contains all of these packages:

Microsoft-NanoServer-DSC-Package.cab

Microsoft-NanoServer-FailoverCluster-Package.cab

Microsoft-NanoServer-Guest-Package.cab

Microsoft-NanoServer-Storage-Package.cab

Microsoft-NanoServer-SCVMM-Package.cab

All servers are hydrated as VMs on my laptop, on my 1 TB Samsung 840 EVO SSD.

Hydrated machines ready to be started

Msinfo32

Measuring IO overhead

For measuring IO overhead, I’ll be using Windows Performance Recorder and Windows Performance Analyzer – tools that you can download and install for free as part of the Windows ADK. For each VM, I’ll start recording, boot the VM, wait for the logon screen to appear, and then wait an additional 60 seconds just to let things settle down.
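A minimal sketch of this recording workflow from an elevated PowerShell prompt, assuming the built-in DiskIO profile of wpr.exe (the Windows Performance Recorder command-line tool); the VM name and trace path are illustrative:

wpr -start DiskIO -filemode                        # begin recording disk IO to a trace file
Start-VM -Name 'Win2016_Nano'                      # boot the VM under test
Start-Sleep -Seconds 120                           # wait for the logon screen plus roughly 60 seconds of settle time
wpr -stop C:\Traces\Win2016_Nano_FirstBoot.etl     # stop recording and save the trace for analysis in WPA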

Windows Performance Recorder settings:

Results

For displaying results I’m using Windows Performance Analyzer. Again – it’s free and part of the Windows ADK.

Windows Server 2016 Nano

From the screenshot below you can see that IO settled down after 9 seconds. Total reads were 156 MB and Total Writes 135 MB. Machine did not reboot at all.

Windows Server 2016 Core

From the screenshot below you can see that IO settled down after 85 seconds. Total reads were 2,304 MB and Total Writes 1,170 MB. Machine rebooted once.

Windows Server 2016 Full

From the screenshot below you can see that IO settled down after 135 seconds. Total reads were 2,533 MB and Total Writes 2,089 MB. Machine rebooted once.

Windows Server 2012 Core

From the screenshot below you can see that IO settled down after 82 seconds. Total reads were 1,618 MB and Total Writes 1,426 MB. Machine rebooted once.

Windows Server 2012 Full

From the screenshot below you can see that IO settled down after 84 seconds. Total reads were 1,694 MB and Total Writes 1,453 MB. Machine rebooted once.

Summary

Results table: the Degradation column shows how many times bigger each result is compared to Nano Server.

Server       Boot Time (s)   Boot Degradation   Read (MB)   Read Degradation   Write (MB)   Write Degradation   Total (MB)   Total IO Degradation
2016 Nano    9               1.0                156         1.0                135          1.0                 290          1.0
2016 Core    85              9.4                2,304       14.8               1,170        8.7                 3,474        12.0
2016 Full    135             15.0               2,533       16.3               2,089        15.5                4,622        15.9
2012 Core    82              9.1                1,618       10.4               1,426        10.6                3,044        10.5
2012 Full    84              9.3                1,694       10.9               1,453        10.8                3,147        10.8

Bonus

Memory Demand

Running Services

Wrap-up

As you can see, there are significant differences between the installation options in terms of boot time and read/write megabytes. Now imagine you have to deploy dozens of servers. Sure, you can have caching that improves things a lot, but still… If you want a GUI, use a client with RSAT: https://www.microsoft.com/en-us/download/details.aspx?id=45520

The results are illustrative, as I did not do a proper lab measurement (I don’t have an extra machine), I did not repeat the measurements (it’s really late here), and I did not calculate the uncertainty (it’s been a long time since I finished university, so I don’t remember the formulas anymore). Anyway – with all this info you can repro it on your laptop, measure it yourself, and ping me with your results!

Do you want to see what other stuff I can do with the ws2016lab tool? Let me know in the comments.

Cheers!

Jaromirk@msft


Configuration Manager as a Managed Installer with Windows 10


Introduction

Windows 10 introduced a new set of features called Device Guard that helps enterprises protect their business critical machines against malware and other unwanted software. Key amongst these is a new application and software whitelisting technology known as configurable code integrity that, together with AppLocker, enables enterprises to strongly control what is allowed to run in their environment.

Like all whitelisting solutions, configurable code integrity and AppLocker policies can be complex to set up and difficult to maintain, particularly for enterprises whose software catalogs are large, ever-changing, and include applications from a variety of internal and 3rd-party software developers. Enter the concept of the Managed Installer.

As of Windows 10 Enterprise Anniversary Edition, administrators can configure a new type of AppLocker rule that identifies a specific trusted installation authority, or Managed Installer. Any applications or other software (executables and .dll’s) that are installed by that specified installation authority will be automatically trusted by AppLocker and allowed to run without needing to create any other rules. Applications and software that are installed using any other mechanism will not pass the Managed Installer rule and will only run if explicitly allowed by another AppLocker rule. This will drastically reduce the overhead required to maintain whitelisting policy when deploying applications and software to systems protected by Windows AppLocker.

Managed Installer functionality is still in a prototype phase at the moment and does not yet have any associated user interface screens within Windows. However, thanks to collaboration between the ConfigMgr and Windows engineering teams it can be set up today and tested in any environment on machines with the ConfigMgr 1606 Technical Preview client that are running Windows 10 Enterprise with Windows Insider Program build 14367 or later with some caveats explained below. The Windows version and ConfigMgr client version are the only two prerequisites for this functionality. As noted, Managed Installer functionality currently only applies to AppLocker, but the Windows engineering team intends to integrate the functionality with Device Guard’s configurable code integrity feature in a later release. The remainder of this blog will provide detailed instructions on how clients can leverage this new functionality.

For additional reading about Device Guard and AppLocker, please consult the following resources:

Device Guard Documentation

Device Guard Deployment Guide

AppLocker Documentation

Blog: Managing Device Guard Configurable Code Integrity with existing ConfigMgr functionality

Creating the Custom AppLocker Policy

Creating an AppLocker policy that contains a Managed Installer is most easily done by starting in the Local Security Policy snap-in in Microsoft Management Console (MMC) and then moving to the XML editor of your choice. This can be done with similar workflows on any recent version of Windows, but in this example a Windows 10 client is used.

  1. From the Windows Start menu, type “secpol.msc” and then press enter to launch the MMC snap-in. Once the console opens, navigate to Application Control Policies > AppLocker > Executable Rules.
    SCCM_DeviceGuard1
  2. Right click Executable Rules and create a new rule that allows “Everyone” to run CCMExec.exe based on a condition of your choice. For this example, a File Path condition has been selected (this is the least secure option but it should allow readers to copy the policy used here for basic testing).
    SCCM_DeviceGuard2
  3. Once the rule has been created, it will appear in the console. Now, export the policy XML for editing. Right-click AppLocker in the navigation pane and select Export Policy…, highlighted below. (A PowerShell alternative to this export step is sketched after this list.)
    SCCM_DeviceGuard3
    The exported policy XML will look similar to the example below. The three default rules are present, as well as the new file hash rule for CCMExec.exe, highlighted in yellow.
    SCCM_DeviceGuard4
  4. Next, duplicate the entire EXE rule collection via copy-paste, and then remove the default rules in the duplicate version. The original CCMExec.exe file hash rule in the first rule collection can also be deleted at this point. Change the value of the Type attribute on the new rule collection to “ManagedInstaller”. What remains is a new Rule Collection of type “ManagedInstaller” and an EXE rule collection that contains only the original (in this case default) rules.
    SCCM_DeviceGuard5
  5. Now that the Managed Installer rule collection has been created, the Services Enforcement extension that was introduced in the first release of Windows 10 must be added. To add the extension, which allows AppLocker policies to be enforced against Windows services, paste the extension element into your policy inside the EXE rule collection. You can see the result highlighted in green in the screenshot below.

    SCCM_DeviceGuard6

  6. Finally, select the Enforcement mode for the EXE and Managed Installer rule collections. The possible options are “NotConfigured”, “AuditOnly”, or “Enabled”. They have the following significance:
    • NotConfigured – No enforcement or auditing occurs.
    • AuditOnly – Applications and executables are not blocked from running by AppLocker, but logging occurs in the client event logs (visible in Event Viewer under Applications and Services Logs > Microsoft > Windows > AppLocker) whenever an application or executable is allowed to run or would have been blocked if enforcement mode had been enabled. Note: Logging for Managed Installer rules is shared with the logging for EXE rules. The only visible difference between the EXE rule log entries and the Managed Installer log entries, both found in Event Viewer under AppLocker > EXE and DLL, is that the rule type is specified in the Details tab in the information pane when a given log entry is selected.
    • Enabled – Applications and executables in violation of the AppLocker policies are blocked from running.

    The recommended way of configuring AppLocker is to set up your policy with the enforcement mode set to AuditOnly first, and then examine the event logs on the client machine to assess whether the policy is working correctly. Once the correctness of the policy has been adequately verified, the enforcement mode can be changed to Enabled. Extreme care should be taken when working with AppLocker policies because, if they are configured incorrectly, they can cause severe instability on affected machines.

    To complete this example, the policy enforcement mode will be changed to AuditOnly in this case. The change is highlighted in blue.

    With this final change the policy is ready to be saved and subsequently deployed. Once the policy has been validated and client event logs appear to be exhibiting the desired behavior, then the values of EnforcementMode highlighted below in blue can be changed to Enabled to enforce the new AppLocker policy (the policy must also be redeployed for the changes to take effect).
    SCCM_DeviceGuard7
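As an alternative to the MMC export in step 3 above, the current local AppLocker policy can also be exported with PowerShell; a minimal sketch (the output file name is illustrative):

Get-AppLockerPolicy -Local -Xml | Out-File .\LocalAppLockerPolicy.xml -Encoding UTF8    # exports the effective local policy as XML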

Configuring Client Devices

Four steps are required to configure clients to treat ConfigMgr as a Managed Installer. These can be accomplished via Group Policy, or by using ConfigMgr’s configuration items, programs, or task sequences together with PowerShell. In this example a short PowerShell script is used, and it can be deployed in a package containing both the script and the AppLocker policy XML file. The script must be run with administrative privileges to have the desired result. Note that these commands can be run from any folder, except for the step that sets the AppLocker policy, which needs to be run from the folder where the policy XML file is located.

  1. Start Windows Application Identity services
    The PowerShell command to accomplish this is as follows:
    PS C:\WINDOWS\system32> AppIdtel start
  2. Create a custom DWORD in the client registry
    To configure the ConfigMgr client to behave as a Managed Installer, the following registry DWORD must be added with a value of “1”.
    HKLM\SOFTWARE\Microsoft\CCM\EnableManagedInstaller

    This mechanism for changing the client behavior is subject to change in subsequent releases once this functionality has its own Configuration Manager Console user interface screen. This can be accomplished using reg.exe, which can be executed from PowerShell as follows:

    PS C:\WINDOWS\system32> reg.exe add HKLM\SOFTWARE\Microsoft\CCM /v EnableManagedInstaller /t REG_DWORD /d "1" /f
  3. Deploy the custom AppLocker policy that was created above
    AppLocker policies are often deployed via Group Policy, but in this example the policy will be applied using one of the AppLocker PowerShell cmdlets to apply policy from the policy XML file distributed in the same package as the script. The PowerShell command for this is:
    PS C:\WINDOWS\system32> Set-AppLockerPolicy -XmlPolicy AuditPolicy.xml
  4. Restart the client SMS Agent Host service (CCMExec), or restart the device
    The final step to configure clients is to restart the CCMExec service, which can be accomplished by executing the net.exe command from PowerShell as follows:
    PS C:\WINDOWS\system32> net stop ccmexec
    PS C:\WINDOWS\system32> net start ccmexec

These four sets of commands can be combined into a simple PowerShell script by copying the lines from above into a text file and naming the file with a .ps1 file extension. The resulting script looks like the below.

AppIdtel start
reg.exe add HKLM\SOFTWARE\Microsoft\CCM /v EnableManagedInstaller /t REG_DWORD /d "1" /f
Set-AppLockerPolicy -XmlPolicy AuditPolicy.xml
net stop ccmexec
net start ccmexec

The above should be saved to a .ps1 PowerShell script file, and that file can then be distributed along with the policy XML file created above and run on clients using a required package and program. Clients that have run the script will treat ConfigMgr as a Managed Installer. At the time of writing, when using Windows Insider Preview build 14367, packages and programs deployed from ConfigMgr 1606 Technical Preview with programs set to run with administrative privileges will be trusted automatically. All other deployments from the same ConfigMgr version will be automatically trusted by clients running an upcoming Windows Insider Program Fast Ring build. This blog will be updated to reflect this upon the release of that build of Windows. To validate the policy once it has been deployed, normal application, update, and package deployments should be made to the clients (taking the aforementioned caveats into consideration), and then the local client event logs should be examined to ensure that no trusted software is in violation of both the EXE and Managed Installer AppLocker rule collections; software that is allowed by at least one of these rules will be allowed to run. Once the policy has been validated, the AppLocker policy should be edited so that EnforcementMode is set to “Enabled”, and then the AppLocker policy deployment step (and only this step) should be re-run to update the policy on the client.
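A minimal sketch of examining those client event logs from PowerShell, assuming the standard AppLocker EXE and DLL log (event IDs 8003 and 8004 are the usual audit and block entries):

Get-WinEvent -LogName 'Microsoft-Windows-AppLocker/EXE and DLL' -MaxEvents 100 |
    Where-Object { $_.Id -in 8003, 8004 } |      # 8003 = would have been blocked (AuditOnly), 8004 = blocked (Enabled)
    Format-Table TimeCreated, Id, Message -AutoSize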

Once this is complete, the original goal has been accomplished! The client has been locked down, and only existing software and new software deployed from ConfigMgr will be allowed to run on the client device.

Let us know what you think about the Managed Installer functionality with Configuration Manager Technical Preview. To provide feedback or report any issues with the functionality included in this Technical Preview, please use Connect. If there’s a new feature or enhancement you want us to consider including in future updates, please use the Configuration Manager UserVoice site.

Thanks,

Dune Desormeaux

Configuration Manager Resources:

Documentation for System Center Configuration Manager Technical Previews
Documentation for System Center Configuration Manager
System Center Configuration Manager Forums
System Center Configuration Manager Support
System Center Configuration Manager Technical Preview 5 (v1603)


Update 1606 for Configuration Manager Technical Preview – Available Now!


Hello everyone! Update 1606 for Configuration Manager Technical Preview has been released. New and improved features in this update include:

  • ConfigMgr as a managed installer with Device Guard: You can now configure clients so that ConfigMgr-deployed software is automatically trusted, but software from other sources is not. Read more in this blog post.
  • Cloud Proxy Service: This technical preview provides a simple way to manage ConfigMgr clients on the Internet. The Cloud Proxy Service, which is deployed to Microsoft Azure and requires an Azure subscription, connects to your on-premises ConfigMgr infrastructure using a new role called the cloud proxy connector point. You can use the ConfigMgr console to deploy the service to Azure and configure the supported roles to allow cloud proxy traffic.
  • Grace period for application and software update deployments: You are now able to give users a grace period to install required applications or software updates beyond any deadlines you configured. This can be useful when a computer has been turned off for an extended period of time, such as when an end user has just returned from vacation.
  • Multiple device management points for Windows 10 Anniversary Edition devices: On-premises Mobile Device Management (MDM) supports a new capability in Windows 10 Anniversary Edition that automatically configures an enrolled device to have more than one device management point available for use. This capability allows the device to fall back to another device management point when the one it was using is not available.

This release also includes the following new feature for customers using System Center Configuration Manager connected with Microsoft Intune to manage mobile devices:

  • Device categories: You can create device categories, which can be used to automatically place devices in device collections when used in hybrid environments. Users are then required to choose a device category when they enroll a device in Intune.

Update 1606 for Technical Preview is available directly in the Configuration Manager console. If you want to install Configuration Manager Technical Preview for the first time, the installation bits (currently based on Technical Preview 1603) are available on TechNet Evaluation Center.

We would love to get your thoughts about the latest Technical Preview! To provide feedback or report any issues with the functionality included in this Technical Preview, please use Connect. If there’s a new feature or enhancement you want us to consider including in future updates, please use the Configuration Manager UserVoice site.

Thanks,

The System Center Configuration Manager team

Configuration Manager Resources:

Documentation for System Center Configuration Manager Technical Previews
Documentation for System Center Configuration Manager
System Center Configuration Manager Forums
System Center Configuration Manager Support
System Center Configuration Manager Technical Preview 5 (v1603)

Quick Survey: Windows File Server Usage and Pain Points


Hi all,

We need your input to help us prioritize our future investments for File Server scenarios. We’ve created a short 5 question survey to better understand File Server usage and pain points. Any feedback is appreciated.

https://www.surveymonkey.com/r/C3MFT6Q

Thanks,

Jeff

The Windows Server 2016 Application Platform – Nano Server, Containers and DevOps


This post was authored by Andrew Mason, Principal Program Manager on the Nano Server team.

There has been a lot of press on Nano Server and Containers as new technologies coming in Windows Server 2016. In this blog post we’ll discuss how these two technologies are core pieces of the Windows Server 2016 developer/DevOps solution, discuss the full inbox stack, and provide links to additional resources to get you started. In addition to this post, at Build 2016, Taylor Brown and I presented a session with demos on this topic. The recording is available on Channel9.

Windows Server development projects generally follow a common set of tenets, with a corresponding set of artifacts for the resulting app or service. These are develop, package, configure, deploy, run, test, and secure. For each of these there is a common set of best practices:

  • Develop: Minimize your dependencies so you can run on the smallest OS configuration possible.
  • Package: Know your dependencies so that at deployment time ops can easily deploy on the smallest OS configuration possible. If ops has to guess at the configuration needed, the default will likely be the largest configuration so they don’t have to worry about hitting a missing dependency in production.
  • Configure: Use intent based configuration to avoid the need for special configuration, scripts, tools, etc., in order to get the OS and app or service properly configured.
  • Deploy: Use modular, componentized deployments expressing your dependencies rather than rolling them into a large monolithic install that deploys outdated components.
  • Run: Use physical hosts, guest VMs, or containers to run your app or service.
  • Test: Use unit tests to ensure quality.
  • Secure: Don’t let security be an afterthought or add-on, ensure security is part of your app or service from the beginning.

The challenge has been that in previous releases of Windows Server, there was no clear choice, guidelines, or even opinion on how to accomplish these and what artifacts you should produce.

For example, for packaging you could use an MSI, xcopy deployment, WebPI, and so on; for configuration you could use a config file (in a variety of formats), registry entries, a custom database, or a binary file like the IIS metabase in IIS 6 and earlier.

With Windows Server 2016 we have a clear point of view for developers and operators, using two models:

  • Traditional ops model
  • Emerging model with Containers

With these, Windows Server 2016 resolves the interface between devs and ops.

Let’s look at the Traditional model first, where for the first time Windows Server 2016 provides inbox solutions across each area:

In addition to developing your app or service code, you need to produce the artifacts necessary to use the above solutions so that ops can take advantage of the benefits these provide as well as have consistency across apps and services.
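For the Configure tenet, for example, intent-based configuration is typically expressed with PowerShell Desired State Configuration (DSC). A minimal sketch, assuming the in-box PSDesiredStateConfiguration module; the configuration name, feature, and output path are illustrative:

Configuration WebServerIntent {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Node 'localhost' {
        # Declare the intent: IIS must be present; DSC works out how to get there
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}
WebServerIntent -OutputPath C:\DscConfigs                    # compile the configuration to a MOF
Start-DscConfiguration -Path C:\DscConfigs -Wait -Verbose    # apply the declared state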

As noted above under Run, these concepts apply to containers as well if you plan to run them using traditional ops models. For example, if you plan to make your app or service available for your customers to deploy and run on physical hosts, guest VMs, or containers, then WSA is the correct packaging solution to use.

However, for apps or services that are fully container based, there is a container only model that can be used as well:

  • Develop apps using your favorite Framework supported on Nano Server so you can use Nano Server as the base of your container.
  • Package apps as Container Images pushed to repositories.
  • Configure apps using Container Images.
  • Deploy container images from repositories.
  • Run containers through orchestrators.
  • Test apps using your test frameworks.
  • Secure apps using multiple containers and JEA.

With the container model you can leverage the capabilities of the container infrastructure and integrate the artifacts you need for your app or service directly into your containers.
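As a rough sketch of that container-only flow, assuming the Docker engine on Windows Server 2016 with the Nano Server base image; the image, registry, and container names are illustrative:

docker pull microsoft/nanoserver                               # use Nano Server as the base of the container
docker build -t myregistry.local/myservice:1.0 .               # package the app as a container image (Dockerfile assumed)
docker push myregistry.local/myservice:1.0                     # publish the image to a repository
docker run -d --name myservice myregistry.local/myservice:1.0  # run the container (an orchestrator would normally schedule this)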

Windows Server 2016 resolves the interface between devs and ops by providing both a traditional and container model with prescribed solutions and artifacts for you to achieve the best practices for your app or service. As mentioned above, the traditional model can be applied across physical, guest, or containers, providing the flexibility for you or your customers to run your app or service in any configuration. If your app or service will only be delivered as a container and it will be managed fully using the container model, then you can use only container artifacts. Which set of artifacts you deliver with your app or service will depend on which model you and your customers prefer.

Cumulative Update 2 for System Center 2016 Virtual Machine Manager Technical Preview 5 is now available


Cumulative Update 2 (CU2) for Microsoft System Center 2016 Virtual Machine Manager Technical Preview 5 is now available. There are two updates available for Cumulative Update 2 for System Center 2016 Virtual Machine Manager Technical Preview 5: An update for VMM Server and an update for the Administrator console.

For a complete list of scenarios enabled, issues fixed, known problems as well as download and installation instructions, please see the following:

3160164: Cumulative Update 2 for System Center 2016 Virtual Machine Manager Technical Preview 5 (https://support.microsoft.com/en-us/kb/3160164)


J.C. Hornbeck, Solution Asset PM
Microsoft Enterprise Cloud Group

The Endpoint Zone, Episode 15: How Avanade uses Intune MAM


This might be one of the most interesting episodes of The Endpoint Zone in a really long time.

In episode 15, we spend 15 minutes talking with Joseph Paradi, from the consulting firm Avanade, about the expansive work his organization has done to deploy Intune MAM across the company — and the ways this has made the Avanade workforce more productive and more secure.  The move from the heavy control of MDM to the far lighter touch (requiring no enrollment of devices) with MAM has made a huge difference at Avanade — and Joseph has a great perspective on how to protect your organization’s most important assets and client data, while providing a great user experience that enables your workforce to stay productive.  You can skip ahead to Joseph’s section here.

 

 

Simon and I also talk more about how MAM enrollment works, how your organization can benefit from it, and a recent report from Gartner that gives a glimpse of how they see organizations changing.

 

In_The_Cloud_Logos
