
UIU Blog

But Windows 10 already has all the drivers I need – No, not really. (Again.)

With the release of each successive new Microsoft operating system, from the most recent Windows 8.1 going back to at least Windows 2000, Microsoft tries to tell us the same thing; namely that the operating system already contains all the required drivers for all hardware components on all machines, old and new.

Arguably, with Microsoft's release of each new operating system, new drivers are included in the inbox driver store, increasingly avoiding initial problems such as BSODs and other common first-run issues. Windows 10 promises to be no different.

But here's what you should know. The drivers that do get included in the inbox driver store (native to the OS) are by and large generic drivers or, at the very least, antiquated versions of drivers that are multiple revisions old. While these outmoded drivers may successfully install and control the individual hardware components, interoperability with the operating system and certainly feature richness for that hardware may be impacted or altogether missing. Furthermore, the inbox drivers generally include no access to customization software.
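If you're curious which drivers on a given machine came from the inbox store versus a vendor package, DISM can enumerate them from an elevated command prompt. A quick sketch (the /All switch includes the default inbox drivers alongside any third-party additions):

REM List third-party (out-of-box) drivers only
DISM /Online /Get-Drivers

REM List the default inbox drivers as well
DISM /Online /Get-Drivers /All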

For example, the new Microsoft Surface includes a driver for the Intel HD 4000 graphics chipset which, in fact, is discovered and installed, allowing the device to operate. That said, there is a noticeable performance increase when using the official, up-to-date driver from Intel, including access to more features native to the hardware relating to frame rate and power-saving technology (improving battery life), which may be considered paramount to the usefulness of such a portable device.

Also, on many platforms including the Surface, there is a noticeable improvement to SSD disk access (random and/or sequential reads and writes) when using Intel's AHCI drivers (iastor) over Microsoft's AHCI drivers (msahci).
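Verifying which AHCI driver a machine is actually running is a quick check; for example, with the built-in driverquery tool (a minimal sketch; driver names vary by platform and Windows version):

REM Look for Intel (iastor) or Microsoft (msahci) AHCI drivers
driverquery /v | findstr /i "iastor msahci"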

We have found (since 2005) that, as time progresses past the release of a new OS and generally past each successive Service Pack, the quantity of drivers specific to a released OS/SP (and not included in the inbox driver store) grows substantially, up to and shortly following the end of Microsoft support for that OS.

Microsoft has consistently demonstrated no willingness to deliver updated drivers in a time-critical or comprehensive manner; instead, they have relied upon OEM PC manufacturers (new hardware releases/sales), component manufacturers (existing hardware), and third-party software (regular OS deployments) to provide updated, OS-specific drivers.

So overall, the native inbox driver store will in fact get the associated hardware to operate with rudimentary functionality. But such generic drivers will not allow the machine to take full advantage of the hardware with which they're associated, and they represent a potentially significant lost opportunity on a substantial capital business investment.

So, does Microsoft include all of the drivers you need?  Well, again, not really.



5 Steps to Switching OS Deployment Solutions
So, you're thinking about switching OS Deployment Solutions. Congratulations! You're in for a serious amount of work, not only in choosing a solution, but also in planning, configuring, organizing and executing the little monster. Sure, some solutions are easier than others, yet all solutions require much more effort than you think. If you're thinking, "What are you talking about? This stuff is easy…", I probably can't help you. You'll figure it out, I'm sure.

STEP 1 –
First, ask yourself, "Why am I doing this?" Seriously, unless you're hoping to solve a mission-critical problem with respect to OS deployment, it likely won't be worth it, regardless of the foundation of your reasons, be they political, financial, technical, talent-level or fear-based. Be honest and go into the process knowing the actual reasons; it'll help you most when you're attempting to choose between the various offerings out there. Side note: If the reason is purely political, you probably need to suck it up and get it done, and the rest of us out here in the Interwebs will feel appropriately bad for you. We know…

STEP 2 -
When choosing a new OS deployment solution, you'll need to know what your critical requirements are. What combination of features, available add-ons or plug-ins, integration within your environment, usability, security, delegation of responsibilities and learning curve (just to name a few) meets the needs not only of your organization, but also of your staff? Make a spreadsheet and fill it out, or find one on the web, or if you have an extraordinary memory for really boring details, make one in your head! (I don't recommend the latter…) Start by assuming that you're a noob and go from there. The biggest mistake we professionals often make is assuming that we know how everything works.

STEP 3 -
Once your new OS deployment solution is chosen, planning begins. Prepare for a lot of time to be spent making sure that you know all of the technical limitations of your chosen solution. These limitations may cause you to scrap an iteration of your plan (or several iterations). If you're considering the actual reasons that you chose the solution, you'll scrap a few as you learn the caveats. Whatever time you set aside for planning, a good rule of thumb is to double it, at least. Planning covers: solution installation (with required supporting network services such as WDS/PXE, multicasting switches, etc.), network infrastructure requirements and resources, target machine topology, technical staff training, and end-user training (where applicable).

Planning without action is a daydream. Action without planning is a nightmare. – Japanese proverb

STEP 4 -
Next comes configuring the solution. There are undoubtedly systemic parameters that must be verified, if not set, either within the solution or on the network resources that will support it (or both, likely). Organize the deployment methodologies. Determine which method or methods you'll use (e.g. PXE vs. bootable media) and exactly what hardware you'll be deploying to, neatly organized into interlaced groupings. Don't consider only the new hardware that you've got in your lab; also consider the hardware that's been deployed into your production environment, especially that ancient machine in accounting with specialized software on it, developed by one guy in his basement (long gone), and it just works, so we don't touch it. Perform the laborious task of discovering and researching the implications of EVERY parameter available in the solution (assuming that you didn't do so prior to making the choice), as these can often either save your backside - or kick it.

STEP 5 -
Finally, execute the solution, preferably in a test lab or at least in a segregated environment at first, work out the bugs (hello, forums…), and then pull the proverbial trigger when you're satisfied. If you were diligent, you'll undoubtedly be a proud and happy camper (after some beverages). If not, well… You know the drill.

 

Unsolicited Advice:

  • Choose an OS Deployment Solution that can handle Application package deployment as well.

  • Choose a solution that can allow you to create small, specialized groups on which unique or at least highly customized OS images may be employed.

  • Choose a solution that will meet the anticipated future growth needs of your organization.

  • Train your staff on the solution! Train the hell out of them!

  • Use PXE, it's awesome…

  • Open Source usually means no organized support effort. Factor that in…


If you considered this post to be overly alarming and elect to ignore it, good luck. If you agree with its basic tenets or elect to take it with a grain of salt, that's cool too. Either way, it has hopefully prompted you to consider a couple of things more seriously and my only hope is that it helps you in your decision somehow. You're welcome, and I'm sorry that there's no Easy button…



Drastically Improve Image Deployment for SCCM

If you work in a Microsoft System Center Configuration Manager (SCCM) environment, you are familiar with the major challenges you face during the deployment process within SCCM's native Operating System Deployment (OSD). First, you have to locate drivers for specific hardware components, then organize and package them. Next, you need to create a task sequence to advertise to a hardware-specific collection. If any errors present themselves along the way, you have to start again from square one.

On paper, it sounds easy, but in reality, when you consider how many different hardware configurations are scattered around your company, it quickly becomes an overwhelmingly complex and time-consuming process. It's a process that's necessary only because none of the native Microsoft tools features a driver database, which makes locating and managing the correct drivers a manual (and burdensome) process.

Streamline your deployment process with the UIU for SCCM

We can take the headache out of OSD through a fully integrated plug-in that safely and smoothly enhances and streamlines your existing SCCM 2007 or 2012 environment. Our Universal Imaging Utility allows SCCM administrators to easily advertise any UIU-configured task sequence to any collection of computers, regardless of manufacturer or model.

All you need to do is create a new task sequence (or modify an existing one) with the UIU Machine Configuration step, and you've completely eliminated driver packages from the process. During deployment, the UIU real-time discovery tool ascertains the onboard hardware, locates the correct drivers, and incorporates them with the image deployment, ensuring that only the most appropriate drivers are staged. By using only the latest and most appropriate drivers, the UIU makes sure that every machine boots properly after every deployment.

The driver database
Drivers are always the linchpin of any successful deployment, and a pain to manage. The UIU contains a fully vetted and updated driver database that validates and maintains over 2,000 business-class drivers and 40,000 Plug-and-Play IDs for supported Windows operating systems. And the database can be set up to update automatically. Because the UIU completely automates driver management, it eliminates the need for SCCM administrators to locate, manage, and package driver files.

The UIU enables IT departments to save considerable time and money by delivering a hardware independent image to any PC.

- Learn more about the UIU for SCCM or request a Free Trial



Considerations for Supporting Student PCs

As an educational institution in the age of ubiquitous computing, decisions need to be made regarding the technical support of any individual student's hardware and/or software. There are many factors that must be considered prior to establishing such a policy. These include, but are not limited to, hardware purchasing, standardization, support commitments (how far to troubleshoot before re-imaging), deployment and logistics - and let's not forget network security.


How do we support student PCs? Or shouldn't we?
As with everything else, there are reasons in favor and at least an equal quantity of reasons opposed. Without pretending to know all of the intricacies of every type of institution, or presuming to give advice on what to do, I'll simply offer up some points for consideration.

Assumptions:
  • All students have PCs/laptops.
  • All institutions offer some form of network-based computing resources, ranging from web pages (public or internal) to direct network connectivity.

Considerations:
  • Is PC computing required for curricula execution?
  • Is hardware supplied by/through the institution?
  • Is software supplied by/through the institution?

In the case where hardware is supplied by the institution, cost and tuition issues aside, efforts can (and should) be made to limit the selections of make/model and operating system. Uniformity enhances standardization, reduces security vulnerabilities, leverages purchasing discounts, and reduces logistics and support costs.

Student-supplied hardware could represent any make/model and could be in any state of vulnerability/security risk at any given time.

In the case where an institution recommends or requires students to supply their own hardware, the problems change face a bit. For example, a criterion that enters the equation is whether the institution insists on providing an OS/software image for the student computing population. Therein lie software (including OS) licensing issues as well as compatibility and version-control issues. That's a topic for another day…

How is this different from supporting PC Lab machines or staff PCs?

All student PCs are mobile (or at least we'll assume them to be). As student machines may be inaccessible to network administration services at any given time, for any reason, and without notice, we need to consider them effectively mobile even if they are traditional desktop/mini-tower models.

Simply put, PC Lab machines are not only under the control of some institutional entity, they are also static; they don't move around. Desktop policies can be set, physical access can be gained at will, and visual inspections can be performed with regularity. Staff machines, although they may be mobile, are still firmly under control and policies and standards apply, whereas student machines are very often an unknown and frequently present not only security vulnerabilities but also logistics and maintenance issues. How can the institution be sure that updates are regularly applied? How can the institution be sure that the machine is not infected with a digital pathogen?



If student-maintained machines are to be let anywhere near an institutional network, great care should be taken to mitigate threats, not just before they infect a network resource, but also after this has inevitably occurred. In addition to the standard anti-virus and anti-malware software, strategies such as the employment of honey pots or ghost armies can misdirect and distract would-be hackers, allowing more time to detect the intrusion and respond to the threat.

Let's face it, support of multifarious machines with questionable levels of security and significant limitations on institutional control is looming if not already upon us.
How do we protect our network resources from the risk of infected student machines?

Is standardization an option? If so, use it heavily. Mandate that all student PCs have anti-virus/anti-malware installed and updated in order to gain access to network resources. Ensure that all public or Lab PCs have AV/AM enabled on all removable devices. Keep AV/AM as well as operating system patches up-to-date!

As it's not a matter of if but when, have a lockdown protocol and practice it. Employ advanced misdirection strategies as discussed above. There are, or will soon be, both appliance and software implementations available to make this easier to put in place.

How about break/fix?
Some institutions provide break/fix services; I would wager, however, that most insist that students take care of their PC issues on their own. This, in my opinion, is strictly a liability/cost vs. service/value proposition. If the intention is to provide such a service on non-institutional machines, have the necessary waivers in place and, for the love of Homer, provide adequate training for your technicians.

That said, as institutions become more and more dependent upon PC computing to execute their curricula, they may be compelled to assist students with hardware/software problems in order to maximize the effectiveness of those curricula.

I hope that this is received as helpful and has provoked some thought. Please feel free to send constructive feedback.





Ghost Console Integration with the UIU

Latest UIU Tutorial Video - Ghost Console


If you are using Ghost Console to deploy your OS images, this brief instructional video will show you how to easily integrate the Universal Imaging Utility to deploy to any laptop or desktop in your environment.


 


The latest version of the UIU for Ghost also fully integrates with Symantec Ghost Solution Suite and Ghost Cast Server.

Read more information about the UIU for Ghost



Creating Offline Media with Microsoft Deployment Toolkit (MDT)

Sometimes you need to deploy a Windows OS to a PC or PCs that are not connected to a network, or to a PC that exists on a network where deployment services (PXE/network boot) are not provided. What do you do? Revert to old sneakernet procedures and suffer through the 45-60 minute manual setup which, due to its mundane nature, will inevitably result in a Windows configuration that is not entirely consistent? What if you have an entire office of machines that need to be imaged?

Well, most modern operating system imaging solutions include an off-network or offline option that will allow you to image PCs regardless of their network connectivity. Microsoft Deployment Toolkit (MDT) is no exception. But just like any other OS deployment solution, additional considerations must be made in order to optimize this process in your environment.

First, you need to determine what operating systems and applications need to be made available in the offline media package. Will you be using more than one version of Windows? What applications will the user(s) need? Which applications are you prepared to install manually (one-offs)? Be thorough.

Next, configure your MDT task sequence or sequences in a manner consistent with your network policies to ensure consistency in your environment. Now, if you simply create offline media with MDT, all applications will be included in the ISO. This can get large and unwieldy, making media creation inordinately lengthy and increasing the time required to image each individual PC. See the problem? There is a solution. Create sub-folders in the Deployment Share and copy appropriate content from the Deployment Share to each sub-folder created.

Next, create a New Selection Profile, choosing only the Applications and Operating Systems that are desired for the particular off-network deployment. Note that some plug-ins will require that you create a subfolder for their inclusion in the final, offline media build.

Then, when the offline media is built with MDT via Create New Media, simply mount the ISO, copy the contents of the designated Content folder to bootable, physical, removable media (USB is recommended due to the probable large size), and boot the necessary PCs. Let the MDT Task Sequence do its trick. I know I don't have to say it, but I will anyhow: make sure to test your MDT offline media prior to use in production.


Here's a sample procedure:


  1. The configured MDT environment must have the ability to deploy images (prerequisite).

  2. Prepare the desired offline deployment task sequence (according to network policies).




  3.    Create off-network media

    1. If the Deployment Share has multiple operating systems, applications, or packages, now is the time to create folders so that an optimized (slim) offline selection profile can be created. Copy appropriate content from the Deployment Share to each folder created. These folders may be selected individually to exclude unwanted applications from being copied to the off-network media.

    2. Under Advanced Configuration>Selection Profiles, create a New Selection Profile choosing only the Applications and Operating Systems that are desired for the particular off-network deployment.

      NOTE: Selecting more than what is necessary will result in larger than necessary ISO files and USB storage requirements.




    3. Once the selection profile has been created, right-click on Advanced Configuration>Media and Create New Media using the selection profile above. Remember the Save Location that you designate in this wizard for later use.




    4. Format the USB drive (recommended) using Diskpart.exe. This USB drive should be large enough to hold the entire media as defined by the Selection Profile used. (As a point of note, a bare-bones configuration including Win7 x64 and the off-network task sequence is approximately 3 GB in size.)

      1. On a computer running the Windows 7 operating system, insert the USB drive.

      2. From a command prompt, run Diskpart.exe

        1. Execute the command list disk to determine the disk number associated with the device. 
        2. Input the following commands, where N is the disk number identified in the previous step:
           - select disk N
           - clean
           - create partition primary
           - select partition 1
           - active
           - format fs=fat32 quick
           - assign
           - exit

          Note: UEFI systems will only boot from FAT32-formatted USB drives (substitute ntfs in the format command only if UEFI boot is not required). Diskpart.exe is a powerful utility and can cause damage to your system. Make sure to format the correct drive!

    5. After the media has been created, there will be a Content folder in the save location that was chosen in the Create New Media step above. Copy the contents of the Content folder to your freshly formatted USB drive, as in the example below.
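      For example, assuming the media was saved to D:\OfflineMedia and the USB drive is E: (both drive letters hypothetical), a single xcopy command preserves the folder structure:

        xcopy D:\OfflineMedia\Content\*.* E:\ /s /e /h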

  4. Test the bootable USB containing the off-network media for desired results before implementing the process in any production environment.

    In summary, although additional planning and configuration are required, Microsoft Deployment Toolkit is capable of deploying Windows operating systems and applications regardless of network connectivity or network boot services.


Hardware-Independent OS deployment with Dell KACE

Utilizing the Dell KACE K2000 Deployment Appliance to deploy Windows operating systems images gives admins a fully integrated systems provisioning solution.

The challenge with deploying an OS to disparate hardware still always comes down to device driver management. While KACE offers features like computer scanning and assessment, managing the unique drivers for each existing recipient machine, let alone new machines, is still a cumbersome process.

Below is a solution for deploying a single OS image with the Dell KACE K2000 Deployment appliance to any laptop or desktop regardless of manufacturer or model when used in conjunction with the Universal Imaging Utility.


1.    Create an Active Directory Domain Service Account for use with the UIU.
2.    Create a repository folder that is a share on a server that will be accessible, or on the same subnet as the KACE server from which you are deploying the images.
  • For example: \\Server.Domain.com\Repository
  • This is a folder on the server, shared with the name Repository.
  • Add the above Active Directory domain service account to the security settings of the Repository share with read and write rights.
Once the domain account is created, the shared folder is created, and the folder rights are assigned, open the Dell KACE web console of your appliance.

3.    Install the UIU 5 using the Domain Service Account and shared Repository folder.
4.    Open the web interface of the Dell KACE appliance and log in.



5.    Click on the Library Tab



6.    Click on the Postinstallation Tasks Tab




7.    Click on the Choose Action… Drop Down and select Add New BAT Script…



8.    Provide the Mid-Level Task with a name, be sure to select the Runtime Environment: K2000 Boot Environment (Windows), enter the following two command lines under BAT Script:, enter a description in the Notes: field, and click on the Save button.

Command Line 1:  Net Use \\Server.Domain.com\Repository Password /user:Domain\DomainServiceAccount

Command Line 2:  \\Server.Domain.com\Repository\x86\uiuprep.exe -run -license <UIU Product Key>

Reminder: Substitute real values for Server, Domain, Account, Password and UIU Product Key
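As a purely hypothetical illustration (the server, domain, account, and password below are invented placeholders), the two lines might look like:

Net Use \\FS01.corp.example.com\Repository P@ssw0rd /user:CORP\svc-uiu

\\FS01.corp.example.com\Repository\x86\uiuprep.exe -run -license <UIU Product Key>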




Once the Mid-Level Task is saved, add it to your Deployment Package.

9.    Click on the Deployments tab and choose the desired Deployment Package




10.    Scroll down to the Mid-Level Tasks, grab the UIU Task, and drag it under the Run Mid-Level Tasks field.
        
Please note: When you hover over and grab the task to drop it, your cursor will turn into a cross arrow as shown.



11.    Scroll down to the bottom of the screen and click on the Save Button




When the KACE deployment task sequence is executed, the OS will be deployed and the mid-level task(s) will invoke the UIU to perform its driver servicing operations and facilitate the staging of only the drivers required for each affected machine.



After the Lovin' : Post-OS-Deployment Blues


So, you've successfully deployed your new Microsoft OS with all of the insightful configurations made to optimize performance in your environment. Excellent! Oh, wait; you're not finished yet, are you? Nope, not by a long shot. Once you're finished with the OS deployment, you need to establish to which end-user the machine will be allocated, what their specific access and application needs are, and under what conditions (network connectivity) they'll be operating.

Who's getting the machine?
Regardless of whether it is upper management, manufacturing, engineering, clerical or professional staff, their needs may be (and usually are) quite different, both from an applications perspective and from a computational power perspective.

What applications do they need?
The application sets range from standards like Microsoft Office to highly specialized applications such as AutoCAD and everything in between. Some applications are popular and highly supported in the industry. Some are so specific to your business that they were developed by an individual either in-house or outsourced and have a much smaller support base. Delivering and supporting the applications to meet the end-user's specific needs can be a daunting challenge.

Where are they using it?
This is a fun one. Not all employees are sitting in cubicles in one or more offices connected by heavy duty WAN lines any longer. The increase in telecommuting (not to mention BYOD) and the needs of mobile users continue to put pressure on IT organizations to quickly and efficiently provision, configure, deliver and support effective computing solutions for competitive business.

What do the responses to these three questions mean?
They translate into a set of parameters that will need to be applied to the specific machine either prior to delivery or upon initial login by the end-user (or by an administrator on behalf of the end-user). Some of the settings may be already determined and will be set by group policy when the end-user logs in for the first time, joins a domain, and creates a local profile to which group policy parameters are applied. Other settings may be applied either manually (by an administrator) or initiated through a deployment solution/3rd-party solution at varying stages of provisioning. Finally, applications need to be installed and personalized for that target end-user, and again, this can either be achieved manually (by an administrator) or initiated through a deployment solution/3rd-party solution at varying stages of provisioning. What we have here appears to be a Gordian Knot that anyone would lament being required to untie.

The task of provisioning end-user machines is often underestimated, both with respect to the actual time it requires and with respect to the expectations of those requesting the machine to be provisioned in the first place. Have you ever been asked, "Why is it taking so long to get me my new laptop?" I thought so.

Hardware provisioning is a process that is often greater than the sum of its parts. It involves a varied and sometimes conflict-ridden selection of applications with specific pre-requirements; a multitude of managed systems to deploy and configure those applications for first use, as well as to reestablish previously applied application settings from past sessions; and many different infrastructure-specific components that allow access to the data manipulated by the applicable applications.

It's a difficult proposition, and furthermore it's difficult to find, analyze and implement solutions for a complicated variety of hardware, applications and end-user needs. Hang in there!

Windows 8 Sysprep Error
If you're deploying Windows 8 in your environment, you may want to be aware of a little undocumented feature that is certain to cause you some time-consuming headaches.

In preparing to capture a Windows 8 (or 8.1) OS image, several of our customers have experienced the following Sysprep error:

System Preparation Tool 3.14
A fatal error occurred while trying to Sysprep the machine


Essentially, the error occurs when applications associated with user accounts are out of sync with the provisioning information on the master machine. A detailed explanation with three use cases can be found in the following Microsoft Support article: http://support.microsoft.com/kb/2769827

Also, in certain cases, if Windows Update has not checked for updates within a set period of time, Sysprep may present errors as well. In our experience, simply allowing the machine to check (knowing it will fail, as the machine has no Internet access) will subsequently allow Sysprep to execute without error.

Lessons learned?


1.  When possible, create no additional users on the master machine. Only log in as the user account from which you intend to execute Sysprep.

2.  When possible, do not install any Windows Store applications (Appx).

3.  When possible, remove (uninstall and un-provision) all unnecessary Windows Store applications (Appx). This will need to be done for every user account created on the machine (see the sketch after this list).

4.  Disconnect the master machine from the Internet.

5.  Turn off Windows Updates on the master machine.

6.  Optional & universally recommended: lock down users' ability to install applications.
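For lesson 3, here is a minimal sketch of un-provisioning via DISM from an elevated command prompt (the package name shown is a hypothetical example; copy real PackageName values from the listing output, and remember that per-user installs must still be uninstalled under each account):

REM List all provisioned Windows Store apps in the running OS
DISM /Online /Get-ProvisionedAppxPackages

REM Remove one provisioned package by its PackageName
DISM /Online /Remove-ProvisionedAppxPackage /PackageName:Microsoft.BingWeather_1.2.0.135_x64__8wekyb3d8bbwe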


Additionally, if you're implementing Windows 8.1, also consider that a known issue exists when Sysprep is executed on the master machine more than 1 hour after the first user has logged in. A disk cleanup routine executed via Scheduled Tasks interferes with Sysprep and results in an error. The following command will disable the task:

Schtasks.exe /change /disable /tn "\Microsoft\Windows\AppxDeploymentClient\Pre-staged app cleanup"

http://technet.microsoft.com/en-us/library/dn303413.aspx

So Which Is It - ImageX or DISM?

What the heck is DISM, anyway? Simply put, DISM is the new, improved version of ImageX.


What is DISM?


ImageX, and now DISM (the Deployment Image Servicing and Management tool), is Microsoft's native, command-line-based imaging tool, used to create, edit and deploy disk images in the Windows Imaging Format (WIM). WIM files are mounted and serviced (edited) and then dismounted for application to a physical or virtual disk. The benefits of the WIM format include the ability to edit the OS in an offline (not currently running) state and the ability to include multiple OS images in the same file without bloat from multiple, identical files.
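For instance, offline servicing of a WIM with DISM follows a mount, edit, unmount pattern (a minimal sketch; the paths are hypothetical):

REM Mount the first image in the WIM for offline editing
DISM /Mount-Wim /WimFile:C:\images\install.wim /Index:1 /MountDir:C:\mount

REM ...make offline changes under C:\mount (add drivers, packages, files)...

REM Commit the changes and dismount
DISM /Unmount-Wim /MountDir:C:\mount /Commit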


Version Explanation


ImageX was released with Windows Vista as part of the Windows Automated Installation Kit (WAIK) and continued in use through Windows 7. Deployment Image Servicing and Management (DISM), ImageX's replacement, was released with Windows 8 as part of the Windows Assessment and Deployment Kit (WADK) and will be upgraded, not replaced, with the release of Windows 8.1. ImageX was deprecated as of the release of Windows 8.


Where does Windows Pre-Installation Environment (Windows PE) come in to play?


Windows PE, or WinPE, is the utility (stripped-down and much smaller) version of the Windows operating system in which ImageX or DISM is executed to achieve image creation, editing, or deployment.

ImageX         ->        Windows Vista & Windows 7 (WAIK) + Windows PE v3.0 or v3.1

DISM             ->        Windows 8 (WADK) + Windows PE v4.0 (v4.1 with Windows 8.1)


Various hardware requirements are necessary to successfully instantiate a WinPE v4.x session. The requirements specific to hardware (processor) for WADK (v4.0 and 4.1) include support for the following:

Physical Address Extension (PAE), NX processor bit (NX), and Streaming SIMD Extensions 2 (SSE2) are features of the processor, and they're needed to run Windows 8.0 or 8.1.


  • PAE gives 32-bit processors the ability to use more than 4 GB of physical memory on capable versions of Windows, and is a prerequisite for NX.
  • NX helps your processor guard the PC from attacks by malicious software.
  • SSE2 is a standard instruction set on processors that is increasingly used by third-party apps and drivers.

If your PC doesn't support PAE, NX, and SSE2, you won't be able to install Windows 8.0 or 8.1.

http://windows.microsoft.com/en-us/windows-8/what-is-pae-nx-sse2
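If in doubt about a particular processor, the free Sysinternals Coreinfo utility reports these flags (a quick sketch; in Coreinfo's output, an asterisk next to PAE, NX, or SSE2 indicates support):

coreinfo | findstr /i "PAE NX SSE2"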


If the processors of machines targeted for deployment do not meet the aforementioned requirements, WinPE v4.x-based deployment tools will not be able to boot those target machines, and therefore users must also install the Windows Automated Installation Kit (WAIK v3.0) on their administration machine.


How do I get DISM?


As alluded to previously, the DISM tool is made available by Microsoft in the Windows Assessment and Deployment Kit (WADK) just as the ImageX tool is made available in the Windows Automated Installation Kit (WAIK).

If a user has determined that their older hardware requires a version of Windows PE older than the one supplied in the WADK, and has opted to also install the WAIK for that purpose, the older Windows PE (version 3.x) can be used to boot the target PC, and the DISM commands can still be used to deploy a WIM image.


Differences in Syntax


Although the function of DISM is identical in most respects to ImageX, there are notable differences in syntax or command-line format. That said, those familiar with ImageX will not have any trouble deciphering the DISM commands. Below is the same apply-image function formatted for both ImageX and DISM. The similarities are obvious:

ImageX:           ImageX /apply <image file path> <index number> <drive letter>

DISM:              DISM /apply-image /imagefile:<image file path> /index:1 /applydir:<drive letter>
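With hypothetical values filled in (WIM on D:, applying the first image to C:), the two commands look like this:

ImageX /apply D:\images\install.wim 1 C:

DISM /apply-image /imagefile:D:\images\install.wim /index:1 /applydir:C:\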


How do I use DISM to deploy an OS image?


There are three basic steps by which an operating system image may be deployed using DISM:

Target PC preparation

First, the target PC will need to be formatted and partitioned, using a tool named DISKPART, in order to accept a Windows image. An example of a simple DISKPART script file is illustrated below.

______________________________________
select disk 0
clean
create partition primary
format quick fs=ntfs label="Windows"
assign letter=D
active
______________________________________

There may be additional partitions created to support UEFI/GPT (as opposed to BIOS/MBR), including Windows RE Tools, System, and Recovery partitions.

http://technet.microsoft.com/en-us/library/cc766465(v=ws.10).aspx
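If the script above is saved to a text file (the file name here is hypothetical), DISKPART can run it non-interactively:

diskpart /s C:\scripts\prep-disk.txt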


Image application

Next, the image must be extracted from the WIM file and applied to the partition designated for Windows during DISKPART.

DISM /apply-image /imagefile:<image file path> /index:1 /applydir:<drive letter>

http://msdn.microsoft.com/en-us/library/jj979808(v=winembedded.81).aspx


Boot file creation
Lastly, a boot entry must be made in order to boot to the applied Windows image, using a tool named BCDBOOT (which creates the Boot Configuration Data store).

BCDBOOT.exe <driveletter>\windows

http://msdn.microsoft.com/en-us/library/ff793718(v=winembedded.60).aspx
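Putting the three steps together, a minimal end-to-end sketch might look like the following batch fragment, run from a Windows PE session (all paths and drive letters are hypothetical; the Windows partition letter matches the DISKPART script above):

REM 1. Partition and format the target disk
diskpart /s X:\scripts\prep-disk.txt

REM 2. Apply the first image in the WIM to the Windows partition
DISM /apply-image /imagefile:X:\images\install.wim /index:1 /applydir:D:\

REM 3. Create the boot entry for the applied image
BCDBOOT.exe D:\windows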

 

If you've used ImageX in the past (or still do), the introduction of Windows 8 will most certainly encourage you to familiarize yourself with the DISM tool and update your image creation and deployment processes. Don't worry, it's not that much different!