tag:blogger.com,1999:blog-26167652231853758142024-02-08T04:37:21.355-08:00The Infrastructure HackIT powered by luck, google and coffeeRPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.comBlogger113125tag:blogger.com,1999:blog-2616765223185375814.post-60731085521446918082016-04-07T18:01:00.001-07:002016-04-07T18:01:01.302-07:00TesttestRPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-52470990434151757802009-09-08T14:06:00.001-07:002009-09-08T14:06:59.545-07:00Using ISA Server 2006 to Protect Active Directory One-Way Forest Trusts<span xmlns=''><p><a href='http://blog.msfirewall.org.uk/2008/06/using-isa-server-2006-to-protect-active.html'><span style='color:#6699cc; font-family:Verdana; font-size:13pt; text-decoration:underline'><strong>Using ISA Server 2006 to Protect Active Directory One-Way Forest Trusts</strong></span></a><span style='color:#333333; font-family:Verdana; font-size:13pt'><strong><br /> </strong></span></p><p><br /> </p><p><span style='color:#333333; font-family:Verdana; font-size:13pt'><strong>From http://blog.msfirewall.org.uk/2008/06/using-isa-server-2006-to-protect-active.html<br /></strong></span></p><p><span style='color:#333333; font-size:12pt'>An area that I get involved with quite a lot in my 'day job' is protecting Microsoft Office SharePoint Server (MOSS) 2007 and Windows SharePoint Services (WSS) extranets with ISA Server. Depending on the customer's needs and security policy, this often results in an architecture design that includes a one-way Microsoft Active Directory forest trust. 
The key concept of this model is to solve two things: functionality <em>and</em> security.<span style='font-family:Verdana'><br /> </span></span></p><p><span style='color:#333333; font-size:12pt'><span style='text-decoration:underline'>So, let's look at <em>functionality</em> first - this is the 'forest trust' bit:</span><span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'>By using a forest trust we have a way for both internal and external users to share extranet resources. A forest trust is often popular because it ensures that internal users will be able to access extranet information transparently. By this we mean that users will be able to use their normal Windows credentials (the ones they are normally logged in with) to access extranet resources without even realising that this is occurring.<span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'><span style='text-decoration:underline'>Next, let's look at <em>security</em> - this is the 'one-way' bit:</span><span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='font-size:12pt'><span style='color:#333333'>Placing external users into an internal Active Directory environment, the 'Intranet forest', is never really a good idea, as the forest is the only real security boundary in Active Directory. Consequently, a common solution is to create an 'Extranet forest' that is used to host external user credentials and create an isolation point or segmented boundary from internal users. This makes the security people happy, and by way of a forest trust, this makes the users happy because it 'just works'. 
However, to make the security people even happier and to protect against account compromise in the Extranet forest, we ensure that the trust relationship between these two 'zones' is configured such that the Extranet forest trusts the Intranet forest, but the Intranet forest </span><span style='color:red'><strong>DOES NOT</strong></span><span style='color:#333333'> trust the Extranet forest; hence the term 'one-way'. In this scenario, the Intranet forest would be termed 'Trusted' and the Extranet forest termed 'Trusting'. <span style='font-family:Verdana'><br/><br/></span>So, what does this have to do with ISA then? Well, in addition to creating logical separation with forest boundaries, the model also normally includes some form of physical segmentation. This often results in the Extranet forest being placed into a perimeter network away from the internal network. In order to create this boundary and define the perimeter network, ISA Server is an ideal choice as the border firewall between these two security zones.<span style='font-family:Verdana'><br /> </span></span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'>When you consider the communications that are required in order to support a forest trust, you will soon realise that you need a good application-layer firewall, ideally one that is able to inspect and control RPC traffic, as this protocol is notoriously difficult to secure with <em>most </em>firewalls. If you combine this with the level of protection ISA Server can provide specifically for MOSS/WSS, it is not difficult to understand why ISA Server is often a key component in the overall security landscape for MOSS/WSS extranet solutions. 
<span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='font-size:12pt'><span style='color:#333333'>Obviously, I have oversimplified things quite a bit and there are lots of other potential models (like the <a href='http://technet.microsoft.com/en-us/library/cc268155(TechNet.10).aspx'/></span><span style='color:#6699cc; text-decoration:underline'>External Collaboration Toolkit for SharePoint</span><span style='color:#333333'> for example) which could be used. However, a one-way forest trust model architecture doesn't have to be just for extranets; it can pop up all over the place! You can also find a similar model in our old friend the <a href='http://www.microsoft.com/technet/solutionaccelerators/wssra/raguide/default.mspx'/></span><span style='color:#6699cc; text-decoration:underline'>Windows Server System Reference Architecture</span><span style='color:#333333'> document set.<span style='font-family:Verdana'><br /> </span></span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'>An overview of the architecture and concept being discussed is provided below:<span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'><br/><span style='font-family:Verdana'><br /> </span></span></p><p><span style='color:#333333; font-size:12pt'>So the aim of this blog entry is to provide an overview of how to configure ISA Server to support this one-way forest trust architecture and define the necessary firewall policy rules and associated protocols to make it all work. This blog entry is by no means a 'ground up' walkthrough, but just gives a good overview of how ISA should be configured to allow for trust creation, validation and successful operation. 
In addition, I have tried to make my examples a little more generic by using a Web server, as opposed to SharePoint, but the overall concepts apply to lots of different scenarios where a one-way forest trust is used across firewall boundaries.<span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'>One of the first things to realise with forest trusts is that they are not supported with Network Address Translation (NAT), hence your ISA Server will need a route relationship between the Extranet network and the Intranet network (or the other way around, as route relationships are bi-directional). Assuming this is in place, we can now look at the necessary firewall policy rules to get it all working!<span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'>Like most security people, I like least privilege :-)<span style='font-family:Verdana'><br /> </span>Consequently, I will always create more firewall policy rules than are strictly needed, just to ensure that we have sufficient granularity to provide a minimal set of communications by disabling certain rules that we don't need all the time.<span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='font-size:12pt'><span style='color:#333333'>A lot of the communications or protocols defined in this blog entry are based upon the following <a href='http://technet.microsoft.com/en-us/library/cc756944.aspx'/></span><span style='color:#6699cc; text-decoration:underline'>whitepaper</span><span style='color:#333333'> and <a href='http://support.microsoft.com/kb/179442'/></span><span style='color:#6699cc; text-decoration:underline'>KB article</span><span style='color:#333333'>. 
However, I found that these articles weren't clear enough in places and needed 'improving' based upon real-world findings from using ISA Server as part of the solution with customers.<span style='font-family:Verdana'><br /> </span></span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'>As we have a route relationship between our networks, we gain little advantage from using ISA Server server publishing in this particular example, and therefore all firewall policy rules are defined as access rules. The RPC filter is also one of the only application filters that works with access rules, so this makes the decision the correct one in my opinion. <span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'><strong>Please Note:</strong> ISA Server SP1 now allows for RPC UUID filtering even when using access rules, which historically required server publishing to be used. With this change, RPC access rules are now as powerful as RPC server publishing rules, which is nice! Consequently, it is possible to extend the current solution even further by defining a specific list of UUIDs used for forest trusts. However, this is going to take a fair bit of testing, so I think I will leave it for a future blog post! 
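Before defining the individual access rules, it helps to see the raw protocol list a forest trust depends on in one place. The sketch below summarises the well-known ports involved, drawn from the Microsoft whitepaper and KB article referenced above; the dynamic RPC range varies by Windows version, so treat this as a starting point rather than a definitive rule set:

```shell
# Core protocols typically needed between Extranet and Intranet DCs
# for a one-way forest trust (directions per the access rules below):
#
#   DNS                  53/tcp   53/udp    # conditional forwarding
#   Kerberos             88/tcp   88/udp    # client authentication
#   RPC endpoint mapper  135/tcp            # trust creation/validation
#   NetBIOS/Netlogon     137/udp 138/udp 139/tcp   # NTLM (legacy transports)
#   LDAP                 389/tcp  389/udp   # object picker, DC lookups
#   SMB/CIFS             445/tcp            # Netlogon, trust setup
#   Kerberos kpasswd     464/tcp  464/udp   # password changes
#   LDAP GC              3268/tcp           # Global Catalog lookups
#   Dynamic RPC          1025-5000/tcp      # Windows 2000/2003 default range
```

The per-rule breakdowns that follow carve this list up so that only the minimum set needs to stay enabled day-to-day.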
<span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'>So, based upon the above, we can summarise the necessary firewall policies as follows:<span style='font-family:Verdana'><br /> </span></span></p><ul style='margin-left: 39pt'><li><span style='color:#333333; font-size:12pt'>AD Forest Trust: Allow Access for Forest Trust Creation/Validation<span style='font-family:Verdana'><br /> </span></span></li><li><span style='color:#333333; font-size:12pt'>AD Forest Trust: Allow Access for Conditional DNS Forwarding<span style='font-family:Verdana'><br /> </span></span></li><li><span style='color:#333333; font-size:12pt'>AD Forest Trust: Allow Access for Kerberos Client Authentication<span style='font-family:Verdana'><br /> </span></span></li><li><span style='color:#333333; font-size:12pt'>AD Forest Trust: Allow Access for NTLM Client Authentication<span style='font-family:Verdana'><br /> </span></span></li><li><span style='color:#333333; font-size:12pt'>AD Forest Trust: Allow Access for Object Picker (Extranet Web Servers) <span style='font-family:Verdana'><br /> </span></span></li><li><span style='color:#333333; font-size:12pt'>AD Forest Trust: Allow Access for Object Picker (Extranet Domain Controllers) <span style='font-family:Verdana'><br /> </span></span></li><li><span style='color:#333333; font-size:12pt'>AD Forest Trust: Allow Access for Object Picker (Extranet ISA Servers) <span style='font-family:Verdana'><br /> </span></span></li></ul><p><span style='color:#333333; font-size:12pt'>An overview of each rule is provided below:<span style='font-family:Verdana'><br /> </span></span></p><p><span style='font-size:12pt'><span style='color:#3333ff'><span style='text-decoration:underline'>AD Forest Trust: Allow Access for Forest Trust Creation/Validation</span><br /> </span><span style='color:#333333; font-family:Verdana'><br /> </span></span></p><p><span style='color:#333333; font-size:12pt'>This rule 
allows the necessary communication required to initially set up and validate the trust relationship. Once the trust has been established, this rule can be disabled unless it is necessary to recreate/revalidate the trust for troubleshooting purposes. Following a least privilege approach, this step is strongly recommended for day-to-day operations. <span style='font-family:Verdana'><br /> </span></span></p><p><span style='font-size:12pt'><span style='color:#3333ff; text-decoration:underline'>AD Forest Trust: Allow Access for Conditional DNS Forwarding </span><span style='color:#333333; font-family:Verdana'><br /> </span></span></p><p><span style='color:#333333; font-size:12pt'>This rule allows DNS servers in the Extranet forest to communicate with DNS servers in the Intranet forest (and vice versa). This is based upon the use of conditional DNS forwarding, which is needed to provide underlying name resolution services between the two AD environments.<span style='font-family:Verdana'><br /> </span></span></p><p><span style='color:#333333; font-size:12pt'><strong>Please Note:</strong> This is one of the few examples where using server publishing <em>would </em>have some security benefit as this would allow ISA Server to inspect and filter DNS as necessary. However, for this example an access rule is used for simplicity. <span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='font-size:12pt'><span style='color:#3333ff; text-decoration:underline'>AD Forest Trust: Allow Access for Kerberos Client Authentication </span><span style='color:#333333; font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'>This rule allows clients from the Intranet forest to authenticate to systems in the Extranet forest using Kerberos authentication. If Kerberos is not required, this rule should be disabled in order to adhere to least privilege. 
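While the creation/validation rule is still enabled, the trust and the authentication path can be exercised from the command line before locking things down again. A rough sketch using standard Windows tools; the forest names intranet.example.com and extranet.example.com are placeholders, not names from this article:

```shell
REM Run on a DC in the Extranet (trusting) forest.
REM Verify the one-way trust towards the Intranet (trusted) forest:
netdom trust extranet.example.com /domain:intranet.example.com /verify

REM Check the secure channel used for pass-through authentication:
nltest /sc_verify:intranet.example.com

REM After a test logon from an Intranet user, confirm that Kerberos
REM tickets were actually issued across the trust:
klist tickets
```

If netdom and nltest succeed but interactive logons still fail, that usually points at the DNS forwarding or Kerberos/NTLM rules rather than the trust itself.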
<br /></span></p><p><span style='font-size:12pt'><span style='color:#3333ff; text-decoration:underline'>AD Forest Trust: Allow Access for NTLM Client Authentication</span><span style='color:#333333; font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'>This rule allows clients from the Intranet forest to authenticate to systems in the Extranet forest using NTLM authentication. It is very likely that this rule will need to remain enabled continuously, even when Kerberos is in use. <span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='font-size:12pt'><span style='color:#3333ff; text-decoration:underline'>AD Forest Trust: Allow Access for Object Picker (Extranet Web Servers)</span><span style='color:#333333; font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'>This rule is needed to allow resources in the Intranet forest to be defined or 'picked' from the Extranet-Web server(s) themselves. For example, within MOSS, the object picker would be used to assign SharePoint security permissions to a user or group in the Intranet forest. Without this rule, it will not be possible to select resources from the Intranet forest. Ideally this rule should be disabled most of the time and only enabled when administrative changes are required.<span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='font-size:12pt'><span style='color:#3333ff; text-decoration:underline'>AD Forest Trust: Allow Access for Object Picker (Extranet Domain Controllers)</span><span style='color:#333333; font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'>This rule is the same as the Extranet Web Servers rule, but allows the object picker to be used from the Extranet Domain Controllers themselves. 
Depending on how the system will be administered, this rule may or may not be required. If this rule is not required, it should be disabled or deleted.<span style='font-family:Verdana'><br /> </span></span></p><p><span style='font-size:12pt'><span style='color:#3333ff; text-decoration:underline'>AD Forest Trust: Allow Access for Object Picker (Extranet ISA Servers)</span><span style='color:#333333; font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'>Again, this rule is the same as the Extranet Web Servers rule, but allows the object picker to be used from the ISA Servers themselves. Depending on how the system will be administered, this rule may or may not be required. If this rule is not required, it should be disabled or deleted.<span style='font-family:Verdana'><br/><br/></span>So, now that we know what firewall policies are required to cover the key communications needed for everything to function, we need to look at the required policy objects in more detail. Before approaching this, it is worthwhile defining a few elements of the example environment that will be used as part of the firewall policies:<span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'><strong>Extranet-Web</strong> => Computer object for the Web server located in the Extranet forest. 
This would be running MOSS 2007 in our SharePoint analogy.<span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'><strong>Extranet-DC1</strong> => Computer object for the First Domain Controller in the Extranet forest.<span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'><strong>Extranet-DC2</strong> => Computer object for the Second Domain Controller in the Extranet forest<span style='font-family:Verdana'><br /> </span></span></p><p><span style='color:#333333; font-size:12pt'><strong>Intranet-DC1</strong> => Computer object for the First Domain Controller in the Intranet forest.<span style='font-family:Verdana'><br /> </span></span></p><p><span style='color:#333333; font-size:12pt'><strong>Intranet-DC2</strong> => Computer object for the Second Domain Controller in the Intranet forest.<span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'><strong>Extranet Domain Controllers</strong> => Computer set for the Extranet Domain Controller computers.<span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'><strong>Extranet Web Servers</strong> => Computer set for the Extranet Web server computers.<span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'><strong>Intranet Domain Controllers</strong> => Computer set for the Intranet Domain Controller computers.<span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'><strong>Internal Clients</strong> => Computer set to represent Intranet machines that are allowed access to the Extranet Web Servers. In reality, this may be <strong>all</strong> Intranet clients. 
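The conditional DNS forwarding rule described earlier assumes forwarders have actually been configured on both sides. As a hedged sketch, this is how they might be created with dnscmd using the example DC names above; the zone names and IP addresses are illustrative assumptions, not values from the original article:

```shell
REM On Extranet-DC1: forward queries for the Intranet namespace
REM to the Intranet DNS servers (addresses are placeholders):
dnscmd Extranet-DC1 /ZoneAdd intranet.example.com /Forwarder 10.0.1.11 10.0.1.12

REM On Intranet-DC1: forward queries for the Extranet namespace the other way:
dnscmd Intranet-DC1 /ZoneAdd extranet.example.com /Forwarder 192.168.10.11
```

Either way the forwarders are created, the traffic they generate is exactly what the Conditional DNS Forwarding access rule has to permit through ISA Server.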
<br /></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'>Based upon the firewall policies discussed above, the following pictures show a more detailed view of the rules, including the necessary source, destination and protocol definitions for each set of communications (<strong><em>click to enlarge the pictures</em></strong>).<span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='font-size:12pt'><span style='color:black'><strong>AD Forest Trust: Allow Access for Forest Trust Creation/Validation</strong></span><span style='color:#333333; font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='font-size:12pt'><span style='color:#333333; font-family:Verdana'><br/><br/></span><span style='color:black'><strong>AD Forest Trust: Allow Access for Conditional DNS Forwarding </strong></span><span style='color:#333333; font-family:Verdana'><br /> </span></span></p><p><span style='color:#333333; font-size:12pt'><span style='font-family:Verdana'><br/><br/><br/></span><strong>AD Forest Trust: Allow Access for Kerberos Client Authentication </strong><span style='font-family:Verdana'><br/><br/><br/><br/></span><strong>AD Forest Trust: Allow Access for NTLM Client Authentication</strong><span style='font-family:Verdana'><br/><br /> </span></span></p><p><span style='color:#333333; font-size:12pt'><span style='font-family:Verdana'><br/></span><strong>AD Forest Trust: Allow Access for Object Picker (Extranet Web Servers)</strong><span style='font-family:Verdana'><br /> </span></span></p><p><span style='color:#333333; font-family:Verdana; font-size:12pt'><br/><br /> </span></p><p><span style='color:#333333; font-size:12pt'><strong>AD Forest Trust: Allow Access for Object Picker (Extranet Domain Controllers)</strong><span style='font-family:Verdana'><br /> </span></span></p><p><span style='color:#333333; font-family:Verdana; font-size:12pt'><br/><br/><br /> </span></p><p><span style='color:#333333; 
font-size:12pt'><strong>AD Forest Trust: Allow Access for Object Picker (Extranet ISA Servers) </strong><span style='font-family:Verdana'><br /> </span></span></p><p><span style='color:#333333; font-family:Verdana; font-size:12pt'><br/><br /> </span></p><p><span style='color:#333333; font-size:12pt'><strong>Please Note:</strong> Based upon real-world testing and experience, it was necessary to add both UDP and TCP transports for some protocols in order to prevent the object picker from experiencing delays during Intranet resource browsing tasks. Without both transports, it often became quite painful to administer Intranet resources whilst waiting for components to time out and use an alternative transport. It was also necessary to add the 'LDAP GC (Global Catalog)' protocol to the object picker firewall policy rules for everything to function correctly, but this doesn't appear to be documented in any of the Microsoft documents I have seen.<span style='font-family:Verdana'><br /> </span></span></p><p><br /> </p><p><span style='color:#333333; font-size:12pt'>So, that should be all you need to get the trust established and have a fully functioning one-way forest trust. With this platform in place, it is now possible to start using ISA Server web publishing to provide secure access to Extranet web services. 
If everything is configured correctly, users should be able to authenticate to ISA Server using either Intranet or Extranet forest credentials by way of the forest trust.<span style='font-family:Verdana'><br /> </span></span></p></span>RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-62757567232253017712009-08-09T16:47:00.001-07:002009-08-09T16:48:34.155-07:00<span xmlns=''><p><a href='http://www.pbbergs.com/windows/articles/TestDomain.html'>http://www.pbbergs.com/windows/articles/TestDomain.html</a><br /> </p><p><br /> </p><p><span style='font-size:14pt'>This document was prepared for building a copy of the production Active Directory. Following these steps will rebuild the entire Microsoft Active Directory in a test domain. <strong>*** Be careful ***</strong></span><br /> </p><p> <br /> </p><p><em>The first set of steps is to get a good PC into the production domain. Once this PC is a member, it needs to be promoted and become a healthy participant in the network. The new DC then needs to be removed from the network before it is restarted (from its restore) to prevent any replication activity from damaging the production system. 
Reconnecting it to the production network will create major problems in the production system</em><br /> </p><p> <br /> </p><p style='margin-left: 36pt'>1.<span style='font-size:7pt'> </span>Shut down <strong>ALL</strong> PCs within the test subnet (for this document: 192.168.1.x, gateway = 192.168.1.250, mask = 255.255.255.0)<br /></p><p style='margin-left: 36pt'>2.<span style='font-size:7pt'> </span>Remove the physical cable for the new PC and build the member server (This all should reside within the test domain) in production<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Install DNS (AD Integrated needed for this document)<br /></p><p style='margin-left: 36pt'>3.<span style='font-size:7pt'> </span>Re-connect the cable and join the Domain_Name.com domain<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Set the IP address to 192.168.1.101<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Set the mask to 255.255.255.0<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Set the gateway to 192.168.1.250<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Point the DNS services to a production AD DNS server<br /></p><p style='margin-left: 36pt'>4.<span style='font-size:7pt'> </span>Promote the server to a Domain Controller (DC) via dcpromo.exe<br /></p><p style='margin-left: 36pt'>5.<span style='font-size:7pt'> </span>Promote the server to a Global Catalog Server<br /></p><p style='margin-left: 36pt'>6.<span style='font-size:7pt'> </span>Let the system sit idle (2 hours) for replication to sync up<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Point the DNS services to 
itself<br /></p><p style='margin-left: 36pt'>7.<span style='font-size:7pt'> </span>Open up a command prompt<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>dcdiag /v /test:ridmanager<br /></p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Make sure there are no errors with the RID manager<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Create an object on the new DC<br /></p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Physically disconnect the cable<br /></p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Bring up "Active Directory Users and Computers"<br /></p><p style='margin-left: 90pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>By disconnecting, you force the system to attach locally<br /></p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Create a test user with the account disabled<br /></p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Reconnect the physical cable<br /></p><p style='margin-left: 36pt'>8.<span style='font-size:7pt'> </span>At a command prompt, type NTBACKUP and do a system state backup, saving the file to the local server<br /></p><p style='margin-left: 36pt'>9.<span style='font-size:7pt'> </span>Demote this server to a member server within the production domain (DCPROMO)<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Remove the NS record in the production environment<br /></p><p style='margin-left: 36pt'>10.<span style='font-size:7pt'> </span>Physically disconnect the server from the network by unplugging the cable from the hub<br 
/></p><p style='margin-left: 36pt'>11.<span style='font-size:7pt'> </span><strong>Move the server to </strong>the test domain<br /></p><p style='margin-left: 36pt'>12.<span style='font-size:7pt'> </span>Re-promote once this system has been disconnected and the IP changed<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Dcpromo<br /></p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Domain Name = Domain_Name.com<br /></p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>NetBios Name = NetBIOS_Name<br /></p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Allow the promotion to create the DNS domain<br /></p><p style='margin-left: 90pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Once this DC is brought online (the DNS services on the member server can be shut down), define it with Active Directory Integrated DNS and all namespace records will be restored. 
Make sure to bring up DNS and select reload to refresh all data<br /></p><p style='margin-left: 90pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Active Directory Integrated<br /></p><p style='margin-left: 90pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Only Secure Updates <br /></p><p style='margin-left: 36pt'>13.<span style='font-size:7pt'> </span>Reboot this server and after the POST select F8 <br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Scroll down and select the option<br /></p><p style='margin-left: 36pt'>"Directory Services Restore Mode (Windows 200x domain controllers only)"<br /></p><p style='margin-left: 36pt'>14.<span style='font-size:7pt'> </span>Log on as the administrator (This is within the old SAM account)<br /></p><p style='margin-left: 36pt'>15.<span style='font-size:7pt'> </span>Restore the System State from the previous NTBACKUP<br /></p><p style='margin-left: 36pt'>16.<span style='font-size:7pt'> </span>Re-boot the Domain Controller (DC)<br /></p><p> <br /> </p><p><em>Now that the DC is restored, it needs to take control of all Flexible Single Master Operation (FSMO) roles and the File Replication Service. Because of this, utilities need to be loaded from the Windows 200x install CD. NTDSUTIL will perform most of these steps. 
Since this is the first DC, it needs to be a Global Catalog server, and we must validate that it is the primary server in the domain.</em><br /> </p><p> <br /> </p><p style='margin-left: 36pt'>17.<span style='font-size:7pt'> </span>After the POST, select F8 <br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Scroll down and select the option<br /></p><p style='margin-left: 36pt'>"Directory Services Restore Mode (Windows 200x domain controllers only)"<br /></p><p style='margin-left: 36pt'>18.<span style='font-size:7pt'> </span>Log on as the administrator (This is within the old SAM account)<br /></p><p style='margin-left: 36pt'>19.<span style='font-size:7pt'> </span>Install the Windows 200x Active Directory Administration Tools from the server CD<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>D:\i386\Adminpak.msi<br /></p><p style='margin-left: 36pt'>20.<span style='font-size:7pt'> </span>Install the Windows 200x Server Resource Kit from the server CD<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>D:\support\tools\200xrkst.msi<br /></p><p style='margin-left: 36pt'>21.<span style='font-size:7pt'> </span>Re-boot the Domain Controller (DC)<br /></p><p style='margin-left: 36pt'>22.<span style='font-size:7pt'> </span>Log on as the administrator (This is with the AD account)<br /></p><p style='margin-left: 36pt'>23.<span style='font-size:7pt'> </span>Reset the IP address for the test domain; the restore resets the IP address. 
Make sure to point the DNS server to itself as well<br /></p><p style='margin-left: 36pt'>24.<span style='font-size:7pt'> </span>Set this server as a Global Catalog (Ignore this step if this is a multi-domain environment and this DC holds the Infrastructure Master role)<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Click Start, click Run, type mmc, and then click OK<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>On the Console menu, click Add/Remove Snap-in, click Add, double-click Active Directory Sites and Services, click Close, and then click OK<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Double Click Active Directory Sites and Services<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Double Click Sites<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Double Click MP-Default-Site<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Double Click Servers<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Double Click the DC<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Right Click on NTDS Settings and Select Properties<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>If the "Global Catalog" check box is not checked, check it<br /></p><p style='margin-left: 36pt'>25.<span style='font-size:7pt'> </span>All Flexible Single Master Operations (FSMO) roles need to reside on this DC<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> 
</span>Seize the PDC<br /></p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Click Start and then click Run<br /></p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>In the Open text box, type ntdsutil<br /></p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>roles</strong><br /> </p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>connections</strong><br /> </p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>connect to server <em><DC name></em></strong><br /> </p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>q</strong><br /> </p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>seize pdc</strong><br /> </p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Click "Yes"<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Seize the Infrastructure master role<br /></p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>seize infrastructure master</strong><br /> </p><p style='margin-left: 72pt'><span style='font-family:Symbol; font-size:10pt'></span><span style='font-size:7pt'> </span>Click "Yes"<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol; font-size:10pt'></span><span style='font-size:7pt'> </span>Seize the Domain Naming master role<br /></p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>seize domain naming master</strong><br /> </p><p 
style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Click "Yes"<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Seize the schema master role<br /></p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>seize schema master</strong><br /> </p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Click "Yes"<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Seize the RID Master Role<br /></p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>seize rid master</strong><br /> </p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Click "Yes"<br /></p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>q</strong><br /> </p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type<strong> q</strong><br /> </p><p style='margin-left: 36pt'>26.<span style='font-size:7pt'> </span>Remove all other DC server objects <strong>(Repeat this step for each DC) <a href='http://support.microsoft.com/kb/216498/en-us'>KB216498</a></strong><br /> </p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Click Start and then click Run<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>In the Open text box, type <strong>ntdsutil</strong><br /> </p><p style='margin-left: 54pt'><span style='font-family:Symbol; font-size:10pt'></span><span style='font-size:7pt'> </span>Type <strong>metadata cleanup</strong><br /> </p><p style='margin-left: 54pt'><span 
style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>connections</strong><br /> </p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type<strong> connect to server <em><DC></em></strong><br /> </p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>q</strong> (The metadata cleanup prompt should now show)<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>select operation target</strong><br /> </p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>list domains </strong>(A list of domains should be displayed)<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>select domain <<em>#> </em></strong>(This is the domain of the server to be pruned)<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>list sites</strong> (A list of sites should be displayed)<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>select site <<em>#> </em></strong>(This is the site of the server to be pruned)<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>list servers in site </strong>(A list of servers should be displayed)<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>select server <<em>#></em></strong> (This is the server to be pruned)<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>q</strong><br /> </p><p style='margin-left: 54pt'><span 
style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>remove selected server</strong> (You should get confirmation of the removal)<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>q</strong><br /> </p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Type <strong>q</strong><br /> </p><p style='margin-left: 36pt'>27.<span style='font-size:7pt'> </span>Remove all other DC orphaned records in Active Directory <strong>(Repeat this step for each DC) <a href='http://support.microsoft.com/kb/216498/en-us'>KB216498</a></strong><br /> </p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Click Start - Programs - Windows 200x Support Tools - Tools - ADSI Edit<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol; font-size:10pt'></span><span style='font-size:7pt'> </span>Delete the computer account in <span style='font-size:10pt'>OU=Domain Controllers, DC=Domain_Name,DC=com</span><br /> </p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Delete the FRS member object in <span style='font-size:10pt'>CN=Domain System Volume (SYSVOL share),CN=File Replication Service,CN=System,DC=Domain_Name,DC=com</span><br /> </p><p style='margin-left: 36pt'>28.<span style='font-size:7pt'> </span>Remove all other DC orphaned records in DNS<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Click Start - Programs - Administrative Tools - DNS<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Click <DC>.Domain_Name.com - Forward Lookup Zones - Domain_Name.com<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Delete the CNAME (alias) records of all other DCs<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Delete the A records of all other DCs<br /></p><p style='margin-left: 36pt'>29.<span style='font-size:7pt'> </span>This DC needs to be the File Replication Service Master <strong>(<a href='http://support.microsoft.com/kb/316790/en-us'>KB316790</a>)</strong><br /> </p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Stop the File Replication service on the DC<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Make sure the following folders exist; if not, create them<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span> C:\WINNT\SYSVOL\staging<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span> C:\WINNT\SYSVOL\sysvol (Share as SYSVOL)<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span> C:\WINNT\SYSVOL\sysvol\Domain_Name.com<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span> Copy the contents of C:\WINNT\SYSVOL\domain to this folder<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Start Registry Editor (Regedt32.exe)<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Locate and then click the <strong>BurFlags</strong> value under the following key in the registry:<br /></p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span><strong>HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup</strong><br /> </p><p style='margin-left: 54pt'><span 
style='font-family:Symbol'></span><span style='font-size:7pt'> </span>On the <strong>Edit</strong> menu, click <strong>DWORD</strong>, click <strong>Hex</strong>, type <span style='font-family:Courier New'><strong>D4</strong></span>, and then click <strong>OK</strong><br /> </p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Quit Registry Editor<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Restart the File Replication Service<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Check the FRS event viewer to see if the system states that the sysvol is now being shared and defines all the paths<br /></p><p style='margin-left: 36pt'>30.<span style='font-size:7pt'> </span>Ensure that the DC has registered the proper computer role<br /></p><p style='margin-left: 54pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>Enter <strong>net accounts</strong> at a command prompt<br /></p><p style='margin-left: 72pt'><span style='font-family:Symbol'></span><span style='font-size:7pt'> </span>The computer role should say "primary"<br /></p><p> <br /> </p><p><em>Finally, any information related to the old DCs needs to be purged from AD.</em><br /> </p><p> <br /> </p><p style='margin-left: 36pt'>31.<span style='font-size:7pt'> </span>Reboot the authoritatively restored DC<br /></p><p style='margin-left: 36pt'>32.<span style='font-size:7pt'> </span>Within the production system delete the test user and computer account<br /></p><p style='margin-left: 36pt'>33.<span style='font-size:7pt'> </span>Within the production system delete the server object within the site that it was placed into for replication<br /></p><p> <br /> </p><p><span style='font-size:10pt'><em>Note: The File Replication Service can prevent the computer from becoming a Domain Controller (See below). 
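An aside on the BurFlags step above: per Microsoft KB290762 and KB316790, D4 is the authoritative setting applied only to the single reference DC (the FRS master being built here), while D2 is the nonauthoritative setting used when reseeding other replicas. Applied as a .reg file rather than through Regedt32, the authoritative setting looks like this (a sketch, not from the original post):

```reg
Windows Registry Editor Version 5.00

; D4 = authoritative FRS restore -- use only on the reference DC (KB290762)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup]
"BurFlags"=dword:000000d4
```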
If dcdiag reports that the rid pool is corrupt, the likely cause is a problem with replication. Check the "File Replication Service" Event Log. Also make sure that all sub-folders are available within c:\winnt\sysvol.</em></span><br /> </p><p><span style='font-size:10pt'><em>To re-test just the rid pool: dcdiag /v /test:ridmanager</em></span><br /> </p><p> <br /> </p><p> <br /> </p><p> <br /> </p><p><span style='font-size:16pt'><strong>Never again connect this server to the production system!!!</strong></span><br /> </p><p> <br /> </p><p> <br /> </p><p>When you restore a domain controller from backup (or when you restore the System State), the FRS database is not restored because the most up-to-date state exists on a current replica instead of in the restored database. When FRS starts, it enters a "seeding" state and then tries to locate a replica with which it can synchronize. Until FRS completes replication, it cannot share Sysvol and Netlogon.<br/><br/>If you restore all of the domain controllers in the domain from backup, all the domain controllers enter the seeding state for FRS and try to synchronize with an online replica. This replication does not occur because all of the domain controllers are in the same seeding state. Setting the primary domain controller FSMO role holder to be authoritative forces the domain controller to rebuild its database based on the current contents of the system volume. When that task is completed, the Sysvol and Netlogon shares are shared. 
All the other domain controllers can then start synchronizing from the online replica.<br /></p><p><span style='font-size:10pt'><em>(See <a href='http://support.microsoft.com/kb/316790/en-us'><strong>KB316790</strong></a>)</em></span><br /> </p></span>RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-79087086196615886042009-07-05T00:48:00.000-07:002009-07-05T00:53:32.864-07:00Installing Office 2007 with Group Policy GPOThis eventually seemed to work for me:<br /><br />1) Create a shared network location<br />2) Copy all the contents from the CD to this location<br />3) Run setup.exe /admin and create the MSP file you like<br />4) Copy this MSP to the /Updates folder<br />5) Create a GPO as per normal and (for Office 2007 Pro +) choose the \ProPlus.ww\ProPlusww.msi<br />This gave me a "cannot validate" error, which I think ties in with the next step, so perhaps steps 5 and 6 should be reversed<br />6) Edit the config.xml file to suit your organisation: Note that it appears all the settings are COMMENTED OUT by default<br /><blockquote><br /><br /><Configuration Product="ProPlus"><br /><br /> <Display Level="full" CompletionNotice="no" SuppressModal="yes" AcceptEula="yes" /><br /> <br /> <Logging Type="standard" Path="%temp%" Template="Microsoft Office Professional Plus Setup(*).txt" /> <br /> <br /> <PIDKEY Value="GXXXXXXXXXXXXXXXXXXXXXJ" /> <br /><br /> <USERNAME Value="%USERNAME%" /> <br /> <br /> <COMPANYNAME Value="MY COMPANY" /> <br /> <br /> <INSTALLLOCATION Value="%programfiles%\Microsoft Office" /> <br /> <br /> <LIS CACHEACTION="CacheOnly" /> <br /> <br /> <SOURCELIST Value="\\DEPLOYSERVER\INSTALL$\Deployment\Office2007ProfessionalPlus;\\server2\share\Office12" /> <br /> <br /> <DistributionPoint Location="\\DEPLOYSERVER\INSTALL$\Deployment\Office2007ProfessionalPlus" /> <br /> <br /> <!-- <OptionState Id="OptionID" State="absent" Children="force" /> --><br /> <br /> 
<!-- <Setting Id="Reboot" Value="IfNeeded" /> --><br /> <br /> <!-- <Command Path="msiexec.exe" Args="/i \\server\share\my.msi" QuietArg="/q" ChainPosition="after" Execute="install" /> --><br /><br /></Configuration><br /><br /></blockquote>RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-84108469229879503472009-04-29T02:23:00.000-07:002009-04-29T02:27:22.416-07:00Vista Home Premium updating from WSUS 3.0To get Vista Home Premium to update from WSUS...<br /><br />I simply exported the following keys from my Windows 2008 server:<br />-------------------------------------------<br /><br />Windows Registry Editor Version 5.00<br /><br />[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]<br />"WUServer"="http://MYWSUSSERVER:8530"<br />"WUStatusServer"="http://MYWSUSSERVER:8530"<br />"AcceptTrustedPublisherCerts"=dword:00000001<br /><br />[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]<br />"NoAutoUpdate"=dword:00000000<br />"AUOptions"=dword:00000004<br />"ScheduledInstallDay"=dword:00000000<br />"ScheduledInstallTime"=dword:00000003<br />"UseWUServer"=dword:00000001<br />"IncludeRecommendedUpdates"=dword:00000001<br />"AUPowerManagement"=dword:00000001<br />"DetectionFrequencyEnabled"=dword:00000001<br />"DetectionFrequency"=dword:00000003<br />"AutoInstallMinorUpdates"=dword:00000001<br />"RebootWarningTimeoutEnabled"=dword:00000001<br />"RebootWarningTimeout"=dword:00000005<br />"RescheduleWaitTimeEnabled"=dword:00000001<br />"RescheduleWaitTime"=dword:00000001<br /><br /><br />-------------------------------<br /><br />Reboot and done.RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-65721258615785357492009-04-28T15:12:00.001-07:002009-06-19T15:40:16.041-07:00FW: How to install and uninstall a program in Safe Mode - USEFUL TIP<div class="Section1"><p class="MsoNormal"><span 
style="color:blue;">From 4SYSOPS.COM</span></p><p class="MsoNormal"><span style="font-family:'Arial','sans-serif';font-size:10;color:blue;"> <?xml:namespace prefix = o /><o:p></o:p></span></p><p class="MsoNormal"><span style="font-family:'Arial','sans-serif';font-size:10;color:blue;"><o:p> </o:p></span></p><table class="MsoNormalTable" border="0" cellpadding="0"><tbody><tr><td style="PADDING-BOTTOM: 0.75pt; PADDING-LEFT: 0.75pt; PADDING-RIGHT: 0.75pt; PADDING-TOP: 0.75pt"><p><span style="font-family:'Calibri','sans-serif';">I've never really understood why uninstalling programs in Safe Mode isn't officially supported in Windows. The main purpose of Safe Mode is to troubleshoot Windows, and what usually causes the trouble? Right, misbehaving programs. This may not even be the fault of the program itself. Windows is a very complex system and sometimes unforeseeable things happen. If an application has been somehow damaged, it might not even be possible to uninstall it. For example, its service could hang immediately after the system boots, or other programs could interfere.<o:p></o:p></span></p><p><span style="font-family:'Calibri','sans-serif';">In <strong><span style="font-family:'Calibri','sans-serif';">Safe Mode</span></strong>, Windows has reduced functionality, because only the core components have been loaded. In such an environment it is much easier to get rid of an application that has gone mad. Windows Safe Mode can be entered by pressing the F8 key before Windows boots up.<o:p></o:p></span></p><p><span style="font-family:'Calibri','sans-serif';">In order to uninstall a program in Windows, the <strong><span style="font-family:'Calibri','sans-serif';">Windows Installer Service</span></strong> has to be running. If you try to uninstall software in Safe Mode, Windows will just inform you that: "The Windows Installer Service could not be started." 
Trying to start the service manually will only get you: "Error 1084: This service cannot be started in Safe Mode."<o:p></o:p></span></p><p><span style="font-family:'Calibri','sans-serif';">The good thing is that it is not really difficult to outsmart Windows Safe Mode. All of the services that are allowed to start in Safe Mode are stored under the registry key<strong><span style="font-family:'Calibri','sans-serif';"> HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SafeBoot\Minimal\</span></strong><o:p></o:p></span></p><p><span style="font-family:'Calibri','sans-serif';">All you have to do is to add a key named after the service name (not the display name) with a REG_SZ default value of "Service" (without quotes). The service name of the Windows Installer Service is <strong><span style="font-family:'Calibri','sans-serif';">MSIServer</span></strong>. As such, the REG file that adds the correct key looks like this:<o:p></o:p></span></p><p><span style="font-family:'Calibri','sans-serif';">Windows Registry Editor Version 5.00<o:p></o:p></span></p><p><span style="font-family:'Calibri','sans-serif';">[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SafeBoot\Minimal\MSIServer]<br />@="Service"<o:p></o:p></span></p><p><span style="font-family:'Calibri','sans-serif';">All you have to do is copy this to a text file, with the extension .reg, and drop the file into your tool box. Anytime you want to uninstall a program in Safe Mode, you just click on the REG file. You have to remove the key manually if you want to disable this feature. However, I think it usually won't do any harm.<o:p></o:p></span></p><p><span style="font-family:'Calibri','sans-serif';">Please note that it is not always possible to <strong><span style="font-family:'Calibri','sans-serif';">uninstall software in Safe Mode</span></strong> because the corresponding installer program requires certain services to be running. 
In such a case you might just enable these services as well in Safe Mode by adding their service names to the Registry. The Service Name can be found in the service's properties in the Services snap-in.<o:p></o:p></span></p></td></tr></tbody></table><p><span style="font-family:'Calibri','sans-serif';"><o:p> </o:p></span></p></div>RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-47532311596792117242009-04-02T15:22:00.001-07:002009-06-19T15:40:38.680-07:00Using Bit Locker on Virtual Machines<div class="Section1"><p class="MsoNormal"><span style="font-family:'Arial','sans-serif';font-size:10;">See this article:<?xml:namespace prefix = o /><o:p></o:p></span></p><p class="MsoNormal"><span style="font-family:'Arial','sans-serif';font-size:10;"><o:p> </o:p></span></p><p class="MsoNormal"><span style="font-family:'Arial','sans-serif';font-size:10;"><a href="http://blogs.msdn.com/virtual_pc_guy/archive/2008/01/23/using-bitlocker-under-virtual-pc-virtual-server.aspx">http://blogs.msdn.com/virtual_pc_guy/archive/2008/01/23/using-bitlocker-under-virtual-pc-virtual-server.aspx</a><o:p></o:p></span></p><p class="MsoNormal"><span style="font-family:'Arial','sans-serif';font-size:10;"><o:p> </o:p></span></p><p class="MsoNormal"><span 
style="font-family:'Arial','sans-serif';font-size:10;"><o:p> </o:p></span></p><p class="MsoNormal"><o:p> </o:p></p></div>RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-40807038458130291062009-02-25T15:55:00.001-08:002009-06-19T15:41:20.560-07:00Join Samba 3 to Your Active Directory Domain<div class="Section1"><h1><span style="font-size:12;">Always worth retrying….<?xml:namespace prefix = o /><o:p></o:p></span></h1><h1><span style="font-size:12;"><o:p> </o:p></span></h1><h1><span style="font-size:12;"><o:p> </o:p></span></h1><h1><span style="font-size:12;"><a href="http://www.enterprisenetworkingplanet.com/linux_unix/article.php/3487081/Join-Samba-3-to-Your--Active-Directory-Domain.htm">http://www.enterprisenetworkingplanet.com/linux_unix/article.php/3487081/Join-Samba-3-to-Your--Active-Directory-Domain.htm</a><o:p></o:p></span></h1><h1><span style="font-size:12;">Join Samba 3 to Your Active Directory Domain<o:p></o:p></span></h1><p>A popular thing to do with Samba these days is to join a Samba 3 host to a Windows Active Directory domain. You may freely set up any number of Samba servers in a Windows network without joining them to the domain. The advantages of domain membership are central management and authentication, and single sign-on. Using Winbind allows Linux clients to log on to the AD domain without requiring local Linux system accounts, which is a lovely time- and hassle-saver.<o:p></o:p></p><p>Presumably you already have a functioning Active Directory domain, and know how to run it. AD is very dependent on DNS (domain name system) so I'll assume your DNS house is also in order. On your Linux box you'll need Samba 3, version 3.0.8 or newer. Plus MIT Kerberos 5, version 1.3.1 or newer, and OpenLDAP. (The Samba documentation states that Heimdal Kerberos, version 0.6.3 or newer, also works. The examples in this article use MIT Kerberos.) 
Debian users need the <i>krb5-user, krb5-config, krb5-doc,</i> and <i>libkrb53</i> packages. Red Hat and Fedora users need the <i>krb5</i> and <i>krb5-client</i> RPMs.<o:p></o:p></p><p>First you should verify that your Samba installation has been compiled to support Kerberos, LDAP, Active Directory, and Winbind. Most likely it has, but you need to make sure. The <b>smbd</b> command has a switch for printing build information. You will see a lot more lines of output than are shown here:<o:p></o:p></p><p><tt><b><span style="font-size:10;">root@windbag:/usr/sbin# cd /usr/sbin </span></b></tt><b><span style="font-family:'Courier New';font-size:10;"><br /><tt>root@windbag:/usr/sbin# smbd -b | grep LDAP</tt></span></b><tt><span style="font-size:10;"> </span></tt><span style="font-family:'Courier New';font-size:10;"><br /><tt>HAVE_LDAP_H </tt><br /><tt>HAVE_LDAP </tt><br /><tt>HAVE_LDAP_DOMAIN2HOSTLIST </tt><br /><tt>... </tt><br /><tt><b>root@windbag:/usr/sbin# smbd -b | grep KRB</b> </tt><br /><tt>HAVE_KRB5_H </tt><br /><tt>HAVE_ADDRTYPE_IN_KRB5_ADDRESS </tt><br /><tt>HAVE_KRB5 </tt><br /><tt>... </tt><br /><tt><b>root@windbag:/usr/sbin# smbd -b | grep ADS</b> </tt><br /><tt>WITH_ADS </tt><br /><tt>WITH_ADS </tt><br /><tt><b>root@windbag:/usr/sbin# smbd -b | grep WINBIND</b> </tt><br /><tt>WITH_WINBIND </tt><br /><tt>WITH_WINBIND </tt></span><o:p></o:p></p><p>If you are in the unfortunate position of missing any of these, which will be indicated by a blank line, you need to recompile Samba. See <a href="http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/compiling.html">Chapter 37</a> of The Official Samba-3 HOWTO and Reference Guide.<o:p></o:p></p><h3>Configure and Test Kerberos<o:p></o:p></h3><p>Let's say our Active Directory domain server is <i>bigserver.domain.net</i>, and the Samba server is named <i>samba1</i>. 
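As an aside, the four smbd -b checks above can be rolled into a single loop. This is a sketch, not from the article: check_build is a hypothetical helper; on a real system you would invoke it as check_build smbd -b, and the demo call below just feeds it canned output so the sketch is self-contained.

```shell
# Sketch: verify all required Samba build options in one pass.
# Real use: check_build smbd -b
check_build() {
  build_opts=$("$@")            # capture the build-options listing
  for opt in HAVE_LDAP HAVE_KRB5 WITH_ADS WITH_WINBIND; do
    echo "$build_opts" | grep -q "$opt" || { echo "missing: $opt"; return 1; }
  done
  echo "all required options present"
}

# Demo with canned output instead of a live smbd:
check_build printf 'HAVE_LDAP\nHAVE_KRB5\nWITH_ADS\nWITH_WINBIND\n'
# prints: all required options present
```

On a box missing an option, the function names the first missing one and returns non-zero, which makes it easy to use from a provisioning script.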
This is the absolute minimum Kerberos configuration file, <i>/etc/krb5.conf</i>, for connecting to this domain: <tt><span style="font-size:10;"><o:p></o:p></span></tt></p><p><span style="font-family:'Courier New';font-size:10;">[libdefaults]<br /> default_realm = DOMAIN.NET</span><o:p></o:p></p><p><span style="font-family:'Courier New';font-size:10;">[realms]<br /> DOMAIN.NET = {<br /> kdc = bigserver.domain.net<br /> }<o:p></o:p></span></p><p><span style="font-family:'Courier New';font-size:10;">[domain_realm]<br /> .kerberos.server = DOMAIN.NET<o:p></o:p></span></p><p>Use uppercase where it shows. Now try to connect, and mind your cases:<o:p></o:p></p><p><tt><b><span style="font-size:10;"># kinit Administrator@DOMAIN.NET</span></b></tt><tt><span style="font-size:10;"> </span></tt><span style="font-family:'Courier New';font-size:10;"><br /><tt>Password for Administrator@DOMAIN.NET </tt></span><o:p></o:p></p><p><b>Configure /etc/hosts</b> <o:p></o:p></p><p>Even if your DNS servers are perfect in every way, it is a good idea to add important servers to your local <i>/etc/hosts</i> file. It speeds up lookups and provides a fallback in case the DNS servers go down:<o:p></o:p></p><p><tt><span style="font-size:10;">192.168.10.5 bigserver.domain.net bigserver </span></tt><o:p></o:p></p><h3>Configure Samba<o:p></o:p></h3><p>This example <b>smb.conf </b>shows a basic setup for a printer server and home shares. 
Shares are configured in the usual manner; only the <i>global</i> section changes when you join an AD domain.<o:p></o:p></p><p><tt><span style="font-size:10;"># Global parameters </span></tt><span style="font-family:'Courier New';font-size:10;"><br /><tt>[global] </tt><br /><tt>workgroup = BIGSERVER </tt><br /><tt>realm = DOMAIN.NET </tt><br /><tt>preferred master = no </tt><br /><tt>server string = Samba file and print server </tt><br /><tt>security = ADS </tt><br /><tt>encrypt passwords = yes </tt><br /><tt>log level = 3 </tt><br /><tt>log file = /var/log/samba/%m </tt><br /><tt>max log size = 50 </tt><br /><tt>winbind separator = + </tt><br /><tt>printcap name = cups </tt><br /><tt>printing = cups </tt><br /><tt>idmap uid = 10000-20000 </tt><br /><tt>idmap gid = 10000-20000 </tt><br /><br /><tt>[homes] </tt><br /><tt>comment = Home Directories </tt><br /><tt>valid users = %S </tt><br /><tt>read only = No </tt><br /><tt>browseable = No </tt><br /><br /><tt>[printers] </tt><br /><tt>comment = All Printers </tt><br /><tt>browseable = no </tt><br /><tt>printable = yes </tt><br /><tt>guest ok = yes </tt></span><o:p></o:p></p><p>The workgroup is the name of your AD domain. Server string is a comment describing the server; make this anything you want. Log level runs from 0, for no logging, to 10, extreme logging. See <b>man smb.conf</b> for the rest.<o:p></o:p></p><p>Save your changes and run<o:p></o:p></p><p><tt><b><span style="font-size:10;">$ testparm</span></b></tt><tt><span style="font-size:10;"> </span></tt><o:p></o:p></p><p>This checks <i>smb.conf</i> for syntax errors. Any errors must be corrected before going ahead. 
Then start up Samba:<o:p></o:p></p><p><tt><b><span style="font-size:10;"># /etc/init.d/samba start</span></b></tt><tt><span style="font-size:10;"> </span></tt><o:p></o:p></p><p>Finally, join your Samba machine to Active Directory:<o:p></o:p></p><p><tt><b><span style="font-size:10;"># net ads join -U Administrator</span></b></tt><tt><span style="font-size:10;"> </span></tt><span style="font-family:'Courier New';font-size:10;"><br /><tt>Administrator's password: </tt><br /><tt>Joined 'SAMBA1' to realm 'DOMAIN.NET.' </tt></span><o:p></o:p></p><p>Hurrah! Success. The Samba box will now appear as a machine account under "Computers" in your AD console. Now stop Samba until the final steps are completed.<o:p></o:p></p><h3>Enabling Winbind<o:p></o:p></h3><p>Debian users may need to install the <i>winbind</i> package separately. RPM users will find it in the <i>samba-common</i> RPM. First, edit <i>/etc/nsswitch.conf</i>. The first three lines are the most important; the others vary according to your system:<o:p></o:p></p><table class="MsoNormalTable" border="0" cellpadding="0"><tbody><tr><td style="PADDING-BOTTOM: 0.75pt; PADDING-LEFT: 0.75pt; PADDING-RIGHT: 0.75pt; PADDING-TOP: 0.75pt"><p class="MsoNormal">passwd: <span style="font-size:12;"><o:p></o:p></span></p></td><td style="PADDING-BOTTOM: 0.75pt; PADDING-LEFT: 0.75pt; PADDING-RIGHT: 0.75pt; PADDING-TOP: 0.75pt"><p class="MsoNormal">compat winbind <span style="font-size:12;"><o:p></o:p></span></p></td></tr><tr><td style="PADDING-BOTTOM: 0.75pt; PADDING-LEFT: 0.75pt; PADDING-RIGHT: 0.75pt; PADDING-TOP: 0.75pt"><p class="MsoNormal">group: <span style="font-size:12;"><o:p></o:p></span></p></td><td style="PADDING-BOTTOM: 0.75pt; PADDING-LEFT: 0.75pt; PADDING-RIGHT: 0.75pt; PADDING-TOP: 0.75pt"><p class="MsoNormal">compat winbind <span style="font-size:12;"><o:p></o:p></span></p></td></tr><tr><td style="PADDING-BOTTOM: 0.75pt; PADDING-LEFT: 0.75pt; PADDING-RIGHT: 0.75pt; PADDING-TOP: 0.75pt"><p class="MsoNormal">shadow: 
<span style="font-size:12;"><o:p></o:p></span></p></td><td style="PADDING-BOTTOM: 0.75pt; PADDING-LEFT: 0.75pt; PADDING-RIGHT: 0.75pt; PADDING-TOP: 0.75pt"><p class="MsoNormal">compat <span style="font-size:12;"><o:p></o:p></span></p></td></tr><tr><td style="PADDING-BOTTOM: 0.75pt; PADDING-LEFT: 0.75pt; PADDING-RIGHT: 0.75pt; PADDING-TOP: 0.75pt"><p class="MsoNormal">hosts: <span style="font-size:12;"><o:p></o:p></span></p></td><td style="PADDING-BOTTOM: 0.75pt; PADDING-LEFT: 0.75pt; PADDING-RIGHT: 0.75pt; PADDING-TOP: 0.75pt"><p class="MsoNormal">files dns wins <span style="font-size:12;"><o:p></o:p></span></p></td></tr><tr><td style="PADDING-BOTTOM: 0.75pt; PADDING-LEFT: 0.75pt; PADDING-RIGHT: 0.75pt; PADDING-TOP: 0.75pt"><p class="MsoNormal">networks: <span style="font-size:12;"><o:p></o:p></span></p></td><td style="PADDING-BOTTOM: 0.75pt; PADDING-LEFT: 0.75pt; PADDING-RIGHT: 0.75pt; PADDING-TOP: 0.75pt"><p class="MsoNormal">files dns <span style="font-size:12;"><o:p></o:p></span></p></td></tr><tr><td style="PADDING-BOTTOM: 0.75pt; PADDING-LEFT: 0.75pt; PADDING-RIGHT: 0.75pt; PADDING-TOP: 0.75pt"><p class="MsoNormal">protocols: <span style="font-size:12;"><o:p></o:p></span></p></td><td style="PADDING-BOTTOM: 0.75pt; PADDING-LEFT: 0.75pt; PADDING-RIGHT: 0.75pt; PADDING-TOP: 0.75pt"><p class="MsoNormal">db files <span style="font-size:12;"><o:p></o:p></span></p></td></tr><tr><td style="PADDING-BOTTOM: 0.75pt; PADDING-LEFT: 0.75pt; PADDING-RIGHT: 0.75pt; PADDING-TOP: 0.75pt"><p class="MsoNormal">services: <span style="font-size:12;"><o:p></o:p></span></p></td><td style="PADDING-BOTTOM: 0.75pt; PADDING-LEFT: 0.75pt; PADDING-RIGHT: 0.75pt; PADDING-TOP: 0.75pt"><p class="MsoNormal">db files <span style="font-size:12;"><o:p></o:p></span></p></td></tr><tr><td style="PADDING-BOTTOM: 0.75pt; PADDING-LEFT: 0.75pt; PADDING-RIGHT: 0.75pt; PADDING-TOP: 0.75pt"><p class="MsoNormal">ethers: <span style="font-size:12;"><o:p></o:p></span></p></td><td style="PADDING-BOTTOM: 
0.75pt; PADDING-LEFT: 0.75pt; PADDING-RIGHT: 0.75pt; PADDING-TOP: 0.75pt"><p class="MsoNormal">db files <span style="font-size:12;"><o:p></o:p></span></p></td></tr><tr><td style="PADDING-BOTTOM: 0.75pt; PADDING-LEFT: 0.75pt; PADDING-RIGHT: 0.75pt; PADDING-TOP: 0.75pt"><p class="MsoNormal">rpc: <span style="font-size:12;"><o:p></o:p></span></p></td><td style="PADDING-BOTTOM: 0.75pt; PADDING-LEFT: 0.75pt; PADDING-RIGHT: 0.75pt; PADDING-TOP: 0.75pt"><p class="MsoNormal">db files <span style="font-size:12;"><o:p></o:p></span></p></td></tr></tbody></table><p>Save your changes, and fire up winbind and Samba:<o:p></o:p></p><p><tt><b><span style="font-size:10;"># /etc/init.d/winbind start </span></b></tt><b><span style="font-family:'Courier New';font-size:10;"><br /><tt># /etc/init.d/samba start </tt></span></b><o:p></o:p></p><p>Now verify that winbind is working. These commands pull lists of users and groups from the AD domain controller:<o:p></o:p></p><p><tt><b><span style="font-size:10;"># wbinfo -u</span></b></tt><tt><span style="font-size:10;"> </span></tt><span style="font-family:'Courier New';font-size:10;"><br /><tt>BIGSERVER+Administrator </tt><br /><tt>BIGSERVER+Guest </tt><br /><tt>BIGSERVER+cschroder </tt><br /><tt>BIGSERVER+mhall </tt><br /><tt><b># wbinfo -g</b> </tt><br /><tt>BIGSERVER+Domain Computers </tt><br /><tt>BIGSERVER+Domain Admins </tt><br /><tt>BIGSERVER+Domain Guests </tt><br /><tt>BIGSERVER+Domain Users </tt></span><o:p></o:p></p><p>This command verifies that logins and passwords are coming from the AD server, and not the local machine:<o:p></o:p></p><p><tt><b><span style="font-size:10;"># getent passwd</span></b></tt><tt><span style="font-size:10;"> </span></tt><span style="font-family:'Courier New';font-size:10;"><br /><tt>BIGSERVER+cschroder:x:1000:1000:,,,:/home/BIGSERVER/cschroder:/bin/bash </tt></span><o:p></o:p></p><p>If winbind is not working and local authentication is still active, the entries will not have the BIGSERVER+ prefix. 
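Because of the winbind separator = + setting above, domain accounts surface through NSS as DOMAIN+user. As a small illustration (the sample lines below are invented, modelled on the getent output above; real output needs winbind running), domain entries can be picked out of passwd-style output by looking for that separator:

```shell
# Sample passwd-style lines: one winbind (domain) entry, one local one.
sample='BIGSERVER+cschroder:x:10000:10000::/home/BIGSERVER/cschroder:/bin/bash
localadmin:x:1001:1001::/home/localadmin:/bin/bash'

# Print only entries whose username contains the '+' winbind separator.
printf '%s\n' "$sample" | awk -F: 'index($1, "+") {print "domain account:", $1}'
```

This prints only the domain entry: "domain account: BIGSERVER+cschroder".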
Finally, as root run <b>net ads info</b> to display the AD server information.<o:p></o:p></p><h3>Troubleshooting<o:p></o:p></h3><p>If you've gotten this far and everything works, your Samba server is now a fully-fledged member of your Active Directory domain, and can be managed like any other AD object. A nice bonus is that you may have local Linux accounts on the Samba box that are not visible in Active Directory, which means your Samba admins can SSH directly into the Samba server for admin chores, and not have to fuss with AD roadblocks.<o:p></o:p></p><p>A good troubleshooting guide is chapter 9 of "Samba-3 by Example" (<a href="http://samba.org/samba/docs/man/Samba-Guide/index.html">Adding UNIX/LINUX Servers and Clients</a>). Also refer to chapter 12 (<a href="http://samba.org/samba/docs/man/Samba-HOWTO-Collection/index.html">Identity Mapping</a>) of "The Official Samba-3 HOWTO and Reference Guide" to learn about winbind in greater depth.<o:p></o:p></p><p class="MsoNormal"><o:p> </o:p></p></div><pre> </pre>RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-86296619467129638472009-02-11T14:12:00.001-08:002009-06-19T15:41:58.698-07:00A cool way to block internet access to certain users / machines<div class="Section1"><p style="MARGIN-BOTTOM: 12pt; MARGIN-LEFT: 0cm; MARGIN-RIGHT: 0cm; mso-margin-top-alt: 0cm">http://www.smart-x.com/?CategoryID=171&ArticleID=149<?xml:namespace prefix = o /><o:p></o:p></p><p style="MARGIN-BOTTOM: 12pt; MARGIN-LEFT: 0cm; MARGIN-RIGHT: 0cm; mso-margin-top-alt: 0cm"><o:p> </o:p></p><p style="MARGIN-BOTTOM: 12pt; MARGIN-LEFT: 0cm; MARGIN-RIGHT: 0cm; mso-margin-top-alt: 0cm"><o:p> </o:p></p><p style="MARGIN-BOTTOM: 12pt; MARGIN-LEFT: 0cm; MARGIN-RIGHT: 0cm; mso-margin-top-alt: 0cm">As a system administrator, you might find it useful to block internet access for certain users<br /><br />and / or machines, but in many cases, you do want to allow access to several specific 
web sites.<br /><br />This article shows an alternative way of doing it without using ISA, Firewall applications,<br /><br />IPSec and other complex solutions.<br /><br />The first thing you want to do is create a simple HTML document which says<br /><br />'Internet access is forbidden… blah blah blah'.<br /><br />You can use MS Word or simple Notepad to create such HTML file and save it somewhere<br /><br />under the name 'Default.htm'.<br /><br />The next step would be to publish this HTML document on one of your IIS servers.<br /><br />You should use a dedicated web site which listens on some unused TCP port for this.<br /><br />You can use any IIS server (or other OS) for publishing the HTML document.<br /><br />However, I used IIS7 for enumerating the steps:<br /><br />1. Create a folder on the IIS server and assign read access to the server's computer account<br /><br /> in the domain. (For example, if your server name is 'IISSRV01', assign to<br /><br /> <img id="Picture_x0020_1" alt="http://www.smart-x.com/_uploads/imagesgallery/text1.jpg" src="cid:image001.jpg@01C98D02.C3206160" width="193" height="23" /><span style="font-family:'Calibri','sans-serif';font-size:11;">read permissions on the folder.<br /><br /> <img id="Picture_x0020_2" alt="http://www.smart-x.com/_uploads/imagesgallery/pic1.jpg" src="cid:image002.jpg@01C98D02.C3206160" width="368" height="478" /></span><o:p></o:p></p><p style="MARGIN-BOTTOM: 12pt; MARGIN-LEFT: 0cm; MARGIN-RIGHT: 0cm; mso-margin-top-alt: 0cm"><br />2. Copy the 'default.htm' file you created to this directory.<br /><br />3. Open Internet Information Services (IIS) manager (Shortcut: Start --> Run --> inetmgr)<br /><br />4. On the left pane, Expand <img id="Picture_x0020_3" alt="http://www.smart-x.com/_uploads/imagesgallery/text2.jpg" src="cid:image003.jpg@01C98D02.C3206160" width="77" height="15" />.<br /><br />5. Right click 'Sites' and choose 'Add new web site…'<br /><br /> a. 
Type 'InternetForbidden' in the 'Site Name' text box<br /><br /> b. Under the 'Physical Path' text box, type the path to the directory you copied<br /> the 'default.htm' to.<o:p></o:p></p><p style="MARGIN-BOTTOM: 12pt; MARGIN-LEFT: 0cm; MARGIN-RIGHT: 0cm; mso-margin-top-alt: 0cm"> c. Under the 'Port' text box, type any available TCP port number, higher than 1025.<br /> For example: '8765'<o:p></o:p></p><p style="MARGIN-BOTTOM: 12pt; MARGIN-LEFT: 0cm; MARGIN-RIGHT: 0cm; mso-margin-top-alt: 0cm"> <img id="Picture_x0020_4" alt="http://www.smart-x.com/_uploads/imagesgallery/pic2.jpg" src="cid:image004.jpg@01C98D02.C3206160" width="448" height="441" /><br /><br /> d. Click 'OK' to save the web site. If your newly added web site appears with a<br /> red X next to it, click 'Sites' and refresh the display by pressing the 'F5' key.<br /> At this point, your new site should appear with a little 'Earth' icon, meaning<br /> everything is fine.<br /> e. In order to test your settings, try to browse to the web site by typing the<br /> following address in the Internet Explorer Address bar of one of your<br /> client machines: <img id="Picture_x0020_5" alt="http://www.smart-x.com/_uploads/imagesgallery/text3.jpg" src="cid:image011.jpg@01C98D02.C6417780" width="232" height="18" /><br /> If everything has worked so far, continue to the next stage.<br /><br />The next stage would be to set this web site address as a proxy server for those<br />users / machines you want to restrict. There are many ways to apply these settings to clients.<br />In this article, I will go through the steps of configuring the proxy address through Group Policy.<o:p></o:p></p><p style="MARGIN-BOTTOM: 12pt; MARGIN-LEFT: 0cm; MARGIN-RIGHT: 0cm; mso-margin-top-alt: 0cm">1. Create a security group that will include all user / computer accounts which should be restricted.<br /><br />2. 
Start Group Policy Management (Shortcut: Start --> Run --> GPMC.msc)<br /><br /> If you don't have GPMC installed, it is about time you install it! - <a href="http://www.microsoft.com/downloads/details.aspx?familyid=0a6d4c24-8cbd-4b35-9272-dd3cbfc81887&displaylang=en" target="_blank">http://www.microsoft.com/downloads/details.aspx?familyid=0a6d4c24-8cbd-4b35-9272-dd3cbfc81887&displaylang=en</a><br /><br />3. On the left pane, select the OU which contains the user / computer accounts which you want<br /> to restrict.<br /><br />4. Right click the selected OU and choose 'Create and link a GPO here…'<br /><br />5. Type a name for the GPO and click 'OK'<o:p></o:p></p><p style="MARGIN-BOTTOM: 12pt; MARGIN-LEFT: 0cm; MARGIN-RIGHT: 0cm; mso-margin-top-alt: 0cm"> <img id="Picture_x0020_6" border="0" alt="http://www.smart-x.com/_uploads/imagesgallery/pic3.jpg" src="cid:image007.jpg@01C98D02.C3206160" width="498" height="352" /><br /><br />6. On the left pane, click on the newly created GPO.<br />7. On the lower part of the right pane, click the 'Authenticated Users' group<br /> (under 'Security Filtering') and click 'Remove'. Click 'OK' to approve.<br />8. Click 'Add…' and browse to select the security group you created in the first step.<br /><br /> <img id="Picture_x0020_7" border="0" alt="http://www.smart-x.com/_uploads/imagesgallery/pic4.jpg" src="cid:image008.jpg@01C98D02.C3206160" width="493" height="468" /><br /><br />9. On the left pane, right click the GPO and click 'Edit…'<br /><br />10. On the left pane, Expand 'User Configuration' --> 'Windows Settings' --><br /> 'Internet Explorer Maintenance' --> 'Connections'<o:p></o:p></p><p style="MARGIN-BOTTOM: 12pt; MARGIN-LEFT: 0cm; MARGIN-RIGHT: 0cm; mso-margin-top-alt: 0cm">11. On the right pane, double click 'Proxy Settings'<br /><br />12. Check 'Enable Proxy Settings'<br /><br />13. In the 'Address of proxy' text box, type the address of the web site you created<br /> at the beginning of the article. 
In the 'Port' text box, type the port of your web site<br /> (In this example – 8765)<br /><br /> <a href="http://www.smart-x.com/_Uploads/dbsAttachedFiles/pic5.jpg"><span style="TEXT-DECORATION: none;color:blue;" ><img id="Picture_x0020_8" border="0" alt="http://www.smart-x.com/_uploads/imagesgallery/pic5.jpg" src="cid:image012.jpg@01C98D02.C6417780" width="719" height="523" /></span></a><br /><br />14. If you have URLs of sites which should not be restricted, type the URLs in the<br /> 'Exceptions' list.<o:p></o:p></p><p style="MARGIN-BOTTOM: 12pt; MARGIN-LEFT: 0cm; MARGIN-RIGHT: 0cm; mso-margin-top-alt: 0cm">15. Click 'OK'<o:p></o:p></p><p style="MARGIN-BOTTOM: 12pt; MARGIN-LEFT: 0cm; MARGIN-RIGHT: 0cm; mso-margin-top-alt: 0cm">16. On the left pane, Expand 'Administrative Templates' --> 'Windows Components' --><br /> 'Internet Explorer'.<br /><br />17. On the right pane, double click 'Disable Changing proxy settings', change to 'Enabled'<br /> and click 'OK'.<br /><br />18. If you are restricting computer accounts (and not user accounts), meaning that the<br /> OU you selected in step #3 contains the computer accounts and that the security<br /> group you created in step #1 contains computer accounts, perform the following tasks:<br /> a. On the left pane, Expand 'Computer Configuration' --><br /> 'Administrative Templates' --> 'System' --> 'Group Policy'.<br /> b. On the right pane, double click 'User Group Policy loopback processing mode',<br /> choose 'Enabled', select 'Merge' and click 'OK'.<br />19. That's it! You can now close the Group Policy Object Editor and the<br /> Group Policy Management Console and test your settings.<br />Note that group membership is updated at logon, so you will need your clients to log off<br />and back on in order to be restricted. If you are applying the GPO on a group of<br />computer accounts, the client computer should be restarted in order for the<br />computer account's group membership to be applied. 
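For completeness, the blocking page from the very beginning of the article needs nothing more than a few lines of static HTML. A sketch (the wording and path here are just examples, not from the original article):

```shell
# Generate a minimal 'access forbidden' page like the one published in IIS.
cat > /tmp/Default.htm <<'EOF'
<html>
  <body>
    <h1>Internet access is forbidden</h1>
    <p>Contact the IT department if you need access to a specific site.</p>
  </body>
</html>
EOF
# Sanity check: the heading made it into the file.
grep -c '<h1>' /tmp/Default.htm
```

The grep count printed is 1, confirming the heading was written.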
<o:p></o:p></p><p class="MsoNormal"><o:p> </o:p></p><p class="MsoNormal"><o:p> </o:p></p></div>RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-2384314747416775082009-02-11T14:07:00.001-08:002009-06-19T15:42:29.098-07:00Why some applications malfunction when one of the Domain Controllers is down?<div class="Section1"><p class="MsoNormal"><a href="http://www.smart-x.com/?CategoryID=171&ArticleID=165">http://www.smart-x.com/?CategoryID=171&ArticleID=165</a><?xml:namespace prefix = o /><o:p></o:p></p><p class="MsoNormal"><o:p> </o:p></p><p class="MsoNormal"><o:p> </o:p></p><p style="TEXT-ALIGN: center" align="center"><strong><span lang="EN">Why some applications malfunction when one of the Domain Controllers is down? </span></strong><b><span lang="EN"><br /><strong>- Or -</strong><br /><strong>How to switch to a disaster recovery site without rebooting my clients</strong></span></b><span lang="EN"><o:p></o:p></span></p><p><span lang="EN">Every Sysadmin installs at least two Domain Controllers in his domain for redundancy and<br />fault tolerance. But what actually happens when one of the DCs is down?<br /><br />If you run a simple test and disconnect one of your DCs from the network, you'll see that about<br />half of the workstations and member servers that haven't rebooted since the DC went down are<br />experiencing problems such as sluggishness and performance issues, and some of the<br />applications simply stop working. The reason for that is the way Netlogon works.<br /><br />Netlogon is the process which is responsible, among other tasks, for detecting the Active Directory<br />environment and the closest DC. The detection process is called the DC Locator.<br /><br />It is implemented in Netapi32.dll in a function named DsGetDcName and invoked by the<br />Netlogon service when the service starts. The DC Locator process sends a request to all<br />Domain Controllers in the domain and waits for them to respond. 
Once they respond,<br />Netlogon caches the details of the Domain Controller that was first to answer.<br />From that moment, every call made by any application to the DsGetDcName<br />function returns this DC.<br /><br />The DC Locator process does not re-check the availability of the cached DC periodically.<br />Therefore, if this DC is gone for any reason, workstations and member servers that have already<br />cached this DC remain with the faulty cache until the workstation is rebooted. As a result,<br />any application that needs to access the DC (and calls DsGetDcName for it) receives the<br />failed DC and is expected to have problems when trying to connect to it.<br /><br />In recent years, fault tolerance has become an essential requirement in many organizations.<br />Many enterprises implement expensive disaster recovery sites, buy expensive clusters and<br />replicate data to at least one additional location.<br /><br />When disaster does strike and the main site goes down, this limitation will cause<br />you lots of trouble until you reboot your entire organization.<o:p></o:p></span></p><p class="MsoNormal"><o:p> </o:p></p></div>RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-41297423032675212672009-02-10T14:16:00.001-08:002009-06-19T15:42:57.643-07:00Setting Windows Time on PDC emulator using w32tm<div class="Section1"><p><span style="font-family:Symbol;">·</span> Type the following command to configure the PDC emulator and then press ENTER:<?xml:namespace prefix = o /><o:p></o:p></p><p><strong>w32tm /config /manualpeerlist:</strong><em>peers</em><strong> /syncfromflags:manual /reliable:yes /update</strong><o:p></o:p></p><p>where <em>peers</em> specifies the list of DNS names and/or IP addresses of the NTP time sources that the PDC emulator synchronizes from. For example, you can specify time.windows.com. 
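A concrete invocation using that example peer, plus a quoted multi-peer variant (the pool.ntp.org names are illustrative NTP sources, not from the original article):

```
w32tm /config /manualpeerlist:time.windows.com /syncfromflags:manual /reliable:yes /update

:: Multiple peers: space-delimited, enclosed in quotation marks
w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
```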
When specifying multiple peers, use a space as the delimiter and enclose them in quotation marks.<o:p></o:p></p><p><o:p> </o:p></p><p>Then run:<o:p></o:p></p><p><o:p> </o:p></p><p>w32tm /resync<o:p></o:p></p></div>RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-64133733047310823002009-01-27T19:40:00.001-08:002009-01-27T19:40:45.322-08:00WDS and server 2008 - WDS and DHCP on separate boxes.<div class=Section1> <p class=MsoNormal>NB: After installing WDS – REBOOT!!!!<o:p></o:p></p> <p class=MsoNormal><o:p> </o:p></p> <p class=MsoNormal><a href="http://www.infotechguyz.com/server2008/wds.html">http://www.infotechguyz.com/server2008/wds.html</a><o:p></o:p></p> <p class=MsoNormal><o:p> </o:p></p> <p class=MsoNormal><o:p> </o:p></p> <p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><b><span style='font-size:18.0pt;font-family:"Times New Roman","serif"'>Free system imaging solution? Server 2008 Windows Deployment Services (WDS)<o:p></o:p></span></b></p> <p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><span style='font-size:12.0pt;font-family:"Times New Roman","serif"'>Server 2008 Windows Deployment Services (WDS) replaces Remote <br> Installation Services (RIS) offered in Windows Server 2003 and 2000. WDS uses PXE and TFTP to boot clients from the WDS server. <o:p></o:p></span></p> <p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><span style='font-size:12.0pt;font-family:"Times New Roman","serif"'>The main difference between Windows Deployment Services (WDS) and other imaging solutions like Ghost is that WDS uses a file-based imaging format where others are sector-based. The WIM format uses a single-instance store, in which each unique file is stored once and referenced multiple times. As a result, images are a lot smaller. 
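The single-instance idea can be sketched in a few shell lines (illustrative only; this is a checksum demo, not the actual WIM on-disk format): two files with identical contents produce one unique checksum, so a file-based store only needs to keep one copy however many times it is referenced.

```shell
# Two files, identical payload.
work=$(mktemp -d)
printf 'identical payload' > "$work/copy1.bin"
printf 'identical payload' > "$work/copy2.bin"

# Count distinct content checksums: two references, one stored instance.
cksum "$work/copy1.bin" "$work/copy2.bin" | awk '{print $1}' | sort -u | wc -l
```

The count printed is 1, even though two files exist.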
<o:p></o:p></span></p> <p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><span style='font-size:12.0pt;font-family:"Times New Roman","serif"'>Windows Deployment Services (WDS) supported OSes:<br> - Windows XP<br> - Windows Server 2003<br> - Windows Vista<br> - Windows Server 2008<o:p></o:p></span></p> <p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><span style='font-size:12.0pt;font-family:"Times New Roman","serif"'> <o:p></o:p></span></p> <p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><b><span style='font-size:13.5pt;font-family:"Times New Roman","serif"'>How to install and configure Server 2008 Windows Deployment Services (WDS)<o:p></o:p></span></b></p> <p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><span style='font-size:12.0pt;font-family:"Times New Roman","serif"'>Server 2008 Windows Deployment Services (WDS) prerequisites<o:p></o:p></span></p> <p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><span style='font-size:12.0pt;font-family:"Times New Roman","serif"'>- WDS server must be a member server of an Active Directory domain<br> - DHCP must be configured for PXE boot to work<br> - DNS, you will mostly have this.<br> - OS media <br> - NTFS partition on the WDS server<br> - Server 2008<o:p></o:p></span></p> <p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><span style='font-size:12.0pt;font-family:"Times New Roman","serif"'>To install Windows Deployment Services (WDS) on Server 2008<o:p></o:p></span></p> <p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><span style='font-size:12.0pt;font-family:"Times New Roman","serif"'>open <b>server manager</b> > Click on <b>Add Roles</b> link > click <b>Next</b> > on the <b>Select Server Roles</b> screen, select<b> Windows Deployment Services</b>, and <br> then click <b>Next</b>.<o:p></o:p></span></p> <p class=MsoNormal 
style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><span style='font-size:12.0pt;font-family:"Times New Roman","serif"'>On the Role Services screen, verify that Deployment Server and Transport <br> Server are checked; then click Next, then click Install.<o:p></o:p></span></p> <p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><b><span style='font-size:12.0pt;font-family:"Times New Roman","serif"'>Start</span></b><span style='font-size:12.0pt;font-family:"Times New Roman","serif"'> > <b>Administrative Tools</b> ><b> Windows Deployment Services</b> to access<br> the <b>Windows Deployment Services Management console</b>.<br> Choose the path to where images will be stored.<br> Configure PXE Server settings, choose “Respond to all”, and click Finish.<o:p></o:p></span></p> <p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><b><span style='font-size:13.5pt;font-family:"Times New Roman","serif"'>Add a Boot Image to WDS Server<o:p></o:p></span></b></p> <p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><span style='font-size:12.0pt;font-family:"Times New Roman","serif"'>The boot image is the image used for the pre-installation OS (also known as the boot OS) and is delivered via PXE boot. <br> <br> 1. Start > Administrative Tools > Windows Deployment Services to access<br> the Windows Deployment Services Management console<br> 2. Right click the Boot Images node. Then click Add Boot Image<br> 3. Click Browse to locate the boot image you wish to add. (Use the Boot.wim from the Windows Server 2008 installation DVD)<br> 4. 
Once completed, you should be able to see this image when you perform a PXE boot.<o:p></o:p></span></p> <p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><b><span style='font-size:13.5pt;font-family:"Times New Roman","serif"'><br> Create a Capture Boot Image<o:p></o:p></span></b></p> <p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><span style='font-size:12.0pt;font-family:"Times New Roman","serif"'>A capture boot image is a boot image used when capturing images. You will use a capture image to boot a server/client and capture its image into a .wim file. You can create a capture boot image by using the Boot.wim from the Windows Server 2008 installation DVD.<br> <br> 1. Start > Administrative Tools > Windows Deployment Services to access<br> the Windows Deployment Services Management console.<br> 2. Expand the Boot Images node<br> 3. Right click the image you added earlier (See step 2 from Add a Boot Image to WDS Server)<br> 4. Click Create Capture Boot Image<br> 5. Once completed, click Finish.<br> 6. Right click the Boot Images folder, choose "Add Boot Image"<br> 7. Select the capture boot image we just created and click Next<br> 8. Once completed, you should be able to use this boot image to capture Operating System images<o:p></o:p></span></p> <p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><b><span style='font-size:13.5pt;font-family:"Times New Roman","serif"'><br> Create an Install Image (create an image)<o:p></o:p></span></b></p> <p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><span style='font-size:12.0pt;font-family:"Times New Roman","serif"'>An install image includes the OS, custom applications and settings. It is most likely that you will have an install image for every OS you support. <br> <br> 1. Create a base computer (a computer that includes the OS, custom applications and settings).<br> 2. 
Install sysprep.exe (if you are using Windows 2003 or XP, you can find it in Deploy.cab on the installation CD); note: sysprep is included by default in Server 2008<br> 3. Run sysprep.exe on the base computer (on XP, sysprep -mini -reseal -forceshutdown)<br> 4. Verify that the base computer is connected to the network and powered on<br> 5. Perform a network boot (often you can do this with the F12 key)<br> 6. In the boot menu screen, select the capture boot image that you created earlier<br> 7. Choose the source drive and enter a name and description for the image. Click Next. (Note: only sysprepped drives will appear)<br> 8. Choose "Browse" to select a destination for the image. Enter a name and click "Save", then select "Upload image to WDS Server"<br> 9. Enter the name of the WDS server, and then click Connect.<br> 10. Provide a user name and password if prompted<br> 11. Select the "Image Group" from the list<br> 12. Click "Finish"<br> 13. Now, you should be able to install this image to a server/client via PXE boot.<o:p></o:p></span></p> <p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><b><span style='font-size:13.5pt;font-family:"Times New Roman","serif"'><br> Install an Install Image (restore an image)<o:p></o:p></span></b></p> <p class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto'><span style='font-size:12.0pt;font-family:"Times New Roman","serif"'>This process restores the Install Image we created earlier. <br> <br> 1. Configure your BIOS to enable PXE boot (aka Network Boot)<br> 2. Perform a network boot (usually by pressing F12)<br> 3. Select the boot image from the boot menu.<br> 4. WDS will load the computer into the GUI; follow the wizard.<o:p></o:p></span></p> <p class=MsoNormal><o:p> </o:p></p> </div> <pre> </pre>RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-8036540932359164682009-01-17T19:38:00.000-08:002009-01-17T19:47:47.074-08:00What should have been a joy became a disaster....So I got myself a new monitor - A samsung 32" 1080p LCD (cost $150 NZD more than a comparable 26" monitor after some strong negotiations) so a great option, and the desktop real estate is HUGE!<br /><br />Anyway, current ATI drivers for the onboard chipset (790G) couldn't scale to 1080p, cue several frustrating hours of installing and uninstalling the latest ATI drivers (8.12) for Windows Server 2008 x64 (which is my 'PC' and HyperV host....you can see where this is heading...)<br /><br />No dice, just cannot get the server to recognise the driver install.<br /><br />Desperate times so I rebuilt the system thinking HyperV VMs will easily restart with minimal fuss. Oh. No. They. Won't.<br /><br />Monitor looks great now with older drivers off the DVD that came with the mobo so start VMs (discover you need to create NEW > pick your old VHD file). 
All systems (including a DC) started OK but weird things started happening - getting unauthenticated in the network centre, needing to rework the DC. End result was drop VMs from the domain and re-add.<br /><br />Unfortunately on my Kerio mail server this blew away ALL my mail - could not find it at all.<br /><br />Arrrgh!<br /><br />After much googling found some help here:<br /><br />http://jigar-mehta.blogspot.com/2008/02/how-do-i-extract-file-from-virtual-hard.html<br /><br />Identified that I actually had snapshots (AVHD) of my mail server disk that were a lot bigger than the original VHD so using winimage was able to open these up and extract most of my email.<br /><br />HyperV - you b@stard!<br /><br />My fault I'm sure but you gotta learn somewhere the old adage, "There are 2 kinds of people in the world, those who backup and those who will 
backup"RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-66852353751230577222008-11-23T14:09:00.000-08:002008-11-23T14:10:42.565-08:00Deduplication - bit of a showdownLooking at data de-duplication today:<br /><br />This article contains a summary by most of the industry players so I'm going to put it here in full in case it gets lost from drunkendata.com:<br /><br /><br /><div id="page"> <div id="content" class="widecolumn"> <div class="post" id="post-1692"> <h2>Invitation to De-Duplication Vendors</h2> <div class="entry"> <p>There are some questions I would like to get answers to in the area of De-Duplication. I am hoping that some of the vendor readers of this blog will help out.</p> <p>Here is an opportunity to shine, folks, and to tell the world why, what, how and where. Here is the question list. You can either respond on line through comment cut and paste or email me your response at <a href="mailto:jtoigo@toigopartners.com">jtoigo@toigopartners.com</a> and I will put your response on-line for you. From where I am sitting, these are the kinds of questions that consumers would ask.</p> <p>1. Please provide the name of your company and the de-dupe product(s) you sell. Please summarize what you think are the key values and differentiators of your wares.</p> <p>2. InfoPro has said that de-dupe is the number one technology that companies are seeking today — well ahead of even server or storage virtualization. 
Is there any appeal beyond squeezing more undifferentiated data into the storage junk drawer? </p> <p>3. Every vendor seems to have its own secret sauce de-dupe algorithm and implementation. One, Diligent Technologies (just acquired by IBM), claims that theirs is best because it collapses two functions — de-dupe then ingest — into one in-line function, achieving great throughput in the process. What should be the gating factors in selecting the right de-dupe technology?</p> <p>4. Despite the nuances, it seems that all block level de-dupe technology does the same thing: removes bit string patterns and substitutes a stub. Is this technically accurate or does your product do things differently?</p> <p>5. De-dupe is changing data. To return data to its original state (pre-de-dupe) seems to require access to the original algorithm plus stubs/pointers to bit patterns that have been removed to deflate data. If I am correct in this assumption, please explain how data recovery is accomplished if there is a disaster. Do I need to backup your wares and store them off site, or do I need another copy of your appliance or software at a recovery center?</p> <p>6. De-dupe changes data. Is there any possibility that this will get me into trouble with the regulators or legal eagles when I respond to a subpoena or discovery request? Does de-dupe conflict with the nonrepudiation requirements of certain laws?</p> <p>7. Some say that de-dupe obviates the need for encryption. What do you think?</p> <p>8. Some say that de-duped data is inappropriate for tape backup, that data should be re-inflated prior to write to tape. Yet, one vendor is planning to enable an “NDMP-like” tape backup around his de-dupe system at the request of his customers. Is this smart?</p> <p>9. Some vendors are claiming de-dupe is “green” — do you see it as such?</p> <p>10. 
De-dupe and VTL seem to be joined at the hip in a lot of vendor discussions: Use de-dupe to store a lot of archival data on line in less space for fast retrieval in the event of the accidental loss of files or data sets on primary storage. Are there other applications for de-duplication besides compressing data in a nearline storage repository?</p> <p>11. Just suggested by a reader: What do you see as the advantages/disadvantages of software based deduplication vs. hardware (chip-based) deduplication? Will this be a differentiating feature in the future… especially now that Hifn is pushing their Compression/DeDupe card to OEMs?</p> <p>Thanks in advance for your response.</p> <p class="postmetadata alt"> <small> This entry was posted on Monday, April 21st, 2008 at 12:15 pm and is filed under <a href="http://www.drunkendata.com/?cat=2" title="View all posts in Product Foo" rel="category">Product Foo</a>. You can follow any responses to this entry through the <a href="http://www.drunkendata.com/?feed=rss2&p=1692">RSS 2.0</a> feed. You can skip to the end and leave a response. Pinging is currently not allowed. </small> </p> </div> </div> <!-- You can start editing here. 
--> <h3 id="comments">15 Responses to “Invitation to De-Duplication Vendors”</h3> <ol class="commentlist"><li class="alt" id="comment-18147"> <img alt="" src="http://www.gravatar.com/avatar/2664570a974c8ffd9e8302bb511be1ac?s=32&d=http%3A%2F%2Fwww.gravatar.com%2Favatar%2Fad516503a11cd5ca435acc9bb6523536%3Fs%3D32&r=G" class="avatar avatar-32" height="32" width="32" /> <cite>draft_ceo</cite> Says: <br /> <small class="commentmetadata"><a href="http://www.drunkendata.com/?p=1692.#comment-18147" title="">April 21st, 2008 at 11:07 pm</a> </small> <p>If Diligent is the best, then I am curious to understand why it sold for less than $200M.</p> </li><li id="comment-18148"> <img alt="" src="http://www.gravatar.com/avatar/4f92cf4d45b5619f007fba688f33a7f4?s=32&d=http%3A%2F%2Fwww.gravatar.com%2Favatar%2Fad516503a11cd5ca435acc9bb6523536%3Fs%3D32&r=G" class="avatar avatar-32" height="32" width="32" /> <cite><a href="http://www.datainstitute.org/" rel="external nofollow">Administrator</a></cite> Says: <br /> <small class="commentmetadata"><a href="http://www.drunkendata.com/?p=1692.#comment-18148" title="">April 22nd, 2008 at 10:40 am</a> </small> <p>IBM hasn’t revealed how much it spent on Diligent. Not sure where you are getting your numbers. 
Also, no one, except maybe IBM, has suggested that Diligent was best.</p> </li><li class="alt" id="comment-18149"> <img alt="" src="http://www.gravatar.com/avatar/4f92cf4d45b5619f007fba688f33a7f4?s=32&d=http%3A%2F%2Fwww.gravatar.com%2Favatar%2Fad516503a11cd5ca435acc9bb6523536%3Fs%3D32&r=G" class="avatar avatar-32" height="32" width="32" /> <cite><a href="http://www.datainstitute.org/" rel="external nofollow">Administrator</a></cite> Says: <br /> <small class="commentmetadata"><a href="http://www.drunkendata.com/?p=1692.#comment-18149" title="">April 22nd, 2008 at 10:56 am</a> </small> <p>Chris P, over at <a href="http://blogs.eweek.com/storage_station/content/dedupe_vendors_see_drunken_data_online_quiz.html" target="_blank" rel="nofollow">eWeek</a>, has pointed to this “quiz” and encouraged de-dupe vendors to open their corporate kimonos. Thanks, Chris.</p> </li><li id="comment-18154"> <img alt="" src="http://www.gravatar.com/avatar/2664570a974c8ffd9e8302bb511be1ac?s=32&d=http%3A%2F%2Fwww.gravatar.com%2Favatar%2Fad516503a11cd5ca435acc9bb6523536%3Fs%3D32&r=G" class="avatar avatar-32" height="32" width="32" /> <cite>draft_ceo</cite> Says: <br /> <small class="commentmetadata"><a href="http://www.drunkendata.com/?p=1692.#comment-18154" title="">April 23rd, 2008 at 1:27 am</a> </small> <p>I got the $200M number from here:<br /><a href="http://www.byteandswitch.com/document.asp?doc_id=151339" rel="nofollow">http://www.byteandswitch.com/document.asp?doc_id=151339</a></p> </li><li class="alt" id="comment-18155"> <img alt="" src="http://www.gravatar.com/avatar/4f92cf4d45b5619f007fba688f33a7f4?s=32&d=http%3A%2F%2Fwww.gravatar.com%2Favatar%2Fad516503a11cd5ca435acc9bb6523536%3Fs%3D32&r=G" class="avatar avatar-32" height="32" width="32" /> <cite><a href="http://www.datainstitute.org/" rel="external nofollow">Administrator</a></cite> Says: <br /> <small class="commentmetadata"><a href="http://www.drunkendata.com/?p=1692.#comment-18155" title="">April 23rd, 2008 at 7:47 am</a> 
</small> <p>IBM said in its conference call that they did not, as a matter of company policy, disclose acquisition prices. I don’t know where B&S got its numbers or if they are accurate.</p> </li><li id="comment-18156"> <img alt="" src="http://www.gravatar.com/avatar/56f039c7f6b88ecc64bef999274650fd?s=32&d=http%3A%2F%2Fwww.gravatar.com%2Favatar%2Fad516503a11cd5ca435acc9bb6523536%3Fs%3D32&r=G" class="avatar avatar-32" height="32" width="32" /> <cite>Howard.Marks</cite> Says: <br /> <small class="commentmetadata"><a href="http://www.drunkendata.com/?p=1692.#comment-18156" title="">April 24th, 2008 at 8:48 pm</a> </small> <p>Jon,</p> <p>As you know I’m not a vendor but I play Blogger at InformationWeek. Starting with question 4 your description of deduping as using stubs isn’t a good analogy.</p> <p>Think of a deduped data store as a file system. In the case of a NAS device like a DataDomian or NetApp A-SIS it really is a file system. In a VTL think of each virtual tape as a file.</p> <p>Somewhere there’s a directory that says the file FOO.BAR is stored on blocks 123-345, 500-510 and 12999-14090. That’s true of ANY file system. The difference between the deduped and normal file system is that more than one file can use the same block. If I edit FOO.BAR, add 10Kbytes to the end and save it as FOO2.BAR at some point (real time or later) the deduper will recognize (via hashes or a byte by byte compare) that my file has the same data and will build a directory entry that says FOO2.BAR uses 123-345, 500-510, 12999-14090 and 66666-66669. So the second file takes up just 10K bytes.</p> <p>Now the file system needs to keep track of how many files point to each block and update that list when files are deleted.</p> <p>Re 5 I reject that dedupe is changing data. It’s storing it differently. 
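</p><p>Howard's file-system analogy can be sketched in a few lines of code. This is a toy illustration only (the class, names, and 8-byte blocks are all made up, not any vendor's implementation): a directory maps each file to an ordered list of block hashes, each unique block is stored once with a reference count, and reading a file back is just pointer-following.</p>

```python
import hashlib

class DedupStore:
    """Toy deduplicating block store: a directory of files sharing blocks."""

    def __init__(self, block_size=8):
        self.block_size = block_size
        self.blocks = {}     # block hash -> block bytes (stored once)
        self.refcount = {}   # block hash -> number of references
        self.directory = {}  # file name -> ordered list of block hashes

    def write(self, name, data):
        hashes = []
        for i in range(0, len(data), self.block_size):
            chunk = data[i:i + self.block_size]
            h = hashlib.sha256(chunk).hexdigest()
            if h not in self.blocks:   # only unique blocks consume space
                self.blocks[h] = chunk
                self.refcount[h] = 0
            self.refcount[h] += 1      # one more reference to this block
            hashes.append(h)
        self.directory[name] = hashes

    def read(self, name):
        # Reconstitution is just following the directory's pointers;
        # no separate algorithm or stub file is involved.
        return b"".join(self.blocks[h] for h in self.directory[name])

    def delete(self, name):
        # Free a block only when the last file referencing it is gone.
        for h in self.directory.pop(name):
            self.refcount[h] -= 1
            if self.refcount[h] == 0:
                del self.blocks[h]
                del self.refcount[h]

store = DedupStore()
store.write("FOO.BAR", b"A" * 32)              # 4 blocks, 1 unique
store.write("FOO2.BAR", b"A" * 32 + b"B" * 8)  # edited copy: 1 new block
print(len(store.blocks))       # unique blocks stored for both files
print(store.read("FOO2.BAR") == b"A" * 32 + b"B" * 8)
```

As in the FOO.BAR/FOO2.BAR example, the edited copy costs only its new bytes, and the reference counts let the store safely reclaim blocks when files are deleted.

<p>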
Now LZS compression is changing data and AES encryption is changing data, but dedupe as I described it above (which is good enough a description of all the techniques and 99% accurate for NetApp) isn’t. Strictly speaking, RLL in the disk drive is modifying the data.</p> <p>Re 6: not any more than LZS or AES. The truth is those regulations mean “tamper with to change the meaning” when they say no modifying.</p> <p>7 - No, it doesn’t.</p> <p>8 - The only use of tape for deduped data would be to backup/restore the WHOLE deduped data store in one fell swoop.</p> <p>9 - If I dedupe and store 1/20th the data on 1/20th the drives using 1/20th the power, it seems greenish. Tape is greener, as I blogged a couple of days ago.</p> <p>10 - If you think about hash-based dedupe and CAS, you could use dedupe to replace any of the online archive apps CAS is used for. Riverbed and Silverpeak use it for WAN acceleration, and NetApp is pitching it for primary file storage. The downside is that reading files back is slower because it’s not a sequential read, as it would be if the file were on contiguous blocks. In fact, reading from a deduped store is VERY much like reading from a badly fragmented disk on a file server. Since these are devices made for backup restore, they could use long read-ahead queues to speed it up.</p> <p>11 - The hard part in deduping is finding the right places to divide data into blocks. Think of the corporate file server. There are 10,000 Word docs with the corporate logo embedded. If the blocking algorithm can put that logo into a block by itself, you’ll get much better data reduction than if it uses fixed-size 4K blocks.</p> <p>The other hard part is building the index so you can QUICKLY check if a block being stored now has been stored before.</p> <p>All the HiFn card does is calculate the hashes for blocks. 
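</p><p>The "right places to divide data into blocks" problem is what content-defined chunking addresses: instead of fixed 4K blocks, cut a chunk wherever a rolling hash over recent bytes matches a pattern, so an embedded logo tends to fall on the same chunk boundaries no matter where it sits in the file. The sketch below is a deliberately simplified toy (the hash, minimum length, and mask are invented for illustration, not any product's algorithm):</p>

```python
MIN_LEN = 4     # minimum chunk length before a cut is allowed
MASK = 0x3F     # cut where hash & MASK == 0 (~64-byte average chunks)

def chunks(data):
    """Split bytes at content-defined boundaries."""
    out, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF   # cheap rolling-style hash
        if i - start + 1 >= MIN_LEN and (h & MASK) == 0:
            out.append(data[start:i + 1])  # boundary depends on content only
            start, h = i + 1, 0
    if start < len(data):
        out.append(data[start:])
    return out

logo = bytes(range(64)) * 2                # stands in for an embedded logo
doc1 = b"header-one " + logo + b" body text"
doc2 = b"a different, longer header " + logo + b" other body"

shared = set(chunks(doc1)) & set(chunks(doc2))
print(len(shared) > 0)   # the shifted logo still produces common chunks
```

With fixed-size blocks, the different header lengths would misalign every block of the logo; content-defined boundaries resynchronise a few bytes into the common data, which is why variable blocking gets better reduction on files like those 10,000 Word docs.

<p>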
So chips can help but there’s no such thing as chip dedupe.</p> <p>Howard Marks<br /><a href="http://www.informationweek.com/blog/main/archives/backup_and_business_continuity/index.html" target="_blank" rel="nofollow">Backup and Business Continuity Blogger</a></p> </li><li class="alt" id="comment-18157"> <img alt="" src="http://www.gravatar.com/avatar/4f92cf4d45b5619f007fba688f33a7f4?s=32&d=http%3A%2F%2Fwww.gravatar.com%2Favatar%2Fad516503a11cd5ca435acc9bb6523536%3Fs%3D32&r=G" class="avatar avatar-32" height="32" width="32" /> <cite><a href="http://www.datainstitute.org/" rel="external nofollow">Administrator</a></cite> Says: <br /> <small class="commentmetadata"><a href="http://www.drunkendata.com/?p=1692.#comment-18157" title="">April 25th, 2008 at 8:16 am</a> </small> <p>Thanks for your feedback, Howard. The questions on the list are for clarification from the vendors, none of whom — by the way — have seen fit to respond as yet. The points you make are very valid, but the questions are not a reflection of my misunderstanding of de-dupe as much as they are concerns raised to me by consumers who really don’t understand how de-dupe works.</p> <p>Stubbing is still a technique used by certain products, though not by all. I wanted vendors to clarify what techniques they actually use. As for the other questions, consumers believe that de-dupe is changing data, that it imposes a hit on access speeds, that it may jeopardize compliance. I have actually had several de-dupe vendors tell me that de-duped data “is already encrypted.”</p> <p>Bottom line: there are equal parts hype and marketecture around the technologies in play. Lots of players are doing things differently. There are no standards for doing it at all. Hence, the questionnaire.</p> <p>Thanks again for your thoughtful insights. 
I hope some of the vendors actually chime in.</p> </li><li id="comment-18166"> <img alt="" src="http://www.gravatar.com/avatar/4f92cf4d45b5619f007fba688f33a7f4?s=32&d=http%3A%2F%2Fwww.gravatar.com%2Favatar%2Fad516503a11cd5ca435acc9bb6523536%3Fs%3D32&r=G" class="avatar avatar-32" height="32" width="32" /> <cite><a href="http://www.datainstitute.org/" rel="external nofollow">Administrator</a></cite> Says: <br /> <small class="commentmetadata"><a href="http://www.drunkendata.com/?p=1692.#comment-18166" title="">April 30th, 2008 at 11:20 am</a> </small> <p>Larry Freeman, Senior Marketing Manager, Storage Efficiency Solutions, Network Appliance, writes</p> <p>1. Please provide the name of your company and the de-dupe product(s) you sell. Please summarize what you think are the key values and differentiators of your wares.</p> <blockquote><p>Company: NetApp<br />Dedupe product: NetApp deduplication<br />NetApp deduplication is a fundamental component of NetApp’s core operating architecture - Data ONTAP. NetApp deduplication is the first dedupe technology that can be used broadly across many applications, including primary data, backup data, and archival data.</p></blockquote> <p>2. InfoPro has said that de-dupe is the number one technology that companies are seeking today — well ahead of even server or storage virtualization. Is there any appeal beyond squeezing more undifferentiated data into the storage junk drawer?</p> <blockquote><p>Storage admins are reluctant (or prohibited) from deleting data or sending the data to permanent tape archival. But as everyone knows, data keeps growing. This presents a quandary. You can’t just keep buying more and more storage, rather you need to figure out the best way to compress the data you are required to store on disk. Of all the storage space reduction options, deduplication provides the highest degree of data compression, the lowest amount of compute resources and is usually very simple to implement. 
This is the reason for the broad interest and adoption of deduplication.</p></blockquote> <p>3. Every vendor seems to have its own secret sauce de-dupe algorithm and implementation. One, Diligent Technologies (just acquired by IBM), claims that theirs is best because it collapses two functions — de-dupe then ingest — into one in-line function, achieving great throughput in the process. What should be the gating factors in selecting the right de-dupe technology?</p> <blockquote><p>As with many other technologies, the gating factor for selecting the right deduplication technology is “What are you trying to accomplish?”</p> <p>Inline deduplication’s main benefit is that it never requires the storage of redundant data; that data is eliminated before it is written. The drawback of inline, however, is that the decision to “store or throw away” data must be made in real time, which precludes any data validation to guarantee the data being thrown away is in fact unique. Inline deduplication is also limited in scalability: since fingerprint compares are done “on the fly”, the preferred method is to store all fingerprints in memory to prevent disk look-ups. When the number of fingerprints exceeds the storage system’s memory capacity, inline deduplication ingest speeds will become substantially degraded.</p> <p>Post-processing deduplication, the method that NetApp uses, requires data to be stored first, then deduplicated. This allows the deduplication process to run at a more leisurely pace. Since the data is stored and then examined, a higher level of validation can be done. Post-processing also requires fewer system resources during the deduplication process, since fingerprints can be stored on disk.</p> <p>So, bottom line: if your main goal is to never write duplicate data to the storage system, and you can accept “false fingerprint compares”, inline deduplication might be your best choice. 
If your main objective is to decrease storage consumption over time while ensuring that unique data is never accidentally deleted, post-processing deduplication would be the choice.</p></blockquote> <p>4. Despite the nuances, it seems that all block level de-dupe technology does the same thing: removes bit string patterns and substitutes a stub. Is this technically accurate or does your product do things differently?</p> <blockquote><p>A better way to describe this would be “removes bit string patterns and substitutes a reference pointer or stub.” In NetApp’s case, a single data block can be referenced 255 times. When we identify and validate two identical data blocks, we re-reference the data pointer of the duplicate block to the original block, and release this duplicate block back to the “free” block pool. No stubs are required.</p></blockquote> <p>5. De-dupe is changing data. To return data to its original state (pre-de-dupe) seems to require access to the original algorithm plus stubs/pointers to bit patterns that have been removed to deflate data. If I am correct in this assumption, please explain how data recovery is accomplished if there is a disaster. Do I need to backup your wares and store them off site, or do I need another copy of your appliance or software at a recovery center?</p> <blockquote><p>This is generally referred to as the “reconstitution” of the deduped data. With NetApp, when we deduplicate, we are merely reorganizing the data structure of the filesystem by using multiple block references. Once the data set is deduplicated, there is no external algorithm needed to reconstitute the data. The direct and indirect nodes that make up the filesystem are traversed and the blocks are recovered, just as they would be in a “normal” NetApp filesystem.</p></blockquote> <p>6. De-dupe changes data. Is there any possibility that this will get me into trouble with the regulators or legal eagles when I respond to a subpoena or discovery request? 
Does de-dupe conflict with the nonrepudiation requirements of certain laws?</p> <blockquote><p>The regulators want proof that the data is immutable, or in other words data that has not been altered or tampered with. NetApp deduplication does not alter one byte of data from its original form; it’s just stored differently on disk. I use this analogy - if a disk volume is defragmented, isn’t it still the same data? Just stored in a different place? Same thing with data compression: the stored form of data that has been compressed and then uncompressed has changed, but the data is still in its original form. One interesting point, though, is what happens if a “false fingerprint compare” as described above with inline deduplication occurs. Now the data HAS been changed. Because of this, inline deduplication may not be acceptable in regulatory environments.</p></blockquote> <p>7. Some say that de-dupe obviates the need for encryption. What do you think?</p> <blockquote><p>Interesting concept, but it has one big flaw. Unlike encryption, deduplication does not guarantee that files will be unreadable.</p></blockquote> <p>8. Some say that de-duped data is inappropriate for tape backup, that data should be re-inflated prior to write to tape. Yet, one vendor is planning to enable an “NDMP-like” tape backup around his de-dupe system at the request of his customers. Is this smart?</p> <blockquote><p>It’s a customer decision, which again depends on the customer’s objective. Many customers are willing to take the trade-off of a proprietary dedupe format written to tape in exchange for a significantly reduced number of tapes to manage. Others view this as a show-stopper and don’t want to rely on the deduplication vendor’s ability to recover data from tapes.</p></blockquote> <p>9. Some vendors are claiming de-dupe is “green” — do you see it as such?</p> <blockquote><p>In many respects, yes, deduplication is green. 
If I can reduce my physical storage needs by say, 50% through deduplication that means I need 50% fewer spinning disks to house the same data. The trouble is that as soon as any disk space becomes available, it manages to fill back up pretty quickly, as in your “storage junk drawer” example above. NetApp believes that deduplication is just one component of overall green storage, and should be combined with features like thin provisioning, writeable snapshots, and higher capacity disk drives for optimal “greening.” We have published a whitepaper <a href="http://media.netapp.com/documents/wp-7022-0507.pdf" target="_blank" rel="nofollow">“Buying Less Storage With NetApp”</a> that addresses just this topic.</p></blockquote> <p>10. De-dupe and VTL seem to be joined at the hip in a lot of vendor discussions: Use de-dupe to store a lot of archival data on line in less space for fast retrieval in the event of the accidental loss of files or data sets on primary storage. Are there other applications for de-duplication besides compressing data in a nearline storage repository?</p> <blockquote><p>Approximately 30% of all NetApp deduplication users are dedupe-ing primary storage applications, and the area we are seeing the greatest growth in is our deduplication. VMware, Exchange, SQL, Oracle, and SharePoint are the primary apps we predict will see the greatest adoption of NetApp deduplication on 2008.</p></blockquote> <p>11. Just suggested by a reader: What do you see as the advantages/disadvantages of software based deduplication vs. hardware (chip-based) deduplication? Will this be a differentiating feature in the future… especially now that Hifn is pushing their Compression/DeDupe card to OEMs?</p> <blockquote><p>Most customers have not adopted software-based deduplication because of the challenges in managing multiple agents and deduplication points across their environment. Users seem to clearly prefer deduplication at the destination storage system. 
Simple to implement, manage and control. In NetApp’s case, a 10-minute installation. As for the Hifn card, it is important to note that this card does not actually perform deduplication; it merely provides a “hash” function to create fingerprints. The fingerprint cataloging and the stub or data pointer creation will still be the responsibility of the OEM storage provider.</p></blockquote> <p>Thanks, Larry. </p> </li><li class="alt" id="comment-18167"> <img alt="" src="http://www.gravatar.com/avatar/4f92cf4d45b5619f007fba688f33a7f4?s=32&d=http%3A%2F%2Fwww.gravatar.com%2Favatar%2Fad516503a11cd5ca435acc9bb6523536%3Fs%3D32&r=G" class="avatar avatar-32" height="32" width="32" /> <cite><a href="http://www.datainstitute.org/" rel="external nofollow">Administrator</a></cite> Says: <br /> <small class="commentmetadata"><a href="http://www.drunkendata.com/?p=1692.#comment-18167" title="">April 30th, 2008 at 11:25 am</a> </small> <p>From Bill Andrews, CEO of ExaGrid.</p> <p>1. Please provide the name of your company and the de-dupe product(s) you sell. Please summarize what you think are the key values and differentiators of your wares.</p> <blockquote><p>ExaGrid Systems<br />ExaGrid 1000, 2000, 3000, 4000 and 5000 as well as the 5000-GWi (iSCSI gateway).</p></blockquote> <p>2. InfoPro has said that de-dupe is the number one technology that companies are seeking today — well ahead of even server or storage virtualization. Is there any appeal beyond squeezing more undifferentiated data into the storage junk drawer?</p> <blockquote><p>There are two values. The first is to store a lot of data in a small footprint of disk. This works great for backup, as each backup job that comes in is 98% the same as the backup job before it. So de-dup works great because so much of the data is redundant. The additional value is that the only way to keep an offsite copy of the data is to compare one backup to another and only move the changes. 
De-dup is required for backup because the backup file names change, so you need to compare one backup to the other to find the differences. For example, primary storage snaps would not see each backup as the same file and would try to move the entire backup across the WAN. The net: de-dup reduces storage but also enables WAN-efficient offsite copies.</p></blockquote> <p>3. Every vendor seems to have its own secret sauce de-dupe algorithm and implementation. One, Diligent Technologies (just acquired by IBM), claims that theirs is best because it collapses two functions — de-dupe then ingest — into one in-line function, achieving great throughput in the process. What should be the gating factors in selecting the right de-dupe technology?</p> <blockquote><p>There are 3 basic methods we see:</p> <ul><li>Block level: break backup jobs or files into roughly 8KB blocks and then compare blocks to store only unique blocks (Data Domain, Quantum, etc.)</li><li>Byte-level delta: each backup job is compared to the previous backup jobs and only the bytes that change are stored (ExaGrid, Sepaton, etc.)</li><li>Near-dup block level: brings the dedup part of the way there and then takes the big segments and compares them for byte-level delta (Diligent, etc.)</li></ul> <p>The benefit of the last two, which ultimately use byte level, is that they allow for great scalability. If you use block level, for every 10TB there are over 1 billion hash table entries (10TB / 8KB). For byte level, or near-dup to byte level, the segments that are compared are typically 100MB each, so there are significantly fewer pieces to track. This allows data to be managed across servers in a scalable solution. If you notice, the 3 players that have scalability (Sepaton for the enterprise, Diligent for the enterprise, and ExaGrid with its scalable GRID architecture) all use byte level.</p> </blockquote> <p>4. Despite the nuances, it seems that all block level de-dupe technology does the same thing: removes bit string patterns and substitutes a stub. 
Is this technically accurate or does your product do things differently?</p> <blockquote><p>As described above, it compares a backup job to the former backup job to find the bytes that change. About 2% of the bytes change from backup to backup. For a 10TB backup job this means that the differences are about 200GB each. We store the most recent backup compressed (2 to 1) and then all previous backups as just the bytes that change. Therefore, if someone were keeping 20 copies (200TB of straight disk) the result would be a 5TB most recent backup plus 19 byte deltas of 200GB each, or a total of 8.8TB. 200TB/8.8TB = 22.7 to 1, as an example.</p></blockquote> <p>5. De-dupe is changing data. To return data to its original state (pre-de-dupe) seems to require access to the original algorithm plus stubs/pointers to bit patterns that have been removed to deflate data. If I am correct in this assumption, please explain how data recovery is accomplished if there is a disaster. Do I need to backup your wares and store them off site, or do I need another copy of your appliance or software at a recovery center?</p> <blockquote><p>At the primary site we store the most recent backup in its complete form – compressed. We store all previous versions as just the bytes that change. We replicate the bytes that change over the WAN to the second site system. We store the bytes that change, but we also merge those bytes, each time, into the full backup on the other side. Therefore, both sides are identical, with the most recent backup in its complete form and all previous backups as byte deltas. To do a DR recovery, all you do is point a backup application at it and restore, as the most recent backup is sitting there in its entirety. 
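</p><p>The retention arithmetic in the question 4 answer above can be checked directly. A quick sketch using only the figures quoted there (10TB nightly full, 2:1 compression of the newest copy, 2% change rate, 20 retained copies):</p>

```python
full_tb = 10                      # nightly full backup, from the text
copies = 20                       # retained copies
newest = full_tb / 2              # most recent backup kept whole, compressed 2:1
delta = full_tb * 0.02            # ~2% of bytes change backup to backup
stored = newest + (copies - 1) * delta
raw = full_tb * copies            # what straight disk would need
print(round(stored, 1))           # 8.8 TB on disk
print(round(raw / stored, 1))     # 22.7 (the 22.7-to-1 figure)
```

<p>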
You can also, at any time, do a test recovery to make sure the data is there for when you need it.<br />An additional benefit of having the most recent copy always ready to go in its entirety (compressed) is that 90% of restores come from the most recent backup, and therefore restores are fast. Even more important, if you are still making offsite tapes, 100% of offsite tape copies come from the Friday night full backup. If the Friday full is a de-duped set of blocks, the tape copy is slow; however, if the Friday full is a complete un-duped full, the tape copy is fast.</p></blockquote> <p>6. De-dupe changes data. Is there any possibility that this will get me into trouble with the regulators or legal eagles when I respond to a subpoena or discovery request? Does de-dupe conflict with the nonrepudiation requirements of certain laws?</p> <blockquote><p>We have not heard this. All data can be restored at any point in time. The backup application controls the catalog and the retention periods. If you want to do a restore from 3 years ago, you do the restore, the bytes that change merge into the full backup, and the backup application receives the full backup as of that date. All bytes are checksummed. Even with block level de-dup, the hash table can put a backup back together from any point in time.</p></blockquote> <p>7. Some say that de-dupe obviates the need for encryption. What do you think?</p> <blockquote><p>If you have a two-site disk-backup system with de-dup there is no need for encryption, as the system uses physical data center security, standard network security and standard VPN security. The requirement of encryption is to protect the data leaving the building on tape, typically in cartons. Obviously, if data leaves the building on physical media you want the data encrypted. We have a lot of Health Care customers who need to encrypt. With these types of systems there is no need for encryption because the security and encryption is built into the network. 
On the primary side, the primary site disk-based backup system sits in the data center, secured by data center and network security. It is as secure as the primary data. The second site or offsite system also sits behind data center and network security.</p> <p>Again, it is as secure as all data or applications in that data center. The data from one system to the other traverses the WAN over an encrypted VPN. Therefore, the data moves from the primary site to the second / offsite site over the same encrypted VPN that all the company’s traffic goes over. So what you have is two systems, sitting in two secured data centers, with data going over an encrypted VPN. Therefore, security is inherent in the infrastructure.</p></blockquote> <p>8. Some say that de-duped data is inappropriate for tape backup, that data should be re-inflated prior to write to tape. Yet, one vendor is planning to enable an “NDMP-like” tape backup around his de-dupe system at the request of his customers. Is this smart?</p> <blockquote><p>We don’t have this problem, as we keep the most recent backup in its complete form. When tapes are made, whether a nightly incremental or a full, they are always made from the full copy.</p> <p>The real process needs to be thought through not from a de-duplication perspective but from a total backup process perspective. As an IT person I care about:</p> <ul><li>Fast backups – so I want the system to take in data as fast as it can. Writing the backup job to disk first and then doing all de-duplication is the fastest (post process). This keeps my backup window to an absolute minimum, which is my biggest challenge.</li><li>Fast restores – 90% of my restores come from my recent backup, so I want that in its complete form ready to be restored, versus having to be put back together before it can be restored.</li><li>Fast tape copy – my tape copies are made as soon as the backups are done. 
Therefore, I want a complete backup job on disk so I can make quick tape copies, versus waiting until the data gets put back together to get a tape copy.</li><li>Storage efficiency – I want all the above, but I also want storage efficiency, and I don’t care how you get there as long as it takes the least space possible and when I want my data back it is there.</li></ul> <p>If I eliminate tape offsite, I want an updated copy on the other side ready to restore in case of a disaster.</p> <p>My data is growing at 30% a year, which means it doubles every 2.5 years. I need a system I can add capacity to that keeps the performance up with my ever-growing data. Therefore, I need more than just disk capacity added; I need each set of disks to be accompanied by the appropriate memory, processor and bandwidth such that I am not degrading in performance as I add more data.</p> <p>I want all this at the lowest price possible, because the IT budget is tight.</p> <p>So therefore, the right system offers:</p> <ul><li>Post process, to get the backup job off the network fast (short backup window); all de-dup is performed after the backup is done</li><li>The most recent backup in its complete form for quick restores and quick tape copy</li><li>The ability to store only changes from backup to backup to have a small footprint of disk</li><li>That only changes be moved offsite for WAN efficiency, and that at the offsite the full is constantly kept up to date for quick Disaster Recovery</li><li>Storage servers in a scalable system, so that each group of disks is not just storage but has more memory, processor and bandwidth to keep up with the increased data</li><li>The lowest price</li></ul> </blockquote> <p>9. 
Some vendors are claiming de-dupe is “green” — do you see it as such?</p> <blockquote><p>It is and it isn’t.</p> <p>Against the amount of straight disk you would need, it absolutely is, as it takes a smaller footprint and less power and cooling to store long-term retention.</p> <p>Against tape, we think it might be the same, but we are not 100% sure.</p></blockquote> <p>10. De-dupe and VTL seem to be joined at the hip in a lot of vendor discussions: Use de-dupe to store a lot of archival data online in less space for fast retrieval in the event of the accidental loss of files or data sets on primary storage. Are there other applications for de-duplication besides compressing data in a nearline storage repository?</p> <blockquote><p>De-dup has four uses that we can see:</p> <ol><li>Efficiently store backup data</li><li>Efficiently store primary storage archival data (nearline)</li><li>Efficiently store primary data</li><li>Only move unique data from remote sites to a central site (Symantec PureDisk, EMC Avamar), or de-dup any data over a WAN for WAN efficiency à la Riverbed.</li></ol> </blockquote> <p>11. Just suggested by a reader: What do you see as the advantages/disadvantages of software-based deduplication vs. hardware (chip-based) deduplication? Will this be a differentiating feature in the future… especially now that Hifn is pushing their Compression/DeDupe card to OEMs?</p> <blockquote><p>Hifn, as far as we can tell, is just compression accelerated by hardware. In other words, typically 2 to 1.</p> <p>It is not storing only unique blocks or bytes, and therefore will not achieve 20 to 1 or more as retention/history grows.</p> <p>Most customers do not want to put together their own storage servers and then load software. Who do they call if they have a problem? Is it the RAID card, is it the controller, is it the disk drives, is it the OS, is it the de-dup software, or is it the backup software configuration? Customers want to call one vendor and get their answer. 
In the enterprise, software might win; in the mass market, an appliance approach will win, as storage is hardware.</p></blockquote> <p> </p> <p>Thanks, Bill.</p> </li><li id="comment-18186"> <img alt="" src="http://www.gravatar.com/avatar/0ba3649eb24c4c9d4df84b371d310259?s=32&d=http%3A%2F%2Fwww.gravatar.com%2Favatar%2Fad516503a11cd5ca435acc9bb6523536%3Fs%3D32&r=G" class="avatar avatar-32" height="32" width="32" /> <cite>pete</cite> Says: <br /> <small class="commentmetadata"><a href="http://www.drunkendata.com/?p=1692.#comment-18186" title="">May 5th, 2008 at 8:14 am</a> </small> <p>1. Please provide the name of your company and the de-dupe product(s) you sell. Please summarize what you think are the key values and differentiators of your wares.</p> <blockquote><p>Company: <strong>Data Storage Group, Inc.</strong><br />Product Family: <strong>ArchiveIQ™</strong><br />Shipping Products:<br />Quantum GoVault Data Protection (SOHO)<br />ArchiveIQ™ Enterprise Server (SMB – Medium Enterprise)<br />Differentiators:<br /><strong>Source-based data deduplication</strong> – redundant data is identified and removed at the source, before it is transferred across the network. This allows organizations to protect remote office data and dramatically reduce the backup window and recovery point objective for all systems. The system can also be configured for post-process data deduplication if the source system is a non-Windows server.<br /><strong>Multiple data deduplication techniques</strong> – high levels of data reduction are achieved through multiple data deduplication techniques. With minimal server impact, advanced data compression, single-instance storage and sub-file data reduction are all included with ArchiveIQ.<br /><strong>Simple and fast data recovery</strong> – Restoring individual files is easy because every file is included in a filename index, which allows wildcard searching across several months of recovery points. 
Additionally, every recovery point can be quickly explored like a normal file share. Full folder recovery is a simple drag-and-drop from the explorer window. Finally, since ArchiveIQ does not “chunk” all the data into small pieces, restore jobs run at full disk speed.<br /><strong>Automated Data Validation</strong> – every recovery point is continuously validated based on administrative policies. Any unexpected problems with the storage media or deduplicated data will be identified early, and the system will repair itself from the source data.<br /><strong>Automated Data Retention</strong> – the administrator simply specifies how long recovery points should be retained. The system automatically identifies and removes deduplicated data that does not meet the defined data retention policy. This process increases the available storage capacity and limits litigation and compliance liability.<br /><strong>Source data space management</strong> – optionally increase the available storage capacity on Windows file servers that are constantly running out of space. ArchiveIQ will transparently “stub” inactive file data and free expensive storage capacity for new files and active data. If a user or application needs to access the stubbed data, it is transparently cached back from the ArchiveIQ Server.<br /><strong>No hardware ties</strong> – the administrator has the freedom to use existing server and storage capacity, or purchase new capacity based on various considerations like replication, expansion, migration and price. As long as the storage platform supports NTFS volumes, ArchiveIQ can use it for data deduplication. Future purchases will also cost less because of this freedom of choice.</p></blockquote> <p>2. InfoPro has said that de-dupe is the number one technology that companies are seeking today — well ahead of even server or storage virtualization. 
Is there any appeal beyond squeezing more undifferentiated data into the storage junk drawer?</p> <blockquote><p>The primary goal of data deduplication is to have a longer period of data history on disk and readily available. This is more appealing to organizations because it enables efficient data recovery and discovery. It also improves data reliability because the system can be continuously validating backup images. With tape-based data retention, a media problem will go undetected until the production data has been lost and needs to be recovered.<br />Data deduplication can also improve the process of creating offsite copies. Instead of copying all of the source data, the system can focus on the deduplicated data. This reduces the total amount of replicated data and the network impact. Instead of managing several replication plans, one for each production volume, the focus can be on the unique bits of data.</p></blockquote> <p>3. Every vendor seems to have its own secret sauce de-dupe algorithm and implementation. One, Diligent Technologies (just acquired by IBM), claims that theirs is best because it collapses two functions — de-dupe then ingest — into one in-line function, achieving great throughput in the process. What should be the gating factors in selecting the right de-dupe technology?</p> <blockquote><p>When talking specifically about the data deduplication process, there are two main questions to ask.<br /><strong>1) What data reduction techniques are being applied?</strong><br />At a high level, there are four basic data deduplication techniques being deployed. Every vendor has their own IP, or has licensed IP, but the end results are similar. 
Products that claim data deduplication need to deliver at least the first three of the four (excluding data “chunking”).<br />a) Advanced Data Compression – data that is highly compressible should be transferred and stored in the compressed state.<br />b) Single Instance Storage – redundancy at the file level should be completely removed, and a single compressed version should be transferred and stored.<br />c) Sub-file data reduction – Active data and structured data (Exchange, SQL, System State, VHD, VMDK, PST) that do not deduplicate well with SIS should be processed for sub-file data reduction.<br />d) Data “chunking” - A fourth data deduplication technique breaks all the data into small “chunks” and identifies redundancy at the chunk level. This technique will identify the most redundancy, but it comes at a cost. The processing power, core memory and time required to recover 1TB of data after it has been broken into 8 kilobyte chunks is significant. Also, the master index that maps all these chunks back together will become very large over time. For most organizations, it is difficult to know if the overall data reduction from this fourth technique is worth the system and recovery impact.<br /><strong>2) Where can the system perform data deduplication?</strong><br />Today there are three basic areas where data deduplication is taking place. Source-based data deduplication offers the most cost savings when it comes to backup window, space management and ROBO protection.<br />a) Post-process data deduplication<br />b) Inline data deduplication<br />c) Source-based data deduplication<br /><strong>NOTE: Don’t be too focused on data deduplication and ignore data recovery, validation, retention and hardware dependencies.</strong></p></blockquote> <p>4. Despite the nuances, it seems that all block-level de-dupe technology does the same thing: removes bit string patterns and substitutes a stub. 
Is this technically accurate or does your product do things differently?</p> <blockquote><p>Unlike block-level de-dupe technology, which substitutes variable- or fixed-length blocks of data with references (usually hash codes) to identical previously stored blocks of data (a well-known global compression technique), ArchiveIQ uses advanced single-pass sub-file content-factoring technology to identify and store only the new and unique content of a given data source.</p></blockquote> <p>5. De-dupe is changing data. To return data to its original state (pre-de-dupe) seems to require access to the original algorithm plus stubs/pointers to bit patterns that have been removed to deflate data. If I am correct in this assumption, please explain how data recovery is accomplished if there is a disaster. Do I need to back up your wares and store them off site, or do I need another copy of your appliance or software at a recovery center?</p> <blockquote><p>Our deduplicated data is stored on standard NTFS volumes. Any replication product that supports NTFS can be used for offsite copies. There is no additional charge or appliance required.</p></blockquote> <p>6. De-dupe changes data. Is there any possibility that this will get me into trouble with the regulators or legal eagles when I respond to a subpoena or discovery request? Does de-dupe conflict with the nonrepudiation requirements of certain laws?</p> <blockquote><p>Data on tape is usually compressed at a given blocking factor. As long as the immutability of the content can be assured, deduplication should be acceptable and can be used in conjunction with digital signatures (one or more) and/or write-once media formats for nonrepudiation requirements. The management practices for compliance and non-repudiation requirements do not change with the application of de-dupe. 
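The reference-substitution mechanism described in the answer to question 4 can be made concrete in a few lines. This is a generic sketch of block-level deduplication, not ArchiveIQ's content factoring or any other vendor's implementation; the fixed 4 KB block size and the function names are illustrative assumptions:

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks for simplicity; real products vary

def dedupe_store(data: bytes, store: dict) -> list:
    """Split data into blocks, keep each unique block once (keyed by its
    SHA-256 digest) and return the list of references for this stream."""
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # store the block only if unseen
        refs.append(digest)
    return refs

def restore(refs: list, store: dict) -> bytes:
    """Rebuild the original byte stream exactly from its references."""
    return b"".join(store[d] for d in refs)

store = {}
backup1 = b"A" * 8192 + b"B" * 4096  # three blocks, two unique
refs1 = dedupe_store(backup1, store)
backup2 = b"A" * 8192 + b"C" * 4096  # shares two blocks with backup1
refs2 = dedupe_store(backup2, store)

assert restore(refs1, store) == backup1 and restore(refs2, store) == backup2
print(len(refs1) + len(refs2), len(store))  # → 6 3: six references, three stored blocks
```

Because the restore reproduces the original bytes exactly, re-hashing data against the stored digests also gives a cheap way to verify a source against the repository, which is one reason hash-addressed stores sit comfortably alongside integrity requirements.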
Source-based deduplication adds a level of data integrity by being able to verify the contents of the source against what is in the destination repository.</p></blockquote> <p>7. Some say that de-dupe obviates the need for encryption. What do you think?</p> <blockquote><p>No, encrypting data over the wire or at rest should still be considered. Since our store is NTFS, you can use Windows encryption or third-party products that support NTFS.</p></blockquote> <p>8. Some say that de-duped data is inappropriate for tape backup, that data should be re-inflated prior to write to tape. Yet, one vendor is planning to enable an “NDMP-like” tape backup around his de-dupe system at the request of his customers. Is this smart?</p> <blockquote><p>Sounds like FUD - I guess the same rationale should be applied to tape formats. If we asked most D-2-D-2-T vendors to restore the backup image before moving it to tape, they would say no. ArchiveIQ should raise fewer concerns since the deduplicated store is fully self-describing and just a series of NTFS files and folders.</p></blockquote> <p>9. Some vendors are claiming de-dupe is “green” — do you see it as such?</p> <blockquote><p>Research is still being done to determine if a VTL draws less power than a tape library for the same environment. <a href="http://www.informationweek.com/blog/main/archives/2008/04/deduped_vtl_gre.html" target="_blank" rel="nofollow">Article Cite</a>. <a href="http://blogs.hds.com/hu/2008/04/the_greening_of_it_oxymoron_or_journey_to_a_new_reality.html" target="_blank" rel="nofollow">Blog Cite.</a></p> <p>One thing is certain – a software-only solution like ArchiveIQ draws less power than both a VTL and tape, especially if you just reuse an existing server and storage.</p></blockquote> <p>10. 
De-dupe and VTL seem to be joined at the hip in a lot of vendor discussions: Use de-dupe to store a lot of archival data online in less space for fast retrieval in the event of the accidental loss of files or data sets on primary storage. Are there other applications for de-duplication besides compressing data in a nearline storage repository?</p> <blockquote><p>Space management of primary storage, E-Discovery, and data protection, all using the same deduplicated backend data store, all in a unified product.</p></blockquote> <p>11. Just suggested by a reader: What do you see as the advantages/disadvantages of software-based deduplication vs. hardware (chip-based) deduplication? Will this be a differentiating feature in the future… especially now that Hifn is pushing their Compression/DeDupe card to OEMs?</p> <blockquote><p>Typically, hardware-based deduplication happens at the server or appliance. This requires the totality of the data to be streamed across the network. Network bandwidth and time to back up are unchanged from traditional backup strategies. Software deduplication, at least for our product, allows us to deduplicate the data at the source and send only the changes across the network, thereby saving network bandwidth and reducing the time to perform a backup.<br />If the deduplication happens at the “client” machine, then the enterprise backup cycles are distributed across a large number of computers, which also means that the “server” on which the deduplicated store resides can handle a large number of clients.<br />Finally, ask administrators how they feel when tape hardware changes. Often, months or years’ worth of data is trapped on the old media formats. The same can happen with data deduplication that is tied at the hip to hardware. 
You can’t avoid being tied to the data deduplication techniques, but you can try to avoid being tied to hardware from the same vendor.</p></blockquote> </li><li class="alt" id="comment-18189"> <img alt="" src="http://www.gravatar.com/avatar/9757af114f66cbafa0cf106c0da6382f?s=32&d=http%3A%2F%2Fwww.gravatar.com%2Favatar%2Fad516503a11cd5ca435acc9bb6523536%3Fs%3D32&r=G" class="avatar avatar-32" height="32" width="32" /> <cite><a href="http://www.copansystems.com/" rel="external nofollow">jgagne</a></cite> Says: <br /> <small class="commentmetadata"><a href="http://www.drunkendata.com/?p=1692.#comment-18189" title="">May 9th, 2008 at 10:06 am</a> </small> <p>From Jay Gagne, Global Solution Architect, COPAN Systems</p> <p>1. Please provide the name of your company and the de-dupe product(s) you sell. Please summarize what you think are the key values and differentiators of your wares.</p> <blockquote><p>COPAN Systems - COPAN Revolution 300 SIRM, COPAN Revolution 100T</p> <p>COPAN Systems provides a purpose-built enterprise-class persistent data storage platform designed for long-term retention of persistent (fixed) data.</p> <p>The COPAN Systems persistent data platform differentiation starts by maximizing both scalability and access to data. While a traditional storage device provides 100 percent access, it is limited in scalability. On the other hand, traditional tape devices provide scalability but limit access. When coupled with de-duplication, the COPAN Systems persistent data storage platform is massively scalable while maintaining accessibility. This differentiation is only magnified with the addition of our de-duplication technology. The result is an incredibly cost-effective and power-efficient system built to scale to massive enterprise storage needs.</p></blockquote> <p>2. InfoPro has said that de-dupe is the number one technology that companies are seeking today — well ahead of even server or storage virtualization. 
Is there any appeal beyond squeezing more undifferentiated data into the storage junk drawer?</p> <blockquote><p>There are two main drivers pushing the momentum of de-duplication. First is the reduction of physical space, but as important, or perhaps more so, is the reduction of the network bandwidth required. WAN accelerators don’t help when you are sending the same data over and over again, but de-duplication before replication does significantly reduce the bandwidth required. I believe these two elements are the driving force behind the buzz. The benefits include the decrease in raw capacity, the decrease in the costs of offsite storage and management, and instant access to data when it is needed.</p></blockquote> <p>3. Every vendor seems to have its own secret sauce de-dupe algorithm and implementation. One, Diligent Technologies (just acquired by IBM), claims that theirs is best because it collapses two functions — de-dupe then ingest — into one in-line function, achieving great throughput in the process. What should be the gating factors in selecting the right de-dupe technology?</p> <blockquote><p>There are many questions a potential de-duplication customer must ask themselves before selecting the right de-duplication vendor. The process of demystifying de-duplication breaks down into 10 questions:</p> <p>1. How will de-dupe impact performance on backup/restores - and combined backup and restores? Depending on where in the backup cycle the de-duplication is performed, there will be impacts to your cycle. The same is true of the restore process as well, since some solutions require a two-step restore process. The COPAN Systems de-duplication solution is based on post-processing, which means it does not impact the current backup scenario, and in many cases enhances it. The restoration of data from our solution is a single-step, streamlined operation.</p> <p>2. How do I compare compression ratios? Scalability of a de-dupe solution is crucial. 
The actual de-dupe ratio will vary, but comparing each solution with a constant de-dupe ratio will clearly show the scalability of a given solution. Always investigate the assumptions the vendor is using for their de-dupe ratio to ensure consistent comparisons.</p> <p>3. How do I effectively scope the size of the storage solution? You need to factor in not only the immediate needs of the current environment but also the annual growth over time and the retention requirements. This will clearly show the scale (capacity and access), efficiency and complexity of managing each solution’s de-dupe infrastructure. Given the massive scalability of the COPAN Systems de-duplication solution and its efficient design, scaling over time is simple, efficient and cost-effective.</p> <p>4. How do all the parts of the de-duplication solution communicate with each other? The more data that can be seen by a single system (or cluster of systems), the more efficient the solution will be at reducing the storage requirements, complexity of management and overall total cost of ownership. Also, beware of the extra costs associated with having more units, appliances, IP and FC switch ports, etc.</p> <p>The COPAN Systems de-duplication platform was purpose-built to minimize the number of components while still delivering guaranteed access to data. Given the massive scalability and clustering options, it also provides the largest single data repository of up to 8PB in a single system.</p> <p>5. Is the solution file-format aware? (i.e., does it understand the type of data being backed up). Not all of the de-dupe solutions are aware of the file format. Being aware of the actual file type increases the efficiency of the solution. Some solutions only look at blocks of data without the ability to understand the whole file. The most efficient solutions have the ability to understand the file as well as break it into blocks to achieve maximum de-dupe efficiency. 
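The contrast drawn in question 5 between whole-file matching and block-level matching can be sketched directly. A minimal illustration, assuming SHA-256 digests as content identities and a hypothetical 64 KB block size (no vendor's actual implementation):

```python
import hashlib

def whole_file_digests(files: dict) -> set:
    """Single-instance-storage view: one digest per file, so only exact
    duplicate files deduplicate."""
    return {hashlib.sha256(data).hexdigest() for data in files.values()}

def block_digests(files: dict, block_kb: int = 64) -> set:
    """Block-level view: a digest per fixed-size block, so shared regions
    of otherwise different files also deduplicate."""
    size = block_kb * 1024
    digests = set()
    for data in files.values():
        for i in range(0, len(data), size):
            digests.add(hashlib.sha256(data[i:i + size]).hexdigest())
    return digests

# Two files that share their first 64 KB but differ afterwards.
common = b"x" * 65536
files = {"a.doc": common + b"version-1", "b.doc": common + b"version-2"}

print(len(whole_file_digests(files)))  # → 2 (no whole-file duplicates found)
print(len(block_digests(files)))       # → 3 (the shared 64 KB is stored once)
```

Whole-file matching sees two unrelated files; block-level matching stores the shared region once, which is why the answer favors combining both views.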
Both de-duplication approaches are candidates for implementation onto COPAN’s persistent data storage platform.</p> <p>6. How easily can I create tape media? The flexibility for creating tape media is essential to many organizations. Similar to the restore process, some solutions require a two-step process to re-hydrate the virtual tape first and then create the tape replica as a second step. The COPAN Systems de-duplication solution has the ability to easily create tape media in its original format using a single streamlined process.</p> <p>7. How does the product replicate data? The ability to replicate data is vital for a de-dupe solution. Any single block of data stored in the de-duped state may be part of many original backups. Having multiple copies of your de-dupe data will ensure the level of protection required for an enterprise-ready solution. The COPAN Systems de-duplication solution provides an efficient, bandwidth-friendly replication option.</p> <p>8. Should I be concerned with the disk type and disk failure rates? Given the volume and criticality of data being stored for five years or more in a de-duped environment, the protection of this data is essential. Drive failures can lead to data loss. Minimizing the number of drive failures will increase the level of protection for your data. Also, the type of disk (Fibre Channel, SATA, MAID) needs to be considered. Some de-dupe applications require high-end Fibre Channel disks and connectivity to meet performance requirements, while others can operate with lower-cost drives and still achieve the necessary performance specifications. The need to use higher-performing drives will increase the cost of the solution, especially when factoring in five years or more of annual growth.<br />The COPAN Systems de-duplication solution uses Massive Array of Idle Disks (MAID) technology on SATA-based disk drives that provide 6X greater reliability. 
It also uses patented Disk Aerobics® technology to proactively monitor and ensure data integrity. The measured disk failure rate for COPAN Systems is 0.03 percent per year compared to an average of 4-5 percent with standard SATA storage devices.</p> <p>9. How does the de-duplication solution consume or conserve power? Does this help in my infrastructure costs? Given the amount of data stored in your de-dupe infrastructure, the floor space, power and cooling requirements should be considered when calculating the total cost of ownership. Since your de-dupe solution will be a massive repository, utilizing a purpose-built archive platform will help guarantee data integrity as well as cost effectiveness. Given the fact that the COPAN Systems de-duplication solution is based on MAID, it guarantees power savings of up to 85 percent and up to 7X savings in data center floor space.</p> <p>10. What is the working life of the system and what migration strategy do you offer at that time? Many systems use standard transactional storage systems in the backend. These were designed for a 3-4 year technical refresh cycle. Based on the amount of data stored in a de-duplication platform and the product refresh cycle, thought must be given to how data will be migrated and how often you will be required to migrate it. Due to the massive scalability, increased reliability and low failure rate, the COPAN Systems Persistent Data Storage Platform has a product life of 7-plus years.</p></blockquote> <p>4. Despite the nuances, it seems that all block-level de-dupe technology does the same thing: removes bit string patterns and substitutes a stub. Is this technically accurate or does your product do things differently?</p> <blockquote><p>The COPAN Systems de-duplication solution operates in the manner described, as do most, if not all, options in the marketplace. If the differentiation is not in the algorithm, then where is it? I think the whole solution needs to be taken into account. 
It boils down to the same 10 questions above. If you don’t look at the big picture, you run the risk of choosing a solution that won’t meet your long-term needs.</p></blockquote> <p>5. De-dupe is changing data. To return data to its original state (pre-de-dupe) seems to require access to the original algorithm plus stubs/pointers to bit patterns that have been removed to deflate data. If I am correct in this assumption, please explain how data recovery is accomplished if there is a disaster. Do I need to back up your wares and store them off site, or do I need another copy of your appliance or software at a recovery center?</p> <blockquote><p>To efficiently and effectively perform a disaster recovery, you would require a system in the recovery site that had the data replicated to it. The recovery then would be exactly like a normal restore function performed in the primary site.</p></blockquote> <p>6. De-dupe changes data. Is there any possibility that this will get me into trouble with the regulators or legal eagles when I respond to a subpoena or discovery request? Does de-dupe conflict with the nonrepudiation requirements of certain laws?</p> <blockquote><p>I believe that immutability refers to changing (or tampering with) data. COPAN Systems’ de-duplication does not change the data; rather, it stores it differently. Traditional backup applications have always done something similar by combining files (i.e., tarring them up). Since that, too, only stores the data differently (and more efficiently) while the data is easily returned to its original form, de-duplication is not very different.</p></blockquote> <p>7. Some say that de-dupe obviates the need for encryption. What do you think?</p> <blockquote><p>To be more clear, we need to say “tape encryption.” The greatest risk for data is when it is in motion, either on a network or on a truck. If encryption for data at rest were a requirement, we would see it in all tiers of storage, starting at the top. 
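One mechanical point behind this discussion: deduplication has to see plaintext to find redundancy, so encryption generally belongs after de-dupe (on the wire or on the deduplicated store), not before it. A toy demonstration, using SHA-256 digests as block identities and a keyed-XOR stand-in for a real cipher (purely an assumption for illustration, not a usable cipher):

```python
import hashlib
import os

def toy_encrypt(block: bytes, key: bytes, nonce: bytes) -> bytes:
    """Stand-in stream cipher: XOR the block with a keyed, nonce-derived
    stream. For illustration only -- not a real cipher."""
    stream = hashlib.sha256(key + nonce).digest()
    stream = (stream * (len(block) // len(stream) + 1))[:len(block)]
    return bytes(a ^ b for a, b in zip(block, stream))

block = b"the same block of backup data"
key = b"secret-key"

# Plaintext: identical blocks hash identically, so a dedupe index collapses them.
assert hashlib.sha256(block).digest() == hashlib.sha256(block).digest()

# Encrypt-then-dedupe: a fresh nonce per write makes identical blocks look
# unique, so a downstream dedupe index finds nothing to collapse.
c1 = toy_encrypt(block, key, os.urandom(16))
c2 = toy_encrypt(block, key, os.urandom(16))
assert c1 != c2
```

XORing twice with the same key and nonce returns the plaintext, so the stand-in round-trips like a stream cipher; the point is only the ordering, not the cryptography.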
A de-duplication solution still needs to provide a means for encryption of data in motion. Potentially, the network itself already has that ability. If the question were “Does de-duplication with encrypted replication obviate the need for physical tape encryption?” then, provided you can de-dupe and replicate everything you need to… yes, it does. De-duplication can be used for “tape shredding” when the pointers to the data have been removed, just as if the “encryption keys” were lost for data that had been encrypted.</p></blockquote> <p>8. Some say that de-duped data is inappropriate for tape backup, that data should be re-inflated prior to write to tape. Yet, one vendor is planning to enable an “NDMP-like” tape backup around his de-dupe system at the request of his customers. Is this smart?</p> <blockquote><p>This one gets the good ol’ answer of “it depends”. Only a customer can decide what the objectives and goals are. It’s then a vendor’s job to deliver a solution to meet those goals. However, one of the benefits of a standardized format for the physical media is that it could be easily restored. And, more importantly, restored in any order. The above example leads me to believe I would have to restore an entire repository before I could restore any data. That may make meeting your DR SLA of 24-72 hours for the mission-critical data a bit of a challenge.</p></blockquote> <p>9. Some vendors are claiming de-dupe is “green” — do you see it as such?</p> <blockquote><p>The cynic in me wants to say that’s like saying just because I doubled the size of my disk drives, I am twice as power-efficient. However, at the end of the day, one of the best ways to determine storage power efficiency is terabytes per kilowatt. Given the fact that de-duplication does actually increase the amount of data stored per kilowatt, I would have to agree that it is “green.” That doesn’t mean that the storage platform the data is sitting on provides any efficiency. 
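The terabytes-per-kilowatt yardstick is easy to make concrete; the shelf capacity, power draw and ratios below are hypothetical numbers, not measurements of any product:

```python
def effective_tb_per_kw(raw_tb: float, power_kw: float, dedup_ratio: float) -> float:
    """Logical terabytes stored per kilowatt drawn: de-dupe multiplies the
    numerator while the power draw of the platform stays the same."""
    return raw_tb * dedup_ratio / power_kw

# Hypothetical shelf: 100 TB raw, drawing 2 kW.
print(effective_tb_per_kw(100, 2.0, 1))   # → 50.0 TB/kW with no de-dupe
print(effective_tb_per_kw(100, 2.0, 20))  # → 1000.0 TB/kW at a 20:1 ratio
```

This is exactly the sense in which de-dupe raises data-per-kilowatt without making the underlying platform itself any more efficient.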
To be truly considered a “green” technology, I think you need to combine things like compression and de-dupe with a storage platform that expands upon the benefits. Then you will have a truly efficient solution, one that provides hundreds of terabytes to petabytes of storage per kilowatt, which COPAN does by powering on disk drives only when needed.</p></blockquote> <p>10. De-dupe and VTL seem to be joined at the hip in a lot of vendor discussions: Use de-dupe to store a lot of archival data online in less space for fast retrieval in the event of the accidental loss of files or data sets on primary storage. Are there other applications for de-duplication besides compressing data in a nearline storage repository?</p> <blockquote><p>Anywhere there is a high occurrence of repetitive data is a candidate for de-duplication. It started with VTL (i.e., backup) because that is likely the location of the largest amount of repetitive data. There are many applications and general storage repositories throughout most customer environments that could benefit from de-duplication, such as user home directories and departmental data stores. De-duplication can be applied to data that has multiple generations, allowing the commonality of like records to be optimized: the “changed data” from each record generation is “factored,” then “common-factored” across the entire set of data. This approach is best suited to file and record data types.</p></blockquote> <p>11. Just suggested by a reader: What do you see as the advantages/disadvantages of software-based deduplication vs. hardware (chip-based) deduplication? 
Will this be a differentiating feature in the future… especially now that Hifn is pushing their Compression/DeDupe card to OEMs?</p> <blockquote><p>The move to hardware based de-dupe, or to be more accurate, hardware based hashing (since the HW cards don’t actually do the de-duping, they only create the index) is very likely, provided there is value either in time or cost or even better both. It’s not likely to be a differentiator since speed and cost are usually part of the vendor battleground anyway. An enterprise deduplication approach will need multiple processing/memory combinations for creating hash values within an enterprise system. There will be use cases for both approaches depending upon the implementation used by a storage vendor.</p></blockquote> <p>Thanks, Jay.</p> </li><li id="comment-18193"> <img alt="" src="http://www.gravatar.com/avatar/82973fa07a2c7e71333a35cdc4bda562?s=32&d=http%3A%2F%2Fwww.gravatar.com%2Favatar%2Fad516503a11cd5ca435acc9bb6523536%3Fs%3D32&r=G" class="avatar avatar-32" height="32" width="32" /> <cite><a href="http://www.symantec.com/netbackup" rel="external nofollow">Peter E</a></cite> Says: <br /> <small class="commentmetadata"><a href="http://www.drunkendata.com/?p=1692.#comment-18193" title="">May 9th, 2008 at 8:21 pm</a> </small> <p>1. Please provide the name of your company and the de-dupe product(s) you sell. 
Please summarize what you think are the key values and differentiators of your wares.</p> <blockquote><p>Company Name: Symantec<br />Deduplication Products: NetBackup PureDisk, part of the NetBackup Platform</p> <p>Veritas NetBackup PureDisk provides optimized protection for decentralized data using data deduplication, reducing the total storage consumed by backups by 10–50 times and the network bandwidth required for daily full backups by up to 500 times.<br />NBU PureDisk:</p> <p>• Reduces complexity and risk from remote offices by allowing companies to eliminate tape, encrypt backup data, and centralize data protection in the data center.<br />• Improves the return on investment (ROI) of disk-based backups versus traditional methods with a scalable and open software-based storage system.<br />• Centralizes data protection administration, management and compliance by providing a reliable and consistent backup and recovery process.<br />• Controls and manages the retention of backup data and enables recovery from remote offices, the data center, or other sites.</p></blockquote> <p>2. InfoPro has said that de-dupe is the number one technology that companies are seeking today - well ahead of even server or storage virtualization. Is there any appeal beyond squeezing more undifferentiated data into the storage junk drawer?</p> <blockquote><p>Yes, deduplication is about more than just storage reduction. A deduplication engine can reduce the bandwidth required to move backup data as well. We refer to this as “client-side” deduplication. It can be deployed on a server to be protected, in a physical or virtual environment, and reduce the size of the backup at the source, before any data is moved. As a result, the bandwidth requirements needed to move the data decrease dramatically. Client-side deduplication is very effective for remote office data and applications. 
For example, the NetBackup PureDisk client can send data directly to the data center over the WAN, eliminating the need for a remote backup application and/or remote backup storage. Client-side deduplication can also be an effective means to protect virtual server environments because of how it reduces the I/O requirements (90% less data, less bandwidth) and consequently reduces the backup load on a virtual host.</p> <p>The client-side deduplication approach eliminates the need for more than one full backup as it identifies changed blocks and only backs up the unique blocks. While every backup is a block incremental, a “full image” can be restored at any time. A casual observer familiar with backups may ask how this is different from “synthetic backups?” The difference lies in the size of the incremental backup and the data movement. Client-side deduplication records only the changed blocks in an incremental or subsequent backup pass, not every file that has changed. In a deduplicated file system, the file metadata references the new and existing blocks on disk, thus a new synthetic backup, ready for restore, is available immediately after the backup completes, without any data movement.</p> <p>NetBackup supports both client-side deduplication and target-side deduplication; the latter is what most storage companies posting here offer. In target-side deduplication, the deduplication engine lives on the storage device. We often place NetBackup PureDisk in this category for convenience’s sake, but what we really offer is proxy-side deduplication. By proxy, I mean that we use our NetBackup server (specifically the media server component) as a proxy to perform the deduplication process. With this approach, a customer can increase throughput on both backups and restores with additional media servers.</p></blockquote> <p>3. Every vendor seems to have its own secret sauce de-dupe algorithm and implementation. 
One, Diligent Technologies (just acquired by IBM), claims that theirs is best because it collapses two functions - de-dupe then ingest - into one in-line function, achieving great throughput in the process. What should be the gating factors in selecting the right de-dupe technology?</p> <blockquote> <p align="left">Deduplication is an important component of any backup strategy, but it needs to be used based on RTO and RPO requirements of data (and the business). There is no single factor for selecting a solution, but rather a series of factors that should be considered.</p> <p align="left">All of the other responses seemed to immediately jump to explaining the pros & cons of the secret sauce approaches. We believe this to be only one of several selection criteria.</p> <p align="left">GATING FACTORS IN SELECTING THE RIGHT DEDUPE TECHNOLOGY</p> <p align="left">• Dedupe Process / Efficiency (bit, byte, block, chunk, etc…)<br />• Integration with the backup application<br />• Hardware Flexibility & Cost<br />• Scalability<br />• High Availability<br />• Disaster Recovery </p> <p align="left">First, let’s clear up earlier misconceptions where someone replied that “in-line” deduplication somehow impacts the “data validation process” and results in “false fingerprint compares”. This is simply FUD.</p> <p align="left">DEDUPE PROCESS / EFFICIENCY: The efficiency factor refers to the data reduction effectiveness of the deduplication process as this will impact how much storage a customer buys.</p> <p align="left">The level of granularity in your dedupe process will affect the performance of your dedupe solution and the storage consumed. In other words, 4 KB blocks will have 16 times more pointers and blocks than 64 KB blocks. This is why NetBackup PureDisk has a default block size of 128 KB. Years of implementations have shown this to be an excellent point at which to achieve both optimization and performance. 
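The pointer-count tradeoff behind that block-size choice is simple arithmetic; here is a quick sketch (the 10 TB dataset size is an invented example, and real systems track extra metadata per entry):

```python
def pointer_count(dataset_bytes: int, block_bytes: int) -> int:
    """Number of block references the dedupe index must track (fixed-size blocks)."""
    return -(-dataset_bytes // block_bytes)   # ceiling division

TB = 1024 ** 4
dataset = 10 * TB                 # hypothetical 10 TB backup set
for kb in (4, 64, 128):
    n = pointer_count(dataset, kb * 1024)
    print(f"{kb:>4} KB blocks -> {n:,} index entries")

# 4 KB blocks need 16x the entries of 64 KB blocks, and 32x those of 128 KB:
assert pointer_count(dataset, 4 * 1024) == 16 * pointer_count(dataset, 64 * 1024)
```

Smaller blocks generally find more duplicate data, but every halving of the block size doubles the index the system must search and store, which is the optimization-versus-performance balance described above.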
We also allow the customer to choose the size of the segment for different backup jobs. The size can range from 64 KB to 16 MB.</p> <p align="left">NetBackup PureDisk detects dedupe patterns using a hash-based approach that combines two hashes for identification and verification.</p> <p align="left">As stated earlier, we can place our dedupe engine in two places – on a client (or source server) and within the NetBackup server. As such, PureDisk deduplication is supported both in-line (during the backup) and as a post-process operation from staging disk on a NetBackup media server (when using PureDisk inside of NetBackup). With our dedupe engine on the NetBackup media server, we can increase throughput by spreading the load across multiple NetBackup media servers (load balanced).</p> <p align="left">Integration with Backup Application – Replication of backup data without backup application awareness creates storage and management headaches. How does an administrator know when to delete an image? In a disaster recovery scenario, how does the backup application handle recovery of data not in its catalog? The NetBackup Platform eliminates that problem by providing a means for the backup application to manage the replication and deletion of duplicate images, wherever they may be. This functionality is available with NBU PureDisk as well as with qualified OpenStorage partners (some of whom have posted here).</p> <p align="left">Hardware Flexibility & Cost – We asked ESG to write a whitepaper on the differences between hardware and software-based deduplication. We encourage readers to check it out.</p> <p align="left"><em>[Link Redacted -- DD does not link to analyst papers, especially the pay-per-view variety, unless it is to poke fun. If they think ESG papers are useful, readers can find the link on Symantec's web site. 
-- The Management]</em></p> <p align="left">NetBackup PureDisk is software-based, which means that you can build out a deduplication system with legacy storage or new storage. In fact, you can even use different types of storage within a given storage pool or across locations.</p> <p align="left">So we think customers should consider how a deduplication solution might lock them into specific hardware, and ask whether it can be used with legacy datacenter servers and storage.</p> <p align="left">Scalability – Questions to consider here include the following:<br />• How does the deduplication solution scale in performance and capacity?<br />• If capacity is being added, does this increase your aggregate dedupe pool of storage or create another pool?<br />• How can the aggregate performance of the solution be increased without major reconfiguration of the backup environment? </p> <p align="left">NetBackup PureDisk delivers scalability by breaking apart several components: where dedupe occurs, where metadata is stored, and where file content data is stored. PureDisk stores metadata in a metabase engine and file content data in a Content Router. These are the two primary components of our storage pool.</p> <p align="left">The benefit of this approach for customers is that performance and storage can be improved by adding additional servers with any one or both of these components. Both the metabase engine and content router components are horizontally scalable. So when you want to expand capacity with NetBackup PureDisk, you add another content router node (each node holds 8 TB of dedupe data – much more backup data). PureDisk automatically load balances the content across the two content routers to improve performance of backup and restore. The same concept can be applied to the metabase engine, which is an integrated relational database, where we store file references. 
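A common way to make a content store horizontally scalable in the manner described above is to derive each block's placement from its fingerprint. The sketch below illustrates that general technique only (it is not PureDisk's actual placement logic; the router count and hash choice are assumptions):

```python
import hashlib

def route(fingerprint: str, n_routers: int) -> int:
    """Pick a content-router node from a block's hex fingerprint.

    Deriving placement from the fingerprint keeps lookups deterministic:
    any node can compute where a block lives without a central directory.
    """
    return int(fingerprint, 16) % n_routers

# Distribute 1,000 distinct blocks across two hypothetical router nodes.
fingerprints = [hashlib.sha256(str(i).encode()).hexdigest() for i in range(1000)]
counts = [0, 0]
for fp in fingerprints:
    counts[route(fp, 2)] += 1
# The hash spreads blocks roughly evenly, so capacity and throughput
# both grow as nodes are added.
```

One design caveat worth noting: naive modulo routing remaps most blocks whenever the node count changes, which is why production systems typically use some form of consistent hashing or managed rebalancing instead.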
In short, when needed, aggregate performance can be improved by adding additional nodes (a server with one or both of these components).</p> <p align="left">High Availability - Does the deduplication solution have high availability failover to spare nodes built in to protect it against server (node or controller) failure? What happens when one controller or node goes down in a distributed storage system?</p> <p align="left">NetBackup PureDisk can protect against this with integrated high-availability using Veritas Cluster Server.</p> <p align="left">See question 5 for more details on the Symantec solution.</p> <p align="left">Disaster Recovery – Does the deduplication solution have recovery features to recover data in case of disk failure or data corruption on disk?</p> <p align="left">NetBackup PureDisk provides several disaster recovery options, including optimized replication, reverse replication, and of course the ability to recover a complete system from tape.</p> <p align="left">Again, see question 5 for more detail on the Symantec solution.</p> </blockquote> <p>4. Despite the nuances, it seems that all block level de-dupe technology does the same thing: removes bit string patterns and substitutes a stub. Is this technically accurate or does your product do things differently?</p> <blockquote><p>At the highest level of abstraction, all dedupe systems use pointers to reference duplicate blocks, bytes, or bits. And this is how NetBackup PureDisk operates. With this in mind, customers should think about how the deduplication architecture, specifically the storage rather than the deduplication engine, tracks and manages those references and what happens when the number of references grows very large. 
For example, when your dedupe system has grown to hundreds of terabytes of information, how does the expiration of backup data affect the system?</p> <p>When you expand your deduplication system with another node (or controller), are you expanding the same dedupe pool or creating another pool of storage? If you can grow a single pool, how does the system balance metadata and content data (the blocks) across the whole system?</p> <p>The architecture of NetBackup PureDisk offers the best of both worlds by storing metadata in a horizontally scalable database, as opposed to the file system, and content blocks in a horizontally scalable file system. The separation of these two components improves scalability and performance.</p></blockquote> <p>5. De-dupe is changing data. To return data to its original state (pre-de-dupe) seems to require access to the original algorithm plus stubs/pointers to bit patterns that have been removed to deflate data. If I am correct in this assumption, please explain how data recovery is accomplished if there is a disaster. Do I need to backup your wares and store them off site, or do I need another copy of your appliance or software at a recovery center?</p> <blockquote><p>This question appears to have two parts – standard recovery and disaster recovery. It seems not all responses have consistently addressed the DR scenarios for their dedupe systems. I will address how a regular recovery works and how DR works for NetBackup PureDisk.</p> <p>For standard data recovery with NetBackup PureDisk, a customer selects the data they wish to recover and initiates the recovery request. The data is reassembled by the PureDisk dedupe engine. We deliver this engine in two different places – on the PureDisk client for client-side deduplication and in the NetBackup media server for proxy-deduplication. 
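Conceptually, restore in a deduplicating system is the reverse mapping: walk the file's ordered list of block references (its "recipe") and fetch each block from the store. A minimal, hypothetical sketch of both directions, using invented names and a deliberately tiny block size:

```python
import hashlib

def backup(data: bytes, store: dict, block_size: int = 4) -> list:
    """Store unique blocks once; return the ordered fingerprint list (recipe)."""
    recipe = []
    for off in range(0, len(data), block_size):
        block = data[off:off + block_size]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)   # duplicate blocks are stored only once
        recipe.append(fp)
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Reassemble the original bytes by fetching each referenced block in order."""
    return b"".join(store[fp] for fp in recipe)

store = {}
original = b"ABCDABCDXYZ!"
recipe = backup(original, store)
assert restore(recipe, store) == original
assert len(store) == 2            # "ABCD" stored once, "XYZ!" once
```

This also makes the disaster-recovery point concrete: a restore needs the recipe metadata *and* the block store together, which is why both must be replicated or backed up as a unit.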
Besides network speed and disk speed, the two primary variables that can affect restore speed are how fast you can get that data to the dedupe engine and how many engines you run. First, if a customer needs to get data to the PureDisk dedupe engine more quickly, they can add additional storage nodes (we call these content routers) to increase performance. Second, we can increase the dedupe engine speed, so to speak, with additional engines or specifically NetBackup media servers. These engines can run in parallel on the NetBackup media servers to restore data even more quickly. Finally, PureDisk stores the most recent backup data in such a way that it can be recalled more quickly than older data.</p> <p>NetBackup PureDisk provides protection against disk failure by using Storage Foundation (SF) in combination with hardware array-based redundant-array-of-independent-disks (RAID) or SF software RAID protection. Storage Foundation can also manage multiple storage paths for PureDisk to provide redundancy and performance. Protection against node failure, by failing over to a spare node in the storage pool, is provided using Veritas Cluster Server (VCS). PureDisk can also provide a scripted manual failover in cases where VCS is not desired. VCS can provide protection against network failure in an HA configuration. PureDisk can provide native protection against network failure in non-HA configurations. Protection against site failure is provided using PureDisk’s native replication capability to perform bandwidth-optimized replication from the datacenter to a DR site. The PureDisk storage pool, including the deduplicated backup data, can be protected by using PureDisk’s optimized Disaster Recovery (DR) backup capability with NetBackup. 
This NetBackup integration enables users to perform incremental backups of a multi-node storage pool, including configuration and all deduplicated data, to any medium (including tape), and to create synthetic full images to improve recovery times.</p> <p>Similar to the data center, for remote sites where DR backup is not possible onsite, both configuration and backup data from a remote storage pool can be replicated to a data center, which allows for fast recovery of a remote storage pool to a spare system in the datacenter.</p> <p>Finally, PureDisk can also export data (out of a dedupe state) to NetBackup to create standard tapes of backup data at desired intervals for long-term data vaulting or archival purposes.</p> <p>PureDisk software, related configuration information, and the relevant data are all required to recover any data written into a PureDisk storage pool.</p></blockquote> <p>6. De-dupe changes data. Is there any possibility that this will get me into trouble with the regulators or legal eagles when I respond to a subpoena or discovery request? Does de-dupe conflict with the non-repudiation requirements of certain laws?</p> <blockquote><p>We have not encountered this legal question. Data deduplication does not change the underlying content of the data; it merely breaks the data up into pieces to store it more efficiently.</p></blockquote> <p>7. Some say that de-dupe obviates the need for encryption. What do you think?</p> <blockquote><p>While it is true that the layout of dedupe data differs from that of a standard file system, someone could potentially inspect the data and reassemble information based on the blocks stored on disk. And the first time a dedupe engine encounters a new file with all unique data, it will need to send every block to storage. In this manner, someone could reassemble a file or data from the pieces. 
Each block could also contain sensitive data; thus even if the whole file cannot be easily reassembled from the blocks, encryption will still be required.</p> <p>Though physical and network security may exist within the data center and when transferring data between sites, we find some customers still want additional levels of security, and encryption provides that additional layer. Symantec’s PureDisk offers client-side encryption of data for those customers that need an additional layer of security. PureDisk goes beyond this and offers a feature called Data Lock, which allows users outside of IT, such as HR or legal, to add a password to a backup selection and prevent browsing and/or recovery of data without a unique password that is separate from application access controls.</p></blockquote> <p>8. Some say that de-duped data is inappropriate for tape backup, that data should be re-inflated prior to write to tape. Yet, one vendor is planning to enable an “NDMP-like” tape backup around his de-dupe system at the request of his customers. Is this smart?</p> <blockquote><p>Tape is an excellent medium for sequential read/write processes. Data deduplication is the epitome of a random access process with variation driven based on the location of metadata, the number of nodes, the type of data, etc.</p> <p>The need/requirement to back up a dedupe system to tape stems from a disaster recovery concern. As companies reduce the copies of data down to one, they become more reliant on that single copy. Similarly, as the size of a dedupe system containing single copies of backup data grows, the need to have periodic recovery points in case of some type of corruption or disaster increases. PureDisk replication can provide recovery in case of local disaster or corruption.</p> <p>If no replication can be implemented, the PureDisk storage pool, including the dedupe backup data, can be protected by using PureDisk’s optimized Disaster Recovery (DR) backup capability with NBU. 
This NetBackup integration enables users to perform incremental forever backups of a multi-node storage pool, including configuration and all deduplicated data to any NBU medium (including tape), and to synthesize them into full backups for faster recovery.</p> <p>In addition, PureDisk can export deduplicated backup data to NetBackup, in which the data is indeed re-inflated prior to writing it to tape (or any other NBU supported media). This feature supports customers that have a tape archive requirement for long term data retention or compliance. Data is written to tape in standard NBU format which is accepted as a long term data retention format.</p> <p>Writing data in deduplicated form to tape for long term retention (and single file restore) does not seem feasible: e.g. a file that consists of 100 blocks could potentially require 100 tapes to recover from. This is not practically usable.</p></blockquote> <p>9. Some vendors are claiming de-dupe is “green” - do you see it as such?</p> <blockquote><p>Yes, we see dedupe as a “green type” technology because it allows customers to store more data on a given amount of disk in the data center. If we assume that in lieu of dedupe disk, a customer were to use regular disk, then a savings in floor space and electricity has already been realized.</p> <p>For long-term retention of backup or archive data (e.g., beyond 1-2 years), tape may become the preferred storage medium when the data is no longer expected to be accessed. The assumption here would be that the number of recovery points required would drop such that a weekly or monthly full would be sufficient.</p></blockquote> <p>10. De-dupe and VTL seem to be joined at the hip in a lot of vendor discussions: Use de-dupe to store a lot of archival data on line in less space for fast retrieval in the event of the accidental loss of files or data sets on primary storage. 
Are there other applications for de-duplication besides compressing data in a nearline storage repository?</p> <blockquote><p>We addressed how a deduplication engine can be deployed out to the physical or virtual server to back up data in bandwidth or I/O constrained environments. Deduplication storage can be an excellent medium for both backup and archive data with medium-term retention times (in the range of months).</p> <p>Symantec released its OpenStorage API last year, which allows customers to leverage the capabilities of intelligent disk systems (including deduplication appliances) more optimally without having to go through the intermediate, limiting tape emulation step.</p> <p>Deduplication also enables on-line vaulting and disaster recovery. As the amount of data is dramatically reduced, replication of the data over the WAN to a DR site becomes economically viable, eliminating the need for tape collection and vaulting services.</p></blockquote> <p>11. Just suggested by a reader: What do you see as the advantages/disadvantages of software based deduplication vs. hardware (chip-based) deduplication? Will this be a differentiating feature in the future… especially now that Hifn is pushing their Compression/DeDupe card to OEMs?</p> <blockquote><p>One submission’s statement that all “software based dedupe” presents a challenge because it involves multiple agents or deduplication points is a mischaracterization of software-based deduplication. The heart of software based dedupe is both the dedupe engine and the storage architecture that supports it. 
We have a dedupe engine for our clients as well as for our backup media server.</p> <p>Again, per an earlier question, Enterprise Strategy Group recently wrote an interesting paper for Symantec entitled “Differentiating Hardware and Software-based Data De-duplication.” Symantec’s software approach to deduplication lies in the PureDisk storage pool architecture where we separate out metadata and content into two horizontally scalable components called the metabase engine and the content router (see previous answers).</p> <p>With regard to compression, it is important to understand that compression and deduplication are radically different. Because compression only looks for repetitive patterns in a single file, it is fairly easy to build the algorithm and look-up cache in a chip. Deduplication, by contrast, compares patterns in new incoming data to the total dataset already stored in the deduplicated backend. While these accelerator chips can speed up parts of the process, such as MD5 or other fingerprint calculation, the whole deduplicated storage system still requires software to control the global index, data removal, scalability, HA and DR.</p> <p>Symantec is looking into supporting some of these hardware boards through the appropriate drivers.</p></blockquote> <blockquote><p>Thanks,<br />Peter</p></blockquote> <p>No, thank you, Peter.</p> </li><li class="alt" id="comment-18224"> <img alt="" src="http://www.gravatar.com/avatar/ad5dcf5f4c00b3f3dd1febb6b8baf8e5?s=32&d=http%3A%2F%2Fwww.gravatar.com%2Favatar%2Fad516503a11cd5ca435acc9bb6523536%3Fs%3D32&r=G" class="avatar avatar-32" height="32" width="32" /> <cite><a href="http://www.permabit.com/" rel="external nofollow">Jered</a></cite> Says: <br /> <small class="commentmetadata"><a href="http://www.drunkendata.com/?p=1692.#comment-18224" title="">May 27th, 2008 at 10:47 am</a> </small> <p>Jon,</p> <p>Thanks for the opportunity to comment on this.</p> <p>> 1. 
Please provide the name of your company and the de-dupe product(s) you sell. Please summarize what you think are the key values and differentiators of your wares.</p> <p>Permabit Technology Corporation delivers Permabit Enterprise Archive, a disk-based storage system with standard NAS interfaces. Permabit Enterprise Archive provides enterprise class archival storage with the flexibility and speed of disk, but at or below the cost of tape. The system includes Scalable Data Reduction, combining traditional compression with sub-file deduplication, and has a grid architecture that uniquely allows scaling to petabytes of real, physical disk (and many more times that of data).</p> <p>> 2. InfoPro has said that de-dupe is the number one technology that companies are seeking today — well ahead of even server or storage virtualization. Is there any appeal beyond squeezing more undifferentiated data into the storage junk drawer?</p> <p>If all that data were junk, we wouldn’t have this problem! Pretty much all the analysts point to data growth rates of 60 to 80% annually. While some of this data is perhaps unnecessary, the bulk of it does need to be kept around for future reference. Digital documents keep growing in both size and volume, and either regulations require that they be kept around, or businesses see value in later data mining.</p> <p>Most of this data is being dumped onto existing primary storage, and those primary storage environments (the very costly “junk drawers” out there) keep growing — at an average cost of around $43/GB. That’s an outrageous price, and the number one driver for deduplication. Customers don’t want deduplication, per se; what they want is cheaper storage. Deduplication is a great way of helping deliver that, but it’s only one way in which Permabit drives down costs.</p> <p>> 3. Every vendor seems to have its own secret sauce de-dupe algorithm and implementation. 
One, Diligent Technologies (just acquired by IBM), claims that theirs is best because it collapses two functions — de-dupe then ingest — into one in-line function, achieving great throughput in the process. What should be the gating factors in selecting the right de-dupe technology?</p> <p>Requirements will differ by use case; dedupe for backup is different from dedupe for archive. For the market space we address, Enterprise Archive, we see four key factors:</p> <p>- Scalability: Enterprise Archive environments range from 50 terabytes to multiple petabytes today, and if current growth rates are sustained, a 100TB archive will be over 3PB by 2012. To see significant cost savings, the archive must be managed and maintained as a single unit. Any storage system for archive, deduplicating or not, must be able to scale to petabytes of real physical disk, not pie-in-the-sky 50X deduped data.</p> <p>- Cost: To escape the pain of growing primary storage costs, an enterprise archive has to deliver a major change in storage costs. Primary storage averages $43/GB; Permabit is $5/GB before any savings due to deduplication. With even 5X deduplication, the realized cost is $1/GB, competitive with tape offerings. Deduplication is not the feature; low cost of acquisition is the feature. On top of that, we deliver lower TCO through ease of management, and by eliminating the need to ever migrate data to a new system by having hardware upgrades managed entirely internally to our system.</p> <p>- Availability: Archive data must always be available. When data is required, it needs to be available in milliseconds, not hours or days. Legal discovery may require a full response in as little as three days, and tape is just not a valid option.</p> <p>- Reliability: An Enterprise Archive system must be as reliable as possible, as it may hold the only remaining copy of a critical piece of information. Tapes don’t cut it — failure rates are quoted as high as 20%. 
Even RAID 6 shows weakness when considered across petabytes of data and dozens of years.</p> <p>> 4. Despite the nuances, it seems that all block level de-dupe technology does the same thing: removes bit string patterns and substitutes a stub. Is this technically accurate or does your product do things differently?</p> <p>I wouldn’t use the word “stub”, but otherwise that’s a generally fair statement. As data is ingested into a Permabit Enterprise Archive system, we break it up into variable-sized sub-file chunks. For each of those chunks, we determine if it already exists anywhere in the system; if not, we store it. A file is then a list of named chunks that, in order, contain all the data for the file. This is not terribly different from a file in a conventional file system, which is just a list of named disk blocks that, in order, contain all the data for that file. We simply have variable sized “blocks”, and those “blocks” may be in use by multiple files, if they contain the same data.</p> <p>> 5. De-dupe is changing data. To return data to its original state (pre-de-dupe) seems to require access to the original algorithm plus stubs/pointers to bit patterns that have been removed to deflate data. If I am correct in this assumption, please explain how data recovery is accomplished if there is a disaster. Do I need to backup your wares and store them off site, or do I need another copy of your appliance or software at a recovery center?</p> <p>For disaster recovery purposes, Permabit Enterprise Archive incorporates replication features that allow replication to a remote site, be it another office, a data center, or a service provider. Permabit’s replication takes advantage of our Scalable Data Reduction (SDR), our combination of compression and sub-file deduplication, to minimize bandwidth over the WAN.</p> <p>> 6. De-dupe changes data. 
Is there any possibility that this will get me into trouble with the regulators or legal eagles when I respond to a subpoena or discovery request? Does de-dupe conflict with the nonrepudiation requirements of certain laws?</p> <p>Again, dedupe does not change data any more than compression changes data, or traditional file systems change data. Plain old LZW compression gives you a different output bitstream than what went in, with redundant parts removed. Conventional file systems break up files into blocks and scatter those blocks across one or more disks, requiring complicated algorithms to retrieve and return the data. Dedupe is no different. Nonrepudiation requirements are satisfied by the reliability and immutability of the system as a whole, deduplicating or not.</p> <p>> 7. Some say that de-dupe obviates the need for encryption. What do you think?</p> <p>Anyone who says that is selling snake oil; would you care to name names here? Encryption technologies make it mathematically infeasible to determine the contents of a message (or file) without the cipher’s key. Dedupe has nothing to do with this — however, the two technologies can be combined. Permabit uses AES, the current federal encryption standard, for both data protection on disk and over the wire.</p> <p>> 8. Some say that de-duped data is inappropriate for tape backup, that data should be re-inflated prior to write to tape. Yet, one vendor is planning to enable an “NDMP-like” tape backup around his de-dupe system at the request of his customers. Is this smart?</p> <p>Deduplicating to tape is fine, as long as the data set is entirely self-contained, and the only sort of restore expected is a full-system restore. 
If you have a dedupe pool across multiple tapes, a restore operation will turn into a messy experience of “please insert tape number 263”, and if the restore is not a full-system restore, the performance will be terrible due to seeking along the tape for each individual chunk.</p> <p>For the case of the “NDMP-like” feature, I’d have to understand the use case better; there are certainly sensible things I can imagine.</p> <p>> 9. Some vendors are claiming de-dupe is “green” — do you see it as such?</p> <p>Certainly; it’s as green as any other technology that reduces the number of disks spinning. 10X dedupe means 10X fewer disk spindles. Larger drive capacities are green too.</p> <p>> 10. De-dupe and VTL seem to be joined at the hip in a lot of vendor discussions: Use de-dupe to store a lot of archival data on line in less space for fast retrieval in the event of the accidental loss of files or data sets on primary storage. Are there other applications for de-duplication besides compressing data in a nearline storage repository?</p> <p>This question is asking a few different things. The first thing: dedupe and VTL come up together frequently because VTL is a blindingly obvious use case. The driving factor behind VTL, versus other backup-to-disk technologies, is that the only thing that needs to change in the environment is swapping the tape library for the VTL. No need to rearchitect your backup scheme, no need to change the software, just plug and play. So, VTL vendors tell customers to just keep doing what they’re doing, which involves things like weekly full backups that don’t really make sense in the disk world. Of course VTL vendors can get 25X dedupe — they’re telling their customers to write the same data 25 times!</p> <p>The second thing is that backup and archive are very different things. Backups are generally additional copies of data you have elsewhere, and backups are things that you hope you never, ever have to use. 
They don’t have to be completely reliable, because you have many copies of the same data on other tapes. They don’t have to be always available, because you have a nightly backup window. Archives, on the other hand, contain the last and final copy of data that you don’t need right now, but probably will in the future. These need to be completely reliable and available.</p> <p>As I talked about above, dedupe is very important in archives as well, strictly from the perspective of cost savings. But it’s also much harder to dedupe archives, because you don’t have the built-in advantage that VTL backups have — telling customers to save the same data over and over. Building deduplication for archives is a much harder problem, because you have to work harder to find opportunities for dedupe, and you must be able to scale to enormous amounts of disk. In the archive space, you can’t sell your 30TB box as a “one petabyte” appliance.</p> <p>> 11. Just suggested by a reader: What do you see as the advantages/disadvantages of software based deduplication vs. hardware (chip-based) deduplication? Will this be a differentiating feature in the future… especially now that Hifn is pushing their Compression/DeDupe card to OEMs?</p> <p>Anyone who’s pitching “hardware deduplication” is just selling a coprocessor that helps with operations common to deduplication, like cryptographic hashing. If hashing is the performance bottleneck for a vendor, adding in a hardware accelerator will help; if it isn’t, it won’t. Software vs. hardware deduplication will have no user-visible differences other than perhaps performance, but generally the hashing isn’t the part that’s resource intensive, it’s the indexing of all the data in the system. 
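A back-of-envelope calculation shows why the index, rather than the hashing, is the resource-intensive part: the index must cover every chunk in the system and be consulted for every new chunk. The figures below are assumed round numbers for illustration, not any vendor's specifications:

```python
def index_ram_bytes(stored_bytes: int, avg_chunk_bytes: int, entry_bytes: int = 40) -> int:
    """RAM needed to hold the full chunk index in memory.

    Assumes ~40 bytes per entry (a truncated fingerprint plus an on-disk
    location) -- an invented but plausible figure.  A hashing accelerator
    card does nothing to shrink this structure.
    """
    return (stored_bytes // avg_chunk_bytes) * entry_bytes

TB = 1024 ** 4
GiB = 1024 ** 3
# Hypothetical: 100 TB of deduplicated storage with 8 KB average chunks.
ram = index_ram_bytes(100 * TB, 8 * 1024)
print(f"index alone: {ram / GiB:.0f} GiB of RAM")
```

Under these assumptions the index alone runs to hundreds of gibibytes, which is why scaling the index (in RAM, on disk, or across nodes) dominates system design while per-chunk hashing remains comparatively cheap.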
Oh, and hardware dedupe systems will be more expensive, because it’s one more piece of hardware to buy and put in the box.</p> </li><li id="comment-18240"> <img alt="" src="http://www.gravatar.com/avatar/57f11d1f552d65f3c60e17683bdee0e7?s=32&d=http%3A%2F%2Fwww.gravatar.com%2Favatar%2Fad516503a11cd5ca435acc9bb6523536%3Fs%3D32&r=G" class="avatar avatar-32" height="32" width="32" /> <cite><a href="http://www.ibm.com/developerworks/blogs/page/InsideSystemStorage" rel="external nofollow">TonyLovesLinux</a></cite> Says: <br /> <small class="commentmetadata"><a href="http://www.drunkendata.com/?p=1692.#comment-18240" title="">May 30th, 2008 at 7:13 pm</a> </small> <p>Hi Jon,<br />Response from IBM <a href="http://www.ibm.com/developerworks/blogs/page/InsideSystemStorage?entry=eleven_answers_about_deduplication_from" target="_blank" rel="nofollow">here</a>:</p> </li><li class="alt" id="comment-18272"> <img alt="" src="http://www.gravatar.com/avatar/4f92cf4d45b5619f007fba688f33a7f4?s=32&d=http%3A%2F%2Fwww.gravatar.com%2Favatar%2Fad516503a11cd5ca435acc9bb6523536%3Fs%3D32&r=G" class="avatar avatar-32" height="32" width="32" /> <cite><a href="http://www.datainstitute.org/" rel="external nofollow">Administrator</a></cite> Says: <br /> <small class="commentmetadata"><a href="http://www.drunkendata.com/?p=1692.#comment-18272" title="">June 18th, 2008 at 11:06 am</a> </small> <p>A late response from Sepaton.</p> <p>1. Please provide the name of your company and the de-dupe product(s) you sell. Please summarize what you think are the key values and differentiators of your wares.</p> <blockquote><p>Company: SEPATON<br />Dedupe Product: DeltaStor® ContentAware™ deduplication<br />SEPATON’s DeltaStor technology is a software feature for existing S2100-ES2’s VTL solutions. It leverages the grid architecture of the S2100-ES2 and can scale capacity or performance independently to meet the needs of enterprise customers.</p></blockquote> <p>2. 
InfoPro has said that de-dupe is the number one technology that companies are seeking today — well ahead of even server or storage virtualization. Is there any appeal beyond squeezing more undifferentiated data into the storage junk drawer?</p> <blockquote><p>Absolutely, that is why SEPATON introduced a ContentAware approach to deduplication, which enables the ability to significantly leverage the content that is deduplicated. Solutions should have an inherent understanding of the data that is being stored, including the application type, and enable the ability to turn deduplication on or off depending on business and regulatory requirements. In addition, metadata about the content should also be stored, enabling much more efficient content indexing and search, and therefore the ability to meet discovery requests. Deduplication solutions should not simply perpetuate the “storage junk drawer” scenario and should instead enable much higher value functions for IT and the business to leverage. That is SEPATON’s approach.</p></blockquote> <p>3. Every vendor seems to have its own secret sauce de-dupe algorithm and implementation. One, Diligent Technologies (just acquired by IBM), claims that theirs is best because it collapses two functions — de-dupe then ingest — into one in-line function, achieving great throughput in the process. What should be the gating factors in selecting the right de-dupe technology?</p> <blockquote><p>Customers need to understand the problem that they are trying to solve. Deduplication provides a reduction in disk footprint, but is not a panacea. Typically, we find that customers have core business SLAs that they need to meet around data protection.</p> <p>Customers need to evaluate solutions around how they can meet these requirements in a cost effective manner. 
Some solutions focus on a single system metaphor where they provide separate and independent boxes with limited capacity and performance metrics.</p> <p>Implementing these solutions will typically require multiple separate instances which adds to complexity and cost. Some vendors also aggressively promote inline deduplication which typically results in a decrease in performance and limits capacity within the appliance. Concurrent-process solutions like SEPATON’s DeltaStor typically don’t have these limitations, but will initially require some incremental disk space.</p> <p>In short, the customer must first evaluate their data protection requirements:</p> <p>• What is their backup window?</p> <p>• Do they have requirements on restore time? (Remember, restore performance impacts not just DR, but also physical tape creation.)</p> <p>• What is their data growth rate?</p> <p>Once customers understand their requirements they should then look for a deduplication solution that meets those needs. All too often, we see customers taking the opposite approach where they decide they need dedupe for whatever reason without giving thought to the impact of the technology on their SLAs and costs.</p></blockquote> <p>4. Despite the nuances, it seems that all block level de-dupe technology does the same thing: removes bit string patterns and substitutes a stub. Is this technically accurate or does your product do things differently?</p> <blockquote><p>At a high level, you are correct that all deduplication algorithms do the same thing. They use varying approaches to identify what is unique data and what is redundant and then replace the redundant data with pointers. 
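The identify-and-replace-with-pointers process described here can be sketched in a few lines. This is a generic toy illustration, not any vendor's algorithm; the fixed-size chunking and SHA-256 keys are my illustrative assumptions (DeltaStor, as described below, works differently):

```python
import hashlib

CHUNK_SIZE = 8  # tiny for illustration; real systems use KB-sized or variable chunks

def dedupe(data: bytes, store: dict) -> list:
    """Split data into fixed-size chunks; store each unique chunk once,
    keyed by its hash, and return the list of pointers (hash keys)."""
    pointers = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        key = hashlib.sha256(chunk).hexdigest()
        store.setdefault(key, chunk)   # redundant chunks are not stored again
        pointers.append(key)
    return pointers

def rehydrate(pointers: list, store: dict) -> bytes:
    """Reassemble the original data by following the pointers."""
    return b"".join(store[key] for key in pointers)

store = {}
data = b"AAAABBBB" * 4                 # highly redundant input: 4 identical chunks
ptrs = dedupe(data, store)
assert rehydrate(ptrs, store) == data  # lossless round trip
assert len(store) == 1                 # 4 chunks stored as 1 unique chunk + pointers
```

The round trip shows why the pointers plus the unique-chunk store are all that is needed to recover the original data.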
The interesting thing is that while the high level process is the same, the vendors use radically different approaches to process the data and these approaches can offer dramatically different metrics around scalability, deduplication ratios, performance and TCO.</p> <p>SEPATON leverages ContentAware technology for our DeltaStor deduplication. Through it we gather information about the content of the backup at the object level to identify objects that contain duplicate data. By narrowing the search we can then compare data at the byte level for much more granular deduplication. Additionally, we can perform the various deduplication activities across multiple nodes allowing us to easily scale deduplication performance. This approach enables DeltaStor to find more redundancies and to outperform other solutions.</p></blockquote> <p>5. De-dupe is changing data. To return data to its original state (pre-de-dupe) seems to require access to the original algorithm plus stubs/pointers to bit patterns that have been removed to deflate data. If I am correct in this assumption, please explain how data recovery is accomplished if there is a disaster. Do I need to backup your wares and store them off site, or do I need another copy of your appliance or software at a recovery center?</p> <blockquote><p>It depends. In SEPATON’s case, the newest data (i.e. the last backup) is kept in its native, non-de-duplicated format. Older versions of data are de-duplicated. Once de-duplication is accomplished, all information necessary to reconstruct that data (the fragments of unique data plus any required pointers) are kept in SEPATON’s filesystem directly, and no longer require any “recipe” – data is directly recoverable. In particular, the filesystem is built to be robust and reliable (i.e. self-discoverable, self-healing, redundant, etc.).</p> <p>SEPATON further believes that our appliance should be transparent to the data protection environment. 
That is, it should work with existing policies and/or procedures. Most customers use the VTL as the primary target for their backups on their local site.</p> <p>Most large enterprises still have a substantial investment in tape and prefer to use that medium for long-term archival. In these environments, the VTL will hold the data onsite for local restores and un-deduplicated tapes will be created for offsite storage by the backup application. This ensures that the tapes are fully recoverable in a remote site even without the VTL.</p> <p>Also remember, restore performance is vital here since the process of creating tapes depends on data being read from the VTL at high speed. DeltaStor’s forward differencing technology maintains a complete copy of the newest backup ensuring the fastest restores on the data with no re-assembly required.</p> <p>A replication solution is also offered. This product integrates with the backup application and replicates data to a remote VTL based on policies established within the backup software. In this case, both VTLs will hold deduplicated data. The DR process in this scenario is essentially the same as described above since the remote VTL will present itself to the remote backup server as a tape library and drives that exactly match the ones on the primary site.</p> <p>Customers need choice, and SEPATON offers multiple solutions. They can maintain tape procedures and use tape for DR or they can use SEPATON replication and use a second system for their remote site. Either way, there is very little change in the customer’s policies or procedures.</p></blockquote> <p>6. De-dupe changes data. Is there any possibility that this will get me into trouble with the regulators or legal eagles when I respond to a subpoena or discovery request? Does de-dupe conflict with the nonrepudiation requirements of certain laws?</p> <blockquote><p>As previously mentioned, deduplication does not change what data is available for recovery or restore. 
It simply changes the way it is stored. However, with limited case law on deduplication, there is still some uncertainty here. A potential issue is that some deduplication algorithms rely on hashing for deduplication.</p> <p>There is a known risk of hash collisions in these algorithms which would result in silent data corruption. While the likelihood is small, it is still a possibility and it is unclear what the legal implications are. Some approaches, like that used in DeltaStor, avoid relying on hashes for this exact reason.</p> <p>In the end the decision regarding your question comes down to the customer. They have to decide for themselves about these issues, and what we can do is provide them with a tested, low-risk, high-availability platform to leverage as a part of their corporate data governance practices. We have seen some cases where customers prefer to avoid deduplicating certain data types, backup jobs or even servers which they deem most likely to be subpoenaed. Many solutions do not allow the flexibility to enable or disable deduplication by application, which SEPATON’s DeltaStor does. In short, a customer should examine their legal and discovery requirements carefully and should take the requirements into consideration when evaluating deduplication options.</p></blockquote> <p>7. Some say that de-dupe obviates the need for encryption. What do you think?</p> <blockquote><p>These two technologies solve different problems. Encryption is about limiting data access and preventing inappropriate parties from accessing private data. In many environments, encryption is based on military grade algorithms that are virtually impossible to decipher without the appropriate key, and typically encryption strength is valued over performance.</p> <p>Deduplication, on the other hand, is designed to reduce the footprint of data on disk. It allows customers to store more data in a smaller footprint. 
Performance is typically an important element of a deduplication solution because it can be a bottleneck in data protection. Most of the solutions are based on NAS and/or VTL access methodologies and while they provide access controls, they are not designed to provide the level of protection of military grade encryption.</p> <p>In summary, encryption and data deduplication are technologies targeted at two different problems. In fact, these technologies can be complementary and we have seen many companies looking to use the two technologies together.</p></blockquote> <p>8. Some say that de-duped data is inappropriate for tape backup, that data should be re-inflated prior to write to tape. Yet, one vendor is planning to enable an “NDMP-like” tape backup around his de-dupe system at the request of his customers. Is this smart?</p> <blockquote><p>It depends on the customer’s need, but it seems like it could be a challenge. By breaking data into smaller chunks and creating pointers, deduplication essentially fragments data. This works in a random access environment as seen on a disk subsystem. Once you move the deduplicated data to tape, your tape now contains fragmented data, which is impossible to restore directly. Instead, all tapes from a de-duplication “NDMP-like” backup will need to first be restored onto the disk-based system, and then access to the backup data is possible.</p> <p>Finally, customers need to think about accessing their data in the future. The beauty of today’s backup applications is that they use a consistent tape format and so you can be confident that data written can be recovered. As soon as you create a proprietary tape format, as suggested here, the customer is now completely dependent on the deduplication for all future restore requirements. This may not seem like a huge problem in the near term, but what if you need to restore the data in 2 years?</p></blockquote> <p>9. 
Some vendors are claiming de-dupe is “green” — do you see it as such?</p> <blockquote><p>It depends on what you are comparing it to. It is clearly greener than non-deduped disk; it is unclear how more or less green it is than physical tape. That said, most customers are implementing or have implemented disk in the datacenter for data protection due to its reliability and performance profile. Many customers are actively looking to implement non-deduped disk to retain more data onsite. Deduplication solutions are often considered instead of implementing more traditional disk. In these environments, it is clear that deduplication provides strong green benefits.</p></blockquote> <p>10. De-dupe and VTL seem to be joined at the hip in a lot of vendor discussions: Use de-dupe to store a lot of archival data on line in less space for fast retrieval in the event of the accidental loss of files or data sets on primary storage. Are there other applications for de-duplication besides compressing data in a nearline storage repository?</p> <blockquote><p>The redundant nature of backup data makes it an ideal target for deduplication. In what other environments are you making a full and completely redundant copy of your data on a weekly basis? Thus deduplication in data protection is naturally the first market because it provides the opportunity for substantial disk savings.</p> <p>Going forward, we would anticipate seeing deduplication available across a wide range of storage devices. It is unlikely that you will ever see it in high-end Fibre Channel arrays where performance is the number one priority, but we would expect to see similar technologies implemented in a wide variety of second tier storage applications. The deduplication ratios experienced will likely be much lower than in data protection environments, but it can still provide footprint savings.</p></blockquote> <p>11. 
Just suggested by a reader: What do you see as the advantages/disadvantages of software based deduplication vs. hardware (chip-based) deduplication? Will this be a differentiating feature in the future… especially now that Hifn is pushing their Compression/DeDupe card to OEMs?</p> <blockquote><p>The whole concept of software vs hardware deduplication is a bit confusing. We make dedicated VTL appliances that are deduplication enabled: Would you consider that a hardware or software solution? Does your opinion change when we tell you that we specifically engineer our appliances for performance by optimizing it for the included hardware infrastructure? In the end all deduplication solutions rely on some kind of software to run.</p> <p>The Hifn card accelerates the creation of hashes for hash-based deduplication solutions. Remember, these algorithms include numerous different steps of which creating the hash is only one small step. Thus adding a Hifn card to one of these solutions does not necessarily mean that the performance will suddenly skyrocket; there are numerous other elements that could bottleneck performance. 
This brings me back to the first point which is that at the lowest level all deduplication solutions are based off of software and the distinction between “software” and “hardware” deduplication is vague.</p> <p>Customers should not focus on whether a solution is hardware or software based, but rather on how individual solutions meet their business requirements.</p></blockquote> <p>Thanks for the responses, Sepaton.</p> </li></ol><br /></div><div id="footer"><table border="0" cellpadding="0" cellspacing="0" width="780"><tbody><tr><td><br /></td> </tr> </tbody></table> </div> </div> <!-- Gorgeous design by Michael Heilemann - http://binarybonsai.com/kubrick/ -->RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com1tag:blogger.com,1999:blog-2616765223185375814.post-40851227083086752832008-11-20T11:54:00.000-08:002008-11-20T11:55:28.340-08:00IIS and KerberosGood articles here:<br /><br /><ul class="SmallFont"><li class="faqListing"><a href="http://www.adopenstatic.com/cs/blogs/ken/archive/2006/10/20/512.aspx" title="IIS and Kerberos Part 1 - What is Kerberos and how does it work?">IIS and Kerberos Part 1 - What is Kerberos and how does it work?</a></li><li class="faqListing"><a href="http://www.adopenstatic.com/cs/blogs/ken/archive/2006/11/19/606.aspx" title="IIS and Kerberos Part 2 - Service Principal Names (SPNs)">IIS and Kerberos Part 2 - Service Principal Names (SPNs)</a></li><li class="faqListing"><a href="http://www.adopenstatic.com/cs/blogs/ken/archive/2007/01/16/1054.aspx" title="IIS and Kerberos Part 3 - A simple scenario">IIS and Kerberos Part 3 - A simple scenario</a></li><li class="faqListing"><a href="http://www.adopenstatic.com/cs/blogs/ken/archive/2007/01/28/1282.aspx" title="IIS and Kerberos Part 4 - A simple delegation scenario">IIS and Kerberos Part 4 - A simple delegation scenario</a></li><li class="faqListing"><a href="http://www.adopenstatic.com/cs/blogs/ken/archive/2007/07/19/8460.aspx" title="IIS and Kerberos Part 5 - Protocol 
Transition, Constrained Delegation, S4U2S and S4U2P">IIS and Kerberos Part 5 - Protocol Transition, Constrained Delegation, S4U2S and S4U2P</a></li><li class="faqListing"><a href="http://www.adopenstatic.com/cs/blogs/ken/archive/2008/02/21/16275.aspx" title="IIS and Kerberos Part 6 - What's new in IIS 7">IIS and Kerberos Part 6 - What's new in IIS 7</a></li><li class="faqListing"><a href="http://www.adopenstatic.com/cs/blogs/ken/archive/2008/05/12/17533.aspx" title="IIS and Kerberos Part 7 - A simple cross Forest/Domain scenario">IIS and Kerberos Part 7 - A simple cross Forest scenario</a></li><li class="faqListing"><a href="http://www.adopenstatic.com/cs/blogs/ken/archive/2008/06/28/17805.aspx" title="IIS and Kerberos Part 8 - A simple cross Forest/Domain scenario delegation scenario">IIS and Kerberos Part 8 - A simple cross Forest/Domain scenario delegation scenario</a></li></ul>and<br /><br />http://blogs.msdn.com/vivekkum/archive/2008/06/15/step-by-step-kerberos-in-nlb-with-shared-content.aspx?CommentPosted=true#commentmessageRPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-14199869494056859002008-11-16T12:45:00.000-08:002008-11-20T11:53:09.525-08:00List of free email servers for Windows<div class="Section1"> <p class="MsoNormal"><span style="font-family:Arial;font-size:85%;"><span style=";font-family:Arial;font-size:10;" >http://www.donationcoder.com/Forums/bb/index.php?topic=11152.0;prev_next=next<o:p></o:p></span></span></p> </div>RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com1tag:blogger.com,1999:blog-2616765223185375814.post-33807263069954304502008-11-16T12:00:00.001-08:002008-11-20T11:53:36.358-08:00Good article on relative merits of Security by Obscurity<div class="Section1"> <p class="MsoNormal"><span style="font-family:Arial;font-size:85%;"><span style=";font-family:Arial;font-size:10;" 
>http://technet.microsoft.com/en-us/magazine/cc510319.aspx<o:p></o:p></span></span></p></div>RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-49293892432248901132008-11-11T14:15:00.001-08:002008-11-20T11:54:03.588-08:00Kerio Mail server and authentication<div class="Section1"> <p class="MsoNormal"><span style="font-family:Arial;font-size:85%;"><span style=";font-family:Arial;font-size:10;" lang="EN-NZ" >To use Secure Password authentication make sure the Windows 2000/NT domain name e.g. myhome.lan -> myhome is included on the DOMAIN/ADVANCED tab below the Kerberos domain name (myhome.lan)<o:p></o:p></span></span></p> </div> <pre><br /></pre>RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-51005683794521369712008-11-11T13:19:00.001-08:002008-11-11T13:19:32.148-08:00Another nice one on host headers<div class=Section1> <h3><b><font size=4 face="Times New Roman"><span style='font-size:13.5pt'>http://www.gafvert.info/notes/iis_multiple_websites.htm<o:p></o:p></span></font></b></h3> <h3><b><font size=4 face="Times New Roman"><span style='font-size:13.5pt'><o:p> </o:p></span></font></b></h3> <h3><b><font size=4 face="Times New Roman"><span style='font-size:13.5pt'>Introduction<o:p></o:p></span></font></b></h3> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>IIS 6.0 is capable of hosting multiple websites on one server. This is done by separating the websites with unique combinations of host header name, IP number and port. This article will explain the different ways of hosting multiple websites using IIS, and will guide you through setting IIS 6.0 up to host multiple websites using the host header approach. 
<o:p></o:p></span></font></p> <h3><b><font size=4 face="Times New Roman"><span style='font-size:13.5pt'>How IIS can host multiple websites<o:p></o:p></span></font></b></h3> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>To distinguish between websites, IIS looks at three attributes: <o:p></o:p></span></font></p> <ul type=disc> <li class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto; mso-list:l6 level1 lfo1'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>The host header name<o:p></o:p></span></font></li> <li class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto; mso-list:l6 level1 lfo1'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>The IP number<o:p></o:p></span></font></li> <li class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto; mso-list:l6 level1 lfo1'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>The port number<o:p></o:p></span></font></li> </ul> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>For each website, the combination of these three attributes must be unique. This means that you can have two websites using two different IP numbers, two different ports, or two different host headers (and of course also any combination of the three). Whatever combination you select, it is stored in the metabase property called ServerBindings[1] in the string format IP:Port:Hostname, for example 192.168.0.1:80:www.gafvert.info. Luckily enough, you do not need to understand anything of the metabase nor the ServerBindings property to follow this article. <o:p></o:p></span></font></p> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>The host header name and IP number can be omitted and in that case the website accepts all host header names and/or (depending on if both attributes were omitted) IP numbers. 
This can of course only be done for one website (otherwise IIS would be confused and would not know which website should handle the request). <o:p></o:p></span></font></p> <h3><b><font size=4 face="Times New Roman"><span style='font-size:13.5pt'>What is a host header?<o:p></o:p></span></font></b></h3> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>A host header is a string that is part of the request sent to the web server (it is in the HTTP header). This means that configuring IIS to use host headers is only one step in the approach to host multiple websites using host headers to distinguish between the websites. A configuration of the DNS server (which usually means that you need to add an (A) record for the domain) is also required, so the client can find the web server. <o:p></o:p></span></font></p> <h3><b><font size=4 face="Times New Roman"><span style='font-size:13.5pt'>Setting up IIS for multiple websites<o:p></o:p></span></font></b></h3> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>Now that we have some background and understand how IIS works in relation to hosting multiple websites, and that the DNS server (also known as name server) must be updated to include the new domain, we can start with the configuration of IIS. <o:p></o:p></span></font></p> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>Configuring IIS for this is actually not difficult at all. I will use a default installation of Windows Server 2003 Standard Edition, and a default installation of IIS (also see <a href="http://www.gafvert.info/notes/install_iis_6.htm">Install and configure IIS 6.0 to serve ASP, ASP.NET and static pages</a>). 
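Before walking through the wizard, the uniqueness rule for the host header/IP/port combination can be modelled in a few lines. This is a toy sketch, not the real IIS metabase API; the site names and the "All Unassigned" convention (shown here as an empty IP string) follow the article's example:

```python
# Each IIS website is identified by a unique (IP, port, host-header) triple,
# mirroring the metabase ServerBindings "IP:Port:Hostname" format.
bindings = {}  # (ip, port, host) -> site description

def add_site(ip: str, port: int, host: str, name: str):
    """Register a site; refuse a duplicate binding, as IIS does."""
    key = (ip, port, host)
    if key in bindings:
        raise ValueError(f"binding {ip}:{port}:{host} already used by {bindings[key]}")
    bindings[key] = name

add_site("", 80, "www.gafvert.info", "Main site")   # "" = All Unassigned IPs
add_site("", 80, "beta.gafvert.info", "Beta site")  # same IP/port, new host: allowed
try:
    add_site("", 80, "beta.gafvert.info", "Duplicate")
except ValueError as e:
    print(e)  # IIS likewise refuses the same combination for a second site
```

Two sites can share an IP and port as long as the host header differs; only the full triple must be unique.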
<o:p></o:p></span></font></p> <ul type=disc> <li class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto; mso-list:l5 level1 lfo2'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>Open IIS Manager<o:p></o:p></span></font></li> <li class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto; mso-list:l5 level1 lfo2'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>Right click the "Web Sites" folder and click New->Web Site<o:p></o:p></span></font></li> <li class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto; mso-list:l5 level1 lfo2'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>Click Next when the "Web Site Creation Wizard" starts.<o:p></o:p></span></font></li> <li class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto; mso-list:l5 level1 lfo2'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>Type a description of the website and click Next. The description has nothing to do with how the clients access the website; it is only a description that you, the website administrator, use to know which website it is.<o:p></o:p></span></font></li> </ul> <p class=MsoNormal align=center style='text-align:center'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'><img border=0 width=350 height=272 id="_x0000_i1025" src="cid:image001.jpg@01C944B0.2234FA80" alt="Web Site Creation Wizard - Set Description"><o:p></o:p></span></font></p> <ul type=disc> <li class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto; mso-list:l2 level1 lfo3'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>Enter the IP Address or use "All unassigned", the port number and the host header name and then click Next. 
In this example I used "All unassigned", port 80 and a host header name of beta.gafvert.info.<o:p></o:p></span></font></li> </ul> <p class=MsoNormal align=center style='text-align:center'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'><img border=0 width=350 height=272 id="_x0000_i1026" src="cid:image002.jpg@01C944B0.2234FA80" alt="Web Site Creation Wizard - Set Host Header"><o:p></o:p></span></font></p> <ul type=disc> <li class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto; mso-list:l0 level1 lfo4'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>Enter the path to the home directory and click Next. This directory does not have to be in the Inetpub or wwwroot folder; it can be anywhere on the file system. In this example I placed it on the D: drive in the folder D:\webs\beta.<o:p></o:p></span></font></li> </ul> <p class=MsoNormal align=center style='text-align:center'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'><img border=0 width=350 height=272 id="_x0000_i1027" src="cid:image003.jpg@01C944B0.2234FA80" alt="Web Site Creation Wizard - Set Home Directory"><o:p></o:p></span></font></p> <ul type=disc> <li class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto; mso-list:l1 level1 lfo5'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>Set the access permissions for the new website and click Next. 
By default, only read access is allowed, and unless you need to, do not change.<o:p></o:p></span></font></li> </ul> <p class=MsoNormal align=center style='text-align:center'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'><img border=0 width=350 height=272 id="_x0000_i1028" src="cid:image004.jpg@01C944B0.2234FA80" alt="Web Site Creation Wizard - Set Access Permissions"><o:p></o:p></span></font></p> <ul type=disc> <li class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto; mso-list:l3 level1 lfo6'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>Click Finish.<o:p></o:p></span></font></li> </ul> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>You have now successfully configured IIS to host two websites, using a host header to distinguish the second website. <o:p></o:p></span></font></p> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>The next step is to add an (A) record for beta.gafvert.info (the example) in the DNS server. How to do this depends on the DNS server, and what user interface you have to configure the DNS server, so a step-by-step guide for this has been excluded. This step must however be done, or else your visitors will not be able to find the web server. <o:p></o:p></span></font></p> <p class=note><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>A DNS Server is not necessary. The hosts file can also be used, but that means the hosts file must be edited on each client. A CNAME can also be used instead of an (A) record. What is important however is that the client machine must find the web server by using the name you have chosen as host header name, and how the client does this is not related to IIS. 
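As the note above says, a hosts-file entry on the client can stand in for DNS during testing. Such an entry would look like this (the 192.168.0.10 address is an assumed example for the web server, not from the article; the name is the article's example):

```text
192.168.0.10    beta.gafvert.info
```

On Windows the file lives at %SystemRoot%\system32\drivers\etc\hosts, and the entry must be added on every client that should reach the site by that name.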
<o:p></o:p></span></font></p> <h3><b><font size=4 face="Times New Roman"><span style='font-size:13.5pt'>Troubleshooting<o:p></o:p></span></font></b></h3> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>The most critical part of this is actually outside of IIS – it is the name resolution. So the most common problem when setting up multiple websites is that the client is unable to find the server. Whether this is the case can easily be checked: <o:p></o:p></span></font></p> <ul type=disc> <li class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto; mso-list:l4 level1 lfo7'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>Open a command prompt (Start->Run, cmd)<o:p></o:p></span></font></li> <li class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto; mso-list:l4 level1 lfo7'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>Type "nslookup beta.gafvert.info" (without the quotes, and replace beta.gafvert.info with the name you want to test)<o:p></o:p></span></font></li> </ul> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>The IP address returned by nslookup should match the IP address of the web server. If it does not, the problem is related to name resolution and cannot be fixed by an IIS configuration. That is, the DNS server, hosts file, or whatever you use so that the clients find the web server is incorrectly configured. <o:p></o:p></span></font></p> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>Another common problem reveals itself during the Web Site Creation Wizard; you are unable to create the website with the combination of host header, IP and port you want to. This means that the combination you try to use is not unique, and you will have to check the websites already configured on the server. 
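The nslookup check above can also be scripted. A minimal Python sketch of the same test (substitute your own host name and your web server's IP; the names shown are illustrative):

```python
import socket

def resolves_to(name: str, expected_ip: str) -> bool:
    """Return True if `name` resolves to `expected_ip` (the web server's
    address), mirroring the nslookup comparison described above."""
    try:
        return socket.gethostbyname(name) == expected_ip
    except socket.gaierror:
        return False  # name resolution failed entirely

# The loopback name should resolve locally without any DNS server:
print(resolves_to("localhost", "127.0.0.1"))
# For the article's example you would run something like:
#   resolves_to("beta.gafvert.info", "<your web server's IP>")
```

If this returns False for your site's name, the fix belongs in DNS or the hosts file, not in IIS.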
<o:p></o:p></span></font></p> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>A third common problem is that although you use the domain name you have specified to reach the website, you are taken to the "default website" (or any other website that is configured without a host header and listens on the same port and IP address). This has two possible explanations: the website is not properly configured, or IIS does not receive the host header. The latter sounds unlikely, but is actually quite common. First of all, the Host header is part of the HTTP 1.1 specification, so if the client is using HTTP 1.0, this is expected (all modern clients use HTTP 1.1). Second, if the DNS service you use is not true DNS but some kind of forwarding service, the host header can get lost along the way. So if you experience this, verify that IIS actually receives the host header you expect it to get. This problem is rarely related to IIS (since there is not much that can go wrong in the IIS configuration). <o:p></o:p></span></font></p> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>A "400 - Bad Request (Invalid Hostname)" is returned if the above problem occurs but there is no "default website" that accepts all requests on that IP address and port. <o:p></o:p></span></font></p> </div> <pre>At the Datamail Group we value teamwork, respect, achievement, client focus, and courage. This email with any attachments is confidential and may be subject to legal privilege. If it is not intended for you please advise by replying immediately, destroy it and do not copy, disclose or use it in any way. The Datamail Group, through our GoGreen programme, is committed to environmental sustainability. Help us in our efforts by not printing this email. __________________________________________________________________ This email has been scanned by the DMZGlobal Business Quality Electronic Messaging Suite. 
Please see http://www.dmzglobal.com/dmzmessaging.htm for details. __________________________________________________________________ </pre>RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-71284540551575010342008-11-11T13:18:00.001-08:002008-11-11T13:18:09.883-08:00Host Headers on IIS 6 - nice guide<div class=Section1> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'><o:p> </o:p></span></font></p> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>http://www.4guysfromrolla.com/webtech/080200-1.shtml<o:p></o:p></span></font></p> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'><o:p> </o:p></span></font></p> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>In the aforementioned article the author explains how to create an ASP page that checks the domain name requested by the client and automatically sends the client the correct page for that site. This all works just fine, but in my opinion it is slow and more difficult to administer than "Host Header Names." <o:p></o:p></span></font></p> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>Host Header Names are a feature of Microsoft Internet Information Server (versions 4 and 5; I’m unsure about earlier versions) that allows you to operate multiple domains from one IP address. As far as I can tell, using IIS to run more than one domain name from a machine with a single IP address works in much the same way as the ASP-based solution, except that the server does all the work for you. <o:p></o:p></span></font></p> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>Below is a screenshot of the Host Header Name configuration window as it appears in the Internet Services Manager. 
(We are using Windows 2000 Advanced Server, so the screenshots may differ slightly from your system) <o:p></o:p></span></font></p> <p class=MsoNormal align=center style='text-align:center'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>[Screenshot: Host Header Name configuration window]<o:p></o:p></span></font></p> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>The column on the left is for your IP address; since I only have one address, and that address is subject to change, I use (All Unassigned). The second column is your port number, and the third, all-important column is for your Host Header Name. This is the name that people type into their web browser in order to access your site. In the example, the Host Header Name in use is </span></font><code><font size=2 face="Courier New"><span style='font-size:10.0pt'>after12.sale.net</span></font></code>. This is a test website running over our intranet, but host header names function just the same way. If I type <code><font size=2 face="Courier New"><span style='font-size:10.0pt'>http://after12.sale.net</span></font></code> into my browser, the site comes up, even though my machine has only one IP address. 
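What makes this work is that the browser repeats the typed name back to the server in the HTTP Host header, so one IP address can serve many sites. Here is a sketch of the raw request an HTTP/1.1 client sends for the intranet example above (Python is used only to build the request text):

```python
def build_request(host, path="/"):
    # The Host header is mandatory in HTTP/1.1; the server (here, IIS)
    # uses it to decide which website should handle the request.
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

# The same TCP endpoint receives both requests; only the Host line differs.
print(build_request("after12.sale.net"))
```

An HTTP 1.0 client omits the Host line entirely, which is why such clients always land on the site configured without a host header.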
<o:p></o:p></p> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>If your site is on the Internet, you might use a Host Header Name similar to this: </span></font><code><font size=2 face="Courier New"><span style='font-size:10.0pt'>www.mysitename.com</span></font></code> <o:p></o:p></p> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>Here are a number of sites on this machine and their associated host header names: <o:p></o:p></span></font></p> <p class=MsoNormal align=center style='text-align:center'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>[Screenshot: list of sites and their host header names]<o:p></o:p></span></font></p> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>To access the Host Header Name options, do the following: <o:p></o:p></span></font></p> <ul type=disc> <li class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto; mso-list:l0 level1 lfo1'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>In the Internet Services Manager, right-click one of your sites and choose "Properties" from the pop-up menu.<o:p></o:p></span></font></li> <li class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto; mso-list:l0 level1 lfo1'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>In the "Web Site Identification" area of that page, click the "Advanced" button.<o:p></o:p></span></font></li> <li class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto; mso-list:l0 level1 lfo1'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>Now select your site in the "Multiple Identities For This Website" area of the page and click the "Edit" button.<o:p></o:p></span></font></li> <li class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto; mso-list:l0 level1 lfo1'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>In 
the edit window, type your chosen Host Header Name into the appropriate box and click OK.<o:p></o:p></span></font></li> <li class=MsoNormal style='mso-margin-top-alt:auto;mso-margin-bottom-alt:auto; mso-list:l0 level1 lfo1'><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>Click OK in the other open options windows to close them all and save your new settings. <o:p></o:p></span></font></li> </ul> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>That's about all there is to it, although in order to run this on a single machine you must have some kind of DNS system up and running; you do not need a real (Internet-registered) domain name. <o:p></o:p></span></font></p> <p><font size=3 face="Times New Roman"><span style='font-size:12.0pt'>Happy Programming! <o:p></o:p></span></font></p> <p class=MsoNormal><font size=2 face=Arial><span style='font-size:10.0pt; font-family:Arial'><o:p> </o:p></span></font></p> <p class=MsoNormal><font size=3 face="Times New Roman"><span style='font-size: 12.0pt'><o:p> </o:p></span></font></p> </div> RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-5421305467901058812008-11-04T13:09:00.001-08:002008-11-04T13:09:49.805-08:00Purging Items from Outlook using IMAP (items appear as crossed out only)<div class=Section1> <p class=MsoNormal><font size=2 face=Arial><span style='font-size:10.0pt; font-family:Arial'>http://www.ideanode.com/supportdocs/16/how-to-delete-imap-messages-in-outlook<o:p></o:p></span></font></p> <p class=MsoNormal><font size=2 face=Arial><span style='font-size:10.0pt; font-family:Arial'><o:p> </o:p></span></font></p> <p class=MsoNormal><font size=3 face="Times New Roman"><span style='font-size: 12.0pt'><o:p> </o:p></span></font></p> </div> RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-72421050404388108112008-10-26T23:29:00.000-07:002008-10-26T23:30:51.691-07:00How to export your IIS7 config from one server and import into another<div class=Section1> <p class=MsoNormal>http://www.phishthis.com/2008/05/27/how-to-export-your-iis-config-from-one-box-and-import-on-another/<o:p></o:p></p> <p class=MsoNormal><o:p> </o:p></p> <p class=MsoNormal>A good way to keep NLB servers in sync; it will work against a clustered file server too.<o:p></o:p></p> </div> <BR><BR>__________ Information from ESET NOD32 Antivirus, version of virus signature database 3557 (20081026) __________<BR><BR>The message was checked by ESET NOD32 Antivirus.<BR><BR><A HREF="http://www.eset.com">http://www.eset.com</A><BR> RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-14875036150854322222008-10-26T16:45:00.001-07:002008-10-26T16:45:36.072-07:00Setting a Clustered file share as an FTP home directory in IIS7 (testing)<div class=Section1> <p class=MsoNormal>TO ALLOW ANONYMOUS ACCESS (AT OWN RISK - no security around this)<o:p></o:p></p> <p class=MsoNormal>Create a new AD user, e.g. 
FTPUSER@DOMAIN.LAN<o:p></o:p></p> <p class=MsoNormal>Create a new home directory within the FTP server using a UNC path that maps to the clustered file share<o:p></o:p></p> <p class=MsoNormal>Assign appropriate permissions for the new FTP user on that file share (there appears to be no way to use the web server's default IUSR account)<o:p></o:p></p> <p class=MsoNormal>Within the FTP site's Basic Settings, change "Connect as" from the application (pass-through) user to the specific user created above.<o:p></o:p></p> <p class=MsoNormal><o:p> </o:p></p> </div> RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-15987933842500287882008-10-22T01:01:00.000-07:002008-10-22T01:39:59.004-07:00NLB on server 2008<div class="Section1"> <p class="MsoNormal" style=""><b><span style=";font-family:";font-size:18;" >Installing and Configuring<o:p></o:p></span></b></p> <p class="MsoNormal" style=""><b><span style=";font-family:";font-size:12;" >To install NLB</span></b><span style=";font-family:";font-size:12;" ><o:p></o:p></span></p> <p class="MsoNormal" style=""><span style=";font-family:";font-size:12;" >1. Navigate to <b>Administrative Tools </b>and click <b>Server Manager.</b><o:p></o:p></span></p> <p class="MsoNormal" style="margin-bottom: 12pt;"><span style=";font-family:";font-size:12;" >2. Scroll down to the <b>Features </b>section or click the <b>Features </b>node in the left-hand tree view.<o:p></o:p></span></p> <p class="MsoNormal" style="margin-bottom: 12pt;"><span style=";font-family:";font-size:12;" >3. Click <b>Add Features</b>.<o:p></o:p></span></p> <p class="MsoNormal" style="margin-bottom: 12pt;"><span style=";font-family:";font-size:12;" >4. 
In the <b>Add Features Wizard</b>, select <b>Network Load Balancing </b>from the list of available optional components.<o:p></o:p></span></p> <p class="MsoNormal" style=""><span style=";font-family:";font-size:12;" >5. Click <b>Next</b>,<b> </b>and <b>Install</b>, as applicable, to complete the wizard.<o:p></o:p></span></p> <p class="MsoNormal" style=""><b><span style=";font-family:";font-size:12;" >To configure NLB</span></b><span style=";font-family:";font-size:12;" ><o:p></o:p></span></p> <p class="MsoNormal" style="margin-bottom: 12pt;"><span style=";font-family:";font-size:12;" >1. Navigate to <b>Administrative Tools </b>and click <b>Network Load Balancing Manager</b>, or run nlbmgr from a command prompt.<o:p></o:p></span></p> <p class="MsoNormal" style="margin-bottom: 12pt;"><span style=";font-family:";font-size:12;" >2. Right-click <b>Network Load Balancing Clusters </b>and click <b>New Cluster</b>.<o:p></o:p></span></p> <p class="MsoNormal" style="margin-bottom: 12pt;"><span style=";font-family:";font-size:12;" >3. Connect to the host that will be part of the cluster, in this case the Web server. In the <b>Host</b> text box, type the name of the host, and then click <b>Connect</b>.<o:p></o:p></span></p> <p class="MsoNormal" style="margin-bottom: 12pt;"><span style=";font-family:";font-size:12;" >4. Select the interface you want to use with the cluster, and then click <b>Next</b>. <o:p></o:p></span></p> <p class="MsoNormal" style="margin-bottom: 12pt;"><span style=";font-family:";font-size:12;" >5. On the <b>Host Parameters</b> page, select a value from the <b>Priority (unique host identifier)</b> drop-down list.<o:p></o:p></span></p> <p class="MsoNormal" style="margin-bottom: 12pt;"><span style=";font-family:";font-size:12;" >6. In the <b>Dedicated IP Addresses </b>area, review (or click <b>Add</b> to change) this host's own, unique IP address; the IP address that is shared by every host in the cluster is added on the next page. Click <b>Next</b> to continue.<o:p></o:p></span></p> <p class="MsoNormal" style=""><span style=";font-family:";font-size:12;" >7. On the <b>Cluster IP Addresses </b>page, click <b>Add</b>.<o:p></o:p></span></p> <p class="MsoNormal" style=""><span style=";font-family:";font-size:12;" >8. In the <b>Add IP Address</b> dialog box, type the cluster IP address and subnet mask, and then click <b>OK</b>. NLB will add this IP address to the TCP/IP stack on the selected interface of every host chosen to be part of the cluster.<o:p></o:p></span></p> <p class="MsoNormal" style="margin-bottom: 12pt;"><span style=";font-family:";font-size:12;" >9. Click <b>Next</b>. <o:p></o:p></span></p> <p class="MsoNormal" style="margin-bottom: 12pt;"><span style=";font-family:";font-size:12;" >10. On the <b>Cluster Parameters</b> page, in the <b>Cluster operation mode </b>area, click <b>Unicast</b> to specify that a unicast media access control (MAC) address should be used for cluster operations.<br /></span></p><p class="MsoNormal" style="margin-bottom: 12pt;"><span style=";font-family:";font-size:12;" >NB: On single-NIC hosts or virtual hosts, this may need to be Multicast or the connection will fail.</span></p><p class="MsoNormal" style="margin-bottom: 12pt;"><span style=";font-family:";font-size:12;" >Click <b>Next</b> to continue.<o:p></o:p></span></p> <p class="MsoNormal" style="margin-bottom: 12pt;"><span style=";font-family:";font-size:12;" >11. On the <b>Port Rules</b> page, click <b>Edit</b> to modify the default port rules if you need advanced rules. Otherwise, use the default.<o:p></o:p></span></p> <p class="MsoNormal" style=""><span style=";font-family:";font-size:12;" >12. 
Click <b>Finish</b> to create the cluster.<br /><br />To add more hosts to the cluster, right-click the new cluster, and then click <b>Add Host to Cluster</b>.<o:p></o:p></span></p> <p class="MsoNormal">http://learn.iis.net/page.aspx/213/network-load-balancing/<o:p></o:p></p> </div>RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0tag:blogger.com,1999:blog-2616765223185375814.post-52882961150648121652008-10-20T13:25:00.000-07:002008-10-21T19:29:00.849-07:00Guide to getting TS Web Access Gateway Windows 2008 WorkingI had quite a bit of trouble getting the TS Gateway to work for my home test lab.<br /><br />Situation:<br />-TS Gateway box was also the Terminal server<br />-no fixed IP address<br />-no SSL certificate<br />-firewall forwarding only port 443 to internal TS server (not 80 or 3389)<br /><br />I could gain access to the TS Gateway webpage (easy to set up) but no further - I couldn't run any of the published apps or remote desktops. This is what fixed it for me.<br /><br />1) I use no-ip.info for my domain/IP. I created an additional zone on my internal DC/DNS server for no-ip.info, added the EXTERNAL host name I use to access my home lab, e.g. MYPC.NO-IP.INFO, and added the INTERNAL IP address (e.g. 192.168.1.20)<br />Apparently you need this split DNS if you want to use the Gateway internally as well, as the certificate is critical to this.<br /><br />2) I generated a new self-signed certificate on my TS Gateway box using the PUBLIC name e.g. MYPC.NO-IP.INFO -> when accessed from the outside, the certificate name now matches the site -> CRITICAL!!<br /><br />3) On your TS Gateway, open Server Manager and dive into IIS>SERVER NAME>DEFAULT WEB SITE>TS - choose 'Application Settings' and change Group by to Entry type. You should now see an option for DefaultTSGateway. Edit this to be your EXTERNAL address (e.g. 
MYPC.NO-IP.INFO) and restart IIS<br /><br />4) Now, on your client PC make sure you have version 6.1 of the RDP client - none of this will work without it.<br /><br />5) Go to HTTPS://MYPC.NO-IP.INFO/TS (or whatever you use)<br /><br />6) Make sure you accept the certificate and add it to your Trusted Root Certification Authorities store - restart IE, reconnect, and the ticks should be green.<br /><br />7) This may not be necessary, but I went to the configuration tab and changed the TS Web Access Properties Terminal Server name to the internal DNS name of my Terminal server e.g. ts.internaldomain.lan<br /><br />8) Give it a whirl; I can now connect to internal servers via their internal names or IP addresses and run up my remote apps.<br /><br />9) To lock down access a bit (i.e. I don't want every man and their dog on the internet to see my TS Gateway) I added some simple authentication. Go into IIS>Default Web Site, DISABLE anonymous authentication, and enable Forms-based authentication (tick the box to require an SSL certificate). Doing this, any request to the front page requires logging in first.<br /><br />10) A couple of other points that may be relevant:<br />Under Terminal Services>TS RemoteApp Manager>Terminal server settings I used the full FQDN of my TS (VMTS.MYLAN.LAN)<br />Under TS Gateway Settings I chose 'Use these TS Gateway server settings', used the EXTERNAL site name (MYPC.NO-IP.INFO) and logon method NTLM, and ticked the two boxes underneath as well.RPhttp://www.blogger.com/profile/13091302812813011520noreply@blogger.com0
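The split-DNS arrangement from step 1 can be modelled as a resolver with two views of the same name: internal clients get the private address, everyone else gets the public one, and the certificate name (MYPC.NO-IP.INFO) matches in both cases. A toy sketch in Python; the external IP shown is an invented placeholder:

```python
# Public DNS view (e.g. the no-ip.info service) vs. the internal zone
# created on the DC/DNS server in step 1 of the guide.
PUBLIC_DNS = {"mypc.no-ip.info": "198.51.100.7"}     # placeholder external IP
INTERNAL_ZONE = {"mypc.no-ip.info": "192.168.1.20"}  # internal IP from step 1

def resolve(name, inside_lan):
    """Split-horizon lookup: internal clients get the internal answer first.

    Because both views answer for the same public name, clients inside and
    outside the LAN both connect to a host whose certificate name matches.
    """
    name = name.lower()
    if inside_lan and name in INTERNAL_ZONE:
        return INTERNAL_ZONE[name]
    return PUBLIC_DNS.get(name)

print(resolve("MYPC.NO-IP.INFO", inside_lan=True))   # internal answer
print(resolve("MYPC.NO-IP.INFO", inside_lan=False))  # public answer
```

Without the internal zone, LAN clients would resolve the public IP and likely fail at the firewall, which is why the guide calls the split DNS critical.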