Installing XAMPP on Ubuntu 11.04

This is a “how to” for setting up the XAMPP server on Ubuntu to create a web development environment. This “how to” will install the XAMPP stack into /opt and link a folder in your home directory into the htdocs folder. This avoids permission problems while development work is ongoing.

Step 1

Download the latest version of XAMPP for Linux from the Apache Friends website.

Step 2

Extract the archive to /opt using sudo (make sure you are in the directory that you downloaded the archive to).

sudo tar xvfz xampp-linux-1.7.4.tar.gz -C /opt

That's it!

Step 3

To start or stop the server use:

sudo /opt/lampp/lampp start
sudo /opt/lampp/lampp stop

The pages should now be available at http://localhost
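
The same control script also accepts a restart argument, which is handy after editing configuration files:

sudo /opt/lampp/lampp restart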

Step 4

XAMPP uses /opt/lampp/htdocs as the default web folder. For development purposes it is much cleaner to work in a folder within your home folder, so the suggestion is to create a symbolic link inside /opt/lampp/htdocs that points at your development folder.
Assuming your username is david and you want a web development folder containing public_html, set up the following folder structure within your home folder:

/home/david/webdev/public_html
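
If it does not exist yet, the whole path can be created in one go:

mkdir -p /home/david/webdev/public_html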

To create the link:

sudo ln -s /home/david/webdev/public_html /opt/lampp/htdocs/$USER

Any files or folders in the public_html folder will now be served by the XAMPP server.
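
To confirm the setup works end to end, a quick test (the file name index.html and its content here are just an example):

echo '<h1>It works</h1>' > /home/david/webdev/public_html/index.html

Then browse to http://localhost/david (with david replaced by your username) and you should see the test page.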

 

http://enginedave.wordpress.com/2011/06/11/installing-xampp-on-ubuntu-11-04/

 

 

Useful :) How to change the root password in Ubuntu

http://www.ubuntux.org/how-to-change-the-root-password-in-ubuntu

By default, Ubuntu has no password set for the root user. To gain root access you have to type in your own user password; this is the password you set for the first user while installing Ubuntu.

To manually set a password for the root user, type the following in the shell:
sudo passwd

After that you are asked to type in the new root password twice. Finally, your root user has its own password.
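
A typical session looks something like this (the exact prompt text may vary between Ubuntu versions, and the username david is just an example):

$ sudo passwd
[sudo] password for david:
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully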

How to Create & Restore a Backup with VMware vSphere Data Recovery

by Tom Finnis – January 20, 2010

Introduction

In my previous article on vSphere Data Recovery, you learned how to deploy the DR plug-in for the vSphere4 client and how to add the appliance to your virtual infrastructure. You also learned that one of its key features is an intuitive, wizard-driven management interface that is integrated with the vSphere client to allow for simple configuration of your backup jobs. Assuming you followed the steps described in that article, you should now be ready to learn how to use that management interface; in this article we will cover creating a backup schedule for a virtual machine, running a backup job, and then restoring that VM from the backup.

Data Recovery Basic Principles

The vSphere Client Data Recovery plug-in is used to configure the Data Recovery virtual machine, which then takes care of backup and restore jobs. In theory the DR VM can back up as many as eight VMs concurrently, although its CPU utilization must be under 90% for it to start a backup job; otherwise it will wait until utilization drops. It works by using ESX’s snapshot feature to freeze a point-in-time copy of the target VM’s disks, which gives it a locked image to back up while the VM continues to operate, any disk changes being written instead to an interim snapshot file. Once the backup has completed, the DR VM releases the snapshot so that the intervening disk changes are replayed from the interim snapshot file into the frozen disk image, bringing it back to a live state.

(Figure: Data Recovery - VM restore graphic)

Data Recovery supports writing backups to a variety of locations: either a local ESX datastore or network targets using CIFS-based file sharing such as Samba or Windows folder shares. However, due to memory constraints only two separate storage locations can be written to concurrently; more than two locations can be specified, but the jobs have to be scheduled to run separately. There is a limit of 100 virtual machines that can be backed up by one DR VM; it will let you create backup jobs for more VMs than that, but it will simply skip the excess. Additional DR VMs can be installed in order to work around this limitation, but extra care needs to be taken when configuring the backup jobs as the appliances are not aware of each other.

It is important to note that, to ensure a fully restorable backup of a virtual machine’s state, Data Recovery attempts to make a “quiesced” snapshot. This requires the OS and any applications running on it to write any essential memory-resident data to disk so it is included in the snapshot for backup; otherwise applications may lose important data. For this, VMware Tools has to be installed on the guest operating system; Data Recovery then instructs it to quiesce the system for snapshot creation and to de-quiesce when the process is completed. With Windows guest OSes that support the Volume Shadow Copy Service this is actioned by the VMware VSP service; otherwise VMware uses whatever quiescing support is available in the OS. Therefore you should always ensure you have installed the most up-to-date version of VMware Tools on all your virtual machines wherever possible. Not having VMware Tools installed will not stop you from backing up a VM, but your backups will only be “crash consistent” and may need a forced reboot after a restore.

vSphere 4.0 ESX hosts include optimisations for virtual machines created on them that enable advanced change tracking for the virtual disk states; these optimisations are not present on VMs created on older versions of ESX (3.5 and earlier). You can easily check what version your VMs are from the Summary tab in the vSphere client:

Virtual machines created on vSphere4 should be version 7, which supports the advanced data change tracking features, but VMs created on ESX 3.5 or earlier will be version 4 or less. Fortunately you can easily upgrade the VM version, and it is well worth doing: just shut down the VM, then right-click it in the left-hand pane and select “Upgrade Virtual Machine version”.

However before you do this make sure you have the latest version of VMware Tools installed on your VM, as the version upgrade also changes some of the virtual hardware, e.g. the NICs, which require new drivers included in VMware Tools.

This change-tracking function allows the Data Recovery VM to analyse the changes since the previous backup and thus accelerate the backup process. Data Recovery also applies data de-duplication to each storage location, so where information is repeated across VM backups it will only store that information once. This can lead to significant space savings, particularly when several VMs running the same OS are backed up to the same storage location, so it should be taken into account when designing your backup strategy.

Setting Up Data Recovery

In the previous article you deployed the VMware Data Recovery appliance onto your vSphere infrastructure; now we need to finish configuring it and create a backup schedule. Open your vSphere Client and, if it is not there already, navigate to the “Home” page; you should now see a new icon under the “Solutions and Applications” section for “VMware Data Recovery” – click this to start managing your appliance. Should you not see the icon there, refer to the previous article for how to install the management plugin – it has to be installed on each vSphere Client system you intend to use, rather than on the vCenter Server. Since the release of version 1.1 VMware has simplified the interface and initial setup process: now you can just select your VMware Data Recovery appliance from the list on the left and click “Connect”. The “Getting Started” wizard should then begin; if it doesn’t, you can start it manually by clicking the “Configuration” tab and then the “Getting Started” link.

On the first page you will be prompted for credentials for the VM-DR appliance to use when connecting to your vCenter Server; depending on your security requirements, you may want to create a separate user account for it. The VM-DR appliance initiates various tasks in order to perform its backups, such as creating VM snapshots, so by giving it its own login you can easily identify its tasks when checking the vCenter logs.

The second step of the wizard configures the backup destination storage. For this guide we assume that you are using a VMFS store for your backups, either on a SAN or local storage, which you attached to your VM-DR VM in the previous article. If you want to use network-based storage the process is the same, except that you will first have to click the “Add Network Share” link here and provide the location of your storage.
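
The share location is entered as a UNC-style path, for example something like \\192.168.0.10\vdr-backups (the IP address and share name here are purely illustrative).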

Note that the VM-DR appliance is a Linux-based system and as such only supports CIFS/Samba shares; technically this should include Windows shares, but there are a number of potential issues you may encounter. The first thing to check is that you are using the IP address of your network target rather than a name; after that, if you are still having trouble connecting, a quick web search will turn up several other things to check. If you are using a VMFS store you won't have to worry about this, and you should see the disk you added to your appliance already listed in the wizard:

If under “Type” it says “unmounted” then you will need to click “Mount” first, and then “Format” the disk; once this has completed you can click “Next” and complete the wizard. On the final page, check that you are happy with the settings you have chosen, tick the “Setup new backup job” option and then click “Close”.

The new backup job wizard should now start; if it doesn’t, you may start it manually by clicking the “Backup” tab and then clicking “New”. The first page of the wizard will list all the virtual machines in your vSphere infrastructure, with check boxes so you can select them for backup:

Tick the boxes of the VMs you want backed up; if you wish, you can expand a specific VM and select only certain disks for backup, or you can select the cluster/datacenter to back up all the VMs it contains. Click “Next” and on the next page select the backup store which you wish the backups to go to; there should only be one to choose from at this stage. VMware Data Recovery supports multiple stores, although it can only back up to two different stores simultaneously; bear in mind, however, that you will not maximise the benefits of the data de-duplication if you split your backups across several stores.

On the next page you need to define the “backup window” for the job, i.e. when it is allowed to run on each day of the week. The virtual machine backups themselves do not have a great impact on performance, but the initial “quiesce” operation when the VM is snapshotted at the start of the backup can cause it to freeze for a while, especially if it has a high data throughput. As a result you should schedule your backup window so that backups start when your users aren’t online and when the processing demands on the VMs are at their lowest. In practice, once the initial full backup of a VM has been completed the subsequent incrementals are much smaller and complete in a fraction of the time, so you may want to specify as large a window as possible to start with and then reduce it later on.

It’s here that one of the limitations of VMware Data Recovery compared to other commercial backup solutions becomes apparent: you have fairly limited control over your backup scheduling. You cannot run more than one backup a day, and the precise start time of a backup is hard to control, although usually backups will start at the first window of opportunity each day. You can, however, restrict the backup frequency to less than daily by defining your backup windows appropriately.

On the last page you have to define your retention policy, i.e. how many historic backups you want to keep and for how long. What you choose here will be a compromise between the amount of storage space you need for your backups and how far back you will be able to go if you need to recover systems or files from the past. At this stage it is virtually impossible to judge how much space each backup will consume, since it is a combination of the daily changed data against the savings achieved by de-duplication. Therefore I would advise selecting a fairly conservative policy (e.g. “more” or “many”) for now, and then if necessary adjusting it in a few weeks' time when you are able to judge your storage consumption more accurately. Here you will discover another shortcoming of VM-DR: the reporting in general is rather concise, and it can be fiddly to work out how much storage your backups are consuming. This is partly a side effect of the de-duplication; the logs will indicate figures for each backup, but these are “theoretical” totals, so the best option is to monitor how the free space on your backup store declines with usage.

The final page of the wizard will confirm the settings you have chosen, so check these are OK and then click “Finish” to save the backup job. Depending on whether you are currently within your backup window, it may start running the backup job immediately, in which case you will see the snapshot tasks appear in the task pane at the bottom. The backup jobs themselves do not appear here, but you can monitor their progress by clicking the “Reports” tab and selecting “Running Tasks”. After a few days of operation, if all has gone well, you should see a list of successful backup tasks on this page, and if you click “Virtual Machines” you should be able to see the daily backup points for each VM.

 

Carrying Out a Restore Rehearsal

It’s never a good idea to discover a problem with your backup solution only when you need to restore something in a disaster situation, hence regular testing is recommended. VMware Data Recovery addresses this rather well with its “Restore Rehearsal” option, which allows you to restore a virtual machine from backup without affecting the live version. It is simple to run: just right-click on the virtual machine in the left-hand pane and select “Restore Rehearsal”, then follow the wizard’s instructions to restore another copy of the VM to your vSphere datacenter. Once the restore is complete you can change to the Inventory view in your vSphere client and you will see the new VM listed; double-check that the NIC is not connected, and you can then power it on to check that everything is working correctly. When you are happy that the restore was successful, you can shut down the VM again and delete it from the datastore to release the storage space.

Conclusion

Assuming you have followed the steps laid out in this article, you should now have your Data Recovery appliance up and running regular backups for you, and you can test that it is working correctly with restore rehearsals. Unfortunately it does not have any reporting or alerting features, so the only way to confirm your backups are completing successfully is to check regularly yourself, and remember to keep an eye on the free space in the backup store.

While VMware Data Recovery lacks many of the features you would expect from commercial backup applications, considering that it is included with most of the vSphere bundles it can be a useful addition to your disaster recovery provisions. In its present state I would not recommend it as your only backup solution, but it can provide you with an additional level of protection and an alternative recovery option. Assuming you don’t already have a system image backup application, it gives you the capability to rapidly restore complete virtual machine images when required, and the incremental backups combined with the data de-duplication mean its storage requirements are not excessive.

You can try out VMware Data Recovery for free by evaluating VMware vSphere at this link.

Top 10 Linux Virtualization Software


by Vivek Gite on December 31, 2008

Virtualization is the latest buzzword. You may wonder: computers are getting cheaper every day, so why should I care about, and why should I use, virtualization? Virtualization is a broad term that refers to the abstraction of computer resources, such as:

  1. Platform Virtualization
  2. Resource Virtualization
  3. Storage Virtualization
  4. Network Virtualization
  5. Desktop Virtualization

This article describes why you need virtualization and lists commonly used FOSS and proprietary Linux virtualization software.

Why should I use virtualization?

  • Consolidation – combining multiple software workloads on one computer system. You can run several virtual machines on a single server in order to save money and power (electricity).
  • Testing – You can test various configurations. You can create low-priority, less resource-hungry virtual machines (VMs). Often, I test new Linux distros inside a VM. This is also good for students who wish to learn new operating systems, programming languages, or databases without making any changes to their working environment. At my workplace I give developers virtual test machines for testing and debugging their software.
  • Security and Isolation – If the mail server or any other app gets cracked, only that VM will be under the control of the attacker. Isolation also means that misbehaving apps (e.g. ones with memory leaks) cannot bring down the whole server.

Open Source Linux Virtualization Software

  1. OpenVZ is an operating system-level virtualization technology based on the Linux kernel and operating system.
  2. Xen is a virtual machine monitor for 32/64-bit Intel/AMD (IA-64) and PowerPC 970 architectures. It allows several guest operating systems to be executed on the same computer hardware concurrently. Xen is included with most popular Linux distributions such as Debian, Ubuntu, CentOS, RHEL, Fedora and many others.
  3. Kernel-based Virtual Machine (KVM) is a Linux kernel virtualization infrastructure. KVM currently supports native virtualization using Intel VT or AMD-V (a quick way to check for this hardware support is shown after this list). A wide variety of guest operating systems work with KVM, including many flavours of Linux, BSD, Solaris, and Windows. KVM is included with Debian, openSUSE, and other Linux distributions.
  4. Linux-VServer is a virtual private server implementation done by adding operating system-level virtualization capabilities to the Linux kernel.
  5. VirtualBox is an x86 virtualization software package, developed by Sun Microsystems as part of its Sun xVM virtualization platform. Supported host operating systems include Linux, Mac OS X, OS/2 Warp, Windows XP or Vista, and Solaris, while supported guest operating systems include FreeBSD, Linux, OpenBSD, OS/2 Warp, Windows and Solaris.
  6. Bochs is a portable x86 and AMD64 PC emulator and debugger. Many guest operating systems can be run using the emulator including DOS, several versions of Microsoft Windows, BSDs, Linux, AmigaOS, Rhapsody and MorphOS. Bochs can run on many host operating systems, like Windows, Windows Mobile, Linux and Mac OS X.
  7. User Mode Linux (UML) was the first virtualization technology for Linux. User-mode Linux is generally considered to have lower performance than some competing technologies, such as Xen and OpenVZ. Future work in adding support for x86 virtualization to UML may reduce this disadvantage.
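
As mentioned in the KVM entry above, native virtualization needs Intel VT or AMD-V support in the CPU. A quick check on Linux (a standard trick rather than anything specific to the products listed; Intel VT shows up as the vmx flag and AMD-V as svm, and a result of 0 means no hardware support was detected):

egrep -c '(vmx|svm)' /proc/cpuinfo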

Proprietary Linux Virtualization Software

  1. VMware ESX Server and VMware Server – VMware Server (also known as GSX Server) is an entry-level server virtualization product. VMware ESX Server is an enterprise-level virtualization product providing data center virtualization. It can run various guest operating systems such as FreeBSD, Linux, Solaris, Windows and others.
  2. Commercial implementations of Xen are available with various features and support:
    • Citrix XenServer : XenServer is based on the open source Xen hypervisor, an exceptionally lean technology that delivers low overhead and near-native performance.
    • Oracle VM : Oracle VM is based on the open-source Xen hypervisor technology, supports both Windows and Linux guests and includes an integrated Web browser based management console. Oracle VM features fully tested and certified Oracle Applications stack in an enterprise virtualization environment.
    • Sun xVM : The xVM Server uses a bare-metal hypervisor based on the open source Xen under a Solaris environment on x86-64 systems. On SPARC systems, xVM is based on Sun’s Logical Domains and Solaris. Sun plans to support Microsoft Windows (on x86-64 systems only), Linux, and Solaris as guest operating systems.
  3. Parallels Virtuozzo Containers – an operating system-level virtualization product designed for large-scale homogeneous server environments and data centers. Parallels Virtuozzo Containers is compatible with x86, x86-64 and IA-64 platforms. You can run various Linux distributions inside Parallels Virtuozzo Containers.

Personally, I’ve used VMware ESX / Server, XEN, OpenVZ and VirtualBox.

Folder roaming, or rather redirection

SolutionBase: Working with roaming profiles and Folder Redirection

By Guest Contributor
May 18, 2005, 7:00am PDT

One of the nice things about the Windows XP operating system (and some of the other previous Windows operating systems) is the way that it allows each user to maintain their own individual settings. If multiple users share a PC, each user can have their own wallpaper, screen saver, desktop, etc. without interfering with anything that other users might have set up. Windows accomplishes this feat by maintaining a separate profile for each user.

Although profiles do a great job of allowing each user to treat the PC as if it was their own, there is one major problem with them. By default, profiles do not follow users from one machine to another. This means that if a user goes to use a different machine, they will have a totally different experience from what they are used to. Roaming profiles can help solve this problem, but they can be a nightmare when users jump from machine to machine. Here’s how you can make them work using Windows Server 2003’s Folder Redirection feature.

Why are roaming profiles a pain?

The reason why this is a problem is because a user’s profile contains much more than just the user’s cosmetic preferences. A profile also contains application configuration data. For example, Microsoft Outlook doesn’t just automatically know where to find a user’s mailbox, it must be configured. Since each user uses a different mailbox, there isn’t one global configuration that can be applied to Outlook. Each user’s individual configuration information for Outlook is therefore stored within the user’s profile.

Obviously, this means that if a user decided to use a different PC, then Outlook won’t work, and there will probably be a few other things missing such as icons, Start menu items, and Internet Explorer favorites. However, if you work in an office in which everyone has their own PC, this might not even be a concern because there is no reason for users to be using someone else’s PC.

In a perfect world, that might be true, but keep in mind that PCs do occasionally malfunction. Imagine for a moment that a user is in the middle of a critical project and their PC malfunctions. They would probably end up having to borrow someone else’s PC while you fix their PC. If the borrowed PC isn’t set up for the user though, you may find yourself having to configure the PC for the user before you can even start trying to fix the malfunctioning computer. This means a lot of extra work for you.

Now, let’s look at another reason why local profiles might be an issue for you. Suppose that a user’s hard disk goes out. The user isn’t working on anything important at the moment, so they can take the rest of the afternoon off while you replace the damaged hard drive. As you replace the drive, you think to yourself how easy this job is going to be. You can use your RIS server to reload the operating system and applications for you. Since the user’s documents are all stored on the network, there is nothing for you to restore. However, the next day, the user comes back, logs on to their PC, and asks you, “Where’s all my stuff?” The computer is no longer displaying the user’s custom desktop, and the user’s shortcuts and Internet Explorer favorites are all missing.

The problem is that all of the files related to the things that are missing were stored locally. This means that those files were lost when the user’s hard drive failed. Since most companies do not back up workstations, there is no way of getting the user’s configuration back. It will be up to the user to recreate anything that was lost, and it will be up to you to reconfigure the user’s applications. You’ve now got that job ahead of you and you’ve got an upset user on your hands.

All of these problems could be prevented if you took advantage of the roaming profiles and folder redirection features offered by Windows Server 2003. The basic idea behind this concept is that the users’ profiles are stored on the server. This means that the user will have the same profile regardless of where they log on. It also means that you can back up all of the files that make up the user’s profile.

It’s actually very easy to implement roaming profiles. Before I show you how to set up roaming profiles though, there are some issues that you need to be aware of. If you just blindly enable roaming profiles, you can cause some serious performance and availability problems for your users. Just to make sure that we are all on the same page, I want to start out by talking about what exactly is contained within a user’s profile.

How Windows handles profiles

Different versions of Windows handle profiles slightly differently, but in Windows XP Professional, user profiles are stored within the C:\Documents and Settings folder. The C:\Documents and Settings folder contains subfolders for every user who has ever logged into the machine. For example, on the workstation that I am using to write this article, there are folders named Administrator.Production, Administrator.Stewie, Brien, and All Users. There are also hidden folders named Default User, LocalService, and NetworkService. The three hidden folders are used by applications and services to interact with the operating system. They are beyond the scope of this article, but I wanted to at least mention their existence.

OK, so what about the visible folders? The All Users folder contains profile elements that apply to anyone who logs into this machine. The Brien folder contains the profile for my user account. There are two Administrator folders: Administrator.Production and Administrator.Stewie. Production is the name of the domain that the workstation is connected to and Stewie is the name of the local machine (named after one of the characters on the cartoon Family Guy).

Windows treats a local user and a domain user as two completely separate user accounts, even if they have the same name, and therefore maintains completely separate profiles for them. You will notice that the folder named Brien does not contain an extension. This folder contains a profile for a domain user named Brien, but no extension is necessary because there isn’t a local user with the same name.

I should also point out that there are certain disaster recovery situations in which you may have to install a fresh copy of Windows. When this happens, Windows won’t overwrite existing profiles, but it won’t re-use them either. Instead, Windows will add yet another extension. For example, if the Administrator.Production folder already existed, but Windows had to be reloaded from scratch, then the next time that the Administrator from the Production domain logged on, Windows would create a profile folder named Administrator.Production.000. In a situation like this, you could actually restore the user’s original profile by copying all of the files from Administrator.Production to Administrator.Production.000.

Now that you know how the naming conventions work for profile folders, let’s talk about the folders’ contents. Normally, a profile folder contains about a dozen sub-folders and at least three files. Most of the folders are pretty self explanatory. For example, the Cookies folder contains Internet Explorer cookies. The Application Data folder stores user-specific configuration information related to applications. However, the Local Settings folder also contains its own Application Data folder.

Aside from that, the most important folders within a profile folder are My Documents, Desktop, and Start Menu, which store the profile owner’s documents, desktop settings, and Start menu configuration respectively. There are several other folders used by profiles, but they are fairly self explanatory, and you won’t have to do anything with these folders as a part of any of the techniques that I will be showing you.

As you can see, there are a lot of different components that make up a user profile. Profiles include a user’s application data, documents, cookies, desktop, favorites, recently opened document list, network neighborhood list, network printer list, Send To option list, and templates. The reason why I am telling you this is because after you enable roaming profiles, all of these profile components will have to be copied to the network. It wouldn’t be so bad if the information only had to be copied once, but usually, everything that I named has to be copied every time that a user logs in or out.

The first time that a user logs on after roaming profiles have been enabled, Windows has to copy the local profile to the designated spot on the network. After that, every time the user logs on, the entire profile is copied from the network server to the user’s local hard disk. The user then works off of the local copy of the profile throughout the duration of their session. When the user logs off, the local profile (including any changes that have been made to it) is copied to the network. This might not sound so bad, but keep in mind that users’ profiles can be huge. Just the My Documents folder alone can easily be several hundred megs in size. I have personally seen several instances in which a user’s profile was so large that it took over an hour for the user to log on or off because so many files had to be copied.

Folder Redirection

The easiest way to get around this problem is to use folder redirection. The idea behind folder redirection is that you can tell Windows that certain parts of the user’s profile should remain on the server rather than being copied each time that the user logs on or off. This drastically reduces the amount of time that it takes users with large profiles to log on or off.

Windows allows you to individually select which folders you want to redirect, but the folders that are most often redirected are My Documents, Application Data, Desktop, and Start Menu.

Caveats

In a few moments I’ll show you how to enable roaming profiles and folder redirection. Before I do though, there are a few caveats that I want to talk about. The first issue that you might encounter has to do with the user having limited functionality on a machine. Technically, a user should be able to log into any PC and have the exact same experience. However, I have seen a few situations in which the user’s experience won’t be quite right unless the user has been configured to be a power user on the machine. For example, the user’s wallpaper might not display, or the user’s screen saver might not work.

This behavior is the exception rather than the rule. If you do run into this type of behavior though, you can fix the problem by opening the Control Panel and clicking the User Accounts link. You can then add the user’s domain account to Windows and make the user a member of the Power Users group.

Another issue that sometimes causes a roaming profile to not act quite right is when the profile references a local file that exists somewhere other than the profile directory. For example, if a user were to create a wallpaper file and then place the file into the C:\Windows folder, the profile would reference the wallpaper file, but would not actually include the wallpaper file. That means that if the user were to log onto a different machine, the profile would be unable to load the user’s wallpaper because the wallpaper file does not exist on the local machine or within the user’s profile.

One last issue that I want to discuss is availability. Earlier I explained that it was a good idea to store profiles on a server because it allows you to back the profiles up each night. However, if the server containing the profiles were to go down, it can cause some problems for the users.

If the server containing the profiles were to fail, the users would still be able to log in and may even be able to use their own profile, because Windows XP maintains a cached copy of the profile. This cached copy would be available for the user’s use assuming that they had previously logged into the workstation. The users would just not be able to save changes to the profile since the profile server is down.

I have gotten around this particular issue by placing the user profiles and redirected folders onto a DFS root. The reason for this is that you can create multiple replicas of a DFS root. This means that you can have copies of profiles and redirected folders on multiple servers. When everything is functioning properly, the multiple replicas help to balance the workload. If a DFS server goes down though, the remaining replicas can pick up the slack. Having multiple DFS replicas also allows you to take a replica offline for maintenance without disrupting the users.

Configuring roaming profiles

The basic technique behind creating a roaming profile involves creating a shared folder on the server, creating a folder for the user within the share, and then defining the user’s profile location through the group policy.

Begin the process by creating a folder named PROFILES on one of your file servers. You must then share the folder. I recommend setting the share-level permissions to grant Everyone Full Control. You should then grant everyone Read permissions to the folder at the NTFS level.

At this point, you will want to create a folder for each user. The folder name should match the user name. For example, if you have a user with a username of Brien, you’d create a folder named \PROFILES\Brien. Once you have created a user’s folder, grant Full Control to the user who the folder will belong to and to the domain administrator. You must then block permissions from being inherited from the parent object. Otherwise, everyone will have read access to the folder. In most situations, this will take care of the necessary permissions. However, I have seen at least one network in which the backup software was unable to back up the user’s profile directories until the backup program’s service account was granted access to each user’s folder. That is the exception rather than the rule though.

Once you have created the necessary folders and defined the appropriate permissions, it’s time to redirect the user’s profile. To do so, open the Active Directory Users and Computers console, right-click on a user account, and select the Properties command from the resulting shortcut menu. When you do, you will see the user’s properties sheet. Next, select the properties sheet’s Profile tab. Enter the user’s profile path as: \\server_name\share_name\user_name.

For example, if you created a share named PROFILES on a server named Tazmania, then the path to Brien’s profile should be \\Tazmania\PROFILES\Brien. Click OK and the user’s profile will be roaming starting with the next login.

After you enable roaming profiles, you will want to redirect the more heavily used folders. You will have to create a separate share on your file server to handle the redirected folders. On my server, I created a folder named USERS (and shared the folder as USERS), but you can call the folder anything that you want. Just make sure to assign Everyone Full Control at the share level.

Once you have created the necessary folder, open the Group Policy Editor and navigate to User Configuration | Windows Settings | Folder Redirection. The group policy requires you to redirect each of the four folders separately, but the procedure for doing so is the same for each folder. Set the folder’s Setting option to Basic – Redirect Everyone’s Folder To The Same Location. Next, select the Create A Folder For Each User Under The Root Path option from the Target Folder Location drop-down list. Finally, enter your root path in the place provided. For example, on my test server the root path is \\TAZMANIA\USERS.

After you configure folder redirection, Windows will automatically create a folder for each user beneath the USERS folder. Windows will also assign the necessary permissions to each dynamically created folder.

That’s it!

As you can see, profiles are a handy way of storing data, but they can be problematic if users tend to move from machine to machine. With a little bit of work and some help from Windows Server 2003’s Folder Redirection feature, you can configure profiles to follow your users around the network as they switch from machine to machine.

What is deduplication?

Deduplication is becoming more prevalent in the world of proprietary solutions for data backup. However, an open source deduplication solution has been emerging for some time and is beginning to mature: OpenDedup.

For those who have forgotten or do not know this technology, here is the definition from Wikipedia:

« Data deduplication is a specific form of compression where redundant data is eliminated, typically to improve storage utilization. In the deduplication process, duplicate data is deleted, leaving only one copy of the data to be stored. However, indexing of all data is still retained should that data ever be required. Deduplication is able to reduce the required storage capacity since only the unique data is stored. For example, a typical email system might contain 100 instances of the same one megabyte (MB) file attachment. If the email platform is backed up or archived, all 100 instances are saved, requiring 100 MB storage space. With data deduplication, only one instance of the attachment is actually stored; each subsequent instance is just referenced back to the one saved copy. In this example, a 100 MB storage demand could be reduced to only 1 MB. Different applications have different levels of data redundancy. Backup applications generally benefit the most from de-duplication due to the nature of repeated full backups of an existing file system. »

Note also that, to make deduplication more effective, data is usually stored and compared in blocks of data.

Great Applications for Deduplication

  • Backups
  • Virtual Machines
  • Network shares for unstructured data such as office documents and PSTs
  • Any application with a large amount of duplicated data

 

Applications that are not a good fit for deduplication

  • Anything that has totally unique data
  • Pictures
  • Music Files
  • Movies/Videos
  • Encrypted Data

 

 

Deduplication with OpenDedup

SDFS leverages data deduplication for primary storage. It acts as a normal file system that can be used for typical IO operations, similar to EXT3, NTFS, etc. The difference is that SDFS hashes blocks of data as they are written to the file system and only writes those that are unique to disk. Blocks that are not unique just reference the data that is already on disk.
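
As a toy illustration of this idea (this is only a sketch of content-addressed block storage, not how SDFS is actually implemented; the paths, the input file name, and the 4 KB block size are arbitrary):

# Split a hypothetical input file into 4 KB blocks, hash each block, and
# keep only one copy of each unique block in a content-addressed store.
mkdir -p /tmp/dedup/store /tmp/dedup/work
cd /tmp/dedup/work
split -b 4096 /path/to/somefile block.
for b in block.*; do
    h=$(sha256sum "$b" | cut -d' ' -f1)
    # write the block only if this hash has not been seen before
    [ -e "/tmp/dedup/store/$h" ] || cp "$b" "/tmp/dedup/store/$h"
done
ls /tmp/dedup/store | wc -l   # number of unique blocks actually stored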

Requirements

System

Optional Packages

  • attr – (setfattr and getfattr) if you plan on doing snapshotting or setting extended file attributes.

Install (on Debian 6)

This assumes that Debian 6 (Squeeze) is already installed.

Opendedup

# wget http://opendedup.googlecode.com/files/sdfs-1.0.7.tar.gz
# tar -zxf sdfs-1.0.7.tar.gz
# mv sdfs-bin /opt/sdfs

Attr

# apt-get install attr

Fuse

# wget http://opendedup.googlecode.com/files/debian-fuse.tar.gz
# tar -zxf debian-fuse.tar.gz
# cd debian-fuse
# apt-get install  libselinux1-dev libsepol1-dev
# dpkg -i libfuse2_2.8.3-opendedup0_amd64.deb \
    libfuse-dev_2.8.3-opendedup0_amd64.deb \
    fuse-utils_2.8.3-opendedup0_amd64.deb

Java

# tar -zxf jdk-7-fcs-bin-b146-linux-x64-20_jun_2011.tar.gz
# mkdir /usr/lib/jvm
# mv jdk1.7.0 /usr/lib/jvm/jdk
# export JAVA_HOME=/usr/lib/jvm/jdk

Create SDFS file system

For all possible parameters of mkfs.sdfs, run: mkfs.sdfs --help

--volume-capacity and --volume-name are required, but I also recommend --volume-maximum-full-percentage, which will return an error when the file system is full. Otherwise the df command will show 100% while the space actually consumed continues to grow. By default, data is stored in /opt/sdfs/<volume name>.

# cd /opt/sdfs
/opt/sdfs# ./mkfs.sdfs --volume-name=sdfs_vol1 --volume-capacity=500MB --volume-maximum-full-percentage=100
Attempting to create volume ...
Volume [sdfs_vol1] created with a capacity of [500MB]
check [/etc/sdfs/sdfs_vol1-volume-cfg.xml] for configuration details if you need to change anything

Mount SDFS file system

/opt/sdfs# mkdir  /mnt/sdfs
/opt/sdfs# ./mount.sdfs -v sdfs_vol1 -m /mnt/sdfs
Running SDFS Version 1.0.7
reading config file = /etc/sdfs/sdfs_vol1-volume-cfg.xml

-f
/mnt/sdfs
-o
direct_io,big_writes,allow_other,fsname=sdfs_vol1-volume-cfg.xml
11:11:05.114     main  INFO [fuse.FuseMount]: Mounting filesystem
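
To unmount the volume later, a standard unmount of the FUSE mount point should work (as root):

# umount /mnt/sdfs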

 

Tests

Two identical copies of files

To begin, we’ll copy the same file twice, under two different names, onto the SDFS file system:

# du -hc /opt/sdfs/volumes/sdfs_vol1/
[...]
20K     total

# cp jdk-7-fcs-bin-b146-linux-x64-20_jun_2011.tar.gz /mnt/sdfs/

# df -h /mnt/sdfs/
Filesystem            Size  Used Avail Use% Mounted on
sdfs_vol1-volume-cfg.xml
                      500M   91M  410M  19% /mnt/sdfs

# du -hc /opt/sdfs/volumes/sdfs_vol1/
[...]
91M     total

# cp jdk-7-fcs-bin-b146-linux-x64-20_jun_2011-copie.tar.gz /mnt/sdfs/

# df -h /mnt/sdfs/
Filesystem            Size  Used Avail Use% Mounted on
sdfs_vol1-volume-cfg.xml
                      500M  181M  319M  37% /mnt/sdfs
# du -hc /opt/sdfs/volumes/sdfs_vol1/
[...]
91M     total

We can see that the disk space actually occupied by the underlying file system is still the same, while df reports the sum of the two files.

Copying two files, where the second contains the data of the first twice

# ls -lh ldap*
-rw-r--r-- 1 root root 42M  8 Jul 13:56 ldap2x.ldif
-rw-r--r-- 1 root root 21M 14 Mar  2006 ldap.ldif

# cp ldap*.ldif /mnt/sdfs/

# df -h /mnt/sdfs/
Filesystem            Size  Used Avail Use% Mounted on
sdfs_vol1-volume-cfg.xml
                      500M   63M  438M  13% /mnt/sdfs

# du -hc /opt/sdfs/volumes/sdfs_vol1/
[...]
42M     total

For this test, deduplication saves one third of the space (63 MB written, 42 MB actually stored).

Copying 500 MB of files (text, jpg, pdf, mp3, …) until the mounted file system is saturated

# ls -rlh /mnt/sdfs/
-rw-r--r-- 1 root root 7.8M 21 Sep   2010 Water - Evolution.mp3
-rw-r--r-- 1 root root 643K 10 Jun  10:27 terrain vague.jpg
-rw-r--r-- 1 root root  34M  7 Jun  09:48 squeezeboxserver_7.5.4_all.deb
[...]

# df -h /mnt/sdfs
Filesystem            Size  Used Avail Use% Mounted on
sdfs_vol1-volume-cfg.xml
                      500M  500M     0 100% /mnt/sdfs
# du -hc /mnt/sdfs
[...]
564M    total

# du -hc /opt/sdfs/volumes/sdfs_vol1/
[...]
583M    total

Here, I confess, I have some difficulty interpreting these results!

Performance

For testing, I used a virtual machine with 4 GB of RAM, 2 CPUs, and 3 virtual disks, on which I installed the Linux distribution Debian Squeeze 6.0.

# hdparm -t /dev/sda
/dev/sda:
Timing buffered disk reads: 486 MB in  3.02 seconds = 160.71 MB/sec

# hdparm -t /dev/sdb
/dev/sdb:
Timing buffered disk reads: 484 MB in  3.00 seconds = 161.18 MB/sec

# hdparm -t /dev/sdc
/dev/sdc:
Timing buffered disk reads: 482 MB in  3.00 seconds = 160.43 MB/sec
  • Test copy of a 698 MB file:

File system   seconds
ext3          13
ext4          6
sdfs          19

Unsurprisingly, EXT4 leads the race and SDFS trails far behind.

  • Test with dd

Writing test

# time sh -c "dd if=/dev/zero of=/mnt/ext3/test bs=4096 count=175000 && sync"
# time sh -c "dd if=/dev/zero of=/mnt/ext4/test bs=4096 count=175000 && sync"
# time sh -c "dd if=/dev/zero of=/mnt/sdfs/test bs=4096 count=175000 && sync"

In this example, we create a test file on each partition (ext3, ext4, and sdfs), writing 175,000 blocks of 4 KB each, which gives a file of about 717 MB. dd reports the time taken and the bandwidth achieved.

File system   seconds   MB/s
ext3          1.95      367
ext4          1.79      401
sdfs          8.36      86

SDFS is more than four times slower than EXT3.

Reading test

# time sh -c "dd if=/mnt/ext3/test of=/dev/null bs=4096 count=175000 && sync"
# time sh -c "dd if=/mnt/ext4/test of=/dev/null bs=4096 count=175000 && sync"
# time sh -c "dd if=/mnt/sdfs/test of=/dev/null bs=4096 count=175000 && sync"

We read back the same test file, sending it to /dev/null. dd reports the same information as before, but this time for reads.

File system   seconds   MB/s
ext3          0.34      2100
ext4          0.32      2200
sdfs          4.98      144

In this second test the gap is even greater, with a ratio of about 1:15!

The next step is to analyse performance using the program bonnie++. This program exercises database-style access to a single file, as well as the creation, reading, and deletion of many small files, simulating the usage patterns of programs like Squid, INN, or programs using the Maildir format (qmail).

# bonnie++ -d /mnt/ext3 -s 512 -r 256 -u0
# bonnie++ -d /mnt/ext4 -s 512 -r 256 -u0
# bonnie++ -d /mnt/sdfs -s 512 -r 256 -u0

The command runs the test using 512 MB of data in the mounted file system. The other options specify the amount of memory (256 MB) and the user (-u0, i.e. root).
Note that “+++++” in certain cells means the test took less than 500 ms and the result could not be calculated.

                 Sequential writes                      Sequential reads        Random access
                 Character   Block        Re-write      Character   Block
FS    test size  KB/s %CPU   KB/s   %CPU  KB/s   %CPU   KB/s %CPU   KB/s   %CPU   KB/s %CPU
ext3  512 MB      633   97  128217   32   92074   13    3744   97   +++++  +++    6071   32
ext4  512 MB      689   99  153463   21  162631   28    3854   98   +++++  +++    5941   28
sdfs  512 MB       36   29   99782   11   71622   20      35   29  147618    7    3241   24

 

                 Sequential creations                   Random creations
                 Create      Read        Delete         Create      Read        Delete
FS    files      /sec %CPU   /sec %CPU   /sec %CPU      /sec %CPU   /sec %CPU   /sec %CPU
ext3  16        24219   37  +++++  +++   5241    7     26500   47  +++++  +++   3908    5
ext4  16        29919   60  +++++  +++   8112   11     30659   62  +++++  +++   5669    8
sdfs  16          340    3   3747    7    907    3       405    3   4076    8    771    2

The ranking is maintained: EXT4 comes out slightly ahead of EXT3, with SDFS in last place, far behind the other two.

Conclusion

OpenDedup is certainly attractive and promising. However, for good performance I think it should be used with 15,000 rpm disks and a minimum of 4 GB of RAM. I also noticed an encoding problem when file names contain accented characters, and the deduplication, which is supposed to operate on blocks of data, does not seem very effective here. The documentation is not abundant, but it contains sufficient information, so I must certainly have missed something… :cry:

Your comments are welcome if you want to explore this subject.


OpenFiler vs FreeNAS

Openfiler vs FreeNAS: Tips for building your own NAS