Top 5 Free Rescue Discs for Your Sys Admin Toolkit

A rescue disc can be a lifesaver for a SysAdmin. Packed with diagnostic and repair tools, they can do things like fix a Master Boot Record (MBR), recover a password, detect and clean a rootkit, or simply allow you to salvage data by transferring it from a damaged drive to another location. Here are the best all-in-one bootable CDs/USBs that admins can use to troubleshoot and repair Linux or Windows systems – all handy additions to your toolkit.

1. Hiren Boot CD

The tagline for Hiren Boot CD reads “a first aid kit for your computer” – and that it is! Hiren Boot CD is one of the more popular Rescue CDs out there and contains a wealth of tools including defrag tools, driver tools, backup tools, antivirus and anti-malware tools, rootkit detection tools, secure data wiping tools, and partitioning tools, among others.

Hiren Boot CD is available to download as an ISO for easy installation to a USB or burning to a CD.

The boot menu allows you to boot into the MiniXP environment, the Linux-based rescue environment, run a series of tools or boot directly from a specified partition.


The MiniXP environment, as shown in the image below, is much like a Windows XP desktop. Everything pretty much happens from the HBCD Launcher (a standalone application with a drop-down menu containing shortcuts to the packaged applications).



2. FalconFour’s Ultimate Boot CD

FalconFour’s Ultimate Boot CD is based upon the Hiren Boot CD with a customized boot menu and a whole bunch of updated tools thrown in. F4’s UBCD contains tools that provide system information, tools that recover/repair broken partitions, tools that recover data, as well as file utilities, password recovery tools, network tools, malware removal tools and much more.

F4’s UBCD is available for download as an ISO file so you can burn it to a CD or use it to create a bootable USB drive.

Similar to Hiren Boot CD, when you boot F4’s UBCD you are presented with a menu giving you the option to boot into a Linux environment, the MiniXP environment or run a series of standalone tools. As you scroll through the menu, a description of each item is given at the bottom of the screen.


As with Hiren Boot CD, the MiniXP environment is much like a Windows XP desktop environment, only it’s really lightweight and comes pre-packed with a host of diagnostic and repair tools.


Once the desktop has loaded up, choose from one of the available application shortcuts, launch the HBCD Menu or go to the Start menu to get going.

3. SystemRescueCD

SystemRescueCD is a Linux-based package for troubleshooting Linux and Windows systems. The disc contains antivirus, malware removal, and rootkit removal tools as well as tools to help manage or repair partitions, recover your data, back up your data or clone your drives. SystemRescueCD supports ext2/ext3/ext4, reiserfs, btrfs, xfs, jfs, vfat, and ntfs file systems, as well as network file systems like samba and nfs. It also comes with network troubleshooting, file editing and bootloader restoration tools.

SystemRescueCD is available for download as an ISO file so you can burn it to a CD or use it to create a bootable USB drive.
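Whichever disc you pick, it’s worth verifying the ISO before burning it or writing it to USB. A minimal sketch using sha256sum — the file names here are stand-ins created for the demonstration, not actual SystemRescueCD release names:

```shell
# Stand-in ISO created for demonstration; substitute the file you downloaded.
printf 'iso-contents' > systemrescue.iso

# Publishers normally ship a checksum file alongside the ISO; we generate one
# here so the example is self-contained.
sha256sum systemrescue.iso > systemrescue.iso.sha256

# Verify the image before writing it to CD/USB.
sha256sum -c systemrescue.iso.sha256   # prints "systemrescue.iso: OK"
```

With a real download you would fetch the vendor’s published checksum file instead of generating your own.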

When you boot the SystemRescueCD, the pre-boot menu gives you a multitude of options, allowing you to boot directly into the graphical environment or the command line.


In the image below, I have booted into the graphical environment and started the chkrootkit application from the Terminal window which searches for rootkits installed on the system. Other applications can be run directly from the terminal in a similar fashion, using arguments and parameters as necessary.


4. Ultimate Boot CD

Ultimate Boot CD is designed to help you troubleshoot Windows and Linux systems using a series of diagnostic and repair tools. It contains anything from data recovery and drive cloning tools to BIOS management, memory and CPU testing tools.

UBCD is downloadable in ISO format for easy installation to a USB or burning to a CD.

Note: UBCD4Win is UBCD’s brother, built specifically for Windows systems.

When you boot with UBCD you are presented with a DOS-based interface that you navigate depending on which system component you wish to troubleshoot.



5. Trinity Rescue Kit

The Trinity Rescue Kit is a Linux-based Rescue CD aimed specifically at recovery and repair of Windows or Linux machines. It contains a range of tools allowing you to run AV scans, reset lost Windows passwords, backup data, recover data, clone drives, modify partitions and run rootkit detection tools.

The Trinity Rescue Kit is downloadable in ISO format for easy installation to a USB or burning to a CD.

The boot menu gives you the option to start TRK in different modes (useful if you’re having trouble loading in the default mode).


Once you get to the Trinity Rescue Kit ‘easy menu’, simply navigate through the list to choose which tool to execute. You can also switch to the command line if you want more flexibility and feel comfortable with Linux-based commands.



You may also wish to consider…


Boot-Repair-Disk

Boot-Repair-Disk is a Rescue CD primarily designed for repairing Linux distributions but can also be used to fix some Windows systems. It automatically launches the Boot-Repair application (a one-click repair system) which is used to repair access to operating systems, providing GRUB reinstallation, MBR restoration, file system repair, and UEFI, SecureBoot, RAID, LVM, and Wubi support.

Windows System Repair Disc

The Windows System Repair Disc lets you boot into the Windows Recovery Environment, giving you the option to detect and fix startup and booting issues, restore to a workable restore point (if you had System Restore enabled), restore the entire machine from a backup image, conduct a memory diagnostics test and use the command line to run utilities like chkdsk.

Additionally, Linux distributions such as the ones found below are lightweight bootable versions of Linux that contain a host of handy tools to fix common problems, recover data, transfer data, scan for viruses, manage partitions, etc.

Finally, you could also try a Rescue Disc from a popular antivirus vendor, such as:

Although primarily targeted at systems that are infected with malware, they are worth adding to your arsenal.

Create your own!

If you want more flexibility, why not create or customize your own bootable rescue disc?

You have a couple of options here:

1) Create your own bootable Live USB

Using applications such as YUMI (Your Universal Multiboot Installer) or UNetBootin, you can create a multi-boot USB drive containing several operating systems, antivirus utilities, disc cloning, diagnostic tools, and more.

In addition to YUMI and UNetBootin, you may also wish to consider SARDU and Rufus, as recommended by some of our readers.

2) Modify a Linux distribution

If you are using a Linux-based Rescue CD / Live CD, you can use an application like Live-Magic (for Debian-based Linux distributions) or Remastersys to create a bootable ISO of an already installed Linux OS. The idea is to install a clean build of Linux, add or remove applications, make any customizations as necessary, and then run the above-mentioned applications to capture the build into an ISO.

Alternatively, instead of using an application, you can use a series of shell scripts to do the same thing. Check out for more information.

Why can’t I access forwarded ports on my WAN IP from my LAN/OPTx networks?

By default, pfSense does not allow LAN/OPTx-connected PCs to reach forwarded ports on the WAN interface. This is a technical limitation of how the underlying PF functions: it cannot “reflect” in and out of the same interface; it only works when passing “through” the router. NAT Reflection employs some simple bouncing daemons to redirect the connections, which works but isn’t always desirable, or even functional, in some scenarios. Usually, split DNS is the better way if it is possible on your network. Both are explained here.

Method 1: NAT Reflection

In order to access ports forwarded on your WAN interface from internal networks, you need to enable NAT reflection.

In order to do this, go to System > Advanced and uncheck “Disable NAT Reflection”. Click Save, and it should work. This will only work with single port forwards or ranges of fewer than 500 ports. If you’re using 1:1 NAT, you can’t use NAT Reflection.

Example of a system with NAT Reflection enabled (the Disable option is unchecked).

Method 2: Split DNS

The more elegant solution to this problem involves using Split DNS. Basically this means that internal and external clients resolve your hostnames differently.

Your internal clients would access your resources by hostname, not IP, and clients on your local network would resolve that hostname to your LAN IP, and not the WAN IP as others outside your network would see.

In order for this to work using the DNS forwarder in pfSense, your clients will need to have the IP Address of the pfSense router as their primary DNS server.
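pfSense’s DNS forwarder is dnsmasq-based, so a host override ultimately boils down to a dnsmasq host-record entry. A sketch with hypothetical names and addresses:

```
# Hypothetical override: LAN clients resolve www.example.com to an internal
# address instead of the public WAN IP that external clients see.
host-record=www.example.com,192.168.1.50
```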


Some screenshots that show the above in practice:

Split DNS Example, adding DNS Override

Split DNS Example, what your screen should look like with the override in place


Method 3: Experimental Routing Tricks

This should be considered experimental, and could possibly cause bad things to happen!

If you’re using 1:1 NAT, you can’t use NAT Reflection. If you’re a service provider (a web host, say), you may not have all relevant DNS entries under your control, so “Method 2: Split DNS” may be difficult to implement.

If you have a CIDR network block allocated to you which is all behind your pfSense firewall, you might be better off using public addresses on your internal network, or using a mix of public and private addresses.

If you have only a portion of your CIDR block behind pfSense, and you’re using 1:1 NAT, you may have a difficult situation. Here’s a possible approach you can consider. This may not work, or may work in only some situations. Be careful: don’t try this if you’re remote or don’t have console access to your devices.

1. Make the external IP address an alias on your loopback interface (the interface with localhost on it). In FreeBSD, that’s something like this on the command line:
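A sketch of what that FreeBSD command might look like — the address 203.0.113.10 is a documentation-range stand-in for your external IP, not a value from the original text:

```
ifconfig lo0 alias 203.0.113.10 netmask 255.255.255.255
```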

Used in <shellcmd> tags in pfSense, as described here.

2. Cause every other internal host to route traffic destined for your external IP to your internal IP. There are three ways you might do this:

a) Add a static route on every other host with something like route add -host, but you have to run that on every other host. This option can quickly become administratively difficult as the number of internal hosts grows, but this can be mitigated if you have centralized administration (via something like cfagent, say).

b) Run a routing protocol – routed for example – on your internal network, and publish routes reflecting the external/internal 1:1 NAT mapping. This might be the most complicated choice, but might scale better than the other alternatives.

c) This seems to not work, presumably because pfSense already knows a route to the external network: add static routes on the LAN interface of your pfSense firewall with a destination of the external address and a gateway of the internal address. This alternative worries me a little bit, as I’m afraid it might confuse the firewall – I don’t think it will, but please be careful.


Why is VirtualBox only showing 32 bit guest versions on my 64 bit host OS?

I experienced an extremely nettlesome problem after swapping out my traditional hard drive for a faster Solid State Drive (SSD).  I installed Windows 8.1 from scratch using my product key, copied over all my software (I probably should have used Ninite, but I was too lazy) and then mindlessly enabled a bunch of options that I had never enabled before.

But alas! Stupidity has a cost, and in my case it cost hours of discomfiting nights scouring Google for a solution.

Today I want to save you the pain I encountered by showing you how to fix a problem I experienced in VirtualBox.  This post is going to be succinct and to the point.

Even though my Host OS is a 64-bit version of Windows 8.1, VirtualBox categorically refused to display any 64-bit guest OSes in the Create Virtual Machine dialog box.

64-bit OS running 64-bit VirtualBox only showing 32-bit Guest OS

This was super annoying because all my ISOs were 64-bit, so I couldn’t use them until I fixed this problem.

Uninstalling and reinstalling VirtualBox made no observable difference so I booted into the BIOS to see what I could find there.

I have a Lenovo ThinkPad W520.  As a side note, a few months ago I made another idiotic mistake: I enabled a bunch of BIOS passwords to make myself feel secure but then forgot how to disable them!

Thank God I didn’t enable a Supervisor Password or else I would have had to replace the system board.  That’s seriously the only way to get around that one; resetting the CMOS won’t fix a forgotten Supervisor password.  Thankfully, I remembered the Hard Drive password and the Power-On Password, so after surmounting those obstacles, I removed those passwords and looked for anything I could enable to make VirtualBox display 64-bit Guest OS versions.

In the Security Section, I noticed an option called Virtualization.

Filled with a bracing hope, I tabbed over and hit enter but then noticed all relevant settings were already enabled!

Intel (R) Virtualization Technology was enabled and the Intel (R) VT-d Feature was also enabled.  These were the two key options that VirtualBox was expecting, but since both were already enabled I was utterly flummoxed.

Do I need to toggle the values?  In other words, do I need to disable both options, save changes, reboot, and then enable them again?


I couldn’t figure it out so I decided to poke around the administrator options in Windows to see what I could find.

I wanted to see which administrative Windows features were enabled – perhaps something was conflicting with the virtualization settings in the BIOS?

I quickly pressed Windows Key + q to open the Search box and typed in:

turn windows features on or off


I scanned a few options but one in particular was salient:

Hyper-V was enabled.

In Windows 8.1, Hyper-V is the successor to Microsoft Virtual PC.  It’s the native virtualization component available to all Windows 8.1 users.

It was enabled though…

Interesting.  Could this be conflicting with the Intel settings in my BIOS?  I decided to uncheck the option to see.


Windows quickly displayed a progress bar denoting the removal of the Hyper-V platform and after about a minute prompted me to reboot.

When my system came back up, I swiftly logged back into Windows, kicked open VirtualBox and checked the versions list:

Yes yes yes!!!!

I couldn’t have been more elated – something like this might seem trivial to some people, but it was really worrying me because it used to work before I upgraded my hard drive.  It turns out upgrading my hard drive wasn’t even remotely related to the problem.

I think I literally pumped my fists in the air when I saw this screen.


The Bottom Line

If VirtualBox is only showing 32-bit versions in the Version list, make sure:

  • Your host OS is 64-bit
  • Intel Virtualization Technology and VT-d are both enabled in the BIOS
  • The Hyper-V platform is disabled in your Windows Features list
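If you’re booted into a Linux live environment (such as one of the rescue discs above), the second point can be sanity-checked by looking for the vmx (Intel VT-x) or svm (AMD-V) CPU flags. A hedged sketch — it reports what the CPU exposes to the OS, which on some systems can differ from what the BIOS menu displays:

```python
def has_hw_virt(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the CPU advertises Intel VT-x (vmx) or AMD-V (svm)."""
    try:
        with open(cpuinfo_path) as f:
            text = f.read()
    except OSError:
        return False
    flags = set()
    for line in text.splitlines():
        # x86 CPUs list their capabilities on "flags" lines.
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return "vmx" in flags or "svm" in flags

print(has_hw_virt())
```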

I hope this helps you – I don’t know if my situation applies to your system configuration, but I wanted to share.  Hopefully this little article will spare you the hours of mind-numbing frustration that besieged me for the last few weeks.



The curious case of slow downloads

Some time ago we discovered that certain very slow downloads were getting abruptly terminated and began investigating whether that was a client (i.e. web browser) or server (i.e. us) problem.

Some users were unable to download a binary file a few megabytes in length. The story was simple—the download connection was abruptly terminated even though the file was in the process of being downloaded. After a brief investigation we confirmed the problem: somewhere in our stack there was a bug.

Describing the problem was simple, reproducing it was easy with a single curl command, but fixing it took a surprising amount of effort.

CC BY 2.0 image by jojo nicdao

In this article I’ll describe the symptoms we saw, how we reproduced it and how we fixed it. Hopefully, by sharing our experiences we will save others from the tedious debugging we went through.

Failing downloads

Two things caught our attention in the bug report. First, only users on mobile phones were experiencing the problem. Second, the asset causing issues—a binary file—was pretty large, at around 30MB.

After a fruitful session with tcpdump one of our engineers was able to prepare a test case that reproduced the problem. As so often happens, once you know what you are looking for reproducing a problem is easy. In this case setting up a large file on a test domain and using the --limit-rate option to curl was enough:

$ curl -v --limit-rate 10k > /dev/null
* Closing connection #0
curl: (56) Recv failure: Connection reset by peer  

Poking with tcpdump showed there was an RST packet coming from our server exactly 60 seconds after the connection was established:

$ tcpdump -tttttni eth0 port 80
00:00:00 IP > Flags [S], seq 3193165162, win 43690, options [mss 65495,sackOK,TS val 143660119 ecr 0,nop,wscale 7], length 0  
00:01:00 IP > Flags [R.], seq 1579198, ack 88, win 342, options [nop,nop,TS val 143675137 ecr 143675135], length 0

Clearly our server was doing something wrong. The RST packet coming from the CloudFlare server is just bad. The client behaves, sends ACK packets politely, consumes the data at its own pace, and then we abruptly cut the conversation.

Not our problem

We are heavy users of NGINX. In order to isolate the problem we set up a basic off-the-shelf NGINX server. The issue was easily reproducible locally:

$ curl --limit-rate 10k  localhost:8080/large.bin > /dev/null
* Closing connection #0
curl: (56) Recv failure: Connection reset by peer  

This proved the problem was not specific to our setup—it was a broader NGINX issue!

After some further poking we found two culprits. First, we were using the reset_timedout_connection setting. This causes NGINX to close connections abruptly. When NGINX wants to time out a connection it sets SO_LINGER without a timeout on a socket, followed by a close(). This triggers the RST packet, instead of a usual graceful TCP finalization. Here’s an strace log from NGINX:

04:20:22 setsockopt(5, SOL_SOCKET, SO_LINGER, {onoff=1, linger=0}, 8) = 0  
04:20:22 close(5) = 0  
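That SO_LINGER trick can be reproduced from userspace. A small Python sketch (not NGINX’s actual code, just the same socket options) that makes a loopback connection die with “connection reset by peer”:

```python
import socket
import struct

# Build a loopback TCP connection.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket()
cli.connect(srv.getsockname())
conn, _ = srv.accept()

# l_onoff=1, l_linger=0: close() drops the connection with an RST instead
# of the usual graceful FIN/ACK sequence -- the same thing strace shows
# NGINX doing above.
conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
conn.close()

reset = False
try:
    cli.recv(1)
except ConnectionResetError:   # the peer sees "connection reset by peer"
    reset = True
print("reset:", reset)
```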

We could just have disabled the reset_timedout_connection setting, but that wouldn’t have solved the underlying problem. Why was NGINX closing the connection in the first place?

After further investigation we looked at the send_timeout configuration option. The default value is 60 seconds, exactly the timeout we were seeing.

http {
     send_timeout 60s;
}

The send_timeout option is used by NGINX to ensure that all connections will eventually drain. It controls the time allowed between successive send/sendfile calls on each connection. Generally speaking it’s not fine for a single connection to use precious server resources for too long. If the download is going on too long or is plain stuck, it’s okay for the HTTP server to be upset.

But there was more to it than that.

Not an NGINX problem either

Armed with strace we investigated what NGINX actually did:

04:54:05 accept4(4, ...) = 5  
04:54:05 sendfile(5, 9, [0], 51773484) = 5325752  
04:55:05 close(5) = 0  

In the config we ordered NGINX to use sendfile to transmit the data. The call to sendfile succeeds and pushes 5MB of data to the send buffer. This value is interesting—it’s about the amount of space we have in our default write buffer setting:

$ sysctl net.ipv4.tcp_wmem
net.ipv4.tcp_wmem = 4096 5242880 33554432  

A minute after the first long sendfile the socket is closed. Let’s see what happens when we increase the send_timeout value to some big value (like 600 seconds):

08:21:37 accept4(4, ...) = 5  
08:21:37 sendfile(5, 9, [0], 51773484) = 6024754  
08:24:21 sendfile(5, 9, [6024754], 45748730) = 1768041  
08:27:09 sendfile(5, 9, [7792795], 43980689) = 1768041  
08:30:07 sendfile(5, 9, [9560836], 42212648) = 1768041  

After the first large push of data, sendfile is called several more times. Between each successive call it transfers about 1.7 MiB. Between these syscalls, about every 180 seconds, the socket was steadily being drained by the slow curl, so why didn’t NGINX keep refilling it?

The asymmetry

A motto of Unix design is “everything is a file”. I prefer to think about this as: “in Unix everything can be readable and writeable when given to poll”. But what exactly does “being readable” mean? Let’s discuss the behavior of network sockets on Linux.

The semantics of reading from a socket are simple:

  • Calling read() will return the data available on the socket, until it’s empty.
  • poll reports the socket as readable when any data is available on it.

One might think this is symmetrical and similar conditions hold for writing to a socket, like this:

  • Calling write() will copy data to the write buffer, up until “send buffer” memory is exhausted.
  • poll reports the socket is writeable if there is any space available in the send buffer.

Surprisingly, the last point is not true.

Different code paths

It’s very important to realize that in the Linux Kernel, there are two separate code paths: one for sending data and another one for checking if a socket is writeable.

In order for send() to succeed two conditions must be met:

On the other hand, the conditions for a socket to be reported as “writeable” by poll are slightly narrower:

The last condition is critical. This means that after you fill the socket send buffer to 100%, the socket will become writeable again only when it’s drained below 66% of send buffer size.
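To put numbers on that, a quick sketch assuming the 5 MiB default send buffer from the tcp_wmem output earlier:

```python
SNDBUF = 5242880  # default TCP send buffer size (bytes) from net.ipv4.tcp_wmem

# poll() only reports the socket writeable once at least one third of the
# buffer is free, i.e. once queued data drops below two thirds of the size.
writeable_threshold = 2 * SNDBUF // 3        # queued bytes must fall below this
refill_chunk = SNDBUF - writeable_threshold  # bytes drained before each refill

print(refill_chunk)  # 1747627 bytes, roughly the 1.7 MiB seen per sendfile()
```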

Going back to our NGINX trace, the second sendfile we saw:

08:24:21 sendfile(5, 9, [6024754], 45748730) = 1768041  

The call succeeded in sending 1.7 MiB of data. This is close to 33% of 5 MiB, our default wmem send buffer size.

I presume this threshold was implemented in Linux in order to avoid refilling the buffers too often. It is undesirable to wake up the sending program after each byte of the data was acknowledged by the client.

The solution

With full understanding of the problem we can decisively say when it happens:

  1. The socket send buffer is filled to at least 66%.
  2. The customer download speed is poor and it fails to drain the buffer to below 66% in 60 seconds.
  3. When that happens, the send buffer is not refilled in time, it’s not reported as writeable, and the connection gets reset with a timeout.

There are a couple of ways to fix the problem.

One option is to increase the send_timeout to, say, 280 seconds. This would ensure that given the default send buffer size, consumers faster than 50Kbps will never time out.
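The arithmetic behind that 280-second figure, as a sketch (assuming the 5 MiB default buffer and the one-third drain threshold discussed above):

```python
SNDBUF = 5242880      # default send buffer size in bytes
SEND_TIMEOUT = 280    # proposed send_timeout in seconds

# The connection survives as long as the client frees one third of the
# buffer within every send_timeout window.
min_bytes_per_sec = (SNDBUF / 3) / SEND_TIMEOUT
min_kbps = min_bytes_per_sec * 8 / 1000

print(round(min_kbps))  # ~50 kbps
```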

Another choice is to reduce the tcp_wmem send buffer sizes.
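For example, via sysctl — these values are purely illustrative, not a recommendation:

```
# /etc/sysctl.conf -- shrink the maximum TCP send buffer (illustrative values);
# smaller buffers cap per-connection memory but can reduce throughput on fast links
net.ipv4.tcp_wmem = 4096 262144 4194304
```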

The final option is to patch NGINX to react differently on timeout. Instead of closing the connection, we could inspect the amount of data remaining in the send buffer. We can do that with ioctl(TIOCOUTQ). With this information we know exactly how quickly the connection is being drained. If it’s above some configurable threshold, we could decide to grant the connection some more time.
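The ioctl is exposed on Linux as TIOCOUTQ (also known as SIOCOUTQ). A sketch of querying it from Python — the loopback setup with deliberately tiny buffers is just scaffolding to make the send queue visibly back up; NGINX would make the equivalent C call on its client sockets:

```python
import fcntl
import socket
import struct
import termios

def unsent_bytes(sock):
    """Bytes still queued in the socket's send buffer (Linux TIOCOUTQ)."""
    raw = fcntl.ioctl(sock.fileno(), termios.TIOCOUTQ, struct.pack("i", 0))
    return struct.unpack("i", raw)[0]

# Loopback TCP pair with a tiny send buffer so the queue fills quickly.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket()
cli.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)
cli.connect(srv.getsockname())
peer, _ = srv.accept()

cli.setblocking(False)
try:
    while True:                      # fill the send buffer until it blocks
        cli.send(b"x" * 4096)
except BlockingIOError:
    pass                             # buffer full: peer isn't reading

print(unsent_bytes(cli))             # > 0: data queued, waiting to be drained
```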

My colleague Chris Branch prepared a Linux specific patch to NGINX. It implements a send_minimum_rate option, which is used to specify the minimum permitted client throughput.


The Linux networking stack is very complex. While it usually works really well, sometimes it gives us a surprise. Even very experienced programmers don’t fully understand all the corner cases. During debugging we learned that setting timeouts in the “write” path of the code requires special attention. You can’t just treat the “write” timeouts in the same way as “read” timeouts.

It was a surprise to me that the semantics of a socket being “writeable” are not symmetrical to the “readable” state.

In the past we found that raising receive buffers can have unexpected consequences. Now we know that tweaking wmem values can affect something totally different—NGINX send timeouts.

Tuning a CDN to work well for all the users takes a lot of work. This write up is a result of hard work done by four engineers (special thanks to Chris Branch!). If this sounds interesting, consider applying!


Reverse DNS for Azure VM created in Resource Manager

I have the same problem and found a solution. With a CNAME record pointing to a PublicIpAddress DNS name, I was able to create the PublicIpAddress ReverseFqdn. After the creation, I changed the CNAME back to an A record pointing to the new PublicIpAddress.

So Method 2 from the error works for me; Method 1 does not.

Step by step:

1.) New-AzureRmPublicIpAddress -Name tempstatic -ResourceGroupName cw-extesting -Location "west us" -AllocationMethod Static -DomainNameLabel tempstatic47115

2.) Create a CNAME record for your hostname pointing to the DNS name of tempstatic.

3.) New-AzureRmPublicIpAddress -Name testingstatic -ResourceGroupName cw-extesting -Location "west us" -ReverseFqdn -AllocationMethod Static -DomainNameLabel testingstatic47115

4.) Change the CNAME to an A record pointing to the IP of step 3 (testingstatic).

5.) Remove-AzureRmPublicIpAddress -Name tempstatic -ResourceGroupName cw-extesting

Hint: ReverseFqdn only works in combination with DomainNameLabel, because New-AzureRmPublicIpAddress checks that DomainNameLabel is not empty when creating the DNS settings.