Horse Sense #120
Iron Horse is now 25 years old!
Tony Stirk has been President the whole time. In Internet years, Iron Horse is 2^25 (about 34 million) years old! Tony is older than that, as his son reminds him often...
Why a Phone Case is Important
Besides adding shock, dust, water, and/or scratch protection to your phone, cases provide other valuable features. My case has bright colors and garish designs, making the phone harder to misplace. It makes the phone less slippery, so it does not easily slide off a car seat, desk, or the top of a pile. It makes the phone easier to hold and the buttons easier to press. It has a built-in stand so I can prop the phone up. It protects my hand from the heat generated by long-term phone use. Because the camera lens is recessed in the case, it is less likely to get scratched or covered with gunk. Accessories can make your phone and computing devices safer and a lot more user friendly.
25 years ago, we connected computers into a network in lots of different ways, but Ethernet came out the clear winner for in-building connections. Ethernet started out at 3Mbps (Megabits per second). You can now connect devices at 100Gbps (Gigabits per second; 1000M = 1G), and even higher speeds are planned. To carry faster Ethernet connections, we have had to upgrade our cabling over time. Early thick coaxial cables were replaced by twisted pair copper and optical cables.
10Gbps Ethernet [Skip to the next section if you are only interested in business issues]
Now, not only is the pipe bigger, but it takes less time for information to travel from one end to the other (latency has decreased). Your information has a better chance to get where it needs to be in time with higher connection speed and lower latency links. If you have read previous Horse Sense articles, you will realize that higher bandwidth is the quick and dirty way to get rid of quality of service, or traffic congestion, issues. If the lanes are wide enough, everyone can travel down them unimpeded. And, even if they do fill up with a ton of data, they will also clear up quickly, and the momentary traffic jam probably will not cause an issue.
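To put rough numbers on how quickly a momentary traffic jam clears, here is a minimal sketch (Python; the 10 GB burst size is an illustrative assumption, and protocol overhead is ignored):

```python
# Rough comparison: how long a burst of queued data takes to drain
# over each link speed (ignoring protocol overhead and latency).
GIGABIT = 1_000_000_000  # bits per second

def drain_seconds(data_bytes, link_bps):
    """Seconds to move data_bytes over a link running at link_bps bits/second."""
    return data_bytes * 8 / link_bps

burst = 10 * 1_000_000_000  # assume a 10 GB pile-up of data

for link_bps, label in [(1 * GIGABIT, "1Gbps"), (10 * GIGABIT, "10Gbps")]:
    print(f"{label}: {drain_seconds(burst, link_bps):.0f} seconds to clear")
```

The same backlog that ties up a 1Gbps link for 80 seconds clears a 10Gbps link in 8, which is why the wider pipe makes most congestion problems simply disappear.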
10Gbps Ethernet adoption has been slower than the shift from 10Mbps to 100Mbps or 100Mbps to 1Gbps Ethernet. In 2007, 1 million 10Gbps Ethernet switch ports shipped. In 2011, 9 million ports shipped. In 2014, it was 25 million, but there were 175 million 100Mbps ports and 300 million 1Gbps ports shipped. Switches with 10Gbps uplink ports are now common.
10Gbps connections will not supplant 1Gbps connections for many years. To make a reliable 10Gbps connection, you must not only connect to more expensive ports, but most people will also need to make the connection through new cabling. Changing what is connected to the ends of cables is easy. Pulling new cables is very disruptive. Fortunately, although you usually cannot use your existing copper cabling, you may be able to use the fiber cables in your network.
For now, one of the more common and cheapest ways to get a 10Gbps connection uses Direct Attach Copper (DAC) cables consisting of SFP+ transceivers grafted to each end of a twinax cable. These premade cables are stiff, bulky, hard to route, have large inflexible ends, and are limited to only 7 meters in length (15 meters if they are specially made "active" cables). You cannot leave the room with these cables and are fairly unlikely to even leave the same equipment rack. But...they are really inexpensive and reliable. Because almost all other 10Gbps Ethernet connection types require transceivers that are more expensive than a whole DAC cable, DAC connections will remain popular for the next 2-5 years. Fiber will be a very popular 10Gbps connection method because you can tie together parts of your networks that are a long distance away from each other. Finally, 10GBase-T connections are starting to become more common as manufacturers have managed to make connectors that do not require enormous amounts of power. They use the standard copper RJ45 connection everyone is used to. Unfortunately, to reach the 100 meter maximum distance reliably, you have to use Category 6A or better cabling, which has been rarely used up to now.
Why is 10Gbps Ethernet Important?
For much of the work that people are interested in doing, they are limited by the slowest responding part of their computing systems. Processors and memory are blazingly fast, but getting the information to them from hard disks is much slower. If that information comes from hard disks that are not on the machine but elsewhere on a network, you slow down even more. Single solid state disks can exceed 22Gbps. Throttling that speed down to a 1Gbps Ethernet link is not a good plan. Solid state disks are also quite good at responding to requests for information spread out all over the disk. Current disks can read over 440,000 random 4KB blocks of data in a second. With the latency of 1Gbps Ethernet, you cannot get anywhere close to that speed. 10Gbps gets you closer.
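As a back-of-the-envelope check (a sketch using only the disk figures quoted above; it counts raw link capacity and ignores latency, which makes the real picture even worse for 1Gbps):

```python
# How much of a fast SSD's performance survives when squeezed
# through a network link? Figures are the ones quoted above.
SSD_SEQUENTIAL_GBPS = 22     # sequential throughput of a single SSD
SSD_RANDOM_IOPS = 440_000    # random 4KB reads per second

def link_iops_ceiling(link_gbps, block_bytes=4096):
    """Max 4KB reads/second the link's raw capacity can carry."""
    return link_gbps * 1e9 / (block_bytes * 8)

for link_gbps in (1, 10):
    seq_fraction = min(link_gbps / SSD_SEQUENTIAL_GBPS, 1.0)
    print(f"{link_gbps}Gbps link: {seq_fraction:.0%} of the SSD's sequential "
          f"speed, at most {link_iops_ceiling(link_gbps):,.0f} 4KB reads/sec "
          f"(disk can do {SSD_RANDOM_IOPS:,})")
```

A 1Gbps link cannot even carry 31,000 of those 4KB reads per second before you account for latency, so the 440,000-read disk spends almost all of its time waiting on the wire.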
10Gbps Ethernet is more expensive than 1Gbps Ethernet, so why not use multiple 1Gbps connections to increase your effective speed from a server? As you add those connections up, they cost more, of course, so the cost advantage starts to go away. Also, the price/performance advantage is heavily weighted in favor of 10Gbps. Even an early generation single port 10Gbps Ethernet card delivers 1.5-2 times the total throughput of a 4 port 1Gbps Ethernet card at the same level of CPU utilization (more stuff out with the processor doing the same amount of work). Newer generation 10Gbps Ethernet cards are much better. In addition, four ports on each end connected by four cables is a lot more complicated and prone to issues (4 x 1Gbps) than one port on each end and one cable (1 x 10Gbps at a minimum of 2.5x the performance!).
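The arithmetic behind that comparison, sketched out (only the throughput multipliers quoted above; no pricing is assumed):

```python
# Comparing a 4-port 1Gbps card against one 10Gbps port, using the
# 1.5-2x throughput multipliers quoted for early 10Gbps cards.
quad_1g_aggregate = 4 * 1.0              # Gbps: best case for 4 x 1Gbps
early_10g_low = 1.5 * quad_1g_aggregate  # early 10Gbps card, low end
early_10g_high = 2.0 * quad_1g_aggregate # early 10Gbps card, high end
capacity_ratio = 10 / quad_1g_aggregate  # raw link capacity: the 2.5x minimum

print(f"4 x 1Gbps: {quad_1g_aggregate:.0f} Gbps aggregate, four cables to manage")
print(f"1 x 10Gbps (early gen): {early_10g_low:.0f}-{early_10g_high:.0f} Gbps "
      f"at the same CPU load, one cable")
print(f"Raw capacity advantage: {capacity_ratio:.1f}x")
```

One port and one cable beating four of each on both throughput and simplicity is why bonding 1Gbps links is a stopgap, not a substitute.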
Since most conversations on computer networks are from a few servers to a bunch of workstations, there are natural bottlenecks. You end up with bottlenecks not just to the servers, but on the connections leading to them. Imagine you are plugged into a 48 port 1Gbps Ethernet switch connected to an identical switch which is connected to the server you want to reach. You and 46 other people on your switch are competing to use the one link between the switches. 47 1Gbps roads all have to use a single 1Gbps road to get to where they need to go. Worse, they then compete with another 46 people connected by 1Gbps roads to reach that server. This oversubscription can result in nasty traffic jams and strip you of throughput, especially as your network becomes more congested. Widen the connection to the server to 10Gbps, and that will help the people on the closest roads, but not you out in the boonies. Add a 10Gbps connection between the two switches, and things get a lot better! In fact, it is more important to eliminate bottlenecks in your network than bottlenecks to a particular endpoint, so switch to switch high bandwidth connections are especially important. As long as the connection between the two switches is fast enough, all of your connections will behave as if they are local rather than half behaving like they are in the boonies. This lets you put anything you want to share on either switch without worrying about a bottleneck.
Below is a basic diagram showing these connections and how you and your coworkers would feel on the network. All of the connections are 1Gbps Ethernet, unless you see 10G.
You--Switch 1--Switch 2--Server
= You hate life a lot. You envy people on Switch 2.
You--Switch 1--Switch 2 10G--10G Server
= You still hate life. You envy people on Switch 2 even more.
You--Switch 1 10G--10G Switch 2--Server
= You and Switch 2 people are on equal footing. Move the Server to your Switch 1 and it will not make much difference. However, you and your buddies can swamp the connection to the server. There are 93 people at 1Gbps connecting through a 1Gbps pipe to the Server.
You--Switch 1 10G--10G Switch 2 10G--10G Server
= Everyone on both switches is much less likely to have to wait for anyone else to get what they want from the Server. You have cut a 93/1 ratio to a much more manageable 9.3/1. Yeah!
Modern networking designs often have many compute servers utilizing a central storage server because you can manage it more easily and use it more effectively. Such a setup means you need to feed data across the network to the compute servers. Large organizations have been able to use Fibre Channel connections to do this, but that requires a separate network, knowledge of Fibre Channel, additional cabling and switch expense, etc. Connecting via Ethernet allows you to do this using much more familiar tools. 1Gbps Ethernet is not all that fast when it comes to delivering data to compute servers. But 10Gbps Ethernet is capable of handling multiple compute servers requesting data from centralized storage at a reasonable rate.
10Gbps Ethernet is also better than earlier generations of Ethernet at supporting virtual machines. You can run many more virtual machines on a single physical server with a 10Gbps Ethernet card and still get the needed amount of connectivity out to the rest of the world. 10Gbps Ethernet adapters tend to have built-in support for virtualization and centralized storage that allows for much higher efficiency than 1Gbps Ethernet adapters. You can easily switch a virtual machine from one physical compute server to another merely by having the new compute server load up the needed information from centralized storage. This allows you to increase performance at no additional cost by moving more heavily utilized virtual servers from one physical box to another while keeping the storage in exactly the same place.
10Gbps Ethernet connections are a cost effective way to increase network performance and provide access to storage. They can increase server throughput while lowering the overall stress on your server. They can improve productivity and keep you from having to introduce costly and often ineffective quality of service or bandwidth management measures. 10Gbps Ethernet connections make compute server virtualization, centralized storage, and cost efficient load balancing not only possible, but useful.
If your network cannot keep up with your servers and your storage, it is like tying a large rock behind a race car. If you want to speed things up, call us. We can help.
©2015 Tony Stirk, Iron Horse email@example.com