Computer Application, Maintenance and Supplies
Showing posts with label Memory. Show all posts

Saturday, December 24, 2011

Timeline Facebook


At the end of 2011, Facebook launched yet another new feature to pamper Facebookers. The feature is currently still in a trial phase, so not every user can enjoy it directly yet. Some, of course, have already been able to activate it on their profiles, marked by a dialog box appearing at login asking the user to enable Facebook Timeline. Other users will have to be extra patient until their profiles get the same opportunity. Even so, Facebookers can take a shortcut by visiting the Facebook Help Center and clicking the Get Timeline button. The old profile is then replaced with a new, more elegant layout resembling a typical website or blog, consisting of a left sidebar, main wrapper, right sidebar, header and footer (apologies if that breakdown is inaccurate).


Friends of Ambae.exe, you may remember the earlier post about changing the appearance of the Google page in your browser. Facebook's new Timeline feature includes something rather similar: Facebookers can use it to add a cover, i.e. a wrapper image, to their profile.

Before the Timeline goes live, users are given until December 31, 2011 to edit their profile's appearance. Once you are happy with how your profile looks, click the Publish Now button; your profile will then automatically be shown to other users with the layout you customised following the Facebook Timeline concept.

Timeline Facebook-1
Adding a cover to your profile makes a more attractive impression than the old profile layout, which only displayed the profile photo on the main page (Wall). To the right of the profile photo, click the Add a Cover button. Facebook lets you either upload a new photo or pick an existing one from your profile albums.

A few things are worth noting about the Facebook profile cover:
  1. Facebook requires the photo to be 720 pixels wide.
  2. The photo resolution must be at least 399 pixels.
  3. Facebook does not specify a required photo height.
  4. After uploading, the photo can be dragged up or down to reposition it as desired before the change is saved and displayed.
  5. When choosing a photo, prefer one with at least 125px x 240px of free space in the bottom-left corner, where the profile photo will sit on the main page (Wall).
  6. Although Facebook does not set a height requirement, it is best to prepare the photo in its final position before uploading; this minimises repositioning and keeps the cover looking effective.
  7. Ambae.exe recommends a photo of 469px x 720px at a resolution of 1000px, like the image below. Feel free to try it on your profile; if it suits you, use it, and if not, change the cover or cancel before saving.

Timeline Facebook-2
Speaking of a timeline implies movement through time. Photos, statuses, comments, videos and other information on your profile can easily be displayed by selecting a particular period, so you no longer have to wait long: even your oldest information can be read as if it had been posted only yesterday. Just as important to note is that the new Timeline-based profile, whether you realise it or not, makes the profile noticeably heavier. With the constant scrolling involved, CPU and memory resources can be eaten up reading and opening all of the available information.

If you try Facebook Timeline on a netbook with standard specifications, don't be surprised if it hangs or stalls fairly quickly. So if you plan to use Facebook Timeline now, there are a few requirements worth weighing up first. Beyond your computer's specifications, check that your Internet connection has sufficient bandwidth. If all of the above are well met, there is no harm in trying to activate Facebook Timeline. Unless, of course, Facebook applies this feature permanently to every Facebooker's profile, in which case we will have to accept it whether we like it or not.

Have fun creating a cooler, more attractive Facebook cover. Merry Christmas 2011 and Happy New Year 2012 to those fellow bloggers who celebrate. Greetings, Facebookers.

Monday, April 19, 2010

Making Server

Servers are the cornerstone of corporate infrastructure, relied upon to provide the services that employees and customers require to perform day-to-day operations in a timely and efficient manner. The single most important attribute of most enterprise-grade servers is reliability, and a good level of fault tolerance is factored into the design of most servers in order to increase uptime. Many readers run servers in their own home: the headless Linux box in the corner of the study that provides email, web, DNS, routing and file sharing services for the household.


While these machines still constitute servers in a raw sense, it would take a brave Technology Officer to put their faith in these white boxes to fulfil the IT requirements of their company. This guide demonstrates what differentiates business-class servers from the typical white box server that you can build from off-the-shelf components, and highlights some of the many factors of a server's design that need to be carefully considered in order to provide reliable services for business.

Form Factor
Servers come in all shapes and sizes. The tower server is designed for organisations or branch offices whose entire infrastructure consists of a server or two. From the outside, they wouldn’t look out of place on or under someone’s desk but the components that make up the server’s guts are often of a higher build quality than workstation components. Tower cases are generally designed to minimise cost whilst providing smaller businesses some sense of familiarity with the design of the enclosure.

For larger server infrastructures, the rack mount case is used to hold a server's components. As the name suggests, rack mount servers are almost always installed within racks and located in dedicated data rooms, where power supply, physical access, temperature and humidity (among other things) can be closely monitored. Rack mount servers come in standard sizes: they are 19 inches in width and have heights in multiples of 1.75 inches, where each multiple is 1 Rack Unit (RU). They are often designed with flexibility and manageability in mind.

Lastly, the blade server is designed for dense server deployment scenarios. A blade chassis provides the base power, management, networking and cooling infrastructure for numerous, space efficient servers. Most of the top 500 supercomputers these days are made up of clusters of blade servers in large data centre environments.

Processors
With the proliferation of quad core processors in the mainstream performance sector of today's computing landscape, the main difference between servers and workstations comes down to support for multiple sockets. Consumer-class Core 2 and Phenom based systems are built around a single-socket design that features multiple cores per socket and cannot be used in multi-socket configurations. Xeon and Opteron processors, on the other hand, provide interconnects that allow processes to be scheduled across multiple separate processors, each featuring multiple cores, contributing towards the total processing power of a server. It's not uncommon to see quad socket, four core processors in some high end servers providing a total of 16 processing cores at upwards of 3.0GHz per core. The scary thing is that six core and eight core processors are just around the corner...

The other main difference that you see between consumer and enterprise processors is the amount of cache that is provided. Xeon and Opteron processors often have significantly larger Level 2 and Level 3 caches in order to reduce the amount of data that has to be shifted to memory, generally resulting in slightly faster computation times depending on the application. A server’s form factor will also have an impact on the type of processor that can be used. For instance, blade servers often require more power efficient, cooler processors due to their increased deployment density. Similarly, a 4RU server may be able to run faster and hotter processors than a 1RU server from the same vendor.

Memory
While the physical RAM modules that you see in today's servers don't differ dramatically from consumer parts, there are numerous subtle differences to the memory subsystems that provide additional fault tolerance features. Most memory controllers feature Error Checking and Correction (ECC) capabilities, and the RAM modules installed in such servers need to support this feature. Essentially, ECC-capable memory performs a quick parity check before and after each read or write operation to verify that the contents of memory have been read or written properly. This feature minimises the likelihood of memory corruption due to a faulty read or write operation.
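The parity idea behind ECC can be sketched in a few lines of Python. This is an illustrative single parity bit, far simpler than the SECDED Hamming codes real ECC modules use, but it shows how a check computed at write time exposes a corrupted read:

```python
def parity_bit(word: int) -> int:
    """Even parity over the bits of a memory word."""
    p = 0
    while word:
        p ^= word & 1
        word >>= 1
    return p

# At write time, store the word together with its parity bit.
stored_word = 0b10110100
stored_parity = parity_bit(stored_word)

# At read time, recompute parity; a mismatch means a bit flipped in between.
corrupted = stored_word ^ 0b00001000          # simulate a single-bit error
assert parity_bit(stored_word) == stored_parity   # clean read passes the check
assert parity_bit(corrupted) != stored_parity     # flipped bit is detected
```

Note that a single parity bit only detects an odd number of flipped bits; real SECDED ECC can additionally correct single-bit errors, which is what lets a server keep running through a transient fault.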

The other main difference in memory controller design is how much RAM is supported. Intel based servers are about to start utilising a memory controller that is built on to the processor die, as has been the case with AMD based systems for years. Even the newest mainstream memory controllers support a maximum of 16GB of RAM. HP have recently announced a "virtualisation ready" Nehalem based server design that will support 128GB of RAM, which will be available by year's end. Many modern servers provide mirrored memory features. A memory mirror essentially provides RAID 1 functionality for RAM: the contents of your system memory are written to two separate banks of identical RAM modules. If one bank develops a fault, it is taken offline and the second bank is used exclusively. The memory controller of the server can usually handle this failover without the operating system even being aware of the change, preventing unscheduled downtime of the server.

Hot spare memory can also be installed in a bank of some servers. The idea here is that if the memory in one bank is determined to be faulty, the hot spare bank can be brought online and used in place of the faulty bank. In this scenario, some memory corruption can occur depending on the operating system and memory controller combination in use. The worst case scenario here usually involves a crash of the server, followed by an automated reboot by server recovery mechanisms (detailed later on in this article). Upon reboot, the memory controller brings the hot spare RAM online, limiting downtime. Hot swappable memory is often used in conjunction with both of these features, giving you the ability to swap out faulty RAM modules without having to shut down the entire server.

Storage Controllers
Drive controllers are dramatically different in servers. Forget on board firmware based SATA RAID controllers that provide RAID 0, 1 and 1+0 and consume CPU cycles every time data is read or written to the array. Server class controllers have dedicated application specific integrated circuits (ASICs) and a bucket full of cache (sometimes as much as 512MB) in order to boost the performance of the storage subsystem. These controllers also frequently support advanced RAID levels including RAID 5 & 6.

The controller cache can be one of the most critical components of a server, depending on the application. At my place of employment, we have a large number of servers that capture video in HD quality in real time. A separate "ingest" server often pulls this data from the encode server immediately after it has been captured for further processing and transcoding. Having 512MB of cache installed on the drive controller allows data to be pushed out via the network interface before it has been physically written to disk, significantly boosting performance. Testing has revealed that if we reduced the cache size to 64MB, data has to be physically written to disk and then physically read when the ingest process takes place, placing significant additional load on the server. Finally, consider that most mainstream controllers have no cache whatsoever; the impact on performance in this scenario would probably prevent us from working with HD quality content altogether.

But what happens if there is a power outage and the data that is in the controller cache has not yet been written to the disk? In order to prevent data loss, some controllers feature battery backup units (BBUs) that are capable of keeping the contents of the disk cache intact for in excess of 48 hours or until power is restored to the server. Once the server is switched on again, the controller commits the data from the cache to the disk array before flushing the cache and continuing with the boot process. No data is lost. BBUs are another feature missing from mainstream controllers.

The problem with RAID 5
Traditionally, RAID 5 has been the holy grail of disk arrays, providing the best compromise between performance and fault tolerance. However with the continual increase in storage density, RAID 5 is starting to exhibit a significant design flaw when the array has to be rebuilt after a disk failure.

RAID 5 arrays can tolerate the failure of a single drive in the array. If during the time that it takes to replace the faulty drive and rebuild the array, a second drive fails or an unrecoverable read error (URE) occurs on one of the surviving drives in the array, the rebuild will fail and all data on the array will be lost.

Most manufacturers will quote the probability of encountering a URE in the detailed specifications sheet for each drive. Most consumer grade products have a quoted URE rate of ~1 in 10^14 bits read, which translates to an average of 1 URE encountered for every 12TB of data read. Now, imagine that you have a RAID 5 array containing four 1.5TB drives (which are now readily available) and one disk goes pear shaped. You replace the faulty drive, the rebuild process begins, and 1.5TB of data is read from each remaining drive in order to rebuild the data on the new disk. Assuming that you have "average" drives, there's around a 33% chance of encountering a URE while rebuilding the array, which would result in the loss of up to 4.5TB of data.

Back in the days when we were dealing with arrays containing five 32GB disks, the probability of a URE occurring during array rebuilds was miniscule. But nowadays, it’s not uncommon to see array configurations exceeding 2TB in size, containing eight or more large capacity drives. As a result of the increased number of drives and the increasing capacity of those drives, the probability of encountering a URE during the rebuild process is approaching the stage where RAID 5 arrays are unlikely to be successfully rebuilt in the event of a drive failure. And the more, larger capacity drives that you use in an array, the more likely a URE will occur during the rebuild.

RAID 6 is the solution that is commonly used to overcome the limitations of RAID 5. RAID 6 utilises two different parity schemes and distributes these parity blocks across drives in much the same manner as RAID 5 does. The use of two separate parity schemes essentially allows two drives in an array to fail while maintaining data integrity. While RAID 5 requires n+1 drives in the array, RAID 6 requires n+2, so you'll be assigning the capacity of two whole drives to parity instead of one.
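The single-parity scheme that RAID 5 relies on is just a byte-wise XOR across the stripe, which is enough to rebuild any one lost block. A minimal Python sketch (RAID 6's second parity block uses a different code, typically Reed-Solomon based, and is omitted here):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR byte-wise across equal-length blocks, as a RAID parity engine does."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Three data blocks striped across three drives, with parity on a fourth.
data = [b"\x0f\x10", b"\xf0\x01", b"\xaa\x55"]
parity = xor_blocks(data)

# If drive 1 fails, its block is recoverable from the survivors plus parity,
# because XOR-ing everything except the missing block yields that block.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

This also makes the rebuild hazard concrete: recovering the lost block requires reading every surviving block successfully, which is exactly where a URE on a surviving drive becomes fatal to a RAID 5 rebuild.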
If the server that you’re building does not require a large amount of disk space, RAID 5 may be perfectly acceptable. However, if you’re deploying a large number of drives or large capacity drives in your server, you’ll want to ensure that you have a drive controller that supports RAID 6.

It should also be noted that while RAID 6 overcomes the issues that are starting to become prominent with RAID 5, a few years from now RAID 6 will exhibit the same problem if used with larger arrays and drives of larger capacities than we have today. But until that day comes, RAID 6 remains a more reliable fault tolerance scheme than RAID 5.

Maths
Regardless of the scenario, we assume that all 1.5TB needs to be read from all drives in the array in order to perform a successful rebuild. This gives us a 12.5% probability of encountering a URE on a single drive (1.5 / 12 = 0.125), and an 87.5% probability of not encountering one (1 - 0.125 = 0.875).
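That arithmetic can be checked directly. This sketch assumes independent URE events per drive read, matching the simplification above, and models a four-drive array where the rebuild must read the three survivors:

```python
# Probability that a single 1.5TB drive is read without a URE,
# given ~1 URE per 12TB of data read (a quoted rate of 1 in 10^14 bits).
p_drive_ok = 1 - 1.5 / 12           # 0.875

# RAID 5: all three surviving drives must be read URE-free.
p_raid5_rebuild = p_drive_ok ** 3    # ~0.67, i.e. a ~33% chance of failure

# RAID 6 (after one drive failure, one parity remains): the rebuild
# survives if zero or exactly one of the three reads hits a URE.
p_ure = 1 - p_drive_ok
p_raid6_rebuild = p_drive_ok ** 3 + 3 * p_drive_ok ** 2 * p_ure

print(round(p_raid5_rebuild, 3))     # ~0.67
print(round(p_raid6_rebuild, 3))     # ~0.957
```

Scaling the same calculation up to eight or more large-capacity drives is what drives the RAID 5 success probability towards unusable territory.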

Working through this arithmetic shows that you're much more likely to achieve a successful rebuild with a RAID 6 array; however, even this probability of success is lower than what some would desire. This only reinforces the fact that RAID 6 is significantly better than RAID 5, but it will eventually experience the same issues, assuming that URE rates don't improve as disk capacities grow.

And on a side note, I was the unfortunate victim of a rebuild failure due to UREs about a year ago: I accidentally knocked a power cord out of a seven-drive RAID 5 NAS enclosure built from 250GB disks (the enclosure was four years old and did not support RAID 6, but we did have it configured with one hot spare drive). Knocking the cable out abruptly killed one of the redundant power supplies, which took one of the drives with it. The hot spare drive was immediately activated and the array began to be rebuilt. About 5 hours into the rebuild, a URE occurred and the rebuild failed.

It's just as well we had that 1.5TB worth of data backed up on to a second array as well as LTO tape; this just goes to show that RAID arrays are not the be-all and end-all of fault tolerance.

External Storage
Any computer chassis has a physical limitation on the number of drives that you can install. This limitation is overcome in enterprise servers by connections to Storage Area Networks (SANs), typically accomplished in one of two ways: via fibre channel or iSCSI interfaces.

iSCSI is generally the cheaper option of the two because data transferred between the SAN and server is encapsulated in frames sent over ubiquitous Ethernet networks, meaning that existing Ethernet interfaces, cabling and switches can be used (aside from the cost of the SAN enclosure itself, the only additional costs are generally an Ethernet interface module for the SAN and software licenses).

On the other hand, fibre channel requires its own fibre optic interfaces, cabling and switches, which significantly drives up cost. However, having a dedicated fibre network means that bandwidth isn’t shared with other Ethernet applications. Fibre channel presently offers interface speeds of 4Gb/s compared to the 1Gb/s often seen in most enterprise networks. Fibre channel also has less overhead than Ethernet, which provides an additional boost to comparative performance.

Disk Drives
For years, enterprise servers have utilised SCSI hard disk drives instead of ATA variants. SCSI allowed for up to 15 drives on a single parallel channel versus the 2 on a PATA interface; PATA drives ship with the drive electronics (the circuitry that physically controls the drive) integrated on the drive (IDE), whereas SCSI controllers performed this function in a more efficient manner; many SCSI interfaces provided support for drive hot swapping, reducing downtime in the event of a drive failure; and the SCSI interface allowed for faster data transfer rates than what could be obtained via PATA, giving better performance, especially in RAID configurations.

However, over the last year, Serial Attached SCSI (SAS) drives have all but superseded SCSI in the server space in much the same way that SATA drives have replaced their PATA brethren. The biggest problem with the parallel interface was synchronising clock rates across the many parallel connections; serial connections don't require this synchronisation, allowing clock rates to be ramped up and increasing bandwidth on the interface.

SAS drives are still the same as SCSI drives in many ways: the SAS controller is still responsible for issuing commands to the drive (there is no IDE), SAS drives are hot swappable, and data transfer over the interface is faster compared to SATA. SAS drives come in both 2.5 and 3.5 inch form factors, with the 2.5 inch size proving popular in servers as they can be installed vertically in a 2RU enclosure.

In addition, SAS controllers can support 128 directly attached devices on a single controller, or in excess of 16,384 devices when the maximum of 128 port expanders are in use (however, the maximum amount of bandwidth that all devices connected to a port expander can use equals the amount of bandwidth between the controller and the port expander). In order to support this many devices, SAS also uses higher signal voltages in comparison to SATA, which allows the use of 8m cables between controller and device. Without using higher signal voltages, I’d like to see anyone install 16,384 devices to a disk controller with a maximum cable length of 1 meter (the current SATA limitation).

Another major advantage to using SAS over SATA in servers is arriving in the next few months: SAS supports multipath I/O. Suitable dual-port SAS drives can connect to multiple controllers within a server, which provides additional redundancy in the event of a controller failure.
GPUs and Video
One of the areas where enterprise servers are inferior to regular PCs is in the area of graphics acceleration. Personally, I’m yet to see a server that has been installed within a data centre that contains a PCI Express graphics adapter but that’s not to say that it’s not possible to install one in an enterprise server. In general though, most administrators find the on board adapters more than adequate for server operations.
Networking
Modern day desktops and laptops feature Gigabit Ethernet adapters, and the base adapters seen on servers are generally no different. However, like most other components in servers, there are a few subtle differences that improve performance in certain scenarios.

In order to provide network fault tolerance, two or more network adapters are integrated on most server boards. In most cases, these adapters can be teamed. Like RAID fault tolerance schemes, there are numerous types of network fault tolerance options available, including:
• Network Fault Tolerance (NFT): In this configuration, only one network interface is active at any given time, while the rest remain in a slave mode. If the link to the active interface is severed, a slave interface will be promoted to be the active one. Provides fault tolerance, but does not aggregate bandwidth.
• Transmit Load Balancing (TLB): Similar to NFT, but slave interfaces are capable of transmitting data provided that all interfaces are in the same broadcast domain. This provides aggregation of transmit bandwidth, but not receive, and also provides fault tolerance.
• Switch-assisted Load Balancing (SLB) and 802.3ad Dynamic: Provide aggregation of both transmit and receive bandwidth across all interfaces within the team, provided that all interfaces are connected to the same switch. Provides fault tolerance on the server side (however, if the switch that is connected to the server fails, you have an outage). 802.3ad Dynamic requires a switch that supports the 802.3ad Link Aggregation Control Protocol (LACP) in order to dynamically create teams, whereas SLB must be manually configured on both the server and the switch.
• 802.3ad Dynamic Dual Channel: Provides aggregation of both transmit and receive bandwidth across all interfaces within the team and can span multiple switches, provided that they are all in the same broadcast domain and that all switches support LACP.

Just about all server network interface cards (NICs) support Virtual Local Area Network (VLAN) trunking. Imagine that you have two separate networks: an internal one that connects to all devices on your LAN, and an external one that connects to the Internet, with a router in between. In conventional networks, the router needs to have at least two network interfaces, one dedicated to each physical network.

Provided that your network equipment and router supports VLAN trunking, your two networks could be set up as separate VLANs. In general, your switch would keep track of which port is connected to which VLAN (this is known as a port based VLAN), and your router is trunked across both VLANs utilising a single NIC (physically, it becomes a router on a stick). Frames sent between the switch and router are tagged so that each device knows which network the frame came from or is destined to go to.
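That tagging works by inserting a 4-byte 802.1Q header after the source MAC address of each frame: a fixed TPID of 0x8100, then a tag containing a 3-bit priority and a 12-bit VLAN ID. A minimal Python sketch of pulling the tag out of a raw Ethernet frame:

```python
import struct

def parse_vlan_tag(frame: bytes):
    """Extract the 802.1Q tag from an Ethernet frame, if one is present.

    Returns (priority, vlan_id, ethertype), or None for untagged frames.
    """
    # Bytes 0-11 are the destination and source MACs; the tag, if any, follows.
    tpid = struct.unpack_from("!H", frame, 12)[0]
    if tpid != 0x8100:              # 0x8100 is the 802.1Q TPID
        return None
    tci, ethertype = struct.unpack_from("!HH", frame, 14)
    priority = tci >> 13            # 3-bit priority code point
    vlan_id = tci & 0x0FFF          # 12-bit VLAN identifier
    return priority, vlan_id, ethertype

# A frame tagged with VLAN 20 at priority 5, carrying IPv4 (0x0800).
frame = bytes(12) + struct.pack("!HHH", 0x8100, (5 << 13) | 20, 0x0800)
assert parse_vlan_tag(frame) == (5, 20, 0x0800)
```

The 12-bit VLAN ID field is why a single trunk can carry up to 4094 usable VLANs; the switch and the router-on-a-stick both read this field to decide which logical network a frame belongs to.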

VLANs operate in the same manner as physical LANs, but network reconfigurations can be made in software as opposed to forcing a network administrator to physically move equipment.

Because of the sheer amount of data that is received on Gigabit and Ten Gigabit interfaces, it can become expensive to send Ethernet frames to the CPU in order for it to process TCP headers. Roughly 1GHz of processor power is required to transmit TCP data at Gigabit Ethernet speeds.

As a result, TCP Offload Engines are often incorporated into server network adapters. These integrated circuits process TCP headers on the interface itself instead of pushing each frame off to the CPU for processing. This has a pronounced effect on overall server performance in two ways: not only does the CPU benefit from not having to process this TCP data, but less data is transmitted across PCI Express lanes toward the Northbridge of the server. Essentially, TCP Offload Engines free up resources in the server so that they can be assigned to other data transfer and processing needs.

The final difference that you see between server NICs and consumer ones is that the buffers on enterprise-grade cards are usually larger. Part of the reason for this is the additional features mentioned above, but there is also a small performance benefit to be gained in some scenarios (particularly inter-VLAN routing).

Power Supplies
One of the great features of ATX power supplies is the standards that must be adhered to. ATX power supplies are always the same form factor and feature the same types of connectors (even if the number of those connectors can vary). But while having eight 12 volt Molex connectors is great in a desktop system, this number of connectors is generally not required in a server, and the cable clutter could cause cooling problems.

Power distribution within a server is well thought out by server manufacturers. Drives are typically powered via a backplane instead of individual Molex connectors and fans often drop directly into plugs on the mainboard. Everything else that requires power draws it from other plugs on the mainboard. Even the power supplies themselves have PCB based connectors on them. All of this is designed to help with the hot swapping of components in order to minimise downtime.

Most servers are capable of handling redundant power supplies. The first advantage here is if one power supply fails, the redundant supply can still supply enough juice to keep the server running. Once aware of the failure, you can then generally replace the failed supply while the server is still running.

The second advantage requires facility support. Many data centres will supply customer racks with power feeds on two separate circuits (which are usually connected to isolated power sources). Having redundant power supplies allows you to connect each supply up to a different power source. If power is cut to one circuit, your server remains online because it can still be powered by the redundant circuit.

Server Management
Most servers support Intelligent Platform Management Interfaces (IPMIs), which allow administrators to manage aspects of the server and to monitor server health, even when the server is powered off.

For example, say that you have a remote Linux server that encountered a kernel panic: you could access the IPMI on the server and initiate a reboot, instead of having to venture down to the data centre, gain access and press the power button yourself. Alternatively, say that your server is regularly switching itself on and off every couple of minutes, too short a time for you to log in and perform any kind of troubleshooting. By accessing the IPMI, you could quickly determine that a fan tray has failed and the server is automatically shutting down once temperature thresholds are exceeded. These are two of the most memorable scenarios where having access to IPMIs has saved my skin.
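As a sketch of what such a remote power cycle looks like in practice, here is a small Python helper that builds a command line for the open-source ipmitool client (the host and user names are hypothetical, and the password is left to ipmitool's interactive prompt rather than passed on the command line):

```python
def ipmitool_cmd(host: str, user: str, action: str) -> list:
    """Build an ipmitool command line for an out-of-band chassis power action.

    Valid chassis power actions include 'status', 'on', 'off', 'cycle'
    and 'reset'. The command is returned, not executed, so it can be
    reviewed or logged before being run.
    """
    if action not in {"status", "on", "off", "cycle", "reset"}:
        raise ValueError(f"unsupported power action: {action}")
    return [
        "ipmitool",
        "-I", "lanplus",   # talk to the BMC over the network (IPMI v2.0)
        "-H", host,        # address of the management interface, not the OS
        "-U", user,
        "-a",              # prompt for the password instead of exposing it in argv
        "chassis", "power", action,
    ]

# Rebooting a hung server from your desk instead of a trip to the data centre:
cmd = ipmitool_cmd("bmc.example.com", "admin", "cycle")
assert cmd[-1] == "cycle" and "-H" in cmd
```

The key point is that the command targets the baseboard management controller, which has its own network address and stays reachable even while the host operating system is panicked or powered off.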

Many servers also incorporate Watchdog timers. These devices perform regular checks on whether the Operating System on the server is responding and will reboot the server if the response time is greater than a defined threshold (usually 10 minutes). These devices can often minimise downtime in the event of a kernel panic or blue screen.
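The pet-or-reboot contract of a watchdog timer can be sketched in Python. This is a software illustration of the mechanism, not how the hardware device is actually programmed: the OS must refresh the timer regularly, and silence past the threshold triggers a reset.

```python
import time

class Watchdog:
    """Software sketch of a hardware watchdog timer.

    A healthy OS calls pet() periodically; if the timeout elapses
    without a pet, expired() returns True and the hardware would
    force a server reboot.
    """

    def __init__(self, timeout_seconds: float):
        self.timeout = timeout_seconds
        self.last_pet = time.monotonic()

    def pet(self):
        """Called periodically by a responsive operating system."""
        self.last_pet = time.monotonic()

    def expired(self) -> bool:
        """Checked by the watchdog hardware; True means 'reboot the server'."""
        return time.monotonic() - self.last_pet > self.timeout

wd = Watchdog(timeout_seconds=0.05)
wd.pet()
assert not wd.expired()        # OS responded recently: no action taken
time.sleep(0.1)                # simulate a hung OS that stops petting
assert wd.expired()            # threshold passed: the reboot would fire
```

Real watchdogs run the countdown in hardware precisely so that a wedged kernel, which by definition cannot run any recovery code, still gets rebooted.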

Finally, most server vendors will also supply additional Simple Network Management Protocol (SNMP) agents and software that allow administrators to monitor and manage their servers more closely. The agents that are supplied provide just about every detail about the installed hardware that you could ever want to know: how long a given hard disk drive has been operating in the server, the temperature within a power supply, or how many read errors have occurred in a particular stick of RAM. All of this data can be polled and retrieved with an SNMP management application (even if your server vendor doesn't supply one, there are dozens of GPL packages available that utilise the Net-SNMP project).

The future...
All of the points detailed in this article and within the corresponding article on the APC website highlight the differences that are seen between today’s high end consumer gear (which is typically used to make the DIY server) and enterprise level kit. However, emerging technologies will continue to have an impact on both the enterprise and consumer markets.

As the technology becomes more refined, solid state drives (SSDs) will start to emerge as a serious alternative to SAS hard disk drives for some server applications. Initially, they’ll most likely be deployed where lower disk capacity and lower disk access times are required (such as database servers). When the capacity of these drives increases, they’ll start to become more prominent but will probably never replace the hard disk drive for storing large amounts of data.

The other big advantage to using SSDs is that the RAID 5 issue mentioned earlier becomes less of a problem. SSDs shouldn't exhibit UREs: once data is written to the drive, it's stored physically, not magnetically. A good SSD will also verify that the contents of a block can be read back before the write operation is deemed to have succeeded. Thus, if the drive can't write to a specific block, it should be marked as bad and a reallocation block brought online to take its place. Your SNMP agents can then inform you when the drive starts using up its reallocation blocks, indicating that a drive failure will soon happen. In other words, you'll be able to predict when an SSD will fail with more certainty, which could give RAID 5 a new lease of life.

Moving further forward, the other major break from convention in server hardware will most likely be a move toward application-specific processing units instead of the general-purpose CPU as we know it today. There’s already some movement in this area: Intel’s Larrabee is an upcoming example of a CPU/GPU hybrid, and the Cell Broadband Engine Architecture (otherwise known as the Cell architecture) that is used in Sony’s PlayStation 3 also powers the IBM Roadrunner supercomputer (the first to sustain performance over the 1 petaFLOPS mark).

Sunday, March 14, 2010

Guide to Buy Desktop PC

Once you've determined the type of desktop system you want, be it a compact PC, a budget system, a mainstream all-purpose model, or a performance crackerjack, you need to know what components to look for. The processor and graphics chip you choose will determine many of your machine's capabilities, as will the system's memory and hard drive. Understanding those components will help you get the performance you need without paying for things you don't.


You'll also want to consider details like the layout of the case, which can also make the difference between a pleasant workstation and a nightmare PC.

Processor
The CPU is one of your PC's most important components. The processor you choose is likely to determine your PC's shape and size, and will definitely determine its price. Generally, the higher the CPU clock speed, the faster the performance you'll see, and the higher the price. A 3.46GHz Core i5-670 PC will trounce a 2.93GHz Core i3-530 system, but you'll pay nearly twice as much for the faster CPU. Another spec to watch is cache size, where more is better: Core i3 and Core i5 parts have 4MB caches, while performance-geared Core i7 chips have 6MB or 8MB caches.

Compact PCs and some all-in-ones use relatively puny netbook or notebook processors. Though these CPUs deliver weaker performance than desktop processors, they're also smaller and generate less heat, which makes them ideal for small machines. A PC packing an Atom processor should be fine for basic word processing, Web surfing, and limited media playback, but little more.

Intel's new Clarkdale line of Core i3 and Core i5 desktop processors tends to appear in systems in the budget and mainstream desktop PC categories. Most users will find something they like in the Core i3 and Core i5 lines, as these CPUs offer dual-core performance at palatable price points. Core i3 chips are the cheaper, lower-powered models, so you'll generally find them in cheaper machines.

The quad-core Core i7 targets users who need a real workhorse processor. If you play high-end games or edit hours of audio or video, you'll benefit from the Core i7. Conversely, even the lowliest Core i3 CPU can easily handle basic computing tasks, so stay within a reasonable price range when possible. At the lowest end are dual-core Pentium and Celeron processors. These chips appear in budget PCs, where price tags starting at $400 compensate for weaker performance.

Desktop PCs use either Intel or AMD processors. Intel currently holds the performance crown, but AMD has priced its dual- and quad-core chips aggressively. If you're looking for quad-core performance on a budget, AMD-based offerings are certainly worth a look.

Graphics Cards
The GPU (graphics processing unit) is responsible for everything you see on your display, whether you play games, watch videos, or just stare at the Aero desktop baked into Windows 7.

If you aren't interested in gaming on your PC, integrated graphics, built onto the motherboard or (with Intel's new Core i3 and Core i5 Clarkdale chips) into the CPU itself, are the way to go. Integrated graphics help keep a system's cost low, and they deliver enough power to run simple games or high-definition Flash video. Intel's integrated graphics chips are widely used, but some PCs include an nVidia Ion graphics chip, which offers superior integrated video performance.

If you plan to render your own high-definition content or play BioShock 2, you'll need a discrete graphics card. Such cards install in a PCIe x16 slot on your motherboard and deliver significantly more power than integrated graphics do. Both ATI and nVidia offer plenty of options to choose from. The naming conventions can be a bit overwhelming, but the rule of thumb is that higher numbers indicate better performance and higher prices. Variables such as power consumption, size, and the brand of your motherboard (which may limit which cards you can use) help determine which GPU is right for you.

Gamers with deep pockets can opt for a multiple-graphics-card setup using either nVidia's SLI or ATI's CrossFire technology, either of which sets multiple cards to work in tandem for vastly improved performance. That performance will cost you, however: prices for higher-end graphics cards generally range between $200 and $400 apiece.

Memory
If you use your computer for little more than light Web browsing and e-mail, 2GB of RAM will be enough, whether the system runs Windows XP or Windows 7. More RAM will allow you to run more programs simultaneously, and it will generally improve the speed and performance of your machine. Systems today typically come with at least 4GB of RAM, though some small PCs and budget systems may be limited to 2GB or 3GB.

If you tend to multitask or play games, you'll want at least 4GB of RAM. If you play graphics-intensive games or do serious video or image editing, you might want to spring for even more; some performance systems include 8GB or even 16GB of memory.

When you shop for RAM, you'll notice two types: DDR2 and DDR3. Of the two, DDR3 is faster and thus more expensive. You'll also notice a clock speed, presented in MHz much like processor speeds. Again, higher numbers are better. That said, quantifying the differences isn't easy, and if you're on a budget, 4GB of DDR2 RAM won't leave you at much of a disadvantage versus DDR3. In general, buy as much memory as you can afford. If you have to choose between more RAM at a slower clock speed and less RAM at a faster clock speed, you'll see more tangible results with the greater amount of RAM.
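To put the clock-speed numbers in perspective, peak bandwidth for a single 64-bit memory channel is simply the effective transfer rate times 8 bytes per transfer, a back-of-envelope calculation:

```python
# Back-of-envelope peak bandwidth for a single 64-bit memory channel:
# effective transfer rate (in millions of transfers/s) x 8 bytes per transfer.
def peak_bandwidth_mb_s(transfers_per_s_millions: int) -> int:
    return transfers_per_s_millions * 8  # MB/s

ddr2_800 = peak_bandwidth_mb_s(800)    # 6400 MB/s  (sold as PC2-6400)
ddr3_1333 = peak_bandwidth_mb_s(1333)  # 10664 MB/s (sold as PC3-10600, rounded)
print(ddr2_800, ddr3_1333)
```

Real-world gains are smaller than these peak figures suggest, which is why capacity usually trumps clock speed on a budget.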

If you want to buy more than 4GB of RAM, make sure that the system ships with Microsoft's 64-bit Windows 7 operating system; a 32-bit OS will recognize only a little more than 3GB of whatever RAM your system has. If you purchase a new machine, it will probably come bundled with a 64-bit OS, as more retailers move toward including 4GB of RAM. Budget systems are the most likely to lean toward a 32-bit OS, but even there we've seen a shift to 64-bit, so if you decide to upgrade your system memory later, the operating system will be able to handle it.
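The "little more than 3GB" figure follows from simple arithmetic: a 32-bit OS can address 2^32 bytes (4GiB) in total, and part of that space is reserved for memory-mapped devices rather than RAM. The reservation size used below is illustrative only; the exact amount varies with the hardware.

```python
# Why a 32-bit OS tops out at "a little more than 3GB": it can address
# 2**32 bytes = 4GiB total, and a chunk of that space is reserved for
# memory-mapped devices (graphics apertures, PCI, firmware).
# The 0.75GiB reservation is illustrative; the real amount varies.
GIB = 2 ** 30
address_space = 2 ** 32            # total addresses on a 32-bit OS
mmio_reserved = int(0.75 * GIB)    # illustrative device reservation
usable = (address_space - mmio_reserved) / GIB
print(f"usable RAM: {usable:.2f} GiB")  # -> usable RAM: 3.25 GiB
```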

If you intend to upgrade your PC yourself, make sure that your system's motherboard can support additional RAM modules. Check your computer's specs to see how many user-accessible DIMM connectors are available; this information can be found on a system's technical specifications page. Our how-to guide on installing more memory will help you along the way.

Desktop Case
A good case can make your everyday work easier and can simplify such tasks as upgrading and servicing components in the workplace. A well-designed case provides tool-less access to the interior, hard drives mounted on easy-to-remove trays, readily accessible USB ports and memory card slots, and color-coded cables for internal and external parts.

The most common cases are minitower and tower designs that use ATX. The ATX specification dictates where the connectors on the back of the motherboard should be (to line up with the holes in the case), and encompasses details such as the power supply connector.

Slimline systems and other smaller PCs may use Micro ATX, which follows the basic ATX specification but includes fewer expansion slots. Mini ITX is smaller still; Mini ITX motherboards often appear in small PCs, where they offer quiet, low-power performance (making these systems a great choice for home theater PCs).

If you're purchasing a minitower or tower system, you may have more flexibility in configuring it, whether you want to specify optional components to fill the slots or leave room for future expansion. You should reserve at least a couple of open hard drive bays and a free PCI slot, too. And since motherboards come in different shapes and sizes, so do case designs.

If you're buying an all in one or small PC or ordering a traditional tower from a major vendor such as HP or Dell, you rarely have much of a say in your machine's chassis. If the case's size and weight are important to you, try to inspect the machine in a store, or take note of its dimensions when shopping online.

Operating System
It may be a decade old, but Windows XP remains a stalwart even on some new systems. Nevertheless, most systems on the market today run Windows 7. Microsoft's latest operating system has received generally positive reviews, improving on many of Windows Vista's foibles.

Microsoft sells six different versions of Windows 7, but only three, Windows 7 Home Premium, Windows 7 Professional, and Windows 7 Ultimate, are available to most desktop buyers. Windows 7 Home Premium, the standard offering, includes the visually appealing Aero Glass UI, plus enhancements to Windows Media Center. Advanced users should consider Windows 7 Professional, typically a $75 to $100 step up; it offers location-aware printing and improved security features that many business users like. Windows 7 Ultimate, which costs about $150 more, is a good choice for power users and business users, thanks to its wealth of networking and encryption tools. Consult a full list of OS features before settling on a particular version.

Once again, if you're running a 32-bit operating system, your computer can use only slightly more than 3GB of RAM, regardless of how much your system carries. So be sure to pick a 64-bit OS; you'll be glad you did when you're ready to upgrade.

Hard Drive
Even a basic full-size PC should offer at least 320GB of hard drive space. Small PCs, however, tend to start around 160GB. At the upper end of the performance spectrum, power PCs may offer 2TB of storage or more, along with RAID options for data redundancy (RAID 1) or speed optimization (RAID 0), or an option to combine a solid state drive with a hard drive.

When shopping for a PC, check the specifications to see how many internal 2.5-inch hard drive bays are available. Many all-in-one and small PCs limit you to just one. But with additional internal hard drives, you can store more data and create RAID arrays to safeguard your data from hardware failure, deliver faster performance, or do both.
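The trade-off between the two RAID levels mentioned here can be summed up in a few lines. This sketch assumes identical drives and covers only RAID 0 and RAID 1:

```python
# Usable capacity for the two RAID levels discussed: RAID 0 stripes for
# speed (no redundancy), RAID 1 mirrors for safety (capacity of one drive).
def raid_capacity(level: int, drive_gb: int, drives: int) -> int:
    if level == 0:
        return drive_gb * drives  # full capacity; zero drive failures survivable
    if level == 1:
        return drive_gb           # one drive's worth; survives drives - 1 failures
    raise ValueError("only RAID 0 and RAID 1 sketched here")

print(raid_capacity(0, 500, 2))  # two 500GB drives striped  -> 1000 GB, no redundancy
print(raid_capacity(1, 500, 2))  # two 500GB drives mirrored -> 500 GB, redundant
```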

Most drives today are Serial ATA 300 models that spin at 7200 rpm. When shopping, pay close attention to the speed of the PC's hard drive: small PCs may use 2.5-inch hard drives that spin at 5400 rpm, and the potential money savings may not justify the performance hit if you plan to do a lot of disk-intensive tasks. For people who care more about speed than about capacity, Western Digital's VelociRaptor line offers 10,000 rpm drives, though these max out at 300GB.

Another option for speed-conscious buyers is a solid state drive. The cost per gigabyte is still far greater for SSDs than for traditional hard drives, but prices have come down, and performance has improved. Some PC makers offer an SSD in tandem with a hard disk drive: a low-capacity SSD stores applications and the OS, while a high-capacity HDD handles data storage duties.

Networking
The days of dial-up are done. Broadband speed and performance vary by service provider and location, but you can maximize your PC's connectivity by choosing the right networking options. Fortunately, the options are clear-cut: wired or wireless.

Every system comes with a wired ethernet connection, at least 10/100 ethernet and more often gigabit ethernet. Wireless connectivity is an attractive option for small PCs and all-in-ones, as well as for some tower and minitower systems (though you'll need it only if your system will be nowhere near your router). If you'd rather not tie down your otherwise svelte machine with an ethernet cable, go wireless and opt for 802.11n; this wireless standard offers better performance than the older 802.11b/g standards.
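A quick bit of arithmetic shows why the wired recommendation matters for big transfers. The link rates below are nominal; real-world throughput, especially over 802.11n, is usually well below the rated speed.

```python
# Rough transfer-time comparison for a 4GB file over the link speeds
# discussed here. Rates are nominal link speeds, not real-world throughput.
def transfer_seconds(size_gb: float, link_mbps: float) -> float:
    return size_gb * 8 * 1000 / link_mbps  # GB -> megabits, then divide by rate

print(transfer_seconds(4, 100))   # 10/100 ethernet  -> 320.0 seconds
print(transfer_seconds(4, 1000))  # gigabit ethernet -> 32.0 seconds
```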

Wireless performance still has limitations. If you plan to use your PC to stream high-definition Internet content from sites like Hulu and Netflix, consider using a wired connection for best performance. You'll get measurably superior performance, and you'll future-proof your machine in case you later upgrade to network-attached machines with faster transfer speeds.

Keyboard and Mouse
Your keyboard and mouse are crucial devices, so get a set that works for you. But if you're buying a PC online, don't pay the upgrade price that the vendor offers: you can usually get a better deal by shopping around. If you aren't sure what your keyboard or mouse options are, visit your local PC or electronics retailer and try out a few of the display models.

Physical preferences for keyboards and mice vary from user to user, so keep in mind where and how you'll be using your machine. Every system comes with at least a basic mouse and keyboard. At build-to-order PC sites, though, you usually have relatively few options.

If you plan to use a small PC to stream media, consider a small, lightweight wireless keyboard-and-mouse combo, or a wireless keyboard with a built-in pointing device, so you can operate it from the comfort of a couch. Wireless keyboards and mice use either radio frequency (RF) or Bluetooth technology, and require you to plug a receiver into a USB port on your machine.

When shopping for a keyboard, watch for handy media keys. These put media playback buttons and volume controls on your keyboard, heightening the couch-based computing experience.

If you plan to buy a tower PC, you'll likely have space on your desk for a full-size keyboard with a number pad. If comfort is an issue or you struggle with wrist pain, look for ergonomic keyboards and mice that conform to the shape of your hands and your workspace. If you're an avid gamer, consider keyboards and mice from brands like Razer and Logitech that offer backlit keys, programmable macro buttons, and other features that may give you a competitive edge.

Removable Storage
Your operating system and system restore disks (if any) will still ship on DVD; consequently, all but a handful of small PCs ship with a dual-layer multiformat DVD burner. If you're a fan of high-definition media, consider adding a Blu-ray reader/DVD burner combo drive (about $100 extra) to store data on your own CDs and DVDs and to watch media stored on Blu-ray discs.

To take advantage of the massive storage opportunities offered by Blu-ray discs, you'll need a Blu-ray writer, a $200 add-on that lets you read and write every disc-based media format.

HP and other companies market portable media drives ranging from less than $100 to as much as $250. These hard drive models work with a USB cable but are designed to slide into a media drive bay included on select desktop models. Portable hard drives are crucial for anyone who wants to protect data from hard drive failure or to transport lots of content.

Sound
The integrated sound provided on a typical PC's motherboard today supports 5.1 channel audio. This should suffice for users who don't want to spend a lot of money on their PC's audio system. But a dedicated sound card will improve the dynamic range of compressed audio, add rich environmental effects to games, and improve system performance when you record or mix audio.

On most PCs above the budget level, motherboards come with 7.1 channel audio. If you're shopping for a PC with integrated graphics, look for models sporting the nVidia Ion graphics processor, which also offers 7.1 channel HD audio.

A sound card can increase your PC's initial cost by $40 to $80, depending on the technology that the card uses. Higher-end cards can cost more than $200, but these generally target creative professionals or gamers who require 3D environmental audio effects for competitive play.

If you do opt for a sound card, make sure that your motherboard has a spare PCI or PCI Express slot, depending on the requirements of the card that you've chosen. The manufacturer's specifications for the machine or motherboard you're buying will list the slots it has available.

As with all upgrade options, comparison shop before you settle on a particular sound card or set of speakers. You may find a better deal elsewhere. On the other hand, if you buy the card yourself, you'll have to crack open your system to install the card.

Speaker preferences are personal, and the physical dimensions of the room your computer is in may limit your options. PCs of all shapes and sizes have analog audio outputs, and some models include a digital optical connection, which reduces the number of cables you need.

Many all-in-one PCs include a speaker bar attached to the screen. Audio from these sound bars varies from model to model, but in general the quality will be akin to that of laptop speakers, with deeper, richer sound from more expensive models. If sound quality isn't a high priority, the included speaker bar will perform adequately, just as built-in speakers usually suffice for an HDTV. But if you plan to use your all-in-one as a primary media machine, we recommend choosing dedicated speakers with a subwoofer.