Computer Application, Maintenance and Supplies

Monday, April 19, 2010

Making a Server

Servers are the cornerstone of corporate infrastructure, relied upon to provide the services that employees and customers require to perform day-to-day operations in a timely and efficient manner. The single most important attribute of most enterprise-grade servers is reliability, and a good level of fault tolerance is factored into the design of most servers in order to increase uptime. Many readers run servers in their own home: the headless Linux box in the corner of the study that provides email, web, DNS, routing and file sharing services for the household.


While these machines still constitute servers in a raw sense, it would take a brave technology officer to trust these white boxes to fulfil the IT requirements of their company. This guide demonstrates what differentiates business-class servers from the typical white box server that you can build from off-the-shelf components, and highlights some of the many factors of a server's design that need to be carefully considered in order to provide reliable services for business.

Form Factor
Servers come in all shapes and sizes. The tower server is designed for organisations or branch offices whose entire infrastructure consists of a server or two. From the outside, they wouldn’t look out of place on or under someone’s desk but the components that make up the server’s guts are often of a higher build quality than workstation components. Tower cases are generally designed to minimise cost whilst providing smaller businesses some sense of familiarity with the design of the enclosure.

For larger server infrastructures, the rack mount case is used to hold a server's components. As the name suggests, rack mount servers are almost always installed within racks and located in dedicated data rooms, where power supply, physical access, temperature and humidity (among other things) can be closely monitored. Rack mount servers come in standard sizes: they are 19 inches wide, with heights in multiples of 1.75 inches, where each multiple is 1 Rack Unit (RU). They are often designed with flexibility and manageability in mind.

Lastly, the blade server is designed for dense server deployment scenarios. A blade chassis provides the base power, management, networking and cooling infrastructure for numerous, space efficient servers. Most of the top 500 supercomputers these days are made up of clusters of blade servers in large data centre environments.

Processors
With the proliferation of quad core processors in the mainstream performance sector of today's computing landscape, the main difference between servers and workstations comes down to support for multiple sockets. Consumer-class Core 2 and Phenom based systems are built around a single-socket design that features multiple cores per socket and cannot be used in multi-socket configurations. Xeon and Opteron processors, on the other hand, provide interconnects that allow processes to be scheduled across multiple separate processors, each featuring multiple cores, all contributing towards the total processing power of a server. It's not uncommon to see quad-socket, four-core processors in some high end servers providing a total of 16 processing cores at upwards of 3.0GHz per core. The scary thing is that six-core and eight-core processors are just around the corner...

The other main difference that you see between consumer and enterprise processors is the amount of cache that is provided. Xeon and Opteron processors often have significantly larger Level 2 and Level 3 caches in order to reduce the amount of data that has to be shifted to memory, generally resulting in slightly faster computation times depending on the application. A server’s form factor will also have an impact on the type of processor that can be used. For instance, blade servers often require more power efficient, cooler processors due to their increased deployment density. Similarly, a 4RU server may be able to run faster and hotter processors than a 1RU server from the same vendor.

Memory
While the physical RAM modules that you see in today's servers don't differ dramatically from consumer parts, there are numerous subtle differences to the memory subsystems that provide additional fault tolerance features. Most memory controllers feature Error Checking and Correction (ECC) capabilities, and the RAM modules installed in such servers need to support this feature. Essentially, ECC-capable memory performs a quick parity check before and after each read or write operation to verify that the contents of memory have been read or written properly. This feature minimises the likelihood of memory corruption due to a faulty read or write operation.
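
To give a feel for the idea, here is a toy Python sketch of parity-based error detection. It is my own illustration and deliberately simplified: real ECC DIMMs use SECDED Hamming-style codes that can also correct single-bit errors, not just detect them.

```python
# Toy illustration of parity checking, NOT the actual SECDED code used by ECC DIMMs.

def parity_bit(word: int) -> int:
    """Return the even-parity bit for an integer word."""
    return bin(word).count("1") % 2

def store(word):
    return word, parity_bit(word)          # write the data plus its parity

def load(word, stored_parity):
    if parity_bit(word) != stored_parity:  # re-check parity on every read
        raise IOError("single-bit memory error detected")
    return word

data, p = store(0b10110010)
corrupted = data ^ 0b00000100              # flip one bit in "memory"
try:
    load(corrupted, p)
except IOError as err:
    print(err)                             # the faulty read is caught
```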

The other main difference in memory controller design is how much RAM is supported. Intel based servers are about to start utilising a memory controller that is built onto the processor die, as has been the case with AMD based systems for years. Even the newest mainstream memory controllers support a maximum of 16GB of RAM. HP has recently announced a "virtualisation ready" Nehalem based server design that will support 128GB of RAM, which will be available by year's end. Many modern servers provide mirrored memory features. A memory mirror essentially provides RAID 1 functionality for RAM: the contents of your system memory are written to two separate banks of identical RAM modules. If one bank develops a fault, it is taken offline and the second bank is used exclusively. The memory controller of the server can usually handle this failover without the operating system even being aware of the change, preventing unscheduled downtime of the server.

Hot spare memory can also be installed in a bank of some servers. The idea here is that if the memory in one bank is determined to be faulty, the hot spare bank can be brought online and used in place of the faulty bank. In this scenario, some memory corruption can occur depending on the operating system and memory controller combination in use. The worst case scenario here usually involves a crash of the server, followed by an automated reboot by server recovery mechanisms (detailed later in this article). Upon reboot, the memory controller brings the hot spare RAM online, limiting downtime. Hot swappable memory is often used in conjunction with both of these features, giving you the ability to swap out faulty RAM modules without having to shut down the entire server.

Storage Controllers
Drive controllers are dramatically different in servers. Forget on board firmware based SATA RAID controllers that provide RAID 0, 1 and 1+0 and consume CPU cycles every time data is read or written to the array. Server class controllers have dedicated application specific integrated circuits (ASICs) and a bucket full of cache (sometimes as much as 512MB) in order to boost the performance of the storage subsystem. These controllers also frequently support advanced RAID levels including RAID 5 & 6.

The controller cache can be one of the most critical components of a server, depending on the application. At my place of employment, we have a large number of servers that capture video in HD quality in real time. A separate "ingest" server often pulls this data from the encode server immediately after it has been captured for further processing and transcoding. Having 512MB of cache installed on the drive controller allows data to be pushed out via the network interface before it has been physically written to disk, significantly boosting performance. Testing has revealed that if we reduced the cache size to 64MB, data has to be physically written to disk and then physically read when the ingest process takes place, placing significant additional load on the server. Finally, consider that most mainstream controllers have no cache whatsoever; the impact on performance in this scenario would probably prevent us from working with HD quality content altogether.

But what happens if there is a power outage and the data that is in the controller cache has not yet been written to the disk? In order to prevent data loss, some controllers feature battery backup units (BBUs) that are capable of keeping the contents of the disk cache intact for in excess of 48 hours, or until power is restored to the server. Once the server is switched on again, the controller commits the data from the cache to the disk array before flushing the cache and continuing with the boot process. No data is lost. BBUs are another feature missing from mainstream controllers.

The problem with RAID 5
Traditionally, RAID 5 has been the holy grail of disk arrays, providing the best compromise between performance and fault tolerance. However with the continual increase in storage density, RAID 5 is starting to exhibit a significant design flaw when the array has to be rebuilt after a disk failure.

RAID 5 arrays can tolerate the failure of a single drive in the array. If during the time that it takes to replace the faulty drive and rebuild the array, a second drive fails or an unrecoverable read error (URE) occurs on one of the surviving drives in the array, the rebuild will fail and all data on the array will be lost.

Most manufacturers will quote the probability of encountering a URE in the detailed specification sheet for each drive. Most consumer grade products have a quoted URE rate of ~1 in 10^14 bits, which translates to an average of 1 URE encountered for every 12TB of data read. Now, imagine that you have a RAID 5 array containing four 1.5TB drives (which are now readily available) and one disk goes pear shaped. You replace the faulty drive, the rebuild process begins and 1.5TB of data is read from each remaining drive in order to rebuild the data on the new disk. Assuming that you have "average" drives, there's around a 33% chance of encountering a URE while rebuilding the array, which would result in the loss of up to 4.5TB of data.

Back in the days when we were dealing with arrays containing five 32GB disks, the probability of a URE occurring during an array rebuild was minuscule. But nowadays, it's not uncommon to see array configurations exceeding 2TB in size, containing eight or more large capacity drives. As a result of the increased number of drives and the increasing capacity of those drives, the probability of encountering a URE during the rebuild process is approaching the stage where RAID 5 arrays are unlikely to be successfully rebuilt in the event of a drive failure. And the more, and larger, drives that you use in an array, the more likely a URE is to occur during the rebuild.

RAID 6 is the solution that is commonly used to overcome the limitations of RAID 5. RAID 6 utilises two different parity schemes and distributes these parity blocks across drives in much the same manner as RAID 5 does. The use of two separate parity schemes essentially allows two drives in an array to fail while maintaining data integrity. While RAID 5 requires n+1 drives in the array, RAID 6 requires n+2, so you'll be assigning the capacity of two whole drives to parity instead of one.
If the server that you’re building does not require a large amount of disk space, RAID 5 may be perfectly acceptable. However, if you’re deploying a large number of drives or large capacity drives in your server, you’ll want to ensure that you have a drive controller that supports RAID 6.

It should also be noted that while RAID 6 overcomes the issues that are starting to become prominent with RAID 5, a few years from now RAID 6 will exhibit the same problem if used with larger arrays and drives of larger capacities than we have today. But until that day comes, RAID 6 remains a more reliable fault tolerance scheme than RAID 5.

Maths
Regardless of the scenario, we assume that all 1.5TB needs to be read from every drive in the array in order to perform a successful rebuild. This gives us a 12.5% probability of encountering a URE on a single drive (1.5 / 12 = 0.125), and an 87.5% probability of not encountering a URE (1 - 0.125 = 0.875).
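
To make that arithmetic concrete, here is a minimal Python sketch of the same simple model. It is my own illustration: the per-drive URE probability follows the "1 URE per 12TB read" figure above, and the RAID 6 treatment (allowing one URE to be absorbed during the rebuild) is a rough approximation, not a figure from the original article.

```python
# Simple rebuild-success model based on the "1 URE per ~12TB read" assumption.

def p_ure_per_drive(capacity_tb, tb_per_ure=12.0):
    """Probability that reading one whole drive hits at least one URE."""
    return min(capacity_tb / tb_per_ure, 1.0)

def raid5_rebuild_success(drives, capacity_tb):
    # RAID 5: every surviving drive (drives - 1) must be read URE-free.
    p = p_ure_per_drive(capacity_tb)
    return (1 - p) ** (drives - 1)

def raid6_rebuild_success(drives, capacity_tb):
    # Rough RAID 6 proxy: the second parity scheme can absorb one URE
    # during the rebuild, so allow at most one affected surviving drive.
    p = p_ure_per_drive(capacity_tb)
    n = drives - 1
    return (1 - p) ** n + n * p * (1 - p) ** (n - 1)

# The four-drive, 1.5TB example from the text:
print(round(raid5_rebuild_success(4, 1.5), 3))  # ~0.67, i.e. ~33% chance of failure
print(round(raid6_rebuild_success(4, 1.5), 3))  # noticeably higher
```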

As you can see from the maths above, you're much more likely to achieve a successful rebuild with a RAID 6 array; however, even this probability of success is lower than what some would desire. This only reinforces the fact that RAID 6 is significantly better than RAID 5, but it will also experience the same issues, assuming that URE rates don't improve as disk capacities grow.

And on a side note, I was the unfortunate victim of a rebuild failure due to UREs about a year ago: I accidentally knocked a power cord out of a RAID 5 NAS enclosure containing seven 250GB drives (the enclosure was four years old and did not support RAID 6, but we did have it configured with one hot spare drive). Knocking the cable out abruptly killed one of the redundant power supplies, which took one of the drives with it. The hot spare drive was immediately activated and the array began to rebuild. About 5 hours into the rebuild, a URE occurred and the rebuild failed.

It's just as well we had that 1.5TB worth of data backed up onto a second array as well as LTO tape; this just goes to show that RAID arrays are not the be all and end all of fault tolerance.

External Storage
Any computer chassis has a physical limit to the number of drives that you can install. This limitation is overcome in enterprise servers by connections to Storage Area Networks (SANs), typically accomplished in one of two ways: via Fibre Channel or iSCSI interfaces.

iSCSI is generally the cheaper option of the two because data transferred between the SAN and server is encapsulated in frames sent over ubiquitous Ethernet networks, meaning that existing Ethernet interfaces, cabling and switches can be used (aside from the cost of the SAN enclosure itself, the only additional costs are generally an Ethernet interface module for the SAN and software licenses).

On the other hand, fibre channel requires its own fibre optic interfaces, cabling and switches, which significantly drives up cost. However, having a dedicated fibre network means that bandwidth isn’t shared with other Ethernet applications. Fibre channel presently offers interface speeds of 4Gb/s compared to the 1Gb/s often seen in most enterprise networks. Fibre channel also has less overhead than Ethernet, which provides an additional boost to comparative performance.

Disk Drives
For years, enterprise servers have utilised SCSI hard disk drives instead of ATA variants. SCSI allowed for up to 15 drives on a single parallel channel versus the 2 on a PATA interface; PATA drives ship with the drive electronics (the circuitry that physically controls the drive) integrated on the drive (IDE), whereas SCSI controllers performed this function in a more efficient manner; many SCSI interfaces provided support for drive hot swapping, reducing downtime in the event of a drive failure; and the SCSI interface allowed for faster data transfer rates than what could be obtained via PATA, giving better performance, especially in RAID configurations.

However, over the last year, Serial Attached SCSI (SAS) drives have all but superseded SCSI in the server space, in much the same way that SATA drives have replaced their PATA brethren. The biggest problem with the parallel interface was synchronising clock rates across the many parallel connections; serial connections don't require this synchronisation, allowing clock rates to be ramped up and increasing bandwidth on the interface.

SAS drives are still much the same as SCSI drives in many ways: the SAS controller is still responsible for issuing commands to the drive (there is no IDE), SAS drives are hot swappable and data transfer over the interface is faster compared to SATA. SAS drives come in both 2.5 and 3.5 inch form factors, with the 2.5 inch size proving popular in servers as the drives can be installed vertically in a 2RU enclosure.

In addition, SAS controllers can support 128 directly attached devices on a single controller, or in excess of 16,384 devices when the maximum of 128 port expanders are in use (however, the maximum amount of bandwidth that all devices connected to a port expander can use equals the amount of bandwidth between the controller and the port expander). In order to support this many devices, SAS also uses higher signal voltages in comparison to SATA, which allows the use of 8m cables between controller and device. Without using higher signal voltages, I’d like to see anyone install 16,384 devices to a disk controller with a maximum cable length of 1 meter (the current SATA limitation).

In the next few months, there will be another major advantage to using SAS over SATA in servers: SAS supports multipath I/O. Suitable dual-port SAS drives can connect to multiple controllers within a server, which provides additional redundancy in the event of a controller failure.
GPUs and Video
One of the areas where enterprise servers are inferior to regular PCs is in the area of graphics acceleration. Personally, I’m yet to see a server that has been installed within a data centre that contains a PCI Express graphics adapter but that’s not to say that it’s not possible to install one in an enterprise server. In general though, most administrators find the on board adapters more than adequate for server operations.
Networking
Modern day desktops and laptops feature Gigabit Ethernet adapters, and the base adapters seen on servers are generally no different. However, like most other components in servers, there are a few subtle differences that improve performance in certain scenarios.

In order to provide network fault tolerance, two or more network adapters are integrated on most server boards. In most cases, these adapters can be teamed. As with RAID fault tolerance schemes, there are numerous types of network fault tolerance options available, including the following (a short Linux bonding sketch follows this list):
• Network Fault Tolerance (NFT): In this configuration, only one network interface is active at any given time, while the rest remain in a slave mode. If the link to the active interface is severed, a slave interface is promoted to become the active one. Provides fault tolerance, but does not aggregate bandwidth.
• Transmit Load Balancing (TLB): Similar to NFT, but slave interfaces are capable of transmitting data provided that all interfaces are in the same broadcast domain. This aggregates transmit bandwidth, but not receive, and also provides fault tolerance.
• Switch-assisted Load Balancing (SLB) and 802.3ad Dynamic: Provides aggregation of both transmit and receive bandwidth across all interfaces within the team, provided that all interfaces are connected to the same switch. Provides fault tolerance on the server side (however, if the switch that is connected to the server fails, you have an outage). 802.3ad Dynamic requires a switch that supports the 802.3ad Link Aggregation Control Protocol (LACP) in order to dynamically create teams, whereas SLB must be manually configured on both the server and the switch.
• 802.3ad Dynamic Dual Channel: Provides aggregation of both transmit and receive bandwidth across all interfaces within the team and can span multiple switches, provided that they are all in the same broadcast domain and that all switches support LACP.
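
As a rough illustration of how a team surfaces on a Linux server, the sketch below reads the status file exposed by the Linux bonding driver. This is my own example and assumes the bonding module is in use with an interface named bond0; Windows and vendor teaming drivers report the equivalent information through their own tools.

```python
# Minimal sketch: inspecting a Linux NIC team via /proc/net/bonding/<iface>.

def bond_status(interface="bond0"):
    with open(f"/proc/net/bonding/{interface}") as f:
        lines = f.read().splitlines()
    mode = [l for l in lines if l.startswith("Bonding Mode")]
    links = [l for l in lines if "MII Status" in l]
    return mode, links

if __name__ == "__main__":
    mode, links = bond_status()
    print(mode)    # e.g. active-backup (NFT-style) or IEEE 802.3ad
    print(links)   # one "MII Status: up/down" line per slave interface
```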

Just about all server network interface cards (NICs) support Virtual Local Area Network (VLAN) trunking. Imagine that you have two separate networks: an internal one that connects to all devices on your LAN, and an external one that connects to the Internet, with a router in between. In conventional networks, the router needs to have at least two network interfaces, one dedicated to each physical network.

Provided that your network equipment and router supports VLAN trunking, your two networks could be set up as separate VLANs. In general, your switch would keep track of which port is connected to which VLAN (this is known as a port based VLAN), and your router is trunked across both VLANs utilising a single NIC (physically, it becomes a router on a stick). Frames sent between the switch and router are tagged so that each device knows which network the frame came from or is destined to go to.
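
For a concrete (and hypothetical) example of the "router on a stick" idea, the sketch below creates two 802.1Q subinterfaces on a single NIC of a Linux router using iproute2. The interface names, VLAN IDs and addresses are made-up examples, not anything from the article.

```python
# Hypothetical router-on-a-stick setup on Linux with 802.1Q subinterfaces.
import subprocess

def run(cmd):
    subprocess.run(cmd.split(), check=True)   # requires root and the 8021q module

# One physical NIC (eth0) trunked across two VLANs:
run("ip link add link eth0 name eth0.10 type vlan id 10")   # internal VLAN
run("ip link add link eth0 name eth0.20 type vlan id 20")   # external VLAN
run("ip addr add 192.168.10.1/24 dev eth0.10")
run("ip addr add 203.0.113.1/24 dev eth0.20")
run("ip link set eth0.10 up")
run("ip link set eth0.20 up")
# With IP forwarding enabled, tagged frames from either VLAN are routed
# between the two networks over the single physical interface.
```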

VLANs behave in the same manner as physical LANs, but network reconfigurations can be made in software rather than forcing a network administrator to physically move equipment.

Because of the sheer amount of data that is received on Gigabit and Ten Gigabit interfaces, it can become taxing to send every Ethernet frame to the CPU in order for it to process the TCP headers. As a rule of thumb, it takes around 1GHz of processing power to transmit TCP data at Gigabit Ethernet speeds.

As a result, TCP Offload Engines are often incorporated into server network adapters. These integrated circuits process TCP headers on the interface itself instead of pushing each frame off to the CPU for processing. This has a pronounced effect on overall server performance in two ways: not only does the CPU benefit from not having to process this TCP data, but less data is transmitted across PCI Express lanes toward the Northbridge of the server. Essentially, TCP Offload Engines free up resources in the server so that they can be assigned to other data transfer and processing needs.

The final difference that you see between server NICs and consumer ones is that the buffers on enterprise grade cards are usually larger. Part of the reason for this is the additional features mentioned above, but there is also a small performance benefit to be gained in some scenarios (particularly inter-VLAN routing).

Power Supplies
One of the great things about ATX power supplies is the standards that they must adhere to. ATX power supplies are always the same form factor and feature the same types of connectors (even if the number of those connectors can vary). But while having eight 12 volt Molex connectors is great in a desktop system, that many connectors is generally not required in a server, and the cable clutter could cause cooling problems.

Power distribution within a server is well thought out by server manufacturers. Drives are typically powered via a backplane instead of individual Molex connectors and fans often drop directly into plugs on the mainboard. Everything else that requires power draws it from other plugs on the mainboard. Even the power supplies themselves have PCB based connectors on them. All of this is designed to help with the hot swapping of components in order to minimise downtime.

Most servers are capable of handling redundant power supplies. The first advantage here is that if one power supply fails, the redundant supply can still provide enough juice to keep the server running. Once aware of the failure, you can then generally replace the failed supply while the server is still running.

The second advantage requires facility support. Many data centres will supply customer racks with power feeds on two separate circuits (which are usually connected to isolated power sources). Having redundant power supplies allows you to connect each supply up to a different power source. If power is cut to one circuit, your server remains online because it can still be powered by the redundant circuit.

Server Management
Most servers support Intelligent Platform Management Interfaces (IPMIs), which allow administrators to manage aspects of the server and monitor server health, even when the server is powered off.

For example, say that you have a remote Linux server that has encountered a kernel panic: you could access the IPMI on the server and initiate a reboot, instead of having to venture down to the data centre, gain access and press the power button yourself. Alternatively, say that your server is switching itself on and off every couple of minutes, too short a time for you to log in and perform any kind of troubleshooting. By accessing the IPMI, you could quickly determine that a fan tray has failed and the server is automatically shutting down once temperature thresholds are exceeded. These are two of the most memorable scenarios where having access to IPMIs has saved my skin.
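
As a rough sketch of what that looks like in practice, the example below drives the open-source ipmitool utility from Python to query a baseboard management controller over the network. The hostname and credentials are placeholders, and this is simply one common way of talking to an IPMI, not a vendor-specific procedure from the article.

```python
# Hedged sketch: out-of-band queries to a server's IPMI controller via ipmitool.
import subprocess

IPMI = ["ipmitool", "-I", "lanplus", "-H", "bmc.example.com",
        "-U", "admin", "-P", "secret"]

def ipmi(*args):
    return subprocess.run(IPMI + list(args), capture_output=True,
                          text=True, check=True).stdout

print(ipmi("chassis", "power", "status"))  # works even if the OS is down
print(ipmi("sdr", "type", "Fan"))          # fan sensor readings
print(ipmi("sel", "list"))                 # system event log (why it shut down)
# ipmi("chassis", "power", "cycle")        # remote reboot after a kernel panic
```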

Many servers also incorporate watchdog timers. These devices perform regular checks on whether the operating system on the server is responding, and will reboot the server if the response time is greater than a defined threshold (usually 10 minutes). These devices can often minimise downtime in the event of a kernel panic or blue screen.
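
On Linux, the same idea is exposed as a device that must be "fed" periodically by a userspace process. The sketch below is a conceptual illustration only, assuming a watchdog driver (hardware or the softdog module) is loaded at /dev/watchdog; production systems normally let a daemon such as watchdogd do this.

```python
# Conceptual sketch of feeding a Linux watchdog device.  If the OS hangs and
# the writes stop before the configured timeout, the hardware reboots the box.
import time

def feed_watchdog(interval=30):
    with open("/dev/watchdog", "w", buffering=1) as wd:
        while True:
            wd.write("\n")        # any write resets the hardware timer
            time.sleep(interval)  # must be shorter than the watchdog timeout
```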

Finally, most server vendors will also supply additional Simple Network Management Protocol (SNMP) agents and software that allow administrators to monitor and manage their servers more closely. The agents that are supplied provide just about every detail about the installed hardware that you could ever want to know: how long a given hard disk drive has been operating in the server, the temperature within a power supply, or how many read errors have occurred in a particular stick of RAM. All of this data can be polled and retrieved with an SNMP management application (and even if your server vendor doesn't supply you with one of these, there are dozens of GPL packages available that utilise the Net-SNMP project).
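
A minimal sketch of that kind of polling, using the Net-SNMP command-line tools mentioned above, is shown below. The host, community string and example OID are placeholders; real hardware-health OIDs come from the vendor's own MIBs.

```python
# Rough sketch of polling an SNMP agent with Net-SNMP's snmpget.
import subprocess

def snmp_get(host, oid, community="public"):
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", community, host, oid],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()

# Standard MIB-II uptime as a simple smoke test:
print(snmp_get("server01.example.com", "SNMPv2-MIB::sysUpTime.0"))
```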

The future...
All of the points detailed in this article and within the corresponding article on the APC website highlight the differences that are seen between today’s high end consumer gear (which is typically used to make the DIY server) and enterprise level kit. However, emerging technologies will continue to have an impact on both the enterprise and consumer markets.

As the technology becomes more refined, solid state drives (SSDs) will start to emerge as a serious alternative to SAS hard disk drives for some server applications. Initially, they'll most likely be deployed where lower disk capacities are acceptable and low access times are required (such as database servers). As the capacity of these drives increases, they'll become more prominent, but they will probably never replace the hard disk drive for storing large amounts of data.

The other big advantage of using SSDs is that the RAID 5 issue mentioned earlier becomes less of a problem. SSDs shouldn't exhibit UREs: once data is written to the drive, it's stored physically, not magnetically. A good SSD will also verify the contents of a block, including whether it can be read, before the write operation is deemed to have succeeded. Thus, if the drive can't write to a specific block, the block should be marked as bad and a reallocation block brought online to take its place. Your SNMP agents can then inform you when the drive starts using up its reallocation blocks, indicating that a drive failure will soon occur. In other words, you'll be able to predict when an SSD will fail with more certainty, which could give RAID 5 a new lease of life.
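
One common way to keep an eye on reallocated blocks, regardless of SNMP, is via the drive's SMART attributes. The sketch below is my own illustration using smartmontools; the device path is a placeholder and the attribute name follows smartctl's usual "-A" output, which can differ between drive vendors.

```python
# Illustrative sketch: watching a drive's reallocated-sector count with smartctl.
import subprocess

def reallocated_sectors(device="/dev/sda"):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "Reallocated_Sector_Ct" in line:
            return int(line.split()[-1])   # raw value: blocks remapped so far
    return None

count = reallocated_sectors()
if count:
    print(f"{count} blocks reallocated -- plan to replace the drive soon")
```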

Moving further forward, the other major break from convention in server hardware will most likely be a move toward the use of more application specific processor units instead of the CPU as we know it today. There's already some movement in this area: Intel's Larrabee is an upcoming example of a CPU/GPU hybrid, and the Cell Broadband Engine Architecture (otherwise known as the Cell architecture) that is used in Sony's PlayStation 3 is also used in the IBM RoadRunner supercomputer (the first to sustain performance over the 1 petaFLOPS mark).

Saturday, March 13, 2010

Choosing the Right Desktop PC

Today's modern desktop PCs offer a wealth of options: You can go for a PC with a fixed retail configuration, or you can customize your system by stepping through a sometimes dizzying array of choices from a configure to order vendor. The resulting array of components is no longer wrapped up in a beige box, but in a colorful shell of highly variable shape and size, differentiated by indecipherable naming conventions.


Presented with so many possibilities, you need to narrow the field by considering what you want to use your new desktop for. Are you an avid photographer looking for a speedy but cost effective platform for editing high resolution photos? If so, you'll benefit from buying a machine with extra RAM and a discrete graphics card. If you've acquired an extensive media collection, and want an inexpensive and compact way to pipe it to your HDTV, a compact PC tailored toward media sharing and playback may be your best bet.

Whatever your needs, you can find a desktop configuration to fit the bill.

Desktops fall into three major categories, each with its own range of price and performance: compact PCs, all in one PCs, and classic tower PCs (which we subdivide into budget, mainstream, and performance categories). Each style of machine has different strengths and weaknesses, and choosing the one that's best for you depends largely on how you plan to use it.

Once you've picked the appropriate desktop category, our guide to PC specifications will help you pick a machine that delivers the performance you need, while staying within your budget. And when you're ready to buy, check our shopping tips for advice on how to get the most from your investment.

Compact PCs
As the smallest members of the desktop computer family, compact PCs often omit features to deliver computing power in a space saving package. The combination of energy efficient components, quiet operation, and small size makes compact PCs ideal for people who want a nonintrusive machine. A typical compact PC costs between $300 and $600, though the price goes up as you add upgrade options.

Compact PCs tend to be equipped with notebook or netbook components, such as Intel Atom processors. This limits their usefulness in tasks that demand lots of processing power, but it makes for quiet, energy efficient operation. Not all compact PCs are created equal, however, so pay attention to specifications when shopping. Some compact PCs are configured for as low a bottom line price as possible; others are packed to the gills to deliver optimal performance in a compact system.

Most compact PCs rely on integrated graphics. In some cases (depending on the CPU and the integrated graphics chipset), anything more complicated than a Flash based browser game will be unplayable, but you will be able to eke out competent media streaming with Intel integrated graphics. A machine toting nVidia's Ion platform, like the Acer Aspire Revo R3610, usually fares much better. Gaming still isn't an option, but 1080p video is, whether you stream from a larger PC or over the Web.

When assessing smaller PCs, keep an eye on the ports. The smaller the footprint, the fewer features you can reasonably expect, and that includes fewer connectivity options. Though you'll get a VGA port and (on average) six USB 2.0 ports, many compact PCs also offer HDMI, an asset for home theater setups. The typical hard drive size is 320GB, though 250GB is also common, and we've seen compact systems carrying up to 1TB (for a $100 upgrade premium). For a chart of recent high-ranking PCs in this category, see our rankings.

All in One Desktops
All in One PCs are self contained: components are mounted behind a display, with screen sizes ranging between 18 and 27 inches. With no cords to manage or peripherals to juggle, setting up your new all in one PC can be as simple as pulling the machine out of the box and plugging it in.

With their compact size and integrated displays, all in one PCs can generally be placed wherever you've got a spare power outlet. Some all in ones also offer a rather distinct perk: Touchscreens. With support for multitouch gestures worked into Microsoft's Windows 7, all in ones offer a clever way for users to interact with their media, while still getting a full fledged PC.

All in one components vary from brand to brand, but you can expect to pay more for an all in one than for a similarly equipped desktop; again, some models target buyers on a tight budget, while others load up on performance oriented system components (at a higher price, of course). For example, low priced machines like the MSI Wind Top AE2010 use notebook or netbook processors and integrated graphics; you'll get reduced performance to match the reduced price tag. If you have a larger budget, you can opt for a model like the Sony VAIO L117FX/B, which includes a quad core processor (most often seen on full size desktops) to deliver superior performance, and offers a large 24 inch screen. You'll be paying in the area of $2000 for those high end specs, however.

Many all in one PCs come with a wireless keyboard and mouse, Bluetooth support, and Wi-Fi connectivity. This reduces cord clutter to a minimum, an important consideration in spaces where an attractive décor or efficient use of space is at a premium. For ranked charts of all in one PCs that we have tested in recent months, see our reviews.

Budget PCs
A budget tower desktop carries standard desktop components, but can cost as little as $300 if you select older hardware or inexpensive, low end processors. Typically, these PCs are minitower systems, with fewer drive bays than a full tower has. The Acer Veriton X270, for example, offers an older Core 2 Duo processor but delivers relatively speedy performance for just $500. Beware models that come equipped with AMD Sempron or Intel Celeron processors, as those CPUs' performance drawbacks will cancel the advantage of their low cost.

Inexpensive tower desktops usually incorporate low powered, integrated graphics rather than discrete graphics cards. As a result, your entertainment options may be limited. High definition media playback suffers on models equipped with older Intel based integrated graphics; and if you're interested in gaming, you'll be hard pressed to tackle anything more demanding than Flash based offerings. Machines equipped with Intel's Core i3 processor build improved integrated graphics performance right onto the chip; though they still won't be adequate for video games, they will support satisfactory high def media playback.

Budget PCs generally offer at least 320GB of storage space and at least 2GB of RAM, but permit few upgrade options beyond adding RAM or a larger hard drive. They rarely leave much room for expandability inside their cases, either. Still, if you need a machine for nothing more than word processing, e-mail, and occasional DVDs or online videos, these machines should suit you just fine. For a ranked chart of systems in this category, see our reviews.

Mainstream PCs
Higher up in the desktop chain, you'll find machines aimed at mainstream users. These PCs start in the vicinity of $800, and carry at least 500GB hard drives and about 4GB of RAM. Powered by dual core and lower end quad core processors, they deliver better performance than budget desktops, without breaking the bank. Consider the Gateway FX6800 01e: For just over $1000, this machine features a quad core Core i7 920 processor, and an ATI Radeon HD 4850 graphics card.

Photo editing applications stand to benefit from working with multicore processors, and entertainment enthusiasts will appreciate the improved gaming performance and stutter free HD media playback that a discrete graphics card helps deliver. Many of the machines in this category include a Blu ray drive, either standard or as an optional extra. And if your video editing needs are modest, you probably can find a machine in the mainstream price bracket that has enough power to handle your creative projects.

Performance PCs
Occupying the high end of the spectrum are performance desktops. Such PCs generally start at a little over $1500, with some outliers like the Maingear Shift hovering in the range of $7000. Most performance PCs are full tower systems, equipped with a slew of drive bays and expansion slots. Designed to tackle challenging tasks, they come equipped with the latest and greatest Intel and AMD dual and quad core processors, 6GB or 8GB of RAM, and at least one discrete graphics card. Some performance desktops include multiple graphics cards to deliver improved graphics performance.

Performance desktops are suitable for users who need a lot of processing power to get their work done: professionals who do extensive high resolution photography or video editing, and gamers who are willing to pay for top of the line visual effects.

Traditional PC manufacturers like HP and Dell sell performance machines, but so do smaller boutique PC makers that specialize in highly configurable custom machines, tailored to your needs and budget. For a chart of recent high end models, see our reviews.

Thursday, March 11, 2010

How Not to Shut Down Your Laptop...

That is not, repeat, not the proper way to shut down a PC. The proper way is to click Start, Shut down. (I know, it's ridiculous that after all these years Microsoft still forces you to use the Start button to end your computing session.) Alternatively, you can press (and immediately release!) the power button, which will either shut down your PC or put it in sleep/hibernate mode, depending on how Windows is configured.


The only time you should press and hold the power button is if your computer is locked up and otherwise unresponsive. A five second press will usually force a "hard" power off, after which you should wait another five seconds before turning the machine back on. But if you do this all the time, Windows won't be able to perform its necessary shut down housekeeping stuff, and ultimately you'll muck up the OS.

Learn Your Laptop's Power Settings
My aunt recently told me about a problem with her new laptop: Whenever she'd step away from it for more than a few minutes, she'd close the lid. Upon returning, she'd open the lid, only to be faced with a blank screen and no response from the mouse or keyboard.

Want to know why? The default lid closing action for most laptops is to put the system in Sleep mode, and Windows is notoriously bad at waking up properly. That's why I advise most laptop users to use Hibernate mode instead, as it's much more reliable when it comes to waking up.

You see, Sleep (aka Standby) puts your system into a low power, off-like state, allowing you to pick up where you left off after just a few seconds, in theory anyway. A PC in Standby mode continues to consume battery power, so it's not uncommon to return to a "sleeping" PC to find that it's just plain dead.

Hibernate, however, saves your machine's current state to a temporary hard drive file, then shuts down completely. When you start it up again, it loads that file and returns you to where you left off, no booting required.

Both ends of the Hibernate process take a little longer than Sleep mode (usually 10-20 seconds, in my experience), but you avoid any of the issues that can arise when Windows suddenly loses power. And as noted, Sleep mode is notoriously flaky. If your system refuses to wake up properly, you'll end up losing whatever documents and/or Web pages you had open. Consequently, I recommend using Hibernate most of the time.

Dial2Do, Hands Free E Mail, Texting, and More
It's a proven fact: Texting while driving is insanely dangerous. Same goes for reading e mail, updating your Facebook or Twitter status, and so on. Do yourself and your fellow drivers a favor and keep both hands on the wheel and both eyes on the road.

Easier said than done, right? Actually, no: If you use Dial2Do, all the aforementioned activities are easily said and done. This amazing service lets you send text messages, listen to e mail, add appointments to your calendar, and plenty more, all using just your voice.

Start by signing up for a free trial account. Add the special Dial2Do number to your speed dial, then call it when you want to do something. If that something is, say, send a text message to Bill, wait for the prompt and say, "Send a text message to Bill." Wait for the next prompt, then say what you want to say. When you're done, Dial2Do will transcribe your words into text and send them on their SMS way.

You can do likewise with e mail, though in addition to composing messages, Dial2Do lets you listen to those you've received. It works with a variety of third party services: You can dictate Facebook/Twitter updates, add appointments to your Google Calendar, send a note to Evernote, listen to local weather, and on and on. All this happens entirely hands free. Besides safety, there's another perk: If your phone lacks a keyboard, you'll find that dictating text messages is a lot easier than pecking them out on a numeric keypad.

If you haven't tried Dial2Do, you're missing out. The aforementioned free account limits you to creating personal reminders (which are delivered to you via e mail), but it comes with a 30 day trial for a Pro account. That's what you'll need for all the really cool stuff. Dial2Do Pro costs $4 per month or $40 if you prepay for a year. I typically prefer free stuff, but this is one service worth paying for.

Wednesday, March 03, 2010

Saving Your Camera's Memory Card

The memory card is a familiar item to any photographer: it is essential as the storage medium for the photos a camera captures. Imagine a photographer who is all set to capture a subject, only to find that his digital camera has no memory card installed and no internal memory either, or that the memory card he prepared has suddenly failed. What should he do? Confusion is the first thing that comes to mind.


Ambae.exe takes this opportunity to share a few tips for keeping your memory card safe, with the main goal of minimising the problems you might face later.

Never delete your photos using the camera's Delete function


Admittedly, this tip runs counter to the advice of design and IT experts, and even to the guidance of today's camera vendors. Every current camera ships with a sophisticated menu system, including a DELETE option that is always on display and ready to wipe out the photographer's shots.

A camera's battery only lasts so long. Using the DELETE function consumes a considerable amount of power, and the same is true of reviewing photos on the camera's LCD. Both functions place a heavy drain on the battery.

There is also risk in the deletion itself. Accidentally erasing every photo on the memory card will leave a photographer thoroughly dizzy. Sometimes a deleted photo turns out to be better than the others and is wanted back, and recovering it requires a computer and assorted recovery software.

Instead, delete photos from a computer. Rate each photo before deleting it: photos look far better on a computer monitor than on the camera's LCD, and the monitor's superior colour accuracy makes them easier to judge. Then delete the ones that don't make the cut, for example shots that are blurred, crooked, too small in dimensions and so on.

Deleting on a computer is also relatively safer than deleting in the camera. A photo deleted by mistake can be recovered quickly, before you have to resort to recovery software as the weapon of last resort. Windows sets aside a holding area for deleted photos and other files, known as the Recycle Bin, whose capacity is configurable and defaults to 10% of each partition of the hard disk. If you want a photo back where it was, Windows lets you undo the deletion: select the file you want to recover and choose Restore, and the file reappears in its original location.

Format the memory card regularly


There are many ways to format a memory card. Formatting here essentially means erasing the photo files so that the card is completely empty again and ready to be filled with new files.

You can format by selecting all the photo files and deleting them, or by deleting them one by one, although that takes quite a while. Another way is to use Windows' partition formatting facility or other software. Finally, you can format the memory card in the digital camera itself, which provides a Format Memory Card option for exactly this purpose.

The last method is the one to rely on, as it avoids formatting the memory card incorrectly. Some types of memory card create particular folders after being formatted in the camera, whereas formatting in Windows creates no folders at all, which can leave the card unreadable in certain cameras.

Keep a spare/secondary memory card


This is especially important for avid shooters. A photographer may be all set to capture a particular subject when the memory card suddenly fails or fills up, so having a spare card on hand is a great help.

Know the age and service life of your memory card


Everything in this world has a limited service life; even human beings, living creatures that we are, have a finite lifespan and do not get to enjoy the fresh air forever. The same goes for a digital camera's memory card. To avoid being caught off guard by a sudden memory card failure, know its age and the limits of its service life.

Say, for example, a memory card is rated for one year of use. Mark the card with a felt-tip pen or other marker, writing on the body of the card the expiry date 12 months ahead. That way you will always be reminded of its usage limit, and you can prepare a new replacement card well before its service life ends.

Back up the photo files on the card regularly. When the old memory card errors out, you simply swap in the new one, and your photos remain safe.

Tuesday, February 16, 2010

Faster Wireless Web

Transfers of large amounts of data across the Internet to wireless devices suffer from a key problem: the Transmission Control Protocol (TCP) used to send and receive that data can be unnecessarily slow.


A company called Aspera has now announced an alternative protocol designed to accelerate wireless transfer speeds. Called fasp AIR, it includes new proprietary approaches to addressing problems of data transfer that are unique to wireless communications. The original fasp protocol is already used to boost regular Internet transfers. It was used, for instance, to speed up the transfer of files from New Zealand to the U.S. during production of the movie Avatar.

The main problem with the TCP protocol, which was designed before wireless connections to the Internet were commonplace, is that it doesn't know the difference between packets of data that are lost because of network congestion and those that are lost because of a weak wireless signal. TCP automatically throttles the speed of data transfer when it sees dropped packets, so that congestion doesn't overwhelm the network. That's fine when packets are lost because of congestion, but when the problem is a weak signal, it causes an unnecessary drop in transfer speeds that can bring downloads and uploads to a crawl.
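
To see roughly how much random wireless loss hurts, here is a back-of-the-envelope Python sketch using the well-known Mathis approximation for TCP throughput under loss. It is my own illustration, not Aspera's model; the segment size, round-trip time and loss rates are made-up examples.

```python
# Mathis approximation: throughput <= (MSS / RTT) * 1.22 / sqrt(loss_rate).
from math import sqrt

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    return (mss_bytes * 8 / rtt_s) * 1.22 / sqrt(loss_rate)

# 1460-byte segments over a 100 ms round trip:
for p in (0.0001, 0.001, 0.01):          # 0.01% vs 1% packet loss
    mbps = tcp_throughput_bps(1460, 0.1, p) / 1e6
    print(f"loss {p:.2%}: ~{mbps:.1f} Mbit/s")
# Throughput collapses as loss rises, even though the radio link may still
# have plenty of raw capacity -- TCP treats every lost packet as congestion.
```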

For some applications, like streaming video and Internet telephony, it's possible to use an alternative like the User Datagram Protocol (UDP), which doesn't bother to confirm that all data has arrived intact. The price of UDP's speed is dropped packets of data, a result familiar to anyone who has endured the degraded quality of a video stream or telephone conversation at the limits of a wireless network's range.

Fasp-AIR achieves faster speeds than TCP but doesn't result in any dropped packets, making it suitable for transferring data that must arrive complete and intact. "The drop-off in performance we see with fasp AIR is almost linear," says Aspera CEO Michelle Munson. "So a 10 percent loss in the available bandwidth means we're still getting transfer rates that are 90 percent of what's specified."

At first, fasp AIR will be available as an iPhone app that can be used to access enabled servers. Fasp-AIR requires that both the client and the server are running software developed by Aspera. In the future, Aspera hopes that developers will incorporate fasp-AIR into their applications directly. Aspera licensees currently include Amazon and several other large Internet companies.

Fasp AIR certainly isn't the only novel approach being used to speed up transfers of wireless data. Jon Crowcroft, Marconi Professor of Communications Systems at the University of Cambridge, says that some wireless carriers use a proxy server between the wireless and the wired networks to intelligently adapt to changing network conditions. This gets around the problem of whether or not a TCP alternative like FaspAIR is hogging bandwidth on a congested network.

Sunday, February 07, 2010

New Life for Magnetic Tape

Music lovers may have long forsaken them, but magnetic tapes still reign supreme when it comes to storing vast amounts of digital data. And new research from IBM and Fujifilm could ensure that tape remains the mass storage medium of choice for at least another decade.


At IBM's Zurich Research Laboratories in Switzerland, researchers have developed a new tape material and a novel tape-reading technology. In combination, they can store 29.5 billion bits per square inch, which translates to a cartridge capable of holding around 35 terabytes of data--more than 40 times the capacity of cartridges currently available, and several times more than a hard disk of comparable size.

The researchers used a relatively new magnetic medium, called barium ferrite. In cooperation with researchers from Fujifilm's labs in Japan, they orientated the barium ferrite magnetic particles so that their magnetic fields protrude perpendicularly from the tape, instead of lengthways. This means that more bits can be crammed into a given area, and the magnetic fields are stronger. Furthermore, these particles allow thinner tape to be used, meaning 12 percent more tape can be stored on a single spooled cartridge.

Increasing the density of data that can be stored on a tape makes it more difficult to reliably read information. This is already a problem because of electromagnetic interference and because the heads themselves will retain a certain amount of residual magnetism from readings. To overcome this, the IBM group developed new signal processing algorithms that simultaneously process data and predict the effect that electromagnetic noise will have on subsequent readings.

Hard disks can store more data on a given surface area than magnetic tape, and the data on a disk can be read faster. But because hundreds of meters of tape can be spooled on a single cartridge, the overall volumetric data density of tape is higher, says Evangelos Eleftheriou, head of the Storage Technologies group at IBM Zurich.

Crucially, tape storage is also much cheaper. "What's most important is the cost per gigabyte," says Eleftheriou. Solid state drives cost between $3 and $20 per gigabyte. In contrast, it costs less than a cent per gigabyte to store information on magnetic tape. In the third quarter of 2009, the global tape market was worth more than half a billion dollars.

Extending the life of magnetic tape technology could delay the arrival of new storage technologies, particularly holographic storage. Experimental holographic discs, which use patterns of light interference to hold multiple pieces of data at a single point, can already hold several hundred gigabytes of data. The technology is expected to eventually allow terabytes of data to be held on a disc.

"Tape still wins, but only at very high data volumes," says James Hamilton, a vice president and distinguished engineer on Amazon's Web services team, in Bellevue, WA. Tape is most suitable for "cold storage"--when data is not accessed frequently. But the volume of digital data that needs to be stored is increasing rapidly, so Hamilton says there's a real need to try to squeeze more out of tape.

It could take another five years before the new tape technology is ready for the market, Eleftheriou admits. "But we have shown that there is still at least another 10 years of life in it," he says.

Wednesday, February 03, 2010

10 Tips to Get Your A/V Set for the Big Game

Whether you’re rooting for the Colts or the Saints, you’ll want your A/V system to be in tip-top shape for the Super Bowl next weekend. Many of the tweaks you’ll be able to handle yourself; others might require the handiwork of a custom electronics (CE) professional.

“It might sound silly, but it’s a good idea to do a test run before Sunday to make sure everything is ready to go,” says Rob Roessler of Audio Video Concepts in Columbia, Ill. “There’s nothing worse than finding out the day of the Super Bowl that your cable box is locked up or the picture on your big-screen looks grainy.”


Oh, and be sure to have an extra set of batteries on hand for your remote, too, says Ryan Lipkovicius of Audio Impact in San Diego.

Even if everything seems a-ok, you still might want to bring in a pro to at least recalibrate your TV and sound system. It’ll cost between $100 and $300, but it’ll ensure that the picture and sound are perfect. For example, by adjusting the settings of your A/V receiver, your pro can make it feel as if you’re seated alongside the screaming fans at the stadium. He’ll also be able to check the signal strength of your satellite reception and if necessary realign the dish to prevent signal dropouts or picture pixilation during the game.

The TV itself will probably need a little fine tuning, as well. If you’ll be watching the game on a rear-projection TV or a video screen, make sure the display’s bulb will make it through the game. If you’re concerned about it burning out, have a pro install a new bulb ($300-$700).

Given that the Super Bowl is just a week away, there’s no guarantee a CE pro will be able to squeeze you into his schedule. Thankfully, there are several adjustments you can make on your own. “Some new audio receivers have a calibration microphone built in which makes it a breeze,” says Roessler. “You just go through the setup process and make sure all the levels are correct.”

Derek Cowburn of DistinctAV, McCordsville, Ind., offers another tip: “Test the surround-sound modes [while watching a sporting event] and jot down the modes you like best. Be sure to remember what buttons you pressed to get there.”

If you think your old receiver may not be up to snuff, for around $600 you can trade up to a more sophisticated model. “Basic receivers just produce sound,” explains Ryan Herd of One Sound Choice, Pompton Plains, N.J. “A better receiver will upconvert the audio and video for a much better result.”

“If nothing else, take your receiver off ‘sports’ or ‘stadium’ surround mode,” says Eric Thies of DSI Entertainment Systems, West Hollywood, Calif. “These settings are simply corny and defeat the good work of the sound engineer of the Super Bowl.” With the settings disabled, the game will be presented in Dolby Digital—good for listening to both the game and The Who during the halftime show.

Want more impact? “Adding a subwoofer to your current speaker arrangement can deliver a little more oomph on big hits and create a more uniform bass field for multiple seats within a media room,” adds P.J. Aucoin of Home Concepts, Calgary, Alberta.

The Super Bowl, as well as other big sporting events, will be presented in high-def, so in addition to fine-tuning your audio and video components, be sure to tweak the display. “If the white of the players’ jerseys is so bright it’s making the players’ numbers look fuzzy, your set is too bright. Dial it down,” says Thies. “Same with color: If the field is the color of glowing nuclear slime, ratchet the color back a bit to a more natural setting.”

Finally, if you don’t already own a universal remote, invest in one ($100-$500). It’ll simplify the setup and control of your system before, during and after the game. A CE pro can program special commands into the remote to really impress your friends. Cowburn recommends a “Commercial Mute” button that lowers the volume then restores it when pressed again (though maybe the ads will be funnier this year). A “Super Bowl” button, suggested by Aucoin, could tune every TV in the house to the Super Bowl—each at a specific volume level (loud in the family room but softer in the kitchen, for example).

Monday, February 01, 2010

Core i7 is the new super CPU

Intel's development model for processors is known as the “Tick Tock” cycle. In one year the focus is on miniaturizing the existing production technology for CPUs (a process shrink, the “Tick”), while in the following year a new architecture is introduced on that process (the “Tock”). The system has been working well for four years now. The Core i7 architecture, formerly known by its codename “Nehalem”, was introduced in November 2008, after the original Core architecture was shrunk to 45 nm around the end of 2007 (products codenamed “Penryn”). The new design brings a series of changes with it, all aimed at improving performance, power consumption and reliability.


New package

The last time Intel changed its processor package was in 2004, when it went from 478 contact pins to 775 pads. Since then, the package and matching socket have remained the same despite many CPU refreshes, but Nehalem now requires a radical change. The new CPU needs about 600 more pins for all its new functions, so Core i7 CPUs won’t fit into older motherboards: they now have 1,366 contact pads instead of 775. Even if they did fit physically, nothing would work, since many new elements on the CPU need to be connected to the motherboard and the rest of the computer’s components. The transition is understandable, since it has been a long time and there are genuine needs and advantages, but anyone who wants to use the new Intel technology must buy a new motherboard.

Goodbye FSB

The most significant innovation of the Nehalem architecture is the retirement of the Front Side Bus (FSB), which until now has handled all communication between the CPU and the chipset. Its successor is known as the QuickPath Interconnect (QPI). The FSB was replaced mainly because its bandwidth had become inadequate: QPI provides 20-bit-wide, bidirectional links resulting in a maximum data rate of 25.6 GB/s, roughly twice what an FSB at its highest rating of 1,600 MHz could offer. QPI is very similar to the HyperTransport technology used by AMD since 2001, which is now at version 3.1 and achieves similar transfer rates.
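As a rough back-of-the-envelope check (assuming the commonly quoted figures of 6.4 GT/s and 16 data bits per direction for the fastest QPI link, and an 8-byte-wide FSB at 1,600 MT/s), the arithmetic looks like this:

  # Peak bandwidth, back of the envelope. Figures assumed: 6.4 GT/s QPI link with
  # 16 data bits (2 bytes) per direction, versus an 8-byte-wide, 1,600 MT/s FSB.
  qpi_gbps = 6.4e9 * 2 * 2 / 1e9   # transfers/s x 2 bytes x 2 directions = 25.6 GB/s
  fsb_gbps = 1.6e9 * 8 / 1e9       # transfers/s x 8 bytes               = 12.8 GB/s
  print(qpi_gbps, fsb_gbps)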

Intel has also adopted another technique applied very successfully by AMD: a memory controller integrated into the processor package. Until now, Intel’s desktop architectures have placed the memory controller in the chipset. The specialty of the current high-end Core i7s is their triple-channel memory controller: three memory modules can be ganged together to achieve transfer rates fast enough to keep the CPU fed with fresh data, so that its potential is used optimally. The result is that PCs which make use of this will have 3, 6 or 12 GB of RAM, which is unconventional compared to the progression we’re used to. However, the lower-cost Nehalem CPUs yet to be launched will feature more traditional dual-channel memory controllers and a different, smaller socket with only 1,156 contact pads.
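A similar rough estimate shows why three channels matter. Assuming DDR3-1066 modules, which the launch Core i7 models officially support, peak theoretical memory bandwidth works out to roughly:

  # Peak theoretical memory bandwidth with three DDR3-1066 channels
  # (DDR3-1066 support is an assumption based on the launch Core i7 models;
  #  real-world throughput is considerably lower).
  channels, transfers_per_s, bytes_per_transfer = 3, 1066e6, 8
  print(channels * transfers_per_s * bytes_per_transfer / 1e9)   # ~25.6 GB/s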

HyperThreading makes a comeback

HyperThreading all but disappeared after the end of the Pentium 4 generation, but it is now making a comeback. Intel refers to a processor’s ability to process two program threads per core at the same time as Simultaneous Multi-Threading (SMT). The result is the impressive figure of eight CPU cores shown in the Windows Task Manager—four real and four virtual—and SMT allows the cores to be utilized more efficiently, with a promised increase in performance of up to 30 percent.
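If you want to see SMT for yourself, a quick Python sketch will report both counts (it assumes the third-party psutil package is installed, since the standard library only reports logical CPUs):

  # Count logical vs. physical CPUs; on a quad-core Core i7 with SMT enabled
  # this typically prints 8 and 4. Requires: pip install psutil
  import os
  import psutil

  print("logical CPUs:  ", os.cpu_count())
  print("physical cores:", psutil.cpu_count(logical=False))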

New clock speed tricks

Core i7 processors can run with each individual core at a different clock speed. Turbo mode is especially interesting, because it allows some cores to be overclocked when a non-multithreaded task taxes one or two cores while the others are left idle. Such a situation allows the application to run more efficiently and utilize resources more effectively—and can result in a performance increase of up to 10 percent. On the other hand, a new power saving mode switches idle cores to the C6 state (deep powerdown). In this state, the core is simply disconnected from the power supply. This is taken care of by microcontroller logic which monitors the temperature and power consumption of each core.

New design: Small L2 cache and large common L3 cache

One of the weak points of the cache design in Intel’s previous quad-core CPUs was that each pair of cores shared a 6 MB L2 cache exclusive to them. This was great for fast data exchange between those two cores, but bad for exchanges between all four, which had to travel over the much slower Front Side Bus. In Core i7 CPUs, each core now has its own L2 cache, considerably downsized to 256 KB but with its speed increased by 50 percent. As in AMD’s Phenom CPUs, a shared 8 MB L3 cache (on the current quad-core models) is added to enable data exchange between the cores. This cache holds a copy of all data in the cores’ L1 and L2 caches, which considerably accelerates data exchange and allows each core to be shut down without any risk of losing data that is in transit between caches.

A CPU design for all applications

The scalability of the Core i7 architecture is remarkable: Nehalem is suitable for desktops, servers and notebooks alike. Thanks to the new cache design and the introduction of QPI, two, four or eight cores can be integrated on a single processor die. Furthermore, the high speed of QPI enables quick communication between several CPUs on one motherboard for high-end and server configurations. When 8-core Nehalem chips are available, power users should be able to gang two of them up for a grand total of 16 cores and 32 virtual CPUs!

At present, three Core i7 models are available on the market, with more to come soon. By the end of 2009, lower-cost versions of Nehalem (codenamed Lynnfield and Havendale) will hit the market, with many more innovations and performance advantages in store for users.

Sunday, January 31, 2010

Do You Need High Speed HDMI... and When?

Only home theaters with Internet connections will require an HDMI cable with Ethernet. All other existing cables support the remaining features of HDMI 1.4.

“With HDMI 1.4, only the Ethernet Channel requires a new, upgraded cable,” reiterates Jeff Park, technology evangelist for HDMI Licensing LLC. “That is the only exception that requires a new cable.”


Looking across all the possible features of HDMI and the cables each one requires, when you’re watching TV (or a projector) in any format below 1080p there are only two instances in which you’ll need a High Speed cable: Deep Color and 120Hz from the source.

In both of those cases, a High Speed cable is necessary even if you’re only viewing 720p or 1080i content, because those features require almost double the bandwidth of a standard signal.

Finally, 120Hz from the source is very different from the 120Hz or 240Hz achieved through upscaling built into the TV. All TVs manufactured today upscale the signal inside the display. If the signal is being upscaled, having a High Speed Cable will not make a difference.

In an attempt to minimize confusion surrounding HDMI 1.4, HDMI Licensing LLC has created a four-category labeling system. There previously were only two types of HDMI cables:

Standard HDMI Cable
Supports up to 720p/1080i, with a bandwidth of up to 2.25Gbps.

High Speed HDMI Cable
Supports 1080p or higher, including 3D and 4k/2k, with a bandwidth of up to 10.2Gbps.

With the introduction of HDMI 1.4, there are two new cables:

Standard HDMI Cable with Ethernet
Supports up to 720p/1080i, with a total uncompressed bandwidth of up to 2.25Gbps. Adds support for the HDMI Ethernet Channel (up to 100Mbps).

High Speed HDMI Cable with Ethernet
Supports 1080p or higher, with an uncompressed bandwidth of up to 10.2Gbps. Adds support for the HDMI Ethernet Channel (up to 100Mbps).
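To make the cable choice concrete, here is a small, unofficial helper (a sketch, not an HDMI Licensing tool) that applies the rules above, picking a cable purely from the signal you intend to push through it:

  # Illustrative only: picks a cable category from the rules described above.
  def required_cable(resolution, deep_color=False, source_120hz=False,
                     three_d_or_4k=False, needs_ethernet=False):
      high_speed = (resolution == "1080p" or deep_color
                    or source_120hz or three_d_or_4k)
      cable = "High Speed HDMI Cable" if high_speed else "Standard HDMI Cable"
      return cable + (" with Ethernet" if needs_ethernet else "")

  print(required_cable("1080i"))                       # Standard HDMI Cable
  print(required_cable("720p", deep_color=True))       # High Speed HDMI Cable
  print(required_cable("1080p", needs_ethernet=True))  # High Speed HDMI Cable with Ethernet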

Thursday, January 28, 2010

Wi-Fi HeatMapper

Analyze Wi-Fi signal strengths and make necessary adjustments to enhance network coverage. This free tool will display the best location for your WLAN router.


Some of us who connect via a router often face problems such as slow data transfers and dropped connections. These problems usually point to a badly situated router. With the Ekahau HeatMapper you can find the best installation location to cover all rooms with a radio signal in the best possible manner. The software also shows the security settings of all access points within range, making it ideal for tracing unsecured or unknown networks.

A laptop is essential when using the tool, since you need to be mobile to measure signal strength in different parts of the building. It is also recommended to import a blueprint or layout of the building into the program to simplify the survey; if no floor plan is available, HeatMapper shows the measurement results and router location on a plain grid instead. CHIP tells you how to optimize your WLAN using this software. But before you start, download the free utility from www.ekahau.com; you will find it under the products section.

Step 1

Install the software on your laptop or netbook. Start the tool and select “I have a map image”. Then select the layout of your building and load it into HeatMapper.

If you don’t have an image or map of your apartment, simply select “I don’t have a map image”. The software will then show a grid layout with the centre point representing your router access point.

Step 2

Place your router, if possible, close to where you most often use your WLAN, such as the living room or bedroom. Connect the router to the power socket; an Internet connection is not needed at this point. It is best to start the survey with the laptop in one corner of the house or apartment. Click on the map at the spot where you are currently standing to start the measurement.

Now walk slowly to the next corner of the building, clicking on your current location regularly as you walk. Visit all the rooms so that you draw a continuous line on the layout. End the walk by right-clicking on the point where you are standing. You can undo the last step using the “Undo Survey” option in case you misclicked.

HeatMapper will now place the detected routers on the map with details such as name, SSID, channel number and encryption, and show where reception is strongest using color gradations from green to red. The access point currently providing the best connection to the laptop is marked green. Once you have finished mapping the signal strengths around your house, save the overview map by clicking “Take Screenshot”.

Step 3

You can now optimize the location of your router with the help of the reception map. The map helps you analyze the signal and set up additional access points wherever the signal is weak. The red areas on the map show where reception is poor. You can improve reception by moving the router around until the weak areas turn green.
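HeatMapper doesn’t publish the exact thresholds behind its green-to-red gradient, but the idea is simply a mapping from received signal strength to a color. A rough sketch, with threshold values assumed purely for illustration:

  # Thresholds are assumptions for illustration; HeatMapper's real scale is not published.
  def signal_color(rssi_dbm):
      if rssi_dbm >= -60:
          return "green"    # strong reception
      if rssi_dbm >= -75:
          return "yellow"   # usable, but consider moving the router
      return "red"          # weak; expect dropouts and slow transfers

  print(signal_color(-52), signal_color(-70), signal_color(-85))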

Wednesday, January 27, 2010

Migrating from Google Page Creator to Google Sites

While enjoying a warm cup of coffee into the early hours, I noticed a message displayed on the Blogger Dashboard. The alert read like this:


Update your template

Your template includes links to files hosted on Google Page Creator, a service that will soon be migrating to Google Sites. Would you like Blogger to update those links now? More info

Update and review | Dismiss


It turns out Google was warning that parts of the Ambae.exe blog template still relied on Google Page Creator. Google therefore provides a way to migrate anything that used Google Page Creator over to Google Sites. How is it done? Let’s walk through it step by step.

1. Log in to Blogspot by visiting www.blogger.com

2. After logging in, you will be taken to the Blogspot Dashboard page

3. Click the Update and review text link

4. Next, you will be asked to sign in as a Google user by entering your Google account e-mail and password (the assumption being that you use a Google Account to sign in to Blogspot)

5. Blogspot asks you to grant it access to your Google Account. Click the Grant Access button

6. Once Blogspot has access to the Google account, you are presented with the update option. Click Update references

7. Blogspot then asks to update the template used by the blog in question. Click Update template references

8. Now we reach the core of the update process, which consists of two steps. In the first step, click the Next button. It is a good idea, however, to back up your blog template first in case anything goes wrong: click Download your blog template

9. The second step of the update process is to click the Update button. Note the two columns displayed there: on the left are the old Google Page Creator URLs, and on the right are the new URLs produced by the Google Sites migration (a rough sketch of this kind of rewrite appears after these steps).

10. The template update takes a few seconds, depending on your network (and on the warm coffee that keeps you company)... PLEASE WAIT...!!!

11. If the update succeeds, a congratulatory message appears that reads:
Updated 1 reference on your blog. View blog to check that everything is ok

12. To see the result, click View blog

13. If it worked, the blog will look as cheerful as usual, and the blogger can stay just as cheerful over a warm cup of late-night coffee.

Happy experimenting, and good luck!
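Step 9 hints at what the automated update actually does: it swaps old Page Creator addresses in the template for their new Google Sites equivalents. If you ever need to patch a downloaded template backup by hand, the change amounts to a simple search-and-replace. A minimal sketch in Python, with placeholder URLs and file names (not the exact addresses Blogger generates):

  # Hypothetical illustration of the rewrite that "Update template references" performs.
  # The URLs and file names below are placeholders, not real addresses.
  old_url = "http://yourname.googlepages.com/widget.js"
  new_url = "http://sites.google.com/site/yourname/widget.js"

  with open("template-backup.xml", encoding="utf-8") as f:
      template = f.read()

  with open("template-updated.xml", "w", encoding="utf-8") as f:
      f.write(template.replace(old_url, new_url))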


Thursday, January 21, 2010

Cable Cutters: Cheap Alternatives to TV, DSL and Cell Service


I don't like service providers. Cable TV, landline phone and fax, mobile phone, ISP, and even satellite radio companies have so little real competition that they know they don't have to impress me much to win my business. Either I pay the full fees and stay connected to them by their cord (physical or wireless), or I don't get any service. Until recently.


Now those companies face new pressure from Web based technologies and services that can offer similar features for far less money, or even for free. Many of these services ride in on the cord owned by the big service provider, relegating said service provider to the job of operating "dumb pipes."

Which TV executive knew just a few years ago that paid and free online services could threaten cable companies? Ditto for the VoIP challenge to landline phones, online fax services replacing another cord, Internet radio being a better value than a satellite subscription, and more.

Here I'll explain the alternatives that can help you walk away from the biggest corded companies that we love to hate. You can pick and choose which are still worth keeping and which to toss. Are you being pressured into buying a service provider's "triple play"? Try three strikes and you're out.

Pay for TV and Movies Instead of Cable Service

The cable and satellite TV model is on life support. Who wants to pay $100 or more a month for an endless well of unwatched shows? Even if you have an appetite for premium channel shows, you can save money by buying some à la carte and watching others free online.

Paid, per show TV sources are all around you. Apple iTunes, Amazon VOD, Zune Marketplace, Blockbuster On Demand, and Jaman store thousands of shows and movies.

All offer various purchase and rental options, often $3 to $4 to rent a movie for a day. Expect to drop about $30 to $40 per season of scripted, premium channel TV (HBO, Showtime, and such), or, often, about $10 less for network shows.

That sounds pricy at first, and it often costs more than buying a physical disc. But total up four or five of those seasons and a handful of movies, and you could pay half as much as cable over the same time period.

If you want to watch video on a portable device, stick with iTunes for iPod and iPhone compatibility; or Zune Marketplace for Zune support. Unfortunately, the other stores don't offer portable media player support. If you have an AppleTV or an Xbox 360, you can at least watch your shows in the living room.

Netflix is a good base service for any cable TV free home. The cheapest subscription for the DVD by mail service is $8.99 each month, but much of the value comes in the thousands of shows and movies you can stream from Netflix to your PC. Plus, Netflix can stream to a TiVo, Xbox 360, PS3, dedicated Roku device, and other hardware, so you can watch in the living room without a media center PC.

Similar streaming services like Amazon, Blockbuster, Jaman, and others can play on much of the same hardware. Check your TV connected hardware against these services' support pages.

I've also got my eye on the upcoming Boxee Box and Sezmi service; both will offer hardware that plays Internet streamed video on a TV. Sezmi, which will be rolling out nationwide this year, even promises local shows and live sports, one of the biggest deficits in online libraries.

Get Free TV And Movies

Hulu is still my king of free TV sites, although it's uncertain whether it will eventually move to a paid model. And I've occasionally been frustrated when show episodes or seasons disappear just before I try to watch. But the majority of recent network shows are available. Plus, you'll find movie and TV favorites alongside B-level misses.

As I write this, you can watch "Spartacus" and "All the King's Men" alongside the Norm MacDonald vehicle, "Dirty Work."

Check Hulu first, but also scan other sites for free TV and movies. Crackle, Comcast Fancast, and even YouTube have movies and TV content. If you're looking for a specific show that you still can't find online, visit its Web site or its network's site directly.

Live sports can still be elusive. Check the network that's broadcasting the content for a stream; I watched a Monday Night Football game this way last fall. MLB.com hosts live baseball, but you'll have to pay for the service. Justin.tv could be your best ace for any sport: although the streams are unsanctioned, many users relay live feeds of their local stations; just click the sports button.

And remember the cheapest, highest quality TV source of all: an antenna. Over-the-air HD content looks great, often better than video compressed for a cable TV feed. You'll just need a TV with an HD tuner (typical of most sets built in the last several years) or a PC TV tuner.

Cut Landline Phone And Fax Service

If you have a reliable ISP, a voice over IP (VoIP) phone company can replace a traditional landline. You can place calls through a PC, but you'll have a better experience on a dedicated VoIP handset. The device connects to your network over Wi Fi or wired ethernet to route calls.

Skype deserves its VoIP ubiquity. You'll make free calls to other members or pay about 2 cents per minute to dial out to a real phone. Traditional phones can also call in to you. But several alternatives challenge the Skype giant.

I like the features and versatility of RingCentral. Depending on the package you buy, you'll get a local phone number for incoming calls, an incoming toll free number, and an incoming fax line. Call routing functions make RingCentral excel. Like Google Voice or my1voice, RingCentral can send incoming numbers to any phone. You can have it ring your VoIP handset, a mobile line, a hotel room, a temporary office, or anywhere you happen to be. Or you can have it go straight to voice mail during off hours, if you don't want to be reached.

Most RingCentral plans bundle fax service, or you can just pick that for about $8/month. You'll send and receive faxes through e mail, and cut the cost of a dedicated, traditional fax line. Many other companies sell fax service, too. Check out Mbox, eFax, and MyFax for several options, all priced in a similar range.

Free Yourself From Wireless Phone Service

If you like your current handset or smartphone, you might not be able to change wireless providers. Your device is almost certainly locked to your carrier, and worse, there's a chance that differing network technologies mean you can't move your phone to a different network even if it's unlocked.

AT&T and T-Mobile rely on GSM networks; Sprint uses CDMA; and most Verizon handsets use CDMA, but Verizon also offers some dual-mode devices that support both network types. An unlocked iPhone is still single-mode, so it will never work on a Sprint network, for example. Ask any carrier you're considering whether it can activate your old phone.

For GSM devices, including Apple's iPhone, your best option could be unlocking the handset, then swapping in a GSM SIM (subscriber identity module) card from the new provider. Even a prepaid card can work, which drains your account only when you use service.

If you want to completely cut wireless phone service, you could try hopping between Wi Fi hotspots while using a VoIP app. Truphone and Fring work on Android devices, BlackBerrys, iPhones, and even iPod touch media players. (You'll need a headset microphone for any of the players.) It's not the same as real wireless phone service, but it might be enough for some users in some situations.

You can beat text messaging fees by sending texts through an instant messenger app or in e mail. And instead of paying for your carrier's voice mail transcription service, you can substitute SpinVox, PhoneTag, YouMail, or Google Voice.

Revise Your Internet Service

Look for a network without a lock icon to try to gain access.

Did you shop around for your ISP? You might not be getting the best price or service. Check out Broadband Reports for customer reviews. You could find a locally grown alternative to the faceless corporation that you currently use.

You might be able to break free from wired home Internet service entirely. First, walk around your house running inSSIDer to see which networks are in reach. Try to pick up a friendly neighbor's network or a nearby café's. Or if a neighbor's signal is locked, ask around, and offer to pay part of the fee to join the network and share service.

Wi Fi service subscriptions from T Mobile, Boingo, and others can pay off if you frequent airports and other locations with their coverage. But you're almost as likely to find an open, free network. (To be fair, however, if you need an always on connection wherever you are, nothing beats an EvDO modem stick from Sprint or Verizon.)

Several Web sites map Wi Fi networks, and are good places to check out before you hit the road. Try Jwire, WeFi, and Hotspotr.

If you require an always-on connection, you might be better off buying short- or long-term service from Sprint or Verizon. You can buy a USB plug that connects a single laptop, or a home desktop for that matter. Many mobile phones can also be tethered to a laptop as part of your service plan, sharing the wireless Internet feed. Or opt for a portable router such as the MiFi, and it'll turn its mobile connection into a Wi-Fi Internet bubble. The router will work in your car and could be cheaper than a hotel's Internet service.

Break Out of Satellite Radio's Orbit

Monthly satellite radio service might not be worth what you pay. If your favorite talk show is in an exclusive contract, you could be stuck, but music listeners have alternatives. Try Pandora, Last.fm, and Slacker from a PC or even a smartphone.

Last.fm is free, and the others offer both paid and free versions. All build music programming based on your preferences. If you indicate that you don't like a certain song or musician, they'll adjust your playlist to better match your tastes.

The mobile versions of these services are an especially exciting proposition. They offer the possibility of replacing traditional car radio by streaming music wirelessly to your smartphone in the car. This, of course, is highly dependent on the 3G wireless coverage you're getting as you drive, but that coverage is getting broader and faster all the time. Additionally, Slacker can cache stations to your device so you can play music without any Internet connection. This helps when you're driving across no coverage zones.

Get a Discount, or Cut Ties

Sometimes you just can't cut the cord. In spite of poor service and price gouging, you might need some of these services. For one last alternative, try calling up and asking for a discount. It's worked for me, especially with TV and Internet service.

Arm yourself with details on your current companies' introductory deals and competitors' rates, and ask for a break. If you don't get a good answer, call back, and ask someone else.

Even if you only cut one of these services, you could save a lot. Pay for what you want and only what you use to take back control of your subscriptions.

Wednesday, January 20, 2010

Vodafone Revs Femto Engine


Vodafone UK stepped up its femtocell efforts in a big way Monday with the launch of a national marketing campaign, new brand, and a dramatically cheaper price for its small home base stations.

Vodafone has quietly offered a femtocell called the Vodafone Access Gateway since July 2009 for a one off cost of £160 (US$261).

Now, though, the operator is ready to make a big noise about the little base stations. The operator revealed today that the new name for its femtocell is the Sure Signal. (A name that's an open invite for criticism should the device not work).


But it's the new price of the Sure Signal femto that will surely raise eyebrows: The operator has slashed the cost of the device to a one-off fee of £50 (US$82), or £5 (US$8) per month for 12 months, with monthly price plans of £25 (US$41) or more.

For monthly plans of less than £25, the Sure Signal costs a one-off £120 (US$196) or £5 per month for 24 months.

That price reduction is not the result of a drastic drop in the cost of making femtocells. Rather, Vodafone says it's prepared to deepen its subsidy on the femtos because of the gains it sees in customer satisfaction and subscriber acquisition, particularly new customers who defect from other operators.

According to Lee McDougall, senior product marketing manager at Vodafone UK, a greater level of subsidy is worthwhile. "[We'll be] subsidizing heavily and marketing heavily as well," says McDougall. "We're seeing really great benefits and feedback."

He also notes, though, that there has been a "slight reduction" in the femto prices because the operator is "ordering in much bigger volumes now." But McDougall wouldn't disclose how many Vodafone is ordering now, or how much it's paying per femtocell.

For the fledgling femto industry, Vodafone's move in the U.K. is a significant endorsement for the little home base stations.

Boosting the iPhone indoors?
The launch of the Sure Signal conveniently coincides with Vodafone's launch of Apple's iPhone on January 14. But McDougall says that was "coincidental, but timely… just the way things panned out."

So the femtocell is not aimed solely at iPhone users. The primary application for the device is improved 3G coverage at home and in small offices. Check out the video about the Williams family's coverage plight on Vodafone's site, which gives a good idea of how the operator is marketing the Sure Signal and using it as a differentiator.

"Only Vodafone can guarantee the signal in your home," says McDougall.

While Vodafone mainly targets the Sure Signal at consumers, the operator has also launched pricing for small businesses (excluding value-added tax): a £42.56 (US$70) one-off charge, or £4.26 (US$7) per month for 12 months, on Your Plan for Small Business and Storm price plans of £21.26 (US$35) or more.

Or, for price plans of less than £21.26 (US$35), the Sure Signal costs £102 (US$167) or £4.26 (US$7) per month for 24 months.

Along with the new marketing push, Vodafone has also updated its back office systems for the femto service. Previously, customers had to call Vodafone so that someone could manually register their phone number with the femtocell. Now, that procedure is automated. Customers can log on to their Sure Signal web service portal to add or remove phone numbers, and the changes are made in real time.

Tuesday, January 19, 2010

Bredolab and Zbot set to dash Facebook users' hopes


Antivirus company Vaksincom has observed that there are currently at least two viruses threatening Facebook users. These viruses do not spread on Facebook itself; rather, they exploit the popular social network to snare victims, with the aim of taking over Facebook accounts.

"In our view, virus attacks whose modus operandi is to exploit Facebook will become increasingly common this year as Facebook itself grows ever more popular," said Vaksincom CEO Alfons Tanuwijaya in Jakarta.

He added that one rule holds in the virus world: virus writers target the most popular operating systems and applications, because the pool of potential victims is larger.

That is why computers running Microsoft Windows are a more attractive target for virus writers than Mac OS X Leopard or Linux: they have by far the most users. Windows Mobile and BlackBerry, meanwhile, see fewer attacks because the mobile phone market is still, de facto, dominated by Symbian.

"Right now, the two viruses that are quite dangerous in exploiting Facebook are Bredolab and Zbot," said Alfons.

Bredolab is an older virus that spreads as an e-mail attachment. Previously it posed as mail from DHL. If the attachment is run, the computer becomes infected.

Taking advantage of Facebook's popularity, the virus writers now disguise the e-mail as a message from the Facebook administrator. The message claims to be a Facebook password reset and asks the recipient to run an attached application that is in fact a malicious program.

"If that application is run, the victim is in for a world of trouble. Besides infecting the victim's computer, the virus will also download spyware and scareware in the form of fake antivirus software, and then reinfect the computer," said Alfons.

Not content with that, Bredolab will also send spam from the victim's computer. As a result, the victim's IP address can be blocked by blacklist providers for sending spam, which disrupts legitimate e-mail delivery.

Zbot's method of infection is more sophisticated. It does not send itself as an e-mail attachment like Bredolab, which can be blocked by a mail server.

Instead, it spreads through phishing e-mail disguised as an official message from Facebook asking the user to change their password. If the link is clicked, a fake Facebook site is displayed that asks the victim to enter their username and password.

If the victim complies, their Facebook username and password become known to the virus writer. Not content with stealing the victim's Facebook password, the virus also offers a link to a downloadable file presented as a Facebook update. If it is run, the virus infects the victim's computer and turns it into a spam sender.

So what is the greatest danger a user faces?

Alfons said the level of danger depends on the victim. A victim can lose their Facebook account, and if the account holds anything of economic value, such as confidential banking details, that can change hands as well.

"Another concern is the danger to office networks that fall victim to this virus. If a computer starts sending spam, the office IP address will be blacklisted, so it will no longer be able to send e-mail properly and all of that office's e-mail will be classified as spam. The potential economic loss is very large," said Alfons.

To avoid these Facebook-related viruses, Alfons said computer users should run an up-to-date antivirus program. He also advised users to remain alert for forged web sites (web forging).

"Do not be too quick to trust links you are given. Especially when entering important data such as a username and password, examine the site you are visiting carefully," he added.

Source: Inilah.com & Vaksincom