Computer Application, Maintenance and Supplies

Monday, April 19, 2010

Making a Server

Servers are the cornerstone of corporate infrastructure, relied upon to provide the services that employees and customers require to perform day-to-day operations in a timely and efficient manner. The single most important attribute of most enterprise-grade servers is reliability, and a good level of fault tolerance is factored into the design of most servers in order to increase uptime. Many readers run servers in their own home: the headless Linux box in the corner of the study that provides email, web, DNS, routing and file sharing services for the household.


While these machines still constitute servers in a raw sense, it would take a brave Technology Officer to trust these white boxes to fulfil the IT requirements of their company. This guide demonstrates what differentiates business-class servers from the typical white box server that you can build from off-the-shelf components, and highlights some of the many factors of a server's design that need to be carefully considered in order to provide reliable services for business.

Form Factor
Servers come in all shapes and sizes. The tower server is designed for organisations or branch offices whose entire infrastructure consists of a server or two. From the outside, they wouldn't look out of place on or under someone's desk, but the components that make up the server's guts are often of a higher build quality than workstation components. Tower cases are generally designed to minimise cost whilst providing smaller businesses with some sense of familiarity in the design of the enclosure.

For larger server infrastructures, the rack mount case is used to hold a server's components. As the name suggests, rack mount servers are almost always installed within racks and located in dedicated data rooms, where power supply, physical access, temperature and humidity (among other things) can be closely monitored. Rack mount servers come in standard sizes: 19 inches in width, with heights in multiples of 1.75 inches, where each multiple is 1 Rack Unit (RU). They are often designed with flexibility and manageability in mind.

Lastly, the blade server is designed for dense server deployment scenarios. A blade chassis provides the base power, management, networking and cooling infrastructure for numerous, space efficient servers. Most of the top 500 supercomputers these days are made up of clusters of blade servers in large data centre environments.

Processors
With the proliferation of quad-core processors in the mainstream performance sector of today's computing landscape, the main difference between servers and workstations comes down to support for multiple sockets. Consumer-class Core 2 and Phenom systems are built around single-socket designs that feature multiple cores per socket and cannot be used in multi-socket configurations. Xeon and Opteron processors, on the other hand, provide interconnects that allow processes to be scheduled across multiple separate processors, each featuring multiple cores, all contributing towards the total processing power of a server. It's not uncommon to see quad-socket, quad-core configurations in some high end servers, providing a total of 16 processing cores at upwards of 3.0GHz per core. The scary thing is that six-core and eight-core processors are just around the corner...

The other main difference that you see between consumer and enterprise processors is the amount of cache provided. Xeon and Opteron processors often have significantly larger Level 2 and Level 3 caches in order to reduce the amount of data that has to be shifted to main memory, generally resulting in slightly faster computation times depending on the application. A server's form factor will also have an impact on the type of processor that can be used. For instance, blade servers often require more power efficient, cooler-running processors due to their increased deployment density. Similarly, a 4RU server may be able to run faster and hotter processors than a 1RU server from the same vendor.

Memory
While the physical RAM modules that you see in today's servers don't differ dramatically from consumer parts, there are numerous subtle differences in the memory subsystems that provide additional fault tolerance features. Most server memory controllers feature Error Checking and Correction (ECC) capabilities, and the RAM modules installed in such servers need to support this feature. Essentially, ECC-capable memory performs a quick parity check before and after each read or write operation to verify that the contents of memory have been read or written properly. This minimises the likelihood of memory corruption due to a faulty read or write operation.
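
As a rough illustration of the parity idea, the Python sketch below stores one even-parity bit per word and flags a mismatch on read. Note that this simplification only detects errors; real ECC modules use Hamming-style codes that can also correct single-bit flips:

    # Sketch of the parity idea: store an even-parity bit with each word on
    # write, recompute and compare it on read. This simplified scheme only
    # DETECTS a flipped bit; real ECC (Hamming/SECDED) can also correct it.

    def parity(word):
        """Even-parity bit of an integer word."""
        return bin(word).count("1") % 2

    def write_word(word):
        return word, parity(word)          # data plus its parity bit

    def read_word(word, stored_parity):
        if parity(word) != stored_parity:  # check failed: corrupted word
            raise IOError("parity mismatch: memory corruption detected")
        return word

    word, p = write_word(0b10110010)
    read_word(word, p)                     # clean read succeeds
    try:
        read_word(word ^ 0b00000100, p)    # one bit flipped in transit
    except IOError as err:
        print(err)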

The other main difference in memory controller design is how much RAM is supported. Intel-based servers are about to start utilising a memory controller built onto the processor die, as has been the case with AMD-based systems for years. Even the newest mainstream memory controllers support a maximum of 16GB of RAM, while HP has recently announced a "virtualisation ready" Nehalem-based server design, available by year's end, that will support 128GB. Many modern servers also provide memory mirroring. A memory mirror essentially provides RAID 1 functionality for RAM: the contents of your system memory are written to two separate banks of identical RAM modules. If one bank develops a fault, it is taken offline and the second bank is used exclusively. The memory controller of the server can usually handle this failover without the operating system even being aware of the change, preventing unscheduled downtime of the server.
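
The failover behaviour is easy to picture with a toy model. The class below is illustrative only, not any vendor's interface:

    # Toy model of memory mirroring: every write lands in two identical
    # banks, and reads fail over transparently if the active bank dies.
    # (Class and method names are invented for illustration.)

    class MirroredMemory:
        def __init__(self, size):
            self.banks = [bytearray(size), bytearray(size)]
            self.failed = [False, False]
            self.active = 0                   # bank currently serving reads

        def write(self, addr, value):
            for i, bank in enumerate(self.banks):
                if not self.failed[i]:
                    bank[addr] = value        # mirror the write to both banks

        def mark_failed(self, i):
            self.failed[i] = True             # controller takes bank offline
            if self.active == i:
                self.active = 1 - i           # fail over; the OS never notices

        def read(self, addr):
            return self.banks[self.active][addr]

    mem = MirroredMemory(1024)
    mem.write(0, 42)
    mem.mark_failed(0)                        # first bank develops a fault
    assert mem.read(0) == 42                  # second bank still serves the data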

Hot spare memory can also be installed in a bank of some servers. The idea here is that if the memory in one bank is determined to be faulty, the hot spare bank can be brought online and used in its place. In this scenario, some memory corruption can occur depending on the operating system and memory controller combination in use. The worst case usually involves a crash of the server, followed by an automated reboot by server recovery mechanisms (detailed later in this article). Upon reboot, the memory controller brings the hot spare RAM online, limiting downtime. Hot swappable memory is often used in conjunction with both of these features, giving you the ability to swap out faulty RAM modules without having to shut down the entire server.

Storage Controllers
Drive controllers are dramatically different in servers. Forget onboard, firmware-based SATA RAID controllers that provide RAID 0, 1 and 1+0 and consume CPU cycles every time data is read from or written to the array. Server-class controllers have dedicated application specific integrated circuits (ASICs) and a bucket full of cache (sometimes as much as 512MB) to boost the performance of the storage subsystem. These controllers also frequently support advanced RAID levels, including RAID 5 and 6.

The controller cache can be one of the most critical components of a server, depending on the application. At my place of employment, we have a large number of servers that capture HD-quality video in real time. A separate "ingest" server often pulls this data from the encode server immediately after it has been captured for further processing and transcoding. Having 512MB of cache installed on the drive controller allows data to be pushed out via the network interface before it has been physically written to disk, significantly boosting performance. Testing has revealed that if we reduce the cache size to 64MB, data has to be physically written to disk and then physically read back when the ingest process takes place, placing significant additional load on the server. Finally, consider that most mainstream controllers have no cache whatsoever; the impact on performance in that scenario would probably prevent us from working with HD-quality content altogether.
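
The principle at work is classic write-back caching: acknowledge the write as soon as it lands in cache, serve reads from cache where possible, and let the slow physical write happen later. A toy model, with invented names and sizes:

    # Toy write-back cache: writes are acknowledged the moment they land in
    # cache, and a subsequent read can be served from cache before the block
    # has ever touched the (much slower) disk.

    from collections import OrderedDict

    class WriteBackCache:
        def __init__(self, capacity_blocks):
            self.capacity = capacity_blocks
            self.cache = OrderedDict()        # block id -> data, oldest first
            self.disk = {}                    # simulated slow backing store

        def write(self, block_id, data):
            self.cache[block_id] = data       # acknowledged immediately
            while len(self.cache) > self.capacity:
                old_id, old_data = self.cache.popitem(last=False)
                self.disk[old_id] = old_data  # eviction: slow physical write

        def read(self, block_id):
            if block_id in self.cache:
                return self.cache[block_id]   # fast path, straight from cache
            return self.disk[block_id]        # slow path, physical read

    ctrl = WriteBackCache(capacity_blocks=512)
    ctrl.write("frame-0001", b"captured HD frame")
    assert ctrl.read("frame-0001") == b"captured HD frame"  # disk never touched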

But what happens if there is a power outage and the data in the controller cache has not yet been written to the disk? In order to prevent data loss, some controllers feature battery backup units (BBUs) that are capable of keeping the contents of the cache intact for in excess of 48 hours, or until power is restored to the server. Once the server is switched on again, the controller commits the data from the cache to the disk array before flushing the cache and continuing with the boot process. No data is lost. BBUs are another feature missing from mainstream controllers.

The problem with RAID 5
Traditionally, RAID 5 has been the holy grail of disk arrays, providing the best compromise between performance and fault tolerance. However, with the continual increase in storage density, RAID 5 is starting to exhibit a significant design flaw when the array has to be rebuilt after a disk failure.

RAID 5 arrays can tolerate the failure of a single drive in the array. If during the time that it takes to replace the faulty drive and rebuild the array, a second drive fails or an unrecoverable read error (URE) occurs on one of the surviving drives in the array, the rebuild will fail and all data on the array will be lost.

Most manufacturers will quote the probability of encountering a URE in the detailed specification sheet for each drive. Most consumer-grade products have a quoted URE rate of around 1 in 10^14 bits, which translates to an average of 1 URE encountered for every 12TB of data read. Now, imagine that you have a RAID 5 array containing four 1.5TB drives (which are now readily available) and one disk goes pear shaped. You replace the faulty drive, the rebuild process begins, and 1.5TB of data is read from each remaining drive in order to rebuild the data on the new disk. Assuming that you have "average" drives, there's around a 33% chance of encountering a URE while rebuilding the array, which would result in the loss of up to 4.5TB of data.

Back in the days when we were dealing with arrays containing five 32GB disks, the probability of a URE occurring during an array rebuild was minuscule. But nowadays it's not uncommon to see array configurations exceeding 2TB in size, containing eight or more large capacity drives. As a result of the increased number of drives and the increasing capacity of those drives, the probability of encountering a URE during the rebuild process is approaching the point where RAID 5 arrays are unlikely to be successfully rebuilt in the event of a drive failure. And the more large capacity drives you use in an array, the more likely a URE will occur during the rebuild.

RAID 6 is the solution commonly used to overcome the limitations of RAID 5. RAID 6 utilises two different parity schemes and distributes the parity blocks across drives in much the same manner as RAID 5 does. The use of two separate parity schemes essentially allows two drives in an array to fail while maintaining data integrity. While RAID 5 requires n+1 drives in the array, RAID 6 requires n+2, so you'll be assigning the capacity of two whole drives to parity instead of one.
If the server that you’re building does not require a large amount of disk space, RAID 5 may be perfectly acceptable. However, if you’re deploying a large number of drives or large capacity drives in your server, you’ll want to ensure that you have a drive controller that supports RAID 6.

It should also be noted that while RAID 6 overcomes the issues that are becoming prominent with RAID 5, a few years from now RAID 6 will exhibit the same problem if used with larger arrays and drives of larger capacities than we have today. But until that day comes, RAID 6 remains a more reliable fault tolerance scheme than RAID 5.

Maths
Regardless of the scenario, we assume that all 1.5TB must be read from every drive in the array in order to perform a successful rebuild. This gives us a 12.5% probability of encountering a URE on a single drive (1.5 / 12 = 0.125), and an 87.5% probability of not encountering one (1 - 0.125 = 0.875).
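
As a worked example, here is a short Python sketch that reproduces the arithmetic above for the four-drive scenario. The RAID 6 figure uses a deliberately simplified model (the rebuild survives as long as no more than one surviving drive encounters a URE), so treat the output as illustrative rather than definitive:

    # Rebuild-success arithmetic for the example above: four 1.5TB drives,
    # one failed, consumer disks averaging one URE per ~12TB read.
    URE_INTERVAL_TB = 12.0
    DRIVE_TB = 1.5
    SURVIVORS = 3                             # drives that must be read in full

    p_ure = DRIVE_TB / URE_INTERVAL_TB        # 0.125 per surviving drive
    p_clean = 1 - p_ure                       # 0.875

    # RAID 5: a URE on any surviving drive kills the rebuild.
    raid5 = p_clean ** SURVIVORS

    # RAID 6 (simplified model): the second parity scheme lets the rebuild
    # survive provided at most one surviving drive hits a URE.
    raid6 = raid5 + SURVIVORS * p_ure * p_clean ** (SURVIVORS - 1)

    print(f"RAID 5 rebuild success: {raid5:.1%}")   # ~67.0%
    print(f"RAID 6 rebuild success: {raid6:.1%}")   # ~95.7%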

Working these probabilities through shows that you're much more likely to achieve a successful rebuild with a RAID 6 array, although even that probability of success is lower than some would desire. This only reinforces the fact that RAID 6 is significantly better than RAID 5, but it will eventually experience the same issues, assuming that URE rates don't improve along with disk capacity.

And on a side note, I was the unfortunate victim of a rebuild failure due to UREs about a year ago, when I accidentally knocked a power cord out of a seven-drive RAID 5 NAS enclosure full of 250GB disks (the enclosure was four years old and did not support RAID 6, but we did have it configured with one hot spare drive). Knocking the cable out abruptly killed one of the redundant power supplies, which took one of the drives with it. The hot spare drive was immediately activated and the array began to rebuild. About 5 hours into the rebuild, a URE occurred and the rebuild failed.

It's just as well we had that 1.5TB worth of data backed up onto a second array as well as LTO tape; this just goes to show that RAID arrays are not the be-all and end-all of fault tolerance.

External Storage
Any computer chassis has a physical limit on the number of drives you can install. Enterprise servers overcome this limitation with connections to Storage Area Networks (SANs), typically accomplished in one of two ways: via Fibre Channel or iSCSI interfaces.

iSCSI is generally the cheaper option of the two because data transferred between the SAN and server is encapsulated in frames sent over ubiquitous Ethernet networks, meaning that existing Ethernet interfaces, cabling and switches can be used (aside from the cost of the SAN enclosure itself, the only additional costs are generally an Ethernet interface module for the SAN and software licenses).

On the other hand, Fibre Channel requires its own fibre optic interfaces, cabling and switches, which significantly drives up cost. However, having a dedicated fibre network means that bandwidth isn't shared with other Ethernet applications. Fibre Channel presently offers interface speeds of 4Gb/s, compared to the 1Gb/s often seen in most enterprise networks, and it also has less overhead than Ethernet, which provides an additional boost to comparative performance.

Disk Drives
For years, enterprise servers have utilised SCSI hard disk drives instead of ATA variants. SCSI allowed for up to 15 drives on a single parallel channel versus 2 on a PATA interface; PATA drives ship with the drive electronics (the circuitry that physically controls the drive) integrated on the drive (IDE), whereas SCSI controllers performed this function in a more efficient manner; many SCSI interfaces provided support for drive hot swapping, reducing downtime in the event of a drive failure; and the SCSI interface allowed for faster data transfer rates than could be obtained via PATA, giving better performance, especially in RAID configurations.

However, over the last year, Serial Attached SCSI (SAS) drives have all but superseded SCSI in the server space, in much the same way that SATA drives have replaced their PATA brethren. The biggest problem with the parallel interface was synchronising clock rates across the many parallel connections; serial connections don't require this synchronisation, allowing clock rates to be ramped up, increasing bandwidth on the interface.

SAS drives are still the same as SCSI drives in many ways: the SAS controller is still responsible for issuing commands to the drive (there is no IDE), SAS drives are hot swappable, and data transfer over the interface is faster compared to SATA. SAS drives come in both 2.5 and 3.5 inch form factors, with the 2.5 inch size proving popular in servers as the drives can be installed vertically in a 2RU enclosure.

In addition, SAS controllers can support 128 directly attached devices on a single controller, or in excess of 16,384 devices when the maximum of 128 port expanders are in use (however, the total bandwidth available to all devices connected to a port expander is limited to the bandwidth between the controller and the port expander). In order to support this many devices, SAS also uses higher signal voltages than SATA, which allows the use of 8m cables between controller and device. Without those higher signal voltages, I'd like to see anyone connect 16,384 devices to a disk controller with a maximum cable length of 1 metre (the current SATA limitation).

In the next few months, there will be another major advantage to using SAS over SATA in servers: SAS supports multipath I/O. Suitable dual-port SAS drives can connect to multiple controllers within a server, which provides additional redundancy in the event of a controller failure.
GPUs and Video
One of the areas where enterprise servers are inferior to regular PCs is graphics acceleration. Personally, I'm yet to see a server installed within a data centre that contains a PCI Express graphics adapter, but that's not to say it isn't possible to install one in an enterprise server. In general though, most administrators find the onboard adapters more than adequate for server operations.
Networking
Modern day desktops and laptops feature Gigabit Ethernet adapters, and the base adapters seen on servers are generally no different. However, like most other components in servers, there are a few subtle differences that improve performance in certain scenarios.

In order to provide network fault tolerance, two or more network adapters are integrated on most server boards. In most cases, these adapters can be teamed. Like RAID fault tolerance schemes, there are numerous types of network fault tolerance options available, including:
• Network Fault Tolerance (NFT): In this configuration, only one network interface is active at any given time, while the rest remain in a slave mode. If the link to the active interface is severed, a slave interface is promoted to be the active one. Provides fault tolerance, but does not aggregate bandwidth (see the failover sketch after this list).
• Transmit Load Balancing (TLB): Similar to NFT, but slave interfaces are capable of transmitting data provided that all interfaces are in the same broadcast domain. This aggregates transmit bandwidth, but not receive, and also provides fault tolerance.
• Switch-assisted Load Balancing (SLB) and 802.3ad Dynamic: Provides aggregation of both transmit and receive bandwidth across all interfaces within the team, provided that all interfaces are connected to the same switch. Provides fault tolerance on the server side (however, if the switch that is connected to the server fails, you have an outage). 802.3ad Dynamic requires a switch that supports the 802.3ad Link Aggregation Control Protocol (LACP) in order to dynamically create teams, whereas SLB must be manually configured on both the server and the switch.
• 802.3ad Dynamic Dual Channel: Provides aggregation of both transmit and receive bandwidth across all interfaces within the team and can span multiple switches, provided that they are all in the same broadcast domain and that all switches support LACP.
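
As a rough illustration of the NFT behaviour described in the first bullet, here is a toy failover model (interface names invented, no bandwidth aggregation):

    # Toy failover model of NFT teaming: one active interface, the rest in
    # slave mode; when the active link drops, a surviving slave is promoted.

    class NftTeam:
        def __init__(self, interfaces):
            self.link_up = {name: True for name in interfaces}
            self.active = interfaces[0]

        def link_lost(self, name):
            self.link_up[name] = False
            if name == self.active:
                standbys = [n for n, up in self.link_up.items() if up]
                if not standbys:
                    raise RuntimeError("all team members down: outage")
                self.active = standbys[0]     # promote a slave interface

    team = NftTeam(["eth0", "eth1"])
    team.link_lost("eth0")                    # active link severed
    assert team.active == "eth1"              # traffic continues on the slave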

Just about all server network interface cards (NICs) support Virtual Local Area Network (VLAN) trunking. Imagine that you have two separate networks: an internal one that connects all devices on your LAN, and an external one that connects to the Internet, with a router in between. In conventional networks, the router needs at least two network interfaces, one dedicated to each physical network.

Provided that your network equipment and router support VLAN trunking, your two networks could be set up as separate VLANs. In general, your switch keeps track of which port is connected to which VLAN (this is known as a port-based VLAN), and your router is trunked across both VLANs utilising a single NIC (physically, it becomes a "router on a stick"). Frames sent between the switch and router are tagged so that each device knows which network the frame came from or is destined for.
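
For the curious, here is a rough Python sketch of what that tagging looks like on the wire: a 4-byte 802.1Q tag (TPID 0x8100 plus a 16-bit TCI holding the 12-bit VLAN ID) inserted between the source MAC and the EtherType. The MAC addresses below are made up for illustration:

    # Sketch of 802.1Q tagging as used between the switch and the router.

    import struct

    def tagged_header(dst_mac, src_mac, vlan_id, ethertype=0x0800):
        """Build an Ethernet header carrying an 802.1Q VLAN tag."""
        TPID = 0x8100                         # marks the frame as tagged
        pcp, dei = 0, 0                       # priority / drop-eligible bits
        tci = (pcp << 13) | (dei << 12) | (vlan_id & 0x0FFF)
        return dst_mac + src_mac + struct.pack("!HHH", TPID, tci, ethertype)

    header = tagged_header(bytes.fromhex("feedfacebeef"),
                           bytes.fromhex("deadbeefcafe"),
                           vlan_id=10)        # frame belongs to VLAN 10
    print(header.hex())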

VLANs operate in the same manner as physical LANs, but network reconfigurations can be made in software, as opposed to forcing a network administrator to physically move equipment.

Because of the sheer amount of data received on Gigabit and Ten Gigabit interfaces, it can become taxing to send every Ethernet frame to the CPU for TCP header processing. As a rough rule of thumb, it takes around 1GHz of processor power to transmit TCP data at Gigabit Ethernet speeds.

As a result, TCP Offload Engines are often incorporated into server network adapters. These integrated circuits process TCP headers on the interface itself instead of pushing each frame off to the CPU for processing. This has a pronounced effect on overall server performance in two ways: not only does the CPU benefit from not having to process this TCP data, but less data is transmitted across PCI Express lanes toward the northbridge of the server. Essentially, TCP Offload Engines free up resources in the server so that they can be assigned to other data transfer and processing needs.

The final difference between server NICs and consumer ones is that the buffers on enterprise-grade cards are usually larger. Part of the reason is the additional features mentioned above, but there is also a small performance benefit to be gained in some scenarios (particularly inter-VLAN routing).

Power Supplies
One of the great features of ATX power supplies is the standards that must be adhered to: ATX power supplies are always the same form factor and feature the same types of connectors (even if the number of those connectors can vary). But while having eight 12 volt Molex connectors is great in a desktop system, that many connectors is generally not required in a server, and the cable clutter could cause cooling problems.

Power distribution within a server is well thought out by server manufacturers. Drives are typically powered via a backplane instead of individual Molex connectors and fans often drop directly into plugs on the mainboard. Everything else that requires power draws it from other plugs on the mainboard. Even the power supplies themselves have PCB based connectors on them. All of this is designed to help with the hot swapping of components in order to minimise downtime.

Most servers are capable of handling redundant power supplies. The first advantage is that if one power supply fails, the redundant supply can still provide enough juice to keep the server running. Once aware of the failure, you can then generally replace the failed supply while the server is still running.

The second advantage requires facility support. Many data centres will supply customer racks with power feeds on two separate circuits (which are usually connected to isolated power sources). Having redundant power supplies allows you to connect each supply up to a different power source. If power is cut to one circuit, your server remains online because it can still be powered by the redundant circuit.

Server Management
Most servers support Intelligent Platform Management Interfaces (IPMIs), which allow administrators to manage aspects of the server and monitor server health, even when the server is powered off.

For example, say that you have a remote Linux server that has encountered a kernel panic: you could access the IPMI on the server and initiate a reboot, instead of having to venture down to the data centre, gain access and press the power button yourself. Alternatively, say that your server is switching itself on and off every couple of minutes, too short a time for you to log in and perform any kind of troubleshooting. By accessing the IPMI, you could quickly determine that a fan tray has failed and that the server is automatically shutting down once temperature thresholds are exceeded. These are two of the most memorable scenarios where having access to IPMIs has saved my skin.

Many servers also incorporate watchdog timers. These devices perform regular checks on whether the operating system is responding, and will reboot the server if the response time exceeds a defined threshold (usually 10 minutes). They can often minimise downtime in the event of a kernel panic or blue screen.
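
Conceptually, a watchdog behaves something like the sketch below. This is a purely illustrative model; real watchdogs are independent hardware timers, so they still fire when the OS is completely hung:

    # Illustrative watchdog model: the OS "pets" the timer at regular
    # intervals; if no pet arrives within the threshold, a reboot is forced.

    import time

    class Watchdog:
        def __init__(self, threshold_seconds):
            self.threshold = threshold_seconds
            self.last_pet = time.monotonic()

        def pet(self):
            """Called periodically by a healthy operating system."""
            self.last_pet = time.monotonic()

        def expired(self):
            """True once the OS has missed its response window."""
            return time.monotonic() - self.last_pet > self.threshold

    wd = Watchdog(threshold_seconds=600)      # the 10-minute figure above
    if wd.expired():
        print("no heartbeat within threshold; forcing reboot")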

Finally, most server vendors will also supply additional Simple Network Management Protocol (SNMP) agents and software that allow administrators to monitor and manage their servers more closely. The agents provide just about every detail about the installed hardware that you could ever want to know: how long a given hard disk drive has been operating in the server, the temperature within a power supply, or how many read errors have occurred in a particular stick of RAM. All of this data can be polled and retrieved with an SNMP management application (and even if your server vendor doesn't supply one, there are dozens of GPL packages available that utilise the Net-SNMP project).
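
As an illustration, a monitoring script could shell out to the snmpget tool from that same Net-SNMP project. The hostname and community string below are placeholders, and the OID shown is the standard MIB-II sysUpTime object rather than any vendor-specific hardware counter:

    # One way to poll an agent from a script, via the Net-SNMP snmpget tool.

    import subprocess

    def snmp_get(host, community, oid):
        result = subprocess.run(
            ["snmpget", "-v", "2c", "-c", community, host, oid],
            capture_output=True, text=True, check=True)
        return result.stdout.strip()

    # sysUpTime: how long the agent on the target host has been running.
    print(snmp_get("server01.example.com", "public", "1.3.6.1.2.1.1.3.0"))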

The future...
All of the points detailed in this article and within the corresponding article on the APC website highlight the differences that are seen between today’s high end consumer gear (which is typically used to make the DIY server) and enterprise level kit. However, emerging technologies will continue to have an impact on both the enterprise and consumer markets.

As the technology becomes more refined, solid state drives (SSDs) will start to emerge as a serious alternative to SAS hard disk drives for some server applications. Initially, they'll most likely be deployed where capacity demands are modest and low access times matter (such as database servers). As their capacity increases they'll become more prominent, but they will probably never replace the hard disk drive for storing large amounts of data.

The other big advantage of SSDs is that the RAID 5 issue mentioned earlier becomes less of a problem. SSDs shouldn't exhibit UREs: once data is written to the drive, it's stored physically, not magnetically. A good SSD will also verify that the contents of a block can be read back before the write operation is deemed to have succeeded. Thus, if the drive can't write to a specific block, that block should be marked as bad and a reallocation block brought online to take its place. Your SNMP agents can then inform you when the drive starts using up its reallocation blocks, indicating that a drive failure is imminent. In other words, you'll be able to predict when an SSD will fail with more certainty, which could give RAID 5 a new lease of life.
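
Here is a rough Python model of that write-verify-and-remap behaviour; it is purely illustrative, and the names and spare-block accounting are invented for the sketch:

    # A block that fails read-back verification is marked bad and a spare
    # takes its place; watching the spare count gives the early warning an
    # SNMP agent could report.

    class SimulatedSsd:
        def __init__(self, spare_blocks):
            self.data = {}
            self.failing = set()              # physical blocks gone bad
            self.remap = {}                   # logical -> spare block
            self.spare_blocks = spare_blocks
            self.spares_used = 0

        def write(self, block, payload):
            phys = self.remap.get(block, block)
            if phys in self.failing:          # read-back verify would fail
                if self.spares_used == self.spare_blocks:
                    raise IOError("reallocation blocks exhausted; replace drive")
                self.spares_used += 1
                self.remap[block] = ("spare", self.spares_used)
                phys = self.remap[block]
            self.data[phys] = payload         # write succeeded and verified

        def health_warning(self):
            return self.spares_used > 0.9 * self.spare_blocks

    ssd = SimulatedSsd(spare_blocks=128)
    ssd.failing.add(7)                        # block 7 has worn out
    ssd.write(7, b"payload")                  # transparently remapped
    assert ssd.spares_used == 1 and not ssd.health_warning()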

Moving further forward, the other major break from convention in server hardware will most likely be a move toward more application-specific processing units instead of the CPU as we know it today. There's already some movement in this area: Intel's Larrabee is an upcoming example of a CPU/GPU hybrid, and the Cell Broadband Engine Architecture (otherwise known as the Cell architecture) that is used in Sony's PlayStation 3 also powers the IBM RoadRunner supercomputer (the first to sustain performance over the 1 petaFLOPS mark).

Tuesday, March 23, 2010

Share Two Computers

Many homes have more than one computer. Rather than leaving them as standalone PCs, it can make sense to set up a network. A network makes it easy not only to share files between computers but to share an internet connection and hardware, such as a printer. Although the idea of setting up a network can be daunting, it is actually an easy process.

We are going to take a look at how a combined broadband modem and router can be used to connect two or more computers to the internet. We will look at wired and wireless connections on Windows XP and Vista, and take crucial security measures into consideration.

Follow the steps below....


One of the first steps to take before setting up a network is to check that any desktop computers you want to connect have a network interface card, or NIC. Computers bought in the past few years are likely to have a network connection built in, but it's also possible to buy network cards separately if required. In the case of notebook PCs, a network adapter is usually built in by default. Wireless USB adapters are also available, for as little as £10, for times when cabling cannot be used or isn't wanted.

If you have subscribed to an ADSL broadband service – and most

With the modem/router duly connected, all that is left to do is connect each of the computers to the network ports on the back of the device using standard networking cable, sometimes referred to as CAT5 or Ethernet cable. If you are using a separate modem and router, the modem will need to be connected to the uplink port of the router before connecting computers to the router’s network ports. Connect the power and check the activity lights on the front of the router to see if the connection indicator is lit.

There are now a couple of steps to take within Windows that will make it easier to establish and manage a network connection. The first is to display an icon in the Notification Area that makes it easy to keep an eye on network activity. To enable this icon in Windows XP, launch the Control Panel from the Start menu and open the Network Connections section. Right-click on the icon representing your network connection and select Properties. On the General tab, tick the box labelled 'Show icon in Notification Area when connected' and click on OK.

In Windows Vista the network activity icon should be enabled by default, but if this is not the case right-click on the Taskbar and select Properties. Move to the Notification Area tab and make sure that the Network option is ticked before clicking on OK. Vista displays different icons in the Notification Area depending on the type of connection that has been established. Two flashing monitors indicates that there is a network connection, while the addition of a globe icon indicates that an internet connection is also present.

To ensure that the computers on the network can communicate, it is vital to ensure they are part of the same workgroup. In Windows XP, click on the Start button and right-click on the entry for My Computer before selecting Properties from the menu that appears. Move to the Computer Name tab and click on the Change button. Enter a name for the computer and then one for the Workgroup before pressing Enter. Restart Windows if prompted to do so.

To configure the computer and Workgroup name in Windows Vista, click on the Start orb, right-click on the Computer entry and select Properties from the menu that appears. In the ‘Computer name, domain and workgroup settings’ section of the System window that appears, click on the Change settings link and then on Continue in the User Account Control dialogue. Click on the Change button, enter a name for the computer and select the Workgroup option. Enter the same workgroup name as for the other computers and click on OK.

So far we have concentrated on configuring computers to connect to a home network using network cabling, but computers can also be connected wirelessly. It is first necessary to enable and configure the wireless settings of the router or modem that is being used. On one of the computers wired into the network, open a web browser window and type the IP address of the router – usually 192.168.2.1 but check with the manufacturer for details – into the Address or Location bar. The required username and password should have been provided in the instruction booklet.

Section names will vary from one piece of equipment to another, but find the wireless section of the setup utility. Start by giving the wireless network a name (or SSID) and select a radio channel on which to broadcast. This setting may have to be adjusted later if you notice interference from a nearby network. To safeguard data, wireless security must be enabled, so select the type of encryption that should be used and then type a password before saving the new settings. WPA security is the safest option, and should be offered by all modern routers.

When it comes to connecting a computer to a network wirelessly, many notebook PCs include a built-in wireless card, which makes things much easier. If wireless connectivity is not built into a laptop, it can be added by connecting a wireless USB adapter or a wireless PC Card device. If you want to connect a desktop PC wirelessly, you have a choice between installing a wireless USB adapter or a standard PCI network card with a built-in antenna. In both cases, USB adapters are the easiest to get up and running.

The exact process of installing a wireless adapter will vary from one piece of equipment to another, but it should boil down to the same essential steps. Start by inserting the CD that came with the adapter and install any necessary drivers. Once this is complete, connect the adapter and wait for it to be detected by Windows. When the installation process has finished a message should soon appear indicating that wireless networks have been detected.

Click the message that is displayed or right-click on the wireless network icon in the Notification Area and select View Available Wireless Networks from the menu that appears. Windows will then display a list of all the wireless networks detected in range. Select the entry that relates to your own network and click on the Connect button. As encryption has been enabled, you must provide the password that you chose earlier before a connection can be established.

To display the equivalent wireless network-detection screen in Windows Vista, click the Start button followed by the Control Panel entry and then open the Network And Sharing Center – this can also be accessed by clicking the Notification Area icon. In the Tasks list to the left of the window that appears, click on the ‘Connect to a network’ link. As with Windows XP, a list of available networks will be displayed and a connection can be established by entering the correct password. There are also tools on hand to diagnose connection problems.

In some cases a utility is provided with a wireless adapter and this must be used to detect and connect to networks rather than using Windows’ built-in options. However, it may be possible to choose which you would prefer to use. Most utilities work in similar ways and after selecting a network, all that needs to be done is to provide the relevant password. In this Workshop we have established wired and wireless connections that can be used to connect several computers to the internet.

Saturday, March 20, 2010

Phone on a USB

Skype, the program that makes it easy to have telephone conversations over the internet, is one of the most useful programs ever created for the PC. However, it can be frustrating and inconvenient to pitch up at a computer and discover that the program isn’t installed.

Fortunately, it’s relatively straightforward to get around this by putting a copy of Skype on a USB memory key and carrying it around – then, when the time comes to make a call, just plug the key into a spare socket and run the program as normal. Well, almost as normal. As this step by step guide explains, with only the smallest amount of tinkering it is possible to have Skype always on hand, whichever PC you’re using.


Don't have Skype? Go to www.skype.com, then download and install the software. Plug in a USB memory key and double-click on My Computer, open the main drive (usually C:) and then open these folders in turn: Program Files, Skype and Phone. Open the USB key, then click and drag the Skype icon over from the hard disk to the USB key window. Next, right-click there and choose New, then Folder, and call it 'Data'. Make sure the Skype program and the Data folder are stored at the key's top level and not inside any other folders.

Next, click the Start button and choose All Programs, Accessories and then Notepad. We're going to use this to create a small launch command that will make our USB-based version of Skype work properly. Type the following, exactly (including the spaces):

    skype.exe /datapath:"Data" /removable

Then click File, Save As, and when the dialogue box opens, open the dropdown menu next to 'Save as type' and choose All Files. Type in Skype.bat as the file name and save the file onto the memory key next to the copied Skype program.

Switch back to the open USB key window and scroll through the list of files. See the new one called Skype? To start the portable version of Skype we’ve created, double-click on this icon (be sure to choose the one that says MS-DOS Batch underneath it, not the original blue and white Skype icon). After a moment an odd-looking black window (a DOS window) appears and disappears, followed by the Skype program. Either create a new account or sign in with an existing name and password to continue and use the program as normal.

Wednesday, March 03, 2010

Saving Your Camera's Memory Card

The memory card is a familiar item in any photographer's eyes. It plays a vital role as the storage medium for the photos produced by the camera's shutter. Imagine a photographer who is all set to capture a subject, only to find that his digital camera has no memory card fitted and no internal memory to fall back on. Or perhaps the memory card he prepared suddenly fails. What should he do? Confusion is the first word that springs to mind.


Ambae.exe takes this opportunity to offer a few tips for keeping a memory card safe, the main aim being to minimise the problems you may face later on.

Never delete your photos, fellow bloggers, using the camera's Delete facility


Admittedly, this tip runs counter to the advice of design and IT experts, let alone the guidance of today's camera vendors. Every camera released these days comes with a sophisticated menu system, notably the ever-present DELETE option, always on show and ready to incinerate the photographer's shots.

A camera's battery has a limited capacity. Activating DELETE consumes a great deal of power, and the same goes for reviewing photos on the camera's LCD. Both functions demand significant amounts of energy.

There is also risk in deleting in-camera. Accidentally erasing every photo on the memory card will genuinely leave the photographer's head spinning. Sometimes a deleted photo turns out to be wanted back, because it was better than the rest, and recovering it naturally requires a computer and an assortment of recovery software.

So delete photos on a computer instead. Rate each photo before deleting it. Photos look better there than on the camera's LCD; a computer monitor has superior colour accuracy, so photos can be judged properly once viewed on it. Then delete the ones that don't measure up, for example those that are blurred, crooked, too small in dimensions and so on.

Deleting on a computer is also relatively safer than deleting in the camera. A photo deleted by mistake can be brought back quickly, before you ever need recovery software as a last resort. Space is set aside as a final disposal site for deleted photos and other files, and its capacity is adjustable; Windows users know it as the Recycle Bin, with a default capacity of 10% of each partition of the hard disk. If you want a photo back where it came from, Windows obliges with its UNDO facility: select the file you want to recover, choose Restore, and the file reappears in its original place.

Format the memory card regularly


There are many ways to format a memory card. Formatting here essentially means erasing the photo files, so that the card is completely empty again and ready to be filled with new files.

You can format by selecting all the photo files and deleting them, or by deleting them (DELETE) one by one, though that takes quite a while. Another way is to use the partition formatting facility in Windows or other software. The final method is to format the memory card in the digital camera itself; a Format Memory Card option is provided there.

The last method is the best one to adopt. It avoids formatting the card incorrectly: some types of memory card create particular folders after being formatted, whereas formatting in Windows produces no folders at all, which can leave the card unreadable in certain cameras.

Prepare a backup/secondary memory card


This is especially important for shooting addicts. A photographer may be all set to capture a particular subject when the memory card suddenly fails or fills up, so having a spare card to hand helps enormously.

Know the age and service life of a memory card


Everything in this world has a limited period of use. Even human beings, living creatures that we are, have a finite lifespan and do not get to enjoy the world's fresh air forever. The same applies to the memory card in a digital camera. To avoid being caught out by a sudden card failure, get to know its age and service life.

Say, for example, a memory card has a usable life of one year from first use. Mark the card with a felt-tip pen or other writing tool; the mark can go on the card's body. In the example above, write the expiry date twelve months ahead, so you are constantly reminded of its limit. Keeping it in mind means you can prepare a new replacement card well before the old one's service life ends.

Back up the photo files on it regularly. When the old memory card throws an ERROR, you simply swap in the new one, and your photo files stay safe.

Thursday, February 25, 2010

Managing Anything in Cyberspace

Settling in any particular place takes a few special tricks to adapt to the surrounding environment, all the more so as an inhabitant of the real world. Living in one corner of it, in Bonthain for example, we hope to be accepted by the Black Community, so that communication and bonds form among its residents. The same is true of the virtual world, otherwise known as cyberspace.


What happens in the real world is much like what happens online. Some netizens prefer to lie about the validity of their identity. There are gains and there are losses. On the gain side, the netizen feels safe about his personal details; the loss is that other netizens will find it hard to work out who is really behind a given account or profile.

This may be rare in the blogging world, but glance at today's hottest social networks: Facebook, Twitter, Friendster, Perfspot, even Yahoo Messenger and Google Talk, and other corners of cyberspace. Netizens under their various labels (Facebookers, bloggers and so on) frequently change their profile names to keep up with current trends and styles, and changing family status, occupation, age and the rest has become perfectly routine.

Will we be able to remember and store every change that occurs, when our contact list almost rivals the population of Bonthain? A netizen with only a handful of friends can probably recall them one by one, especially when visits between them flow freely. But a netizen with a million souls in his memory is bound to struggle.

So, how do we get around this...? Ambae.exe will set out more than one example this time as a solution.


In the world of Facebook, we can define categories for grouping our contact lists. Perhaps this will be discussed further another time, even though other fellow bloggers have covered it already.

As for blogs, what do bloggers do to manage their friendships? One way is to install a gadget containing a friends list (see Best Frenz and Look at more Banner).

The two approaches above amount to management for each corner of cyberspace: Facebook and the blog respectively.

Next, netizens run into difficulty again when they have thousands of links tied to their accounts: millions of friends across the accounts and billions of visits each day to particular websites, all needing to be recorded so that next time we can still recall them clearly.

So how do you do it, bro... to make it all feel more Djarum Black Menthol and more Djarum Black Slimz?


Create a file, for instance using the Excel facility in Microsoft Office. Decide on a file name, for example Daftar Website_URL_e-mail (download the sample format).

With this file, fellow bloggers can store information about their activities in cyberspace; even real-world activities can be kept in it. It all depends on how tidily you manage it. Once the file has been created, it is equally important to keep it safe and out of other people's reach. As Ambae.exe does, the file should be protected with a range of protection techniques; when it comes to protection, fellow bloggers already know the score.

Carried on a flash disk, the file can then be opened wherever Autoblackthrough goes to campus. Better still, plant it in Google Docs or the like, making it easy to access if the flash disk is forgotten on a trip.

That should do for this simple, ordinary tutorial served up by a newbie like Ambae.exe. May it be useful in managing your activities, fellow bloggers... BRAVO...

Wednesday, February 24, 2010

Securing Computer Books

To this day, books are a source of inspiration for their readers, as well as reading material for absorbing knowledge from authors who willingly contribute the fruits of their thinking for the public to know. Their presence can change a person's mindset: someone once full of laziness can change drastically, becoming diligent and more enthusiastic after reading. Though it cannot be denied that some reading material simply doesn't suit every individual's needs.


Books often serve as companions in many things and in all sorts of conditions: a friend in quiet times, in boredom and so on. Being so close to their readers, books must be looked after so they last, so that at some later time they can still be read, both by their owner and by friends who also want to read them.

Suppose a book is taken, stolen or simply goes missing: the chances of getting it back are small. Starting from that point, Ambae.exe offers a few tips for keeping books safe, although it cannot be guaranteed that these steps are entirely secure.


One thing worth noting, fellow Black Community members, is to mark the books we own so that they can be recognised even when mixed in with other books that are not ours, while also giving them a distinctive trait that sets them apart from the rest.

Djarum Black friends, Ambae.exe always marks his books, including Black In News magazines and the rest of his text-based knowledge collection. This is done so that the computer books he takes pride in are not easy to steal. And if one does get stolen, its owner will be found out, because the Ambae.exe stamp appears on nearly every page inside. Not only the inner pages: the outer edges are stamped too, making the book easy to identify even from a distance.

As said before, this is not the most secure measure. Thieves have no end of techniques for transferring ownership of our books and then claiming them as their own. It has happened, and more than once: a book belonging to Ambae.exe was stolen by an irresponsible party, who then stamped it with a different stamp of another design bearing the suspect's name. Although Ambae.exe could identify the book in question, in the end there was nothing to be done once the other stamp was found on it.


To avoid this, bloggers would do well to mark their books early, right when they are newly bought from the shop. After marking them with a stamp or some other code, only then start reading. Books that have been read should also be filed so things stay tidy and the owner can easily find them to read again another time, for instance by placing your collection on a bookshelf, in a drawer or in a cupboard.

A durable, secure book can reflect its owner's character. But another saying says otherwise: a pristine book suggests it is rarely, if ever, touched by its reader.

So let's read the books we own, caring for them as we read so they don't wear out too soon and can still be read another time.

Tuesday, February 09, 2010

Clean and Scan a Computer without Antivirus

It's a bad habit not to put cleanliness first. Cleanliness is not just about what is visible to the naked eye; the innermost parts of an object need attention too. Why not try living clean as the starting point of good health, beginning with ourselves and extending to everything around us? Don't let it drag on, bearing in mind that neglect will worsen a condition that started out normal; through carelessness and indifference, the consequences will be felt sooner or later.


What we're discussing this time is the cleaning process for your beloved computer. Much as with keeping your nails clean, the first thing to do is take off the shoes you're wearing.

When it comes to computers, the usual routine is an antivirus to clean out the viruses that have taken over the system. Here, instead, is how to clean a computer without calling in an antivirus. The steps to follow are:

First, prepare your materials:
1. Screwdriver
2. Brush
3. Dry cloth
4. Electric fan
5. Djarum Black
6. Snacks
7. Hot coffee
8. Djarum Black Menthol (this one belongs to my buddy)
9. Rubbish bin
10. Mini vacuum cleaner

Process
1. Open the computer case with the help of the screwdriver
2. Survey the computer's condition
3. Remove the memory modules
4. Remove the processor
5. Remove the CPU fan from its heatsink
6. Remove the VGA card
7. Remove the audio card
8. Remove any other components that might get in the way of cleaning
9. Clean with the brush so that all dust and other grime clinging to the components inside the case is loosened
10. Using a brush minimises damage to the components present, especially the smallest items mounted on the PCB, such as resistors and the like
11. Once the grime has shifted from its position and no longer clings on, switch on the electric fan and blow air into the case so the dust is expelled quickly
12. Use the mini vacuum cleaner to suck up stubborn dirt
13. Use the dry cloth to wipe up any dust left over from the earlier steps
14. Move the dirt to the rubbish bin you prepared so the work area stays clean and gleaming too
15. When cleaning is finished, refit every component that was removed from its mounting
16. Refit the computer case cover
17. Move the CPU back to where it came from
18. Once you're completely sure the computer is now clean, it's time to rest
19. Sample the snacks and hot coffee while enjoying a Djarum Black and Djarum Black Menthol with the friends who have been your loyal audience all along

It's a simple tutorial for cleaning your computer; repeat it regularly.

Monday, February 08, 2010

Computers Can Lead to Stress

For most of the Black Community on this earth, the computer is no longer a novelty never encountered before. Computers have arrived to fill every side of human life: offices, households, service businesses; indeed nearly every individual owns this sophisticated item. In the last few years its presence has become terribly important. Without a computer, it's like a lighter without a Djarum Black Slimz.


Really...? It depends on your needs; that's the more accurate way to put it. There are also Black Car Community members, and certain parts of this world, that don't need one all that much. Or perhaps they need one, just not all the time.

Let's leave the ones who don't need it aside for a while, and focus instead on those whose need is continuous and absolutely certain. How stressful it becomes when the computer, your true sweetheart in getting work done, turns out to be absent without leave exactly when needed. Or the computer is there, but a few problems have left this clever device in an ERROR state. Computer conditions that cause stress include:
1. The computer is still in its box (not yet assembled)
2. There is no power source to switch it on
3. No operating system (OS) has been installed yet
4. The OS is DOS-based and the operator only knows Windows
5. No mouse supplied (for users unfamiliar with DOS)
6. A blackout, thanks to the powers that be at the electricity company
7. No internet available (a blow to bloggers, Facebookers and other netizens)
8. The monitor is only 1 inch across (your glasses were trodden on by an elephant)
9. The hard disk is full of songs and videos but no player is installed (spin them by hand to be environmentally friendly)

All right... nine will do as an opener for the Djarum Black. Rounding it up to ten would be worrying, because there are still plenty of other factors that can leave a user stressed.

Thursday, January 21, 2010

Cable Cutters: Cheap Alternatives to TV, DSL and Cell Service


I don't like service providers. Cable TV, landline phone and fax, mobile phone, ISP, and even satellite radio companies have so little real competition that they know they don't have to impress me very much to get my business. I either pay the full fees and become connected to them by their cord (physical or wireless), or I don't get any service. Until lately.


Now those companies face new pressure from Web based technologies and services that can offer similar features for far less money, or even for free. Many of these services ride in on the cord owned by the big service provider, relegating said service provider to the job of operating "dumb pipes."

Which TV executive knew just a few years ago that paid and free online services could threaten cable companies? Ditto for the VoIP challenge to landline phones, online fax services replacing another cord, Internet radio being a better value than a satellite subscription, and more.

Here I'll explain the alternatives that can help you walk away from the biggest corded companies that we love to hate. You can pick and choose which are still worth keeping and which to toss. Are you being pressured into buying a service provider's "triple play"? Try three strikes and you're out.

Pay for TV and Movies Instead of Cable Service

The cable and satellite TV model is on life support. Who wants to pay $100 or more a month for an endless well of unwatched shows? Even if you have an appetite for premium channel shows, you can save money by buying some à la carte and watching others free online.

Paid, per show TV sources are all around you. Apple iTunes, Amazon VOD, Zune Marketplace, Blockbuster On Demand, and Jaman store thousands of shows and movies.

All offer various purchase and rental options, often $3 to $4 to rent a movie for a day. Expect to drop about $30 to $40 per season of scripted, premium channel TV (HBO, Showtime, and such), or, often, about $10 less for network shows.

That sounds pricy at first, and it often costs more than buying a physical disc. But total up four or five of those seasons and a handful of movies, and you could pay half as much as cable over the same time period.

If you want to watch video on a portable device, stick with iTunes for iPod and iPhone compatibility; or Zune Marketplace for Zune support. Unfortunately, the other stores don't offer portable media player support. If you have an AppleTV or an Xbox 360, you can at least watch your shows in the living room.

Netflix is a good base service for any cable TV free home. The cheapest subscription for the DVD by mail service is $8.99 each month, but much of the value comes in the thousands of shows and movies you can stream from Netflix to your PC. Plus, Netflix can stream to a TiVo, Xbox 360, PS3, dedicated Roku device, and other hardware, so you can watch in the living room without a media center PC.

Similar streaming services like Amazon, Blockbuster, Jaman, and others can play on much of the same hardware. Check your TV connected hardware against these services' support pages.

I've also got my eye on the upcoming Boxee Box and Sezmi service; both will offer hardware that plays Internet streamed video on a TV. Sezmi, which will be rolling out nationwide this year, even promises local shows and live sports, one of the biggest deficits in online libraries.

Get Free TV And Movies

Hulu is still my king of free TV sites, although it's uncertain whether it will move to a paid model. And I've occasionally been frustrated when show episodes or seasons disappear just before I try to watch them. But the majority of recent network shows are available, and you'll find movie and TV favorites alongside B level misses.

As I write this, you can watch "Spartacus" and "All the King's Men" alongside the Norm MacDonald vehicle, "Dirty Work."

Check Hulu first, but also scan other sites for free TV and movies. Crackle, Comcast Fancast, and even YouTube have movies and TV content. If you're looking for a specific show that you still can't find online, visit its Web site or its network's site directly.

Live sports can still be elusive. Check the network that's broadcasting the content for a stream; I watched a Monday Night Football game this way last fall. MLB.com hosts live baseball, but you'll have to pay for the service. Justin.tv could be your best ace for any sport: though unsanctioned, many users there stream their local stations live. Just click the sports button.

And remember the cheapest, highest quality TV source of all: an antenna. Over the air HD content looks great, often better than video compressed for a cable TV feed. You'll just need a TV with an HD tuner (typical of most sets built in the last several years) or a PC TV tuner.

Cut Landline Phone And Fax Service

If you have a reliable ISP, a voice over IP (VoIP) phone company can replace a traditional landline. You can place calls through a PC, but you'll have a better experience on a dedicated VoIP handset. The device connects to your network over Wi Fi or wired ethernet to route calls.

Skype deserves its VoIP ubiquity. You'll make free calls to other members or pay about 2 cents per minute to dial out to a real phone. Traditional phones can also call in to you. But several alternatives challenge the Skype giant.

I like the features and versatility of RingCentral. Depending on the package you buy, you'll get a local phone number for incoming calls, an incoming toll free number, and an incoming fax line. Call routing functions make RingCentral excel. Like Google Voice or my1voice, RingCentral can send incoming numbers to any phone. You can have it ring your VoIP handset, a mobile line, a hotel room, a temporary office, or anywhere you happen to be. Or you can have it go straight to voice mail during off hours, if you don't want to be reached.
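As a rough illustration of what such a routing rule amounts to, here's a tiny Python sketch. It models the concept only; the function and rule names are made up, not RingCentral's actual API.

    # Toy model of follow-me call routing; names are hypothetical,
    # not RingCentral's real API.
    from datetime import time

    OFF_HOURS_START, OFF_HOURS_END = time(22, 0), time(8, 0)

    def route_call(now, forward_to="voip handset"):
        """Send off-hours calls to voice mail; ring one chosen line otherwise."""
        off_hours = now >= OFF_HOURS_START or now < OFF_HOURS_END
        return "voice mail" if off_hours else forward_to

    print(route_call(time(23, 30)))                      # voice mail
    print(route_call(time(14, 0), forward_to="mobile"))  # mobile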

Most RingCentral plans bundle fax service, or you can buy that alone for about $8 per month. You'll send and receive faxes through e-mail, cutting the cost of a dedicated, traditional fax line. Many other companies sell fax service, too; check out Mbox, eFax, and MyFax for several options, all priced in a similar range.

Free Yourself From Wireless Phone Service

If you like your current handset or smartphone, you might not be able to change wireless providers. Your device is almost certainly locked to your carrier, and worse, there's a chance that differing network technologies mean you can't move your phone to a different network even if it's unlocked.

AT&T and T Mobile rely on GSM networks; Sprint uses CDMA; and most Verizon handsets use CDMA, though Verizon also offers some dual mode devices that support both network types. An unlocked iPhone is still single mode, so it will never work on Sprint's network, for example. Ask any carrier you're considering whether it can enable your old phone.
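To see that compatibility logic as a rule rather than a list, here's a toy Python sketch. The carrier-to-network mapping is the simplified 2010-era picture described above, not an authoritative database, and the helper is hypothetical.

    # Simplified 2010-era carrier/network compatibility check;
    # the mapping and helper are illustrative, not authoritative.
    CARRIER_NETWORKS = {
        "AT&T": {"GSM"},
        "T-Mobile": {"GSM"},
        "Sprint": {"CDMA"},
        "Verizon": {"CDMA"},  # plus some dual mode GSM/CDMA handsets
    }

    def can_move(phone_networks, new_carrier, unlocked=True):
        """An unlocked phone works only if it speaks the target network's technology."""
        if not unlocked:
            return False
        return bool(phone_networks & CARRIER_NETWORKS.get(new_carrier, set()))

    # A single mode GSM iPhone: fine on T-Mobile, never on Sprint.
    print(can_move({"GSM"}, "T-Mobile"))  # True
    print(can_move({"GSM"}, "Sprint"))    # False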

For GSM devices, including Apple's iPhone, your best option could be unlocking the handset, then swapping in a GSM SIM (subscriber identity module) card from the new provider. Even a prepaid card can work, which drains your account only when you use service.

If you want to completely cut wireless phone service, you could try hopping between Wi Fi hotspots while using a VoIP app. Truphone and Fring work on Android devices, BlackBerrys, iPhones, and even iPod touch media players. (You'll need a headset microphone for any of the players.) It's not the same as real wireless phone service, but it might be enough for some users in some situations.

You can beat text messaging fees by sending texts through an instant messenger app or in e mail. And instead of paying for your carrier's voice mail transcription service, you can substitute SpinVox, PhoneTag, YouMail, or Google Voice.

Revise Your Internet Service

Did you shop around for your ISP? You might not be getting the best price or service. Check out Broadband Reports for customer reviews. You could find a locally grown alternative to the faceless corporation that you currently use.

You might be able to break free from home wired Internet service entirely. First, walk around your house running inSSIDer and look for networks without a lock icon; those are open to join. Try to reach a friendly neighbor or a café. If a neighbor's signal is locked, ask around, and offer to pay part of the fee to join the network and share service.

Wi Fi service subscriptions from T Mobile, Boingo, and others can pay off if you frequent airports and other locations with their coverage. But you're almost as likely to find an open, free network. (To be fair, however, if you need an always on connection wherever you are, nothing beats an EvDO modem stick from Sprint or Verizon.)

Several Web sites map Wi Fi networks and are good places to check before you hit the road. Try Jwire, WeFi, and Hotspotr.

If you require an always on connection, you might be better off buying short or long term service from Sprint or Verizon. You can buy a USB modem that connects a single laptop, or a home desktop for that matter. Many mobile phones can also be tethered to a laptop as part of your service plan, sharing the wireless Internet feed. Or opt for a portable router such as the MiFi, which turns its mobile connection into a Wi Fi Internet bubble. The router will work in your car and could be cheaper than a hotel's Internet service.

Break Out of Satellite Radio's Orbit

Monthly satellite radio service might not be worth what you pay. If your favorite talk show is in an exclusive contract, you could be stuck, but music listeners have alternatives. Try Pandora, Last.fm, and Slacker from a PC or even a smartphone.

Last.fm is free, and the others offer both paid and free versions. All build music programming based on your preferences. If you indicate that you don't like a certain song or musician, they'll adjust your playlist to better match your tastes.

The mobile versions of these services are an especially exciting proposition. They offer the possibility of replacing traditional car radio by streaming music wirelessly to your smartphone in the car. This, of course, is highly dependent on the 3G wireless coverage you're getting as you drive, but that coverage is getting broader and faster all the time. Additionally, Slacker can cache stations to your device so you can play music without any Internet connection. This helps when you're driving across no coverage zones.

Get a Discount, or Cut Ties

Sometimes you just can't cut the cord. In spite of poor service and price gouging, you might need some of these services. For one last alternative, try calling up and asking for a discount. It's worked for me, especially with TV and Internet service.

Arm yourself with details on your current companies' introductory deals and competitors' rates, and ask for a break. If you don't get a good answer, call back, and ask someone else.

Even if you only cut one of these services, you could save a lot. Pay for what you want and only what you use to take back control of your subscriptions.

Wednesday, January 20, 2010

Vodafone Revs Femto Engine


Vodafone UK stepped up its femtocell efforts in a big way Monday with the launch of a national marketing campaign, new brand, and a dramatically cheaper price for its small home base stations.

Vodafone has quietly offered a femtocell called the Vodafone Access Gateway since July 2009 for a one off cost of £160 (US$261).

Now, though, the operator is ready to make a big noise about the little base stations. It revealed today that the new name for its femtocell is the Sure Signal (a name that's an open invitation to criticism should the device not work).


But it's the new price of the Sure Signal femto that will surely raise eyebrows: The operator has slashed the cost of the device to a one off fee of £50 (US$82), or £5 (US$8) per month for 12 months, for customers on monthly price plans of £25 (US$41) or more.

For monthly plans of less than £25, the Sure Signal costs a one off £120 (US$196), or £5 per month for 24 months.
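Totalling the quoted options shows how little the installment route actually adds. A quick sketch in Python, using only the figures quoted above:

    # Total cost (GBP) of each quoted Sure Signal option.
    options = {
        "plans of 25/mo or more, one off":      50,
        "plans of 25/mo or more, installments": 5 * 12,   # 60: a 10 pound premium
        "plans under 25/mo, one off":           120,
        "plans under 25/mo, installments":      5 * 24,   # 120: no premium at all
    }

    for name, total in options.items():
        print(f"{name}: GBP {total}")

On these figures, paying in installments costs at most £10 more over the life of the deal, and nothing extra at all on the cheaper plans.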

That price reduction is not the result of a drastic drop in the cost of making femtocells. Rather, Vodafone says it's prepared to deepen its subsidy on the femtos because of the gains it sees in customer satisfaction and subscriber acquisition, particularly new customers who defect from other operators.

According to Lee McDougall, senior product marketing manager at Vodafone UK, a greater level of subsidy is worthwhile. "[We'll be] subsidizing heavily and marketing heavily as well," says McDougall. "We're seeing really great benefits and feedback."

He also notes, though, that there has been a "slight reduction" in the femto prices because the operator is "ordering in much bigger volumes now." But McDougall wouldn't disclose how many Vodafone is ordering now, or how much it's paying per femtocell.

For the fledgling femto industry, Vodafone's move in the U.K. is a significant endorsement for the little home base stations.

Boosting the iPhone indoors?
The launch of the Sure Signal conveniently coincides with Vodafone's launch of Apple's iPhone on January 14. But McDougall says that was "coincidental, but timely… just the way things panned out."

So the femtocell is not just aimed at iPhone users. The primary application for the device is improved 3G coverage at home and in small offices. Check out the video about the Williams family's coverage plight on Vodafone's site, which gives a good idea of how the operator is marketing the Sure Signal and using it as a differentiator.

"Only Vodafone can guarantee the signal in your home," says McDougall.

While Vodafone mainly targets the Sure Signal at consumers, the operator has also launched price plans for small businesses (which do not include value-added tax): a £42.56 (US$70) one off charge, or £4.26 (US$7) per month for 12 months, for Your Plan for Small Business and Storm price plans of £21.26 (US$35) or more.

Or, for price plans of less than £21.26 (US$35), the Sure Signal costs £102 (US$167), or £4.26 (US$7) per month for 24 months.

Along with the new marketing push, Vodafone has also updated its back office systems for the femto service. Previously, customers had to call Vodafone so that someone could manually register their phone number with the femtocell. Now, that procedure is automated. Customers can log on to their Sure Signal web service portal to add or remove phone numbers, and the changes are made in real time.

Tuesday, January 19, 2010

Bredolab and Zbot Are Ready to Dash Facebookers' Hopes


Antivirus company Vaksincom has observed that there are currently at least two viruses threatening Facebook users. The viruses don't spread on Facebook itself; rather, they exploit the popular social network to snare victims, with the goal of stealing Facebook accounts.

"In our view, virus attacks whose modus operandi exploits Facebook will only grow this year as Facebook's popularity keeps rising," said Vaksincom CEO Alfons Tanuwijaya in Jakarta.


He added that one law holds in the virus world: virus writers target the most popular operating systems and applications, because the pool of potential victims is larger.

That is why computers running Microsoft Windows make more attractive virus targets than Mac OS Leopard or Linux: they have the most users. Windows Mobile and BlackBerry, meanwhile, see fewer attacks because the mobile phone market is still, de facto, dominated by the Symbian operating system.

"Right now, the two most dangerous viruses exploiting Facebook are Bredolab and Zbot," said Alfons.

Bredolab is an older virus that spreads as an e-mail attachment. Previously, it posed as mail from DHL. If the attachment is run, the virus infects the computer.

Riding on Facebook's popularity, the virus writers have reworked the e-mail so it appears to come from a Facebook administrator. It asks the victim to reset their Facebook password and run an attached application that actually contains the malicious program.

"If that application is run, the victim is in for a world of trouble. Besides infecting the victim's computer, the virus also downloads spyware and scareware in the form of fake antivirus software, and then reinfects the machine," said Alfons.

Not content with that, Bredolab also sends spam from the victim's computer. As a result, the victim's IP address can be blocked by blacklist companies for sending spam, disrupting legitimate e-mail delivery.

The Zbot virus infects in a more sophisticated way. It does not send itself as an e-mail attachment like Bredolab, which a mail server can block.

Instead, it spreads through phishing e-mail disguised as an official message from Facebook asking the user to change their password. If the link is clicked, it displays a fake Facebook site that asks the victim to enter a username and password.

If the victim complies, their Facebook username and password become known to the virus writer. Not content with stealing the victim's Facebook password, the virus also offers a link to a download described as a Facebook update file. If that file is run, the virus infects the victim's computer and turns it into a spam sender.

So what is the worst danger a user can face?

Alfons said the level of danger depends on the victim. Victims can lose their Facebook accounts, and if those accounts hold anything of economic value, such as confidential bank account details, that too can change hands.

"Beyond that, what should worry people is the danger to office networks that fall victim to this virus. If a computer sends spam, the office IP will be blacklisted, so it won't be able to send e-mail properly, and all of that office's mail will be classified as spam. The potential economic loss is enormous," said Alfons.

To avoid these Facebook viruses, Alfons said computer users must run an up-to-date antivirus program. He also advised users to stay alert to site forgery, or web forging.

"Don't blindly trust the links you're given. Especially when entering sensitive data such as a username and password, examine the site you're visiting carefully," he added.
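That examination can be made mechanical: check which host a link actually points to before entering credentials. Here's a minimal sketch in Python using only the standard library; the sample URLs are invented for illustration.

    # Minimal phishing check: does a link really point at facebook.com?
    # Sample URLs are made up for illustration.
    from urllib.parse import urlparse

    def looks_like_facebook(url: str) -> bool:
        host = urlparse(url).hostname or ""
        # Accept facebook.com itself and its subdomains, nothing else.
        return host == "facebook.com" or host.endswith(".facebook.com")

    print(looks_like_facebook("https://www.facebook.com/settings"))           # True
    print(looks_like_facebook("http://facebook.com.reset-password.example"))  # False

The second URL is exactly the kind of trick a fake Facebook site relies on: the familiar name appears at the front, but the real domain is whatever comes last.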

Source: Inilah.com & Vaksincom