LONDON (AP) — The loose-knit hacking movement "Anonymous" claimed Sunday to have stolen thousands of credit card numbers and other personal information belonging to clients of U.S.-based security think tank Stratfor. One hacker said the goal was to pilfer funds from individuals' accounts to give away as Christmas donations, and some victims confirmed unauthorized transactions linked to their credit cards.
Anonymous boasted of stealing Stratfor's confidential client list, which includes entities ranging from Apple Inc. to the U.S. Air Force to the Miami Police Department, and mining it for more than 4,000 credit card numbers, passwords and home addresses.
Austin, Texas-based Stratfor provides political, economic and military analysis to help clients reduce risk, according to a description on its YouTube page. It charges subscribers for its reports and analysis, delivered through the web, emails and videos. The company's main website was down, with a banner saying the "site is currently undergoing maintenance."
Proprietary information about the companies and government agencies that subscribe to Stratfor's newsletters did not appear to be at any significant risk, however, with the main threat posed to individual employees who had subscribed.
"Not so private and secret anymore?" Anonymous taunted in a message on Twitter, promising that the attack on Stratfor was just the beginning of a Christmas-inspired assault on a long list of targets.
Anonymous said the client list it had already posted was a small slice of the 200 gigabytes worth of plunder it stole from Stratfor and promised more leaks. It said it was able to get the credit card details in part because Stratfor didn't bother encrypting them — an easy-to-avoid blunder which, if true, would be a major embarrassment for any security-related company.
Fred Burton, Stratfor's vice president of intelligence, said the company had reported the intrusion to law enforcement and was working with them on the investigation.
Stratfor has protections in place meant to prevent such attacks, he said.
"But I think the hackers live in this kind of world where once they fixate on you or try to attack you it's extraordinarily difficult to defend against," Burton said.
Hours after publishing what it claimed was Stratfor's client list, Anonymous tweeted a link to encrypted files online with names, phone numbers, emails, addresses and credit card account details.
"Not as many as you expected? Worry not, fellow pirates and robin hoods. These are just the 'A's," read a message posted online that encouraged readers to download a file of the hacked information.
The attack is "just another in a massive string of breaches we've seen this year and in years past," said Josh Shaul, chief technology officer of Application Security Inc., a New York-based provider of database security software.
Still, companies that shared secret information with Stratfor in order to obtain threat assessments might worry that the information is among the 200 gigabytes of data that Anonymous claims to have stolen, he said.
"If an attacker is walking away with that much email, there might be some very juicy bits of information that they have," Shaul said.
Lt. Col. John Dorrian, public affairs officer for the Air Force, said that "for obvious reasons" the Air Force doesn't discuss specific vulnerabilities, threats or responses to them.
"The Air Force will continue to monitor the situation and, as always, take appropriate action as necessary to protect Air Force networks and information," he said in an email.
Miami Police Department spokesman Sgt. Freddie Cruz Jr. said that he could not confirm that the agency was a client of Stratfor, and he said he had not received any information about a security breach involving the police department.
Anonymous also linked to images online that it suggested were receipts for charitable donations made by the group manipulating the credit card data it stole.
"Thank you! Defense Intelligence Agency," read the text above one image that appeared to show a transaction summary indicating that an agency employee's information was used to donate $250 to a non-profit.
One receipt — to the American Red Cross — had Allen Barr's name on it.
Barr, of Austin, Texas, recently retired from the Texas Department of Banking and said he discovered last Friday that a total of $700 had been spent from his account. Barr, who has spent more than a decade dealing with cybercrime at banks, said five transactions were made in total.
"It was all charities, the Red Cross, CARE, Save the Children. So when the credit card company called my wife she wasn't sure whether I was just donating," said Barr, who wasn't aware until a reporter with the AP called that his information had been compromised when Stratfor's computers were hacked.
"It made me feel terrible. It made my wife feel terrible. We had to close the account."
Wishing everyone a "Merry LulzXMas" — a nod to its spinoff hacking group Lulz Security — Anonymous also posted a link on Twitter to a site containing the email, phone number and credit number of a U.S. Homeland Security employee.
The employee, Cody Sultenfuss, said he had no warning before his details were posted.
"They took money I did not have," he told The Associated Press in a series of emails, which did not specify the amount taken. "I think 'Why me?' I am not rich."
But the breach doesn't necessarily pose a risk to owners of the credit cards. A card user who suspects fraudulent activity on his or her card can contact the credit card company to dispute the charge.
Stratfor said in an email to members, signed by Stratfor Chief Executive George Friedman and passed on to AP by subscribers, that it had hired a "leading identity theft protection and monitoring service" on behalf of the Stratfor members affected by the attack. The company said it will send another email on services for affected members by Wednesday.
Stratfor acknowledged that an "unauthorized party" had revealed personal information and credit card data of some of its members.
The company had sent another email to subscribers earlier in the day saying it had suspended its servers and email after learning that its website had been hacked.
One member of the hacking group, who uses the handle AnonymousAbu on Twitter, claimed that more than 90,000 credit cards from law enforcement, the intelligence community and journalists — "corporate/exec accounts of people like Fox" News — had been hacked and used to "steal a million dollars" and make donations.
It was impossible to verify where credit card details were used. Fox News was not on the excerpted list of Stratfor members posted online, but other media organizations including MSNBC and Al-Jazeera English appeared in the file.
Anonymous warned it has "enough targets lined up to extend the fun fun fun of LulzXmas through the entire next week."
The group has previously claimed responsibility for attacks on credit card companies Visa Inc. and MasterCard Inc., eBay Inc.'s PayPal, as well as other groups in the music industry and the Church of Scientology.
Plushnick-Masti reported from Houston. Associated Press writers Jennifer Kay in Miami and Daniel Wagner in Washington, D.C. also contributed to this report.
(Reuters) - Facebook launched a new suicide prevention tool on Tuesday, giving users a direct link to an online chat with counselors who can help, the company said.
Friends are able to report suicidal behavior by clicking a report option next to any piece of content on the site and choosing suicidal content under the harmful behavior option, Facebook spokesman Frederic Wolens said.
Facebook will then email the user in distress a direct link for a private online chat with a crisis representative from the National Suicide Prevention Lifeline as well as the group's phone number.
The new tool gives people who may not be comfortable picking up the phone a direct avenue to seek help.
"This was a natural progression from something we've been working on for a long time," Wolens said.
Users can also report suicidal behavior by going to the site's Help Center, by searching for suicide reporting forms, or by using reporting links around the site.
Worried friends who reported the behavior will also receive a message to say it is being addressed, Wolens said.
Facebook, the most popular Web-based social networking site, has more than 800 million active users worldwide. The Palo Alto, California-based company was co-founded by Mark Zuckerberg in 2004.
The new suicide reporting tool will be made available to people who use Facebook in the United States and Canada.
Wolens said that all reporting on the site is done anonymously and so a distressed user will not know who reported the suicidal content.
Nearly 100 Americans die by suicide every day, according to the Surgeon General of the United States.
In the past year, more than 8 million Americans 18 or older had thought seriously about suicide, according to a blog post by the Surgeon General accompanying the release of the new Facebook tool.
(Reporting by Lauren Keiper; Editing by Peter Bohan and Richard Chang)

RIM's first all-screen Curve has sashayed over to the FCC. Two models of the BlackBerry Curve 9380, the REA70UW and REB70UW, are included in the latest filing, which goes into typical laborious detail on radio frequencies and the like. Thankfully, those myriad charts and graphs reveal support for WCDMA band IV, meaning that the phone plays nice with T-Mobile's 3G network. Let's just hope that, if given the chance to strut its stuff in the US market, it follows the Curve family tradition of arriving keenly priced. We'll have to wait and see if it hits our wallet's sweet spot, but for those more interested in the phone's internals, the source link beckons below.

Hydraulic fracturing, long notorious for its environmental hazards, has now come under the scanner for its alleged connection with increased seismic activity. Read on to see how fracking, as the practice is popularly known, can trigger earthquakes, and how damaging those quakes can be.

The 5.6 magnitude earthquake that rocked the state of Oklahoma on November 5, 2011, has once again raised serious questions about the increasing use of hydraulic fracturing to tap natural resources, i.e., oil and gas, from beneath the Earth's crust. Even though it has not yet been ascertained whether this quake resulted from a weakening of the Earth's crust caused by fracking, many people believe it did. The experts at the United States Geological Survey (USGS), though, are of the opinion that fracking cannot trigger an earthquake of this magnitude. According to them, it was the fault line existing in this region that triggered the Oklahoma earthquake, not fracking.

Link Between Hydraulic Fracturing and Earthquakes

While the presence of a fault line in this region of the United States may be an apt explanation for the 5.6 magnitude Oklahoma earthquake, what explains the sudden rise in seismic activity here? Between 1972 and 2008, an average of two to six earthquakes were recorded in the state of Oklahoma every year. In 2009, the number of recorded earthquakes reached 50, and in 2010 it climbed to a whopping 1,047. One cannot ignore the fact that more than a thousand drilling wells and more than a hundred injection wells have cropped up in this region over the course of time. Back in August, the region experienced a series of tremors, all between magnitude 1 and 2.5, and now the 5.6 magnitude quake has followed. While environmentalists cite the link between hydraulic fracturing and earthquakes to oppose such projects, those in the business dismiss the allegations as baseless.

Man-made Hydraulic Fracturing
Hydraulic fracturing is the process wherein a fluid, essentially a mixture of water and some chemicals, is pumped underground at high pressure to open cracks in sediment layers that have natural gas and oil trapped within them. Natural fracturing of sediment layers through the development of veins and dikes does occur, but it is not nearly as extensive as man-made hydraulic fracturing, the technology used to harness oil and gas trapped within the Earth's crust. Man-made hydraulic fracturing is more often referred to as 'fracking' or 'hydrofracking'. While many people tend to associate fracking with drilling, the two are distinct processes: fracking begins only after drilling is complete.

Even though there is an abundance of natural gas and oil in shale, its characteristically low permeability can hamper the flow of these resources. In such circumstances, one convenient method of releasing the trapped resources is to make cracks in the rock bed, and this is exactly what hydraulic fracturing does. After vertical drilling is complete, horizontal drilling is done in the targeted sediment layer, i.e., the hydrofrac zone or hydraulic fracturing zone, to make a passage for the 'fracfluid'. When the fracfluid is released at high pressure, it cracks the targeted layer, and the gas released as a result makes its way to the surface through the bore-well. At times, hydraulic fracturing is also used to reopen cracks within the formation to ensure the unabated flow of oil and gas in existing wells.

Does Hydraulic Fracturing Trigger Earthquakes?
Until recently, hydraulic fracturing was opposed mainly for its tendency to pollute groundwater, but the series of earthquakes in the United States and the United Kingdom has brought it back into the headlines. This time it is the sudden rise in seismic activity in Oklahoma and surrounding regions, followed by the 5.6 magnitude earthquake, that has pushed the alleged link between fracking and earthquakes to the forefront. Environmentalists argue that the simultaneous rise in seismic activity and in the number of injection wells in this region cannot be a coincidence, and that there has to be some link between the two.

If the experts at the USGS are to be believed, it is not possible for such intense earthquakes to be caused by human activities such as fracking; earthquakes of this intensity are usually attributed to plate tectonics. However, the experts don't deny that human activities can cause minor earthquakes. In fact, deep injection of fluid and nuclear detonations are the two prominent human activities the USGS identifies as capable of triggering tremors. That being said, they make it a point to add that the earthquakes caused by these activities are not of such high magnitude.

There have previously been cases of earthquakes triggered by deep injection of wastewater and by deep injection of water for geothermal power production. One prominent example is the Rocky Mountain Arsenal (RMA) deep injection well, built in 1961 for disposing of liquid waste but used only until 1966, when it became clear that the fluid injection was triggering earthquakes in the area. Because regions with abundant oil and gas often lie along fault lines, a misconception has arisen that energy exploration and production itself makes us vulnerable to hazards such as earthquakes.

While that suggests major earthquakes, like the one in Oklahoma, are unlikely to be triggered by hydraulic fracturing, it also suggests that less intense seismic activity can be linked to the practice of harnessing natural gas and oil. One has to note, however, that a major earthquake can be triggered when fluid is injected at the wrong place, near an unknown fault, for instance. In such circumstances, earthquakes of magnitude 5 or more cannot be ruled out.
Published: 11/24/2011
NEW YORK (AP) — Lululemon Athletica Inc. named Facebook executive Emily White to its board, saying her social media and e-commerce experience complements the yoga-wear retailer's online efforts.

The Canadian company said Friday that White's addition expanded the board to 10 members.

White is senior director of local and mobile partnerships at Facebook, which she joined in 2010. Before that, she served in several e-commerce positions at Google from 2001 to 2010.

"Emily brings a wealth of knowledge regarding e-commerce and social media to our board in a time when our customers are utilizing these communication channels more than ever to both shop and enrich their lives," Lululemon Chairman Chip Wilson said.

Lululemon offers shopping on its site as well as videos and a blog about yoga, running and other topics.

It's been one of the hottest chains in retailing. The popularity of its high-priced yoga pants and tank tops — the pants run about $100 — has helped shares more than double in the past 12 months.

Shares rose $1.22, or 2.4 percent, to $51.54 in premarket trading.

Enzymes are essential for almost all the chemical reactions that take place inside living cells. However, their activity can be enhanced or inhibited by a number of factors. This article discusses the factors that affect enzyme activity.

Enzymes are complex, protein-based molecules produced by cells. Many different enzymes are involved in different biochemical reactions, and each enzyme in our body influences one particular chemical reaction or set of reactions. Enzymes serve as organic catalysts, increasing the speed of the reactions in which they take part. In the absence of an enzyme, a chemical reaction proceeds extremely slowly, and some reactions may not occur at all if the right enzyme is not present.

Enzyme Activity Explained

An enzyme can increase the speed of a chemical reaction manifold; studies have found that an enzyme can make a reaction up to 10 billion times faster. The chemical substances present at the start of a biochemical process are termed substrates, and they undergo chemical change(s) to form one or more end products. Basically, the active site of the enzyme forms a temporary bond with the substrate. During this time, the enzyme lowers the activation energy of the participant molecules, which in turn speeds up the reaction. After the reaction is over, the newly formed product leaves the surface of the enzyme, and the enzyme regains its original shape. Thus, the enzyme participates in the reaction without undergoing any permanent physical or chemical change, so the same enzyme molecule can be used again and again for the same process.

Factors Influencing Enzyme Activity

Concentrations of substrate and enzyme have an impact on enzyme activity. Environmental conditions such as temperature, pH, and the presence of inhibitors also influence it. Each of these important factors is discussed below:

Effects of Change in Temperature
All enzymes need a favorable temperature to work properly. The rate of a biochemical reaction increases with rising temperature, because heat increases the kinetic energy of the participant molecules, resulting in more collisions between them. Conversely, at low temperatures the reaction slows because there is less contact between substrate and enzyme. However, extreme temperatures are not good for enzymes. At very high temperatures the enzyme molecule tends to become distorted, and the rate of reaction decreases; a denatured enzyme fails to carry out its normal functions. In the human body, the optimum temperature at which most enzymes are highly active lies in the range of 95 °F to 104 °F (35 °C to 40 °C), though some enzymes prefer lower temperatures.

Effects of Change in pH Value
The efficiency of an enzyme is largely influenced by the pH of its surroundings, because the charge of its component amino acids changes with pH. Each enzyme is most active at a certain pH level. In general, most enzymes remain stable and work well in the pH range of 6 to 8, though some specific enzymes work well only in acidic or basic surroundings. The favorable pH for a specific enzyme depends on the biological system in which it works. When the pH becomes too high or too low, the basic structure of the enzyme changes. As a result, the active site fails to bind properly with the substrate, and the enzyme's activity is badly affected; the enzyme may even stop functioning completely.

Effects of Substrate Concentration
Substrate concentration plays a major role in enzyme activity. A higher substrate concentration means more substrate molecules are available to interact with the enzyme, whereas a low substrate concentration means fewer molecules attach to the enzyme, which reduces the observed activity. However, once the rate of an enzymatic reaction is at its maximum and the enzyme is fully occupied, a further increase in substrate concentration makes no difference: substrate molecules are continuously replaced at the active sites, and there is no capacity for the extra molecules.
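This saturation behavior is commonly captured by the Michaelis-Menten model (the standard textbook rate law, not named in this article). A minimal Python sketch, with purely illustrative values for Vmax and Km:

```python
def reaction_rate(s, vmax=100.0, km=5.0):
    """Michaelis-Menten rate: v = Vmax * [S] / (Km + [S]).

    s    -- substrate concentration (arbitrary units)
    vmax -- maximum rate once the enzyme is saturated (illustrative value)
    km   -- substrate concentration at which v = Vmax / 2 (illustrative value)
    """
    return vmax * s / (km + s)

# The rate rises quickly at low substrate concentration...
print(reaction_rate(1), reaction_rate(5))
# ...but levels off near Vmax once the enzyme's active sites are occupied,
# so adding far more substrate barely changes the rate.
print(reaction_rate(500), reaction_rate(5000))
```

Note how going from 500 to 5,000 units of substrate barely moves the rate, matching the plateau described above.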

Effects of Enzyme Concentration
In any enzymatic reaction, substrate molecules typically far outnumber enzyme molecules. A rise in enzyme concentration enhances the enzymatic activity for the simple reason that more enzymes are participating in the reaction, and the rate of the reaction is directly proportional to the quantity of enzyme available. However, that does not mean a constant rise in enzyme concentration leads to a steady rise in the reaction rate. Once all the substrate molecules are already being used, a very high concentration of enzyme has no further impact on the reaction rate.

Effects of Inhibitors
As the name suggests, inhibitors are substances that tend to prevent enzyme activity. Enzyme inhibitors interfere with enzyme function in two different ways, and are accordingly divided into two categories: competitive inhibitors and noncompetitive inhibitors. A competitive inhibitor has a structure similar to that of the substrate molecule, so it attaches easily to the enzyme's active site and blocks the formation of the enzyme-substrate complex. A noncompetitive inhibitor instead binds to the enzyme at a site other than the active site, changing the enzyme's shape; the substrate molecule then cannot bind to the enzyme, and the subsequent activities are blocked.
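The two inhibition modes can be sketched as modifications of a simple saturation-style rate law (a standard textbook treatment; all constants below are illustrative). A competitive inhibitor raises the apparent Km, because extra substrate can outcompete it at the active site; a noncompetitive inhibitor lowers Vmax, because no amount of substrate displaces it:

```python
def rate(s, vmax=100.0, km=5.0, inhibitor=0.0, ki=2.0, mode=None):
    """Enzyme reaction rate with optional inhibition (illustrative constants).

    mode='competitive'    -- inhibitor competes for the active site:
                             apparent Km rises, Vmax is unchanged.
    mode='noncompetitive' -- inhibitor binds elsewhere and deforms the enzyme:
                             Vmax drops, Km is unchanged.
    """
    factor = 1.0 + inhibitor / ki
    if mode == "competitive":
        km = km * factor
    elif mode == "noncompetitive":
        vmax = vmax / factor
    return vmax * s / (km + s)

# At very high substrate concentration, competitive inhibition is overcome...
print(rate(1e6, inhibitor=4.0, mode="competitive"))
# ...but noncompetitive inhibition still caps the rate well below Vmax.
print(rate(1e6, inhibitor=4.0, mode="noncompetitive"))
```

The contrast in the two printed rates is the practical difference between the two inhibitor classes: flooding the system with substrate rescues a competitively inhibited enzyme but not a noncompetitively inhibited one.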

Effects of Allosteric Factors
Some enzymes, known as allosteric enzymes, have one active site and one or more regulatory sites. A molecule that binds to a regulatory site is referred to as an allosteric factor. When such a molecule forms a weak noncovalent bond at the regulatory site, the shape of the enzyme and of its activation center is modified. This change usually decreases enzyme activity, as it inhibits the formation of a new enzyme-substrate complex. However, some allosteric activators increase the affinity between enzyme and substrate and influence enzymatic behavior positively.

We hope this article helped you get an overview of the different factors that promote and inhibit the actions of the various enzymes present in living cells. We can conclude from the information provided here that all enzymes require favorable conditions to function properly; unfavorable conditions tend to influence enzyme activity adversely.
Published: 11/16/2011
Chromatography is a method used to separate the different components of a mixture. There are different types of chromatography, but all of them are based on the same principle: the different molecules or ions in the mixture interact differently with the stationary phase of the chromatograph and get separated in the process. The different techniques of chromatography use different substances as the stationary and mobile phases. Chromatography can be either analytical or preparative. Analytical chromatography is used to determine the relative proportions of the different components in the mixture, while preparative chromatography is used to separate the components themselves.

Chromatography techniques are generally classified on the basis of the mechanism of separation. The types of chromatography, based on the mechanism of separation, are adsorption chromatography, partition chromatography, ion exchange chromatography, molecular exclusion chromatography, and affinity chromatography. Paper chromatography is based on the principle of partition chromatography.

Uses of Paper Chromatography

Paper chromatography is a simple chromatography technique which has many applications. Its main advantage is that it is not very expensive to perform, and provides clear results. Given below are some important uses of paper chromatography.

Obtain Pure Compounds
Paper chromatography is used to obtain pure compounds from a mixture. This is done by cutting out and redissolving the patterns formed by each constituent. Also, this technique can be effectively used to remove impurities from chemical compounds. Due to the process of paper chromatography, the impurities get separated from the compound and the pure compound can be obtained.

Qualitative Analysis of Drugs
Paper chromatography is one of the methods of qualitative analysis, to analyze or separate the different constituents of a mixture. It is a useful tool for separating polar as well as nonpolar solutes. Pharmaceutical companies use this technique to analyze the different compounds in drugs.

Forensic Science
Paper chromatography is useful in the field of forensic science for the investigation of crime. Samples collected from crime scenes are analyzed and identified using this technique.

Separating Colored Pigments
Paper chromatography is an effective technique for separating colored pigments from a mixture. Each pigment gets separated and rises up to a particular level on the chromatography paper.

Determining the Pollutants in Water
This technique is used to determine the pollutants or impurities in water bodies like rivers and lakes, by analyzing a small sample from the water body.

Analyzing Complex Mixtures
Paper chromatography is used to detect the presence of, or identify certain organic compounds such as carbohydrates and amino acids, from a complex mixture of organic compounds.

Pathological laboratories use paper chromatography to detect the presence of alcohol or chemicals in blood. The fact that paper chromatography requires only small samples makes it very useful in pathological testing and diagnostics.

How Does Paper Chromatography Work

To understand the principle of paper chromatography, we must first understand what partition chromatography is. As the name suggests, partition chromatography is a method in which the constituents of a mixture are partitioned, or separated, between two liquid phases. One phase is supported by a solid and is termed the stationary phase; the other is the solvent in which a small amount of the mixture to be separated is dissolved. In paper chromatography, the solid in question is a filter paper, and the stationary phase is the water held in the pores of the filter paper. The following are the steps to perform paper chromatography.

Step 1: Take a long rectangular piece of filter paper and draw a straight line on it using a pencil, a few centimeters above one of its shorter edges. This is your start line. Place a drop of the mixture on the start line, using a capillary tube.

Step 2: Take a glass jar and pour a small amount of the solvent liquid into it. Now, place the filter paper inside the glass jar such that the part of it below the start line, is submerged in the solvent. Do not disturb the setup and you shall see that the solvent in the jar slowly rises up due to the capillary action of the paper. Wait for around 15-20 minutes, till the solvent nearly reaches the top of the paper.

Step 3: Remove the filter paper from the jar and mark the highest point on the paper that the solvent has reached. You will see that the different components of the mixture have been carried to different levels by the solvent. For example, two colored spots might be formed by two different solutes, A and B. This happens because the solutes in the mixture differ in their affinity for the filter paper (the stationary phase): one solute (solute B) is easily carried farther by the solvent, while the other is not. The result is that the solutes are separated from the mixture.

Step 4: When the filter paper has dried, note the distance covered by each constituent from the start line. Now, calculate the retardation factor (Rf value) by the following formula. This value can never be more than 1, which implies that a solute can never travel ahead of the solvent.

Retardation Factor (Rf) = (distance traveled by the solute from the start line) ÷ (distance traveled by the solvent from the start line)

Note: If the spot formed by a component is irregular, you need to measure the distance from the middle of the spot to the start line.
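The Rf formula above can be computed directly. A quick Python sketch, using hypothetical measurements (a solvent front at 10.0 cm, with spots at 3.5 cm and 7.8 cm):

```python
def rf_value(solute_distance_cm, solvent_distance_cm):
    """Retardation factor: distance traveled by the solute divided by
    distance traveled by the solvent, both measured from the start line."""
    if solvent_distance_cm <= 0:
        raise ValueError("the solvent front must have moved past the start line")
    return solute_distance_cm / solvent_distance_cm

# Hypothetical readings from a dried chromatogram:
print(round(rf_value(3.5, 10.0), 2))  # spot A
print(round(rf_value(7.8, 10.0), 2))  # spot B, carried farther by the solvent
```

Since a solute can never outrun the solvent front, the computed Rf value is always at most 1, as noted above.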

This was all about paper chromatography and its uses. However, while performing an experiment on paper chromatography, you need to follow each step carefully in order to get the desired results.

Will they ever get a job? Will they ever keep that job for more than a few months? Will they ever have enough money to pay their student loans and still be able to spend $100 a week on pot? Will they ever put their pants on the right way round at the first attempt?

Now it seems that something they do for recreation, in order to take their mind off their worries, is having increasingly worrying effects.

My hard-core reading of Psychology Today caused me to come across a pained and painful piece called "Porn-Induced Sexual Dysfunction is a Growing Problem."

The thesis behind this frightful news--supported by research performed in Italy and elsewhere--is that Internet porn desensitizes young men to such a degree that, when actually faced with a real human from their target sex group, they are entirely unable to participate as they should.
Indeed, research from the University of Padua in Italy suggested that erectile dysfunction due to excessive Web porn begins for many men in their teens. Seventy percent of the young men who came to seek help for performance issues said they were Web porn habitués.

The weary and wise might offer that this problem must be psychological. Yet the researchers declare: "Hold on there, big brains."

For their belief is that Web porn simply numbs men's pleasure receptacles, desensitizing responses to the neurochemical dopamine. This is a chemical associated with reward and, in young men, researchers believe that gorging on Internet porn simply shuts down the physiological sense of reward from sex.

Because the Web allows for so many different--and, if the user so chooses--ever more intense stimulations, the mind-body continuum begins to feel nothing at all. Yes, it's a little like 15 minutes of "Keeping Up With the Kardashians."

It seems that when these young men are suddenly confronted with a real sexual encounter, the idea of coupling with a real human being feels suddenly numbing--and therefore frightening.

You might wonder what happens when young men try to wean themselves off their Web porn habits. Studies show that they experience all sorts of withdrawal pains, including insomnia and catchall flulike symptoms.

I know that the Web is supposed to be the repository of all that is open and shared and loving. It seems possible, though, that its very ease offers so much of a good thing that the put-upon males of Generation Y just can't cope, poor dears.

Perhaps all porn Web sites should exclude anyone under 35. For public health reasons, you understand.

From :
We're not exactly lacking in opportunities for Minority Report references these days, but sometimes they're just unavoidable. According to a new report from CNET based on documents obtained by the Electronic Privacy Information Center, the US Department of Homeland Security is now working on a system dubbed FAST (or Future Attribute Screening Technology) that's designed to identify individuals who are most likely to commit a crime. That's not done with something as simple as facial recognition and background checks, however, but rather algorithms and an array of sensors and cameras that can detect both physiological and behavioral cues that are said to be "indicative of mal-intent." What's more, while the DHS says that it has no plans to actually deploy the system in public just yet, it has apparently already conducted a limited trial using DHS employees -- though there's no word on how well it actually works, of course. Hit the source link below for the complete (albeit somewhat redacted) documents.

from :

ExoPC may not have bowled folks over with its own Slate last year (or met its own promise of some all-in-one PCs this summer), but the company did produce an unquestionably unique UI, which it's since been trying to license to others. Now it's found what appears to be its first taker in Skytex, which has adopted the custom touch layer for its new Skytab S Series Windows 7 tablet. Like the ExoPC itself, this one packs a 9.7-inch capacitive display, although the internals get an upgrade to a dual-core Atom N550 processor, which is paired with 2GB of DDR3 RAM and an as-yet-unspecified amount of storage. ExoPC also describes this particular version of the UI as a "special edition," although it's not showing off too many of the changes just yet. There's no word on a price yet either, but the tablet's expected to ship in early October.

from :

The only reason that HP (or Samsung, Motorola, Nokia/Microsoft, and the others) has a chance in this market is that those incompatible mobile OSes nevertheless all run the two greatest killer apps of them all, aka web browsers and GPS navigation. Most of the other apps (mail, social networks, etc.) that users need are web or cloud-based and can be accessed via the browser if an alternative is not provided. In this sense, there is only one OS and it is the one supported by HTML and other established web formats. Apple may be at a slight disadvantage because it does not support Adobe's Flash standard.

What those companies may need to do is identify a handful of the top killer apps in the existing market and spend the right amount of money to make sure that those apps migrate to their own platforms. Providing powerful tools that can facilitate app migration should probably be at the top of their to-do list. After that, the market will become very much like the car industry, a matter of status, gadgetry, trend and fashion. Eventually, there will only be one basic OS, preferably a user-customizable one.

In buying my own personal notebook computer, I was faced with a myriad of questions. Just what make and model will meet my number-crunching and stock market analysis needs? What should I consider in terms of portability, accessibility and upgradability?

Having used desktop and notebook computers officially supplied by the company where I was employed, I had never needed to worry about what type of notebook computer or what configuration my work required.

So when I finally had to purchase my own notebook computer for personal use, I found myself facing a myriad of questions. Just what should I look out for when buying my own notebook computer?

First, I had to quantify my own needs for a notebook computer. Having quantified my needs, which were to do a lot of number crunching and to perform technical analysis and charting of stock prices online, I found that even low-priced models could handle the work my needs demanded.

I was pleasantly surprised that my needs did not demand a high priced model.

Secondly, the notebook computer I required would need to be sufficiently light. In the process of identifying the notebook computer, I decided I did not need a subnotebook: most notebook computers weigh between 5 and 7 kg, with subnotebooks weighing 5 kg or less.

The standard notebook computer was sufficient for my needs, coming with word-processing software already installed as part of the package and with Internet access capabilities. All I needed was to install my specialized technical analysis program to monitor stock prices.

At the same time, advances in notebook computer technology ensured that I had wireless capability and could hook up online at any hotspot, giving me mobile wireless access anywhere I go. I could also use a pen drive for additional mobile storage.

Finally, I decided that I really did not need the notebook's upgrade options, preferring to use the notebook for a period of two years at most. This was because I discovered that parts and accessories were expensive, making upgrades costly. Changing to a completely new model after two years appeared to be the better proposition in terms of power, functionality and cost savings.

Having made these decisions, the next step was to go online for a price comparison. Shopping online allowed me the convenience of researching each of the notebook computers that caught my attention, without feeling pressured to make a quick decision.

There were some sites that allowed the added convenience of comparing different models side by side, and doing so was very useful in helping me to make my final decision on the notebook computer.

If you are faced with the task of purchasing your own notebook computer, the considerations mentioned above will help you make a wise initial selection.
It is no secret that servers can take up a lot of floor space and power, which can make them inconvenient to manage. One way to save space and power is to consolidate servers, and tackling consolidation promptly eases the frustrations of an overdue consolidation process. Server consolidation projects can also be accelerated via automation and virtualization. Platespin server consolidation helps to accelerate consolidation projects and reduce errors, without requiring any contact with the physical machines.

Platespin allows managers to measure and evaluate resource utilization in order to speed up capacity planning for consolidation projects. This is accomplished by remotely gathering information about the server, such as its operating system, memory, CPU speed and network. Platespin server consolidation works on Windows NT, 2000, and 2003 systems. The system works without agents, so the need to manually deploy software is eliminated, as is the risk of missing certain agent dependencies. Platespin is also very simple and lightweight, so it can start to collect data in about a minute.

Platespin completely automates the physical-to-virtual migration of data, allowing servers to be consolidated quickly and with more ease. A drag-and-drop interface lets the user convert machines running Windows or Linux into fully functional virtual machines hosted on several types of servers, including VMware GSX Server, Microsoft Virtual Server 2005, or simply a Platespin Flexible Image file.

Network configurations, CPU cycles, disk space, and memory allocations can all be converted rather quickly. This allows users to right-size target servers as the conversion process is occurring. As a direct result, data centers can increase the number of servers that can be consolidated, further optimizing resource utilization rates.

Server consolidation may seem complicated, but the right program can make it quite simple. Platespin automates many processes and allows many different factors to be converted quickly. This means that the total time for consolidating servers is reduced.
Visit Dell mobile reseller near you
New horizons. Colourful adventures. Exhilarating moments.
Inspiration sees no end in the world you create with the Dell™ XCD28. Bring your imagination to life with the smart 3G experience, Wi-Fi™, GPS and a host of exciting applications.
  • Android™ 2.1 OS
  • 3G HSDPA speeds up to 7.2 Mbps plus EDGE, GSM support
  • Wi-Fi b/g connectivity plus Bluetooth® 2.1 EDR plus AGPS plus FM
  • TFT LCD full touch up to 262K colours
  • 3.2 MP camera with 5x zoom
  • GPS will never leave you stranded. You can find the nearest city or simply find your way home with Google Maps
  • Storage — 200MB internal user memory plus support for external SD card up to 8GB
  • Access to Corporate Exchange Emails
  • Document Reader (Word, PDF, Excel, PowerPoint)
  • Includes 2GB Micro SD card
If you are running Microsoft® Windows® XP or Windows XP Service Pack 1 (SP1) and your computer has been infected by the Sasser worm, you can follow the steps below to update your software, remove the worm, and protect your computer against future attacks.

Step 1: Disconnect Your Computer from the Internet
To prevent further problems, disconnect your computer from the Internet:

Broadband connection users: Find the cable that connects the computer to the external DSL or cable modem, or the cable that connects the modem to the phone jack. Unplug that cable from the modem or from the phone jack, so that your computer is no longer connected to the Internet.
Dial-up connection users: Find the cable that connects the internal modem in your computer to the phone jack, then unplug it so that your computer is no longer connected to the Internet.
Step 2: Stop the Shutdown Cycle
The worm causes the LSASS.EXE file to stop responding, which then causes the operating system to shut down after 60 seconds. If your computer starts to shut down, follow these steps to abort the shutdown.

On the taskbar at the bottom of your screen, click Start, then click Run.
Type: cmd and then click OK.
At the prompt, type: shutdown.exe -a and then press ENTER.
Step 3: Take Preventive Measures
You can periodically clear away all signs of the worm infecting your computer by creating a log file.

Create the log file

On the taskbar at the bottom of your screen, click Start, then click Run.
Type: cmd and then click OK.
At the prompt, type: echo dcpromo >%systemroot%\debug\dcpromo.log and press ENTER.
Make the log file read-only

At the prompt, type: attrib +R %systemroot%\debug\dcpromo.log and then press ENTER.
Step 4: Improve System Performance
If your computer shows signs of a slow Internet connection, the worm may be trying to spread into your network, making it difficult for you to download and install the software update. To improve system performance:

Press CTRL+ALT+DELETE, then click Task Manager.
For each of the following tasks that may appear in the list, click the task to select it, then click the End Task button to end it:
Any task ending in _up.exe (for example, 12345_up.exe).
Any task beginning with avserve (for example, avserve.exe).
Any task beginning with avserve2 (for example, avserve2.exe).
Any task beginning with skynetave (for example, skynetave.exe).

Caution: Do not end the wmiprvse.exe task; it is a system task that is in use.

Step 5: Enable a Firewall
A firewall is software or hardware that creates a protective barrier between your computer and the Internet. If your computer is infected, a firewall will help limit the worm's effects. Windows XP includes the Internet Connection Firewall (ICF). To enable ICF:

On the taskbar at the bottom of your screen, click Start, then click Control Panel.
Click the Network and Internet Connections category.
(If Network and Internet Connections is not visible, click Switch to Category View on the left side of the Control Panel window.)
Click Network Connections.
Right-click the Dial-up, LAN, or High-Speed Internet connection that you use to access the Internet, then click Properties on the shortcut menu.
On the Advanced tab, under Internet Connection Firewall, select Protect my computer and network, then click OK. The Windows XP firewall is now enabled.
Step 6: Reconnect to the Internet
Reconnect the cable described in Step 1 to the back of your computer, to the phone jack, or to the modem.

Step 7: Install the Required Update
To help protect your computer against the worm in the future, you must download and install security update 835732, announced in Microsoft Security Bulletin MS04-011. To download security update 835732, look for it on the site

Step 8: Check For and Remove Sasser
After you have installed the security update and restarted your computer, go to the web page "What You Should Know About the Sasser Worm and Its Variants" on the site Use the Sasser worm removal tool to scan your hard disk and clean it of Sasser.A, Sasser.B, Sasser.C, and Sasser.D.

About the Internet Connection Firewall
The Windows XP Internet Connection Firewall can block useful tasks such as sharing files or printers over a network, transferring files, or hosting multiplayer games. Microsoft nevertheless recommends that you use a firewall to protect your computer.

If you have enabled the Internet Connection Firewall and run into problems with a task you want to perform, read "How to Open Ports in the Windows XP Internet Connection Firewall" on the site

If you have more than one computer, want further technical information, or want to learn more about firewalls, read "Frequently Asked Questions About Firewalls" on the site
People have more flexible schedules thanks to wireless networking. They can now work from home while taking care of their kids or doing housework, with no more stress from traffic jams. Isn't this great?

Well, there is something you should realize. Working from home while using a wireless local area network (WLAN) may lead to theft of sensitive information and hacker or virus infiltration unless proper measures are taken. As WLANs send information over radio waves, someone with a receiver in your area could be picking up the transmission, thus gaining access to your computer. They could load viruses on to your laptop which could be transferred to the company's network when you go back to work.

Believe it or not, up to 75 per cent of WLAN users do not have standard security features installed, while 20 per cent are left completely open, because default configurations are not secured but are designed to let users get their network up and running as quickly as possible. It is recommended that wireless router/access point setup always be done through a wired client.

You can set up your security by following these steps:

1. Change the default administrative password on your wireless router/access point to a secure password.

2. Enable at least 128-bit WEP encryption on both the card and the access point, and change your WEP keys periodically. If your equipment does not support at least 128-bit WEP encryption, consider replacing it. Although there are security issues with WEP, it represents a minimum level of security and should be enabled.

3. Change the default SSID on your router/access point to a hard-to-guess name. Set up your computer to connect to this SSID by default.

4. Set up the router/access point not to broadcast the SSID; the same SSID then needs to be configured manually on the client side. This feature may not be available on all equipment.

5. Block anonymous Internet requests or pings. On each computer with a wireless network card, the network connection properties should be configured to allow connection to access point networks only; computer-to-computer (peer-to-peer) connections should not be allowed.

6. Enable MAC filtering, denying association to the wireless network for unspecified MAC addresses. MAC (physical) addresses are available through your computer's network connection setup, and they are physically written on the network cards. When adding new wireless cards or computers to the network, their MAC addresses should be registered with the router/access point. The network router should have firewall features enabled and the demilitarized zone (DMZ) feature disabled.
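Registering a client with the router's MAC filter requires knowing its address first. As a minimal illustrative sketch (assuming Python is available on the client; this snippet is not part of the original tips), the standard library can report the local machine's hardware address in the usual colon-separated notation:

```python
import uuid

# uuid.getnode() returns this machine's hardware (MAC) address as a
# 48-bit integer; if no MAC can be read, it falls back to a random value.
mac_int = uuid.getnode()

# Format the integer as the familiar colon-separated hex notation,
# taking one byte at a time from the most significant end.
mac = ":".join(f"{(mac_int >> shift) & 0xff:02x}" for shift in range(40, -8, -8))
print(mac)
```

The printed address is what you would enter in the router/access point's list of allowed clients.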

All computers should have a properly configured personal firewall in addition to a hardware firewall. You should also update the router/access point firmware when new versions become available. Locating the router/access point away from strangers is also helpful, so they cannot reset it to default settings. You can even try to locate the router/access point in the middle of the building rather than near windows, to limit signal coverage outside the building.

There is no guarantee of full protection for your wireless network, but following these suggested tips can definitely lessen your risk of exposure to attackers targeting insecure networks.
In studying for your CCNA exam and preparing to earn this valuable certification, you may be tempted to spend little time studying static routing and head right for the more exciting dynamic routing protocols like RIP, EIGRP, and OSPF. This is an understandable mistake, but still a mistake. Static routing is not complicated, but it's an important topic on the CCNA exam and a valuable skill for real-world networking.

To create static routes on a Cisco router, you use the ip route command followed by the destination network, network mask, and either the next-hop IP address or the local exit interface. It's vital to keep that last part in mind - you're either configuring the IP address of the downstream router, or the interface on the local router that will serve as the exit interface.

Let's say your local router has a serial0 interface with an IP address of, and the downstream router that will be the next hop will receive packets on its serial1 interface with an IP address of The static route will be for packets destined for the network. Either of the following ip route statements would be correct.

R1(config)#ip route (next-hop IP address)


R1(config)#ip route serial0 ( local exit interface)

You can also write a static route that matches only one destination. This is a host route, and has for a mask. If the above static routes should only be used to send packets to, the following commands would do the job.

R1(config)#ip route (next-hop IP address)


R1(config)#ip route serial0 ( local exit interface)

Finally, a default static route serves as a gateway of last resort. If there are no matches for a destination in the routing table, the default route will be used. Default routes use all zeroes for both the destination and mask, and again a next-hop IP address or local exit interface can be used.

R1(config)#ip route (next-hop IP address)


R1(config)#ip route serial0 ( local exit interface)

IP route statements seem simple enough, but the details regarding the next-hop IP address, the local exit interface, default static routes, and the syntax of the command are vital for success on CCNA exam day and in the real world.
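Because the specific addresses were lost from the examples above, here is a hedged sketch of all three route types with made-up values: the network 10.1.1.0/24, the host 10.1.1.1, the next-hop address 172.16.12.2 and the exit interface serial0 are illustrative assumptions only, not values from the original configuration.

```
! Ordinary static route to a /24 network (illustrative addresses)
R1(config)# ip route 10.1.1.0 255.255.255.0 172.16.12.2
R1(config)# ip route 10.1.1.0 255.255.255.0 serial0

! Host route: a 255.255.255.255 mask matches only the single host 10.1.1.1
R1(config)# ip route 10.1.1.1 255.255.255.255 172.16.12.2

! Default static route: all zeroes for both destination and mask
R1(config)# ip route 0.0.0.0 0.0.0.0 serial0
```

In each pair, the last argument is either the downstream router's IP address or the local router's exit interface, exactly as described above.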
Quite a few people enjoy going to concerts, regardless of age, favorite music genre or other preferences such as band and/or location. People enjoy concerts because their favorite artists, or some classic legends, are performing so close to them that everything feels dream-like. For this reason, amateurs have always tried to reproduce their idols' performances, especially when it comes to guitar training. In the process, artists have realized the need to slow down guitar solo performances in order to better understand the musical notes, tone and overall riff. There are various ways to slow down a guitar solo, methods applied worldwide in music studios, even though there is always a much cheaper and similarly reliable alternative.

Every guitar student has more than once heard a guitar lick that he or she wanted to learn, even if it seemed too fast to follow, making it a very time-consuming activity to try to reproduce at normal playing speed. Trying to learn a part of a song with too much going on, or simply attempting to pick out just the piano in a specific performance, makes it very difficult for artists to improve their skills. Pulling different instruments out of a song so that you can solo over it, or learn that song using that specific instrument part, can be a very tedious if not difficult process. To achieve this, various pieces of studio hardware have been deployed over the years to slow down guitar solos so that artists could both track and improve performances.

As expensive as it may be, this type of hardware did its job. With the ability to slow down a guitar solo, performers such as Jimi Hendrix or Joe Satriani could easily be followed by amateurs trying to reproduce their riffs and tones with the feeling of a real rock superstar in their minds. This feeling is often reinforced when famous guitarists express their skills in terrific guitar solos, parts of songs or individual performances, which more often than not remain in the memory of the entire audience, lasting over time in the hearts and minds of many. For this reason, being able to slow down a guitar solo has made it possible for many guitar amateurs to become their very own superstars in a very short amount of time.

Sad but true, the old methods are nowadays obsolete, especially since the revolution the PC has brought to nearly all industries, including, of course, the music industry. Software products such as the award-winning Riff Master Pro have managed to put much of the hardware used to slow down guitar solos into the box, making it possible to instantly slow down a guitar solo without changing the pitch, with a feature-rich interface that lets users save an already slowed-down guitar solo for later training purposes.

This software is helpful for transcribing, working out a difficult riff, learning new song techniques and slowing down music for dance performances, while remaining very easy to use and accessible even for a non-technical PC user.
Almost all recent technological inventions seem to embody a seamless blend of sophistication and complexity in how they function, and it often happens that the end users of these products are all but blind to the technicalities involved. The growing popularity of computer user interface design boils down to a similar idea. Creating an effective user interface design is hard work, churned out by a set of highly dedicated designers and software engineers, and their efforts are weighed by how well the interface augments the user experience, which is undoubtedly the key to acceptance.

As they say, outsourcing is in the air, and it has given the global business scenario a perfect facelift in terms of cost-effectiveness and efficiency. And why not: software application development is experiencing the boom too, with user interface design services being outsourced to far-off nations. From the user's perspective, a user interface is a collection of well-placed controls and displays for interacting with the computer, so the simpler the design, the greater its efficacy. Creating an effective user interface is therefore all the more complicated, as it is one of the most decisive components of software application design.

Claiming that a user interface is truly user-optimized calls for a rather systematic approach to the design. Given the importance and technicalities involved, seeking professional help is simply the way to go. When outsourcing interface design services, the benefits prevail over the few downsides: it means not just getting experienced hands on the project, but also a cutback in direct staff and cost. By joining hands with the lead players in the industry, most web-based as well as software application development requirements will be covered. Experts in user interface design can juggle whatever hardware and software demands their clients come up with.

Anyhow, with a suitable amount of research and groundwork, it should not be too demanding a task to find your way to the best players in the field that handle interface design services.
It's easy to be confused by all the different options when purchasing a laptop computer. There are literally hundreds of models to choose from for all different prices.

The key to finding the right laptop for you is determining firstly what your needs are going to be, then determining how much money you are willing to spend.

There are some general factors to consider. The first is the size of the laptop. Do you want an ultra-portable laptop that's small and lightweight, or something more like a desktop replacement, compromising on size and weight?

The second factor to consider is the size of the hard drive. Laptop hard drives are a lot smaller than desktop hard drives; a standard laptop hard drive would be around 100 GB. If you need to store large files such as videos, you will have to consider getting something with an upgraded hard drive.

The third factor is the size of the memory. A standard amount would be 256-512 MB; anything above this costs extra. Consider what you are going to use the laptop for: if word processing and web surfing are all you want to do, then 256 MB would be more than enough.

The final factor is pricing. This is completely a personal choice, but you do get what you pay for. Laptop prices have dropped dramatically, and a laptop can now be bought for as little as $1000. Considering that a couple of years back you couldn't get a laptop for under $2000, this price isn't too bad. But of course it all comes back to your needs: if a high-end laptop is what you need for picture or video editing, then expect to pay over $2500.

Buying a laptop doesn't need to be a difficult task. If you evaluate your needs first and choose carefully, your new purchase will be a profitable one.
If we want to understand the reasons behind Firefox's success, we have to look at the origins of the browser.
In September 2002, the first version of the browser, called Phoenix, was released to the public. The browser was based on the Gecko engine from the Mozilla Suite. After a number of releases the name was changed to Firebird, but due to a legal dispute it was changed again, to Firefox. This browser has received a great deal of publicity as an alternative to Internet Explorer.

There are many factors behind Firefox's success, but I think the added features and the marketing strategy make a whole lot of difference in users adopting the software. Another thing I want to add is that after winning the battle with Netscape's browser, Internet Explorer was left with no significant changes. Of course, this is changing with the upcoming new version of Windows, called Vista, and a new version of Internet Explorer. I suppose Microsoft is trying to correct omissions and bugs at various levels of the browser.

We are now going to explore the main features Firefox has at the moment. One of the main goals of the developers working on Firefox is enhanced usability and accessibility for the end user. Tabbed browsing, where you load many pages in the same window, is a valuable feature in Firefox as it can make your browsing a lot faster. Pop-up blocking eliminates those irritating ads, and the user can easily find information on a particular page using the 'find as you type' feature. The built-in search bar includes all the major search engines, such as Google and Yahoo, and you can add more search engines if you want. People working on the accessibility of the browser have managed to make Firefox work with several screen readers, screen magnifiers and on-screen keyboards. These accessibility features can help people with impairments browse the Internet more easily than before.
Another feature Firefox users like very much is that they can easily customize many aspects of the browser. Extensions such as the popular Web Developer or the Venkman debugger can be added to the browser to enhance the functionality of Firefox. Users often like the appearance to match their preferences, so they use different themes in Firefox; themes change the visual appearance of the browser.

Security is really important for end users and corporations. Both want a secure browser that they can trust, without the security holes of Internet Explorer and its ActiveX technology. Mozilla Firefox fulfills this requirement mainly by giving other developers the opportunity to check the code for security bugs, and by using various successful security techniques and models such as the sandbox security model. In addition, the browser runs on many different platforms, and the source code is freely available for anyone to compile and contribute to the project.

We have seen the numerous features that Firefox has, but I would like to talk a little about the marketing strategy being used. The development of the browser is supported by the search engines Google and Yahoo through partnerships, and mostly by the open source community. The Mozilla Foundation, which is responsible for the development of the browser, believes that community-based marketing can be successful. They proved their point by using a community-based marketing web site called spread. They were able to place an ad in the New York Times through donations made by the community of developers and devotees during the release of Firefox 1.0.

The secret behind Firefox's success is the valuable features available to the user and the enthusiastic community, which helps financially through donations and spreads the word.
The rapid development of the World Wide Web in recent years has led to an explosive growth of information on the Internet. Our contemporary lifestyle would be unimaginable without access to such a super-abundant cornucopia of valuable information and web surfing has now become an everyday occupation for even the most diverse sections of society.

This rapid expansion of web resources raises some new issues for all of us. How could you possibly remember, after a long search, the address of that crucial web page? How will you be able to return to the page without repeating a tedious web search through hundreds and thousands of pages?
The answer is obvious, you need a program that will allow you to easily create and manage a database of web resources. Of course, this database must be quick, intuitive and convenient to use.

One way to resolve this problem is to use your web browser's bookmarks feature. Bookmarks are a popular term for the lists of web page links stored in web browsers, although they are called 'Favorites' in Internet Explorer. These web browser bookmark systems have some severe limitations. For example, each bookmark list will only be compatible with a specific web browser. If you use several different web browsers you will have to manage the bookmark system in each one. Web browser bookmark lists may become cumbersome to use when your bookmark list grows beyond a few items. Important features missing from web browser bookmark systems include:
- Powerful search functions;
- Synchronization of bookmarks between different computers;
- Detection and automatic deletion of duplicate bookmarks;
- Checks for availability of bookmarked web pages.
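The last two features in the list above are simple enough to sketch in code. The following Python snippet (the bookmark URLs are made up for illustration) detects duplicates by comparing normalized URLs and performs a best-effort check on whether a bookmarked page is still reachable:

```python
from urllib.parse import urlsplit
from urllib.request import urlopen
from urllib.error import URLError

def normalize(url):
    """Normalize a URL so trivially different duplicates compare equal."""
    parts = urlsplit(url.strip().lower())
    host = parts.netloc[4:] if parts.netloc.startswith("www.") else parts.netloc
    return (parts.scheme or "http", host, parts.path.rstrip("/"))

def find_duplicates(bookmarks):
    """Return every bookmark whose normalized URL was already seen."""
    seen, dupes = set(), []
    for url in bookmarks:
        key = normalize(url)
        if key in seen:
            dupes.append(url)
        else:
            seen.add(key)
    return dupes

def is_alive(url, timeout=5):
    """Best-effort availability check for a bookmarked web page."""
    try:
        return urlopen(url, timeout=timeout).status < 400
    except (URLError, ValueError):
        return False

bookmarks = ["http://example.com/docs/",
             "HTTP://www.example.com/docs",
             "http://example.com/blog"]
print(find_duplicates(bookmarks))  # the second entry duplicates the first
```

A real bookmark manager would of course store titles, folders and tags alongside each URL, but the core of duplicate detection is exactly this kind of normalization.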

Specialist programs and web services that store and organize bookmarks are now available and they offer a comprehensive solution to these problems. They are called bookmark managers or bookmark organizers (in this article both terms have the same meaning). The difference between online (web-based) bookmark managers and standalone bookmark managers is in the location of the stored bookmark database and in the way that the database is accessed. Web services called 'online bookmark managers' store the user's bookmarks on their remote servers and their bookmarks may be accessed from any browser. A standalone bookmark organizer is simply a program which runs on your local computer. It stores the bookmark database on a hard disk and allows access through its own built-in interface.

Some examples of web-based bookmark managers are LinkaGoGo and Murl. Examples of bookmark management software include Link Commander, Linkman and Powermarks. Any software catalog will contain plenty of links to other bookmark managers.

Offline and online bookmark managers each have relative advantages and disadvantages due to their differing methods of database storage and access.
An online bookmark manager does not depend on any particular computer. If you have an Internet connection you can access your bookmarks from any computer in the world. You don't need to synchronize the bookmarks on your home/work PC or notebook because they will all access the same bookmarks database. With an online bookmark manager you can access your bookmarks even when you are in an Internet cafe! Another advantage is that most of them are free. They will cost you time, though, because you access your bookmarks via an Internet connection. More importantly, most of the web interfaces are not as convenient as software based bookmark managers and don't have so many useful features. For example, they can't search for and delete duplicate database items. Here are some of the other potential disadvantages of using online bookmark managers:

1) You risk losing all your bookmarks if, for some reason, the web service closes down.

2) There is a danger of unauthorized access to your private bookmarks because your bookmark manager server may not be secure against hackers.

The advantages and disadvantages of offline bookmark managers are almost exactly opposite to those of online bookmark managers and will be discussed next.

Any offline bookmark manager is tied to the computer on which it is installed. It stores your bookmarks in a database (which usually has its own proprietary format) that is located on one of the hard drives. To use your bookmarks on several computers you will need to install the program on each computer and find a way to synchronize the bookmark databases. Most of the currently available bookmark organizers do have a database synchronization feature. Also, there are now devices with high data transfer speeds (e.g. flash drives) that can store an independent bookmark database and allow it to be shared between several computers.

Another disadvantage of bookmark manager software is the price. There are some free programs out there, but they don't have a great number of features and technical support is often weak or unavailable. The programs that require payment are inexpensive, though, usually costing from $20 to $40. The user licenses of such programs will normally allow you to install the programs on all of your computers.

In my opinion, the disadvantages of standalone bookmark managers are minimal compared to their advantages. The location of both the program and database on the same computer guarantees you fast access to your bookmarks and high security from hacker attacks. The convenience of the program interface and the number of useful features are limited only by the power of the computer and the skills of developers.

So, how should you organize your bookmarks? Should you use an online or offline bookmark manager? I don't think there is a definite answer. It all depends on your preferences and working habits. If mobility is your priority, if you travel often and wish to access your bookmarks no matter where you are and from any computer, then you should consider an online bookmark manager. If speed, ease of use, security and functionality are most important to you, then an offline bookmark manager might be a better choice.
Wireless networks use radio waves instead of wires to transmit data between computers. Here's how:

The Binary Code: 1s and 0s

It's well known that computers transmit information digitally, using binary code: ones and zeros. This translates well to radio waves, since those 1s and 0s can be represented by different kinds of beeps. These beeps are so fast that they're outside the hearing range of humans.

Morse Code: Dots And Dashes

It works like Morse code, which is a way to transmit the alphabet over radio waves using dots (short beeps) and dashes (long beeps). Morse code was used manually for years via telegraph to get information from one place to another very quickly. More importantly for this example, though, it is a binary system, just as a computer system is.

Wireless networking, then, can be thought of as a Morse code for computers. You plug in a combined radio receiver and transmitter, and the computer is able to send out its equivalent of dots and dashes (bits, in computer-speak) to get your data from here to there.
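As a toy illustration of the analogy (not how real wireless framing actually works), here is the round trip from text to a stream of 1s and 0s and back, the computer's equivalent of dots and dashes:

```python
def to_bits(message):
    """Encode each character as 8 bits -- the 'dots and dashes' the radio sends."""
    return "".join(format(byte, "08b") for byte in message.encode("ascii"))

def from_bits(bits):
    """Decode the bit stream back into text at the receiving end."""
    chunks = [bits[i:i + 8] for i in range(0, len(bits), 8)]
    return bytes(int(chunk, 2) for chunk in chunks).decode("ascii")

bits = to_bits("Hi")
print(bits)             # 0100100001101001
print(from_bits(bits))  # Hi
```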

Wavelengths And Frequencies

You might wonder how the computer can send and receive data at high speed without becoming garbled nonsense. The key to wireless networking is how it gets around this problem.

First, wireless transmissions are sent at very high frequencies, which allows more data to be sent per second. Most wireless connections use a frequency of 2.4 gigahertz (2.4 billion cycles per second) -- a frequency similar to mobile phones and microwave ovens. However, this high frequency produces a wavelength that is very short, which is why wireless networking is effective only over short distances.
Wireless networks also use a technique called "frequency hopping." They use dozens of frequencies, and constantly switch among them. This makes wireless networks more immune to interference from other radio signals than if they transmitted on a single frequency.
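Frequency hopping can be illustrated with a toy simulation: sender and receiver derive the same hop sequence from a shared seed, so both switch channels in lockstep, while a narrowband interferer parked on one frequency only collides occasionally. (The channel numbers and hop count below are illustrative, not real 802.11 parameters.)

```python
import random

CHANNELS = list(range(2402, 2481))  # 1 MHz slots in the 2.4 GHz band (illustrative)

def hop_sequence(shared_seed, n_hops):
    """Both ends derive the same pseudo-random channel sequence from a shared seed."""
    rng = random.Random(shared_seed)
    return [rng.choice(CHANNELS) for _ in range(n_hops)]

sender = hop_sequence(shared_seed=42, n_hops=10)
receiver = hop_sequence(shared_seed=42, n_hops=10)
assert sender == receiver  # lockstep: every hop lands on the expected channel

# A jammer parked on a single channel only hits a fraction of the hops.
jammed = 2412
hits = sum(ch == jammed for ch in sender)
print(f"{hits} of {len(sender)} hops hit the jammed channel")
```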

Internet Access Points

The final step for a wireless network is to provide Internet access for every computer on the network. This is done by a special piece of wireless equipment called an access point. An access point is more expensive than a wireless card for one computer, because it contains radios capable of communicating with around 100 computers and sharing Internet access among them. Dedicated access points are necessary only for larger networks. With only a few computers, it is possible to use one of them as the access point, or to use a wireless router.

Industry Standards

Wireless equipment from different manufacturers can work together to handle these complex communications because there are standards which guide the production of all wireless devices. These standards are collectively known as 802.11. Because of industry compliance with these standards, wireless networking is both easy to use and affordable today.

Wireless Is Simple To Use

If all this talk of frequencies has you worried -- relax. Wireless networking hardware and software handle all of this automatically, without need for user intervention. Wireless networking, for all its complicated ability, is far simpler to use than you might expect.

Sunday, 20 March 2011 - 17:00 WIB
Ahmad Taufiqurrakhman - Okezone
JAKARTA - The official website of the Football Association of Indonesia (PSSI) has been hacked. This was evident from the image of a rat holding a pistol that appeared on the site.

When Okezone opened the PSSI site on Sunday (20/3), an image of a rat holding a pistol, captioned 'Stop Korupsi' ('Stop Corruption'), was displayed prominently on the site's front page.

"The site '' has been attacked by hackers again, the second time in the last three days. The first attack was claimed by 'soldier of Allah', while the latest came from 'aktivis tukang gorengan peduli indonesia' ('fried-snack vendor activists who care about Indonesia'). It is not yet clear what spirit drives these hacker friends: mere mischief, or something deliberate, part of the dynamics surrounding the PSSI congresses on 26 March and 29 April?" wrote Tubagus Adhi Priyanto, PSSI Deputy for Media, on his Facebook account on Sunday (20/3/2011) at 16.20 WIB.

Besides 'Stop Korupsi', the bottom of the official PSSI site's front page also carried the words 'Stop Korupsi & Suap di Indonesia' ('Stop Corruption & Bribery in Indonesia') and 'Hacked by Aktivis Tukang Gorengan Peduli Indonesia'.

In addition to replacing the image on the PSSI site's main page, the 'Aktivis Tukang Gorengan' hackers also blocked access to the rest of the site.

It appears the public has grown so fed up with the current poor management of PSSI that they hacked its official website. This is borne out by the corruption case that made general chairman Nurdin Halid a suspect back in 2007.

Source: Okezone
Bluetooth History
The name Bluetooth came from a prestigious project promoted by giant international companies engaged in telecommunications and computers, among them Ericsson, IBM, Intel, Nokia and Toshiba.

The project began in early 1998 under the code name Bluetooth, inspired by a Viking king of Denmark named Harald Blåtand. King Harald Blåtand rose to power in the 10th century by bringing most of Denmark and the surrounding Scandinavian region under his control. Because his territory was so vast, King Harald Blåtand is said to have funded scientists and engineers to build a high-tech project aimed at controlling the forces of the tribes across the Scandinavian region from a distance. That is why the project was named Bluetooth, the English form of Blåtand.
Bluetooth versions 1.0 and 1.0B were first released on July 26, 1999. These early versions were far from perfect: they had many problems, and the supporting manufacturers had difficulty implementing the technology in their hardware. Among other issues, the Bluetooth Device Address (BD_ADDR) was transmitted during the connection (handshaking) process between two devices, so user privacy could not be guaranteed, and using the protocol anonymously (anonymity mode) was not possible in this version.