With a simple trick, the humble spud can be made into a battery, so could potato powered homes catch on?
Mashed, boiled, baked or fried? You probably have a preference for your potatoes. Haim Rabinowitch, however, likes his spuds “hacked”.
For the past few years, researcher Rabinowitch and colleagues have been pushing the idea of “potato power” to deliver energy to people cut off from electricity grids. Hook up a spud to a couple of cheap metal plates, wires and LED bulbs, they argue, and it could provide lighting to remote towns and villages around the world.
They’ve also discovered a simple but ingenious trick to make potatoes particularly good at producing energy. “A single potato can power enough LED lamps for a room for 40 days,” claims Rabinowitch, who is based at the Hebrew University of Jerusalem.
The idea may seem absurd, yet it is rooted in sound science. Still, Rabinowitch and his team have discovered that actually launching potato power in the real world is much more complex than it first appears.
While Rabinowitch and his team have found a way to make potatoes produce more power than usual, the basic principles are taught in high school science classes to demonstrate how batteries work.
To make a battery from organic material, all you need is two metals: an anode, the negative electrode, such as zinc, and a cathode, the positive electrode, such as copper. The acid inside the potato reacts with the zinc and copper, and as electrons flow from one material to the other, energy is released.
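The voltage such a cell can deliver follows from the two metals' standard electrode potentials. A minimal sketch using textbook values (an actual potato cell typically reads somewhat lower under load, since the real cathode chemistry differs from the ideal copper half-cell):

```python
# Sketch: estimating the open-circuit voltage of a zinc/copper cell from
# standard electrode potentials. These are textbook values, not measurements
# taken on a potato cell.
E_ZINC = -0.76    # V, standard potential of the Zn2+/Zn half-cell (anode)
E_COPPER = 0.34   # V, standard potential of the Cu2+/Cu half-cell (cathode)

emf = E_COPPER - E_ZINC  # cell EMF = cathode potential minus anode potential
print(f"Ideal Zn/Cu cell voltage: {emf:.2f} V")
```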
This was discovered by Luigi Galvani in 1780 when he connected two metals to the legs of a frog, causing its muscles to twitch. But you can put many materials between these two electrodes to get the same effect. Alexander Volta, around the time of Galvani, used saltwater-soaked paper. Others have made “earth batteries” using two metal plates and a pile of dirt, or a bucket of water.
Potatoes are often the preferred vegetable of choice for teaching high school science students these principles. Yet to the surprise of Rabinowitch, no one had scientifically studied spuds as an energy source. So in 2010, he decided to give it a try, along with PhD student Alex Goldberg, and Boris Rubinsky of the University of California, Berkeley.
“We looked at 20 different types of potatoes,” explains Goldberg, “and we looked at their internal resistance, which allows us to understand how much energy was lost by heat.”
They found that simply boiling the potatoes for eight minutes broke down the organic tissue inside, reducing resistance and allowing electrons to move more freely, thus producing more energy. They also increased the energy output by slicing the potato into four or five pieces, each sandwiched between a copper and a zinc plate, to make a series circuit. “We found we could improve the output 10 times, which made it interesting economically, because the cost of energy drops down,” says Goldberg.
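The effect of boiling can be sketched with the standard internal-resistance model of a cell, in which power reaching the load falls as internal resistance rises. The resistance figures below are illustrative assumptions, not the study's measurements:

```python
# Sketch: why breaking down potato tissue (lower internal resistance)
# delivers more power to a load. All resistance values are illustrative.
def load_power(emf, r_internal, r_load):
    """Power in watts dissipated in the load resistor."""
    current = emf / (r_internal + r_load)   # Ohm's law for the whole loop
    return current ** 2 * r_load

EMF = 0.9        # V, roughly what a single potato cell produces
R_LOAD = 100.0   # ohms, a hypothetical LED-plus-resistor load

raw = load_power(EMF, 2000.0, R_LOAD)    # raw potato: high internal resistance
boiled = load_power(EMF, 300.0, R_LOAD)  # boiled: tissue broken down

print(f"Power gain from boiling: {boiled / raw:.1f}x")
```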
“It’s low voltage energy,” says Rabinowitch, “but enough to construct a battery that could charge mobile phones or laptops in places where there is no grid, no power connection.”
Their cost analyses suggested that a single boiled potato battery with zinc and copper electrodes generates portable energy at an estimated $9 per kilowatt hour, roughly five to nine times cheaper than a typical 1.5-volt AA alkaline cell or D-cell battery, which can cost $49–84 per kilowatt hour. It’s also an estimated six times cheaper than standard kerosene lamps used in the developing world.
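A quick arithmetic check of the per-kilowatt-hour figures quoted above:

```python
# Comparing the quoted energy costs per kilowatt hour.
POTATO_COST = 9.0                          # $/kWh, boiled potato battery
ALKALINE_LOW, ALKALINE_HIGH = 49.0, 84.0   # $/kWh range, AA/D alkaline cells

low = ALKALINE_LOW / POTATO_COST
high = ALKALINE_HIGH / POTATO_COST
print(f"Alkaline cells cost {low:.1f} to {high:.1f} times more per kWh")
```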
Which raises an important question – why isn’t the potato battery already a roaring success?
In 2010, the world produced a staggering 324,181,889 tonnes of potatoes. They are the world’s number one non-grain crop, in 130 countries, and a hefty source of starch for billions around the world. They are cheap, store easily, and last for a long time.
With 1.2 billion people in the world lacking access to electricity, a simple potato could be the answer, or so the researchers thought. “We thought organisations would be interested,” says Rabinowitch. “We thought politicians in India would give them out with their names inscribed on them. They cost less than a dollar.”
Yet three years on since their experiment, why haven’t governments, companies or organisations embraced potato batteries? “The simple answer is they don’t even know about it,” reasons Rabinowitch. But it may be more complicated than that.
First, there’s the issue of using a food for energy. Olivier Dubois, senior natural resources officer at the United Nations Food and Agriculture Organisation (FAO), says that using food for energy – like sugar cane for biofuels – must avoid depleting food stocks and competing with farmers.
“You first need to look at: are there enough potatoes to eat? Then, are we not competing with farmers making income from selling potatoes?” he explains. “So if eating potatoes is covered, selling potatoes is covered, and there’s some potatoes left, then yes, it can work.”
In a country like Kenya, the potato is the second most important food for families after maize. Smallholder farmers produced around 10 million tonnes of potatoes this year, yet around 10-20% were lost in post-harvest waste due to lack of access to markets, poor storage conditions, and other issues, according to Elmar Schulte-Geldermann, potato science leader for sub-Saharan Africa at the International Potato Center in Nairobi, Kenya. The potatoes that don’t make it to the market could easily be turned into batteries.
Yet in Sri Lanka, for instance, locally grown potatoes are scarce and expensive. So a team of scientists at the University of Kelaniya recently decided to try the experiment with something more widely available, and free: plantain piths (stems).
Physicist KD Jayasuriya and his team found that the boiling technique produced a similar efficiency increase for plantains – and the best battery performance was obtained by chopping the plantain pith after boiling.
With the boiled piths, they found they could power a single LED for more than 500 hours, provided it is prevented from drying out. “I think the potato has slightly better current, but the plantain pith is free, it’s something we throw away,” says Jayasuriya.
Despite all this, some are sceptical of the feasibility of potato power. “In reality, the potato battery is essentially like a regular battery you’d buy at the store,” says Derek Lovley at the University of Massachusetts, Amherst. “It’s just using a different matrix.” While the potato helps to prevent energy being lost to heat, it is not the source of the energy – that’s actually extracted via the corrosion of the zinc. “It’s sacrificial – the metal is degrading over time,” says Lovley. This means you’d have to replace the zinc – and of course the potato or plantain pith – over time.
Still, zinc is quite cheap in most developing countries. And Jayasuriya argues that it could still be more cost effective than a kerosene lamp. A zinc electrode that lasts about five months would cost about the same as a litre of kerosene, which fuels the average family home in Sri Lanka for two days. You could also use other electrodes, like magnesium or iron.
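Jayasuriya's comparison can be sketched using only the two lifetimes quoted above; "about five months" is approximated here as 150 days:

```python
# Sketch: recurring fuel cost of kerosene lighting versus a zinc electrode,
# using only the relationships quoted in the text.
DAYS_PER_LITRE = 2        # one litre of kerosene lights a home for ~2 days
ELECTRODE_DAYS = 5 * 30   # one electrode lasts ~5 months (~150 days)

# Litres of kerosene burned over one electrode's lifetime; since one litre
# costs about the same as one electrode, this is also the rough cost ratio.
litres_needed = ELECTRODE_DAYS / DAYS_PER_LITRE
print(f"~{litres_needed:.0f} litres of kerosene per electrode lifetime")
```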
But potato advocates must surmount another problem before their idea catches on: consumer perception of potatoes. Compared with modern technologies like solar power, potatoes are perhaps less desirable as an energy source.
Gaurav Manchanda, founder of One Degree Solar, which sells micro-solar home systems in Kenya, says people buy their products for more reasons than efficiency and price. “These are all consumers at the end of the day. They need to see the value in it, not only in terms of performance, but status,” he explains. Basically, some people might not want to show off their potato battery to impress a neighbor.
Still, it cannot be denied that the potato battery idea works, and it appears cheap. Advocates of potato power will no doubt continue to keep chipping away.
After ten years of bonding and strong connections, Orkut decides to call it a day.
An email from Google said that the growth of other social networking communities on its own platform had outpaced Orkut, so the company decided to focus its energy and resources on making the experience on those other platforms as good as possible for users.
Orkut users would not face any impact until the website closes down on September 30, 2014. They would also be able to export profile data, community posts and photos using Google Takeout, which will be available until September 2016.
Google apologised to those who still actively use the service.
If the rumors are to be believed, and recent product releases are any indication, we stand at the cusp of an explosion in the wearable technology industry.
There are many troubles that plague product launches in terms of manufacturing, functionality, costs, and delivery dates. Even if the launch proceeds without a hitch, nascent ecosystems usually require the might and resources of large titans to scale and grow.
While most small companies pursue profitability via market adoption, larger companies have the mettle and finances to acquire and operate business units for relevance and influence, often at minimal margin or as loss leaders: products sold at or below cost to incentivize the consumer to buy additional products or services.
For a large company to swoop in and acquire a startup for its wearable technology, not only do future market acceptance indicators need to be present, but the technology has to be aligned to the corporation’s product roadmap.
The Samsung Galaxy Gear watch met with largely negative reviews when it was first launched. It was burdened with some poor design decisions, but moreover, there was no compelling reason for anyone to buy one. To paraphrase someone in the tech industry: “Smart watches are a solution looking for a problem”.
The essence of the problem, as I see it, is this: if the device is only really going to be useful as an extension of a particular mobile phone, it’s going to grow slowly unless it can offer some truly differentiated, desirable functionality.
In the event that it does, it needs to be marketed as a standalone product whose functionality doesn’t have external dependencies. Doing that, of course, leads to decisions which impact cost, design and duplication of functionality. Should an iWatch have its own antenna, or should it piggyback off the iPhone? What if the iPhone battery drains completely? Should the wearable still be functional?
This also applies to features within apps. Do I have to make another in-app purchase to play a game on my wearable device? Does a Glass-only app need my phone’s accelerometer to function? Isn’t it infinitely better if I can snap my fingers to “Shazam” a song at a public venue, rather than rummage through my pockets to launch an app and press a button?
The second dimension to the problem is the way ecosystems are constructed. In recent years, they have relied almost exclusively on third-party developers, who help an ecosystem balloon quickly by growing it in parallel. That won’t happen for a niche product, simply because the financial incentive isn’t there: the market volume isn’t there. Blackberry tried providing financial guarantees to developers to lure them to its platform; it didn’t work.
If I’m a developer looking to monetize my knowledge, I’m more than likely going to turn first to the iOS platform to validate my product and test monetization. My second move would be the Android ecosystem, to scale and gain market presence, which would have a reinforcing effect on my already published iOS app. To get developers onboard, you have to either be a dominant market player with an established, lucrative app distribution platform, or a newbie with frictionless development tools and lucrative terms, and possibly a new industry.
The solution lies in the medium. Every medium has its own strengths and weaknesses, irrespective of whether it is stitched into the fabric of another device or not. Harness its strengths. Most people condemned Google Glass as dead on arrival (DOA), but Glass is an awesome product with truly spectacular potential in the right hands. Google is just the worst company in the world to launch it, largely because of its reputation for monetizing by selling personal data.
A successful product should not focus solely on extensibility, but also on independent functionality made capable by the device’s unique specifications.
Gla**holes. That is an actual term people use to refer to individuals wearing Google Glass. They have been asked to leave diners and other places of business simply because of the device’s intrusive nature. This opens up a larger debate about what is acceptable to society and why. After all, I can walk around a city subtly snapping photos of random people with my cellphone. So why is it so unacceptable if I do the same with my glasses?
The problem is compounded by the unfortunate timing of this wave in the industry. Julian Assange and Edward Snowden have made quite a hash of public trust in governments and corporations. Research conducted by the Pew Research Center shows sensitivity to privacy at an all-time high, moving inversely with trust in outside institutions.
A successful wearable should be designed and positioned as a privacy-conscious product, including its features.
Power is another major issue. Most people have to recharge their phones by the end of the day. Having to monitor and recharge power levels for two separate devices is simply cumbersome and a detractor to user experience. Having interdependence built in will only deplete the power levels quicker since both devices will have to be in an active state. To date, there isn’t a universal solution to the problem of rapid power depletion. To save space, Apple devices don’t allow for batteries to be swapped and the industry is mostly following suit. Starbucks is now introducing Duracell wireless charging mats in all its stores but that is more of an independent value-added service rather than a trend catching on globally.
A successful wearable will harness solar energy, and the same technology used in battery-less timepieces.
I’ll have to refer again to Apple’s products simply because they have been the mainstays of the industry while other offerings come and go around them. The iPhone is the exception to the rule: no other product has been able to motivate customers to upgrade or buy every couple of years. Arguably, that is one of the reasons Apple hasn’t entered the television industry. Looking at their most recent earnings call, it’s abundantly clear that even the immensely popular iPads suffer from the same problem. Customers simply will not buy devices every couple of years unless there are major improvements. For a wearable product to appease a titan’s financial appetite, it will either have to offer phenomenal margins, or prove to be addictive like the iPhone. The Apple TV is not a valid parallel because we haven’t seen its full potential yet. (I suspect it is a trojan horse which might soon be bolstered with exclusive content by way of additional acquisitions.)
A successful wearable must take cues from the fashion world and emphasize its visible nature.
One reason why multifunctional cellphones took off is discreet input, a feature that doesn’t get anywhere near the amount of attention it should. I believe it is also the main reason why Google’s Glass is going to gain traction slowly. (I have alluded to a solution within this post.) Imagine a mall or an airport. Now imagine several hundred humans walking around saying “ok glass…”.
It is here that tactile feedback is really useful. This, however, leads to a catch-22 of the very worst kind. Given prevalent design trends and material selection in the hardware industry, a wearable device would almost certainly have an element of touch input to make the design more aesthetically pleasing, whereas uglier tactile buttons would make the functionality substantially better. Think about video games if this sounds confusing: you cannot use a touch interface without the element of sight. That has a direct impact on usability. Two-way interactivity is essential or the device will never really catch on; not many people want to spend hundreds of dollars on a passive device relaying notifications.
Using my cellphone, I can deposit a cheque, make reservations for dinner, order a movie, confirm my plans and give my thoughts on a collaborative development project without anyone else knowing the details. Rather than aim to emulate this functionality, a successful wearable should eschew all of it and concentrate on what it can achieve by virtue of its own human interface.
It isn’t all just about monitoring health or tracking the number of miles you may run; I believe iBeacon will be a core enabler in Apple’s wearable offering, whenever it surfaces. Yes, an iPhone can do the same, but how comfortable is it to walk around with a phone in front of you, or repeatedly withdraw it from your bag or pocket every time you sense an alert?
A successful wearable should have a non-public human input interface which doesn’t require a person to look at it constantly.
As stated earlier, it might be more tempting to buy an additional device if there is a significant increase in functionality. A catalyst for that could be enhanced, medium-specific features offered by apps I already use with my smartphone today. The point is not to be repetitive, but rather to highlight the other side of the coin. As a developer, I will have to put in many more hours for non-existent marginal revenue, since customers will not pay for the same app twice even if it offers more. I’m not a big fan of in-app purchases, since I believe they fragment the user experience (see point number one in this post) and create a susceptibility to migrate to an alternative, of which there are plenty, since every new entrant in all the app stores is looking for early traction. The only remaining motivation would be that of staying ahead of the competition. That, however, comes with depleting profitability, since you have to continually provide more for almost the same per-user revenue.
No one knows how this will pan out just yet. These ramblings are just suggestions to where I personally feel an entry can achieve a good product/market fit. Many will disagree, some might agree. The point is to explore the many directions the industry could take. We can speculate and commentate endlessly, but whether Apple buys Soundcloud and launches a label, or Google leads the charge with wearable technology, or anonymity becomes the next big thing remains to be seen.
WASHINGTON: With hackers stealing tens of millions of customer details in recent months, firms across the globe are ratcheting up IT security and nervously wondering which of them is next.
The reality, cyber security experts say, is that however much they spend, even the largest companies are unlikely to be able to stop their systems being breached. The best defense may simply be either to reduce the data they hold or encrypt it so well that if stolen it will remain useless.
Only a few years ago, the primary IT security concern for many large corporations was stopping the loss or theft of physical disks or drives holding customer information.
Now, much harder-to-detect online thefts are rife.
Last week, Reuters revealed that a host of big-name U.S. Fortune 500 companies were on a hiring spree for board-level cyber security experts, often offering $500,000-700,000 a year, sometimes more.
Many have high-level backgrounds, at much lower pay, at signals intelligence agencies such as the US National Security Agency or Britain's GCHQ - although security experts say European firms are reluctant to hire ex-NSA staff following revelations over the scale of US cyber monitoring by whistleblower Edward Snowden.
"Information has become toxic for retailers because the more they have, the bigger a target they become," said Lamar Bailey, security researcher at IT security firm Tripwire. "The ongoing rash of attacks brings into question what information an organization should be keeping."
US retailer Target ousted its CEO Gregg Steinhafel in May after the firm said foreign hackers had stolen up to 70 million items of customer data including some PIN numbers late last year.
Industry watchers said purchases on its website dropped noticeably in the run-up to Christmas with the breach also sparking lawsuits and official investigations.
A report from cyber security think tank the Ponemon Institute showed the average cost of a data breach in the last year grew by 15 percent to $3.5 million. The likelihood of a company having a data breach involving 10,000 or more confidential records over a two-year period was 22 percent, it said.
The corporate fallout from the largest recorded breach so far, the loss of password data on some 145 million customers from online retailer eBay, is not yet clear.
A senior eBay executive told Reuters last week that "for a very long time" the firm had not realized customer data had been seriously compromised by the attack.
Abortion charity fined
Much smaller organizations, even charities, are also discovering they have much to lose.
UK charity the British Pregnancy Advisory Service (BPAS) - which provides information on abortions and runs clinics - is appealing a 200,000 pound fine imposed after an anti-abortion campaigner was able to access website details of women asking for advice.
Britain's Information Commissioner said the charity had failed in its responsibility to store records securely. "I do feel sympathy for them," said Calum MacLeod, vice president for Europe, Middle East and Africa at Lieberman Software Corporation. "They were never going to be able to attract top IT staff and with their limited resources, it will very often mean that they will outsource services such as website development. This shows that great care must be taken."
IT security experts say firms are becoming increasingly careful, now sometimes instructing tens of thousands of users to change passwords if even a single account appears compromised. Many are also taking out specialist insurance.
Still, a study of 102 UK financial institutions and 151 retail organizations conducted earlier this year by Tripwire showed 40 percent said they would need 2 to 3 days to detect a breach.
A February report by BAE Systems Applied Intelligence, the cyber arm of the British defense firm, showed customer data loss was by far the largest IT security concern for firms in the United States, Canada, Australia and Britain. It significantly outranked worries over lost trade secrets and interruption of service.
Hackers seek the most complete range of information they can get on individual customers. Obtaining a complete dataset of password, date of birth, e-mail address, phone number and other personal data can be more valuable than simple credit card details.
"The theft of financial information has a limited lifespan, until we make changes the account details," said Andy Heather, vice president for Europe, Middle East and Africa at Voltage Security. "The personal information that can be obtained by accessing someone's account profile has much broader use and can be used to commit a much wider range of fraud."
Banks have been ahead of the curve when it comes to tightening IT security and have suffered less than retailers in recent months. Increasing numbers of firms are also using online payment operator PayPal instead of taking credit card numbers themselves, reducing the amount of data they hold.
The better data is encrypted, the less serious its theft, though even some encrypted passwords can be cracked with sufficient computing power.
Other strategies involve using "honeypots" - false folders designed to look as though they contain valuable data - that can be used to mislead and even detect attackers.
The most common route in for criminals, however, is gaining control of someone else's user profile, allowing them to sneak into networks and steal further data.
Some worry the high-profile nature of recent hacks may have actually made such identity theft easier. Security experts report an increase in "phishing" attacks - fake e-mails purportedly from major firms mentioning recent security breaches and prompting people to follow a dubious link to reset their password.
"Any time an event like this occurs it opens the door for phishing campaigns to be more effective," said Troy Gill, senior security analyst at AppRiver. "No organization is immune."
BEIJING (Pakistanis in Kuwait team): Cars will now run on smartphones too. Yes, in China the cars of the future have carried enthusiasts into a dream world, and one that could well come true.
Wherever China's name comes up, news of new technology follows. In Beijing these days, fresh news for the car industry is drawing the attention of manufacturers and enthusiasts alike: cars that move at the command of a smartphone stand ready.
Built by a local company, these cars come with full internet connectivity and a smart operating system. Even a thief would have to turn to modern technology to steal one; otherwise theft is impossible.
The car will now stay connected to its owner, and in 2015 these vehicles will head to market to find buyers.
BOSTON: Microsoft Corp is rushing to fix a bug in its widely used Internet Explorer web browser after a computer security firm disclosed the flaw over the weekend, saying hackers have already exploited it in attacks on some US companies.
PCs running Windows XP will not receive any updates fixing that bug when they are released, however, because Microsoft stopped supporting the 13-year-old operating system earlier this month. Security firms estimate that between 15 and 25 percent of the world's PCs still run Windows XP.
Microsoft disclosed on Saturday its plans to fix the bug in an advisory to its customers posted on its security website, which it said is present in Internet Explorer versions 6 to 11. Those versions dominate desktop browsing, accounting for 55 percent of the PC browser market, according to tech research firm NetMarketShare.
Cybersecurity software maker FireEye Inc said that a sophisticated group of hackers have been exploiting the bug in a campaign dubbed "Operation Clandestine Fox."
FireEye, whose Mandiant division helps companies respond to cyber attacks, declined to name specific victims or identify the group of hackers, saying that an investigation into the matter is still active.
"It's a campaign of targeted attacks seemingly against US-based firms, currently tied to defense and financial sectors," FireEye spokesman Vitor De Souza said via email. "It's unclear what the motives of this attack group are, at this point. It appears to be broad-spectrum intel gathering."
He declined to elaborate, though he said one way to protect against them would be to switch to another browser.
Microsoft said in the advisory that the vulnerability could allow a hacker to take complete control of an affected system, then do things such as viewing, changing, or deleting data, installing malicious programs, or creating accounts that would give hackers full user rights.
FireEye and Microsoft have not provided much information about the security flaw or the approach that hackers could use to figure out how to exploit it, said Aviv Raff, chief technology officer of cybersecurity firm Seculert.
Yet other groups of hackers are now racing to learn more about it so they can launch similar attacks before Microsoft prepares a security update, Raff said.
"Microsoft should move fast," he said. "This will snowball."
Still, he cautioned that Windows XP users will not benefit from that update since Microsoft has just halted support for that product.
The software maker said in a statement to Reuters that it advises Windows XP users to upgrade to one of the two most recent versions of its operating system, Windows 7 or 8.
WASHINGTON: American scientists have revealed that the human nose can distinguish at least one trillion smells, millions more than previous estimates suggested.
For decades, scientists believed that humans could perceive only about ten thousand kinds of smell, which is why the human sense of smell was considered weaker than sight and hearing.
Leslie Vosshall, head of the neurogenetics laboratory at Rockefeller University and a participant in the research, says the analysis shows that the human ability to discriminate between smells is far greater than anyone might have expected.
Earlier estimates of the nose's capacity held that about four hundred smell receptors assist in this process; those estimates date from the 1920s, and no data were ever presented to confirm them.
Scientists have established that the human eye, with only three receptors, can distinguish several million colours, and that human ears can tell apart some 340,000 sounds.
"No one had ever taken the time to test the sense of smell," said Vosshall.
For their research, the scientists prepared mixtures from 128 different odorant molecules. Individually these evoked grass, citrus or various chemicals, but they were combined into three groups.
"We wanted the smells of the ingredients in our mixtures to be unidentifiable," said Vosshall. "In the attempt, the mixtures became very strange and quite nasty."
Volunteers in the research were given samples of these mixtures in three vials at a time. Two were identical and one was different, and 264 such comparisons were completed to see whether the volunteers could tell them apart.
From this experiment, the scientists estimated how many smells an average person could distinguish out of all the possible combinations of the 128 odorant samples: at least somewhere around one trillion.
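The scale of that estimate rests on combinatorics: the published study tested mixtures of 10, 20 or 30 components drawn from its 128 odorants, and even one of those mixture sizes spans an astronomically large space. A quick sketch:

```python
# Sketch: counting the distinct 30-component mixtures that can be drawn
# from a panel of 128 odorants, as an illustration of the mixture space
# the researchers were sampling from.
from math import comb

mixtures = comb(128, 30)  # ways to choose 30 odorants out of 128
print(f"Possible 30-component mixtures: {mixtures:.3e}")
```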
Andreas Keller, the leader of the research team, is also at Rockefeller University. He says this number is certainly far too low, because the real world holds many more odours, which can be combined into countless mixtures.
He said our ancestors relied on the sense of smell far more than we do, but in the modern world advances in personal hygiene have limited the smells around us.
"Our behaviour suggests that smell is less important to us than hearing and seeing," Keller said.
The sense of smell is tied to human behaviour, and the scientists stress that this research can shed light on how the human brain processes complex information.
The research has been published in the journal Science.