Tuesday, June 23, 2015
All together now: yeasts can evolve to form snowflake-like multicellular shapes (Image: Courtesy of Jennifer Pentz, Georgia Tech)
The leap from single-celled life to multicellular creatures is easier than we ever thought. And it seems there's more than one way it can happen.
The mutation of a single gene is enough to transform single-celled brewer's yeast into a "snowflake" that evolves as a multicellular organism.
Similarly, single-celled algae quickly evolve into spherical multicellular organisms when faced with predators that eat single cells.
These findings back the emerging idea that this leap in complexity isn't the giant evolutionary hurdle it was thought to be.
At some point after life first emerged, some cells came together to form the first multicellular organism. This happened perhaps as early as 2.1 billion years ago. Others followed – multicellularity is thought to have evolved independently at least 20 times – eventually giving rise to complex life, such as humans.
But no organism is known to have made that transition in the past 200 million years, so how and why it happened is hard to study.
Special snowflake
Back in 2011, evolutionary biologists William Ratcliff and Michael Travisano at the University of Minnesota in St Paul coaxed unicellular yeast to take on a multicellular "snowflake" form by taking the fastest-settling yeast out of a culture, using it to found new cultures, and then repeating the process. Because clumps of yeast settle faster than individual cells, this effectively selected for yeast that stuck together instead of separating after cell division.
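The selection scheme is simple enough to sketch in code. This toy simulation treats each lineage's "stickiness" as a heritable trait; the population size, survivor count, and mutation size are illustrative assumptions, not the experiment's actual parameters. Repeatedly refounding the culture from the fastest settlers ratchets the trait upward:

```python
import random

def settle_and_select(population, survivors=100):
    """One round of settling selection: clumps sink faster than single
    cells, so the stickiest lineages are the ones transferred onward."""
    founders = sorted(population, reverse=True)[:survivors]
    # Refound the culture from the survivors, with small heritable mutations.
    return [max(0.0, s + random.gauss(0, 0.01))
            for s in founders
            for _ in range(len(population) // survivors)]

random.seed(1)
pop = [0.1] * 1000  # initially weakly sticky single cells
for _ in range(60):  # repeated daily transfers, as in the experiment
    pop = settle_and_select(pop)

# Selection plus mutation drives mean stickiness well above its start.
mean_stickiness = sum(pop) / len(pop)
```

The point of the sketch is that no one designs the multicellular form; simply culling the slow settlers each round is enough to favour cells that stay attached.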
The team's latest work shows that this transformation from a single-celled to a multicellular existence can be driven by a single gene called ACE2 that controls separation of daughter cells after division, Ratcliff told the 15-19 June Astrobiology Science Conference in Chicago.
And because the snowflake grows in a branching, tree-like pattern, any later mutations are confined to single branches. When the original snowflake gets too large and breaks up, these mutant branches fend for themselves, allowing the value of their new mutation to be tested in the evolutionary arena.
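The genetic logic of that branching growth can be illustrated with a toy tree structure (the genotype labels and snowflake shape here are hypothetical, purely for illustration): a mutation arising in one branch is inherited only by that branch's descendants, so when the snowflake fragments, each propagule is genetically uniform and can be tested by selection on its own.

```python
def fragment(tree):
    """Breaking the snowflake at its root yields one propagule per branch."""
    return tree["branches"]

def genotypes(tree):
    """Collect all genotypes present in a (sub)tree of cells."""
    found = {tree["genotype"]}
    for branch in tree["branches"]:
        found |= genotypes(branch)
    return found

# A snowflake grown from one cell: a mutation that arose in the second
# branch was inherited only by that branch's descendants.
snowflake = {"genotype": "wt", "branches": [
    {"genotype": "wt", "branches": [{"genotype": "wt", "branches": []}]},
    {"genotype": "mut", "branches": [{"genotype": "mut", "branches": []}]},
]}

propagules = fragment(snowflake)
print([sorted(genotypes(p)) for p in propagules])  # -> [['wt'], ['mut']]
```

Because each fragment carries a single genotype, the fitness of a new mutation is exposed to selection at the level of whole groups, which is the "Darwinian evolution at the multicellular level" Ratcliff describes.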
"A single mutation creates groups that as a side effect are capable of Darwinian evolution at the multicellular level," says Ratcliff, who is now at the Georgia Institute of Technology in Atlanta.
Bigger is better
Ratcliff's team has previously also evolved multicellularity in a single-celled alga called Chlamydomonas, through similar selection for rapid settling. The algal cells clumped together in amorphous blobs.
Now the feat has been repeated, but with predators thrown into the mix. A team led by Matt Herron of the University of Montana in Missoula exposed Chlamydomonas to a paramecium, a single-celled protozoan that can devour single-celled algae but not multicellular ones.
Safety in even numbers (Image: Jacob Boswell)
Sure enough, two of Herron's five experimental lines became multicellular within six months, or about 600 generations, he told the conference.
This time, instead of daughter cells sticking together in an amorphous blob as they did under selection for settling, the algae formed predation-resistant, spherical units of four, eight or 16 cells that look almost identical to related species of algae that are naturally multicellular.
"It's likely that what we've seen in the predation experiments recapitulates some of the early steps of evolution," says Herron.
Neither Ratcliff's yeast nor Herron's algae has unequivocally crossed the critical threshold to multicellularity, which would require cells to divide labour between them, says Richard Michod of the University of Arizona in Tucson.
But the experiments are an important step along that road. "They're opening up new avenues for approaching this question," he says.
One gene may drive leap from single cell to multicellular life
Will more sensory substitution devices hit the market soon?

The BrainPort V100
Courtesy Wicab, Inc.
Last week, the Food and Drug Administration (FDA) announced that medical device company Wicab is allowed to market a new device that will help the blind “see.” The device, called the BrainPort V100, can help the blind navigate by processing visual information and communicating it to the user through electrodes on the tongue. Though this isn’t the first device to go on the market using sensory substitution (where information perceived by one sense is communicated through another), the sophistication and usability of the BrainPort V100 could mean that the number of sensory substitution devices permitted by the FDA is on the rise.
The BrainPort V100 consists of a pair of dark glasses and tongue-stimulating electrodes connected to a handheld battery-operated device. When cameras in the glasses pick up visual stimuli, software converts the information to electrical pulses sent as vibrations to be felt on the user’s tongue. Like most sensory substitution devices, “seeing” with your tongue may not be intuitive at first. But the researchers who developed the device tested it over the course of a year, training users to interpret the vibrations. Studies showed that 69 percent of the test subjects were able to identify an object using the BrainPort device after a year of training. However, the device is expensive; Wicab told Popular Science that it will cost $10,000 per unit, the same as its price when first reported back in 2009.
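The core signal path (camera image in, coarse grid of tongue pulses out) can be sketched as a simple average-pooling step. The 20-by-20 grid and the pooling scheme below are illustrative assumptions, not Wicab's actual signal-processing pipeline:

```python
def frame_to_tongue_grid(frame, grid=20):
    """Conceptual sketch of sensory substitution: average-pool a grayscale
    camera frame (a list of rows of 0-255 pixel values) down to a coarse
    grid of stimulation intensities, one value per tongue electrode."""
    h, w = len(frame), len(frame[0])
    bh, bw = h // grid, w // grid
    pulses = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            block = [frame[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            row.append(sum(block) // len(block))  # mean brightness -> pulse strength
        pulses.append(row)
    return pulses

# A 200x200 frame that is dark except for a bright vertical bar.
frame = [[255 if 90 <= x < 110 else 0 for x in range(200)] for _ in range(200)]
grid = frame_to_tongue_grid(frame)
print(grid[10][10], grid[10][0])  # -> 255 0
```

Even at this crude resolution, a high-contrast shape like a doorway or bar survives the downsampling, which is why users can learn to interpret the pattern of pulses after training.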
Researchers have been fiddling with sensory substitution for a long time, but most of these devices are not yet widely available. The BrainPort V100 will be one of the first, having passed the FDA’s review through recently updated guidelines called the premarket review pathway: “a regulatory pathway for some low- to moderate-risk medical devices that are not substantially equivalent to an already legally-marketed device,” according to the press release. Since this device is now allowed to be marketed and was approved relatively quickly through these new guidelines, the BrainPort may be paving the way for an explosion of sensory substitution devices to hit the market in the next few years, which could help the growing number of Americans with sensory impairments.
Device That Helps Blind People See With Their Tongues Just Won FDA Approval

Technology can reconstruct video based on a person's thoughts and even anticipate your moves while you drive. Now, a brain-to-text system can translate brain activity into written words.
In a recent study in Frontiers in Neuroscience, seven patients had electrode sheets placed on their brains, which collected neural data while they read passages aloud from the Gettysburg Address, JFK’s inaugural speech, and “Humpty Dumpty.”
As each patient spoke, a computer algorithm learned to associate speech sounds—such as "foh", "net", and "ik"—with different firing patterns in the brain cells. Eventually it learned to read the brain cells well enough that it could guess which sound they were producing with up to 75 percent accuracy. But the program doesn't need 100 percent accuracy to put those sounds together into the word "phonetic". Because our speech only takes certain forms, the system’s algorithm can correct for these errors “just like autocorrect,” says Peter Brunner, one of the co-authors of the study.
“Siri wouldn’t be more accurate than 50 or 70 percent,” he says. “Because it knows what the potential options are that you choose, or the typical sentences that you say, it can actually utilize this information to get the right choice.”
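That "autocorrect" step can be sketched as follows: even when the per-sound classifier is noisy, restricting the output to real words in a vocabulary recovers the intended word. Everything here (the sound labels, probabilities, and the two-word vocabulary) is invented for illustration and is not the study's actual decoder:

```python
def decode_word(frame_probs, vocabulary):
    """Pick the vocabulary word whose sound sequence best matches the
    classifier's per-sound probability estimates.

    frame_probs: one dict of {sound: probability} per spoken sound."""
    def score(word_sounds):
        if len(word_sounds) != len(frame_probs):
            return 0.0
        p = 1.0
        for sound, probs in zip(word_sounds, frame_probs):
            p *= probs.get(sound, 0.0)  # unlisted sounds score zero
        return p
    return max(vocabulary, key=lambda w: score(vocabulary[w]))

# Hypothetical classifier output: the middle sound was misheard as "nik"
# most of the time, but no legal word contains it, so "phonetic" still wins.
vocabulary = {"phonetic": ["foh", "net", "ik"],
              "fanatic": ["fuh", "nat", "ik"]}
frames = [{"foh": 0.6, "fuh": 0.4},
          {"nik": 0.5, "net": 0.3, "nat": 0.2},
          {"ik": 0.9}]
print(decode_word(frames, vocabulary))  # -> phonetic
```

This is the same trick Brunner attributes to Siri: the constraint that output must be a plausible word (or sentence) absorbs much of the classifier's error.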
It is important to record the data directly from the brain, says Brunner, because picking up neural activity from the scalp gives only a “blurred version” of what is happening inside. He likened scalp recording to flying 1000 feet above a baseball stadium: you can vaguely tell that people are cheering, but you cannot make out their faces.
In this case, the patients were already undergoing an epilepsy procedure in which the skull is opened and an electrode grid is placed on the brain to map areas where neurons are misfiring. The resourceful team of researchers from the National Center for Adaptive Neurotechnologies and the State University of New York at Albany used this time to conduct their own research. However, it means the study was limited by each patient’s individualized epilepsy treatment, such as where the electrodes were placed on the brain.
Because every person’s brain is so unique, and the neural activity must be picked up directly from the brain, it would be difficult to create a general brain-to-text device for the average consumer, says Brunner. However, this technology has a lot of potential for people with neurological diseases such as ALS, who lose the ability to move and to speak. Instead of using an external device, as Stephen Hawking does, to pick out words on a screen for a computer to read, the computer would simply speak your mind.
“This is just the beginning,” said Brunner. “The prospects of this are really endless.”
Mind-Reading Program Translates Thoughts Into Text
Saturday, June 20, 2015
In order to understand how the organ selectively transmits cells from mother to child

A placenta on a chip device
Courtesy of the Eunice Kennedy Shriver National Institute of Child Health and Human Development
From lungs to brains, organ tissues grown in a lab are telling researchers a lot about how their cells do their jobs. Now researchers are using the technology to better understand the placenta, the temporary organ that connects a fetus and mother during pregnancy.
The placenta’s primary function is to act as a “crossing guard” between mother and child—it sends the good stuff (like nutrients and oxygen) along to the baby while keeping out damaging elements, such as chemicals from environmental exposure and disease-causing bacteria or viruses. If the placenta is damaged or doesn’t work right, it can endanger the health of both the mother and the baby.
Researchers don’t really know how the placenta is able to transmit the good things while keeping out the bad. That’s because the placenta is notoriously difficult to study in humans—it takes a long time, varies a lot between individual patients, and could put the fetus’ safety at risk. In the past, most studies about the placenta were done in animals to work around these issues. Animal studies have shed some light on how the placenta works, but the tissue is never quite the same as in humans.
A rendering of the placenta on a chip
Scientists Are Growing A Human Placenta On A Chip
Friday, June 19, 2015
Researchers have built devices that harness changes in atmospheric humidity to generate small amounts of electricity, lift tiny weights, and even power a toy car. In the grand scheme of things, that captured energy is not free, but it’s pretty darn close. The study suggests that evaporation could be used to operate a variety of gadgets that don’t require a lot of power, scientists say.
“This is one of the first experiments to show that humidity can be a source of fuel,” says Albert Schenning, a materials scientist at the Eindhoven University of Technology in the Netherlands who wasn’t involved in the new study. The team’s designs, he says, “are very nice and very clever.”
All the gadgets rely on a simple phenomenon—the change in size of bacterial spores as they absorb moisture from the air and then release it, says team leader Ozgur Sahin, a biophysicist at Columbia University. Sahin and his colleagues used the living but dormant spores from Bacillus subtilis, a species of bacteria commonly found in soil and in the human gastrointestinal tract. Each spore typically swells and then shrinks up to 6% when moved from dry air to extremely humid air and then back again, Sahin says. The researchers harnessed that size-changing action by gluing thin layers of spores onto one side of curved sheets of polymer. When the spores swelled, that side of the polymer sheet lengthened—which in turn caused the curved sheet to somewhat straighten out. The stretching and contracting of these spore-coated polymer sheets are the driving force for the team’s devices.
A change in size of 6% may not sound that impressive. But when the researchers strung together a series of these polymer sheets, the “artificial muscles” they created quadrupled in length when relative humidity changed from below 30% to more than 80%, the team reports today in Nature Communications.
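A bit of geometry shows how a few percent of material strain can turn into a several-fold change in overall length: a tightly curled strip spans only the chord of its arc, so straightening it multiplies its end-to-end span. This sketch assumes an idealised strip of uniform curvature (no real device parameters) and finds the curl needed for a four-fold change in span:

```python
import math

def end_to_end(arc_length, bend_angle):
    """End-to-end span of a uniformly curved strip: a strip of fixed arc
    length bent through `bend_angle` radians spans only the chord."""
    if bend_angle == 0:
        return arc_length
    radius = arc_length / bend_angle
    return 2 * radius * math.sin(bend_angle / 2)

# How tightly must a strip curl so that straightening quadruples its span?
L = 1.0
angle = 4.94  # radians (~283 degrees), solves sin(x/2)/(x/2) = 1/4
print(round(end_to_end(L, 0.0) / end_to_end(L, angle), 1))  # -> 4.0
```

In the actual devices the 6% swelling of the spore layer changes the curvature of each sheet; the geometry above is just one way to see why a small strain at the material level can produce a large stroke at the device level.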
The thicker the spore layer, the longer it would take for the muscles to react to changes in humidity. So, to make sure their artificial muscles were quick-acting, the researchers used spore layers that were extremely thin—no more than 3 micrometers thick, or about 5 spores deep on average, Sahin says. Tests showed that the devices could react to humidity changes within 3 seconds, he notes.
Tests also revealed that the spore-coated polymer strips could expand and shrink for more than 1 million cycles with little change in their range of motion. Other trials showed that the strips, when they shrank, could lift more than 50 times their own weight (video). But they did so much more slowly than an animal’s muscle would, so the power they generated—that is, their rate of energy production—was correspondingly low.
Nevertheless, the team harnessed changes in humidity to perform actual work. In one device, the back-and-forth motion driven by one artificial muscle suspended above a postage stamp–sized patch of water provided enough electrical power to light an LED. In another, the expansion of muscles on one side of a Ferris wheel–like device (where the air was humidified by evaporation from a wet paper towel) but not the other triggered an imbalance that caused the wheel to rotate (video). The team used the motion of a similar wheel to power a 100-gram toy car (video).
“These are fun demonstrations, but they prove the principle,” says Peter Fratzl, a materials scientist at the Max Planck Institute of Colloids and Interfaces in Potsdam, Germany, who was not involved with the work. Researchers are constantly looking for sources of energy, even if they’re small, he notes. “It makes sense to use these gradients [in humidity], because they’re everywhere and they’re free.”
The team’s results are a good conceptual starting point, says George Whitesides, a chemist at Harvard University. Such devices could, in theory, generate enough electricity to run a few transistors, he adds. “But it will still be a while before these things are in every child’s bathtub.”
Energy harnessed from humidity can power small devices
Thursday, June 18, 2015

In an attempt to reverse evolution, paleontologist Jack Horner and his team have already made significant strides in mutating chickens back toward the very creatures from which they descended. If that wasn’t enough genetic splicing and dicing, Harvard scientists recently attempted a similar feat by inserting woolly mammoth genes into elephant cells in order to recreate the extinct beasts. Whoa baby.
If the four major differences between dinosaurs and birds are their tails, arms, hands and mouths, Horner and team have already flipped certain genetic switches in chicken embryos to reverse-engineer a bird’s beak into a dinosaur-like snout.
“Actually, the wings and hands are not as difficult,” Horner said, adding that a “Chickenosaurus” -- as he calls the creation -- is well on its way to becoming reality. “The tail is the biggest project. But on the other hand, we have been able to do some things recently that have given us hope that it won't take too long."
Scientists Say They Can Recreate Living Dinosaurs Within the Next 5 Years
Researchers at the University of Michigan have managed to mimic a heartbeat outside of the body, reproducing on a chip one of the body's most fundamental physical rhythms.

Developed as a “lab on a chip,” such microfluidic devices can be extremely useful for performing complex laboratory functions in a tiny space.
The device was an instant success at mimicking heartbeats, and researchers have already started using it to test cardiovascular drugs and blood thinners, where accurate simulation of blood flow can support new studies and medical solutions.
Apparently, cells react more naturally when subjected to the pulsing rhythms found inside a body, or when in motion, than in the static environment of the lab. This way, doctors will be able to simulate cell motion much more accurately before testing on live subjects.
To get an idea of how primitive heartbeat simulations were before this new heart-on-a-chip arrived, consider that a lab technician previously had to operate a syringe pump by hand, and only for a limited time. The new device not only eliminates the human factor in simulating a heartbeat, but can also run for far longer.
Biointerfaces Has Developed a Chip to Mimic Heartbeats Using Gravity
Tuesday, June 9, 2015
Emerging Technologies – Most of the global challenges of the 21st century are a direct consequence of the most important technological innovations of the 20th century.
New technology is arriving faster than ever and holds the promise of solving many of the world’s pressing challenges such as food and water security, energy sustainability and personalised medicine.
Lighter, cheaper, flexible electronics made from organic materials have found countless practical applications, and drugs are being delivered via nanotechnology at the molecular level, though for the moment only in medical labs.
However, outdated government regulations, inadequate existing funding models for research and uninformed public opinion are the greatest challenges in effectively moving emerging technologies from the research labs to people’s lives.
1) Robotics 2.0
A new generation of robotics takes machines away from just automating the most manual manufacturing assembly line tasks and orchestrates them to collaborate in creating more advanced assemblies, subassemblies and complete products. Collaborative robotics can accelerate time-to-market, improve production accuracy and reduce rework. By using GPS technology that is commonly available in smartphones, robots can be used in precision agriculture for weed control and harvesting.
We’ve seen robots that can walk like an ape and run like a cheetah, robots that can mix a perfect martini, help the disabled, or drive you to the store. Robots could replace soldiers on the battlefield. In Japan, robots are being tested in nursing roles: they help patients out of bed and support stroke victims in regaining control of their limbs.
Artificial intelligence, machine learning and computer vision are constantly developing and perfecting new technologies that “enable the machine” to perceive and respond to its ever-changing environment. Emergent AI is the nascent field of how systems can learn automatically by assimilating large volumes of information. An example of this is how the Watson system developed by IBM is now being deployed in oncology to assist in diagnosis and personalised, evidence-based treatment options for cancer patients.
2) Neuromorphic Engineering
Neuromorphic engineering, also known as neuromorphic computing started as a concept developed by Carver Mead in the late 1980s, describing the use of very-large-scale integration (VLSI) systems containing electronic analogue circuits to mimic neurobiological architectures present in the nervous system.
A key aspect of neuromorphic engineering is understanding how the morphology of individual neurones, circuits and overall architectures creates desirable computations, affects how information is represented, influences robustness to damage, incorporates learning and development, adapts to local change (plasticity), and facilitates evolutionary change. Neuromorphic computing is the next stage in machine learning.
IBM’s million-neurone TrueNorth chip, revealed in prototype in August 2014, has a power efficiency for certain tasks that is hundreds of times superior to conventional CPUs’ and, for the first time, comparable to the human cortex. The challenge remains creating code that can realise the potential of the TrueNorth chip, an area IBM continues to invest in today.
3) Intelligent Nanobots – Drones
Again, emergent AI and computer vision will provide drones with human-like capabilities, allowing them to complete tasks too dangerous or remote for humans, such as checking electric power lines or delivering medical supplies in an emergency.
Autonomous drones will improve agricultural yields by collecting and processing vast amounts of visual data from the air, allowing precise and efficient use of inputs such as fertiliser and irrigation.
Ambulance drones could deliver vital medical supplies along with “on screen” instructions. A drone with a mounted camera can “learn” about its surroundings with no prior information about the environment or the objects within it: using reference points viewed from different angles, it builds a 3D map of its surroundings, with additional sensors picking up barometric and ultrasonic data. Autopilot software then uses all this data to navigate safely and even seek out specific objects. Autonomous. Intelligent. Swarming. Nano drones.
4) 3D Printing (yes, we can’t get enough of it)
Imagine a future in which a device connected to a computer can print a solid object. A future in which we can have tangible goods as well as intangible services delivered to our desktops or high-street shops over the Internet.
And a future in which the everyday “atomisation” of virtual objects into hard reality has turned the mass pre-production and stock-holding of a wide range of goods and spare parts into no more than an historical legacy.
Emerging technologies in 3D printing could eventually lead to lighter, more efficient plane parts that save fuel on your flights; replacement body parts; everything from printed knickers to 3D-printed pills; and from synthetic hearts to 3D-printed homes on Mars and other planets.
5) Precision Medicine
From heart disease to cancer, most diseases have a genetic component. Cancer is best described as a disease of the genome. The ability to sequence a patient’s whole genome is close to entering the clinic in cancer hospitals.
We’ve come a long way in technology since we sequenced the first genome, and as it gets faster and cheaper we will be able to both personalise treatments and learn more about the enormous number of genetic changes that lead to each type and subtype of cancer.
With digitisation, doctors will be able to make decisions about a patient’s cancer treatment informed by a tumour’s genetic make-up.
This new knowledge is also making precision medicine a reality by enabling the development of highly targeted therapies that offer the potential for improved treatment outcomes, especially for patients battling cancer. By combining whole-genome sequencing (or transcriptome sequencing) with advancing sequence-based drug technology, the future really looks amazing!
Top 5 Emerging Technologies In 2015
Monday, March 23, 2015
Future farming: technology utilization
Unmanned Aerial Vehicle (UAV) filming a combine harvester in wheat field in Provence-Alpes-Cote d'Azur, France.
Global positioning gives hyperlocal info
Info, analysis, tools
Let’s automate
Autonomous machines can replace people performing tedious tasks, such as hand-harvesting vegetables. They use sensor technologies, including machine vision that can detect things like location and size of stalks and leaves to inform their mechanical processes. Japan is a trend leader in this area. Typically, agriculture is performed on smaller fields and plots there, and the country is an innovator in robotics. But autonomous machines are becoming more evident in the U.S., particularly in California where much of the country’s specialty crops are grown.
The development of flying robots gives rise to the possibility that most field-crop scouting currently done by humans could be replaced by UAVs with machine vision and hand-like grippers. Many scouting tasks, such as for insect pests, require someone to walk to distant locations in a field, grasp plant leaves on representative plants and turn them over to see the presence or absence of insects. Researchers are developing technologies to enable such flying robots to do this without human involvement.
Breeding + sensors + robots
Just another day on the future farm?
Technology is the future of farming
Sunday, February 22, 2015
Scientists develop prototype contact lenses that allow wearers to zoom in and out
Wednesday, December 24, 2014
A new microscope attachment can allow smartphone users to take a closer look at fluorescently labeled DNA.
Electrical and bioengineer Aydogan Ozcan of the University of California, Los Angeles, and his colleagues have developed a smartphone attachment that can estimate the lengths of DNA molecules in a sample, according to a study published this month (December 10) in ACS Nano. The unit—which weighs less than 190 grams, costs only $400, and runs on three AAA batteries—can reveal copy-number variations and other genetic features of disease, making it a potential tool for diagnostic field tests, Chemical & Engineering News (C&EN) reported.
The researchers demonstrated the smartphone microscope’s utility by analyzing purified solutions of fluorescently labeled DNA molecules. By putting the solution between two coverslips, they effectively stretched the DNA into straight lines; then, a compact blue laser within the fluorescence microscope attachment shone on the DNA, and the smartphone took a series of photos that were sent to a remote server to calculate the strand lengths. Testing the setup on DNA molecules that ranged from 10,000 to 48,000 base pairs, the team found that the smartphone microscope could estimate length to within about 1,000 base pairs, similar to the error of conventional bench-top fluorescence microscopes, according to C&EN.
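The length estimate itself comes down to a unit conversion: pixels to microns (via the microscope's calibrated pixel size) and microns to base pairs (via the roughly 0.34 nm rise per base pair of B-form DNA). The pixel size below is an assumed figure for illustration, not the paper's actual calibration, and real linearized DNA is rarely stretched to exactly its crystallographic length, so a measured stretch factor would be folded in in practice:

```python
def pixels_to_basepairs(length_px, um_per_px, nm_per_bp=0.34):
    """Convert a measured strand length in image pixels to base pairs.

    um_per_px: the microscope's calibrated pixel size (assumed here).
    nm_per_bp: the canonical 0.34 nm rise per base pair of B-form DNA."""
    length_nm = length_px * um_per_px * 1000.0  # microns -> nanometres
    return round(length_nm / nm_per_bp)

# The 48,502 bp lambda-phage genome is ~16.5 um long when fully extended;
# at an assumed 0.25 um effective pixel size that is ~66 pixels.
print(pixels_to_basepairs(66, 0.25))  # -> 48529
```

With roughly 3,000 base pairs per pixel in this hypothetical setup, a one-pixel measurement error already accounts for a large share of the ~1,000 bp accuracy quoted above, which is why the server-side image analysis has to localize strand endpoints at sub-pixel precision.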
Other groups are also hoping to make smartphone microscopes, to bring the power of microscope-based diagnostics to parts of the world lacking the necessary infrastructure—often the places most in need of strategies to quickly diagnose infectious diseases. Ozcan compares the movement to the personal computer revolution. “If you look at the early computers, they were bulky, they were extremely expensive,” he told The Scientist last year. Now, “[computers] are portable . . . and almost anyone can afford them. The same thing is going on today [with microscopy]. We are miniaturizing our micro- and nano-analysis tools. We’re making them more affordable; we’re making them more powerful.”
Measuring DNA with a Smartphone a Next Generation Technology…
Wednesday, November 12, 2014
Mind-controlled transgene expression by a wireless-powered optogenetic designer cell implant
Future Treatment Technique For arthritis, Diabetes, Obesity
Future Of Gene And Cell Based Treatments using optogenetic Implant
Wireless-powered optogenetic implant






