
New Technology Makes Batteries Cheaper, More Efficient


BOSTON, MA—Researchers at MIT have developed a new manufacturing strategy that cuts the cost of battery production in half.

In addition to slashing costs, the new technology promises to produce a better performing and more easily recycled battery. The key to the across-the-board improvements is hybridization.

The new manufacturing method has allowed scientists to combine the benefits of both liquid-based flow batteries and traditional solid ones. Researchers call the battery “semisolid.”

The battery features an electrode in the form of tiny particles suspended in liquid, which allows manufacturers to forgo the drying process involved in traditional solid battery construction.

Scientists were able to use thicker, less delicate electrodes when employing a semisolid design. This removed complexity from the manufacturing process, and made the battery more resilient and flexible.

Liquid technology is ideal for small batteries that don’t have to hold a significant charge. But for larger lithium-ion batteries intended for industrial uses, liquid technology requires too many components and makes for an inefficient manufacturing process.

The new method brings the benefits of liquid technology to big batteries—but without the baggage.

“We realized that a better way to make use of this flowable electrode technology was to reinvent the [lithium ion] manufacturing process,” Yet-Ming Chiang, lead researcher and MIT professor, explained in a press release.

Chiang and his colleagues have already spun the technology off into a startup, which is partnering with a number of companies to produce more than 1,000 prototypes.

Individual Neurons Tell Us Whether We Remember Something

And, at the same time, indicate how confident we should be in that judgment.

It’s hard to pin down exactly what makes us remember things. When you see an image, what makes you decide you’ve seen it before? A new study has tackled this question, identifying a group of neurons that participate in the process of identifying images as familiar.

While this may seem counterintuitive—it probably feels like you either recognize something automatically or you don’t—your brain makes that determination using different aspects of your memory. “Determining whether a stimulus is novel or familiar is a complex decision involving the comparison of sensory information with internal variables,” the authors explain in their paper.

Am I sure I’ve seen this before…?

When your brain makes a decision, it’s often accompanied by an assessment of how accurate that decision is. Was I right to buy that car? My brain would consider a number of factors—the driving experience, the gas mileage, and so on—before concluding it’s pretty likely I’m making the right decision. (Just an example; alas, there’s no shiny new car for my brain to assess). These confidence values are an essential part of the decision-making process, at least for humans, as they help us navigate our complex environment.

The decision of whether you recognize something is no exception. But exactly how your brain makes confidence judgments about familiarity is not well understood. One model holds that confidence judgments rely on evaluating the decision after it has been made, an ability that might be unique to humans.

Other models propose that confidence judgments are an essential part of the decision-making process itself—confidence in the decision is assessed by the same mechanism that makes the decision in the first place. Unlike the other model, it doesn’t require advanced cognitive abilities exclusive to humans, and thus we would expect to find this in other animals.

Confidence judgments in perceptual decisions can be studied in animals, and this work has recently provided evidence for the latter model. These studies show that animals seem to have the ability to make confidence judgments about decisions they’re making, but this has not been tested when it comes to memory-based decisions.


Finding the right neurons

The brain’s medial temporal lobe (MTL) has previously been implicated in memory-based decisions, and researchers have even identified specific populations of neurons within the MTL that might be involved in the process. Some of these neurons, researchers have suggested, may be marking certain stimuli the person encounters as familiar or as new.

This led to a prediction: the activity of those neurons should correlate with both memory strength and with confidence. That is, as a person looks at a stimulus she’s encountered before, these neurons should activate more when she’s most confident she’s seen the stimulus before, and when she has the clearest memory of it.

To test this prediction, the researchers needed a way to see what was going on in the brain as people made their decisions. The ideal candidates turned out to be individuals who’d had electrodes implanted in their brain to evaluate them for possible surgical treatment of epilepsy. The electrodes allowed the researchers to follow the activity of individual neurons; 28 individuals volunteered and were included in the study.

The researchers presented these participants with a series of images, all selected from easily recognizable categories: cars, animals, people, and so on. Later, in another session, the participants were presented with a second series of images from the same categories, with half being images the participants had seen during the first session. The task was simple: when shown an image for one second, report whether it’s familiar to you or whether you’re seeing it for the first time—as well as how confident you were in that evaluation.

Our brains do a pretty good job. The participants who reported higher confidence in their answer were consistently more likely to answer correctly. Subjects tended to correctly identify around 69% of the images they’d seen before as familiar, but they also mistakenly said they’d seen between 11 and 45% of the new images before.


A neuron never forgets

Similar to what other studies had shown, a small percentage of neurons in the amygdala and hippocampus, about 8.5%, responded differently when the participants were shown familiar images than when they were shown new ones. The researchers labeled these “memory-selective” neurons and identified two kinds: those that fired in response to new images (novelty selective) and those that fired in response to familiar ones (familiar selective).
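
For readers who want something concrete, here is a minimal sketch, in Python with made-up numbers, of how a single unit might be flagged as memory-selective: compare its trial-by-trial firing rates for new versus previously seen images and keep it only if the two distributions differ reliably. The simulated spike counts, the rank-sum test, and the 0.05 threshold are illustrative assumptions, not the study’s actual analysis pipeline.

```python
# Illustrative sketch only: label a unit "memory-selective" if its firing
# rates differ between novel and familiar trials. Data and thresholds are
# hypothetical, not the paper's exact method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Fake spike counts for one neuron: 50 novel trials, 50 familiar trials.
novel_rates = rng.poisson(lam=4.0, size=50)      # fires less for new images
familiar_rates = rng.poisson(lam=7.0, size=50)   # fires more for familiar images

# Nonparametric test: does the firing-rate distribution depend on the condition?
stat, p_value = stats.mannwhitneyu(familiar_rates, novel_rates)

if p_value < 0.05:
    kind = ("familiar-selective" if familiar_rates.mean() > novel_rates.mean()
            else "novelty-selective")
    print(f"memory-selective ({kind}), p = {p_value:.4f}")
else:
    print("not memory-selective")
```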

These neurons responded differently based on confidence: the more confident the participant was that the image was familiar, the stronger the signal among the familiar selective neurons, and vice versa for the novelty selective ones. Both kinds decreased their firing when the opposite stimuli were shown: familiar selective neurons fired less when the stimulus was new, and vice versa. This last effect, however, did not correlate with the participant’s confidence.

The memory-selective neurons also responded at above-chance levels when the participant was shown images they’d previously seen but claimed to have forgotten. By contrast, that reaction did not occur when the participant was looking at a truly new image. Apparently, stimuli that we think we’ve forgotten may not be gone from memory.

With their results, the researchers created a mathematical model that could consistently predict the familiarity decision a person would make given the signals from these neurons. The model makes specific predictions about new kinds of neurons that may be found in future work: neurons involved in evaluating the evidence for familiarity and unfamiliarity and deciding which is stronger.
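
The flavor of that kind of evidence-comparison model can be sketched in a few lines of Python: pool the familiar-selective and novelty-selective signals, let the stronger pool determine the answer, and treat the margin between them as confidence. The function and the firing rates below are hypothetical toys, not the researchers’ actual model.

```python
# Toy illustration of an evidence-comparison decision: two pools of neural
# signals race, the stronger one wins, and the margin doubles as confidence.
# All numbers are made up.
import numpy as np

def familiarity_decision(familiar_cell_rates, novelty_cell_rates):
    """Return ('familiar'|'new', confidence) from two pools of firing rates."""
    evidence_familiar = np.mean(familiar_cell_rates)
    evidence_new = np.mean(novelty_cell_rates)
    decision = "familiar" if evidence_familiar > evidence_new else "new"
    confidence = abs(evidence_familiar - evidence_new)  # larger margin = more confident
    return decision, confidence

# Hypothetical trial: familiar-selective cells fire strongly, novelty cells weakly.
print(familiarity_decision([9.1, 7.8, 8.4], [3.2, 4.0, 2.9]))
# -> ('familiar', ~5.1): a confident "I've seen this" response
```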

Taken together, the researchers’ results favor the second of the two ideas mentioned earlier: making a confidence judgment in one’s own decision doesn’t require meta-cognition (thinking about one’s own thoughts); instead, it’s an inherent property of the way brains make decisions. This implies that animals are probably capable of making confidence judgments about visual memories as well, something that should be possible to test.

~~  Nature Neuroscience, 2015 ~~

Seeing More Deeply with Laser Light


A human skull, on average, is about 0.3 inches thick, or roughly the depth of the latest smartphone. Human skin, on the other hand, is about 0.1 inches deep, or roughly the thickness of three grains of salt.

While these dimensions are extremely thin, they still present major hurdles for any kind of imaging with laser light.

Why? Laser light is made up of photons, minuscule particles of light. When photons encounter biological tissue, they scatter. Corralling the tiny beacons to obtain meaningful details about the tissue has proven one of the most challenging problems laser researchers have faced.

However, one research group at Washington University in St. Louis (WUSTL) decided to eliminate the photon roundup completely and use scattering to their advantage.

The result: An imaging technique that penetrates tissue up to about 2.8 inches. This approach, which combines laser light and ultrasound, is based on the photoacoustic effect, a concept first discovered by Alexander Graham Bell in the 1880s.

In his work, Bell found that a focused light beam produces sound when trained on an object and rapidly interrupted—he used a rotating, slotted wheel to create a flashing effect with sunlight.

Bell’s concept is the foundation for photoacoustics, an area of the growing field known as biophotonics, which joins biology with the light-based science of photonics. Biophotonics brings photonics principles, engineering and technology to bear on critical problems in medicine, biology and biotechnology.

“We combine some very old physics with a modern imaging concept,” says WUSTL researcher Lihong Wang, who pioneered the approach.

Wang and his WUSTL colleagues were the first to describe functional photoacoustic tomography (PAT) and 3-D photoacoustic microscopy (PAM). Both techniques follow the same basic principle: When the researchers shine a pulsed laser beam into biological tissue, it spreads out and generates a small, but rapid rise in temperature. This increase produces sound waves that are detected by conventional ultrasound transducers. Image reconstruction software converts the sound waves into high-resolution images.
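
As a rough illustration of the arithmetic behind that reconstruction step (a simplification assuming a single detector and a typical tissue sound speed, not Wang’s actual software), the depth of an absorber follows from how long its ultrasound signal takes to reach the detector:

```python
# Minimal time-of-flight sketch: a short laser pulse heats absorbers, each
# absorber emits an ultrasound wave, and its depth follows from the wave's
# arrival time at the detector. Single-detector geometry and the sound speed
# are simplifying assumptions.
SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, a typical value for soft tissue

def depth_from_arrival_time(arrival_time_s: float) -> float:
    """Depth (m) of an absorber given the ultrasound arrival time at the surface."""
    return SPEED_OF_SOUND_TISSUE * arrival_time_s

# A pressure wave arriving 46 microseconds after the laser pulse corresponds
# to an absorber roughly 7 cm (about 2.8 inches) deep -- the depth the article cites.
print(f"{depth_from_arrival_time(46e-6) * 100:.1f} cm")
```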


Following a tortuous path

Wang first began exploring the combination of sound and light as a post-doctoral researcher.

At the time, he modeled photons as they traveled through biological material. This work led to an NSF CAREER grant to study ultrasound encoding of laser light to “trick” information out of the beam.

“The CAREER grant boosted my confidence and allowed me to study the fundamentals of light and sound in biological tissue, which benefited my ensuing career immensely,” he says.

Unlike other optical imaging techniques, photoacoustic imaging detects ultrasonic waves induced by absorbed photons no matter how many times the photons have scattered. Multiple external detectors capture the sound waves regardless of their original locations.

“While the light travels on a highly tortuous path, the ultrasonic wave propagates in a clean and well-defined fashion,” Wang says. “We see optical absorption contrast by listening to the object.”

The approach does not require injecting imaging agents, so researchers can study biological material in its natural environment. Using photoacoustic imaging, researchers can visualize a range of biological material from cells and their component parts to tissue and organs. It detects single red blood cells in blood, as well as fat and protein deposits.

While PAT and PAM are primarily used by researchers, Wang and others are working on multiple clinical applications. In one case, researchers use PAM to study the trajectory of blood cells as they flow through vessels in the brain.

“By seeing individual blood cells, researchers can start to identify what’s happening to the cells as they move through the vessels. Watching how these cells move could act as an early warning system to allow detection of potential blockage sites,” says Richard Conroy, director of the Division of Applied Science and Technology at the National Institute of Biomedical Imaging and Bioengineering.


Minding the gap

Because PAT and PAM images can be correlated with those generated using other methods such as magnetic resonance imaging or positron emission tomography, these techniques can complement existing ones.

“One imaging modality can’t do everything,” says Conroy. “Comparing results from different modalities provides a more detailed understanding of what is happening from the cell level to the whole animal.”

The approach could help bridge the gap between animal and human research, especially in neuroscience.

“Photoacoustic imaging is helping us understand how the mouse brain works. We can then apply this information to better understand how the human brain works,” says Wang, who along with his team is applying both PAT and PAM to study mouse brain function.

Wang notes that one of the challenges currently facing neuroscientists is the lack of available tools to study brain activity such as action potentials, which occur when electrical signals travel along axons, the long fibers that carry signals away from the nerve cell body.

“The holy grail of brain research is to image action potentials,” he says.

With funding from The BRAIN Initiative, Wang and his group are now developing a PAT system to capture images every one-thousandth of a second, fast enough to image action potentials in the brain.

“Photoacoustic imaging fills a gap between light microscopy and ultrasound,” says Conroy. “The game-changing aspect of this [Wang’s] approach is that it has redefined our understanding of how deep we can see with light-based imaging.”

~~  Susan Reiss - NSF ~~

Microsoft Rolls Out Windows 10 Mobile Preview Build 10136


It’s been about a month since Microsoft made available a new test build of its Windows 10 Mobile operating system for Windows Phones. But on June 16, Microsoft delivered a new build, No. 10136, to Windows Insider testers on the Fast Ring.

Build 10136 includes improvements to Cortana, Photos and Camera, according to a June 16 blog post outlining the changes in the new build. There are also lots of “subtle” changes in the interface, including modifications designed to make Windows Phones with screen sizes of five inches or more easier to use with one hand. The new build also includes a number of bug fixes.

With this build, Microsoft has not added any new Windows Phones to the list of those the preview already supports.

Today’s build is available only to select phones running Windows Phone 8.1 that have opted into the Fast Ring. That means those on the previous test build (10080) need to go back to Windows Phone 8.1 using the Windows Phone Recovery tool first. Those on Build 10080 who opt not to flash back to WP 8.1 won’t get this build, but still will get whatever follows 10136. It’s worth reading the blog post for more detailed instructions about installing today’s new build.

Windows 10 Mobile won’t be available in final form on July 29, when the rollout of Windows 10 for PCs begins. It will “release broadly later this year,” Microsoft officials continue to say.
