A Cheap and Easy Blood Test Could Catch Cancer Early

A simple blood test that can tell whether you have a tumor lurking, and even where in the body it is growing, is much closer to reality, and it may cost only $500.

The new test, developed at Johns Hopkins University, looks for signs of eight common types of cancer. It requires only a blood sample and may prove inexpensive enough for doctors to give during a routine physical.

“The idea is this test would make its way into the public and we could set up screening centers,” says Nickolas Papadopoulos, one of the Johns Hopkins researchers behind the test. “That’s why it has to be cheap and noninvasive.”

Although the test isn’t commercially available yet, it will be used to screen 50,000 retirement-age women with no history of cancer as part of a $50 million, five-year study with the Geisinger Health System in Pennsylvania, a spokesperson for the insurer said.

The test, detailed today in the journal Science, could be a major advance for “liquid biopsy” technology, which aims to detect cancer in the blood before a person feels sick or notices a lump.

That’s useful because early-stage cancer that hasn’t spread can often be cured.

Companies have been pouring money into developing liquid biopsies. One startup, Grail Bio, has raised over $1 billion in pursuit of a single blood test for many cancers.

For their test, Hopkins researchers looked at blood from 1,005 people with previously diagnosed ovarian, liver, stomach, pancreatic, esophageal, colorectal, lung, or breast cancer.

Their test searches for a combination of eight cancer proteins as well as 16 cancer-related genetic mutations.

The test was best at finding ovarian cancer, which it detected up to 98 percent of the time. It correctly identified a third of breast cancer cases and about 70 percent of people with pancreatic cancer, which has a particularly grim outlook.

The chance of a false alarm was low: only seven of 812 apparently healthy people turned up positive on the test.

The researchers also trained a machine-learning algorithm to determine the location of a person’s tumor from the blood clues. The algorithm guessed right 83 percent of the time.

“I think we will eventually get to a point where we can detect cancer before it’s otherwise visible,” says Len Lichtenfeld, deputy chief medical officer of the American Cancer Society.

He cautions that screening tests can sometimes harm rather than help. That can happen if they set off too many false alarms or if doctors end up treating slow-growing cancers that are not likely to do much harm.

Canadian Science Community Gathers Momentum in Improving Gender Equity

Unconscious bias can lead to inequitable outcomes in grant-funding competitions, especially when applicants’ leadership potential is explicitly evaluated, shows a study posted as a preprint to bioRxiv on December 12, 2017. But unconscious-bias training for reviewers can mitigate this effect and lead to a more equitable allocation of funds, the science funding organization behind the study found. That may pave the way for strategies and policies to improve equity in science at the highest levels, a stubborn problem that persists in spite of the large number of women and minorities entering science graduate programs.

In 2009, women made up about 12 percent of full professors in natural sciences and engineering at Canadian universities, about 20 percent of associate professors, and 28 percent of assistant professors, according to a Natural Sciences and Engineering Research Council (NSERC) report (see Fig 3.10). From 2010 to 2016, the number of female full professors across all academic disciplines rose 28 percent and the number of female associate professors rose 18 percent, but the number of female assistant professors fell by 12 percent, according to a Statistics Canada report.

Analyses by the Canadian Institutes of Health Research (CIHR) show that, overall, women and men experience comparable success rates in CIHR grant competitions, but gender inequities exist in certain CIHR programs. (The data on funding-success rates in different CIHR programs are freely available on its website.)

See “New Initiatives Offer Jobs, Funding to Women Only”

In 2014, CIHR phased out traditional grant programs and launched two new programs to reduce the administrative burden on investigators and grant reviewers. One, called the Foundation Program, takes a “people, not projects” approach to funding and explicitly evaluates the caliber of the principal investigator (PI). It contrasts with the new Project Program, which focuses on evaluating the proposed research projects. Together, these two programs account for approximately three-quarters of CIHR’s total funding of CAD $1 billion.

To see how women fared under these new funding arrangements, researchers conducted an analysis of PIs’ success rates. They found that in the Project Program, men and women were funded at equal rates: around 15 percent succeeded in getting an award.

“So the good news is that when we evaluate research proposals based on the science, it doesn’t matter if you are a male or female applicant, you can have an amazing idea and have the same chance of getting funded,” says Cara Tannenbaum, scientific director of the CIHR Institute of Gender and Health and a coauthor of the study.

But in the Foundation scheme, the authors found signs of a gender bias, Tannenbaum says. The funding success rate for male applicants was 4 percentage points higher than that for female applicants. Unconscious bias seems to operate more strongly when reviewers evaluate the leadership potential of the applicant, Tannenbaum adds.

CIHR’s attempt to curb unconscious bias

Even before Tannenbaum’s results were published, she and her colleagues at CIHR were already considering ways to address potential gender disparities under the new funding schemes. One idea was to remove the applicant’s name and review blind, but they found that the language in which people write their applications and reference letters still allows gendered stereotypes to emerge, Tannenbaum explains.

So instead, in 2016, they created an unconscious bias training module for peer reviewers. When CIHR ran the 2016–2017 grant competition, all reviewers were required to complete this training before reviewing proposals. This time, the success rates of male and female applicants in the Foundation Program equalized, suggesting that the unconscious-bias training was an effective intervention, Tannenbaum says. The researchers plan to collect data from several more rounds of competition to confirm this preliminary finding.

The results are promising enough that, in the meantime, CIHR has made the unconscious-bias training module mandatory for all peer reviewers in its largest open grant competitions. “We’re being really upfront and quite aggressive about it, to be honest with you,” Tannenbaum says. Furthermore, because the initial gender disparities in the Foundation Program seem to occur at Stage 1 (the assessment of the applicant), CIHR instituted a policy to ensure that the proportion of female applicants moving to Stage 2 (the assessment of the science) will equal the proportion of female applicants in Stage 1, the CIHR policy team writes in an email to The Scientist.

Molly Carnes of the University of Wisconsin-Madison, who has extensively researched implicit bias in science, notes similarities between Canada’s Foundation Program and the NIH Director’s Pioneer Award, a program focused on evaluating a scientist’s leadership potential. In its first year, 2004, no women were among its nine awardees, and in 2017, just one woman (among 11 men) was an awardee.

“I congratulate the authors on this elegant study and hope NIH and others take note,” Carnes writes in an email to The Scientist. “There are several programs at NIH that focus on the investigator—it will be important to follow the gender distributions of the awardees.”

Canada’s spotlight on gender equity

In recent years, Canada has launched a number of initiatives to retain women in STEM, many spearheaded by Canadian Science Minister Kirsty Duncan, who made the issue a priority for Canadian science when she was appointed in November 2015.

Since her tenure began, she has asked universities to set and meet specific, transparent equity targets for hiring women, racialized groups, people with disabilities, and aboriginal peoples as researchers, or risk losing funding from a prestigious tri-agency program called the Canada Excellence Research Chairs. The three funding agencies involved are CIHR, the Natural Sciences and Engineering Research Council (NSERC), and the Social Sciences and Humanities Research Council (SSHRC). Through the Canada Excellence Research Chairs program, universities can apply for support to hire exceptional faculty, but continued support is now contingent on their ability to meet equity goals in the applicants recruited and nominated for these chairs.

“The minister really pushed universities to step up and deal with [bias]. I give her a lot of credit for it,” says Vianne Timmons, president of the University of Regina in Saskatchewan and co-chair of the tri-agency Advisory Committee on Equity, Diversity, and Inclusion Policy. “It is something I’m glad we are tackling,” Timmons adds. “I think we’re at a turning point in society.”

Through Universities Canada, Canadian university presidents collectively participated in unconscious-bias training in spring 2017, Timmons explains. The experience was so valuable to her that she brought a facilitator to the University of Regina to train her senior leadership team as well. “You have to make [bias] explicit,” she says. “You’ve got to draw attention to it for people to learn to see it.”

Senior leadership in the Canadian federal government has also shown a commitment to pursuing gender equity by hosting the Gender Summit in Montreal in November 2017. The summit brought together advocates of gender equity from across the country to outline concrete, actionable steps to eliminate disparities. “There have been a few institutions or groups that have been working on advocacy for a long time,” says Imogen Coe, dean of the Faculty of Science at Ryerson University in Toronto, who attended the summit. “But this was a national stage and an expression by the federal funding leaders showing their commitment to this issue.”

Effect of light-delignification on mechanical, hydrophobic, and thermal properties of high-strength molded fiber materials


Fiber morphology and chemical composition analysis

Light-delignification can cause changes in fiber morphology and chemical composition, which in turn affect the performance of high-strength molded fiber materials (HMFM). The average and general values of the fiber dimensions are shown in Table 1.

Table 1: Average and general value of the fiber dimensions.

The general value of STP fiber length ranged from 495.5 to 808.3 μm, smaller than that of UTP fibers, whereas the general value of STP fiber diameter ranged from 14.3 to 19.3 μm, greater than that of UTP fibers. This resulted in a decrease in the aspect ratio of STP fibers. The average values of the fiber dimensions showed the same trends as the general values: fiber length decreased, diameter increased, and aspect ratio decreased after light-delignification. The decrease in aspect ratio may weaken the mechanical cross-linking between fibers. However, the increase in diameter provided more contact area between fibers, which was beneficial to the inter-fiber bonding strength. The general value of STP fiber wall thickness ranged from 3.5 to 3.9 μm, smaller than that of UTP fibers. This may be related to the removal of lignin from the fiber surface, which decreased the wall thickness and increased fiber softness, in turn favoring greater inter-fiber compactness. Table 2 shows the yields and chemical composition.

The yield of STP decreased by 20.3% compared to that of UTP. In addition to the decomposition and dissolution of lignin and other components, this may reflect the loss of some small fiber fragments, which increased the mass loss during washing. After light-delignification, the ratio of holocellulose to lignin content rose from 2.8 to 7.0, indicating a significant change in fiber composition. The lignin content decreased significantly, by 54.0%, and the contents of holocellulose and α-cellulose increased correspondingly. However, the pentosan content did not increase significantly: light-delignification reduced the share of pentosan in holocellulose from 22.0% to 19.9%, showing a certain degree of hemicellulose degradation.
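To make the arithmetic behind these composition metrics explicit, the short sketch below recomputes the holocellulose-to-lignin ratio, the pentosan share of holocellulose, and the relative lignin loss. The mass fractions used are illustrative placeholders chosen only to reproduce the quoted ratios, not the measured values of Table 2.

```python
# Illustrative check of the composition-derived metrics quoted above.
# The mass fractions below are placeholders, NOT the paper's Table 2 data.

def composition_metrics(holocellulose, lignin, pentosan):
    """Return (holocellulose/lignin ratio, pentosan share of holocellulose in %)."""
    return holocellulose / lignin, 100.0 * pentosan / holocellulose

# Hypothetical mass fractions (% of oven-dry pulp):
utp = dict(holocellulose=70.0, lignin=25.0, pentosan=15.4)  # untreated pulp
stp = dict(holocellulose=80.5, lignin=11.5, pentosan=16.0)  # light-delignified pulp

for name, c in (("UTP", utp), ("STP", stp)):
    ratio, share = composition_metrics(**c)
    print(f"{name}: holocellulose/lignin = {ratio:.1f}, "
          f"pentosan in holocellulose = {share:.1f}%")

# Relative lignin loss (~54% in the text):
print(f"lignin loss: {100.0 * (utp['lignin'] - stp['lignin']) / utp['lignin']:.1f}%")
```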

Physical and mechanical properties of HMFM

Mechanical strength is one of the most important indices for characterizing the performance of HMFM. The density and mechanical properties are shown in Table 3 and Fig. 1.

After light-delignification, the density increased, the tensile strength and bending strength improved significantly, and the corresponding strains also increased slightly. Compared to UTS, the density of STS increased by 6.0%, its tensile strength by 22.0%, and its bending strength by 23.9%. The light-delignification increased the softening degree of the fibers, so they were pressed more densely, raising the density. The formation of adhesive material between fibers, together with the compaction of the fiber cell lumens, promoted the tensile strength and bending strength of HMFM, consistent with the XPS and SEM results.

Surface chemical composition analysis of HMFM

Light-delignification changed the chemical composition of the outer surface of HMFM. XPS provides quantitative information on the differently bonded carbon atoms on the HMFM surface in addition to the chemical composition19; the results are shown in Fig. 2 and Table 4. Carbon (~285 eV) and oxygen (~532 eV) were the main elements detected in the XPS survey scan of the fibers, and a small amount of nitrogen (~399 eV) was also found. The nitrogen atom concentration on the outer surface decreased slightly after light-delignification. The outer-surface lignin content of HMFM was calculated using Eq. (1), with the results presented in Table 4. In contrast to the bulk composition, the outer-surface lignin concentration of STS increased. The reason may be that light-delignification caused lignin on the outer fiber surface to dissolve out in the form of debris, which became enriched on the material surface during the hot-pressing process, increasing the outer-surface coverage of lignin.

The O/C ratio can be used to characterize the outer-surface contents of carbohydrate, lignin, and extractives. Because the acetone-extractable extractives had been removed, an increase in the O/C ratio represents a higher carbohydrate concentration on the material surface20. The decreased O/C ratio of STS indicated that the oxygen-rich components on the material surface were relatively reduced, consistent with the outer-surface lignin results. The theoretical O/C ratios of cellulose and lignin are 0.83 and 0.33, respectively21. The O/C ratios of STS and UTS fell between 0.33 and 0.83, closer to 0.33, which is consistent with the chemical composition of HYP fibers.
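Eq. (1) is not reproduced in this excerpt, but a commonly used form of the XPS surface-lignin estimate interpolates the measured O/C ratio between the theoretical values for pure cellulose and pure lignin. The sketch below shows that form as an assumption; the measured O/C values are hypothetical stand-ins for the Table 4 data.

```python
# A minimal sketch of the outer-surface lignin estimate from XPS O/C ratios.
# The linear-mixing formula below is an assumption (Eq. (1) is not shown here).

OC_CELLULOSE = 0.83  # theoretical O/C of pure cellulose (ref. 21)
OC_LIGNIN = 0.33     # theoretical O/C of pure lignin (ref. 21)

def surface_lignin_fraction(oc_measured: float) -> float:
    """Fraction of the analyzed surface attributed to lignin, from the measured O/C."""
    return (oc_measured - OC_CELLULOSE) / (OC_LIGNIN - OC_CELLULOSE)

# Hypothetical measured O/C ratios (stand-ins for the Table 4 values):
for name, oc in (("UTS", 0.45), ("STS", 0.40)):
    print(f"{name}: O/C = {oc:.2f} -> surface lignin ~ "
          f"{100 * surface_lignin_fraction(oc):.0f}%")
```

On this reading, a lower O/C maps directly onto a higher lignin coverage, which is why the decreased O/C of STS and its increased outer-surface lignin content are two views of the same observation.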

According to the classification of carbon atoms in wooden materials, the C1s peak was deconvoluted into four subpeaks: C1 corresponds to C–C or C–H and, with the extractives removed, is attributed solely to lignin; C2 and C3 refer to C–O and to C=O or O–C–O, respectively, which occur in carbohydrates; and C4 refers to O–C=O, which represents carboxylic acids, resins, and other substances22.

Figure 3 presents the deconvoluted C1s signals of STS and UTS. After light-delignification, the relative amounts of C1 and C4 increased markedly while those of C2 and C3 decreased. Light-delignification reduced the degree of polymerization of the lignin on the fiber surface and exposed more phenolic hydroxyl groups. During the hot-pressing process, these phenolic hydroxyl groups polymerized with the degradation products of carbohydrates, producing resin-like polymers useful for inter-fiber bonding and thereby increasing the relative C4 content. Additionally, the hydrophobicity of the material surface can be expressed as the C1/C2 ratio23. The C1/C2 value was 0.91 for UTS and increased to 1.48 for STS (Table 4), indicating that the hydrophobicity of HMFM was improved.

Thermal properties analysis of HMFM

The light-delignification caused changes in the fiber chemical composition, which may lead to changes in the thermal properties of the material. Figure 4 shows the TG, DTG and DSC curves of UTS and STS. Important data derived from TG curves are explained briefly in Table 5.

Figure 4: TG (a), DTG (b) and DSC (c) curves of UTS and STS.

Table 5: TG results of UTS and STS. Ti denotes the initial decomposition temperature.

As shown in Fig. 4(a) and (b), UTS and STS had similar weight-loss behavior. From 26 to 115 °C, the weight loss was slight and corresponded to the first endothermic process, the evaporation of water; the loss was greater for STS, indicating larger moisture absorption. From 115 to 240 °C, the samples showed almost no weight loss. From 240 to 395 °C, there was substantial weight loss, with clearly different weight-loss rates between the two samples. As seen in Fig. 4(c), the characteristic peaks of the DSC curves were essentially the same, indicating that similar chemical reactions occurred during the thermal decomposition process, but the peak intensities differed because of differences in chemical composition.

The decomposition temperature ranges of hemicellulose, cellulose, and lignin in wood fiber materials are 180–240 °C, 230–310 °C, and 300–400 °C, respectively24. As seen in Table 5, the Ti values of STS and UTS were 238.2 °C and 242.8 °C, respectively, indicating that hemicellulose began to undergo thermal decomposition. The Tm values of STS and UTS were 345.6 °C and 370.0 °C, respectively, reflecting the depolymerization of most of the cellulose and a portion of the lignin25. The Tf values of STS and UTS were 389.5 °C and 394.5 °C, respectively, indicating that the residual lignin decomposed gradually.

Compared to UTS, the Ti and Tf values of STS decreased slightly, while its Tm value decreased significantly. These results indicate that STS was less thermally stable than UTS. Light-delignification reduced the degree of lignin polymerization at the fiber surface, which weakened the barrier effect of lignin against the thermal degradation of the other fiber components26, so that the thermal stability was reduced. Compared to UTS, the residual weight (RW) of STS at 450 °C decreased, which could be attributed to the reduced degree of lignin polymerization accelerating lignin decomposition.
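As an illustration of how the characteristic temperatures in Table 5 can be read from a thermogram, the sketch below extracts Tm as the DTG peak and takes Ti and Tf at fixed fractions of the main mass-loss step. The exact onset/endset definitions vary between papers, so these thresholds are assumptions, and the curve is synthetic.

```python
# Sketch: reading Ti, Tm, Tf from a TG curve. The Ti/Tf thresholds (5%/95% of
# the total mass loss) and the synthetic curve are assumptions for illustration.

import numpy as np

def tg_characteristics(T, w):
    """T: temperature (deg C), w: residual weight fraction. Returns (Ti, Tm, Tf)."""
    dtg = -np.gradient(w, T)          # weight-loss rate (positive during decomposition)
    Tm = T[np.argmax(dtg)]            # temperature of maximum weight-loss rate
    lost = w[0] - w                   # cumulative mass loss
    Ti = T[np.searchsorted(lost, 0.05 * lost[-1])]
    Tf = T[np.searchsorted(lost, 0.95 * lost[-1])]
    return Ti, Tm, Tf

# Synthetic single-step decomposition centred near 370 deg C:
T = np.linspace(26, 450, 1000)
w = 0.25 + 0.75 / (1.0 + np.exp((T - 370.0) / 12.0))
print("Ti, Tm, Tf =", tg_characteristics(T, w))
```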

As shown in Fig. 4(c), the major endothermic peaks in the DSC thermograms of UTS and STS appeared at 369.1 °C and 348.3 °C, respectively, with corresponding enthalpy values of 110.8 J/g and 68.0 J/g. These results show that after light-delignification the thermal decomposition temperature decreased, the enthalpy required for thermal decomposition decreased, and the thermal stability of the material decreased, consistent with the TG results. A weak endothermic peak in the DSC thermogram of UTS was detected at 398 °C and assigned to the thermal decomposition of lignin27. It did not appear in the DSC thermogram of STS, which may be related to the decrease in lignin content28.

Micro morphology analysis of HMFM

Light-delignification caused changes in the microscopic morphology of the fibers, resulting in changes in material properties. SEM micrographs of the material surface, cross section, and inner surface are shown in Figs 5 and 6.

Figure 5: SEM micrographs of the surface of UTS (a,c) and STS (b,d).

Figure 6: SEM micrographs of the cross section (a: UTS, b: STS) and inner surface (c: UTS, d: STS).

As shown in Fig. 5, UTS fibers appeared stiffer and showed a lower degree of fibrillation. The fibers of UTS were mechanically intertwined, with a smaller contact area, looser binding, and more surface holes. By contrast, STS fibers appeared softer, and they were bonded together by adhesive substances, with increased contact area and improved binding compactness.

As shown in Fig. 6a and b, compared to UTS, the fibers of STS appeared more compacted, more regularly and neatly arranged, and more tightly bound. As shown in Fig. 6c and d, compared to UTS, the STS sample showed more melted fragments spreading between fibers, which caused greater agglomeration of the fibers and contributed to the improvement of the inter-fiber bonding strength.

Hydrophobic property analysis of HMFM

Hydrophobicity is an important index influencing the application of HMFM, and the water contact angle is an effective measure of the surface hydrophobicity of a material. Figure 7 shows the surface water contact angle of HMFM over time.

Figure 7: Water contact angles of the UTS and STS at different contact times.

As seen in Fig. 7, the water contact angles of UTS and STS decreased gradually over time. The contact angle of UTS varied from 72.7° to 64.3° over 60 s, while that of STS varied from 84.3° to 80.8°. The significantly larger contact angles of STS indicated an increase in the hydrophobicity of HMFM. The stability of the water contact angle over time is a very important parameter for a hydrophobic surface29: the contact angle of UTS decreased by 8.4° in 60 s, while that of STS decreased by only 3.5°, indicating improved hydrophobic stability. Light-delignification thus significantly improved the hydrophobicity of the HMFM surface, consistent with the XPS results.

Weiss, Mahadevan-Jansen honored by OSA; Weiss also named SPIE Fellow

Two engineering professors have been named fellows of The Optical Society (OSA), a leading international association for optics and photonics.

Cornelius Vanderbilt Professor of Engineering Sharon Weiss, a professor of electrical engineering, received the distinction “for contributions expanding the use of silicon in photonics and optoelectronics, and especially for designing and demonstrating highly sensitive porous-silicon guided-wave optical biosensors.”

Weiss also has been named a Fellow of SPIE, the international society advancing an interdisciplinary approach to the science and application of light, “for achievements in optical properties of nanoscale materials.”

Orrin H. Ingram Professor of Biomedical Engineering Anita Mahadevan-Jansen, also a professor of neurological surgery, was honored by OSA “for contributions to the clinical translation of optical diagnostics and therapeutics including the development and application of Raman spectroscopy methods and infrared neural stimulation.”

She was named a SPIE Fellow in 2010 and just finished a three-year term on the organization’s Board of Directors.

OSA and SPIE are interdisciplinary groups that bring together academics, clinicians, and industry members in optics and photonics, fast-growing fields that leverage advanced materials and different forms of light in clinical and scientific applications.

Mahadevan-Jansen is director of the Vanderbilt Biophotonics Center (VBC) and Weiss is deputy director of the Vanderbilt Institute of Nanoscale Science and Engineering (VINSE). Both VINSE and VBC involve faculty from the School of Medicine and the College of Arts & Science as well as the School of Engineering.

Weiss’ research group studies photonics, optoelectronics, nanoscience and technology, plus optical properties of materials, for applications in biosensing, optical communication, drug delivery, nanoscale patterning and pseudocapacitors.

She joined the Vanderbilt engineering faculty as an assistant professor in 2005. She has received the school’s Award for Excellence in Teaching and a Presidential Early Career Award for Scientists and Engineers (PECASE), the highest honor bestowed by the United States government on young professionals in the early stages of their independent research careers. In 2017, Weiss was named a Cornelius Vanderbilt Professor of Engineering.

Recent publications include 2017 articles in Optics Express and Proceedings of SPIE Nanoscience and Engineering.

Mahadevan-Jansen, an acknowledged leader in biomedical photonics, joined the School of Engineering in 1997.

VBC focuses on cancer photonics, neuro-photonics and multiscale photonics. She and her team develop technologies that can be used in clinical care for cancer diagnosis and therapy guidance. Surgeons use her laser spectroscopy techniques during delicate brain tumor surgery and her optical techniques are also used in breast cancer surgery.

Mahadevan-Jansen and her team are making at least six presentations at SPIE Photonics West in San Francisco later this month. She also is leading a $3 million interdisciplinary Vanderbilt initiative under the TIPs program to develop next-generation microscopy tools.

The distinction of Fellow is one of the highest honors in SPIE and OSA and requires nomination, review and approval by the respective boards. The new honors for Mahadevan-Jansen and Weiss are for the Fellow cohorts of 2018.

“It’s an honor to congratulate the new Fellows, who have contributed to the growth of our industry,” OSA CEO Elizabeth Rogan said in a statement. “This impressive group of individuals will continue to add tremendous value to the optical science ecosystem.”

Learning Opens the Genome

A new bioinformatics strategy called DEScan has enabled researchers to identify genomic regions that undergo changes in chromatin accessibility in response to learning, according to a report in Science Signaling yesterday (January 16). Examining hippocampal neurons from mice before and after fear conditioning revealed widespread changes in chromatin conformation, mainly toward a more open structure.

“This is a fascinating investigation into the epigenetic basis for plasticity in the adult nervous system,” David Sweatt, a pharmacologist at Vanderbilt University who was not involved in the work, writes in an email to The Scientist. “The study is exceptionally comprehensive and utilizes cutting-edge technologies to interrogate the entire genome and assess sites of genetic plasticity in memory formation.”

Figuring out how epigenetic mechanisms within brain cells are linked to learning and memory is a subject of great interest to many researchers, including Washington State University’s Lucia Peixoto. But Peixoto’s epigenetic pursuits also have a medical motivation, she explains. “About 50 percent of people on the [autism] spectrum have learning disabilities . . . telling us that there must be a big overlap between mechanisms underlying learning and memory and those that underlie more severe forms of the [condition].” She therefore reasons that examining learning-induced chromatin changes might help to narrow the search for genetic regions involved in autism.

Genetic association studies for autism spectrum disorder (ASD) to date have been limited to the protein-coding portion of the genome, “which is about 2 percent,” says Peixoto. “There’s a need to look outside of the exome,” she continues, and investigating gene regulatory regions—hotspots of epigenetic chromatin changes—seemed like a good place to start.

The seat of learning and memory in the brain is the hippocampus. Peixoto and colleagues therefore took hippocampal samples from mice that experienced fear conditioning as well as from control animals, and prepared the chromatin for structural analysis.

DNA sequencing of these preparations was then followed by DEScan evaluations—a statistical framework for determining the differential enrichment of open and compact chromatin regions between the conditioned animals and controls. Of the 2,365 regions that exhibited changes in accessibility after fear conditioning, only 25 became more compact while 2,340 became more accessible.

“This was a surprise,” says Peixoto, but “it actually matches what other people have reported for . . . neuronal activity,” which is that activation of hippocampal neurons can induce open chromatin structures.

“Their results reinforce the view that the mature nervous system exhibits significant plasticity, and the epigenomes are dynamically regulated in response to learning,” Hongjun Song, a neuroscientist and cell and developmental biologist at the University of Pennsylvania who was not involved in the work, writes in an email to The Scientist.

Open chromatin structures are generally associated with more transcriptional activity, but Peixoto’s team found only a weak link between the set of learning-altered loci and learning-induced gene-expression differences. The researchers found a much stronger link, however, to learning-induced production of alternative transcripts—the non-standard transcripts sometimes made by certain genes. Consistent with this, the researchers showed that a disproportionate number of the learning-altered loci overlapped with alternative promoter regions of genes, which direct alternative transcript production. This suggests that changes in chromatin conformation during learning could drive the expression of alternative gene products—such as variant proteins, or noncoding RNAs.

Perhaps most strikingly, the team found a more than three-fold enrichment of known ASD risk genes in the set of learning-altered loci, which supported the researchers’ original hypothesis: that the biological underpinnings of learning and ASD are linked.

Exactly how regions of the genome that experience learning-induced chromatin changes might influence ASD is, as yet, unknown. But, the findings hint “that aberrant epigenetic regulation could contribute to ASD,” writes Song.

J.N. Koberstein et al., “Learning-dependent chromatin remodeling highlights noncoding regulatory regions linked to autism,” Sci Signal, 11:eaan6500, 2018.

In vivo Microscopic Photoacoustic Spectroscopy for Non-Invasive Glucose Monitoring Invulnerable to Skin Secretion Products


Position Scanning Photoacoustic Imaging

Since human skin constantly secretes sweat and skin oil, these secretion products are a significant source of interference and noise. It is therefore important to investigate the effect of sweat secretion on in vivo skin photoacoustic spectroscopy for glucose monitoring. To obtain spatial information about the skin, we developed a position scanning photoacoustic spectroscopy system. The setup for mid-infrared excitation of human skin in vivo and photoacoustic detection has been described previously29,30, and the modified version for scanning photoacoustic spectroscopy is shown in Fig. 1. In our photoacoustic system, a high-energy laser beam irradiating the skin produces a thermal expansion, thereby generating an acoustic wave, which is acquired via a microphone, a phase-sensitive amplifier, and filters. The position scanning photoacoustic images of the skin were taken with a field of view of 1.3 mm by 1.3 mm, with the skin exposed to the detector over a 2.5-mm-diameter area. The laser beam was focused down to a 90-μm beam diameter near the focal distance of 50.4 mm, as shown in Fig. 1(c); the resolution of the photoacoustic imaging was therefore expected to be 90 µm. The resolution was evaluated experimentally using a specimen consisting of SU-8 line structures of varying line width and spacing on a silicon wafer. An optical micrograph and photoacoustic image of the SU-8 structures are shown in Fig. 1(b); the photoacoustic image confirmed that our system was able to resolve features as small as 90 µm. The scanning window was 30-by-30 pixels with a step size of 44 µm. The acoustic pulse generated in the skin by absorption of the laser pulse was coupled to the air of the photoacoustic cell; by matching the laser repetition rate to 47.5 kHz, the frequency at which the resonance of the cell coincides with the resonance of the microphone, the signal was amplified with a quality factor of 17 [Fig. 1(d)].
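As a quick consistency check (a sketch using only numbers stated above), the 30-by-30-pixel window at a 44-µm step reproduces the quoted ~1.3-mm field of view, and the quality factor implies the approximate width of the usable resonance peak:

```python
# Consistency check of the imaging and resonance parameters quoted in the text.

n_pixels = 30        # scan window: 30-by-30 pixels
step_um = 44         # scan step size (µm)
fov_mm = n_pixels * step_um / 1000.0
print(f"field of view: {fov_mm:.2f} mm x {fov_mm:.2f} mm")   # ~1.3 mm, as stated

f0_hz = 47_500       # laser repetition rate matched to the cell/microphone resonance
Q = 17               # quality factor of the resonant amplification
print(f"approximate resonance bandwidth f0/Q = {f0_hz / Q:.0f} Hz")
```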

Figure 1: (a) Schematic drawing of the position scanning photoacoustic spectroscopy system. (b) Optical micrograph and photoacoustic image of SU-8 structures for resolution evaluation. (c) Beam diameter after reflection from the parabolic mirror, with a minimum of 90 μm at the focal distance of 50.4 mm from the parabolic mirror. (d) Photoacoustic spectrum in the frequency domain, with the peak located at 47.5 kHz, for a reference carbon black tape sample. Scale bar is 250 µm.

Using the position scanning photoacoustic imaging system, we first investigated how the signal varied across different locations on the skin. We obtained position scanning photoacoustic images at multiple body locations of a volunteer, including the fingertips, palm, and forearm. As shown in Fig. 2, a large inhomogeneity was found where the fingertip (volar distal phalanx) was exposed to the detector, and the patterns of the scanning images also varied considerably among the fingertip, palm, and forearm probing locations. Since skin inhomogeneity depends on the scanning location, it would introduce variations between subjects, as well as temporal changes even at the same location.

Figure 2: Images of the position scanning photoacoustic spectroscopy measured in two different regions of the fingertip: a part where the epidermal ridges align in a line (a) and a part where the epidermal ridges bend into a U-shape (b). The optical micrographs of the corresponding fingertip regions are shown in (c) and (d). Images of the position scanning photoacoustic spectroscopy measured in the volar thenar are shown in (e) and (f), with the corresponding optical micrographs in (g) and (h). Scale bar is 250 µm.

Microscopic images were taken in parallel at the same locations as the photoacoustic images [Fig. 2(c),(d),(g),(h)]. The pattern of the scanning photoacoustic spectroscopy matched the microscopic images well at the corresponding sites of the fingertips, and the structural patterns of the friction ridges of the finger were reflected in the photoacoustic images [Fig. 2(a–d)]. However, instead of forming the continuous lines of the epidermal ridges, the scanning photoacoustic pattern consisted of multiple discrete blobs along the ridges. These blobs of locally intense signal were observed along the tops of the epidermal ridges across the entire set of fingertip images. The volar hypothenar of the palm displayed similar ridge structures coinciding with the friction ridges (data not shown), whereas the volar thenar showed rounded pore structures [Fig. 2(e)–(h)]. To visualize how the photoacoustic images co-register with the optical micrographs, the two images of the index finger were merged by varying the opacity of the photoacoustic images, as shown in Supplementary Figure 2. We were unable to match the optical micrograph with the position scanning photoacoustic images of the thenar, where few friction ridges were present. We speculate that the patterns in the thenar arise from underlying structures that are not visible at the skin surface.

Spectra of Position Scanning Photoacoustic Imaging

Wavenumber scanning of the mid-infrared light was performed at multiple spots in the images: at the locally intense blobs on the tops of the ridges, at the dark region of the valley between ridges, and at the gray region between the bright blobs on the tops of the ridges. The different regions exhibited distinct spectral patterns, as shown in Fig. 3. At the bright blobs, large peaks were observed at 1070, 1105, and 1140 cm−1; the other locations had weak or no peaks at these wavenumbers. This result implies that not only the signal intensities but also the chemical compositions of the bright blobs differ from those of the other locations.

Figure 3: Spectra of different spots of a scanning photoacoustic spectroscopy image of skin, and sodium lactate. (a) Three locations in the corresponding scanning image: p1 is in the dark region, p2 is the gray region between the bright blobs, and p3 is the bright region. (b) The wavenumber scan of the photoacoustic signal at the three different sites. (c) Spectrum of sodium lactate and the difference spectra of p1 from p3. Scale bar is 250 µm.

On the volar side of the palm and fingers there are hardly any sebaceous glands but an abundance of eccrine sweat glands. The ducts of the eccrine sweat glands usually open on the tops of the tiny raised lines of the skin called epidermal ridges31, and the eccrine glands secrete fluid from these ducts primarily for thermoregulation. We hypothesized that the bright blobs in the position scanning images originated from the skin secretion products of the eccrine sweat or sebaceous glands. We therefore tested several soluble analytes present in sweat or sebum, including sodium lactate, calcium lactate, and triglycerides, to see which chemical components contributed to the spectra. For comparison, we subtracted the spectrum of the dark region from that of a bright blob; the difference spectrum allowed us to extract the chemical composition of the secretion products apart from the epidermis. When we compared the difference spectrum with those of the different analytes, the sodium lactate spectrum resembled the bright-blob spectrum in both width and intensity (Fig. 3(c) and Supplementary Figure 2). We therefore concluded that the major component of the bright blobs is sodium lactate.
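A minimal sketch of this difference-spectrum comparison is given below: subtract the valley spectrum from the bright-blob spectrum and rank candidate analytes by correlation with the difference. The spectra here are synthetic placeholders built around the three reported peak positions; the real inputs would be the measured wavenumber scans and reference analyte spectra.

```python
# Sketch: identify the secretion component by spectral subtraction + correlation.
# All spectra below are synthetic placeholders, not measured data.

import numpy as np

wn = np.linspace(1000, 1200, 201)                      # wavenumber axis (cm^-1)
peak = lambda c, s=12.0: np.exp(-((wn - c) / s) ** 2)  # Gaussian band shape

blob = 1.0 + peak(1070) + 0.8 * peak(1105) + 0.9 * peak(1140)  # bright-blob spectrum
dark = np.ones_like(wn)                                        # valley (dark) spectrum
references = {
    "sodium lactate": peak(1070) + 0.8 * peak(1105) + 0.9 * peak(1140),
    "triglyceride":   peak(1160),
}

diff = blob - dark                                     # secretion-product contribution
scores = {name: float(np.corrcoef(diff, ref)[0, 1])
          for name, ref in references.items()}
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))  # lactate ranks first
```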

Effects of Temporal Skin Secretion on Photoacoustic Spectroscopy

Intensive cleaning of the skin before measurements is beneficial for improving day-to-day repeatability and reducing spurious absorption by the dried-off layers of skin, since the optical measuring beam must penetrate these layers before reaching the interstitial-fluid-containing layers. In addition, cleaning procedures are common practice before invasive medical procedures, not just before photoacoustic measurements. However, skin reconstitution by exocrine gland secretion in response to thorough hand washing could add further interference and increase measurement errors, because this reconstitution response was found to be faster than the time usually required for a full glucose correlation test27.

To overcome these issues, we used the spatial information to detect the skin secretion that creates spurious effects on the spectroscopic measurement. Prior to the measurement, the volunteers washed their hands thoroughly with soap and water and dried them completely with a nitrogen blow before placing them on the measurement setup. While spectra were taken consecutively for two hours, the secretion of sweat was detected over time at spatially distinct locations, as shown in Fig. 4(a). After hand washing, the dark regions in the position scanning photoacoustic images remained steady. On the contrary, some of the bright regions increased in signal intensity and took on a shape similar to the bright blobs observed in the earlier experiments without hand washing. This result confirmed that the detected bright blobs were associated with sweat secretion from the eccrine glands.

Figure 4: Images of the position scanning photoacoustic spectroscopy over time. (a) Images recorded for two hours at 30-min intervals. Scale bar is 250 µm. (b)–(e) Wavenumber scans over time at two different spots in a region of the index fingertip: (b) and (c) where there was no secretion, and (d) and (e) with secretion from an eccrine sweat gland. (b) and (d) 1D plots of the PAS spectra for each position. (c) and (e) Color-coded representations of the spectral alterations observed for a volunteer.

Figure 4(b–e) shows the mid-infrared spectra obtained at the different locations. At the location without sweat secretion, the spectra remained distinct from those of the skin secretion products and unaffected by secretion [Fig. 4(b)]. Conversely, at the location with sweat secretion, significant changes in the spectra were detected over time [Fig. 4(d)]. As expected, the peak locations and widths of these spectra matched the sweat spectra and the bright-blob spectra measured previously. These results imply that signals from locations with eccrine sweat glands strongly interfere with the glucose signals and dampen the mid-infrared absorption of the glucose contained in the interstitial fluid. Notably, the unaffected region is a promising probing location because it reduces variation and increases the repeatability of the measurements. Moreover, a pre-washing step right before the measurement had previously been advised against because of the reconstitution of the skin27. However, if the measurement can be localized to a region with no skin secretion, the pre-washing step becomes suitable for the glucose correlation test, and indeed preferable, because the measurement is then invulnerable to the skin secretion products regardless of the reconstitution response.

The same spectral alterations due to skin secretion products are shown in a color-coded representation [Fig. 4(c),(e)]. At the non-secreting position, the signals were barely altered over time [Fig. 4(c)]. At the secreting position, the signals at 1070 cm−1 and 1140 cm−1 appeared within a few minutes after the start of the measurement with washed hands, rose continuously, and saturated after about one hour [Fig. 4(e)]. This confirms that position scanning photoacoustic spectroscopy can overcome the issues raised by the rapid reconstitution response of skin to pre-washing, and supports performing the glucose correlation test with photoacoustic spectroscopy after a pre-washing step.

Putting these data together, probing the dark area between the friction ridges was expected to avoid the effects of sebum and sweat. Our glucose measurement procedure therefore followed this protocol: (i) wash the hands thoroughly and dry them with a nitrogen blow, (ii) obtain a repetition-frequency response, (iii) acquire a 2D position scanning image and choose a location between the friction ridges, (iv) perform wavenumber scanning, and (v) measure the glucose level invasively as a reference. The 2D position scanning images displayed the friction ridges even immediately after hand washing, and we located the probing position at the darkest point of the valley between the friction ridges. At a wavenumber of 1040 cm−1, the 2D position scanning images exhibited prominent signal differences caused by the skin secretion products; in addition, if a dust particle resided on the skin, images at this wavenumber allowed us to detect it easily. For these reasons, we set the QCL laser to 1040 cm−1 for 2D position scanning in the rest of the experiments.

Glucose Correlation Test with Photoacoustic Spectroscopy

To assess the improvement gained by eliminating the interference of skin secretion, we conducted glucose correlation tests under the approval of the Institutional Review Board (IRB). We probed two locations in each glucose correlation procedure to evaluate the performance of photoacoustic non-invasive glucose monitoring when selectively probing (i) the darkest point of the valley between the friction ridges and (ii) the brightest point on the friction ridges. During the glucose correlation test, the subject's index finger was fixed to the photoacoustic cell so that the same probing position was maintained during the consecutive wavenumber scans. We verified that the probing location was identical by acquiring a 2D position scan at the end of the entire experiment and comparing it with the initial image.

One photoacoustic wavenumber scan in our system took 8.4 s, so the total acquisition time for the 16 repeated wavenumber scans at each location was 2.3 min, which corresponded to the reference measurements. The 2D position scanning added a time cost of 2 min to the glucose measurement; once calibration is complete, a single measurement therefore requires 4.3 min for glucose prediction. Further improvements in measurement speed could be achieved by measuring at selected wavelengths instead of scanning the entire spectrum and by reducing the size of the window for 2D position scanning. The imaging time could also be reduced by using an array of ultrasound transducers instead of a single acoustic sensor.

The pulse energy of the laser varies with wavenumber, so the photoacoustic spectra need to be normalized by the pulse-energy curve over wavenumber [Supplementary Figure 1(a)]. When we compared the prediction performance before and after normalization, normalization barely affected the glucose measurement results, because we were analyzing the relative differences of the signals at each wavenumber. In addition, the single-pulse energy at each wavenumber can fluctuate about its mean over time. To reduce this laser noise, we filtered the signal using a lock-in amplifier with a time constant of 30 ms, which takes in 1425 pulses, and we additionally averaged 16 independent acquisitions to reduce possible deviations owing to pulse-energy fluctuations. Pleitez et al. and Kottmann et al. reported methods to correct for laser power variation using a mercury cadmium telluride detector or a power meter, which may leave room for further improvement of our system32,33.
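The noise-reduction arithmetic above is easy to verify: a 30-ms lock-in time constant at a 47.5-kHz repetition rate spans 0.030 × 47,500 = 1425 pulses. The sketch below also shows the normalize-then-average step in generic form; the arrays are placeholders standing in for the 16 raw scans and the pulse-energy curve of Supplementary Figure 1(a).

```python
# Sketch of the pulse-count arithmetic and the normalize-then-average step.

import numpy as np

REP_RATE_HZ = 47_500
TIME_CONSTANT_S = 0.030
print("pulses per lock-in time constant:",
      int(REP_RATE_HZ * TIME_CONSTANT_S))            # 1425

def averaged_spectrum(scans, pulse_energy):
    """scans: (16, n_wavenumbers) raw PA amplitudes; pulse_energy: (n_wavenumbers,).
    Normalize each scan by the per-wavenumber pulse energy, then average."""
    return (np.asarray(scans) / np.asarray(pulse_energy)).mean(axis=0)

# Placeholder inputs, for shape only:
rng = np.random.default_rng(0)
scans = 1.0 + 0.05 * rng.standard_normal((16, 120))  # 16 repeated wavenumber scans
energy = np.linspace(0.8, 1.2, 120)                  # stand-in for the pulse-energy curve
spectrum = averaged_spectrum(scans, energy)
```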

To manipulate the blood glucose, an oral glucose tolerance test was performed by drinking glucose dissolved in water, and the response was monitored in parallel by the non-invasive infrared and invasive enzymatic measurements. After each wavenumber scan, the reference blood glucose level was measured. The detailed procedure for the glucose correlation test is shown in Fig. 5(a).

To predict the glucose level from the photoacoustic spectrum, we used a Partial Least Squares Regression (PLSR) algorithm with cross-validation (see Methods and Supplementary Figure 5 for details). Figure 5 illustrates the prediction of the blood glucose level from photoacoustic measurements of the skin of a volunteer's index finger. Figure 5(b) shows the glucose correlation test results over time, calculated from the PLSR cross-validation of the data at both the non-secreting and secreting positions, together with the reference glucose level. The non-invasive measurement closely followed the invasive reference measurements, with no significant delay. In particular, the glucose prediction from the spectra at the position without eccrine sweat secretion performed better than the prediction at the region near an eccrine gland secretion. This time-course result shows that photoacoustic infrared spectra of interstitial fluid can be used to reliably determine the blood glucose level.

Figure 5(c) shows the RMSE-CV (root mean square error of cross-validation) and RMSEC (root mean square error of calibration) as a function of the number of latent variables used for the prediction. RMSEC decreases monotonically with the number of latent variables, but it cannot fully reflect predictive performance, so RMSE-CV is the more appropriate criterion. At both the non-secreting and secreting positions, the RMSE-CV decreased up to the second latent variable and increased thereafter, so we chose two latent variables for the prediction.

To judge the clinical relevance of the errors, a Clarke error grid was employed [Fig. 5(d),(e)], a standard measure of the accuracy of blood glucose measurement methods. It defines a region of sufficient accuracy (within 20% of the reference sensor, zone A) and a region of lower but clinically acceptable accuracy that would not lead to inappropriate treatment of the patient (zone B); results in zones C, D, and E are potentially dangerous and therefore constitute clinically significant errors34. In addition, the Mean Absolute Relative Deviation (MARD) and the Mean Absolute Difference (MAD) of the cross-validated measurement data sets were computed to compare system performance35. A MARD of 9.94% and a MAD of 8.27 mg/dl were obtained for the non-secreting position, roughly 30% lower than the MARD of 14.67% and MAD of 11.98 mg/dl obtained at the secreting position. The corresponding standard deviations were 9.97% (MARD) and 7.5 mg/dl (MAD) for the non-secreting position, and 11.12% and 7.99 mg/dl for the secreting position.
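The PLSR cross-validation described above can be sketched generically as follows; this is a stand-in using scikit-learn, not the authors' pipeline (their exact procedure is in the Methods and Supplementary Figure 5). X holds the photoacoustic spectra, y the reference glucose values, and the data here are placeholders.

```python
# Generic PLSR + leave-one-out cross-validation sketch, with RMSE-CV, MARD, MAD.
# Placeholder data only; real inputs are wavenumber scans and glucometer readings.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def evaluate_pls(X, y, n_components=2):
    """Return (RMSE-CV, MARD %, MAD mg/dl) for a PLSR model under leave-one-out CV."""
    pls = PLSRegression(n_components=n_components)
    y_pred = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()
    rmse_cv = float(np.sqrt(np.mean((y_pred - y) ** 2)))
    mard = float(np.mean(np.abs(y_pred - y) / y) * 100.0)  # relative deviation, %
    mad = float(np.mean(np.abs(y_pred - y)))               # absolute difference, mg/dl
    return rmse_cv, mard, mad

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 120))   # 40 scans x 120 wavenumbers (placeholder)
y = 100.0 + 40.0 * rng.random(40)    # reference glucose, mg/dl (placeholder)
print(evaluate_pls(X, y))            # pick n_components where RMSE-CV is minimal
```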

Figure 5: Performance of the glucose level prediction from photoacoustic spectroscopy at two different spots, with or without secretion from an eccrine sweat gland. (a) Procedure for the glucose correlation test. (b) Non-invasively measured skin glucose, from skin spectra recorded by photoacoustic spectroscopy at the two different spots on the index finger with (yellow) or without (red) secretion from an eccrine sweat gland of a volunteer over time, compared with the reference glucose measured enzymatically from drawn blood (blue). (c) Changes in the root mean square error of cross-validation and root mean square error of calibration with varying numbers of latent variables in the PLSR analysis. (d) and (e) Correlation between measured blood glucose and glucose predicted from the spectra without (d) or with (e) the secretion from a sweat gland; the Clarke error grid is also shown.

These results confirmed that the infrared measurement at the non-secreting position produced a better prediction than that at the secreting position. Selectively probing non-secreting spectra of the fingertip, we went on to conduct five different experiments with a healthy subject and a diabetic subject. As a result, PLSR-CV predictions from 76 measurements were acquired; 70% of the measurements fell into zone A of the Clarke error grid and 30% into zone B (Fig. 6), with no measurement pairs in zones C, D, or E. The MAD and MARD were 18.51 mg/dl (±12.35 mg/dl) and 14.4% (±10.5%), respectively, for the pooled data from the five independent experiments with the healthy and diabetic subjects.

Figure 6: The pooled glucose correlation test from a healthy subject and a diabetic subject; 70% of the measurements fell into Zone A.

U.S. Doctors Plan to Treat Cancer Patients Using CRISPR

The first human test in the U.S. involving the gene-editing tool CRISPR could begin at any time and will employ the DNA cutting technique in a bid to battle deadly cancers. 

Doctors at the University of Pennsylvania say they will use CRISPR to modify human immune cells so that they become expert cancer killers, according to plans posted this week to a directory of ongoing clinical trials.

The study will enroll up to 18 patients fighting three different types of cancer—multiple myeloma, sarcoma, and melanoma—in what could become the first medical use of CRISPR outside China, where similar studies have been under way.

An advisory group to the National Institutes of Health initially gave a green light to the Penn researchers in June 2016 (see “First Human Test of CRISPR Proposed”), but until now it was not known whether the trial would proceed. 

“We are in the final steps of preparing for the trial, but cannot provide a specific projected start date,” a spokesperson for Penn Medicine told MIT Technology Review.

The Parker Institute for Cancer Immunotherapy, a charity set up by billionaire entrepreneur Sean Parker, one of the founders of the music-sharing site Napster, confirmed it is helping to finance the study.

The CRISPR trial, led by doctor Edward Stadtmauer, involves reprogramming a person’s immune cells to find and attack tumors.

To help enhance the treatment, Penn scientists intend to use CRISPR to delete two genes in patients’ T cells to make them better cancer fighters. One of the genes to be removed makes a “checkpoint” molecule, PD-1, that cancer cells exploit to put brakes on the immune system.

A further edit will delete the receptor that immune cells normally use to sense danger, like germs or sick tissue. An engineered receptor, added in its place, will instead steer them toward particular tumors. 

In the study, doctors will remove people’s blood cells, modify them with CRISPR in the lab, and then infuse them back into the patients.

This outside-the-body approach, called ex vivo gene therapy, is considered less risky than injecting CRISPR directly into a person’s bloodstream, which could cause immune reactions.

A second CRISPR trial that could begin in Europe later this year will also pursue the ex vivo approach. CRISPR Therapeutics, a biotech company based in Cambridge, Massachusetts, asked European regulatory authorities in December for permission to try to cure beta thalassemia, a blood disorder, by making a genetic tweak to people’s blood cells.

How Gaining and Losing Weight Affects the Body

Gaining and losing weight causes extensive changes in the gut microbiota and in biomarkers related to inflammation and heart disease, researchers report today (January 17) in Cell Systems. The authors tracked what they call “personal omics profiles,” composed of the genomics, transcriptomics, proteomics, metabolomics, and microbiomics, of people who ate an average of 880 extra calories every day for a month.

“It’s a landmark paper,” says Leroy Hood, chief strategic officer at the Institute for Systems Biology, a Seattle biomedical nonprofit organization, and senior vice president and chief science officer at Providence St. Joseph Health. He was not involved in this study, but he has previously led long-term omics-based projects to track wellness in people. Using this type of data “to study aspects of disease is going to be a transformational approach in medicine, and this is one of the first beautiful, clear demonstrations of how powerful that will be,” he says.

In the study, the researchers monitored subjects’ omics profiles as they added extra snacks and beverages to their regular diets. “We were fortunate we got 23 people who would eat extra calories—typically 1,000 if you’re male, 750 if you’re female—[every] day for 30 days,” coauthor Michael Snyder, a geneticist at Stanford University, tells The Scientist. “They’re just a very interested bunch of folks. They have to be to show this kind of dedication to giving samples,” he says.

Snyder’s team collected blood and stool samples before and after the 30 days of eating extra calories, as well as after participants returned to their starting weight, about 60 days after dropping the extra calories, and then three months after that. Of the 23 participants, 13 were insulin resistant and 10 were insulin sensitive at the beginning of the study. Comparisons of baseline profiles showed differences in metabolism, transcript and protein levels, and the microbiota of insulin resistant and insulin sensitive people.

Although subjects only gained an average of about six pounds, the researchers detected considerable changes in molecules related to fat metabolism, inflammation, and dilated cardiomyopathy, a condition where the heart is less able to pump blood, which can lead to heart failure. The team also found differences in the gut microbiota after weight gain. Many of the shifts the scientists observed were less pronounced in the insulin-resistant individuals. For instance, one bacterial species—Akkermansia muciniphila, which is thought to help protect against the development of insulin resistance after weight gain—only appeared in the insulin-sensitive participants.

“There is a molecular difference in the way [insulin] resistant and sensitive folks react to gaining weight, and we think it reflects differences in their underlying biochemistry,” Snyder says.

Most of the changes went back to baseline after weight loss, but a few—such as molecules associated with folate metabolism—stayed elevated. And while the researchers saw some common responses to weight gain and loss across the group, “you still look more like you than somebody else,” Snyder explains. “That means that our inherent biochemical profiles are pretty stable, at least through weight gain [and] weight loss.”

Jennifer Stearns, a microbial ecologist at McMaster University in Hamilton, Ontario, Canada, says the real value of the work lies in the data being openly accessible to the research community, where they can be reused by other groups. Stearns, who did not participate in the new project, adds, “In studies where we look to see how microbes could be impacting weight gain . . . it’s often difficult to say which microbes are important.” This type of longitudinal analysis could help determine whether microbes are contributing to weight gain or just reacting to it, she says.

Two clinical trials that Hood is involved in are already using data like these gathered from Alzheimer’s patients to test different therapeutics and assess their effectiveness. But in terms of eventually bringing these analyses into the clinic on a large scale, “the major shortcoming for all of us is the expense of doing all of these assays,” he says. For Hood’s work, generating personalized data along these lines costs upwards of $5,000 per person per year.

Another challenge will be interpreting the data in a way that will be understood and digested by physicians. “I think clinical academia really is skeptical about these N of 1 experiments, but in the future they’re going to become central factors,” Hood says. “Each individual has to be viewed in the context of what they are by virtue of their genetics and their lifestyle and environmental exposures.”

B.D. Piening et al., “Integrative personal omics profiles during periods of weight gain and loss,” Cell Systems, doi:10.1016/j.cels.2017.12.013, 2017.

Critical Shear Stress Is Associated with Diabetic Kidney Disease in Patients with Type 2 Diabetes

In our present study, the patients with higher CSS levels were more likely to have DKD. In addition to CSS, the common risk factors for DKD under both the eGFR and uACR criteria were the duration of diabetes, presence of hypertension, Hb value, ESR, fibrinogen, and the presence of DR. After adjusting for age, sex, duration of diabetes, presence of hypertension, and Hb, the risk of developing DKD was approximately 2.5–3.0 times greater in the highest CSS tertile than in the lowest, with statistical significance.
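
For readers who want to see the shape of such an adjusted analysis, the following Python sketch fits a logistic regression of DKD status on CSS tertile plus the covariates named above. It is an illustration only: the data are synthetic and the variable names are placeholders, not the study’s.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300

# Synthetic cohort with the covariates used in the adjustment (names ours).
df = pd.DataFrame({
    "css": rng.normal(250, 60, n),        # critical shear stress, mPa
    "age": rng.normal(60, 10, n),
    "male": rng.integers(0, 2, n),
    "duration": rng.normal(10, 5, n),     # years since diabetes diagnosis
    "htn": rng.integers(0, 2, n),         # hypertension (0/1)
    "hb": rng.normal(13, 1.5, n),         # haemoglobin, g/dL
})

# Build a synthetic DKD outcome that depends partly on CSS, for illustration.
lin_pred = -6 + 0.02 * df["css"] + 0.05 * df["duration"] + 0.5 * df["htn"]
df["dkd"] = (rng.random(n) < 1 / (1 + np.exp(-lin_pred))).astype(int)

# Split CSS into tertiles; T1 (lowest) is the reference category.
df["css_tertile"] = pd.qcut(df["css"], 3, labels=["T1", "T2", "T3"])

model = smf.logit("dkd ~ C(css_tertile) + age + male + duration + htn + hb",
                  data=df).fit(disp=0)
print(np.exp(model.params))  # exponentiated coefficients = odds ratios vs. T1
```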

Currently, the uACR measured in a fresh, first-morning spot sample is preferred as a screening tool for DKD. Compared to urinary total protein, urine albumin measurement provides a more specific and sensitive measure of changes in glomerular permeability17. However, repeated testing of uACR is recommended because the results vary with the patient’s exercise, upright posture, and infection status, as well as the sample’s storage temperature17,23. Additionally, approximately 20% to 63% of patients with low eGFR (<60 mL/min/1.73 m2) were reported to be normoalbuminuric24. For this reason, novel biomarkers for the early detection of DKD have been suggested, targeting several pathogenic pathways of DKD, including hyperfiltration, inflammation, and renal remodelling25. Recently, oxidative stress has emerged as a new pathophysiology of DKD that eventually alters haemodynamics26,27. Among these haemodynamic alterations, reduced RBC deformability and increased RBC aggregation have been strongly implicated in the pathogenesis of diabetic micro- and macro-vascular complications28,29.

There have been several studies reporting the relationship between haemorheologic markers and diabetic micro-vascular complications, and we compare them with the present study as follows. First, DR was associated with impairment of RBC deformability7 and with plasma fibrinogen or conventional aggregation indices30; however, no significant differences in haemorheologic markers between NPDR and PDR were noted30. In our study, there was no significant difference in CSS among the normal, NPDR, and PDR groups. This appears to be due to the small number of patients diagnosed with PDR in our study. In addition, since impairment of RBC deformability precedes RBC aggregation5 and DR precedes DKD17, the association with RBC deformability might be stronger in DR, whereas the association with CSS is stronger in DKD. Second, moderately increased albuminuria (uACR 30–300 mg/g) was significantly associated with impairment of RBC deformability9 or with fibrinogen divided by RBC deformability8, compared with uACR <30 mg/g, but CSS showed a significant difference only between severely increased albuminuria (uACR >300 mg/g) and uACR <30 mg/g8. Our study provides additional evidence of CSS as a significant differential marker for moderately increased albuminuria (uACR 30–300 mg/g). Moreover, CSS was the most significant indicator of DKD among the haemorheologic indices; we confirmed again that CSS is an independent haemorheologic index reflecting the synergistic effect of reduced RBC deformability and increased fibrinogen31. Third, increased impairment of RBC deformability was noted in DPN without statistical significance7; in our study, CSS likewise increased without statistical significance.

CSS is one of several indices that represent RBC aggregation. CSS has the advantage that it does not require haematocrit adjustment, unlike conventional aggregation indices15, and fibrinogen does not affect the value of CSS when it is measured from the BSL of a transient microfluidic aggregometer22. Additionally, CSS follows a trend similar to that of whole blood viscosity as temperature varies32. Moreover, a new role for RBCs in coagulation has been recognised: a recent study reported that increments in CSS significantly increased platelet activation, whereas RBC deformability showed no such association33. Enhanced aggregation and the induced central compaction of RBCs favour the migration of platelets to the marginal flow zone and modulate the likelihood of platelet activation16, which can cause vascular occlusion. Therefore, CSS may serve as a novel biomarker of haemorheologic risk in both diabetic microcirculation and seasonal ischaemic macro-vascular diseases.

The cut-off value of CSS for detecting DKD was approximately 310 mPa in our study (data not shown). In previous studies, the mean CSS in channel flow has been reported as 200.5 mPa15. Interestingly, CSS values roughly 30% higher were noted in acute coronary syndrome: 265 mPa in stable angina, 338 mPa in unstable angina, and 324 mPa in acute myocardial infarction34. Further studies establishing the relationship between increased CSS values and diabetic vascular complications, and determining cut-off values for each as a screening tool, may be worthwhile.
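
As an illustration of how such a cut-off is typically derived, the following Python sketch (synthetic data, not the study’s cohort) reads the threshold off a ROC curve by maximising Youden’s J (sensitivity + specificity - 1):

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)

# Synthetic CSS values: non-DKD centred near 250 mPa, DKD near 330 mPa.
css = np.concatenate([rng.normal(250, 50, 200), rng.normal(330, 50, 100)])
dkd = np.concatenate([np.zeros(200), np.ones(100)])

# Pick the threshold that maximises Youden's J = sensitivity + specificity - 1.
fpr, tpr, thresholds = roc_curve(dkd, css)
best = np.argmax(tpr - fpr)
print(f"optimal cut-off ~ {thresholds[best]:.0f} mPa")
```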

To the best of our knowledge, this is the first study to reveal the relationship between CSS and early-stage DKD and to present CSS cut-off values for DKD. However, there were some limitations to our study. It was a retrospective, cross-sectional study; therefore, causal relationships were hard to determine. In addition, omitted-variable bias might have occurred because important confounding variables, such as medication, were missing from the regression analysis. Further research that accounts for the use of cardiovascular medication is needed. With regard to haemorheologic parameters, we used a single measured value rather than the average of repeated measurements. Although a mean value may more accurately reflect haemorheologic change, a single measurement is acceptable based on previous studies7,8,9,10. Well-designed, prospective studies with larger sample sizes are warranted in the future.

In conclusion, elevated CSS was closely associated with an increased risk of DKD. These results reinforce the possibility that RBC aggregability contributes to DKD development. We anticipate that, as additional studies accumulate and reference ranges are established, haemorheologic parameters, including CSS, may serve as screening tools for diabetic micro-vascular complications.

Court Software No Better Than Mechanical Turks at Predicting Repeat Crime

Software now widely used by courts to predict which criminals are likely to commit future crimes might be no more accurate than regular people with presumably little to no criminal justice expertise, a new study finds.

Predictive algorithms now regularly make recommendations regarding music, ads, health care, stock trades, auto insurance, and bank loans, among other things. In the criminal justice system, such algorithms have been used to predict where crimes will likely occur, who is likely to commit violent crimes, who is likely to fail to appear at their court hearings, and who is likely to repeat criminal behavior in the future.

One criminal risk analysis tool, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), has been used to assess more than 1 million offenders since it was developed in 1998, and to predict recidivism, or repeat criminal behavior, since 2000. Supporters of such systems argue that automated techniques are more accurate and less biased than humans. However, previous research suggested COMPAS’s predictions might be racially biased to underpredict recidivism among white defendants and overpredict recidivism among black defendants.

To investigate further whether algorithms can be more fair and accurate than humans at predicting recidivism, computer scientists recruited 400 workers through Amazon’s online Mechanical Turk crowdsourcing marketplace, presumably none of them criminal justice experts. Each worker saw descriptions of 50 people from a pool of 1,000 defendants from Broward County, Florida, who awaited trial in 2013 and 2014. These descriptions contained seven features about each defendant, including their sex, age, and previous criminal history, but not their race.

The crowdsourced workers were then asked to rate the risk that defendants would commit a misdemeanor or felony within two years of their last arrest. These results were then compared to ones from COMPAS.

Graphics comparing the crowdsourced data to the COMPAS results. Image: Carla Schaffer/AAAS

Although the crowdsourced workers analyzed considerably fewer variables than COMPAS, their average results were accurate in 67 percent of the cases presented, about the same as COMPAS’s accuracy of 65.2 percent.

“Considering that COMPAS uses 137 variables in its predictions, and that it is a commercial software presumably built on much more data than we had access to, this result was surprising,” says study senior author Hany Farid, a computer scientist at Dartmouth College in Hanover, New Hampshire.

Further analysis found that a strategy that only looked at two variables—a defendant’s age and total number of prior convictions—was about as accurate as COMPAS. A spokesperson for Equivant, the Ohio-based firm behind COMPAS, said the company was not giving interviews. Equivant posted a statement about the new research shortly before its release, calling it “highly misleading.”
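
The two-variable result is easy to approximate in code. The sketch below is a rough stand-in using synthetic data rather than the Broward County records: it fits a logistic regression on age and prior count and reports cross-validated accuracy. All values are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 1000

# Placeholder features standing in for the Broward County records.
age = rng.integers(18, 70, n)
priors = rng.poisson(3, n)
X = np.column_stack([age, priors])

# Synthetic outcome: younger defendants with more priors reoffend more often,
# mimicking the structure (not the values) of the real dataset.
lin_pred = -0.5 - 0.04 * (age - 40) + 0.3 * priors
y = (rng.random(n) < 1 / (1 + np.exp(-lin_pred))).astype(int)

clf = LogisticRegression()
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
print(f"two-feature cross-validated accuracy: {acc:.1%}")
```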

“We believe that the most important implication of our work is that the courts should consider how much credibility to give these types of prediction algorithms—you can imagine that a judge would weigh a risk assessment made from a big-data machine-learning algorithm differently than a risk assessment made from people responding to an online survey,” Farid says. “We also believe that there should be more transparency in the use of algorithms in making such critical, life-altering decisions.”

“We are not saying in any way that big data, machine learning, artificial intelligence should be abandoned,” Farid says. “We are simply saying that their use should be deployed in a careful, thoughtful, and transparent manner, particularly when the results of such algorithms can have life-altering implications.”

However, the researchers found that results from both the crowdsourced workers and COMPAS were similarly unfair to black defendants. Farid did note there appear to be differences in the base rates of recidivism across race, with black defendants reoffending at a rate of 51 percent as compared with 39 percent for white defendants, but “these base rates may themselves be the result of racial biases in the criminal justice system—for example, black people are almost four times as likely as white people to be arrested for drug offenses. So what we may be seeing is a ripple effect in policing and prosecution that disproportionately impacts African-Americans.”

“On a national scale, black people are more likely to have prior crimes on their record than white people are—black people in America are incarcerated in state prisons at a rate that is 5.1 times that of white Americans, for example,” says study lead author Julia Dressel at Dartmouth College. “Within the dataset used in our study, white defendants had an average of 2.59 prior crimes, whereas black defendants had an average of 4.95 prior crimes. The racial bias that appears in both the algorithmic and human predictions is a result of this discrepancy.”
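
The mechanics behind that claim can be seen with a toy simulation: if a race-blind rule flags defendants above a fixed prior-count threshold, the group with more recorded priors accumulates more false positives, even when reoffending depends on priors in exactly the same way for both groups. The numbers below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000

# Two groups with different average prior-record counts (made-up values,
# loosely echoing the 2.59 vs. 4.95 averages reported in the study).
priors_a = rng.poisson(2.6, n)
priors_b = rng.poisson(5.0, n)

# A race-blind rule: flag anyone with more than 3 priors as high risk.
flag_a, flag_b = priors_a > 3, priors_b > 3

# Suppose reoffending depends only on priors, identically in both groups.
def simulate_reoffense(priors):
    p = 1 / (1 + np.exp(-0.5 * (priors - 4)))
    return rng.random(len(priors)) < p

reoff_a, reoff_b = simulate_reoffense(priors_a), simulate_reoffense(priors_b)

# False positive rate: share of non-reoffenders who were flagged anyway.
for name, flag, reoff in [("A", flag_a, reoff_a), ("B", flag_b, reoff_b)]:
    fpr = (flag & ~reoff).sum() / (~reoff).sum()
    print(f"group {name}: false positive rate = {fpr:.1%}")
```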

In the future, there may be ways to test the effectiveness of this kind of software before it goes on the market. “We can imagine that an organization like the National Institute of Standards and Technology (NIST) could undertake the task of creating standards and benchmarks that any software would have to meet,” Farid says. “Such a system would require access to the type of data that we used in our study, but at a larger and more diverse scale.”

“We think that studies similar to ours should be performed for all such algorithms,” Farid says. “We would also welcome access to larger and more diverse data sets to help us understand the efficacy of these algorithms and, possibly, develop more accurate algorithms.”

Dressel and Farid detailed their findings on 17 January 2018 in the journal Science Advances.