This map, which shows glaciers and tributaries in patterned flows, was created using the same data that Stanford researchers used to train an AI model that revealed some of the fundamental physics governing the large-scale movements of the Antarctic ice sheet. Image Credit: NASA's Goddard Space Flight Center Scientific Visualization Studio
Information Technology

AI Uncovers New Insights Into Antarctic Ice Flow

As the planet warms, Antarctica’s ice sheet is melting and contributing to sea-level rise around the globe. Antarctica holds enough frozen water to raise global sea levels by 190 feet, so precisely predicting how it will move and melt now and in the future is vital for protecting coastal areas. But most climate models struggle to accurately simulate the movement of Antarctic ice due to sparse data and the complexity of interactions between the ocean, atmosphere, and frozen surface. In…

Visualizations of the semantic structure information in backbone stages. Pixels of the same class as the marked pixel are brightly colored. The brighter the color, the higher the similarity. Our motivation comes from this phenomenon. Image Credit: Yanpeng SUN, Zechao LI
Information Technology

Semantic Structure Aware Inference for Pixel-Wise Predictions

CAM (Class Activation Mapping) was proposed to highlight the class-related activation regions of an image classification network: feature positions related to a specific object class are activated and receive higher scores, while other regions are suppressed and receive lower scores. For specific visual tasks, CAM can be used to infer object bounding boxes in weakly-supervised object localization (WSOL) and to generate pseudo-masks of training images in weakly-supervised semantic segmentation (WSSS). Obtaining high-quality CAMs is therefore important for improving recognition performance…
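As a rough illustration of the idea (a generic sketch, not the paper's method), a CAM for one class is a weighted sum of the final convolutional feature maps, using the classifier weights that follow global average pooling. All shapes and values below are hypothetical:

```python
import numpy as np

def class_activation_map(feats, weights, class_idx):
    """Weighted sum of feature maps by one class's classifier weights.

    feats:   final conv feature maps, shape (C, H, W)
    weights: linear-head weights after global average pooling, shape (num_classes, C)
    """
    cam = np.tensordot(weights[class_idx], feats, axes=1)  # contract over C -> (H, W)
    cam = np.maximum(cam, 0)            # keep only positive class evidence
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()                # normalize to [0, 1] for visualization
    return cam

rng = np.random.default_rng(0)
feats = rng.standard_normal((512, 7, 7))   # e.g. a ResNet-style final stage
weights = rng.standard_normal((10, 512))   # 10-class linear head
cam = class_activation_map(feats, weights, class_idx=3)
print(cam.shape)  # (7, 7); upsample to the input resolution to overlay on the image
```

Regions of the low-resolution map with values near 1 correspond to the positions the classifier relied on most for that class.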

Post-LLM roadmap. Image Credit: Fei Wu et al.
Information Technology

New Horizons for AI in the Post-LLM Era: Knowledge & Collaboration

A recent paper published in the journal Engineering delves into the future of artificial intelligence (AI) beyond large language models (LLMs). LLMs have made remarkable progress in multimodal tasks, yet they face limitations such as outdated information, hallucinations, inefficiency, and a lack of interpretability. To address these issues, researchers explore three key directions: knowledge empowerment, model collaboration, and model co-evolution. Knowledge empowerment aims to integrate external knowledge into LLMs. This can be achieved through various methods, including integrating knowledge into training objectives,…

Through a hyperspectral camera and AI, differences in the palm can provide highly personalized security. Image Credit: Osaka Metropolitan University
Information Technology

Advanced Biometric Authentication Using AI and Infrared

Hyperspectral imaging and AI can identify individuals using blood vessels in palms. Hyperspectral imaging is a technology that detects slight differences in color to pinpoint the characteristics and conditions of an object. While a normal camera creates images using red, green, and blue, a hyperspectral camera can obtain over 100 images in the visible to near-infrared light range in a single shot. As a result, hyperspectral imaging can obtain information that the human eye cannot see. Specially Appointed Associate Professor…

First author Brendan Cottrell in the field. Image Credit: DFO (Fisheries and Oceans Canada)
Information Technology

Innovative Researcher Uses Smartphone for Sea Creature Reports

Q&A with Brendan Cottrell, who investigated the use of smartphones to create 3D scans of stranded marine life that can help scientists protect marine species. What inspired you to become a researcher? My interest in research began with an early love for nature, particularly the ocean and its wildlife. Drawn to conservation, I am fascinated by how technology can help study and protect marine mammals. Can you tell us about the research you’re currently working on? This research focuses on…

The overall architecture of the MLOB framework. Image Credit: Zhiming Dong et al.
Information Technology

Machine Learning on Blockchain: Enhancing Computational Security

A new study published in Engineering presents a novel framework that combines machine learning (ML) and blockchain technology (BT) to enhance computational security in engineering. The framework, named Machine Learning on Blockchain (MLOB), aims to address the limitations of existing ML-BT integration solutions that primarily focus on data security while overlooking computational security. ML has been widely used in engineering to solve complex problems, offering high accuracy and efficiency. However, it faces security threats such as data tampering and logic corruption….

Engineer developing innovative artificial intelligence solutions. Image Credit: DC_Studio, Envato
Information Technology

Better Poverty Mapping: New Machine-Learning Approach Enhances Aid

Leveraging national surveys, big data, and machine learning, Cornell University researchers have developed a new approach to mapping poverty that could help policymakers and NGOs better identify the neediest populations in poor countries and allocate resources more effectively. To eliminate extreme poverty, defined as surviving on less than $2.15 per person per day, governments and development and humanitarian agencies need to know how many people live under that threshold, and where. Yet that information often is lacking in the countries that…

This illustrates the principle of two oscillators giving in-phase and out-of-phase oscillation modes. Image Credit: Victor H. González
Information Technology

New Low-Cost Computer Breakthrough Enhances Accessibility

A low-energy challenger to the quantum computer that also works at room temperature may be the result of research at the University of Gothenburg. The researchers have shown that information can be transmitted using magnetic wave motion in complex networks. Spintronics explores magnetic phenomena in nano-thin layers of magnetic materials that are exposed to magnetic fields, electric currents and voltages. These external stimuli can also create spin waves, ripples in a material’s magnetisation that travel with a specific phase and…

Like the teeth of a comb, a microcomb consists of a spectrum of evenly distributed light frequencies. Optical atomic clocks can be built by locking a microcomb tooth to an ultranarrow-linewidth laser, which in turn locks to an atomic transition with extremely high frequency stability. That way, frequency combs act like a bridge between the atomic transition at an optical frequency and the clock signal at a radio frequency that is electronically detectable for counting the oscillations – enabling extraordinary precision. The researchers’ photonic chip, on the right-hand side of the image, contains 40 microcomb generators and is only five millimeters wide. Image Credit: Chalmers University of Technology / Kaiyi Wu
Information Technology

Microcomb Chips Enhance GPS Accuracy by 1000 Times

Optical atomic clocks can increase the precision of time and geographic position a thousandfold in our mobile phones, computers, and GPS systems. However, they are currently too large and complex to be widely used in society. Now, a research team from Purdue University, USA, and Chalmers University of Technology, Sweden, has developed a technology that, with the help of on-chip microcombs, could make ultra-precise optical atomic clock systems significantly smaller and more accessible – with significant benefits for navigation, autonomous…
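The "bridge" role of the comb can be seen in the standard comb relation f_n = f_ceo + n · f_rep: every optical tooth frequency is fixed by two radio frequencies that ordinary electronics can count. The numbers below are purely illustrative, not the parameters of the Purdue/Chalmers chip:

```python
# Frequency-comb relation: f_n = f_ceo + n * f_rep
# All values are illustrative assumptions, not measured chip parameters.
f_rep = 100e9      # tooth spacing (repetition rate): 100 GHz, a radio frequency
f_ceo = 5e9        # carrier-envelope offset: also a countable radio frequency
n = 4290           # index of the tooth locked to the ultranarrow laser

f_n = f_ceo + n * f_rep          # optical frequency of tooth n
print(f"{f_n / 1e12:.3f} THz")   # prints 429.005 THz, an optical frequency
```

Counting the two radio frequencies thus pins down an optical frequency thousands of times higher, which is what lets the atomic transition's stability be transferred to an electronic clock signal.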

Ruishan Liu, WiSE Gabilan Assistant Professor of Computer Science, USC. Image Credit: Alexis Situ
Information Technology

AI Unlocks Genetic Insights for Personalized Cancer Care

New study uncovers how specific genetic mutations influence cancer treatment outcomes. A groundbreaking study led by USC Assistant Professor of Computer Science Ruishan Liu has uncovered how specific genetic mutations influence cancer treatment outcomes, insights that could help doctors tailor treatments more effectively. The largest study of its kind, the research analyzed data for more than 78,000 cancer patients across 20 cancer types. Patients received immunotherapies, chemotherapies and targeted therapies. Using advanced computational analysis, the researchers identified nearly 800 genetic changes that directly…

Information Technology

D2-GCN: Dynamic Disentanglement for Node Classification

Classic Graph Convolutional Networks (GCNs) often learn node representations holistically, ignoring the distinct impacts of different neighbors when aggregating their features to update a node’s representation. Disentangled GCNs have been proposed to divide each node’s representation into several feature channels. However, current disentangling methods do not determine how many inherent factors the model should assign to extract the best representation of each node. To solve these problems, a research team led by Chuliang WENG published…
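A minimal sketch of channel-wise neighbor aggregation in the spirit of disentangled GCNs (this is an assumed simplification, not D2-GCN itself; note that the channel count K is fixed by hand here, which is exactly the limitation the paper targets):

```python
import numpy as np

def disentangled_aggregate(X, adj, K):
    """Split each node's features into K channels; aggregate each neighbor
    per channel, weighted by channel-wise cosine similarity.

    X:   node features, shape (n, d) with d divisible by K
    adj: dense adjacency matrix, shape (n, n)
    """
    n, d = X.shape
    ch = X.reshape(n, K, d // K)
    # unit-normalize channels so dot products act as cosine similarities
    ch = ch / (np.linalg.norm(ch, axis=-1, keepdims=True) + 1e-9)
    out = np.zeros_like(ch)
    for i in range(n):
        nbrs = np.nonzero(adj[i])[0]
        if len(nbrs) == 0:
            out[i] = ch[i]
            continue
        # per-channel similarity of node i to each neighbor: (num_nbrs, K)
        sim = np.einsum('kd,nkd->nk', ch[i], ch[nbrs])
        w = np.exp(sim) / np.exp(sim).sum(axis=0, keepdims=True)  # softmax over neighbors
        out[i] = ch[i] + np.einsum('nk,nkd->kd', w, ch[nbrs])     # channel-wise update
    return out.reshape(n, d)

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
X = np.random.default_rng(1).standard_normal((3, 8))
Z = disentangled_aggregate(X, A, K=2)
print(Z.shape)  # (3, 8)
```

Because each channel weights neighbors independently, a neighbor can influence one latent factor strongly while barely affecting another, which is the distinction holistic aggregation misses.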

Automotive Engineering

TU Graz AI System Boosts E-Mobility Powertrain Development

The new method optimises the technical design with regard to classic objectives such as costs, efficiency and package space requirements, and also takes greenhouse gas emissions along the entire supply chain into account. The development of vehicle components is a lengthy and therefore very costly process. Researchers at Graz University of Technology (TU Graz) have developed a method that can shorten the development phase of the powertrain of battery electric vehicles by several months. A team led by Martin Hofstetter…

Parsimonious models may be the norm in science, but complex models can be more flexible and accurate.
Information Technology

Exploring Ockham’s Razor: Simplifying Complex Innovations

Medieval friar William of Ockham posited a famous idea: always pick the simplest explanation. Often referred to as the parsimony principle, “Ockham’s razor” has shaped scientific decisions for centuries. But lately, incredibly complex AI models have begun outperforming their simpler counterparts. Consider AlphaFold for predicting protein structures, or ChatGPT and its competitors for generating humanlike text. A new paper in PNAS argues that by relying too much on parsimony in modeling, scientists make mistakes and miss opportunities. First author and…

Information Technology

AI Tool Analyzes Speech Patterns to Identify Depression

Evaluation of an AI-based voice biomarker tool to detect signals consistent with moderate to severe depression. Background and Goal: Depression impacts an estimated 18 million Americans each year, yet depression screening rarely occurs in the outpatient setting. This study evaluated an AI-based machine learning biomarker tool that uses speech patterns to detect moderate to severe depression, aiming to improve access to screening in primary care settings. Study Approach: The study analyzed over 14,000 voice samples from U.S. and Canadian adults…

Information Technology

Humans vs Machines—Who’s Better at Recognizing Speech?

Are humans or machines better at recognizing speech? A new study shows that in noisy conditions, current automatic speech recognition (ASR) systems achieve remarkable accuracy and sometimes even surpass human performance. However, the systems need to be trained on an incredible amount of data, while humans acquire comparable skills in less time. Automatic speech recognition (ASR) has made incredible advances in the past few years, especially for widely spoken languages such as English. Prior to 2020, it was typically assumed…

Information Technology

Not Lost in Translation: AI Increases Sign Language Recognition Accuracy

Additional data can help differentiate subtle gestures, hand positions, and facial expressions. Sign languages have been developed by nations around the world to fit the local communication style, and each language consists of thousands of signs. This has made sign languages difficult to learn and understand. Using artificial intelligence to automatically translate the signs into words, known as word-level sign language recognition, has now gained a boost in accuracy through the work of an Osaka Metropolitan…
