News & Events

221 News items, Awards, Events or Talks found.


  •  NEWS    MERL researchers presented 9 papers at the American Control Conference (ACC)
    Date: June 8, 2022 - June 10, 2022
    Where: Atlanta, GA
    MERL Contacts: Scott A. Bortoff; Ankush Chakrabarty; Stefano Di Cairano; Christopher R. Laughman; Abraham P. Vinod; Avishai Weiss
    Research Areas: Control, Machine Learning, Optimization
    Brief
    • At the American Control Conference in Atlanta, GA, MERL presented 9 papers on subjects including autonomous-vehicle decision making and motion planning, real-time Bayesian inference and learning, reference governors for hybrid systems, Bayesian optimization, and nonlinear control.
  •  NEWS    MERL researchers presented 5 papers and an invited workshop talk at ICRA 2022
    Date: May 23, 2022 - May 27, 2022
    Where: International Conference on Robotics and Automation (ICRA)
    MERL Contacts: Ankush Chakrabarty; Stefano Di Cairano; Siddarth Jain; Devesh K. Jha; Pedro Miraldo; Daniel N. Nikovski; Arvind Raghunathan; Diego Romeres; Abraham P. Vinod; Yebin Wang
    Research Areas: Artificial Intelligence, Machine Learning, Robotics
    Brief
    • MERL researchers presented 5 papers at the IEEE International Conference on Robotics and Automation (ICRA), held in Philadelphia on May 23-27, 2022. The papers covered a broad range of topics, including manipulation, tactile sensing, planning, and multi-agent control. The invited talk, presented in the "Workshop on Collaborative Robots and Work of the Future," covered some of MERL's work on collaborative robotic assembly. The workshop was co-organized by MERL, Mitsubishi Electric Automation's North America Development Center (NADC), and MIT.
  •  NEWS    MERL Scientists Presenting 5 Papers at IEEE International Conference on Communications (ICC) 2022
    Date: May 16, 2022 - May 20, 2022
    Where: Seoul, Korea
    MERL Contacts: Jianlin Guo; Toshiaki Koike-Akino; Philip V. Orlik; Kieran Parsons; Pu (Perry) Wang; Ye Wang
    Research Areas: Artificial Intelligence, Communications, Computational Sensing, Computer Vision, Machine Learning, Signal Processing
    Brief
    • MERL Connectivity & Information Processing Team scientists remotely presented 5 papers at the IEEE International Conference on Communications (ICC) 2022, held in Seoul, Korea on May 16-20, 2022. Topics presented include recent advancements in communications technologies, deep learning methods, and quantum machine learning (QML). Presentation videos are also available on our YouTube channel. In addition, K. J. Kim organized the "Industrial Private 5G-and-beyond Wireless Networks Workshop" at the conference.

      IEEE ICC is one of the two flagship conferences of the IEEE Communications Society (ICC and Globecom). Each year, close to 2,000 attendees from over 70 countries attend IEEE ICC to take advantage of a program that consists of exciting keynote sessions, robust technical paper sessions, innovative tutorials and workshops, and engaging industry sessions. This 5-day event is known for bringing together audiences from both industry and academia to learn about the latest research and innovations in communications and networking technology, share ideas and best practices, and collaborate on future projects.
  •  NEWS    Arvind Raghunathan's publication is a Featured Article in the current issue of the INFORMS Journal on Computing
    Date: April 1, 2022
    Where: INFORMS Journal on Computing (https://pubsonline.informs.org/journal/ijoc)
    MERL Contact: Arvind Raghunathan
    Research Areas: Artificial Intelligence, Machine Learning, Optimization
    Brief
    • Arvind Raghunathan co-authored a publication titled "JANOS: An Integrated Predictive and Prescriptive Modeling Framework," which has been chosen as a Featured Article in the current issue of the INFORMS Journal on Computing. The article was co-authored with Prof. David Bergman, a MERL collaborator, and Teng Huang, a former MERL intern, among others.

      The paper describes a new software tool, JANOS, that integrates predictive modeling and discrete optimization to assist decision making. Specifically, the proposed solver takes as input user-specified pretrained predictive models and formulates optimization models directly over those predictive models by embedding them within an optimization model through linear transformations.
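
      The following toy sketch illustrates the general embedding idea only; it is not the JANOS interface, and the model coefficients and constraints are hypothetical. A previously fitted linear regression is used as the objective of a small linear program, so the optimizer chooses decision variables that maximize the model's prediction:

      # Illustrative sketch (not the JANOS API): optimize directly over a
      # pretrained linear predictive model. All numbers are placeholders.
      import numpy as np
      from scipy.optimize import linprog

      w = np.array([0.8, 0.5, 1.2])    # learned regression coefficients (assumed)
      b = 2.0                          # learned intercept (assumed)

      # Choose x to maximize the predicted outcome w.x + b subject to a
      # shared resource budget and per-variable bounds.
      c = -w                                   # linprog minimizes, so negate
      A_ub = np.array([[1.0, 1.0, 1.0]])       # x1 + x2 + x3 <= 10
      b_ub = np.array([10.0])
      bounds = [(0.0, 5.0)] * 3

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
      print("optimal decision:", res.x)
      print("predicted outcome:", w @ res.x + b)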
  •  NEWS    Toshiaki Koike-Akino gave an invited lecture to USPTO on advanced photonics
    Date: May 4, 2022
    MERL Contact: Toshiaki Koike-Akino
    Research Areas: Artificial Intelligence, Communications, Electronic and Photonic Devices, Machine Learning, Optimization, Signal Processing
    Brief
    • Toshiaki Koike-Akino gave an invited lecture on advanced photonic devices at the United States Patent and Trademark Office (USPTO) Technology Fair on May 4, 2022. Topics of the lecture included the recent progress of applied artificial intelligence (AI) technologies for optical systems, nano-photonic devices, and quantum technology. During the 2-hour interactive online presentation, he lectured to more than 200 patent examiner participants.

      The USPTO Tech Fair organizer wrote:
      "Thank you very much for representing Advanced Photonic Devices at this year’s Technology Center 2800 Virtual Tech Fair held May 4th, 2022. Tech Fair is an important part of the United States Patent and Trademark Office’s Patent Examiner Technical Training Program (PETTP). Having a scientifically well-trained examiner workforce and ensuring the quality, consistency, and reliability of issued patents are top priorities at the USPTO. The PETTP is designed to achieve those priorities by giving examiners direct access to technical experts who are willing to share their knowledge about prior art and industry standards for both emerging and established technologies. Experts like yourself help to maintain our high quality of patent examination by keeping examiners updated on technologies and innovations pertinent to their field of examination.
      We very much appreciate your efforts, time, and contributions."
  •  TALK    [MERL Seminar Series 2022] Prof. Vincent Sitzmann presents talk titled Self-Supervised Scene Representation Learning
    Date & Time: Wednesday, March 30, 2022; 11:00 AM EDT
    Speaker: Vincent Sitzmann, MIT
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
    Abstract
    • Given only a single picture, people are capable of inferring a mental representation that encodes rich information about the underlying 3D scene. We acquire this skill not through massive labeled datasets of 3D scenes, but through self-supervised observation and interaction. Building machines that can infer similarly rich neural scene representations is critical if they are to one day parallel people’s ability to understand, navigate, and interact with their surroundings. This poses a unique set of challenges that sets neural scene representations apart from conventional representations of 3D scenes: Rendering and processing operations need to be differentiable, and the type of information they encode is unknown a priori, requiring them to be extraordinarily flexible. At the same time, training them without ground-truth 3D supervision is an underdetermined problem, highlighting the need for structure and inductive biases without which models converge to spurious explanations.

      I will demonstrate how we can equip neural networks with inductive biases that enable them to learn 3D geometry, appearance, and even semantic information, self-supervised only from posed images. I will show how this approach unlocks the learning of priors, enabling 3D reconstruction from only a single posed 2D image, and how we may extend these representations to other modalities such as sound. I will then discuss recent work on learning the neural rendering operator to make rendering and training fast, and how this speed-up enables us to learn object-centric neural scene representations, learning to decompose 3D scenes into objects, given only images. Finally, I will talk about a recent application of self-supervised scene representation learning in robotic manipulation, where it enables us to learn to manipulate classes of objects in unseen poses from only a handful of human demonstrations.
  •  NEWS    Rui Ma gives an invited talk at the Digital Intensive PA/Transmitter for RF Communications Workshop at IMS2022
    Date: June 19, 2022
    Research Areas: Communications, Electronic and Photonic Devices, Machine Learning
    Brief
    • MERL Researcher Rui Ma will give an invited talk titled "All Digital Transmitter with GaN Switching Mode Power Amplifiers" at a technical workshop during the International Microwave Symposium (IMS) 2022. This IMS workshop (WSN) invites members from academia and industry to discuss the latest development activities in the area of digital-intensive power amplifiers and transmitters for RF communications.

      In addition, Dr. Rui Ma is chairing a Technical Session (We2C) on "AI/ML on RF and mmWave Applications" at IMS2022.

      IMS is the flagship annual conference of the IEEE Microwave Theory and Technology Society (MTT-S).

  •  AWARD    Japan Telecommunications Advancement Foundation Award
    Date: March 15, 2022
    Awarded to: Yukimasa Nagai, Jianlin Guo, Philip Orlik, Takenori Sumi, Benjamin A. Rolfe and Hiroshi Mineno
    MERL Contacts: Jianlin Guo; Philip V. Orlik
    Research Areas: Communications, Machine Learning
    Brief
    • The MELCO/MERL research paper “Sub-1 GHz Frequency Band Wireless Coexistence for the Internet of Things” has won the 37th Telecommunications Advancement Foundation Award (Telecom System Technology Award) in Japan. Established in 1984, the award is given to research papers and works that have made significant contributions to the advancement, development, and standardization of information and telecommunications from technical and engineering perspectives. The award recognizes both the IEEE 802.19.3 standardization efforts and the technological advancements using reinforcement learning and robust access methodologies for wireless communication systems. This year there were 43 entries, of which 5 received awards and 3 received encouragement awards. This is the first time MELCO/MERL has received this award. The paper was published in IEEE Access in 2021, and its authors are Yukimasa Nagai, Jianlin Guo, Philip Orlik, Takenori Sumi, Benjamin A. Rolfe, and Hiroshi Mineno.
  •  NEWS    Devesh Jha delivers invited talk at Mechanical and Aerospace Engineering Department, NYU
    Date: March 1, 2022
    Where: Online/Zoom
    MERL Contact: Devesh K. Jha
    Research Areas: Artificial Intelligence, Machine Learning, Robotics
    Brief
    • Devesh Jha, a Principal Research Scientist in MERL's Data Analytics group, gave an invited talk at the Mechanical and Aerospace Engineering Department, NYU. The title of the talk was "Robotic Manipulation in the Wild: Planning, Learning and Control through Contacts". The talk presented some of the recent work done at MERL for robotic manipulation in unstructured environments in the presence of significant uncertainty.
  •  NEWS    MERL work on scene-aware interaction featured in IEEE Spectrum
    Date: March 1, 2022
    MERL Contacts: Anoop Cherian; Chiori Hori; Jonathan Le Roux; Tim K. Marks; Anthony Vetro
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning, Speech & Audio
    Brief
    • MERL's research on scene-aware interaction was recently featured in an IEEE Spectrum article. The article, titled "At Last, A Self-Driving Car That Can Explain Itself" and authored by MERL Senior Principal Research Scientist Chiori Hori and MERL Director Anthony Vetro, gives an overview of MERL's efforts towards developing a system that can analyze multimodal sensing information for highly natural and intuitive interaction with humans through context-dependent generation of natural language. The technology recognizes contextual objects and events based on multimodal sensing information, such as images and video captured with cameras, audio information recorded with microphones, and localization information measured with LiDAR.

      Scene-Aware Interaction for car navigation, one target application that the article focuses on, will provide drivers with intuitive route guidance. Scene-Aware Interaction technology is expected to have wide applicability, including human-machine interfaces for in-vehicle infotainment, interaction with service robots in building and factory automation systems, systems that monitor the health and well-being of people, surveillance systems that interpret complex scenes for humans and encourage social distancing, support for touchless operation of equipment in public areas, and much more. MERL's Scene-Aware Interaction Technology had previously been featured in a Mitsubishi Electric Corporation Press Release.

      IEEE Spectrum is the flagship magazine and website of the IEEE, the world’s largest professional organization devoted to engineering and the applied sciences. IEEE Spectrum has a circulation of over 400,000 engineers worldwide, making it one of the leading science and engineering magazines.
  •  TALK    [MERL Seminar Series 2022] Learning Speech Representations with Multimodal Self-Supervision
    Date & Time: Tuesday, March 1, 2022; 1:00 PM EST
    Speaker: David Harwath, The University of Texas at Austin
    MERL Host: Chiori Hori
    Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
    Abstract
    • Humans learn spoken language and visual perception at an early age by being immersed in the world around them. Why can't computers do the same? In this talk, I will describe our ongoing work to develop methodologies for grounding continuous speech signals at the raw waveform level to natural image scenes. I will first present self-supervised models capable of discovering discrete, hierarchical structure (words and sub-word units) in the speech signal. Instead of conventional annotations, these models learn from correspondences between speech sounds and visual patterns such as objects and textures. Next, I will demonstrate how these discrete units can be used as a drop-in replacement for text transcriptions in an image captioning system, enabling us to directly synthesize spoken descriptions of images without the need for text as an intermediate representation. Finally, I will describe our latest work on Transformer-based models of visually-grounded speech. These models significantly outperform the prior state of the art on semantic speech-to-image retrieval tasks, and also learn representations that are useful for a multitude of other speech processing tasks.
  •  NEWS    Jonathan Le Roux discusses MERL's audio source separation work on popular machine learning podcast
    Date: January 24, 2022
    Where: The TWIML AI Podcast
    MERL Contact: Jonathan Le Roux
    Research Areas: Artificial Intelligence, Machine Learning, Speech & Audio
    Brief
    • MERL Speech & Audio Senior Team Leader Jonathan Le Roux was featured in an extended interview on the popular TWIML AI Podcast, presenting MERL's work towards solving the "cocktail party problem". Humans have the extraordinary ability to focus on particular sounds of interest within a complex acoustic scene, such as a cocktail party. MERL's Speech & Audio Team has been at the forefront of the field's effort to develop algorithms giving machines similar abilities. Jonathan talked with host Sam Charrington about the group's decade-long journey on this topic, from early pioneering work using deep learning for speech enhancement and speech separation, to recent works on weakly-supervised separation, hierarchical sound separation, as well as the separation of real-world soundtracks into speech, music, and sound effects (aka the "cocktail fork problem").

      The TWIML AI Podcast, formerly known as This Week in Machine Learning & AI, was created in 2016 and is followed by more than 10,000 subscribers on YouTube and Twitter. Jonathan's interview marks the 555th episode of the podcast.
  •  TALK    [MERL Seminar Series 2021] Harnessing machine learning to build better Earth system models for climate projection
    Date & Time: Tuesday, December 14, 2021; 1:00 PM EST
    Speaker: Prof. Chris Fletcher, University of Waterloo
    MERL Host: Ankush Chakrabarty
    Research Areas: Dynamical Systems, Machine Learning, Multi-Physical Modeling
    Abstract
    • Decision-making and adaptation to climate change require quantitative projections of the physical climate system and an accurate understanding of the uncertainty in those projections. Earth system models (ESMs), which solve the Navier-Stokes equations on the sphere, are the only tool climate scientists have to make projections forward into climate states that have not been observed in the historical data record. Yet ESMs are incredibly complex and expensive codes and contain many poorly constrained physical parameters, for processes such as clouds and convection, that must be calibrated against observations. In this talk, I will describe research from my group that uses ensembles of ESM simulations to train statistical models that learn the behavior and sensitivities of the ESM. Once trained and validated, the statistical models are essentially free to run, which allows climate modelling centers to make more efficient use of precious compute cycles. The aim is to improve the quality of future climate projections, by producing better calibrated ESMs, and to improve the quantification of the uncertainties, by better sampling the equifinality of climate states.
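
      A minimal sketch of this emulator idea, assuming a toy stand-in for the expensive simulator (this is illustrative only, not Prof. Fletcher's code): fit a cheap statistical surrogate to a small ensemble of simulator runs, then query the surrogate densely at essentially no cost, with its own uncertainty estimate.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      def toy_esm(params):
          # Placeholder for an expensive Earth-system-model run mapping an
          # uncertain physics parameter to a climate metric of interest.
          return np.sin(3.0 * params) + 0.1 * params**2

      rng = np.random.default_rng(0)
      train_params = rng.uniform(0.0, 2.0, size=(8, 1))   # a small, expensive ensemble
      train_metric = toy_esm(train_params).ravel()

      emulator = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
      emulator.fit(train_params, train_metric)

      # Once trained, the emulator is essentially free to evaluate densely,
      # and it reports its own predictive uncertainty.
      query = np.linspace(0.0, 2.0, 5).reshape(-1, 1)
      mean, std = emulator.predict(query, return_std=True)
      print(np.c_[query, mean, std])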
  •  NEWS    Toshiaki Koike-Akino Gives Seminar Talk at IEEE Boston Photonics
    Date & Time: December 9, 2021; 7pm EST
    Where: virtual
    MERL Contact: Toshiaki Koike-Akino
    Research Areas: Communications, Machine Learning, Signal Processing
    Brief
    • Toshiaki Koike-Akino (Signal Processing group, Network Intelligence Team) is giving an invited talk titled "Evolution of Machine Learning for Photonic Research" to the Boston Chapter of the IEEE Photonics Society on December 9. The talk covers recent MERL research on machine learning for nonlinearity compensation and nanophotonic device design.
  •  EVENT    Prof. Melanie Zeilinger of ETH to give keynote at MERL's Virtual Open House
    Date & Time: Thursday, December 9, 2021; 1:00pm - 5:30pm EST
    Location: Virtual Event
    Speaker: Prof. Melanie Zeilinger, ETH
    Research Areas: Applied Physics, Artificial Intelligence, Communications, Computational Sensing, Computer Vision, Control, Data Analytics, Dynamical Systems, Electric Systems, Electronic and Photonic Devices, Machine Learning, Multi-Physical Modeling, Optimization, Robotics, Signal Processing, Speech & Audio, Digital Video, Human-Computer Interaction, Information Security
    Brief
    • MERL is excited to announce the second keynote speaker for our Virtual Open House 2021:
      Prof. Melanie Zeilinger from ETH.

      Our virtual open house will take place on December 9, 2021, 1:00pm - 5:30pm (EST).

      Join us to learn more about who we are, what we do, and discuss our internship and employment opportunities. Prof. Zeilinger's talk is scheduled for 3:15pm - 3:45pm (EST).

      Registration: https://mailchi.mp/merl/merlvoh2021

      Keynote Title: Control Meets Learning - On Performance, Safety and User Interaction

      Abstract: With increasing sensing and communication capabilities, physical systems today are becoming one of the largest generators of data, making learning a central component of autonomous control systems. While this paradigm shift offers tremendous opportunities to address new levels of system complexity, variability and user interaction, it also raises fundamental questions of learning in a closed-loop dynamical control system. In this talk, I will present some of our recent results showing how even safety-critical systems can leverage the potential of data. I will first briefly present concepts for using learning for automatic controller design and for a new safety framework that can equip any learning-based controller with safety guarantees. The second part will then discuss how expert and user information can be utilized to optimize system performance, where I will particularly highlight an approach developed together with MERL for personalizing the motion planning in autonomous driving to the individual driving style of a passenger.
  •  EVENT    Prof. Ashok Veeraraghavan of Rice University to give keynote at MERL's Virtual Open House
    Date & Time: Thursday, December 9, 2021; 1:00pm - 5:30pm EST
    Location: Virtual Event
    Speaker: Prof. Ashok Veeraraghavan, Rice University
    Research Areas: Applied Physics, Artificial Intelligence, Communications, Computational Sensing, Computer Vision, Control, Data Analytics, Dynamical Systems, Electric Systems, Electronic and Photonic Devices, Machine Learning, Multi-Physical Modeling, Optimization, Robotics, Signal Processing, Speech & Audio, Digital Video, Human-Computer Interaction, Information Security
    Brief
    • MERL is excited to announce the first keynote speaker for our Virtual Open House 2021:
      Prof. Ashok Veeraraghavan from Rice University.

      Our virtual open house will take place on December 9, 2021, 1:00pm - 5:30pm (EST).

      Join us to learn more about who we are, what we do, and discuss our internship and employment opportunities. Prof. Veeraraghavan's talk is scheduled for 1:15pm - 1:45pm (EST).

      Registration: https://mailchi.mp/merl/merlvoh2021

      Keynote Title: Computational Imaging: Beyond the limits imposed by lenses.

      Abstract: The lens has long been a central element of cameras, since its early use in the mid-nineteenth century by Niepce, Talbot, and Daguerre. The role of the lens, from the Daguerreotype to modern digital cameras, is to refract light to achieve a one-to-one mapping between a point in the scene and a point on the sensor. This effect enables the sensor to compute a particular two-dimensional (2D) integral of the incident 4D light-field. We propose a radical departure from this practice and the many limitations it imposes. In the talk we focus on two inter-related research projects that attempt to go beyond lens-based imaging.

      First, we discuss our lab’s recent efforts to build flat, extremely thin imaging devices by replacing the lens in a conventional camera with an amplitude mask and computational reconstruction algorithms. These lensless cameras, called FlatCams, can be less than a millimeter in thickness and enable applications where size, weight, thickness or cost are the driving factors. Second, we discuss high-resolution, long-distance imaging using Fourier Ptychography, where the need for a large aperture aberration-corrected lens is replaced by a camera array and associated phase retrieval algorithms, resulting again in order-of-magnitude reductions in size, weight and cost. Finally, I will spend a few minutes discussing how the holistic computational imaging approach can be used to create ultra-high-resolution wavefront sensors.
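
      As a generic 1-D stand-in for mask-based lensless recovery (a toy illustration, not the FlatCam pipeline, which relies on a calibrated separable mask model), the mask multiplexes scene pixels onto the sensor, y = A x + noise, and the image is recovered by Tikhonov-regularized least squares:

      import numpy as np

      rng = np.random.default_rng(1)
      n_scene, n_sensor = 64, 96                   # toy 1-D scene and sensor sizes
      A = rng.integers(0, 2, size=(n_sensor, n_scene)).astype(float)   # assumed binary mask transfer matrix
      x_true = np.zeros(n_scene)
      x_true[20:28] = 1.0                          # simple test scene
      y = A @ x_true + 0.01 * rng.standard_normal(n_sensor)            # noisy multiplexed measurement

      lam = 1.0                                    # regularization weight
      x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ y)
      print("relative reconstruction error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))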
  •  AWARD    Mitsubishi Electric US Receives a 2022 CES Innovation Award for Touchless Elevator Control Jointly Developed with MERL
    Date: November 17, 2021
    Awarded to: Elevators and Escalators Division of Mitsubishi Electric US, Inc.
    MERL Contacts: Daniel N. Nikovski; William S. Yerazunis
    Research Areas: Data Analytics, Machine Learning, Signal Processing
    Brief
    • The Elevators and Escalators Division of Mitsubishi Electric US, Inc. has been recognized as a 2022 CES® Innovation Awards honoree for its new PureRide™ Touchless Control for elevators, jointly developed with MERL. The CES Innovation Awards program is sponsored by the Consumer Technology Association (CTA), organizer of CES, the largest and most influential technology event in the world. PureRide™ Touchless Control provides a simple, no-touch product that enables users to call an elevator and designate a destination floor by placing a hand or finger over a sensor. MERL initiated the development of PureRide™ in the first weeks of the COVID-19 pandemic by proposing the use of infrared sensors for operating elevator call buttons, and participated actively in its rapid implementation and commercialization, resulting in a first customer installation in October 2020.
  •  TALK    [MERL Seminar Series 2021] Prof. Thomas Schön presents talk at MERL entitled Deep probabilistic regression
    Date & Time: Tuesday, November 16, 2021; 11:00 AM EST
    Speaker: Thomas Schön, Uppsala University
    Research Areas: Dynamical Systems, Machine Learning
    Abstract
    • While deep learning-based classification is generally addressed using standardized approaches, this is really not the case when it comes to the study of regression problems. There are currently several different approaches used for regression and there is still room for innovation. We have developed a general deep regression method with a clear probabilistic interpretation. The basic building block in our construction is an energy-based model of the conditional output density p(y|x), where we use a deep neural network to predict the un-normalized density from input-output pairs (x, y). Such a construction is also commonly referred to as an implicit representation. The resulting learning problem is challenging and we offer some insights on how to deal with it. We show good performance on several computer vision regression tasks, system identification problems and 3D object detection using laser data.
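
      As a rough schematic of such an energy-based regression model (the architecture, the Gaussian importance-sampling proposal, and the toy data below are illustrative assumptions, not the speaker's exact recipe), a network f(x, y) scores input-output pairs, and the intractable log normalizer in -log p(y|x) is approximated by sampling:

      import math
      import torch
      import torch.nn as nn

      class EnergyNet(nn.Module):
          """Scores an (x, y) pair; higher score means higher un-normalized density."""
          def __init__(self, x_dim=1, y_dim=1, hidden=64):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Linear(x_dim + y_dim, hidden), nn.ReLU(),
                  nn.Linear(hidden, hidden), nn.ReLU(),
                  nn.Linear(hidden, 1),
              )

          def forward(self, x, y):
              return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)

      def nll(model, x, y, num_samples=128, proposal_std=1.0):
          # -log p(y|x) = -f(x, y) + log Z(x); approximate log Z(x) by
          # importance sampling with a Gaussian proposal centered at y.
          y_samp = y.unsqueeze(1) + proposal_std * torch.randn(x.shape[0], num_samples, y.shape[-1])
          x_rep = x.unsqueeze(1).expand(-1, num_samples, -1)
          log_q = torch.distributions.Normal(y.unsqueeze(1), proposal_std).log_prob(y_samp).sum(-1)
          log_z = torch.logsumexp(model(x_rep, y_samp) - log_q, dim=1) - math.log(num_samples)
          return (log_z - model(x, y)).mean()

      # Toy training loop on y = sin(x) + noise.
      model = EnergyNet()
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      x = torch.rand(256, 1) * 6 - 3
      y = torch.sin(x) + 0.1 * torch.randn_like(x)
      for _ in range(200):
          opt.zero_grad()
          loss = nll(model, x, y)
          loss.backward()
          opt.step()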
  •  EVENT    MERL Virtual Open House 2021
    Date & Time: Thursday, December 9, 2021; 1:00pm - 5:30pm (EST)
    Location: Virtual Event
    Research Areas: Applied Physics, Artificial Intelligence, Communications, Computational Sensing, Computer Vision, Control, Data Analytics, Dynamical Systems, Electric Systems, Electronic and Photonic Devices, Machine Learning, Multi-Physical Modeling, Optimization, Robotics, Signal Processing, Speech & Audio, Digital Video, Human-Computer Interaction, Information Security
    Brief
    • Mitsubishi Electric Research Laboratories cordially invites you to join our Virtual Open House, on December 9, 2021, 1:00pm - 5:30pm (EST).

      The event will feature keynotes, live sessions, research area booths, and time for open interactions with our researchers. Join us to learn more about who we are, what we do, and discuss our internship and employment opportunities.

      Registration: https://mailchi.mp/merl/merlvoh2021
  •  NEWS    Keynote Speech by Dr. Rui Ma at EDICON2021
    Date: December 10, 2021
    Research Areas: Electronic and Photonic Devices, Machine Learning
    Brief
    • MERL researcher Dr. Rui Ma is the keynote speaker for the Electronic Design Innovation Conference (EDICON 2021), to be held in Shenzhen, China on Dec. 9-10, with a talk titled "Digitization and intelligence: unlocking the innovation of future radios". The conference brings together international researchers from academia, industry, and media to share perspectives on the technology needed and being developed for the next generation of communications.
  •  TALK    [MERL Seminar Series 2021] Dr. Hsiao-Yu (Fish) Tung presents talk at MERL entitled Learning to See by Moving: Self-supervising 3D scene representations for perception, control, and visual reasoning
    Date & Time: Tuesday, November 2, 2021; 1:00 PM EST
    Speaker: Dr. Hsiao-Yu (Fish) Tung, MIT BCS
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning, Robotics
    Abstract
    • Current state-of-the-art CNNs can localize and name objects in internet photos, yet they miss the basic knowledge that a two-year-old toddler possesses: objects persist over time despite changes in the observer’s viewpoint or during cross-object occlusions; objects have 3D extent; solid objects do not pass through each other. In this talk, I will introduce neural architectures that learn to parse video streams of a static scene into world-centric 3D feature maps by disentangling camera motion from scene appearance. I will show that the proposed architectures learn object permanence, can imagine RGB views from novel viewpoints in truly novel scenes, can conduct basic spatial reasoning and planning, can infer affordances in sentences, and can learn geometry-aware 3D concepts that allow pose-aware object recognition to happen with weak/sparse labels. Our experiments suggest that the proposed architectures are essential for the models to generalize across objects and locations, and that they overcome many limitations of 2D CNNs. I will show how we can use the proposed 3D representations to build machine perception and physical understanding that is closer to that of humans.
  •  NEWS    Ankush Chakrabarty gave an invited talk at CRAN: Centre de Recherche en Automatique de Nancy, France
    Date: October 21, 2021
    Where: Université de Lorraine, France
    MERL Contact: Ankush Chakrabarty
    Research Areas: Artificial Intelligence, Control, Machine Learning, Multi-Physical Modeling, Optimization
    Brief
    • Ankush Chakrabarty (RS, Multiphysical Systems Team) gave an invited talk on "Bayesian-Optimized Estimation and Control for Buildings and HVAC" at the Research Center for Automatic Control (CRAN) at the University of Lorraine in France. The talk presented recent MERL research on probabilistic machine learning for set-point optimization and calibration of digital twins for building energy systems.
  •  AWARD    Daniel Nikovski receives Outstanding Reviewer Award at NeurIPS'21
    Date: October 18, 2021
    Awarded to: Daniel Nikovski
    MERL Contact: Daniel N. Nikovski
    Research Areas: Artificial Intelligence, Machine Learning
    Brief
    • Daniel Nikovski, Group Manager of MERL's Data Analytics group, has received an Outstanding Reviewer Award from the 2021 Conference on Neural Information Processing Systems (NeurIPS'21). NeurIPS is one of the world's premier conferences on machine learning.
  •  TALK    [MERL Seminar Series 2021] Prof. Greg Ongie presents talk at MERL entitled Learning to Solve Inverse Problems in Computational Imaging: Recent Innovations
    Date & Time: Tuesday, October 12, 2021; 1:00 PM EST
    Speaker: Prof. Greg Ongie, Marquette University
    MERL Host: Hassan Mansour
    Research Areas: Computational Sensing, Machine Learning, Signal Processing
    Abstract
    • Deep learning is emerging as a powerful tool for solving challenging inverse problems in computational imaging, including basic image restoration tasks like denoising and deblurring, as well as image reconstruction problems in medical imaging. This talk will give an overview of the state-of-the-art supervised learning techniques in this area and discuss two recent innovations: deep equilibrium architectures, which allow one to train an effectively infinite-depth reconstruction network; and model adaptation methods, which allow one to adapt a pre-trained reconstruction network to changes in the imaging forward model at test time.
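
      A rough sketch of the deep-equilibrium idea (an illustrative toy, not the speaker's implementation): the reconstruction is defined as a fixed point of a learned update z = f(z, y). Real deep equilibrium training differentiates through that fixed point implicitly; this sketch simply runs a forward fixed-point iteration.

      import torch
      import torch.nn as nn

      class EquilibriumUpdate(nn.Module):
          """One learned update that combines the current estimate with the measurements."""
          def __init__(self, channels=1, hidden=16):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(2 * channels, hidden, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(hidden, channels, 3, padding=1),
              )

          def forward(self, z, y):
              return self.net(torch.cat([z, y], dim=1))

      def solve_equilibrium(f, y, iters=30, tol=1e-4):
          # Naive fixed-point iteration z_{k+1} = f(z_k, y), stopped on small updates.
          z = torch.zeros_like(y)
          for _ in range(iters):
              z_next = f(z, y)
              if torch.norm(z_next - z) < tol * (torch.norm(z) + 1e-8):
                  return z_next
              z = z_next
          return z

      f = EquilibriumUpdate()
      y = torch.randn(1, 1, 32, 32)   # stand-in for a noisy measurement / degraded image
      recon = solve_equilibrium(f, y)
      print(recon.shape)              # torch.Size([1, 1, 32, 32])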
  •  TALK    [MERL Seminar Series 2021] Dr. Ruohan Gao presents talk at MERL entitled Look and Listen: From Semantic to Spatial Audio-Visual Perception
    Date & Time: Tuesday, September 28, 2021; 1:00 PM EST
    Speaker: Dr. Ruohan Gao, Stanford University
    MERL Host: Gordon Wichern
    Research Areas: Computer Vision, Machine Learning, Speech & Audio
    Abstract
    • While computer vision has made significant progress by "looking" — detecting objects, actions, or people based on their appearance — it often does not listen. Yet cognitive science tells us that perception develops by making use of all our senses without intensive supervision. Towards this goal, in this talk I will present my research on audio-visual learning: we disentangle object sounds from unlabeled video, use audio as an efficient preview for action recognition in untrimmed video, decode the monaural soundtrack into its binaural counterpart by injecting visual spatial information, and use echoes to interact with the environment for spatial image representation learning. Together, these are steps towards multimodal understanding of the visual world, where audio serves as both the semantic and spatial signals. In the end, I will also briefly talk about our latest work on multisensory learning for robotics.