RE: FOA PA-20-188
June 8, 2022
Dear NIH Staff,
This is an initial submission of a K99/R00 Research Career Development application, entitled “Ultra-precision
Clinical Imaging and Detection of Alzheimer’s Disease Using Deep Learning”, in response to the Parent FOA
PA-20-188. My research uses deep learning methods to improve imaging and detection of early-stage Alzheimer’s
Disease (AD) in individuals and to advance aging and AD research. I am therefore requesting assignment of this
application to the National Institute on Aging (NIA).
The following individuals have agreed to write letters of reference in support of my application:
1. Dr. Bernd Girod, Ph.D. Professor of Electrical Engineering,
Department of Electrical Engineering, Stanford University;
2. Dr. Vivek Goyal, Ph.D. ECE Associate Chair of Doctoral Programs, Professor (ECE),
Department of Electrical and Computer Engineering, Boston University; and
3. Dr. Gordon Wetzstein, Ph.D. Associate Professor of Electrical Engineering,
Department of Electrical Engineering, Stanford University.
Thank you very much for handling my submission. Please do not hesitate to contact me at any time during the
application process if I can be of assistance in any way.
Sincerely,
Sean I. Young
Research fellow, Harvard Medical School
Research affiliate, CSAIL, MIT
MGH/HST Martinos Center for Biomedical Imaging
149 13th St,
Charlestown, MA, 02129
Phone: (617) 758 9783
E-mail: siyoung@mgh.harvard.edu
PROJECT SUMMARY AND ABSTRACT
In Alzheimer’s Disease (AD) studies, longitudinal within-subject imaging and analysis of the human brain give
us valuable insight into the temporal dynamics of the early disease process in individual subjects and allow us to
assess therapeutic efficacy. However, longitudinal imaging tools have not yet been optimized for clinical studies
or for use on nonharmonized scans. Challenges include reducing noise across serial magnetic resonance
imaging (MRI) scans while weighting each time point equally to avoid bias, and appropriately accounting for
atrophy, all in the presence of varying image intensity and contrast, MR distortions, and subject motion across time.
Many general tools exist for detecting longitudinal change in carefully curated research data (such as ADNI),
in which the scan protocol has been harmonized across acquisition sites to minimize differential distortion
and gradient nonlinearities are removed prior to data release. Unfortunately, these tools do not work accurately
on the unharmonized MRI scans that make up the bulk of available research data, or on clinical data, where the
practical need for clinicians to schedule a subject on different scanners introduces additional differences across
scan sessions. For retrospective analysis of past scans and for clinical use, it is thus critical to develop imaging
tools that are agnostic to global scanner-induced differences in images yet highly sensitive to the subtle
neuroanatomical changes, such as atrophy in AD, that are strongly predictive of the early disease process.
To address these issues, we propose to design, implement, and validate a deep learning (DL) AD image
analysis framework for detecting neuroanatomical change in the presence of large image differences caused by
the acquisition process itself, including field strength, receive coil, sequence parameters, gradient nonlinearities,
B0 distortions, scanner manufacturer, and subject motion across time. We leverage the fact that, within a subject,
a physical deformation relates the brain scans acquired across time, unlike in the cross-subject case. Focusing
exclusively on longitudinal within-subject studies allows us to craft ultra-sensitive registration and change
detection tools that substantially outperform the general-purpose tools used in cross-subject studies, where
registration is intended only to find approximate anatomical correspondences. Our longitudinal
imaging framework is thus able to learn to disentangle true neuroanatomical change from irrelevant distortions.
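For illustration only, the sketch below (in PyTorch) shows the basic ingredients of such a learning-based registration step: a small convolutional network predicts a dense within-subject displacement field, a spatial transformer warps the earlier scan toward the later one, and training combines an image-similarity term with a displacement penalty. All layer sizes, loss weights, and the toy inputs are placeholders and should not be read as the proposed framework itself.

```python
# Illustrative sketch only: a minimal learning-based registration step.
# Layer sizes, loss weights, and inputs are placeholders, not the proposed framework.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisplacementNet(nn.Module):
    """Predicts a dense 3-D displacement field from a pair of scans."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 3, 3, padding=1),  # 3 channels: x, y, z displacement
        )
    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))

def warp(moving, disp):
    """Warp `moving` by `disp` (in normalized grid units) with trilinear sampling."""
    b, _, d, h, w = moving.shape
    zz, yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, d), torch.linspace(-1, 1, h),
        torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([xx, yy, zz], dim=-1).unsqueeze(0).expand(b, -1, -1, -1, -1)
    return F.grid_sample(moving, grid + disp.permute(0, 2, 3, 4, 1), align_corners=True)

# Toy usage on two random 32^3 "scans" of the same subject.
model = DisplacementNet()
moving, fixed = torch.rand(1, 1, 32, 32, 32), torch.rand(1, 1, 32, 32, 32)
disp = model(moving, fixed)
warped = warp(moving, disp)
# Image similarity plus a displacement penalty (a smoothness term would be used in practice).
loss = F.mse_loss(warped, fixed) + 0.1 * disp.pow(2).mean()
loss.backward()
```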
Since the applicant has a computational background, the proposed training program at Harvard, MIT and
MGH will focus on neuroscience and neurology during the K99 phase to develop the skills needed to transition
to independence in the R00 phase. The applicant aims to become an expert in clinical imaging of AD and push
the limits of what is currently possible in AD research, fundamentally enhancing the quality of healthcare. We
believe that the proposed project is a first step in this direction and that the tools developed will pave the way
for clinical imaging and analysis of AD and of neurodegenerative disease processes in general.
PROJECT NARRATIVE
Certain changes in the human brain, such as atrophy in the case of Alzheimer’s Disease (AD), are strongly
predictive of early disease processes. However, using conventional image processing to detect this anatomical
change in clinical MR scans taken across time is difficult due to various MR effects and subject motion, which
can mask the anatomical change relevant to AD diagnosis and research. The aims of the proposed project are
to leverage recent advances in deep learning to design, develop, and evaluate an AD imaging and analysis
framework that can resolve anatomical change with high accuracy, aiding early diagnosis and intervention
before the onset of dysfunction.
FACILITIES AND OTHER RESOURCES: MARTINOS CENTER
The main facilities and resources of the MGH/HST Martinos Center for Biomedical Imaging at Massachusetts
General Hospital (MGH) are based on the hospital’s research campus in the Charlestown Navy Yard area. The
center has close affiliations with the Harvard–MIT Division of Health Sciences and Technology (HST) and the
Harvard Center for Brain Science Imaging Facility in Cambridge, MA. Satellite research facilities are located at
the Martinos Imaging Center at MIT. The center currently occupies about 85,000 ft² in the Charlestown
Navy Yard and houses basic and clinical research laboratories as well as educational areas and administrative
offices.
1 Large-bore Human MRI Systems at Martinos
The Martinos Center uses a range of Siemens MRI scanners of several software versions and has strong local
support from Siemens Healthcare through on-site engineers.
Bay 1: 3T Laboratory (Skyra). This is a 3T Siemens Skyra with 128-channel receive capabilities and a 2-channel
parallel transmit system. The system comes with 128 RF channels, 40 mT/m gradients and a 70 cm patient bore for
improved subject comfort (mandatory for fetal imaging) and stimulus access. The scanner provides Siemens 32-
and 64-channel head coils, and an assortment of body arrays. Bay 1 also contains an array of audiovisual and
sensory stimuli equipment for fMRI studies, including digital high-definition rear projection, audio stimulation and
a subject response device. The stimulus equipment is set up to be run from a PC or a Mac, or the user’s laptop
computer. Stimuli can trigger or be triggered by the scanner. Bay 1 has also been equipped with a state-of-the-
art power injector. The system is configured for simultaneous TMS/MRI operation, including a video navigation
system for the TMS stimulator. The Bay 1 area contains the necessary subject-care environment, including waiting
and changing rooms, and support areas that include a business office, a data viewing area, a physician’s office, and
computer and magnet rooms.
Bay 3: 3T Laboratory (Trio). This is a 32-channel Siemens TIM Trio 3T whole-body MRI scanner which has an
insertable 36-cm (gradient coil ID) head-only gradient. The whole-body gradient system uses the same gradients
as the 1.5T Avanto (45 mT/m strength and 200 T/m/s slew rate). It has 32 independent RF receive channels for
phased-array coils and includes a Siemens 32-channel head coil and a home-built 32-channel head coil for the
gradient insert. Bay 3 also features an insertable, asymmetric head gradient coil (Siemens AC88) that is capable
of 60 mT/m and slew rates exceeding 600 T/m/s at a 70% duty cycle. This enables single-shot 3-mm resolution
EPI with an echo spacing of 300 μs at a sustained rate of 14 images per second. Bay 3 also has an assortment
of audiovisual and sensory stimulus equipment for fMRI studies, including rear projection, audio stimulation, a
subject response device and an eye-tracking setup.
Bay 4: 3T Laboratory (Prisma Fit). This is a 3T Siemens Prisma Fit, 128-channel whole-body MRI scanner with
a two-channel transmit system. The system features the Siemens XR200 gradient system with 80 mT/m gradient
strength and 200 mT/m/ms maximum slew rate. Bay 4 is equipped with a full assortment of body-imaging coils
as well as Siemens 32-channel and 64-channel head-neck coils. Bay 4 also has multinuclear capability and an
MGH-built 8-channel 31P head array is available. Additionally, it has an assortment of audiovisual and sensory
stimulus equipment for fMRI studies, such as rear projection, audio stimulation, a subject response device and
an eye-tracking setup. Bay 4 has been configured to allow simultaneous TMS stimulation, as well as recording
of simultaneous EEG.
Bay 5: 7T Laboratory. This laboratory supports an ultrahigh-field 7T whole-body MRI scanner with a 70 mT/m
(200 T/m/s max slew rate) gradient set (SC72B) and 32 RF receive channels. The 7T whole body magnet (90
cm magnet ID) was built by Magnex Scientific (Oxford, UK). Siemens provided the conventional MRI console,
the gradient, its drivers and the patient table. The system is shielded by 460 tons of steel. Integration of these
components and the design and construction of RF coils were performed jointly by MGH and Siemens. With its
high-performance gradient set, the system can provide better than 100 μm resolution and ultra-fast EPI readouts
for reduced image distortion. The system is equipped with a home-built 32-channel coil and an 8-channel head
array coil for human imaging. A selection of specialized coils is also available for ex-vivo MR microscopy and
primate imaging. The system has multinuclear imaging capability and coils for 31P and 13C are available. The
scanner has been upgraded by Siemens to contain 8 independent 1-kW transmit channels that are capable of
simultaneous parallel excitation with different RF pulse shapes for B1 shimming and/or parallel transmit methods
such as transmit SENSE. The 7T laboratory includes a visual display system and a button box for the acquisition
of subject responses in the scanner. A MedRad power injector is installed for the injection of gadolinium contrast
agents. A total of 32 GB of RAM is installed in the image reconstruction computer, facilitating higher-resolution
reconstructions on the scanner, and cabling is installed for routine streaming of data for offline processing. The
system has been upgraded with parallel transmit capability.
Bay 6: 3T Laboratory (TIM Trio with MR-PET Insert). The combined MR-PET system consists of a 3T Siemens
TIM Trio 32-channel whole-body MRI scanner (60 cm RF coil ID) with a BrainPET head-camera insert to allow
simultaneous MR-PET acquisitions. This system contains EPI, second-order shimming, CINE, MR angiography,
diffusion, perfusion and spectroscopy capabilities for neuro and body applications. It uses the same gradients
as the 1.5T Avanto (Bay 2; 45 mT/m strength, 200 T/m/s slew rate). The system is equipped with standard TIM
32-channel receivers, accommodating up to 32-element array coils. In addition, Bay 6 contains audiovisual and
sensory stimulus equipment for fMRI studies such as rear projection, audio stimulation, subject response device
and eye tracking setup. The system has one of the first PET cameras capable of simultaneous PET acquisition
during MR acquisition. The PET system is a head-only insert camera. This scanner is in close proximity to the
cyclotron and radiopharmaceutical facility, allowing for imaging studies that use radiotracers with short half-lives.
Bay 7: 3T Laboratory (Biograph mMR). The Bay 7 Biograph mMR scanner consists of a 3T whole-body super-
conductive magnet with active shielding, external interference shielding and a whole-body PET scanner. It is
equipped with a gradient system with a maximum gradient amplitude of 45 mT/m and a maximal slew rate of
200 T/m/s. Separate cooling channels that simultaneously cool primary and secondary coils allow the application
of extremely gradient-intensive sequences. This scanner is equipped with the TIM RF coils that were custom
designed to minimize the 511-keV photon attenuation. The fully integrated PET detectors use APD technology
and LSO crystals (8x8 arrays of 4x4x20 mm3 crystals). The PET scanner’s trans-axial and axial fields of view
are 59.4 cm and 25.8 cm, respectively. This scanner is housed close to the cyclotron and radiopharmaceutical
facility.
Bay 8: 3T Laboratory (Skyra with Connectom Gradients). A new Siemens Skyra 3T MRI system was installed
and upgraded to a Siemens Connectom platform with the addition of new high-power gradients. It comes with
64 RF channels and 300 mT/m gradients providing a unique system for performing in-vivo diffusion imaging. The
scanner has the capability for EPI imaging at a sustained rate of 15 images per second, CINE, MR angiography,
diffusion and perfusion studies and spectroscopy. Bay 8 contains an assortment of audio, visual, and sensory
stimulus equipment for fMRI studies including rear projection, audio stimulation, subject response device. Stimuli
can trigger or be triggered by the scanner. The stimulus equipment can be run using a PC or a Mac computer
installed in the 3T laboratory or the user’s laptop computer. This system is dedicated to connectomics imaging
in support of the multi-site Human Connectome Project consortium.
2 MRI Systems on MGH Main Campus
MGH’s main campus is located in Boston, about 15 minutes from the Martinos Center in Charlestown. Frequent
shuttle transport is provided between the two campuses for both researchers and patients. Resources on the
main campus include MRI and PET imaging, support laboratories, animal housing facilities and the MGH medical
library. These facilities are located in several buildings across the campus.
1.5T Whole-body MR Systems. Four GE and Siemens whole-body MRI systems are equipped with hardware
and software for CINE, MR angiography and spectroscopy capabilities.
3T Whole-body MR Systems. Three Siemens Trio MRI scanners are available, equipped with hardware and
software systems to perform EPI, second-order shimming, CINE, MR angiography, diffusion, perfusion, as well
as spectroscopy for both neuro and body applications.
3 Computational Resources
The following computational resources are freely available to the PI for the proposed project.
Martinos Center. The Martinos Center’s computing infrastructure consists of over 400 Linux workstations and
150 Windows and Mac desktop computers on users’ desks owned by individual research groups. There is also
a farm of over 90 Linux servers which handles central storage, e-mail, websites and other shared services. The
total storage capacity across the Martinos Center including disks in local workstations and central storage is 4
PB. The Martinos Center also hosts a compute cluster with 846 Intel Xeon E5472 3.0 GHz cores and over 5,900
GB of distributed memory for batch analysis jobs. These IT facilities are supported by IT staff including one full-
time PhD-level manager, who directs two full-time system administrators. In-house software tools are developed
and supported by three full-time programmers. Available commercial software includes Advanced Visual Systems
(AVS), MathWorks MATLAB, and Sensor Systems MEDx for general computation, simulation, and image analysis.
In fall 2020, the Martinos Center installed a new HPC GPU cluster with funding from the Massachusetts Life
Science Center (MLSC). It consists of:
four NVIDIA DGX A100 servers, each with eight NVIDIA A100 GPUs, two AMD EPYC 7742 64-core CPUs, 1 TB
of memory, and ten 100 GbE network connections;
five EXXACT servers, each with ten NVIDIA RTX8000 GPUs, two Intel Xeon Gold 6226R 16-core CPUs,
1.5 TB of memory, and one 100 GbE network connection;
thirty-two DELL R440 servers, each with two Intel Xeon Silver 4214R 12-core CPUs, 384 GB of memory, and one
25 GbE network connection; and
a 1.35 PB VAST storage server with 100% solid-state storage and eight 100 GbE network connections.
The SLURM batch scheduler is used to manage job submission by users to the cluster. The NVIDIA DGX A100
servers run Ubuntu 18.04 while all EXXACT and Dell servers run CentOS 8.
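As an illustration of how a user might submit a GPU batch job to a SLURM-managed cluster such as this one, the hedged sketch below wraps the standard sbatch command from Python; the partition name, resource limits, and training command are hypothetical placeholders rather than the actual cluster configuration.

```python
# Hypothetical sketch: submitting a GPU job to a SLURM-managed cluster via sbatch.
# Partition name, resource limits, and the training command are placeholders.
import subprocess

cmd = [
    "sbatch",
    "--partition=gpu",        # placeholder partition name
    "--gres=gpu:1",           # request one GPU
    "--cpus-per-task=4",
    "--mem=64G",
    "--time=24:00:00",
    "--wrap", "python train_model.py --epochs 100",  # hypothetical training command
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout.strip())  # e.g., "Submitted batch job <id>"
```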
Also available are Bruker BioSpin XWIN-NMR, OriginLab Origin, Acorn NMR Nuts suite for analysis of NMR
spectra, the Siemens IDEA environment for development of pulse sequences, as well as other general-purpose
image reconstruction software. A substantial level of internal software development for image and data analysis
is ongoing, using C/C++, Java, MATLAB, Pascal, Python, Perl, and TCL/TK.
Laboratory for Computational Neuroimaging. The Laboratory for Computational Neuroimaging (LCN) at the
Martinos Center hosts high-end workstations with NVIDIA Tesla graphics processing units (GPUs), including
four V100, four P100, two P40, and two K40 GPUs.
For high-performance image reconstruction, the LCN is equipped with a custom-designed ScaleMP vSMP system
with 128 Xeon E5472 3.0-GHz cores and 1 TB of shared memory. It uses a 40-Gbit/s QDR InfiniBand backplane
with a Rackswitch G8000 48-port aggregation switch and two 100-Gbit/s Ethernet links with fiber-optic
extenders for real-time data streaming from the MRI systems. The vSMP system runs the Siemens image
reconstruction software and can be fully incorporated into any of the Martinos Center’s MRI systems to enable
online image reconstruction of very large or high-data-rate acquisitions.
Partners HealthCare. The ERISOne Linux cluster provided by Partners HealthCare offers investigators cloud
computing resources. It is a cluster of Linux remote-desktop and compute nodes connected to high-speed
storage. ERISOne has over 7,000 CPU cores, 56 TB of total memory, and 5 PB of storage, as well as specialized
parallel-processing resources such as GPUs and specialized networks. Popular and on-request applications are
installed, maintained centrally, and supported by the Scientific Computing support team. Also available are
high-end Microsoft Windows analytics servers for data analysis, a large-memory server with 3 TB of memory and 64
CPU cores, and a GPU compute cluster with four NVIDIA Tesla P100 GPUs and 24 M2070 GPUs.
Harvard Medical School. Harvard Medical School’s O2 cluster is a shared high-performance computing (HPC)
environment with dedicated hardware available for high-memory and GPU-intensive tasks. The cluster contains
11,000 state-of-the-art compute cores, 32 GPUs (8 V100, 8 M40, 16 K80) and 68 TB of memory.
Advanced Computational Image Processing and Analysis Center (ACIPAC). The ACIPAC is a satellite of the
Martinos Center on the MIT campus, established in collaboration with the MIT Artificial Intelligence (AI)
Laboratory. This facility provides extensive resources and expertise for solving practical image-processing and
analysis problems relevant to biomedical imaging. The ACIPAC is a bridge to the affiliated MIT research community
and provides MIT students with a direct avenue to engage in biomedical imaging research at the Martinos Center.
4 Support Resources at Martinos Center
Electronics and Machine Workshops. Instrumentation for design, construction and repair is distributed across
the High-Field Laboratory in Bays 2–3; Bays 4–5; and the Photon Migration Laboratory. These workshops are
equipped with tools for working with electronic circuitry, fiber optics and mechanical devices. There is additional
equipment for fabrication of printed circuit boards, instrumentation for electronic testing as well as measurement
of digital, analog, and RF circuitry: power supplies, voltmeters, an R/L/C meter, RF power meters, oscilloscopes,
gaussmeters, RF sweepers, an analog impedance meter, a digital impedance analyzer, and 5 HP RF network
analyzers. Also available are machine tools including drill presses, a belt sander, a grinder, a band saw, a 13-inch
lathe and a small milling machine. A stock of materials, hardware and electronic components is kept. Machine
tools are available to carry out complete computer-assisted design and fabrication of probes, animal carriers,
gradient coils, etc. In addition to these resources, Martinos investigators also have access to the MGH machine
shop. Design and simulation tasks are supported by multiprocessor workstations running the Remcom BioPro
FDTD software for simulation of electromagnetic fields, Electronics Workbench Multisim 2001 for simulation of
electrical networks and IMSI TurboCad for mechanical design.
RF-coil Laboratory. The RF-coil laboratory consists of a 500 ft² area with 6 RF-compatible workbenches and 5
RF network analyzers. This space includes an electronics storage room for maintaining an extensive supply of
RF parts and tools. The laboratory has a circuit-board milling machine for creating circuit boards and coil layouts
and also a Dimension SST-1200 3D printer capable of making head-shaped models and helmet designs out of
ABS plastic from CAD files generated from MRI volume scans. Additional equipment includes an RF spectrum
analyzer, oscilloscopes (including a 1 GHz digital model), RF frequency synthesizers and common electronics
measurement and test devices. This laboratory creates custom MRI coils, such as coils for brain tissue imaging.
Education Area. This area consists of a conference room, an audio-visual laboratory, staff offices and general
desk space for graduate students, postdoctoral fellows, and junior faculty. The audiovisual laboratory has been
equipped with computers, TV monitors, VCRs, carousels, teaching files and tapes.
Administration Area. The Martinos Center’s administration area is located on the second floor of Building 149
in area 2301. Facilities located here include fax machines, photocopying, standard, color laser printers as well
as mailboxes for staff and faculty. The area also contains faculty and secretarial office space and a conference
room.
FACILITIES AND OTHER RESOURCES: CSAIL, MIT
The MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) is located in the MIT Stata Center, a
unique building designed by the world-famous architect Frank Gehry. The Center occupies roughly 713,000 ft²
and is located at the new gateway to the MIT campus at 32 Vassar Street.
1 Computational Resources
Stata Center. The Stata Center’s main data network was designed and implemented by CSAIL’s full-time staff of
network and computing system managers, known collectively as TIG (The Infrastructure Group). It consists of a
state-of-the-art 10-gigabit, single-mode fiber backbone, Cisco Catalyst series switches, a fault-tolerant network
topology, and 10/100/1000 Ethernet service over CAT6 copper cable to the desktop. Additionally, TIG maintains
and supports various centralized computing resources for the CSAIL community, including e-mail service, web
servers, DNS, DHCP, and many other common enterprise services. An 802.11g wireless local area network is
available throughout virtually all of the occupied space in the building.
Computer Vision Group. In addition to the resources described above, the computer vision group maintains an
extensive software library. This includes a range of public-domain and commercial software: compilers and
debuggers (including GNU and Sun proprietary toolchains), Netscape, Matlab, Mathematica, and
security-related packages such as S/Key (one-time passwords for accessing the lab from outside),
ssh and PGP encryption. Graphics and image processing software packages include XGL, XIL, OpenGL, Open
Inventor, Data Explorer, Analyze, XV, IslandWrite, IslandPaint, and IslandDraw, as well as Adobe Photoshop for Unix
systems.
High Performance Cluster. Additionally, CSAIL investigators have access to a high-performance cluster at the
Massachusetts Green High Performance Computing Center (MGHPCC), a state-of-the-art computing facility in
Holyoke, Massachusetts. This resource provides a peak capacity of 9,000–18,000 cores, a peak memory of 36 TB,
and 1.5 PB of high-speed working storage. The system is dedicated to life sciences research and is operated
jointly by a consortium of research groups from five universities and associated partners. The system enables
individual research participants to scale to the entire system for bursts of computation but is operated so that all
participants have balanced access on average.
Medical Vision Group. In addition to the resources available at CSAIL, Polina Golland’s Medical Vision Group
maintains a 400-core, 100-GPU computing cluster for computationally demanding processing. The group owns
270 TB of secure storage space maintained by the infrastructure group. The computing resources available
to the PI will be instrumental for the proposed research, which requires extensive computing power and adequate
memory and storage capacity for manipulating large datasets.
EQUIPMENT
Equipment available to the candidate for use at the Martinos Center, MIT and MGH during the K99 phase of the
award is listed below.
1 MGH/HST Martinos Center for Biomedical Imaging
Large-bore MRI systems:
3.0T Siemens Prisma Fit 128-channel whole-body MRI
3.0T Siemens Skyra 128-channel whole-body MRI
7.0T Siemens TIM ultra-high-field 32-channel head-only MRI
3.0T Siemens Connectom 64-channel whole-body MRI with high-amplitude gradients
3.0T Siemens TIM Trio 32-channel whole-body MRI
3.0T Siemens TIM Trio 32-channel whole-body MRI with MR-PET insert
3.0T Siemens Biograph mMR whole-body MR-PET
High-performance computing cluster:
32 NVIDIA A100 GPUs across eight AMD EPYC 7742 64-core CPUs
52 NVIDIA RTX8000 GPUs across twelve Intel Xeon Gold 6226R 16-core CPUs
18 NVIDIA RTX6000 GPUs across six Intel Xeon Gold 6226R 16-core CPUs
12 additional NVIDIA GPUs: four Tesla V100, four P100, two P40, and two K40
32 DELL R440 servers each with two Intel Xeon Silver 4214R 12-core CPUs
Other equipment and facilities:
RF-coil laboratory
Electronics and machine workshops
Education area
Administration area
2 Computer Science and Artificial Intelligence Lab, MIT
High-performance GPU computing cluster:
48 NVIDIA Titan XP GPUs across 48 Intel Xeon E5-2620 8-core CPUs
36 NVIDIA GeForce 2080ti GPUs across 36 Intel Xeon Silver 4210 10-core CPUs
12 NVIDIA Quadro RTX5000 GPUs across 12 Intel Xeon Silver 4110 8-core CPUs
Other equipment and facilities:
Electronics and machine workshops
Education area
Administration area
3 Massachusetts General Hospital Main Campus
Large-bore MRI systems:
3.0T Siemens Skyra 64-channel whole-body MRI
3.0T Siemens TIM Trio 32-channel whole-body MRI
1.5T Siemens Avanto 32-channel whole-body clinical MRI
High-performance GPU computing cluster (ERISXdl):
40 NVIDIA Tesla V100 across 40 Intel Xeon Gold 6226R 16-core CPUs
BUDGET JUSTIFICATION: K99 PHASE
Funding for the K99 phase of the award will support the personnel, activities and equipment listed below.
1 Key Personnel
Sean Young, PhD (PI). 10.8 calendar months (90.4%), years 1–2. Dr. Young is a research fellow at the Martinos
Center, Harvard Medical School, and a research affiliate in the Computer Science and Artificial Intelligence Lab
(CSAIL), MIT. His research training in electrical engineering focused on computational imaging and on displacement
modeling and estimation techniques for video compression and medical image registration. He will assume overall
responsibility for conducting the proposed research, including the design, execution, and interpretation of
experiments. The requested support is consistent with the established salary structure for MGH investigators of
equivalent qualification, rank and responsibilities.
Bruce Fischl, PhD (Primary Mentor). Dr. Fischl is Professor of Radiology at Harvard Medical School, and the
director of the Laboratory for Computational Neuroimaging (LCN) at the Martinos Center. He is well known for
his work on automated segmentation and labeling of brain morphometry data, implemented in the open-source
FreeSurfer software. His current research focuses on deep learning methods to increase the speed, flexibility
and accuracy of FreeSurfer. Dr. Fischl has coauthored over 290 journal publications and has over 122,000 total
citations. He is currently a principal investigator on multiple NIH R01, R25, RF1, and U01 grants. Dr. Fischl will
supervise the PI in all aspects of his research training and hold weekly meetings with the PI to ensure access to all
resources, facilities, and mentoring. No salary support or fringe benefits are requested.
Polina Golland, PhD (Co-mentor). Dr. Golland is Professor of Electrical Engineering and Computer Science at
MIT and the director of the Medical Vision Group (MVG) in the Computer Science and Artificial Intelligence Lab
(CSAIL) at MIT. Her research interests span computer vision and machine learning. Her current research is
focused on developing statistical analysis methods for the characterization of biological processes from images
from MRI to microscopy. Dr. Golland has coauthored over 300 refereed publications that have been cited over
17,000 times in total. She has been a primary advisor of 14 PhD students and 9 postdoctoral researchers, many
of whom are now independent investigators at leading research institutions. Dr. Golland will supervise the PI in
the deep learning aspects of the research and meet with the PI weekly to ensure access to resources, facilities, and
mentoring. No salary support or fringe benefits are requested.
Bradley Hyman, MD, PhD (Co-mentor). Dr. Hyman is Professor of Neurology at Harvard Medical School and
directs the Alzheimer’s Unit at the MGH Institute for Neurodegenerative Disease and Massachusetts Alzheimer’s
Disease Research Center. His current research is focused on the anatomical and molecular basis of dementia
in Alzheimer’s disease, and dementia with Lewy bodies. Dr. Hyman has coauthored over 750 journal publications
cited over 167,000 times in total. He is currently a principal investigator on multiple NIH R01, RF1, R56 and
P30 grants. Dr. Hyman will meet with the PI every four weeks to train him in the clinical aspects of Alzheimer’s
Disease research. No salary support or fringe benefits are requested.
Randy L. Buckner, PhD (Co-mentor). Dr. Buckner is Professor of Psychology and Neuroscience, Department
of Psychology, Harvard University, and directs the Buckner Lab at Harvard University as well as the Psychiatric
Neuroimaging Research Division at MGH. His current research seeks to determine whether dysfunction can be
detected prior to clinical symptoms in individuals at genetic risk for illness. Dr. Buckner has coauthored over 260
journal publications with over 132,000 total citations. He is currently a principal investigator on a number of
NIH R01, U01 and T32 grants. Dr. Buckner will meet with the PI every two weeks to train him in translational
Alzheimer’s Disease research. No salary support or fringe benefits are requested.
André JW van der Kouwe, PhD (Collaborator). Dr. van der Kouwe is Associate Professor of Radiology at HMS
and Director of 7T Imaging at the Martinos Center and a member of the LCN. He is a renowned expert in MR
physics and pulse-sequence design. In particular, Dr. van der Kouwe is widely recognized for his expertise in
prospective motion correction and has developed and published on several different navigator sequences for
motion tracking. During the mentored phase, he will meet with the PI as needed to provide consultation on the
modeling and estimation of MR distortions. Dr. van der Kouwe will be available to discuss career development
and continue to collaborate during the R00 phase. No salary support or fringe benefits are requested.
2 Non-key Personnel
Research technician (TBD). 0.60 calendar months (5%) in year 1 and 1.20 calendar months (10%) in year 2 are
requested. The research technician will help the PI with test-data acquisition at the Martinos Center. The budget will
partly support the salary of a current RA who also has other administrative duties for the PI’s primary mentor, Dr. Fischl.
3 Travel
International Conferences. $4,500 for one international conference per year is requested in years 1 and 2. This
covers conference registration ($750), travel ($2,500), lodging ($1,000) and meals ($250). A typical conference
duration is seven days including travel. The actual conference to attend will depend on factors including travel
restrictions imposed by the US government and the COVID-19 situation in the destination country. One conference will be
chosen from:
Intl. Conf. on Medical Image Computing and Computer-Assisted Intervention (MICCAI), held every year
(Vancouver, Canada in 2023, and Marrakesh, Morocco in 2024);
Information Processing in Medical Imaging (IPMI), held biennially (location TBD in 2023);
International Society for Magnetic Resonance in Medicine (ISMRM), held annually (Toronto in 2023, Singapore in 2024);
Intl. Workshop on Biomed. Image Registration (WBIR), held biennially (location TBD for 2024);
Intl. Conf. on Computer Vision (ICCV), held biennially (Istanbul, Turkey in 2023);
European Conf. on Computer Vision (ECCV), held biennially (location TBD for 2024);
Asian Conf. on Computer Vision (ACCV), held biennially (location TBD for 2024); and
Image and Vision Computing New Zealand held biennially (location TBD for 2024).
The PI will write a conference paper for presentation at the chosen conference (preferably MICCAI or ISMRM).
Attendance is mandatory for the paper to be published as part of the conference proceedings.
Domestic Conferences. $5,000 for two domestic conferences per year is requested for years 1–2. This will cover
conference registration ($1,500), travel ($1,000), lodging ($2,000) and meals ($500). A typical duration for
domestic conferences is five days including travel. Dr. Young will attend both:
The BRAIN Initiative Investigators Meeting, held annually (location TBD for 2023); and
Conference on Computer Vision and Pattern Recognition (CVPR) held annually (Indianapolis, IN in 2023
and Seattle, WA, 2024).
The PI will write conference papers for presentation at the two conferences. Attendance is mandatory for these
papers to be published in the conference proceedings. The PI will still attend even if his paper is rejected, since
these conferences attract leading medical imaging researchers and provide the PI with an opportunity to discuss
and receive feedback on his work.
4 Materials and Supplies
Laptop Computer. $4,660 is requested in year 1 for a 16” performance laptop computer. The specifications are:
Apple M1 Max chip with 10-core CPU, 32-core GPU, and 16-core Neural Engine, for deep learning;
64 GB unified memory, to allow programming, manuscript preparation, and video calls all in parallel;
4 TB SSD storage, to hold and instantly access tens of thousands of high-resolution MR volumes;
16-inch Liquid Retina XDR display, for crisp visualization of MR images and segmentations; and
Pro Apps Bundle for Education, to create movie-quality oral presentation videos for published work.
This performance laptop will replace the PI’s older 2014 15-inch Mac computer to ensure continued productivity,
especially while on the go (commutes on the train, flight layovers, etc.), and will provide a portable environment for
scientific writing, communication with collaborators, and presentations, as well as a platform for developing and
debugging deep learning systems and for accessing the Martinos GPU cluster over the Internet.
Desktop Monitor. $1,900 is requested in year 1 for a performance 27” desktop monitor. The specifications are:
27-inch 5K Retina display; and
12MP ultra-wide camera, studio-quality microphones, and six-speaker sound system, to allow high-quality remote
presentations.
This performance monitor will greatly boost the PI’s productivity with the above 16” laptop computer whenever he
needs to work after hours or on weekends, especially for remote presentations across time zones.
Computer Accessories. $1,000 is requested in year 1 for a keyboard ($200) and trackpad ($150) for use with the
desktop monitor, noise-cancelling headphones ($550) for work in noisy environments, and various dongles and
cables for connecting the laptop to the wired network, projector, etc. ($100).
Stationery. $81 is requested in years 1 and 2 for stationery items, including pens, paper, and notebooks.
5 Others
IT Support Charges. $750 is requested in K99 year 1, and $1,070 in year 2. This covers:
Unix account fees ($180) per year in years 1 and 2;
Unix workstation maintenance ($250) per year in years 1 and 2; and
High-performance RAID data storage at Martinos. 1TB in year 1 ($320) and 2TB in year 2 ($640).
RAID storage includes automatic backups and will be used to house all data and models related to the proposed
project. Workstation maintenance includes weekly backup of the workstation data.
Open Access Publication Charges. $3,545 is requested in K99 year 1 for one open access publication in the
IEEE Trans. Medical Imaging, and $7,090 in year 2 for two publications in the same journal. The $3,545 per
publication covers the open access fee ($2,045) and mandatory page charges ($1,500, at $250 per page in excess
of the first eight pages; 14 pages is typical for a paper in this journal). The requested funds may be used for
open access publications in NeuroImage ($3,450 per publication) instead of IEEE Trans. Medical Imaging.
Postdoc Learn-and-Lunch. $131 is requested in K99 year 2 for the PI to host a learn-and-lunch event among
the eight or so postdocs in the lab. Since the event will be held within the lab, the money will be used to order
food from food delivery services (UberEats, DoorDash, etc).
6 Indirect Cost
Indirect Costs. Indirect costs are requested at an 8% rate per federal guidelines for K awards.
BUDGET JUSTIFICATION: R00 PHASE
During the R00 independent phase, $249,000 is requested per year for years 3–5.
CANDIDATE’S BACKGROUND
Sean I. Young is a research fellow in the Department of Radiology, Harvard Medical School, and a research affiliate
in the Computer Science and Artificial Intelligence Lab (CSAIL), MIT, where he works on various computational
neuroimaging problems. Previously, he was a postdoctoral scholar at Stanford University, where he worked on
fast computational methods for non-line-of-sight imaging. He received his PhD in electrical engineering from the
University of New South Wales in Sydney, Australia. In 2018, he received (together with Prof. David Taubman)
the Australian Pattern Recognition Society (APRS) best paper award for his work “Fast optical flow extraction
from compressed video” [1]. His research expertise lies in computational imaging and, in particular, in the modeling
and estimation of displacement for video compression, scene understanding and image registration problems.
Sean’s computational imaging research began in 2012 with his PhD work on video compression (under the
supervision of Prof. Taubman), exploring the use of dense and parametric displacement models for an efficient
representation of video. Since two successive video frames usually contain almost the same content, one way
to reduce redundancy, and thereby compress the video data, is to transmit only the first frame together with
displacement and difference fields that warp the first frame and synthesize the second. Interestingly, these are the
same principles that underlie anatomical change detection in images. However, because the displacement field itself
has a transmission cost, compactly parameterizing this displacement field and precisely determining its values
led to a significant compression improvement. Sean’s US patent [2], filed in 2018, demonstrates the utility of high-
order (generalizations of affine) displacement models for video compression, accompanied by a procedure that
can recover the values of the displacement parameters to high precision. Such high-order nonlinear displacement
models also play an important role in registering MRI volumes in the presence of geometric MR distortions.
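As a toy illustration of the prediction principle described above (not the actual codec or patented method), the sketch below warps the first frame with a displacement field and adds a residual field to reconstruct the second frame exactly; the shift, noise level, and image size are arbitrary placeholders.

```python
# Toy illustration of motion-compensated prediction: reconstruct frame 2 from
# frame 1, a displacement field, and a residual. Not the actual codec described above.
import numpy as np
from scipy.ndimage import map_coordinates

def warp(frame, disp):
    """Warp a 2-D frame by a displacement field of shape (2, H, W)."""
    h, w = frame.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return map_coordinates(frame, [rows + disp[0], cols + disp[1]], order=1, mode="nearest")

rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))
disp = np.full((2, 64, 64), 1.5)                                     # a simple global shift as the "motion"
frame2 = warp(frame1, disp) + 0.01 * rng.standard_normal((64, 64))   # next frame: shifted plus a small change

prediction = warp(frame1, disp)        # decoder-side prediction from frame1 and the displacement field
residual = frame2 - prediction         # small difference field, cheap to encode
assert np.allclose(prediction + residual, frame2)                    # exact reconstruction
```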
Concurrently with his work on parametric displacement estimation techniques, Sean investigated using dense
deformation fields to improve video compression systems. In particular, Sean demonstrated that the estimation
of deformations, and many related inverse problems in imaging and vision, can be solved with high-dimensional
Gaussian filtering, obviating the need for slow, iterative optimization procedures. For certain tasks, this filtering
technique accelerated displacement estimation by a factor of 100 [1]. Sean’s doctoral work resulted in a PhD thesis
on non-linear optimization and regularization techniques for inverse problems in imaging, six first-author
conference papers [3–8], and two first-author journal papers [9,10]. Sean’s PhD research provides a sound theoretical
foundation for robustly registering images and detecting neuroanatomical change in the proposed project.
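The hedged sketch below conveys only the general idea that a single filtering pass can play the role of the smoothness term in an otherwise iterative, regularized displacement estimate; the edge-aware, high-dimensional filtering in the cited work is considerably more sophisticated, and the field, noise level, and filter width here are arbitrary.

```python
# Minimal sketch: one Gaussian filtering pass standing in for the smoothness
# regularization of an iterative displacement estimate. Illustrative only; the
# cited work uses far more sophisticated high-dimensional (edge-aware) filtering.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
true_disp = np.ones((2, 128, 128))                           # a smooth (constant) ground-truth field
noisy_disp = true_disp + 0.5 * rng.standard_normal((2, 128, 128))

# One filtering pass per displacement component replaces many optimization iterations.
smoothed = np.stack([gaussian_filter(c, sigma=5) for c in noisy_disp])

print(np.abs(noisy_disp - true_disp).mean())   # error before filtering
print(np.abs(smoothed - true_disp).mean())     # substantially smaller error after filtering
```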
After obtaining his PhD, Sean moved to Stanford University in 2019 for postdoctoral training with Prof. Bernd
Girod and continued his work on efficient methods for imaging and vision problems. Collaborating closely with
Dr. Gordon Wetzstein, he published an efficient method for non-line-of-sight imaging [11] that can resolve surfaces
of hidden scene objects with an unprecedented level of detail and a 1000-fold speed-up over state-of-the-art
methods. At the same time, Sean published compression algorithms for neural networks [12] that enable efficient
imaging and computer vision on embedded devices, improving the accuracy of compressed networks over
state-of-the-art compression methods. Neural network compression is an important problem in radiology as well,
owing to the demand for real-time image processing on imaging devices using neural network models. Sean’s work
at Stanford led to two first-author papers [11,13] in flagship computer vision conferences and four papers (three first-
author and one senior-author) [1,12,14,15] in two top-ranked computer science and artificial intelligence journals.
In response to the COVID-19 pandemic, Sean became determined to pursue research closer to people’s lives. He
moved to the Martinos Center in late 2020 for medical imaging research with Dr. Juan Eugenio Iglesias and worked
on semi-supervised learning for imaging, which allows segmentation networks to be trained on a mix of labeled
and unlabeled data. This semi-supervised learning framework is later used for longitudinal brain segmentation
(see Research Strategy), where both labeled (but synthetic) and unlabeled (but real, acquired) brain images are
used for anatomical segmentation of the brain optimized for disease detection. This work led to one journal paper
(under revision, listed in the biosketch) and also to Sean’s collaboration with NIST as a consulting author. More
importantly, this work familiarized Sean with the intricacies of MRI that can adversely affect our ability to detect
anatomical change, which is crucial for the imaging and analysis of Alzheimer’s Disease (AD), for example. These
include gradient nonlinearities and B0 distortions, as well as other acquisition-induced differences in the image
(intensity and contrast), all of which hinder change detection using optimization-based displacement estimation
techniques. More recently, Sean proposed a brain image registration framework [16] capable of recovering smooth
deformations with sub-voxel accuracy in the presence of MR distortions and noise. The image registration and
semi-supervised segmentation frameworks together form the scientific axis of the proposed project.
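To make the semi-supervised idea concrete, the sketch below combines a supervised loss on labeled (e.g., synthetic) images with a consistency loss on unlabeled acquired images; the tiny network, the intensity perturbation, and the loss weight are placeholders and do not represent the actual framework or the registration method cited above.

```python
# Hedged sketch of a semi-supervised segmentation objective: supervised loss on
# labeled (synthetic) images plus a consistency loss on unlabeled (acquired) images.
# The network, perturbation, and weight are placeholders, not the actual framework.
import torch
import torch.nn as nn
import torch.nn.functional as F

seg_net = nn.Conv3d(1, 4, 3, padding=1)   # stand-in for a segmentation network (4 labels)

def semi_supervised_loss(labeled_img, labels, unlabeled_img, w=0.5):
    # Supervised term: cross-entropy against the known label map.
    sup = F.cross_entropy(seg_net(labeled_img), labels)
    # Unsupervised term: predictions should stay consistent under a simple
    # intensity perturbation of the same unlabeled image.
    p1 = F.softmax(seg_net(unlabeled_img), dim=1)
    p2 = F.softmax(seg_net(unlabeled_img * 1.05 + 0.01), dim=1)
    return sup + w * F.mse_loss(p1, p2)

labeled = torch.rand(1, 1, 16, 16, 16)            # e.g., a synthetic image with known labels
labels = torch.randint(0, 4, (1, 16, 16, 16))     # its label map
unlabeled = torch.rand(1, 1, 16, 16, 16)          # a real acquired image without labels
loss = semi_supervised_loss(labeled, labels, unlabeled)
loss.backward()
```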
Sean’s expertise in computational imaging and modeling, coupled with his newly acquired aptitude for deep
learning in MRI, makes him a highly suitable candidate to undertake the proposed project. Sean’s mentors
at Harvard, MIT and MGH will provide him with guidance and mentoring in the fields of neurology and radiology to
ensure that appropriate methods are used to address clinically relevant questions. The NIH K99/R00 award will
not only provide Sean an opportunity to work with these mentors and solve a problem of broad importance but also
support him in growing into an independent researcher who can ultimately train the next generation of researchers
to carry this scientific legacy forward.
CAREER GOALS AND OBJECTIVES
Dr. Sean I. Young aims to become a scientist and academic with expertise in clinical computational imaging of
neurological disease processes and is eager to apply his computational background and the recent advances in
deep learning to translational research. Upon transitioning to faculty, he will start his own research lab focused
on imaging of neurological diseases and closely related problems. Sean will draw on his experience with a
range of computational imaging problems and a deep understanding of neurodegenerative diseases, gained in
the K99 phase, to spearhead the computational imaging and aging research of his future lab in the R00 phase.
Sean sees the increasing demand for engineering academics with medical knowledge as an opportunity for
translational research, either in an engineering department with strong ties to a clinic or in a radiology department
with strong ties to engineering. Transitioning to faculty will also allow Sean to pursue his passion as an
educator: to motivate and inspire the next generation of researchers, passing on to them carefully distilled
knowledge in the hope that they will take these scientific discoveries and innovations further. Since Sean is trained in
engineering and computational imaging, his aim for the K99 phase is to address his knowledge gaps in neurology
and neurobiology, specifically as they relate to neurodegenerative diseases and disorders. A comprehensive
training program in neurology and radiology, steered by a mentoring committee of global leaders in neuroimaging
and neurology at Harvard, MIT and MGH, will ensure that Sean is well grounded in all aspects of his project.
Sean’s objective for the first two years of the R00 phase is to flesh out his research findings and prepare R01
applications to continue developing clinical and research tools that push the boundaries of what is currently
achievable in the imaging of neurodegenerative disease processes. A natural extension of AD imaging is to fine-tune
the software and repurpose it for, e.g., Huntington’s disease and schizophrenia. The deep learning-based longitudinal
image registration tools developed as part of the project are also applicable to other aging-related analyses. Seizing
this opportunity, Sean will prepare an R01 entitled “Contrast- and distortion-invariant longitudinal registration
of brain images”, leveraging the same deep learning-based framework to provide further insight into the brain
aging process. He will distribute his longitudinal imaging tools as part of the popular FreeSurfer software suite [17]
for neuroimaging to positively impact the broader AD, aging and neuroimaging research community.
CAREER DEVELOPMENT PLAN
The main objective of the proposed career development plan is to address the candidate’s knowledge gaps in
neurology and general neuroscience. Additionally, the plan aims to provide the candidate with training in peer
review, grant writing, and other components of academic life that will be essential to the candidate’s scientific
independence. During the K99 phase of the award, 75% of the candidate’s full-time effort will be devoted to the
proposed project and 25% to mentoring and coursework. During the R00 phase, Sean will continue to devote 75%
of his effort to the proposed project. In this section, the different components of Sean’s career development
activities are discussed. See Table 1 for a summary of the planned activities across the K99/R00 timeframes.
1 Direct Mentoring
The following mentoring committee of world-class experts in radiology, medical imaging and neurology across
Harvard, MIT and MGH will direct and evaluate the candidate’s progress during the K99 phase of the award. Fig. 1
plots the expertise of the mentors on a neurology–physics spectrum, showing the responsibilities of each mentor.
Dr. Bruce Fischl, PhD, Professor of Radiology at Harvard Medical School, is the director of the Laboratory for
Computational Neuroimaging (LCN) at the Martinos Center and will serve as a primary mentor. He is known for
his work on automated segmentation and labeling of brain morphometry data, implemented in the open source
FreeSurfer software suite. His current research focuses on the use of learning to increase speed, flexibility, and
accuracy of FreeSurfer. Dr. Fischl has coauthored over 290 journal publications, which have over 122,000 total
citations. He is currently a principal investigator across many NIH R01, R25, RF1 and U01 grants. He has been
primary mentor of 5 PhD students and 10 postdoctoral trainees, all of whom are now independent investigators
at leading research institutions. Sean will meet with Dr. Fischl weekly in the lab to receive feedback on progress
and training in grant writing and peer review. Sean will also assist Dr. Fischl in preparing grants for the lab.

Table 1. Career development activities (coursework, conferences, mentoring, supervision, professional) across K99/R00 timeframes.
Coursework. K99 year 1: MIT 9.01, MIT 9.015, HST.580. K99 year 2: MIT 9.13, MIT 9.24. R00 phase: instruct classes.
Conferences. K99 years 1–2: MICCAI or ISMRM, BRAIN, CVPR. R00 years 3–5: MICCAI, ISMRM.
Mentoring. K99 year 1: Fischl, Golland, Buckner (weekly); Hyman (monthly); 2 formal updates. K99 year 2: Fischl, Golland (weekly); Buckner, Hyman (monthly); 2 formal updates. R00 year 3: consultation with Fischl and Golland. R00 years 4–5: touch base with Fischl and Golland.
Supervision. K99 years 1–2: summer undergraduate student (1 per year). R00 year 3: undergraduates. R00 year 4: graduate students. R00 year 5: graduate students and postdocs.
Professional. K99 year 1: grant and IRB writing; responsible conduct of research; help Fischl and Golland with grants. K99 year 2: faculty job search, interview and negotiation skills, and lab initiation; help Fischl and Golland with grants. R00 year 3: grant writing; prepare R01. R00 year 4: apply for R01. R00 year 5: obtain R01.
Dr. Polina Golland, PhD, Professor of Electrical Engineering and Computer Science, MIT, is the director of the
Medical Vision Group (MVG) in the Computer Science and Artificial Intelligence Lab (CSAIL). She will serve as
Sean’s primary co-mentor at MIT. Her research interests span the fields of computer vision and machine learning. Her
current research focuses on developing statistical analysis methods for the characterization of biological processes
from images, from MRI to microscopy. Dr. Golland has coauthored over 300 refereed publications with over
17,000 total citations. She has been a primary advisor of 14 PhD students and 9 postdoctoral researchers, many
of whom are now independent investigators at world-class research institutions. Sean will meet with Dr. Golland
once a week at MIT to receive feedback on progress and training in deep learning research. Sean will also assist
Dr. Golland in preparing grant applications for collaborative projects with Dr. Fischl’s lab.
Dr. Bradley Hyman, MD, PhD, Professor of Neurology at Harvard Medical School, directs the Alzheimer’s Unit
at the MGH Institute for Neurodegenerative Disease and also the Massachusetts Alzheimer’s Disease Research
Center and will serve as a co-mentor. His research is focused on the anatomical and molecular basis of dementia
in Alzheimer’s disease and dementia with Lewy bodies. Dr. Hyman has coauthored over 750 journal publications
and has been cited over 167,000 times. He is currently a principal investigator on multiple NIH R01, RF1, R56
and P30 grants. He has received the Metropolitan Life Award, the Potamkin Prize, an NIH–NIA Merit award and
an Alzheimer’s Association Pioneer Award. Sean will meet with Dr. Hyman formally every quarter, with monthly
check-ins in his lab to receive training in clinical and neurological aspects of Alzheimer’s Disease research.
Dr. Randy L. Buckner, PhD, Professor of Psychology and Neuroscience at Harvard University, is the director of
the Buckner Lab at Harvard University and the Psychiatric Neuroimaging Research Division at MGH and will
serve as a co-mentor. His current research seeks to determine whether neurological dysfunction can be detected
prior to clinical symptoms in individuals at genetic risk for illness. Dr. Buckner has coauthored more than 260
journal publications with over 132,000 total citations. He is currently a principal investigator across a number of
NIH R01, U01 and T32 grants. He has received the MetLife Award for Medical Research and the Troland Research
Award from the National Academy of Sciences. Sean will meet with Dr. Buckner every week initially (and once
a month afterwards) to receive training in translational aspects of Alzheimer’s Disease research.
Dr. André JW van der Kouwe, PhD, Associate Professor of Radiology at Harvard Medical School, is director of
7T Imaging at the Martinos Center and a member of the LCN and will serve as a collaborator. He is a renowned
expert in MR physics and pulse-sequence design. In particular, Dr. van der Kouwe is widely recognized for his
expertise in prospective motion correction and has developed and published on a number of different navigator
sequences for motion tracking. During the mentored phase, Sean will meet with Dr. van der Kouwe as needed
to receive consultation on the modeling and estimation of MR distortions. Dr. van der Kouwe will be available
to discuss career development and continue to collaborate during the R00 phase.
2 Coursework
The coursework will help the candidate familiarize himself with key concepts in neuroscience and, ultimately, in neurodegeneration.
9.01 Introduction to Neuroscience (MIT). Introduction to the mammalian nervous system, with emphasis on the
structure and function of the human brain. Topics include: function of nerve cells, learning and memory, sensory
systems, control of movement and brain diseases. Instructor: M. Bear (fall, K99 year 1).
9.015 Molecular and Cellular Neuroscience Core I (MIT). Surveys selected areas in molecular and cellular
neurobiology. Topics include nervous system development, axonal pathfinding, synapse formation and function,
synaptic plasticity, ion channels and receptors, cellular neurophysiology, glial cells, sensory transduction, and
examples in human disease. Instructor: J. T. Littleton, M. Sheng (fall, K99 year 1).
HST.580 Data Acquisition and Image Reconstruction in MRI (HST). Applies analysis of signals and noise in linear
systems, sampling, and Fourier properties to magnetic resonance (MR) imaging acquisition and reconstruction.
Provides adequate foundation for MR physics to enable study of RF excitation design, efficient Fourier sampling,
parallel encoding, reconstruction of non-uniformly sampled data, and the impact of hardware imperfections on
reconstruction performance. Surveys active areas of MR research. Assignments include Matlab-based work with
real data. Includes visit to a scan site for human MR studies. Instructor: to be announced (fall, K99 year 1).
Fig. 1. Expertise of mentors (black markers) and collaborators (white markers) positioned on a neuro–physics spectrum, showing mentor
responsibilities. Drs. Fischl and Golland jointly provide computational mentoring at the candidate’s two institutions. Dr. Hyman provides
mentoring in neurology. Dr. Buckner bridges neurology with computation, Dr. van der Kouwe bridges physics with computation.
[Figure 1: mentors placed along a spectrum from purely neurological (Hyman), through computational (Buckner, Fischl, Golland), to purely physical (van der Kouwe).]
9.13 The Human Brain (MIT). Surveys the core perceptual and cognitive abilities of the human mind and asks
how these are implemented in the brain. Key themes include the functional organization of the cortex as well as
the representations and computations, developmental origins, and degree of functional specificity of particular
cortical regions. Emphasizes the methods available in human cognitive neuroscience, and what inferences can
and cannot be drawn from each. Instructor: N. Kanwisher (spring, K99 year 2).
9.24 Disorders and Diseases of the Nervous System (MIT). Topics examined include regional functional
anatomy of the central nervous system; brain systems and circuits; neurodevelopmental disorders including
autism; neuropsychiatric disorders such as schizophrenia; neurodegenerative diseases such as Parkinson’s and
Alzheimer’s; autoimmune disorders such as multiple sclerosis; and gliomas. Emphasis is given to diseases where a
molecular mechanism is well understood. Covers diagnostic criteria, clinical and pathological findings, genetics, model
systems, pathophysiology and treatment for each disorder or disease. Instructor: M. Sur (spring, K99 year 2).
3 Self Study
The candidate will also study MR physics under the guidance of Dr. van der Kouwe. Books to be used are:
Totally Accessible MRI. Written by Michael L. Lipton, MD, PhD, this practical guide offers an introduction to the
principles of MRI physics. Each chapter explains the why and how behind MRI physics. Readers will understand
how altering MRI parameters affects image quality and the speed at which
images are generated. Practical topics, selected for their value to clinical practice, include progressive changes
in key MRI parameters, imaging time, and signal to noise ratio. Illustrations, complemented by concise text, will
help the reader gain a thorough understanding of the subject without requiring prior in-depth knowledge. To be
read in K99 year 1 with the accompanying YouTube video lectures.
Principles of Magnetic Resonance Imaging. Written by Dwight G. Nishimura, PhD, this book presents the basic
principles of magnetic resonance imaging (MRI), focusing on image formation, image content, and performance
considerations. Emphasis is put on the signal processing elements of MRI, particularly on the Fourier transform
relationships. While developed as a teaching text for an electrical engineering course at Stanford University, the
material should be accessible to those coming from other technical fields. Chapters 1–7 cover the foundational
material. Later chapters (Chapters 8–11) provide extensions and selected topics. To be read in K99 year 2.
4 Conferences and Workshops
During the five years of K99/R00, the candidate will continue to attend scientific conferences to communicate
his research, develop new collaborations, and stay abreast of current developments in the field. In each year of
the K99/R00 award period, he will attend one international conference chosen from (in order of preference): the
International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI); the
International Society for Magnetic Resonance in Medicine (ISMRM) annual meeting; the Information Processing
in Medical Imaging (IPMI) meeting, held biennially; the International Conference on Computer Vision (ICCV), held
biennially; and the European Conference on Computer Vision (ECCV), held biennially. In addition, the candidate will attend two
domestic conferences: the BRAIN Initiative Investigators Meeting, and the IEEE Conference on Computer Vision
and Pattern Recognition (CVPR), both held annually. Sean will attend the education sessions held at the MICCAI
or ISMRM meetings, which consist of two full days of lectures. Sean will focus on sessions that cover medical
image registration and MR imaging of neurodegenerative diseases. Sean will also participate in relevant challenges at the
MICCAI workshops held each year, such as Machine Learning for Medical Image Reconstruction and Machine
Learning for Clinical Neuroimaging.
In addition to attending conferences and workshops, Sean will continue to attend local scientific seminars, such
as the weekly BrainMap seminar series at the Martinos Center and the two FreeSurfer courses held each year by the LCN.
5 Career Development and Transition to Independence
The Office for Research Career Development at Massachusetts General Hospital (MGH) offers a number of
professional development workshops. To advance his academic skills, Sean will participate in workshops on
grant and IRB-protocol writing, interview and negotiation skills, and lab initiation across the five years of
the award. During the K99 phase, Sean will enhance his grant-writing capacity by assisting the LCN with grant applications. To
prepare for the launch of his own lab by the end of the R00 phase, Sean will write R01s in year 3, submit them
in year 4, and begin mentoring students and postdocs in year 5. Sean will continue to work with his mentors
during and beyond the R00 phase while also establishing new collaborations to gain independence in his field.
6 Progress Review and Evaluation
Sean’s research progress and career development will be reviewed and evaluated by Dr. Fischl every week at
the Martinos Center. Dr. Golland will also evaluate Sean’s research progress weekly at MIT. Sean will additionally
coordinate two meetings per K99 year with all mentors to provide formal updates on progress and milestones.
TRAINING IN THE RESPONSIBLE CONDUCT OF RESEARCH
Research at the MGH/HST Martinos Center and MIT is held to the highest ethical standards under strict guidance
from the faculty mentors at both institutions. The PI pledges to act in accordance with these standards.
1 Training Program and Schedule
In the first year of the K99 award, the PI will complete a three-part program comprising on-line training, a large
group meeting, and lectures and discussions; a total of 8 hours of face-to-face lectures is included:
Online Training. MGH Partners HealthCare (PHS) uses the CITI (Collaborative Institutional Training Initiative)
program. The PI will perform on-line CITI training in responsible scientific conduct, with a refresher course every
three years, as mandated by MGH. In addition, the PI will complete the modules devoted to research misconduct,
data acquisition and management, responsible authorship and peer review, mentoring, as well as collaborative
research.
Large Group Meeting. In the first year of K99, the PI will attend a four-hour RCR seminar run by PHS. This is a
large group meeting that includes faculty presentations, panels, and group discussions covering
compliance and post-award financial management, conflict of interest, and an overview of human-subject research
policies. The majority of session time is devoted to faculty presentations and panels focused on topics such as
data integrity and documentation, authorship, publication, and research misconduct.
Lectures and Discussions. This comprises lectures and discussion groups hosted by MGH. The PI will attend
a minimum of four lectures selected from a list of hospital-based offerings eligible for RCR credit. These expand
on the topics covered in the CITI program and include documentation and data integrity, interactions with
industry, mentor/mentee responsibilities, and an introduction to clinical research.
2 Lecture and Seminar Topics
The large-group RCR seminar and the lectures and discussion groups draw upon a roster of senior hospital
faculty who teach on a revolving basis. Examples of current instructors and their topics include:
Dr. David Altshuler (mentorship)
Dr. Barbara Bierer (research misconduct, conflict of interest)
Dr. F. Richard Bringhurst (conflict of interest, data integrity)
Dr. Dennis Brown (data integrity, publication, peer review, authorship)
Dr. Tayyaba Hasan (research misconduct, authorship)
Dr. Henry Kronenberg (documentation and data integrity)
Mary Mitchell (grants financial management)
Dr. Karen Miller (documentation and data integrity)
Allison Moriarty (research misconduct)
Dr. P. Pearl O’Rourke (IRB, IACUC, IBS, privacy board and research misconduct)
Dr. Marc Sabatine (conflict of interest)
Sarah White (IRB, privacy board)
Upon finishing all PHS program requirements, the PI will receive a certificate of completion.
In accordance with the MGH regulations, the PI has already completed the following CITI on-line courses:
Conflict of Interest in Research (Oct 24, 2021)
Biomedical Research Investigators and Key Personnel (Oct 23, 2021)
During the K99 phase of the award, Drs. Bruce Fischl and Polina Golland will ensure that the PI has a thorough
understanding of practical RCR aspects before initiating the experiments. They will also monitor the PI’s RCR training
throughout the mentored phase and make sure that these skills are fully developed as he transitions to the
independent phase.
DESCRIPTION OF INSTITUTIONAL ENVIRONMENT
The K99 phase of this project will be carried out at the Martinos Center for Biomedical Imaging and at CSAIL, MIT.
1 MGH/HST Martinos Center for Biomedical Imaging
The MGH/HST Martinos Center is a research center based in the Department of Radiology of Massachusetts
General Hospital (MGH) and is closely affiliated with Harvard Medical School and the Massachusetts Institute of
Technology (MIT). Its mission is to create, develop and apply a variety of innovative and advanced imaging and
other technologies to facilitate a comprehensive understanding and better care of the human mind and body.
The Martinos Center currently has roughly 100 faculty researchers and more than 200 affiliated and visiting
faculty, postdoctoral research fellows, and graduate students, with expertise across a spectrum of disciplines
including engineering, physical sciences, computational sciences and informatics, behavioral and cognitive
neurosciences, basic and applied biological sciences, chemistry, imaging and radiological sciences, and
clinical sciences. Additionally, the center serves as a resource to the greater MGH community and biomedical
imaging experts from institutions across the greater Boston area and around the United States. MGH provides
training for professionals from MIT, Harvard Medical School, and Tufts University. One crucial component of the
center’s research is its partnerships with the government, industry and private foundations including the National
Center for Research Resources, the Office of National Drug Control Policy, Siemens Healthcare, as well as the
National Foundation for Functional Brain Imaging.
The center occupies roughly 85,000 ft² of space on the MGH East Campus in the Charlestown Navy Yard
and includes clinical, research, educational, and administration areas. A large expansion of the office space and
clinical and experimental space was recently completed. The center uses low-field, high-speed, high-field, and
conventional magnetic resonance (MR) imaging, MR spectroscopy, optical imaging, magnetoencephalography
(MEG) and electroencephalography (EEG) to explore properties of biological systems and to study and develop
novel ways to treat human pathologies such as neurodegenerative disorders, mental illnesses, cancer, and
cardiovascular diseases. The Martinos Center operates 6 MRI and 2 MR-PET large-bore scanners for human
clinical studies and 4 small-bore NMR systems for animal and chemistry applications. The center has additional
laboratories specializing in MEG, optical imaging, photon migration, molecular imaging, behavioral testing, RF-
coil electronics and biochemistry. The aim of the Martinos Center is to foster interdisciplinary interaction through
training and integration of students, fellows, clinicians, and researchers with diverse backgrounds. Toward this
aim, the center supports and interacts with various training and educational programs. Educational endeavors
include affiliation and active involvement with degree-granting institutions and training programs sponsored by
NIH. Imaging tools and technology developed by the Martinos Center staff are taught to students, scientists, and
clinicians via specialized on-site and international workshops.
The Laboratory for Computational Neuroimaging at the Martinos Center explores new ways to conduct basic
and clinical, as well as cognitive neuroscience research through the use of MR imaging technologies. The lab’s
research is focused on optimization and analysis of neuroimaging data, encompassing structural, functional, and
diffusion MRI. Efforts are also focused on developing MR scanner pulse sequences and image reconstruction
methods that enhance image tissue contrast, reduce motion artifacts and improve the reliability of scans across
and within individuals. The lab has a number of computing resources, including a high-performance computing
cluster and GPU workstations, and promotes collaboration and resource sharing with other leading institutions.
2 CSAIL, Massachusetts Institute of Technology
CSAIL has long been a leader in the fields of Artificial Intelligence, Cognitive Science, and Computer Science
and is home to 110 principal investigators, including both MIT faculty and research staff. These numbers
include seven current or former MacArthur Fellows and eight Turing Award winners. CSAIL consistently ranks at
or near the top of undergraduate and graduate computer science programs in the world. Students and postdocs
in Golland’s group interact closely with other groups in CSAIL, including computer vision, machine learning and
MRI acquisition. This environment is ideal for creativity and exchange of ideas. In addition to the many relevant
courses and seminars at MIT, Golland’s group hosts a Biomedical Imaging and Analysis seminar series. Talks
in the series are given by invited top researchers in the field of medical image computing. Their visits provide an
invaluable opportunity for the students and postdocs in the group to share their own research and to brainstorm
new research directions with thought leaders in the field in the less formal atmosphere that comes with hosting a visiting
speaker. Our close connections with the machine learning and statistical inference groups at CSAIL and across MIT
are particularly relevant for the proposed project, helping us stay abreast of theoretical developments relevant to our domain.
In addition to CSAIL, Golland’s group also interacts closely with the MIT Institute for Data, Systems, and Society
(IDSS), a recently founded epicenter of statistical modeling and machine learning at MIT, and with the MGH/BWH
Clinical Data Science Center (CDSC) recently established to explore applications of machine learning in medical
image computing and radiology. Through these centers, we come together with other local groups who share
our focus on inference and learning in medical images.
SPECIFIC AIMS
In Alzheimer’s Disease (AD) studies, longitudinal within-subject approaches have immense potential to increase
sensitivity and specificity and to improve the efficiency of clinical trials by requiring fewer subjects and providing
potential surrogate endpoints to evaluate therapeutic efficacy. There is also great potential for these studies to
enhance modeling of the disease process and its temporal dynamics. However, longitudinal tools have not yet
been optimized for use in clinical studies or in the wild with nonharmonized scans. Challenges include optimum
denoising of serial scans while weighting each time point equally to avoid bias in morphometry; accounting for
potential atrophy and handling varying session-specific MRI contrast and distortion when registering images.
Current computational tools for registration and detection of neuroanatomical change are mostly intended for
use on carefully curated research data18–29 such as ADNI30, where the scan protocol has been harmonized across
acquisition sites to minimize differential distortions, and the bulk of the remaining ones (e.g., gradient nonlinearities)
are removed prior to data release. Unfortunately, these tools fail in the presence of differences in the acquisition
that are ubiquitous in nonharmonized datasets and in clinical imaging, where scheduling a subject on the same
scanner and protocol as a previous session is difficult or impossible for a clinician to ensure. For use in clinical
settings, it is therefore critical to develop registration and change tools which can ignore large-scale acquisition-
induced image differences but are highly sensitive to subtle neuroanatomical change that is predictive of early
disease processes. Handcrafting an image processing pipeline to achieve this would be extremely challenging
and time-consuming, so we instead turn to deep learning and train a deep neural network to achieve this goal.
At first glance, learning to both register images and detect anatomical change between them may seem like
conflicting objectives: maximizing the structural overlap between any two images minimizes detectable change,
and vice versa. This is not the case with supervised learning31, in which a deep neural network can be trained to
register two images with a precisely relaxed fit in regions of anatomical change, allowing change to be captured
as a byproduct of registration. Supervised registration thus extends rigid or affine registration to high-order non-
linear registration and can compensate for acquisition-induced deformations and subject motion. To expose the
registration network to these physical effects, we train the network on synthetic images produced by our
neurophysics simulation engine, which is capable of synthesizing pairs of brain images simulating neuroanatomical
change, such as atrophy, and physics-induced distortion. The registration network is supervised using synthetic
deformations as target fields, allowing it to learn to disentangle the actual neuroanatomical change of interest from
the irrelevant physical deformations that relate the images. Previous deep learning-based registration methods32–35
cannot be used for change detection, since they merely maximize structural overlap, while classical optimization-based
methods26,28 can produce grossly inaccurate results in the presence of contrast change and MR distortions.
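For illustration, the following minimal PyTorch-style sketch (with a hypothetical reg_net and warp helper; not the actual implementation) makes the byproduct-of-registration idea concrete at inference time: because the network is trained to predict only the acquisition-induced deformation, the residual left after warping is the candidate neuroanatomical change.

    import torch
    import torch.nn.functional as F

    def warp(moving, disp):
        # Resample `moving` (B, 1, D, H, W) along the displacement `disp`
        # (B, D, H, W, 3), given in the normalized [-1, 1] grid convention
        # of F.grid_sample.
        B = moving.shape[0]
        theta = torch.eye(3, 4, device=moving.device).unsqueeze(0).repeat(B, 1, 1)
        identity = F.affine_grid(theta, list(moving.shape), align_corners=False)
        return F.grid_sample(moving, identity + disp, align_corners=False)

    @torch.no_grad()
    def change_map(reg_net, fixed, moving):
        # Hypothetical inference step: reg_net predicts only the physical
        # (acquisition- and motion-induced) deformation, so the residual left
        # after warping is the candidate neuroanatomical change.
        disp = reg_net(fixed, moving)      # acquisition-induced deformation only
        aligned = warp(moving, disp)       # moving scan resampled into fixed space
        return fixed - aligned             # residual highlights anatomical change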
Supervised registration has a number of critical advantages: (1) the resulting framework is completely free of
biases in image contrast and modality, since the registration network is trained on a vast, intentionally unrealistic
range of pairs of synthetic image contrasts; (2) supervision using known deformation fields as targets allows the
registration network to learn to predict change with subvoxel accuracy across images; and (3) end-to-end training
of the registration network eliminates preprocessing steps (such as affine preregistration and skull stripping) and
facilitates downstream deep learning tasks such as longitudinal segmentation, which require registration to be
end-to-end differentiable. The core of our AD imaging framework is simulation-powered supervised registration
and semi-supervised longitudinal segmentation on both simulated and acquired data. Our specific aims are:
Aim 1 Deep Acquisition-Invariant Registration and Change Detection
1.1. Neuro–physics engine for simulating neuro-anatomical change and MRI distortions in image pairs.
1.2. Subvoxel-accurate registration and change detection separating atrophy from irrelevant MR distortions. A
neural network is trained using the image pairs produced by the neurophysics engine from 1.1.
Aim 2 Deep Longitudinal Image Segmentation for Disease Detection
2.1. Biological models of the disease process for supervised spatiotemporal learning.
2.2. Spatiotemporally consistent longitudinal segmentation for optimal disease detection. A deep neural network
segments structures with spatiotemporal consistency by aggregating MR scans across all time points.
Aim 3 FreeSurfer Integration and Retrospective Longitudinal Analysis
3.1. Integration into FreeSurfer for dissemination to the wider aging, AD and neuroimaging community.
3.2. Use the FreeSurfer-integrated software for retrospective analysis of existing longitudinal cohort data with
irregular scan intervals and session-specific protocols collected by the Massachusetts AD Research Center.
A key deliverable of this project is a DL-based longitudinal imaging framework that can detect anatomical change
from a pair of scans acquired from any modality with a high level of accuracy. It can additionally segment them
by aggregating the scans across all time points. While our key interest is in detecting AD from MR images, our
imaging framework is adaptable to an array of other neurological diseases and imaging modalities. As such, we
expect the project to lead to further proposals for NIA-sponsored R01s during the candidate’s R00 award phase.
RESEARCH STRATEGY: SIGNIFICANCE
Brain MRI has become a primary tool for both studying brain aging and clinical diagnosis of neurodegenerative
disorders, because MR images acquired across many time points allow us to visualize degenerative processes,
showing atrophy of specific areas and degeneration of structures.36–40 Yet, the ubiquitously present geometric distortions
in MR scans continue to limit its full potential for use in longitudinal aging and AD studies, especially on clinical
data, where session-specific image differences across time hamper studies. In clinical environments, a subject
imaged at two time points will typically be scanned on different hardware platforms, with different vendors, field
strengths, receive coils and sequence parameters. This implies that differential distortion between the two scan
sessions will prevent rigid or affine registration from aligning most of the brain with a high level of accuracy, and
intensity and contrast properties of the two images will vary substantially. However, retrospectively harmonizing
clinical data for diagnosis and research would require a handcrafted image processing pipeline tailored to each
setting, which is impractical for economic and technical reasons. The computational community has fortunately
made remarkable progress with deep learning in the last seven years41,42, allowing an optimal image processing
pipeline for many problems to be learned from data. One successful deep learning paradigm for machine vision
is supervised learning from synthetic data43,44, which reaps the full benefits of deep supervised learning without
manually labeled data. We extend this paradigm, pioneered by our lab, to transform AD imaging research.
Modernizing Image Processing for Diagnosis and Research (Aims 1–2)
Currently, clinicians spend large amounts of time seeking to visualize the same slice in image data acquired at
different times. This is, of course, not entirely possible due to differences in subject positioning, slice orientation
and location, and differential MRI distortions. All of these factors obscure true anatomical change, reducing the
diagnostic accuracy, and resulting in reduced efficacy of care and poorer patient outcomes. In this research, we
seek to eliminate all of these barriers to accurate detection of human brain changes over time by (1) developing
simulators that enable us to synthesize temporal atrophy and session-specific MRI distortions, (2) leveraging the
flexibility and power of modern deep learning to create an image processing pipeline that registers the images over
time while learning to ignore distortions, noise, and atrophy, and (3) building deep learning algorithms that directly
detect true anatomical change in the presence of MRI distortions, noise and other non-anatomical changes over
time and also generate segmentations of multi-timepoint images that are optimal for detecting disease effects.
For aging and AD research, deep learning not only speeds up image processing compared to, e.g., FreeSurfer,
but also leads to better study outcomes, since, in the synthesis paradigm, a neural network model is exposed to
an effectively unlimited number of brain images of different contrasts and distortions and works like a superhuman research assistant. This will
allow clinical and neuroscientific researchers to benefit from the improved statistical power and vastly reduced
bias in longitudinal studies. Also, longitudinal frameworks currently work only on scans with an identical contrast
across time points. This is a major limitation because hospital scheduling systems typically make it impractical
to match scanner, sequence, field strength, manufacturer, head coil, etc., across time. This potentially excludes
tens of thousands of datasets, especially clinical ones that use both T2w and FLAIR at different times or across
different sites. Deep neural networks trained on simulated data change this and unlock existing clinical data for
use in aging and AD studies.
Economies of Scale across AD and Aging Research Community (Aim 3)
It is also worth pointing out that the PI is a research fellow in the FreeSurfer Lab and a regular contributor to the
development of the FreeSurfer software, especially its latest deep learning components. The software written for
this project will be integrated into FreeSurfer and distributed to its 56,000 licensees worldwide, further boosting
the significance of the research project. Currently, the FreeSurfer software has been used in over 5,000 cases
by the Alzheimer’s Disease Neuroimaging Initiative as well as in other AD45–48, Parkinson’s49–52, and Huntington’s53–55
disease and schizophrenia56–60 studies. The software tools developed for the PI’s research thus translate to
economies of scale across the entire aging, AD, and neuroimaging community. The PI will use the FreeSurfer-
integrated version of his tools on Massachusetts ADRC data for analysis, giving himself ample opportunities to
test its user-friendliness (usability), portability, and scalability, and finally prepare the tools for mass adoption.
To summarize, the overall significance of this research project is as follows:
Longitudinal AD imaging will work on clinical data for diagnosis and not just on curated research data, and
ongoing studies need not adhere to the original scan protocol, allowing, e.g., better scanners partway through.
Better imaging using deep learning will lead to increased statistical power for longitudinal AD studies.
Existing longitudinal clinical data will be unlocked for research, enabling further aging and AD studies.
Integrating PI’s longitudinal software into FreeSurfer will lead to economies of scale in aging research.
RESEARCH STRATEGY: INNOVATION
Generally speaking, innovations in clinical imaging are attributable to better hardware (sensing), better software
(reconstruction), or both. Our innovations are entirely software-based and thus improve imaging outcomes for
all clinics and research labs engaged in longitudinal analysis of neuroimaging data at zero cost. Our innovations
come from using supervised deep learning on simulated data for an optimal learned longitudinal reconstruction
pipeline (registration, change detection, and segmentation) even in the presence of various forms of noise and
geometric distortion irrelevant to neuroanatomical change of interest. Our premise is that combining cutting-edge
deep learning networkswhich have dramatically improved the state-of-the-art in an array of applicationswith
sophisticated models of MR image formation that can simulate artifacts that exceed the realistic range, will yield
a set of tools that provide unparalleled accuracy, robustness and speed, enabling significant advances in basic
and clinical neuroscience and resulting in a sustained positive impact on neuroimaging research. The proposed
research will greatly advance the goals of the NIH standards of rigor and reproducibility by providing access to
rapid, accurate, automated segmentations of neuroanatomical change that accept a wide class of input images,
which would allow large-scale studies to investigate whether functional, structural, genetic, molecular, and connectional
changes are associated with an array of diseases. Here we expand on our technical innovations in Aims 1–2.
Registration and Change Detection Using a Neuro-Physics Engine (Aim 1)
In Aim 1, we will develop a comprehensive neurophysics engine capable of generating images with simulated
contrast, resolution, distortion, and subject motion, as well as degree and location of atrophy. This builds on the
FreeSurfer laboratory’s work on a hippocampal atrophy simulator28 and the deformation-based simulator61. We
embed the simulator into a deep learning framework so that different synthetic images will be created on the fly
in every minibatch. The advantage of this approach is that the networks learn to be invariant to a wide array of
effects including the direction of MRI contrast, image resolution, subject motion, gradient nonlinearities, B0 and
B1 distortions and inhomogeneities, field strength, structure-specific atrophy as well as image noise.
A critical advantage of the synthetic approach is that there can be no mismatch between the label maps from
which ground truth is derived, and the imaging data, as the intensity images are synthesized directly from the
label maps. For detecting atrophy this is even more important, as atrophic effects are by definition subtle in the
early stages of AD and thus difficult to detect and/or manually label. The advantage of synthesizing atrophy is
similar to the advantage of synthesis in general: the ground-truth labels localizing the atrophy are exactly correct
since the images with atrophy are synthesized directly from the labels. We emphasize that the patterns of atrophy
need not match exactly what we see in practice. We then train the registration network on pairs of such images.
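To illustrate how the simulator and the network interact, the simplified sketch below draws a fresh synthetic pair inside the training loop. The helpers shown here (random_smooth_field, random_contrast, synth_minibatch) are hypothetical stand-ins for components of the neurophysics engine, not the engine itself, and the warp helper from the earlier sketch is reused.

    import torch
    import torch.nn.functional as F

    def random_smooth_field(B, D, H, W, scale=8, strength=0.05, device=None):
        # Smooth random displacement (B, D, H, W, 3) in normalized grid units,
        # standing in for gradient-nonlinearity/B0 distortion and subject motion.
        low = torch.randn(B, 3, D // scale, H // scale, W // scale, device=device)
        up = F.interpolate(low, size=(D, H, W), mode='trilinear', align_corners=False)
        return up.permute(0, 2, 3, 4, 1) * strength

    def random_contrast(labels, n_labels):
        # Render a label map (B, D, H, W, long dtype) with a random intensity per
        # label: a crude stand-in for the engine's MR contrast synthesis.
        means = torch.rand(n_labels, device=labels.device)
        img = means[labels] + 0.02 * torch.randn(labels.shape, device=labels.device)
        return img.unsqueeze(1)                               # (B, 1, D, H, W)

    def synth_minibatch(labels, n_labels=4):
        # One synthetic training pair per call: the same anatomy rendered with two
        # unrelated contrasts, the second corrupted by a physical deformation.
        # Atrophy simulation, bias fields, resolution changes, etc. would be added
        # here in the full engine; they are omitted to keep the sketch short.
        B, D, H, W = labels.shape
        fixed = random_contrast(labels, n_labels)
        moving = random_contrast(labels, n_labels)
        target = random_smooth_field(B, D, H, W, device=labels.device)
        moving = warp(moving, target)                         # warp from earlier sketch
        return fixed, moving, target

Because a fresh pair is drawn for every minibatch, the registration network never sees the same image twice and is pushed to become invariant to contrast, resolution, and distortion.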
An overwhelming majority of recent image registration networks32–35 are trained unsupervised, in the sense
that ground-truth deformation fields are not required in the supervision of these networks. Instead, a surrogate
photometric loss is used to maximize the similarity between the fixed image and the moving one (warped by the
predicted deformation field), in lieu of a loss that penalizes the differences between the predicted and ground-
truth deformations. This approach is not capable of detecting the anatomical change directly since warping one
brain image merely to maximize its structural similarity with another minimizes detectable change. Supervising
the registration network using ground-truth warps (which contain subject motion and MR distortions but not the
neuroanatomical change) overcomes this challenge by allowing the network to learn to register with a precisely
relaxed fit in the regions of change, separating neuroanatomical change from physical deformations to produce
a difference image of neuroanatomical change as a byproduct of registration. We will rigorously test the network’s
ability to separate neuroanatomical change from MRI distortion by building a separate, capacity-matched change
detection network that has access to the full warp field in addition to the pair of registered images, and verifying
that this improves change detection accuracy. Our preliminary results below suggest this is indeed the case.
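For concreteness, the following PyTorch-style sketch contrasts the surrogate photometric loss of unsupervised registration with supervision against known synthetic warps. The field layout and smoothness weight are illustrative assumptions rather than the final design.

    import torch
    import torch.nn.functional as F

    def photometric_loss(fixed, moving_warped):
        # Unsupervised surrogate: rewards making the warped moving image match the
        # fixed image, which also erases the very change we wish to detect.
        return F.mse_loss(moving_warped, fixed)

    def supervised_warp_loss(pred_field, target_field, lam=0.01):
        # Proposed supervision (sketch): penalize the predicted deformation against
        # the known synthetic target, which contains MR distortion and subject
        # motion but not the simulated atrophy, plus a smoothness regularizer.
        data_term = F.mse_loss(pred_field, target_field)
        smooth_term = sum(pred_field.diff(dim=d).pow(2).mean() for d in (1, 2, 3))
        return data_term + lam * smooth_term

    # Typical training step (hypothetical reg_net; fields are (B, D, H, W, 3)):
    #   pred = reg_net(fixed, moving)
    #   loss = supervised_warp_loss(pred, target)   # target from the synthesis engine
    #   loss.backward(); optimizer.step()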
Semi-Supervised Longitudinal Segmentation for Disease Detection (Aim 2)
For Aim 2, we leverage the registration and change detection tools developed in Aim 1 to implement an optimal
anatomical segmentation framework for AD detection. This will build on top of the PI’s recent work on 3D medical
image segmentation with semi-supervised learning62,63, which allows segmentation neural networks to be trained
using a mix of labeled and unlabeled images to significantly improve segmentation accuracy. Segmentations of
a subject’s brain images acquired across multiple time points exhibit regularity, or smoothness, not only in the
3D spatial domain but also along the longitudinal direction. We segment a subject’s longitudinal series of brain
images, all at the same time, with regularity in the longitudinal direction imposed to improve the accuracy of the
anatomical segmentation at each time point. The segmentations are then fed to a simple AD prediction network.
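As a concrete illustration of imposing regularity along the longitudinal direction, the sketch below shows one simple instantiation: a squared temporal-difference penalty on class probabilities combined with supervised cross-entropy on labeled synthetic series. It is illustrative only (the weight mu and the loss form are assumptions), and the next paragraph discusses why naive temporal smoothing must be applied with care.

    import torch
    import torch.nn.functional as F

    def temporal_consistency(logits):
        # logits: (T, C, D, H, W) segmentation scores for one subject's T time
        # points, assumed already registered into a common space. Penalizing first
        # differences of the class probabilities along time discourages label
        # flicker between visits.
        probs = logits.softmax(dim=1)
        return probs.diff(dim=0).pow(2).mean()

    def semi_supervised_loss(logits_lab, labels, logits_unlab, mu=0.1):
        # Supervised cross-entropy on synthetic (labeled) series plus the temporal
        # term on both labeled and unlabeled clinical series; mu is an assumed
        # illustrative weight, not a tuned value.
        ce = F.cross_entropy(logits_lab, labels)      # labels: (T, D, H, W) ints
        reg = temporal_consistency(logits_lab) + temporal_consistency(logits_unlab)
        return ce + mu * reg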
Regularization of segmentation networks typically produces over-smooth or blurred reconstructions, so
longitudinally consistent brain segmentation without introducing biases in the individual brain segmentations is
challenging. Bias-free segmentation is even more important for AD detection, since smoothing segmentations along the
longitudinal direction translates to a loss of important anatomical change information (atrophy, dilation, etc.). We
overcome this by training a 4D segmentation network on a mix of labeled synthetic data, produced by a disease
simulation engine (a longitudinal version of the neurophysics engine), and unlabeled clinically acquired data
in a semi-supervised setup. 4D neural networks have only very recently been proposed (within the last 3 years)
for robot navigation and a handful of medical image segmentation tasks64–66, with significant improvements over