Showcase • Student-led projects

Projects From Our Incubator

Discover a growing portfolio of incubated work spanning AI in health, biotech, human–computer interaction, robotics, physics, and finance. Click a project to expand and explore details, metrics, and artifacts.

All of these projects are from our Fall 2025 cohort. Stay tuned to see what our Winter-Spring 2026 cohort builds!

16 Active Teams • 88% Reach Completion • 10+ Mentors • 6+ Regions

Led by Harsha Singla & Dhatri Medidhi|AI & Health|September 12, 2025

AI Prevention and Prediction of Type 2 Diabetes Research Paper

A comprehensive research paper examining how artificial intelligence can be applied to predict and prevent type 2 diabetes by analyzing diverse datasets including genetics, EHRs, and wearables.

Description

This paper surveys AI models and algorithms (from logistic regression to deep neural networks) for diabetes risk prediction, highlights real-world deployments by organizations like Google Health, IBM Watson, and Medtronic, and compares AI’s accuracy against traditional screening tools. It also addresses ethical concerns such as data bias, algorithmic discrimination, and patient privacy, while proposing multimodal approaches to improve predictive validity. The study concludes that AI offers transformative potential for early detection and prevention of type 2 diabetes, though challenges remain in generalizability, fairness, and transparency.
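
As a rough illustration of the simplest model family the paper surveys, the sketch below fits a logistic regression risk classifier to synthetic tabular data; the feature set (age, BMI, fasting glucose) and the data are illustrative assumptions, not the paper's dataset or final model.

    # Minimal sketch of a tabular type 2 diabetes risk classifier.
    # Feature names and synthetic data are illustrative assumptions,
    # not the paper's dataset or model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 1000
    X = np.column_stack([
        rng.normal(50, 12, n),    # age (years)
        rng.normal(28, 5, n),     # BMI
        rng.normal(100, 20, n),   # fasting glucose (mg/dL)
    ])
    # Synthetic labels loosely tied to glucose and BMI, for demonstration only.
    y = ((X[:, 2] > 110) & (X[:, 1] > 27)).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))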

Led by Ka Lam Tam|Biotech|May 3, 2025

Exploring Potential Solutions to Optimize Cancer Therapy with Cell Reprogramming using Gene Network Analysis

A computational pipeline that identifies hub genes in malignant networks as potential reprogramming targets, providing a proof-of-concept for cancer therapy optimization.

Description

This project explores cell reprogramming as a therapeutic strategy by analyzing gene expression datasets from colon cancer samples. Using GEO2R, STRING, and network analysis, the team identified differentially expressed genes and hub regulators—such as GTPBP4, RPF2, GRWD1, and RRS1—that converge on ribosome biogenesis. These findings suggest that targeting ribosome biogenesis could reduce metastatic potential and reprogram cancer cells toward more benign states. The work mirrors KAIST’s colon cell modeling, offering a scalable bioinformatics workflow that guides future wet-lab validation and personalized therapeutic approaches.
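
A minimal sketch of the hub-identification step, assuming an interaction edge list has already been exported (for example from STRING) for the differentially expressed genes; the edges below are placeholders rather than the project's actual network.

    # Rank candidate hub genes by degree centrality in a protein-protein
    # interaction network. Edge list entries are illustrative placeholders.
    import networkx as nx

    edges = [
        ("GTPBP4", "RPF2"), ("GTPBP4", "GRWD1"), ("GTPBP4", "RRS1"),
        ("RPF2", "RRS1"), ("GRWD1", "RRS1"), ("RPF2", "GRWD1"),
        ("GTPBP4", "GENE_X"), ("GENE_X", "GENE_Y"),  # hypothetical neighbors
    ]

    G = nx.Graph()
    G.add_edges_from(edges)

    # Hub genes = highest-degree-centrality nodes in the network.
    centrality = nx.degree_centrality(G)
    hubs = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:4]
    for gene, score in hubs:
        print(f"{gene}\t{score:.2f}")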

Led by Charis Tsang|Neurocardiology|September 2, 2025

The Stroke-Heart Syndrome Protocol

A standardized neurocardiology workflow that integrates risk scoring tools to prevent cardiac complications in acute stroke patients.

Description

Following an acute stroke, patients face significantly elevated risk of major cardiac events, yet systematic screening is rare. This project develops a 5-phase clinical workflow that transitions care from reactive to proactive. It introduces two novel scoring systems—SHS-Early Risk Score for initial stratification and SHS-Severity Score for escalation of care—supported by a digital calculator for rapid implementation. Deliverables include an implementation guide, flowcharts, and a prototype tool, with the goal of establishing a scalable standard of care that reduces preventable mortality and optimizes cardiology resources.
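
The actual SHS scoring criteria live in the project's implementation guide; purely to illustrate how a digital calculator for a weighted early-risk score might be structured, the sketch below uses hypothetical inputs, thresholds, and point values.

    # Hypothetical sketch of a weighted stroke-heart early-risk calculator.
    # Inputs, thresholds, and point values are illustrative assumptions,
    # not the SHS-Early Risk Score's actual criteria.
    def shs_early_risk_score(age, troponin_elevated, prior_cardiac_disease,
                             nihss, atrial_fibrillation):
        score = 0
        score += 2 if age >= 75 else (1 if age >= 65 else 0)
        score += 3 if troponin_elevated else 0
        score += 2 if prior_cardiac_disease else 0
        score += 2 if nihss >= 15 else (1 if nihss >= 5 else 0)
        score += 2 if atrial_fibrillation else 0
        tier = "high" if score >= 7 else "moderate" if score >= 4 else "low"
        return score, tier

    print(shs_early_risk_score(78, True, False, 12, True))  # -> (8, 'high')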

Led by Ishita Varia|AI in Healthcare|September 21, 2025

Evaluating the Diagnostic Accuracy of Large Language Models for Common Diseases

An experimental study testing ChatGPT, Google Gemini, and Anthropic Claude on diagnosing COVID-19 and gallstones across multiple patient scenarios.

Description

This paper investigates the reliability of large language models (LLMs) in medical diagnosis by comparing their accuracy in identifying common diseases. Three models—ChatGPT, Google Gemini, and Anthropic Claude—were tested across progressively detailed patient scenarios, from symptoms alone to symptoms combined with demographics and vital signs. Results showed high accuracy for COVID-19, with Claude consistently performing best (85–90%) and Gemini excelling in symptom-only inputs. However, all models struggled with rarer conditions such as gallstones, where confidence levels were inconsistent and significantly lower (ChatGPT 48–57%, Claude ~70%). The study highlights both the promise and risks of LLMs in clinical contexts, emphasizing issues like hallucinations, fairness gaps, and variability across patient subgroups. The authors conclude that while AI can augment diagnostic workflows, it must be paired with human oversight and rigorous fairness evaluations before integration into healthcare systems.
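
A minimal sketch of the evaluation loop implied by this design, in which each model is queried on scenarios of increasing detail and accuracy is tallied per tier; ask_model is a stand-in stub for a real API client, and the scenarios and canned answers are placeholders, not the study's prompts or results.

    # Sketch of an accuracy tally across scenario tiers for several models.
    # ask_model() is a stand-in stub for a real API call; scenarios and
    # canned answers are illustrative, not the study's actual data.
    SCENARIOS = [
        {"tier": "symptoms_only",
         "prompt": "Fever, dry cough, loss of smell.", "truth": "COVID-19"},
        {"tier": "symptoms_plus_demographics_vitals",
         "prompt": "45-year-old with RUQ pain after fatty meals, afebrile, BMI 31.",
         "truth": "gallstones"},
    ]

    def ask_model(model_name, prompt):
        # Stand-in for an API call to ChatGPT, Gemini, or Claude.
        return "Most likely COVID-19" if "cough" in prompt else "Possible gallstones"

    def evaluate(models):
        results = {}
        for model in models:
            per_tier = {}
            for case in SCENARIOS:
                answer = ask_model(model, case["prompt"]).lower()
                hit = case["truth"].lower() in answer
                n_correct, n_total = per_tier.get(case["tier"], (0, 0))
                per_tier[case["tier"]] = (n_correct + hit, n_total + 1)
            results[model] = {t: c / n for t, (c, n) in per_tier.items()}
        return results

    print(evaluate(["chatgpt", "gemini", "claude"]))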

Led by Sarah Kim|AI & Health|August 2025

Cognitive Violence: The Neurological and Elevated Cancer Consequences of Environmental Racism in Communities of Color and Its Implications in Artificial Intelligence

An interdisciplinary paper examining how environmental racism contributes to neurological decline and cancer risk in communities of color, and the dual role of AI in exposing or obscuring these inequities.

Description

This project analyzes the compounded harms of environmental racism on marginalized communities, focusing on both neurological and oncological outcomes. Exposure to pollutants such as lead, mercury, benzene, formaldehyde, and PM2.5 disproportionately affects communities of color, causing elevated risks of cognitive impairment, ADHD, Alzheimer’s, Parkinson’s, and multiple cancers. The paper situates these harms within discriminatory zoning practices, industrial siting, and limited healthcare access, while also addressing the shortcomings of environmental monitoring systems that obscure race-specific data. It highlights AI’s promise in detecting disparities through predictive modeling and medical imaging, but warns of bias, fairness gaps, and underrepresentation in training datasets that risk worsening inequities. The study concludes that stronger environmental regulation, racially disaggregated research, and equitable AI deployment are essential to address this public health crisis.

Led by Rohith Deshamshetti|Human–Computer Interaction|August 20, 2025

AI-Powered Browser-Based Gesture Control: A Touchless Interface for Accessibility and Public Use

A browser-based gesture recognition system that enables touchless, low-cost, and accessible human–computer interaction using only a webcam.

Description

This project introduces a touchless interface for accessibility and public use, motivated by both inclusivity for differently-abled individuals and hygiene in shared environments. Using OpenCV, MediaPipe, and PyAutoGUI, the system interprets gestures for cursor movement, clicking, and scrolling with high accuracy. Unlike Kinect or Leap Motion, it requires no specialized hardware, relying solely on standard webcams. Tested across multiple browsers, the prototype achieved over 95% accuracy for core gestures with sub-100ms latency. Applications extend to healthcare, education, and public kiosks, with future directions including VR/AR integration and expanded gesture vocabularies.
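
A stripped-down sketch of the cursor-movement portion of such a pipeline, assuming the MediaPipe Hands solution and a standard webcam; smoothing, click and scroll gestures, and calibration are omitted, so this is not the project's tuned implementation.

    # Minimal index-finger cursor control with OpenCV + MediaPipe + PyAutoGUI.
    # Smoothing, click/scroll gestures, and calibration are omitted.
    import cv2
    import mediapipe as mp
    import pyautogui

    mp_hands = mp.solutions.hands
    screen_w, screen_h = pyautogui.size()

    cap = cv2.VideoCapture(0)
    with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frame = cv2.flip(frame, 1)                   # mirror for natural control
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            result = hands.process(rgb)
            if result.multi_hand_landmarks:
                tip = result.multi_hand_landmarks[0].landmark[
                    mp_hands.HandLandmark.INDEX_FINGER_TIP]
                # Map normalized landmark coordinates to screen pixels.
                pyautogui.moveTo(tip.x * screen_w, tip.y * screen_h)
            cv2.imshow("gesture-control", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()
    cv2.destroyAllWindows()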

Led by Sashti Kandaswamy|AI & Health|September 14, 2025

NeuroRisk Early Detection and Monitoring System

A smartphone-based system that generates a continuous Neuro Score by analyzing speech, motor, and cognitive signals to detect early signs of neurodegenerative disease.

Description

This project proposes an AI-driven system for proactive monitoring of neurodegenerative diseases such as Parkinson’s, Alzheimer’s, and ALS. Using smartphones, the system captures subtle deviations in speech clarity, tapping rhythms, walking pace, and memory recall, consolidating them into a Neuro Score that reflects neurological stability. Unlike diagnostic imaging or invasive tests, this continuous monitoring provides patients and physicians with early warnings and trend data. The system includes sample patient narratives demonstrating detection of early Parkinson’s, Alzheimer’s, and ALS progression. Ethical safeguards include HIPAA/GDPR compliance, randomized identifiers, and encrypted data storage. The NeuroRisk framework reimagines brain health tracking as a proactive, accessible tool for prevention and early intervention.
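
Purely as an illustration of how several normalized signals might be consolidated into a single trend score, the sketch below uses hypothetical channel weights and baselines; the real Neuro Score computation is defined by the project and not reproduced here.

    # Hypothetical Neuro Score: weighted average of per-channel deviations
    # from a personal baseline, scaled to 0-100 (100 = stable).
    # Weights, baselines, and readings are illustrative assumptions.
    BASELINE = {"speech_clarity": 0.92, "tap_interval_s": 0.18,
                "gait_speed_mps": 1.30, "recall_accuracy": 0.88}
    WEIGHTS = {"speech_clarity": 0.30, "tap_interval_s": 0.20,
               "gait_speed_mps": 0.25, "recall_accuracy": 0.25}

    def neuro_score(readings):
        score = 0.0
        for channel, baseline in BASELINE.items():
            # Relative deviation from baseline, capped at 100%.
            deviation = min(abs(readings[channel] - baseline) / baseline, 1.0)
            score += WEIGHTS[channel] * (1.0 - deviation)
        return round(100 * score, 1)

    today = {"speech_clarity": 0.88, "tap_interval_s": 0.22,
             "gait_speed_mps": 1.15, "recall_accuracy": 0.84}
    print(neuro_score(today))  # below 100 indicates drift from baseline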

Led by Zelvin Elsafan Harefa|Physics & AI|2025

Predicting Superconductors’ Critical Temperature Using Machine Learning

An interdisciplinary framework combining physics, chemistry, and deep learning to accelerate the discovery of superconductors by predicting their critical temperatures with high accuracy.

Description

This study presents a machine learning framework trained on a dataset of over 21,000 superconducting materials to predict their critical temperature (Tc). Using chemically informed features such as electronegativity, valence electron count, and atomic mass variance, the deep neural network achieved state-of-the-art performance (MAE ≈ 4.88 K, R² ≈ 0.918). The model generalizes across cuprate and non-cuprate families, with accuracy sufficient to guide experimental synthesis and reduce reliance on costly trial-and-error methods. Beyond predictive performance, the analysis interprets key features in light of BCS and Eliashberg theory, demonstrating how AI can reveal structure–property relationships in materials science. The work underscores the potential of responsible AI in materials discovery, offering scalable tools for quantum computing, sustainable energy, and lossless power transmission.
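
A scaled-down sketch in the same spirit, assuming a table of composition-derived descriptors; synthetic data and scikit-learn's MLPRegressor stand in for the study's 21,000-material dataset and deep network.

    # Sketch: neural-network regression of critical temperature from
    # composition-derived features. Data here are synthetic placeholders,
    # not the study's dataset.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.metrics import mean_absolute_error, r2_score

    rng = np.random.default_rng(42)
    n = 2000
    X = rng.normal(size=(n, 3))  # e.g. mean electronegativity, valence count, mass variance
    tc = np.clip(20 + 8 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 2, n), 0, None)  # synthetic Tc (K)

    X_train, X_test, y_train, y_test = train_test_split(X, tc, random_state=42)
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                                       random_state=42))
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print("MAE:", mean_absolute_error(y_test, pred), "R2:", r2_score(y_test, pred))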

Led by Yuvraj Singh|AI & Robotics|September 17, 2025

Gesture-Guided AI Task Planning for Virtual Surgical Robots in AMBF

An integrated framework that translates real-time hand gestures into surgical intents, enabling safe, AI-assisted task planning in virtual surgical robotics using AMBF.

Description

This project introduces a gesture-driven AI pipeline for surgical task planning within the Asynchronous Multi-Body Framework (AMBF). Using MediaPipe and OpenCV, human hand gestures are recognized and converted into structured intents, which are parsed into JSON commands for a ChatGPT-powered planner. The planner enriches commands with task-level reasoning and sequences deterministic surgical skills—implemented as ROS nodes—such as vessel clamping and tissue retraction. A safety supervisor enforces physical and procedural constraints, ensuring safe execution in a da Vinci–style simulation. The prototype achieved >95% gesture classification accuracy, sub-200ms planning latency, and consistent interception of unsafe motions, demonstrating the feasibility of naturalistic human–robot collaboration in surgical robotics.
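
A minimal sketch of the intent and safety-check layer implied by this pipeline: a recognized gesture becomes a structured JSON command, and a supervisor vetoes commands outside declared workspace or skill limits. The field names, limits, and skill list are assumptions for illustration, not the project's schema.

    # Sketch: gesture label -> structured intent -> safety-checked command.
    # Field names, workspace limits, and skills are illustrative assumptions.
    import json

    ALLOWED_SKILLS = {"clamp_vessel", "retract_tissue", "release"}
    WORKSPACE_LIMITS_M = {"x": (-0.05, 0.05), "y": (-0.05, 0.05), "z": (0.0, 0.10)}

    def gesture_to_intent(gesture_label, target_xyz):
        return {"skill": gesture_label, "target": dict(zip("xyz", target_xyz)),
                "max_force_n": 2.0}

    def safety_supervisor(intent):
        if intent["skill"] not in ALLOWED_SKILLS:
            return False, "unknown skill"
        for axis, (lo, hi) in WORKSPACE_LIMITS_M.items():
            if not lo <= intent["target"][axis] <= hi:
                return False, f"target outside workspace on {axis}"
        return True, "ok"

    intent = gesture_to_intent("clamp_vessel", (0.01, -0.02, 0.04))
    approved, reason = safety_supervisor(intent)
    print(json.dumps(intent), approved, reason)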

Led by Trisha Eunice T. Maghari|Biotech|August 3, 2025

Recent Advancements in CRISPR-Cas

A review of CRISPR-Cas9’s applications, risks, and future potential in medicine, agriculture, biotechnology, and environmental science.

Description

This paper surveys the transformative role of CRISPR-Cas9 as the leading genome-editing system due to its efficiency, low cost, and precision. Applications span human health (gene therapies, drug development, diagnostics like SHERLOCK and DETECTR), agriculture (improved crop resistance, livestock enhancement), environmental conservation (resilience in endangered species, invasive species control), and biotechnology (synthetic biology, functional genomics, nanotechnology, and bioinformatics). While promising, the technology faces significant safety concerns, including off-target effects, genomic instability, and unintended large-scale mutations. Future development requires strategies to mitigate these risks and responsibly translate CRISPR tools into clinical and ecological use.

Led by Jemima Fong|Finance & Ethics|September 14, 2025

Proposing Methodologies for Hybrid AI–Human Models in Finance

A framework for integrating AI into financial services while preserving human oversight, protecting jobs, and ensuring compliance and transparency.

Description

This project examines the risks of unchecked AI adoption in finance and proposes a hybrid workflow that balances machine efficiency with human judgment. AI handles high-volume tasks like anomaly detection, transaction classification, and preliminary forecasts, while accountants review outputs, apply ethical reasoning, and engage with clients. The model embeds governance policies requiring human verification of AI-generated outputs and audit logs for accountability. Parallel reskilling programs train accountants as AI supervisors, ensuring professional dignity and fairness. Supported by real-world cases and ethical theory, the study argues that hybrid AI–human systems represent not just a technical adjustment but a moral commitment to transparency and equitable employment in the financial sector.
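
To make the proposed governance loop concrete, here is a small sketch (with made-up thresholds and field names) of how an AI-flagged transaction could be routed to a human reviewer and every decision written to an audit log.

    # Sketch of the hybrid review loop: the model flags, a human decides,
    # and every decision is written to an audit log. Thresholds and fields
    # are illustrative assumptions, not the paper's specification.
    import json, time

    AUDIT_LOG = "audit_log.jsonl"
    ANOMALY_THRESHOLD = 0.8  # hypothetical model-score cutoff

    def route_transaction(txn, anomaly_score, reviewer):
        needs_review = anomaly_score >= ANOMALY_THRESHOLD
        decision = reviewer(txn) if needs_review else "auto-cleared"
        entry = {"time": time.time(), "txn_id": txn["id"],
                 "anomaly_score": anomaly_score, "needs_review": needs_review,
                 "decision": decision, "decided_by": "human" if needs_review else "ai"}
        with open(AUDIT_LOG, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return decision

    # Example: a callback standing in for the reviewing accountant.
    print(route_transaction({"id": "T-1001", "amount": 9800.0}, 0.93,
                            reviewer=lambda txn: "escalate-to-client"))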

Led by Agastya Munnangi|Biotech|2025

A Framework for Auditing the Consequences of CRISPR

An audit tool designed to evaluate the ethical, clinical, societal, and environmental consequences of CRISPR using both algorithmic scoring and human comparison.

Description

This project introduces an auditing framework to measure the overlooked risks of CRISPR technology across four dimensions: ethical, clinical, societal, and environmental. The tool assigns a 1–5 severity rating based on case studies and past applications, then compares results with assessments from 20 unbiased human judges. In testing, the framework produced results that aligned closely with human evaluations in most categories, such as ethical disruption and environmental impact, while diverging in some societal predictions. Case studies included the world’s first personalized CRISPR treatment at the Children’s Hospital of Philadelphia, where the tool rated risk at 4/5 compared to human consensus of 3/5. While not yet ready for deployment in real-world policy or clinical decisions, the framework provides a pioneering proof of concept for embedding systematic auditing into the future of biotechnology.
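
As a rough illustration of the comparison step, the sketch below computes per-dimension mean judge ratings and their gap from the tool's 1-5 scores; the numbers are placeholders, not the study's data.

    # Sketch: compare the audit tool's 1-5 severity ratings with the mean
    # rating from human judges, per dimension. Values are placeholders.
    from statistics import mean

    tool_scores = {"ethical": 4, "clinical": 3, "societal": 2, "environmental": 4}
    human_scores = {                     # one entry per judge (placeholder data)
        "ethical": [4, 4, 3, 4], "clinical": [3, 3, 4, 3],
        "societal": [3, 4, 3, 3], "environmental": [4, 4, 4, 3],
    }

    for dimension, tool in tool_scores.items():
        human_mean = mean(human_scores[dimension])
        print(f"{dimension:13s} tool={tool}  human_mean={human_mean:.2f}  "
              f"gap={tool - human_mean:+.2f}")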