SPARSE Coding Lab

Drexel University · Department of Computer Science

The SPARSE Coding Lab investigates neuro-inspired approaches to machine learning, drawing on principles from computational neuroscience to build more robust, efficient, and interpretable AI systems. Directed by Dr. Edward Kim, the lab focuses on sparse representations, neuromorphic computing, and biologically plausible learning algorithms.

Research Areas

01 · Sparse Coding & Neuromorphic Computing

Biologically inspired sparse representations, spiking neural networks, dictionary learning, and neuromorphic hardware implementations.
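To make the core idea concrete: sparse coding represents an input as a combination of only a few dictionary atoms. A minimal, hypothetical sketch (not lab code) using the classic ISTA algorithm in NumPy — the dictionary `D`, penalty `lam`, step size, and iteration count are illustrative choices only:

```python
import numpy as np

def ista_sparse_code(x, D, lam=0.1, lr=0.01, steps=500):
    """Infer a sparse code `a` minimizing 0.5*||x - D a||^2 + lam*||a||_1 via ISTA."""
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        grad = D.T @ (D @ a - x)   # gradient of the reconstruction term
        a = a - lr * grad          # gradient descent step
        # soft-thresholding: the proximal operator of the L1 penalty
        a = np.sign(a) * np.maximum(np.abs(a) - lr * lam, 0.0)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 64))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms
x = 2.0 * D[:, 3] - 1.5 * D[:, 10]       # signal built from just two atoms
a = ista_sparse_code(x, D)
n_active = np.sum(np.abs(a) > 1e-3)      # only a few coefficients stay nonzero
```

Because the step size stays below 1/‖D‖², each iteration decreases the objective, and the L1 penalty drives most coefficients exactly to zero — the sparsity that makes such codes efficient and interpretable.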

02 · Adversarial Robustness & AI Safety

Using sparse coding principles to defend against adversarial attacks; biological immunity models for robust perception. (NSF CAREER Award)
03 · Medical AI & LLMs

Applying machine learning to medical imaging (ultrasound, pathology), clinical NLP with large language models, and EHR data extraction. (DARPA · Moberg Research)
04 · Computer Vision & Multimodal Learning

Deep learning for image understanding, information graphic accessibility, and multimodal representation learning.

05 · LLM Security & Trustworthy AI

Automated penetration testing, secure multiparty generative AI, and conformal prediction for LLM verification. (NIST AI Safety Institute)
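Split conformal prediction, one of the verification tools named above, can be sketched in a few lines: calibrate a threshold on held-out nonconformity scores so that, on exchangeable data, future scores fall below it with probability at least 1 − α. The uniform scores below are hypothetical placeholders for something like "1 − model confidence in an answer":

```python
import numpy as np

def conformal_threshold(cal_scores, alpha=0.1):
    """Split conformal prediction: finite-sample-corrected quantile of
    calibration nonconformity scores."""
    n = len(cal_scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n   # finite-sample correction
    return np.quantile(cal_scores, min(q, 1.0), method="higher")

rng = np.random.default_rng(1)
cal = rng.uniform(0, 1, size=500)            # held-out calibration scores
tau = conformal_threshold(cal, alpha=0.1)

# At test time, accept outputs whose score is at most tau; coverage of
# roughly 1 - alpha = 90% follows for exchangeable scores.
new = rng.uniform(0, 1, size=1000)
coverage = np.mean(new <= tau)
```

The appeal for LLM verification is that the guarantee is distribution-free: it needs no assumptions about the model, only exchangeability between calibration and test scores.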

Funding & Support

NSF · DARPA · DOE · Intel · Amazon · Bill & Melinda Gates Foundation · Moberg Research · NIH

Member, NIST AI Safety Institute Consortium

Press & Media