Drexel University · Department of Computer Science
The SPARSE Coding Lab investigates neuro-inspired approaches to machine learning, drawing on principles from computational neuroscience to build more robust, efficient, and interpretable AI systems. Directed by Dr. Edward Kim, the lab focuses on sparse representations, neuromorphic computing, and biologically plausible learning algorithms.
Biologically inspired sparse representations, spiking neural networks, dictionary learning, and neuromorphic hardware implementations.
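The sparse representation and dictionary learning themes above can be illustrated with a toy sketch (our own example, not lab code): given a fixed overcomplete dictionary D, the Iterative Shrinkage-Thresholding Algorithm (ISTA) finds a sparse code a that reconstructs a signal x by minimizing 0.5·||x − Da||² + λ·||a||₁.

```python
import numpy as np

def ista(x, D, lam=0.1, n_iters=200):
    """Sparse coding via ISTA: min_a 0.5*||x - D a||^2 + lam*||a||_1."""
    # Step size from the Lipschitz constant of the quadratic term's gradient.
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ a - x)                  # gradient of 0.5*||x - D a||^2
        a = a - step * grad                        # gradient descent step
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft-threshold
    return a

# Toy setup: random unit-norm dictionary, signal built from 3 atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)
true_a = np.zeros(64)
true_a[[3, 17, 41]] = [1.0, -0.5, 2.0]
x = D @ true_a
a_hat = ista(x, D, lam=0.05)
```

The soft-thresholding step is what drives most coefficients to exactly zero, yielding the sparse, interpretable codes the lab's work builds on.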
Using sparse coding principles to defend against adversarial attacks; biological immunity models for robust perception.
NSF CAREER Award
Applying machine learning to medical imaging (ultrasound, pathology), clinical NLP with large language models, and EHR data extraction.
DARPA · Moberg Research
Deep learning for image understanding, information graphic accessibility, and multimodal representation learning.
Automated penetration testing, secure multiparty generative AI, conformal prediction for LLM verification.
NIST AI Safety Institute
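Conformal prediction, mentioned above in the context of LLM verification, can be sketched with a minimal split-conformal procedure (an illustrative example with hypothetical nonconformity scores, not the lab's actual method): calibrate a threshold on held-out scores, then accept new outputs whose scores fall below it, with a distribution-free coverage guarantee of about 1 − α.

```python
import numpy as np

def conformal_threshold(cal_scores, alpha=0.1):
    """Return the (1 - alpha)-adjusted empirical quantile of calibration scores."""
    n = len(cal_scores)
    # Finite-sample correction: ceil((n+1)(1-alpha))/n, capped at 1.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(np.asarray(cal_scores), level, method="higher")

# Hypothetical nonconformity scores (e.g., 1 - model confidence in an answer);
# lower means the answer conforms better to the calibration data.
rng = np.random.default_rng(1)
cal = rng.uniform(size=1000)
thr = conformal_threshold(cal, alpha=0.1)
test_scores = rng.uniform(size=1000)
coverage = np.mean(test_scores <= thr)  # empirically close to 1 - alpha
```

Because the guarantee only assumes the calibration and test scores are exchangeable, the same recipe applies whatever score function is used to judge an LLM's output.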