I am pursuing my Ph.D. in Computer Science at George Mason University (GMU) in Virginia. I earned my Bachelor's and Master's degrees in Computer Science from GMU's Volgenau School of Engineering. I joined GMU's Ph.D. program in Fall 2021 and am currently working in the
SAGE Lab
advised by Dr. Kevin Moran. My research focuses on using machine learning and computer vision techniques to build developer-facing tools that improve the software development workflow, bridging the gap between research innovation and practical use.
Certifications: Stanford Machine Learning
Interests: Photography, Golf, Basketball, Football (NFL), and Formula 1
My current project is SearchAccess, a tool that lets developers search for Android app screens starting from an initial mockup. The goal is to make mobile apps more accessible to all users: I apply computer vision techniques to study how the elements of an app screen are organized, combining PyTorch, OpenCV, and BERT to generate an image-based search query.
- Identified the dearth of developer-facing tools that leverage machine learning techniques to help build accessible Android applications.
- Designed and implemented MotorEase, an automated tool, written in Java and Python, that detects motor-impairment accessibility issues in mobile applications.
- Integrated state-of-the-art computer vision (PyTorch), pattern-matching, and static-analysis techniques to detect accessibility violations from application screenshots and XML data.
- Designed and implemented SearchAccess, a developer-facing search engine for accessible user interface screens, with a Node.js and Flask (Python) back end and a React front end.
- Designed search functionality using CLIP embeddings and Solr indexing, backed by a MongoDB database and an AWS S3 image store, to search over accessible Android UIs.
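At its core, the embedding-based search ranks indexed screens by vector similarity to the query. A minimal sketch of that ranking step in plain Python — the screen IDs and tiny 3-dimensional vectors below are illustrative stand-ins for real CLIP embeddings served through Solr:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_screens(query_embedding, screen_index):
    """Rank indexed UI screens by similarity to a query embedding.

    `screen_index` maps a screen ID to its precomputed embedding;
    in the real system those would come from CLIP and be served by
    Solr, here they are plain Python lists.
    """
    scored = [
        (screen_id, cosine_similarity(query_embedding, emb))
        for screen_id, emb in screen_index.items()
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Toy index: three "screens" with hypothetical 3-d embeddings.
index = {
    "login_screen": [0.9, 0.1, 0.0],
    "settings_screen": [0.1, 0.9, 0.2],
    "checkout_screen": [0.0, 0.2, 0.9],
}
results = rank_screens([1.0, 0.0, 0.1], index)  # best match first
```

A production index swaps the linear scan for Solr's dense-vector search, but the ranking criterion is the same.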
- Worked on a multidisciplinary team of researchers and surgeons to prototype a surgical voice assistant.
- Designed and implemented a wake-word detector for surgical voice assistants using TensorFlow, SageMaker, S3, and current voice-assistant research, after consulting with surgeons and hospitals about requirements.
- Used Python, Librosa, PyAudio, and PyTorch to parse and classify windowed audio for wake-word detection.
- Achieved 80% accuracy with the wake-word detection prototype on a streaming audio input, exceeding expectations; the detector now runs on operating room devices across the US.
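Conceptually, streaming detection splits the incoming audio into overlapping windows and forwards only sufficiently loud windows to the classifier. A dependency-free sketch of that windowing step — the frame sizes, threshold, and function names are illustrative; the real pipeline uses Librosa framing and a PyTorch classifier:

```python
def frame_signal(samples, frame_len, hop_len):
    """Split a 1-D audio signal into (possibly overlapping) windows."""
    frames = []
    start = 0
    while start + frame_len <= len(samples):
        frames.append(samples[start:start + frame_len])
        start += hop_len
    return frames

def detect_candidates(samples, frame_len, hop_len, energy_threshold):
    """Return indices of frames loud enough to hand to the classifier."""
    candidates = []
    for i, frame in enumerate(frame_signal(samples, frame_len, hop_len)):
        energy = sum(s * s for s in frame) / len(frame)  # mean-square energy
        if energy >= energy_threshold:
            candidates.append(i)
    return candidates

# Toy signal: silence followed by sound.
audio = [0.0] * 8 + [1.0] * 8
hits = detect_candidates(audio, frame_len=4, hop_len=4, energy_threshold=0.5)
```

Gating on frame energy keeps the (comparatively expensive) neural classifier off the hot path for silent stretches of the input stream.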
- Developed a project to ease communication between doctors and patients at hospitals by tracking calls, requirements, and patient-to-doctor communication.
- Built a series of REST APIs with a Node.js back end, React front end, and MongoDB database.
- Led weekly SCRUM meetings with offshore teams during development and integration into production.
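To make the shape of such an API concrete, here is a minimal read-only endpoint sketched with Python's standard library for portability — the production stack was Node.js, React, and MongoDB, and the `/calls` route, record fields, and in-memory data below are hypothetical:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory stand-in for a MongoDB collection of tracked calls
# (collection shape and fields are illustrative only).
CALLS = [{"id": 1, "patient": "A", "status": "open"}]

class CallHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/calls":
            body = json.dumps(CALLS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

def serve(port=0):
    """Bind the API on localhost (port 0 = ephemeral); caller runs it."""
    return HTTPServer(("127.0.0.1", port), CallHandler)
```

A client then issues an ordinary `GET /calls` and receives the JSON list; the real system layered authentication and write endpoints on the same pattern.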
Python 98%
Java 96%
Keras/TensorFlow 90%
PyTorch 85%
Hadoop 80%
AWS Sagemaker/EC2 92%
Apache Spark 85%
Photoshop 95%
Docker/Kubernetes 75%
C & C++ 85%
If you've made it this far, let's talk and get things rolling!