
Daniel Van Der Maden

Student | Software Engineer

My Resume

About Me

Hi there! I am an undergraduate at the University of California, Berkeley pursuing a B.A. in Computer Science and I am finishing up my degree this fall.

I have always enjoyed the process of building and showing my creations. This is why software development/engineering appealed so much to me throughout my time in higher education — and has ultimately pushed me to learn a vast skill set in the world of Computer Science. I have also gained a particular academic interest in Natural Language Processing, Human-Computer Interaction, and Artificial Intelligence throughout my time at UC Berkeley.

I also enjoy helping and teaching other people. At UC Berkeley, I tutored C, C++, Java, and Python in the EECS department's self-paced center. I also joined a student organization called Computer Science Mentors where I mentored a group of 5 students through their lower division EECS courses.

I have also had the opportunity to work professionally with a diverse team of engineers during my summer software engineering internship at Microsemi, where I helped develop a modernized eLoran (radio navigation and data) system.

Currently, in my spare time, I am working on a chatbot program that makes it easy for anyone to learn how different parameters of a neural network model affect the fidelity of a chatbot.

Education

University Of California, Berkeley
B.A. Computer Science - GPA: 3.68/4.00 - Graduating December 2019
Relevant Coursework:
Computer Security
Intro to Database Systems
User Interface Design and Development
Efficient Algorithms & Intractable Problems
Computer Architecture & Machine Structures
Structure and Interpretation of Computer Programs
Data Structures
Natural Language Processing
Intro to Artificial Intelligence
Intro to Linguistics
Intro to Cognitive Science
Designing Information Devices and Systems 1 & 2
Linear Algebra & Differential Equations
Discrete Math & Probability Theory

Last updated: 6/19

Irvine Valley College
GPA: 3.92/4.00 - Transferred May 2017

Experience

Software Engineering Intern
Microsemi Corporation | Frequency and Timing Division

I was part of a team that worked on a modernized eLoran (radio navigation and data) system that could function as a self-contained backup to a GNSS (e.g., GPS). I was tasked with implementing multiple proprietary eLoran signal schemes and creating an API to interface them with a Microsemi eLoran transmission timer. I was also tasked with creating development tools that simulate the timing unit's behavior, collect data, and display vital timing statuses. I ended my internship by presenting my work to the rest of the eLoran team at the frequency and timing division headquarters in Boulder, Colorado.

May 2018 — August 2018


Tutor
UC Berkeley | EECS Self-Paced Center

The EECS department at UC Berkeley provides self-paced courses where students can learn C, C++, Java, Python, MATLAB, and UNIX. I was one of the tutors for these courses. We proctored and graded all of the in-person quizzes, conducted project checkoffs, and graded final exams. We also helped students debug their projects and answered questions during our office hours.

January 2019 — May 2019


Academic Intern
UC Berkeley | Structure and Interpretation of Computer Programs

I was an intern for the Structure and Interpretation of Computer Programs (CS 61A) course where I assisted course staff by providing students with guidance and help on homeworks and projects during office hours. I was also tasked with answering questions during lab and conducting lab checkoffs.

August 2018 — December 2018


Mentor
UC Berkeley | Computer Science Mentors

I was part of the Computer Science Mentors student organization at UC Berkeley. The organization’s goal is to provide guidance and supplemental discussion sections for students taking lower division computer sciences courses. My focus was to help students taking Designing Information Devices and Systems 1 by holding a supplemental 1.5 hour discussion section each week on the topics that were covered. I also helped run midterm review sessions, and I helped develop supplemental course material. Lastly, I mentored a cohort of five students throughout the semester.

January 2019 — May 2019



Technology Advisor
J-Sei Community Center

I was part of a student group at UC Berkeley that volunteered our weekends to help advise the elderly at J-Sei on a variety of tech-related topics. I primarily focused on tech safety topics with a heavy emphasis on internet privacy, scam detection, and malware protection.

September 2017 — November 2017


Calculus Tutor
Private Tutor

I started tutoring single-variable calculus during my time at Irvine Valley College to reinforce my teaching skills. Eventually, people offered to pay for my time. From the spring of 2016 to the fall of 2017, I had one to three students (through recommendations) whom I tutored on a regular schedule. As a tutor, I had to become familiar with different lesson plans and provide supplemental problem sets & solutions.

January 2016 — June 2017


Graphic Designer
Cauldron Ice Cream

I worked with the store owners to design their launch website and marketing assets along with a dynamic menu that could be easily modified.

September 2015 — December 2015

Portfolio

Below you will find an interactive list of notable projects that I've done throughout my time at UC Berkeley and beyond!

You can click on each project to see a short description of what it is, some detail on its key features, and an overview of what I've learned from it. If available, a link to each project's repository can be found in its description. Lastly, if you would like to know more about any of the projects listed below, feel free to contact me!

I will be updating the list once I have some more cool things to share.

This was a personal project that I did in my free time.

Description: This project implements a chatbot using a sequence-to-sequence (seq2seq) model and, more importantly, makes the model's parameters easy to configure. This allows it to be used as a learning tool that demonstrates how different data sets and model parameters affect a chatbot's fidelity.

Key Features: It has configurable parameters for the model and for data processing. It uses a vocab cache and an encoded-data cache for quicker data reprocessing. It can save and load models, and it can recover and resume model training if the script is interrupted. It trains the model in a memory-friendly way, so it can handle relatively large data sets on standard hardware. It can use Named Entity Recognition to improve the perplexity of the model's responses.

Skills Acquired: I learned how to use Keras to implement a sequence-to-sequence model, and I learned how to filter a generic data set for useful training data using NLTK and spaCy. I learned how to efficiently deal with large data sets and how to implement a caching system that greatly reduces re-computation. Lastly, I learned how to use AWS EC2 GPU instances to train large models.
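As an illustration of the caching idea above, a pure-Python sketch of a vocab/encoded-data cache keyed on the raw data might look like this (the cache layout, file names, and whitespace tokenization are hypothetical simplifications, not the project's actual code):

```python
import hashlib
import pickle
from pathlib import Path

CACHE_DIR = Path("cache")  # hypothetical cache location

def _cache_key(lines):
    # Key the cache on the raw data so stale entries are never reused.
    h = hashlib.sha256()
    for line in lines:
        h.update(line.encode("utf-8"))
    return h.hexdigest()[:16]

def build_vocab(lines):
    # Word -> id, with 0 reserved for padding and 1 for unknown tokens.
    vocab = {"<pad>": 0, "<unk>": 1}
    for line in lines:
        for tok in line.split():
            vocab.setdefault(tok, len(vocab))
    return vocab

def encode(lines, vocab):
    # Map every token to its id (1 = <unk> for out-of-vocab tokens).
    return [[vocab.get(tok, 1) for tok in line.split()] for line in lines]

def load_or_encode(lines):
    # Return (vocab, encoded data), reusing a cached copy when available.
    CACHE_DIR.mkdir(exist_ok=True)
    path = CACHE_DIR / (_cache_key(lines) + ".pkl")
    if path.exists():
        with path.open("rb") as f:
            return pickle.load(f)  # cache hit: skip re-tokenizing
    vocab = build_vocab(lines)
    result = (vocab, encode(lines, vocab))
    with path.open("wb") as f:
        pickle.dump(result, f)
    return result
```

On a second run with the same data, the expensive tokenize-and-encode pass is skipped entirely, which is the point of the encoded-data cache.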

An in-depth overview and project files can be found on my GitHub page.

This was the project that I worked on during my 2018 summer internship at Microsemi.
For reference, eLoran is a long-range radio navigation and data system that functions as a high-power PNT (positioning, navigation, and timing) service.

Description: I was part of a team that worked on a modernized eLoran scheme. The goal for this new scheme was to substantially increase data rates and signal reliability while minimizing hardware upgrades of existing Loran transmission stations. My contributions to the project were as follows: I learned and documented the limits of the project's Transmission Timer Unit (TTU). I implemented the new eLoran scheme in Python and created an API in C to interface it with the TTU. Lastly, I created development tools and data collection tools for the TTU using Python and C.

Key Features: The project now has a "base" TTU logic script that has a flexible API and efficiently interfaces with its hardware. All TTU scripts now have easily modifiable parameters for on-the-fly scheme changes. The project now has a development tool that can simulate a perfect TTU's behavior given a TTU logic script. Lastly, the project now has a data collection tool that collects pulse timings, command timings, buffer sizes, and various statuses and saves them as a Pandas data frame for analysis.

Skills Acquired: I learned how to effectively communicate with multiple teams in order to get my tasks done efficiently. I learned how to write programs that adhere to strict performance constraints. I learned how to use sockets in C and Python so that I could interface programs over a network. I learned how to use CUnit to test my C code. I learned how to store, process and analyze data using Pandas and Scipy. Lastly, I learned how to use a Mercurial VCS.

The project files and scheme details are under an NDA; however, I would be happy to expand on the topic (as much as I can) if asked, so please contact me if you wish to know more.

This was the group project for the Efficient Algorithms and Intractable Problems course at UC Berkeley. Our approximate solver yielded solutions that were in the top 10% of all approximations in the course (the ranking metric is explained in the repo).

Description: The script approximates a solution to the following problem: Given a group of children, friendship relations, n buses and a set of trouble groups, find an assignment of students to the buses that maximizes the number of friendship relations that are on the same bus. The difficulty comes from this added condition: Groups of children that form a trouble group and are all assigned to the same bus do not have their friendships counted.

Key Features: It uses a greedy algorithm with various heuristics and tie-breakers (each suited to a certain input/solution structure) to compute a good initial solution in polynomial time. It uses a local search and a simple Monte Carlo tree search to improve solutions. It has an easy way to visualize any given input or partial solution as a graph using NetworkX.
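To make the greedy approach concrete, here is a rough Python sketch of the placement idea. The scoring, tie-breaking, and function names below are illustrative stand-ins for the project's actual heuristics, which were tuned per input structure:

```python
# Sketch of the greedy bus assignment; heuristics heavily simplified.

def score(assignment, friendships, trouble_groups):
    # Count friendships whose endpoints share a bus, skipping any bus
    # that fully contains a trouble group (those friendships are voided).
    total = 0
    for bus in assignment:
        members = set(bus)
        if any(set(group) <= members for group in trouble_groups):
            continue
        total += sum(1 for a, b in friendships
                     if a in members and b in members)
    return total

def greedy_assign(students, friendships, n_buses, capacity, trouble_groups):
    # Assumes n_buses * capacity >= len(students).
    buses = [[] for _ in range(n_buses)]
    # Place high-degree students first so their friends can follow them.
    degree = {s: 0 for s in students}
    for a, b in friendships:
        degree[a] += 1
        degree[b] += 1
    for s in sorted(students, key=degree.get, reverse=True):
        best, best_gain = None, -1
        for bus in buses:
            if len(bus) >= capacity:
                continue
            members = set(bus) | {s}
            if any(set(group) <= members for group in trouble_groups):
                gain = 0  # placing s here would void this bus's friendships
            else:
                gain = sum(1 for a, b in friendships
                           if s in (a, b) and {a, b} <= members)
            if gain > best_gain:
                best, best_gain = bus, gain
        best.append(s)
    return buses
```

The real solver then feeds an initial solution like this into local search and a Monte Carlo tree search to climb out of the greedy assignment's local optima.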

Skills Acquired: I learned how to design and implement an effective, efficient greedy algorithm. I learned how to visualize problems as graphs using NetworkX. And most importantly, I learned to coordinate and utilize the expertise of my teammates, as we all had different backgrounds (EE, Math, and CS).

An in-depth overview and project files can be found on my GitHub page.

This was the final project for the Machine Structures course at UC Berkeley.

Description: This is a file server that features a cache which can efficiently handle thousands of concurrent file requests.

Key Features: It has a cache that is fully associative (with random replacement) and has a variable size (which is defined on start-up). It sanitizes the request's file path to mitigate a directory traversal attack. It has an API to handle cache clear requests and cache status requests. It has a toggleable debugging mode that logs file requests, file returns and cache contents. The project also has a large test suite to ensure that the concurrent executions are correct. Lastly, it has a testing framework that deliberately slows down hard-disk file reads to visualize the cache's effectiveness.

Skills Acquired: I learned how to program in Golang. I learned how to write and debug a concurrently executed program. I learned how to handle HTTP requests in Golang. I learned how to mitigate a directory traversal attack. I learned how to thoroughly test my program using Golang's testing framework.

An in-depth overview and project files can be found on my GitHub page.

This was a project done for the combined graduate/undergraduate Natural Language Processing course at UC Berkeley.

Description: This project annotates the antecedent for each pronoun in a given data set using a Logistic Regression model. It achieves a ~70% accuracy on a test data set.

Key Features: It has an easy way to redefine and/or add features for the logistic regression model. It efficiently uses memory when training the model. Lastly, it does some filtering (based on mentions) on the training data set to improve the accuracy of the model.
Note: Please reference the project write-up (found in the project repository) for more details about the model's features and data filtering.

Skills Acquired: I learned how to process large data sets using Pandas. I learned how to work with NLTK to filter training data. I learned how to work with Scikit-learn's Logistic Regression model. Lastly, I learned how to efficiently encode data for the model using Numpy/Scipy.
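A minimal sketch of the approach, assuming a pairwise (pronoun, candidate-antecedent) formulation with Scikit-learn's Logistic Regression. The three features below are illustrative stand-ins, not the model's actual feature set (see the project write-up for those):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row holds features of one (pronoun, candidate-antecedent) pair:
# [sentence distance, number agreement, gender agreement] -- hypothetical.
X_train = np.array([
    [0, 1, 1],  # close and agreeing -> likely the antecedent
    [3, 0, 1],
    [1, 1, 0],
    [5, 0, 0],  # far and disagreeing -> unlikely
])
y_train = np.array([1, 0, 1, 0])  # 1 = correct antecedent

clf = LogisticRegression().fit(X_train, y_train)

# At prediction time, score every candidate for a pronoun and keep the
# one with the highest probability of being the antecedent.
candidates = np.array([[0, 1, 1], [4, 0, 0]])
probs = clf.predict_proba(candidates)[:, 1]
best = int(np.argmax(probs))
```

Framing antecedent selection as an argmax over per-candidate probabilities is what lets a binary classifier resolve a multi-candidate decision.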

An in-depth overview and project files can be found on my GitHub page.

This was a personal/professional project that I did in my free time.

Description: This is my personal/resume website, which I created from scratch while learning HTML, CSS, and some JavaScript. It uses Bootstrap and jQuery to handle most of the styling and interactive components, but I also wrote a lot of additional CSS to customize the website to my taste.

Key Features: It is ultra-wide and mobile friendly. It has some custom JavaScript code for custom button and scroll functionalities. It uses Font Awesome to keep all the icons up-to-date. It presents information in a clean, non-cluttered fashion. It has a landing page that has all the essential information for a recruiter. It has an easy way of adding (or taking out) content in each major section.

Skills Acquired: I learned how to create and style websites using HTML5 and CSS3. I learned how to use the Bootstrap 4 framework. I learned how to add custom functionality to websites using JavaScript. Lastly, I learned how to debug websites using Google Chrome's web page inspect tool.

The project files can be found on my GitHub page.

This was an optimization project for the Machine Structures course at UC Berkeley.

Description: This project optimizes a batched forward pass of a given Convolutional Neural Network (CNN) image classifier (implemented in C). Note that a batch contains 1000+ pictures, and the program would run on a machine that has an Intel Haswell CPU. My optimized classifier is about 20 times faster than stock (for reference, a 16x gain was full points).

Key Features: It uses Intel AVX intrinsics to execute four additions and four multiplications simultaneously (SIMD). It unrolls each layer's "dot product" for loop to reduce overhead. It "blocks" out the matrix multiplication to keep the working set hot in the CPU's cache. It does not leak memory. Lastly, it splits the batch into mini-batches (according to the number of threads the machine has) and runs the forward passes for each mini-batch in parallel using OpenMP.

Skills Acquired: I learned how to use Gprof to profile C programs. I learned how to use Intel AVX to efficiently execute a single instruction on multiple matrix entries. Lastly, I learned how to use OpenMP to parallelize the forward passes in a batch.
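The cache-blocking transformation mentioned above can be sketched in pure Python to show the loop structure. The real implementation is C with AVX intrinsics, where the transformation actually pays off; here it only illustrates the tiling, and BLOCK is a hypothetical tile size:

```python
BLOCK = 2  # hypothetical tile size; tuned to the cache in the real code

def blocked_matmul(A, B):
    # C = A @ B computed tile by tile so each tile of A, B, and C stays
    # cache-resident while it is being worked on.
    n, k = len(A), len(A[0])
    m = len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for ii in range(0, n, BLOCK):
        for jj in range(0, m, BLOCK):
            for kk in range(0, k, BLOCK):
                # Accumulate one BLOCK x BLOCK tile of C at a time.
                for i in range(ii, min(ii + BLOCK, n)):
                    for j in range(jj, min(jj + BLOCK, m)):
                        acc = C[i][j]
                        for p in range(kk, min(kk + BLOCK, k)):
                            acc += A[i][p] * B[p][j]
                        C[i][j] = acc
    return C
```

In the C version, the innermost loop body is where the AVX intrinsics and unrolling go, and OpenMP parallelizes across the batch one level further out.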

Since this project will be assigned in future offerings of Machine Structures (CS 61C) at UC Berkeley, I am unable to post my project files due to concerns of plagiarism. However, I would be happy to send the files over upon request.

This was a project for the Machine Structures course at UC Berkeley.

Description: This project compiles 61Ccc code to RISC-V. The 61Ccc language is a subset of C that supports the following: variable declarations (with initialization and support for pointers), struct definitions, function declarations/definitions, if/else statements, for loops, break statements, continue statements, and return statements. Furthermore, 61Ccc only has the following data types: char, int, bool, struct, and pointers to the previous four types. Lastly, 61Ccc only supports unary and binary operations.
Note: To run the compiled 61Ccc code, I used a RISC-V emulator called Venus.

Key Features: It has a lexer that catches syntax errors and filters out comments. It has a parser that creates an abstract syntax tree (AST). It has a debugging mode that prints the tokens and the AST. It has an error handler that reports syntax errors and their associated line numbers from the AST. It has a large CUnit test suite that ensures the lexer, parser, and code generator function correctly. It has no memory leaks that grow with input size. Lastly, it generates RISC-V code that adheres to the caller/callee-saved register conventions for universal compatibility.
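A toy sketch of the lexing stage described above, written in Python for brevity (the real lexer is C, and this token set is a tiny illustrative subset of 61Ccc's):

```python
import re

# Ordered token spec: keywords must come before identifiers so that
# "int" lexes as KEYWORD, not IDENT.
TOKEN_SPEC = [
    ("COMMENT", r"//[^\n]*"),  # filtered out, like the lexer above
    ("NUMBER",  r"\d+"),
    ("KEYWORD", r"\b(?:int|if|else|return|for|break|continue)\b"),
    ("IDENT",   r"[A-Za-z_]\w*"),
    ("OP",      r"[{}();=+\-*/<>!&|,]"),
    ("SKIP",    r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(src):
    tokens, pos = [], 0
    while pos < len(src):
        m = MASTER.match(src, pos)
        if m is None:
            # Report the offending character, like the error handler above.
            raise SyntaxError(f"unexpected character {src[pos]!r} at {pos}")
        if m.lastgroup not in ("SKIP", "COMMENT"):
            tokens.append((m.lastgroup, m.group()))
        pos = m.end()
    return tokens
```

The parser then consumes this token stream to build the AST, which is what the code generator walks to emit RISC-V.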

Skills Acquired: I learned how to manage memory in C. I learned how to use Valgrind to debug memory leaks and segfaults. I learned how to create unit tests with CUnit. Lastly, I learned how to write RISC-V assembly.

Since this project will be assigned in future offerings of Machine Structures (CS 61C) at UC Berkeley, I am unable to post my project files due to concerns of plagiarism. However, I would be happy to send the files over upon request.

Skills

Programming
Languages
Python    Java    C++    C    Golang    λ Scheme    RISC-V / x86
SQL    HTML    CSS    JavaScript    LaTeX

Technologies
SciPy/NumPy/Pandas   NLTK   Keras   Scikit-learn   NetworkX   Hadoop
JUnit   OpenMP   Intel AVX   CUnit   AWS EC2   MySQL   Bootstrap

Misc Tools
JetBrains Suite   Microsoft Suite   Photoshop   Lightroom   Cinema 4D

Spoken languages
English (Native)    French (Fluent)    Vietnamese (Basic)