Carlos A. Jaramillo

  • Courses: 2
  • Reviews: 6
  • School: Lehman College
  • Department: Computer Science
  • Location: 250 Bedford Park Blvd W, Bronx, NY 10468
  • Dates at Lehman College: November 2013 - June 2016

Biography

Lehman College - Computer Science


Resume

  • 2013

    Carlos Jaramillo, PhD
    Affiliations: Lehman College; Mitsubishi Electric Research Laboratories; Piaggio Fast Forward; CUNY City College STEM Institute; Aurora Flight Sciences Corporation; The City College of New York

    CUNY City College STEM Institute — The City College of New York
    Taught talented high school students the fundamentals of mobile robotics, using the Raspberry Pi (computer) and the Python programming language to actuate motors and poll sensor data (e.g., ultrasonic, infrared) from various electronic components. Ultimately, participants built robots to compete in an autonomous robot sumo tournament. A minimal sensor-polling sketch in this spirit appears below.
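    The following is a minimal sketch, not actual course material, of polling an HC-SR04-style ultrasonic range sensor from a Raspberry Pi in Python. The sensor model and the pin numbers are assumptions for illustration.

```python
import time
import RPi.GPIO as GPIO

TRIG_PIN = 23  # hypothetical BCM pin wired to the sensor's trigger
ECHO_PIN = 24  # hypothetical BCM pin wired to the sensor's echo

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG_PIN, GPIO.OUT)
GPIO.setup(ECHO_PIN, GPIO.IN)

def read_distance_cm():
    # Send a 10-microsecond trigger pulse.
    GPIO.output(TRIG_PIN, True)
    time.sleep(10e-6)
    GPIO.output(TRIG_PIN, False)
    # Time the echo pulse; its width is the sound's round-trip flight time.
    start = time.time()
    while GPIO.input(ECHO_PIN) == 0:
        start = time.time()
    end = time.time()
    while GPIO.input(ECHO_PIN) == 1:
        end = time.time()
    # Speed of sound is ~34300 cm/s; halve for the one-way distance.
    return (end - start) * 34300 / 2

try:
    while True:  # poll the sensor at ~5 Hz
        print("distance: %.1f cm" % read_distance_cm())
        time.sleep(0.2)
finally:
    GPIO.cleanup()
```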

    Senior Robotics Engineer — Piaggio Fast Forward, Boston, Massachusetts
    Enhancing the capabilities of personal mobile robots through computer vision.

    Adjunct Lecturer — Lehman College, Greater New York City Area

    CIS 212: Microcomputer Architecture (CUNY Lehman College, Spring 2014-Present)
    This required course provides a broad study of the architecture of microcomputer systems, with emphasis on CPU functionality; system bus and memory design and performance; secondary storage technologies and management; input/output peripherals (display and printer technologies); and network technologies. The course follows the Systems Architecture textbook by Stephen D. Burd.

    CMP 230: Programming Methods I (CUNY Lehman College, Fall 2013)
    Introduced freshman students to structured computer programming using Python, a modern high-level programming language: console I/O, data types, variables, control structures, iteration, data structures, function definitions and calls, parameter passing, functional decomposition, object-oriented programming, and debugging and documentation techniques. A short sketch in this spirit appears below.
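    A tiny, self-contained Python example touching several of the constructs the course lists (console I/O, iteration, function definition, parameter passing, functional decomposition). It is illustrative only; names and thresholds are made up, not course material.

```python
def average(grades):
    # Parameter passing and iteration over a data structure (a list).
    total = 0.0
    for grade in grades:
        total += grade
    return total / len(grades) if grades else 0.0

def main():
    # Console input, type conversion, and a simple control structure.
    raw = input("Enter grades separated by spaces: ")
    grades = [float(token) for token in raw.split()]
    avg = average(grades)
    if avg >= 65.0:  # illustrative passing threshold
        print(f"Average {avg:.1f}: passing")
    else:
        print(f"Average {avg:.1f}: failing")

if __name__ == "__main__":
    main()
```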

    The City College of New York (CUNY), Greater New York City Area

    02/2010 – 05/2018: Computer vision applied towards navigation systems (City College, NY)
    • Conducted research in 3-D computer-vision-centric systems applied towards assistive localization and navigation of visually impaired people and of autonomous ground and micro aerial vehicles (MAVs).

    01/2010 – 05/2018: Omnidirectional Depth Sensing with Catadioptric Rigs (CUNY City College, NY)
    • Developed various catadioptric rigs in folded configurations using conic mirrors (spherical, hyperbolical) separated by a baseline, with a monocular camera inside the bottom mirror. The system approximates a single viewpoint, with constraints on the design parameters. A complete globe of depth information can be obtained from the fusion of “omnistereo” (equator) and optical flow (poles).

    Perception Engineer — Aurora Flight Sciences Corporation, Cambridge, Massachusetts
    Developed solutions for evaluating landing zones of passenger VTOL aircraft, as well as perception for counter-drone technology and detection of non-cooperative intruders.

    Research Intern — Mitsubishi Electric Research Laboratories
    Developed algorithms for SLAM (simultaneous localization and mapping) and 3D reconstruction using monocular cameras.

    Languages: English, Spanish

    Great Minds in STEM (GMiS) Scholarship — Intel
    The HENAAC Scholars Program addresses the immense need to produce more domestic engineers and scientists for the U.S. to remain globally competitive in the STEM marketplace.

  • 2011

    Doctor of Philosophy (Ph.D.), Computer Science — The Graduate Center, City University of New York
    Research Assistant at the Robotics and Intelligent Systems Lab

    Honors and Awards:
    - Ford Foundation Pre-Doctoral Fellowship [2012-2015]

    Research Projects:
    - Computer vision applied towards navigation systems: Conducting research in 3-D computer-vision-centric systems applied towards assistive localization and navigation of visually impaired people and of autonomous ground and micro aerial vehicles (MAVs).
    - Omnidirectional Depth Sensing with Catadioptric Rigs: Developing various catadioptric rigs in folded configurations using conic mirrors (spherical, hyperbolical) separated by a baseline, with a monocular camera inside the bottom mirror. The system approximates a single viewpoint, with constraints on the design parameters. A complete globe of depth information can be obtained from the fusion of “omnistereo” (equator) and optical flow (poles).

  • 2010

    Master’s Degree, Computer Science — City College of New York, CUNY
    Intelligent Ground Vehicle Competition (2011)

    Honors and Awards:
    - CCNY Mentoring Award as a student team, in conjunction with Dr. Jizhong Xiao [May 2011]
    - NSF Bridge to the Doctorate, STEM program funded by NSF/NYC-LSAMP [2010-2013]
    - Honorable Mention: 2011 National Science Foundation Graduate Research Fellowship Program

    Research Projects:
    - Leader of the Intelligent Ground Vehicle Competition team known as City Autonomous Transportation Agent (CATA): engineered an autonomous vehicle with a simplified electrical architecture (focusing on safety and usability) and a new software architecture based on the open-source Robot Operating System (ROS), which enforces modularity and guarantees maintainability and reusability.
      Link to design report: http://www.igvc.org/design/2011/City%20College%20of%20New%20York%20-%20CATA.pdf

  • 2003

    Bachelor’s Degree, Computer Engineering — City College of New York, Magna Cum Laude
    City College Robotics Club; Autonomous Ground Vehicle Team (IGVC); Eta Kappa Nu (HKN), Beta Pi Chapter; and Phi Beta Kappa (Gamma Chapter)

    Honors and Awards:
    - Google Scholarship awarded through the Hispanic College Fund [2010-2011]
    - First Place in Design Competition (18th Intelligent Ground Vehicle Competition) [2010]
    - General Motors Engineering Excellence Award through HACU [2008-2009]

    Research Projects:
    - The 18th Annual Intelligent Ground Vehicle Competition (IGVC): participated in the design of City College's IGVC 2010 rover (CityALIEN), incorporating a novel approach based on stereo and omnidirectional vision. Our team was awarded First Place in the Design Category (June 4-7, 2010).
      Link: https://youtu.be/mHm1WIUUBzw

  • DMT demo for 3DV 2017 conference
    Compilation of some visual results achieved by the proposed Direct Multichannel Tracking (DMT) method presented at the 3DV 2017 conference.

  • Visual Odometry with a Single-Camera Stereo Omnidirectional System at Grand Central Terminal
    This video exemplifies the qualitative performance of a single-camera stereo omnidirectional system (SOS) in estimating visual odometry (VO) in real-world en...

  • Carlos_Jaramillo-resume — Curriculum Vitae
    Street Address; Phone: ###-###-####; Canton, MA 02021; Email: omnistereo@gmail.com. LIFE OBJECTIVE: To enjoy being part of building our future...
    Carlos's curriculum vitae

  • City Alien at IGVC2010
    City Alien: Winner of the Design Competition at the 18th Annual Intelligent Ground Vehicle Competition, June 2010.

  • Enhancing 3D Visual Odometry with Single-Camera Stereo Omnidirectional Systems — Dissertation Thesis
    We explore low-cost solutions for efficiently improving the 3D pose estimation problem of a single camera moving in an unfamiliar environment. The visual odometry (VO) task -- as it is called when using computer vision to estimate egomotion -- is of...

  • Intro to Mobile Robotics and Robot Sumo Tournament
    This video is about the Intro to Mobile Robotics and Robot Sumo Tournament for the 2015 CCNY STEM Institute Program.

  • Single-camera Stereo Omnidirectional System on top of a quadrotor - POV-Ray office 01
    Synthetic images for the catadioptric omnistereo rig mounted on an AscTec Pelican. Rendered with POV-Ray for the office scene and motion sequence #01.


  • City College STEM Institute
    This intensive program was dedicated to selected high school students, who learned the fundamentals of mobile robotics using the Raspberry Pi (computer) and the Python programming language to actuate motors and poll sensor data (e.g., ultrasonic, infrared) from various electronic components. Ultimately, participants built robots to compete in an autonomous robot sumo tournament (youtu.be/6138-qjoD3Q).

    Skills: Arduino, C, Research, Raspberry Pi, C++, Computer Science, LaTeX, Computer Vision, Matlab, Java, Python, Microsoft Office, Machine Learning, Algorithms, Programming

    Incremental Registration of RGB-D Images
    With Jizhong Xiao and Ivan Dryanovski. 2012 IEEE International Conference on Robotics and Automation (ICRA).
    An RGB-D camera is a sensor which outputs range and color information about objects. Recent technological advances in this area have introduced affordable RGB-D devices in the robotics community. In this paper, we present a real-time technique for 6-DoF camera pose estimation through the incremental registration of RGB-D images. First, a set of edge features are computed from the depth and color images. An initial motion estimation is calculated through aligning the features. This initial guess is refined by applying the Iterative Closest Point algorithm on the dense point cloud data. A rigorous error analysis assesses several sets of RGB-D ground truth data via an error accumulation metric. We show that the proposed two-stage approach significantly reduces error in the pose estimation, compared to a state-of-the-art ICP registration technique. A minimal sketch of the two-stage idea appears below.
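    The two-stage idea can be sketched with off-the-shelf tools. The following uses Open3D's ICP to refine an initial guess on synthetic point clouds; the feature-based first stage is only stubbed out (identity initialization), so this is a sketch of the concept, not the paper's implementation.

```python
import numpy as np
import open3d as o3d

# Synthetic stand-in for two consecutive RGB-D point clouds.
pts = np.random.rand(500, 3)
source = o3d.geometry.PointCloud()
source.points = o3d.utility.Vector3dVector(pts)
T_true = np.eye(4)
T_true[:3, 3] = [0.02, 0.0, 0.01]  # small inter-frame camera motion
target = o3d.geometry.PointCloud()
target.points = o3d.utility.Vector3dVector(pts)
target.transform(T_true)

# Stage 1 stand-in: the paper aligns edge features from the depth and
# color images to get the initial motion estimate; here we use identity.
init_guess = np.eye(4)

# Stage 2: refine the initial guess with ICP on the dense point data.
result = o3d.pipelines.registration.registration_icp(
    source, target, 0.05, init_guess,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(result.transformation)  # 6-DoF pose increment; should be close to T_true
```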

    Fusing Optical Flow and Stereo in a Spherical Depth Panorama Using a Single-Camera Folded Catadioptric Rig
    With Jizhong Xiao and Igor Labutov. 2011 IEEE International Conference on Robotics and Automation (ICRA).
    We present a novel catadioptric-stereo rig consisting of a coaxially-aligned perspective camera and two spherical mirrors with distinct radii in a “folded” configuration. We recover a nearly-spherical dense depth panorama (360°×153°) by fusing depth from optical flow and stereo. We observe that for motion in a horizontal plane, optical flow and stereo generate nearly complementary distributions of depth resolution. While optical flow provides strong depth cues in the periphery and near the poles of the view-sphere, stereo generates reliable depth in a narrow band about the equator. We exploit this principle by modeling the depth resolution of optical flow and stereo in order to fuse them probabilistically in a spherical panorama. To aid the designer in achieving a desired field-of-view and resolution, we derive a linearized model of the rig in terms of three parameters (radii of the two mirrors plus axial separation from their centers). We analyze the error due to the violation of the Single Viewpoint (SVP) constraint and formulate additional constraints on the design to minimize the error. Performance is evaluated through simulation and with a real prototype by computing dense spherical panoramas in cluttered indoor settings. A minimal fusion sketch follows.
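    A minimal sketch of the probabilistic fusion idea: combine two per-pixel depth estimates (stereo and optical flow) by weighting each with the inverse of its modeled variance. The variance values here are illustrative stand-ins, not the resolution models derived in the paper.

```python
import numpy as np

def fuse_depths(d_stereo, var_stereo, d_flow, var_flow):
    """Inverse-variance (precision-weighted) fusion of two depth maps."""
    w_s = 1.0 / var_stereo
    w_f = 1.0 / var_flow
    d_fused = (w_s * d_stereo + w_f * d_flow) / (w_s + w_f)
    var_fused = 1.0 / (w_s + w_f)
    return d_fused, var_fused

# Toy example: stereo is reliable near the equator band, flow near the
# poles, so each pixel's fused depth leans toward the lower-variance cue.
d_s, v_s = np.full((4, 4), 2.0), np.full((4, 4), 0.01)  # confident stereo
d_f, v_f = np.full((4, 4), 2.5), np.full((4, 4), 1.00)  # uncertain flow
d, v = fuse_depths(d_s, v_s, d_f, v_f)
print(d[0, 0], v[0, 0])  # ~2.005: dominated by the stereo estimate
```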

    THESIS: Enhancing 3D Visual Odometry with Single-Camera Stereo Omnidirectional Systems
    Ph.D. thesis in which we explore low-cost solutions for efficiently improving the 3D pose estimation problem of a single camera moving in an unfamiliar environment. The visual odometry (VO) task -- as it is called when using computer vision to estimate egomotion -- is of particular interest to mobile robots as well as humans with visual impairments. The payload capacity of small robots like micro-aerial vehicles (drones) requires the use of portable perception equipment, which is constrained by size, weight, energy consumption, and processing power. Using a single camera as the passive sensor for the VO task satisfies these requirements, and it motivates the proposed solutions presented in this thesis.

    A Single-Camera Omni-Stereo Vision System for 3D Perception of Micro Aerial Vehicles (MAVs)
    With Jizhong Xiao and Ling Guo.
    The limited payload and on-board computation constraints of Micro Aerial Vehicles (MAVs) make sensor configuration very challenging for autonomous navigation and 3D mapping. This paper introduces a catadioptric single-camera omni-stereo vision system that uses a pair of custom-designed mirrors (in a folded configuration) satisfying the single view point (SVP) property. The system is compact and lightweight, and has a wide baseline which allows fast 3D reconstruction based on stereo calculation. The algorithm for generating range panoramas is also introduced. The simulation and experimental study demonstrate that the system provides a good solution to the perception challenge of MAVs.

    Direct Multichannel Tracking
    With Yuichi Taguchi.
    We present direct multichannel tracking, an algorithm for tracking the pose of a monocular camera (visual odometry) using high-dimensional features in a direct image alignment framework. Instead of using a single grayscale channel and assuming intensity constancy as in existing approaches, we extract multichannel features at each pixel from each image and assume feature constancy among consecutive images. High-dimensional features are more discriminative and robust to noise and image variations than intensities, enabling more accurate camera tracking. We demonstrate our claim using conventional hand-crafted features such as SIFT as well as more recent features extracted from convolutional neural networks (CNNs) such as Siamese and AlexNet networks. We evaluate the performance of our algorithm against the baseline case (single-channel tracking) using several public datasets, where the AlexNet feature provides the best pose estimation results. A minimal sketch of the feature-constancy residual follows.
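    A minimal sketch of the feature-constancy residual at the heart of this idea: instead of one grayscale residual per pixel, stack one residual per feature channel. The warp and the "feature" maps below are illustrative placeholders, not the paper's implementation.

```python
import numpy as np

def multichannel_residuals(feat_ref, feat_cur, pixels, warp):
    """feat_*: H x W x C feature maps. warp maps (u, v) in the reference
    image to (u', v') in the current image under a candidate pose."""
    residuals = []
    for (u, v) in pixels:
        u2, v2 = warp(u, v)  # e.g., back-project and re-project under the pose
        # Feature constancy: the C-dimensional descriptor should match.
        residuals.append(feat_cur[v2, u2] - feat_ref[v, u])
    # In a direct alignment framework this stacked vector is minimized
    # over the 6-DoF pose, e.g., with Gauss-Newton iterations.
    return np.concatenate(residuals)

# Toy usage with C = 8 random "feature" channels and an identity warp.
H, W, C = 48, 64, 8
f_ref = np.random.rand(H, W, C).astype(np.float32)
f_cur = f_ref.copy()
r = multichannel_residuals(f_ref, f_cur, [(10, 20), (30, 5)], lambda u, v: (u, v))
print(np.abs(r).max())  # 0 for the identity warp on identical frames
```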

    Autonomous Quadrotor Flight Using Onboard RGB-D Visual Odometry
    With Jizhong Xiao, Daniel Perea Ström, and Ivan Dryanovski.
    In this paper we present a navigation system for Micro Aerial Vehicles (MAVs) based on information provided by a visual odometry algorithm processing data from an RGB-D camera. The visual odometry algorithm uses an uncertainty analysis of the depth information to align newly observed features against a global sparse model of previously detected 3D features. The visual odometry provides updates at roughly 30 Hz that are fused at 1 kHz with the inertial sensor data through a Kalman filter. The high-rate pose estimation is used as feedback for the controller, enabling autonomous flight. We developed a 4-DOF path planner and implemented a real-time 3D SLAM, where the whole system runs on-board. The experimental results and live video demonstrate the autonomous flight and 3D SLAM capabilities of the quadrotor with our system. A minimal sketch of the prediction/update fusion scheme appears below.
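    A minimal 1-D sketch of this fusion scheme: high-rate inertial data drives the Kalman prediction step, and lower-rate visual-odometry poses arrive as measurement updates. The scalar state, constant-velocity input, and noise values are illustrative assumptions, not the paper's filter.

```python
class ScalarKalman:
    def __init__(self, x0=0.0, p0=1.0, q=1e-4, r=1e-2):
        self.x, self.p = x0, p0  # state estimate and its variance
        self.q, self.r = q, r    # process and measurement noise (assumed)

    def predict(self, u, dt):
        # IMU-style prediction: integrate the measured velocity u over dt.
        self.x += u * dt
        self.p += self.q

    def update(self, z):
        # VO-style correction whenever a pose measurement z arrives.
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)

kf = ScalarKalman()
for step in range(1000):          # 1 kHz inertial prediction
    kf.predict(u=0.5, dt=1e-3)    # assumed constant velocity of 0.5 m/s
    if step % 33 == 0:            # ~30 Hz visual-odometry updates
        kf.update(z=0.5 * step * 1e-3)
print(kf.x)  # ~0.5 after one simulated second of flight
```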

    Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs)
    With Jizhong Xiao and Ling Guo.
    We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor’s projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as its size, catadioptric spatial resolution, and field-of-view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision designs under different circumstances. A minimal ray-triangulation sketch follows.
    Source code repository: https://github.com/ubuntuslave/omnistereo_sensor_design
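    As an illustration of the triangulation step, here is a minimal midpoint-method sketch for intersecting two back-projected rays. The baseline and ray directions are made-up inputs, not the sensor's actual projection model.

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Closest point between two rays x = o_i + t_i * d_i (d_i unit vectors):
    solve for t1, t2 minimizing |(o1 + t1*d1) - (o2 + t2*d2)|^2."""
    b = o2 - o1
    d11, d22, d12 = d1 @ d1, d2 @ d2, d1 @ d2
    denom = d11 * d22 - d12 ** 2  # approaches 0 for parallel rays
    t1 = (d22 * (b @ d1) - d12 * (b @ d2)) / denom
    t2 = (d12 * (b @ d1) - d11 * (b @ d2)) / denom
    p1 = o1 + t1 * d1
    p2 = o2 + t2 * d2
    return (p1 + p2) / 2.0  # midpoint of the shortest connecting segment

# Two viewpoints separated by a vertical baseline, as in a folded rig.
o1, o2 = np.zeros(3), np.array([0.0, 0.0, 0.15])  # 15 cm baseline (assumed)
d1 = np.array([1.0, 0.0, 0.0])
d2 = np.array([1.0, 0.0, -0.15]) / np.linalg.norm([1.0, 0.0, -0.15])
print(triangulate_midpoint(o1, d1, o2, d2))  # ~[1, 0, 0]
```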

    Visual Odometry with a Single-Camera Stereo Omnidirectional System
    With Yuichi Taguchi.
    This paper presents the advantages of a single-camera stereo omnidirectional system (SOS) in estimating egomotion in real-world environments. The challenge of applying omnidirectional stereo vision via a single camera is what separates our work from others. In practice, dynamic environments, deficient illumination, and poorly textured surfaces result in a lack of features to track in the observable scene. As a consequence, this negatively affects the pose estimation of visual odometry (VO) systems, regardless of their field-of-view. We compare the tracking accuracy and stability of the single-camera SOS versus an RGB-D device under various real circumstances. Our quantitative evaluation is performed with respect to 3D ground truth data obtained from a motion capture system. The datasets and experimental results we provide are unique due to the nature of our catadioptric omnistereo rig and the situations in which we captured these motion sequences. We have implemented a tracking system with simple rules applicable to both synthetic and real scenes. Our implementation does not make any motion model assumptions, and it maintains a fixed configuration among the compared sensors. Our experimental outcomes confirm the robustness in 3D metric visual odometry estimation that the single-camera SOS can achieve under normal and special conditions in which other narrow-view perspective systems, such as RGB-D cameras, would fail.

    6-DoF Pose Localization in 3D Point-Cloud Dense Maps Using a Monocular Camera
    With Jizhong Xiao and Ivan Dryanovski.
    We present a 6-degree-of-freedom (6-DoF) pose localization method for a monocular camera in a 3D point-cloud dense map prebuilt by depth sensors (e.g., RGB-D sensor, laser scanner, etc.). We employ fast and robust 2D feature detection on the real camera to be matched against features from a virtual view. The virtual view (color and depth images) is constructed by projecting the map's 3D points onto a plane using the previous localized pose of the real camera. 2D-to-3D point correspondences are obtained from the inherent relationship between the real camera's 2D features and their matches on the virtual depth image (projected 3D points). Thus, we can solve the Perspective-n-Point (PnP) problem in order to find the relative pose between the real and virtual cameras. With the help of RANSAC, the projection error is minimized even further. Finally, the real camera's pose is solved with respect to the map by a simple frame transformation. This procedure repeats for each time step (except for the initial case). Our results indicate that a monocular camera alone can be localized within the map in real-time (at QVGA resolution). Our method differs from others in that no chain of poses is needed or kept. Our localization is not susceptible to drift because the history of motion (odometry) is mostly independent over each PnP + RANSAC solution, which throws away past errors. In fact, the previously known pose only acts as a region of interest to associate 2D features on the real image with 3D points in the map. The applications of our proposed method are varied, and perhaps it is a solution that has not been attempted before. A minimal PnP + RANSAC sketch appears below.
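    A minimal sketch of the PnP + RANSAC step using OpenCV's solvePnPRansac. The 2D-3D correspondences and intrinsics below are synthetic placeholders; in the paper they come from matching real-camera features against the virtual view rendered from the dense map.

```python
import cv2
import numpy as np

# Hypothetical matched data standing in for map points and their detections.
object_points = np.random.rand(50, 3) * 2.0 + [0.0, 0.0, 3.0]  # 3D map points
K = np.array([[320.0, 0.0, 160.0],
              [0.0, 320.0, 120.0],
              [0.0, 0.0, 1.0]])  # assumed QVGA intrinsics
rvec_true = np.array([0.01, -0.02, 0.005])  # ground-truth pose for the toy test
tvec_true = np.array([0.1, 0.0, 0.05])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, None)

# Relative pose between the real and virtual cameras; RANSAC rejects
# bad correspondences before the projection error is minimized.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, K, None,
    reprojectionError=2.0, confidence=0.99)
print(ok, rvec.ravel(), tvec.ravel())  # should recover rvec_true / tvec_true
```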

    Generating Near-Spherical Range Panoramas by Fusing Optical Flow and Stereo from a Single-Camera Folded Catadioptric Rig
    With Jizhong Xiao and Igor Labutov. Machine Vision and Applications.
    We design a novel “folded” spherical catadioptric rig (formed by two coaxially-aligned spherical mirrors of distinct radii and a single perspective camera) to recover near-spherical range panoramas (about 360° × 153°) from the fusion of depth given by optical flow and stereoscopy. We observe that for rigid motion that is parallel to a plane, optical flow and stereo generate nearly complementary distributions of depth resolution. While optical flow provides strong depth cues in the periphery and near the poles of the view-sphere, stereo generates reliable depth in a narrow band about the equator instead. We exploit this dual-modality principle by modeling (separately) the depth resolution of optical flow and stereo in order to fuse them later on a probabilistic spherical panorama. We achieve a desired vertical field-of-view and optical resolution by deriving a linearized model of the rig in terms of three parameters (radii of the two mirrors plus axial distance between the mirrors’ centers). We analyze the error due to the violation of the single viewpoint constraint and formulate additional constraints on the design to minimize this error. We evaluate our proposed method via a synthetic model and with real-world prototypes by computing dense spherical panoramas of depth from cluttered indoor environments after fusing the two modalities (stereo and optical flow).


CIS 212
4.3 (5 ratings)