
2019 Research

Computer Science

GABRIEL JOHNSON

Title: Mapping Dirt Roads from Imagery Using Deep Learning.

Thesis Adviser: Dr. Dale Hamilton

Over the past few years, the NNU summer research team has obtained a substantial amount of aerial imagery using small unmanned aircraft systems (sUAS). The majority of the imagery is of forests and post-burn areas. Much of it contains archaeological features such as hand stacks, dredge tailings, roads, and rail grades. These features can be tedious to find but are important to archaeology, as they often lead to the discovery of other archaeological artifacts. The previous method of finding these features was to walk through the forest and hope to happen upon an artifact or feature. The goal of this research project is to construct a more dynamic approach to finding dirt roads. This goal will be achieved by using a Mask Region-based Convolutional Neural Network (Mask R-CNN) to find these features within the aerial imagery. The program will process the imagery, identify any dirt roads it detects, and output a geo-referenced polyline marking where each feature is located.
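The final step described above, turning detected pixels into a geo-referenced polyline, can be sketched as an affine geotransform applied to each centerline pixel. This is a minimal illustration, not the project's actual code; the GDAL-style geotransform values and the sample coordinates below are hypothetical.

```python
# Convert pixel coordinates of a detected road centerline into
# geo-referenced map coordinates using a GDAL-style affine geotransform.
# The geotransform values here are illustrative, not from the project.

def pixel_to_geo(col, row, gt):
    """Apply geotransform gt = (origin_x, pixel_w, row_rot,
    origin_y, col_rot, pixel_h) to a pixel at (col, row)."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return (x, y)

def mask_centerline_to_polyline(centerline_pixels, gt):
    """Map each (col, row) centerline pixel to map coordinates."""
    return [pixel_to_geo(c, r, gt) for c, r in centerline_pixels]

# 5 cm pixels, north-up image, origin at an arbitrary UTM coordinate.
gt = (500000.0, 0.05, 0.0, 4800000.0, 0.0, -0.05)
road = [(0, 0), (10, 4), (20, 8)]   # pixel centerline from the mask
polyline = mask_centerline_to_polyline(road, gt)
print(polyline)   # map-coordinate vertices of the road polyline
```

A polyline in map coordinates like this can then be saved to a GIS-friendly format for archaeologists to follow in the field.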

ENOCH LEVANDOVSKY

Title: Evaluating the Burn Extent of Land Fires With Spatial Imagery.

Thesis Adviser: Dr. Dale Hamilton

In the past, wildland fire researchers mapped the extent of major wildland fires through the expensive and tedious work of walking the perimeter of the fire. This method was not only expensive, but it could not determine how much acreage of unburned area lay within the boundary line. The Spatial Imagery Burn Extent (SIBE) algorithm is the final collection of the work by Dale Hamilton, Ph.D., and previous undergraduate researchers at NNU on mapping the spatial extent of wildland fires, using a suite of algorithms that use 5 cm hyperspatial imagery acquired with a small unmanned aircraft system (sUAS) in addition to lower-resolution imagery such as can be acquired with Landsat and Sentinel. The hyperspatial imagery was used to train a Support Vector Machine (SVM) to map burn extent on the lower-resolution imagery, which has a much larger spatial extent than the hyperspatial sUAS orthomosaic. However, these algorithms had not previously been tested and analyzed on raw satellite imagery. This effort was a preliminary look at the capability of SIBE to map burn extent from Sentinel-2 and Landsat 8 imagery. This proof-of-concept analysis shows that the burn extent of very large wildland fires can be effectively mapped from satellite imagery using hyperspatial sUAS imagery to train an SVM.
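One step implied by training on hyperspatial labels and classifying coarser imagery is aggregating many 5 cm labels into one label per low-resolution pixel. The majority-vote sketch below is an illustration of that idea only, not SIBE itself; the label names and counts are hypothetical.

```python
from collections import Counter

# Derive a training label for one low-resolution (e.g. 30 m Landsat)
# pixel from the hyperspatial burn-extent labels that fall inside it,
# by majority vote. Labels and counts here are illustrative.

def label_for_coarse_pixel(fine_labels):
    """fine_labels: iterable of 'burned'/'unburned' labels from the
    hyperspatial classification falling within one coarse pixel."""
    counts = Counter(fine_labels)
    return counts.most_common(1)[0][0]

fine = ['burned'] * 7 + ['unburned'] * 3   # toy stand-in for a full pixel
print(label_for_coarse_pixel(fine))        # -> burned
```

Labels produced this way, paired with the coarse pixel's band values, could serve as SVM training samples covering a far larger spatial extent than the sUAS orthomosaic.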

ALEXANDER DRINNON

Title: Mapping of Surface Fire in Forested Biomes from Hyperspatial Imagery using Machine Learning.

Thesis Adviser: Dr. Dale Hamilton

Over the past decade, wildland fires have continued to increase in severity, with wildfires burning an average of five to ten million acres in the United States each year. This elevated activity increases the costs of fighting them, with the 2017 season costing $2.9 billion in wildland fire suppression. For the past three years, NNU's Fire Monitoring and Assessment Platform (FireMAP) team has been using small unmanned aircraft systems (sUAS) to capture hyperspatial imagery to map post-fire effects. The purpose of this project was to add capabilities to the existing FireMAP analytic tools and to refine and document the process used to gather hyperspatial imagery with sUAS. The analytic tools were improved by adding the ability to identify crown underburn, which is defined as an unburned crown contiguously surrounded by burned surface vegetation. Because the tree canopy blocks the drone's view of the ground, other methods must be used to infer this information. The Denoise tool was used to detect pixels that are crown underburn and reclassify them as burned. This improvement allowed more accurate classification of burned surface vegetation obstructed by unburned crown vegetation in forested environments.
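The crown underburn rule described above can be sketched as a connected-region check on the classified pixel grid. This is a minimal illustration of the definition, not the actual FireMAP Denoise tool; the class labels are hypothetical stand-ins.

```python
# Sketch of the crown underburn rule: an unburned canopy region
# contiguously surrounded by burned surface is reclassified as burned.
# Labels are illustrative; not the actual FireMAP Denoise tool.

def reclassify_crown_underburn(grid):
    """grid: 2D list of labels 'crown', 'burned', 'unburned_surface'.
    Returns a copy in which crown regions whose entire border is
    burned are relabeled 'burned'."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    seen = set()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 'crown' or (r, c) in seen:
                continue
            # Flood-fill one connected crown region, noting its border.
            region, border, stack = [], [], [(r, c)]
            seen.add((r, c))
            while stack:
                y, x = stack.pop()
                region.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < rows and 0 <= nx < cols):
                        border.append('edge')   # touches the image edge
                    elif grid[ny][nx] == 'crown':
                        if (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                    else:
                        border.append(grid[ny][nx])
            if border and all(b == 'burned' for b in border):
                for y, x in region:
                    out[y][x] = 'burned'
    return out
```

On a toy grid with a single 'crown' pixel surrounded by 'burned' pixels, the crown pixel is relabeled 'burned'; if any neighbor is unburned surface, the region is left unchanged.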

HANNA MOXHAM

Title: Identifying Prostate Cancer in Biopsy Images using a Support Vector Machine and Decision Tree.

Thesis Adviser: Dr. Barry Myers

Prostate cancer is the second most common cancer in men. Its high five-year relative survival rate hinges on identification of the cancer, especially before it spreads. A false-negative diagnosis can be deadly, which creates the need for a consistently accurate method of identification. This research sought to develop a computer vision software tool that, given a digital image of a prostate biopsy, locates any malignant glands present in the image. A three-step process was devised: first, run supervised machine learning classifiers to mark the key cellular structures that point to adenocarcinoma of the prostate (nuclei, nucleoli, and lumina). Second, analyze those structures for key traits such as size and clustering. Third, use these derived traits in a second round of classification to locate cancerous regions. A support vector machine and a decision tree were used for step one with reasonable success: nuclei and lumina were found with high accuracy, but nucleoli identification was troublesome, and better accuracy is desired. Future work includes determining the value of continuing with this three-step method and, if so, refining step one and completing steps two and three. Otherwise, a new classification algorithm such as a convolutional neural network will be investigated.
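One of the derived traits in step two, clustering, could be quantified as a mean nearest-neighbor distance among detected nuclei. The sketch below is an illustration of that idea under the assumption that step one yields structure centroids; it is not the project's actual implementation.

```python
import math

# Hedged sketch of a step-two clustering trait: mean distance from
# each detected nucleus centroid to its nearest neighbor. Small values
# suggest the tight nuclear clustering seen in malignant glands.
# Representing structures as (x, y) centroids is an assumption.

def mean_nearest_neighbor_distance(centroids):
    dists = []
    for i, (x1, y1) in enumerate(centroids):
        nearest = min(math.hypot(x1 - x2, y1 - y2)
                      for j, (x2, y2) in enumerate(centroids) if j != i)
        dists.append(nearest)
    return sum(dists) / len(dists)

nuclei = [(0, 0), (0, 3), (4, 0)]
print(mean_nearest_neighbor_distance(nuclei))   # mean of 3, 3, 4 -> 10/3
```

A trait like this, computed per region, would become one input feature for the second round of classification in step three.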

NICHOLAS HAMILTON

Title: Classifying Wildland Fire Severity on Landsat Imagery Using Machine Learning Trained by Hyperspatial Imagery.

Thesis Advisers: Dr. Dale Hamilton, Dr. Barry Myers

Many different machine learning algorithms have previously been used to map wildland fire effects using satellite imagery from the Landsat satellites with 30-meter spatial resolution. Small unmanned aircraft systems (sUAS) can capture images with five-centimeter (hyperspatial) resolution. Consequently, the amount of data needing to be stored and analyzed has significantly increased. There is a need for more tools that focus on extracting actionable knowledge from hyperspatial imagery and providing timely information for the management of wildland fires. This analysis shows that the accuracy of mapping fire effects from hyperspatial imagery increased from 56.62% to 93.16% for burn extent and from 28.4% to 95.94% for biomass consumption. The classifier developed for this analysis uses a support vector machine (SVM) to determine burn severity by classifying image pixels into canopy crown, surface vegetation, white ash, and black ash.

The use of sUAS to map burn severity creates another problem: the limited flight time of sUAS means they can only map small fires. The classifications were therefore modified to utilize machine learning algorithms on imagery obtained from Landsat. Implementing the new classification allows not only small fires but large fires to be modeled as well.

RYAN PACHECO AND BRENDAN PELTZER

Title: Classification of Aerial Imagery Using a Region-based Convolutional Neural Network

Thesis Advisers: Dr. Dale Hamilton, Dr. Barry Myers

This project set out to use aerial imagery from small unmanned aircraft systems (sUAS) to train a Region-based Convolutional Neural Network (R-CNN) to identify and label linear features. For this research, significant amounts of training data were generated using labelImg for rectangular object identification and labelMe for polygonal object detection. This training data was then used to retrain an R-CNN to identify and label rail grades, mine tailings, hand stacks, dirt roads, and foundations. Several pre-trained models, including ssd_mobilenet_v1_coco, faster_rcnn_inception_v2_coco, and rfcn_resnet101_coco, were used as starting points for retraining. Each of these models was designed to allow further retraining; however, each one presented roadblocks that prevented successful retraining in this experiment and caused valuable time to be wasted. In particular, Google Drive proved troublesome when moving the large amounts of data necessary for retraining, so time that could have been spent diagnosing retraining errors went instead to sending data to and from Google's servers. To counteract this, an API was developed that allows training imagery to be stored easily on the NNU servers rather than Google Drive.

JON FENN

Title: Development of the Data Extraction Utility SweetData.

Thesis Adviser: Dr. Barry Myers

SweetData is a utility that extracts data and converts it into information reports. It was built to replace an outdated utility used by Amalgamated Sugar. It is written in C# and uses a graphical user interface (GUI) instead of a console interface. Designing software is a process that can be difficult no matter how simple the job: it takes planning to be done efficiently, and what the developer envisions is often different from what the end user envisions. Using Agile development helps to alleviate that problem. Programming languages are similar, and learning new languages is easier after learning your first. A good starting point for designing an application is the GUI, which helps to visualize the framework of the application and should allow the user to navigate it without confusion. The data that needed to be extracted and processed into information resided in binary files. The information is exported to CSV (comma-separated values), text, or Excel format, and the reports show date ranges with current or YTD (year-to-date) values. The use of multithreading allowed the report to be written while other processing continued, which lowered the runtime from seconds to milliseconds. Future versions will have PDF support.
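The core extraction pattern, reading fixed-size binary records and exporting them as CSV, can be sketched as follows. SweetData itself is written in C#; this Python sketch only illustrates the general pattern, and the record layout (an integer date key plus two float values) is hypothetical, not Amalgamated Sugar's actual format.

```python
import csv
import io
import struct

# Illustrative record layout: 4-byte date key, two 4-byte floats
# (a current value and a year-to-date value). This is an assumption.
RECORD = struct.Struct('<i2f')

def binary_to_csv(raw_bytes, out_file):
    """Unpack fixed-size binary records and write them as CSV rows."""
    writer = csv.writer(out_file)
    writer.writerow(['date_key', 'current', 'ytd'])
    for offset in range(0, len(raw_bytes), RECORD.size):
        date_key, current, ytd = RECORD.unpack_from(raw_bytes, offset)
        writer.writerow([date_key, current, ytd])

# Build two sample records in memory and convert them.
raw = RECORD.pack(20190101, 12.5, 140.0) + RECORD.pack(20190102, 9.0, 149.0)
out = io.StringIO()
binary_to_csv(raw, out)
print(out.getvalue())
```

In the real utility, work like this can run on a background thread while the GUI stays responsive, which is the multithreading benefit the project reported.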

EMILY KELLY

Title: Creating a Mobile Application About Costa Rican Frogs and Toads Using React Native.

Thesis Adviser: Dr. Dale Hamilton

The purpose of this project was to create a cross-platform mobile application for Dr. John Cossel of the NNU Biology department based on his book Field Guide to the Frogs and Toads of Costa Rica. The application provides easily accessible species-identification resources to aid researchers in the field.

React Native, a framework created by Facebook, was used to create the app. React Native combines JavaScript with the React framework to develop iOS and Android apps simultaneously without using the platforms' native languages. The Expo toolkit was also used to manage and display the application. Using React Native and Expo allowed the creation of a lightweight app that is 10% the size of the eBook. Although the app is faster and uses less memory than the book, it lacks several necessary features due to limitations imposed by Expo. Developing with React Native using Expo is an easy approach for creating simple applications, but it is not as well suited to creating feature-rich apps for projects such as this one.

To use the app, go to https://bit.ly/2CnMeBr and follow the User Manual. Screenshots of the application will be shown in the seminar presentation of this project.