CREU

CREU-funded research project for the 2017-2018 academic year. This repository will host any code we develop and the blog posts required by the CREU program.



Weekly CREU Updates

Hi there, my name is Cecilia La Place, and this page will be used for the required weekly updates for the CREU program. This repo will also house any code we might develop.

9/3-9/9 - Week 1

Our first few meetings up to this point have focused on administrative tasks for the CREU, but also on understanding the scope of the research project. Professor Bansal gave us materials and tutorials to look through to familiarize ourselves with the topics ahead. We looked through the following titles and topics: Semantic Web Revisited, Linked Data, ETL Framework for Big Data Integration, XML, and RDF. I intend to return to these topics next week for a second or third read-through to gain a better understanding.

9/10-9/16 - Week 2

While I have a very surface-level understanding of our topics, I'm becoming more comfortable with looking into them. My prior research experience makes me inclined to delve deep into a large variety of papers. At our meeting this week, Professor Bansal cautioned that such an approach would be hard given how deep the topics go and how high-level the papers are at this point. She mentioned she would compile a list of papers she felt would be an appropriate start. At least I have an idea of which conferences and journals to look at once we pick up speed. We've also made a tentative plan to set up online storage for the papers we read and find. Truthfully, though, I don't think I'll have time this week to go through anything in depth due to the career fairs on both the Tempe and Polytechnic campuses, which will put me two days behind in classes. I was at least able to search Google Scholar for knowledge representation, big data, and linked data and open some links, which I intend to read in my free time. Unfortunately, a lot of the papers I found were very high level, and given the diversity of topics I questioned their relation to our focus.

9/17-9/23 - Week 3

This week Professor Bansal gave us 7-8 more papers to give us a better starting point for understanding the project. I've been having some difficulty getting through them due to the dense nature of the topic, but I've been reading them over a few times and taking notes after the initial read-through. Throughout the week I've been trying to dedicate a half hour or an hour to sitting down and reading/taking notes until I get overwhelmed, and then coming back to it later. It's been helping me get through them, but at times I feel the concepts are too easy to understand and that I'm missing something key, hence why I've gone back over them a few more times. I've even pulled in the papers given to us at the beginning to see if I can connect the concepts there and make sure I understand. I hope to get to the tutorials given to us by this weekend as well, but I'm also preparing to go to GHC (studying for technical interviews), so my focus has been mostly on the papers and will likely stay there this week.

9/24-9/30 - Week 4

After our meeting this week I was asked to mention the connections I'd drawn between the papers and our project. From the paper about Clinga, I found their method of data analysis using support vector machines (SVMs) interesting; I briefly worked with SVMs in my summer REU research program. The paper about food in open data felt particularly relevant in case we need to remake or add on to an existing ontology. Furthermore, I think that paper shows the possibilities that open up as a result of open big data, such as mobile or web applications. The paper about soil ontologies cemented the idea of using existing big data sources, but also adding on to them or reformatting them. Considering the project directly, this data source is also something to consider, or at least use as a springboard; we may want to consider soil properties, or perhaps building codes and buildability, as part of neighborhood sustainability (e.g., a home built on poor soil may have poor stability). In regards to showing an application of our potential work (this being related to work near the end of the year), the paper on illegally parked bikes in Tokyo and the paper on making MOOCs available all at once in a contained source show and detail possible ways to reach out to the public, whether through cooperation or use. I spent this week focusing on what neighborhood sustainability could mean in the context of our project. We should look at what sustainability is, perhaps in the context of a few papers or existing implementations such as the United Nations' or general community definitions in the US (http://iopscience.iop.org/article/10.1088/1757-899X/160/1/012046/meta, http://www.crcworks.org/guide.pdf). From there we might have to narrow our project to focus on specific types of data (e.g., transportation was something we discussed in this week's meeting), or find a way to incorporate things piece by piece. I likened it to building a "neighborhood" out of our data. There's honestly a lot to consider…

10/1-10/7 - Week 5

Prior to leaving for GHC I spent some time going through the tutorials posted on XML and OWL. While I was out at the Grace Hopper Celebration for three days (and traveling for one), I spent some of my time observing other undergraduates' research posters, going to research-related presentations, and more. While it was for my CRA-W GHC Research Scholars program, I did enjoy it. I found myself looking for instances in which big data appeared or was useful, whether in research or at companies. There were a few companies that focused on the security of data, or simply on hosting multitudes of data, but there were also companies that managed that data, which I found intriguing. From the presentations, I wondered how the presenters were managing multitudes of data, and in most cases found it was an algorithm, forays into machine learning, or trends drawn from the data. I'm not sure if we'll have statistical needs, because most of the algorithms were implemented in Python and R and were usually numerically based analyses. There's more analysis to be done on the posters I saw… GHC was honestly an overwhelming experience. There was a lot to learn, to see, and more. I spent a lot of time in the expo hall; admittedly I was disappointed that I couldn't find anything during my job search, but I think my background is too spread out. After my failed attempt at a job search I ended up focusing on just networking, which went a little easier. The third day, though, I spent a lot of time at the student opportunity labs learning about jobs and how to prepare for and choose them. I also took the time to talk to graduate schools and national labs. A few days after I got back I realized I'm actually terrified of industry and coding interviews, and as a result I want to stay in school and pursue a PhD. I'm still researching and deciding how I'd like to go about it, but I still have a year, thanks to the 4+1 at ASU, to figure it out.

10/8-10/14 - Week 6

This week I compiled all the documents we had been given into a spreadsheet to make the paper-writing process at the end easier. The entries had sections indicating year of publication, title, authors, main contributions, and topics covered. I also included a section for each paper's citation so we don't have to worry about that either, but I wanted to confirm the format we would be using before filling it in. There are likely more sections I can add, but those are to be determined. I then compiled some links on sustainability that Vatricia had found and put those (and their unique definitions) into another page, so that as we define things or find things we want to define using other sources, we can add them. Another tab went to technologies we had been given and some I had found out about through my recent meetups. This past Saturday I attended a meetup that had a variety of tech sessions and took note of some of the big data tools they mentioned, such as Spark Structured Streaming and Ember. While I looked more into Ember than Spark, it made me realize we needed a dedicated page for how we might go about visualizing the data we have. Preemptively, I made a section for datasets as well; knowing which ones we're using, how to download them, etc. is incredibly important. While the work that went in seems a little early, my previous research experience is what drove me to do this. For my first paper I didn't remember all the papers I'd gone through and had to find new sources to convey the same ideas that had been formulated by other papers. For my second paper I needed to understand the technologies I was using, and having a compiled list of places to refer to, teach myself from, and more was super helpful. This spreadsheet is meant to help us in the long run and create an easy way for us to find what we need without wasting time rereading and re-finding things, or scrambling in an unorganized fashion. I wanted to encourage the eventual quality of whatever we might do. In any case, the spreadsheet is still not up to date, as it has some blanks and some applications that need research done on them, but I do think it was a decent attempt to promote organization.

10/15-10/21 - Week 7

It is midterm week, but I am still trying to devote some time to catching up the spreadsheet and finding more things to add to it technology-wise. Actually, I veered away from that to try to understand Ember.js's function a bit more (it's a web framework, like AngularJS). After finding that out I started considering more definitions. During this week's meeting we mentioned scope, and I realized we should probably be looking at neighborhoods; except, what is a neighborhood? A city? This thought came up again when I was reviewing a document on sustainability and how it was being pursued in a neighborhood in Minnesota (http://www.crcworks.org/guide.pdf). Their process for defining the necessary terms was something I felt might be useful: they started with sustainability, then defined indicators, then sustainability indicators, and then neighborhood sustainability indicators. They didn't explicitly define neighborhood (as they were working with their own), but we would have to define it in our work, and we still haven't done that. So I researched some more, seeking to define things like neighborhood, city, suburb, etc., so we have knowledge of the scopes we can consider and work with. This search led me to green spaces, their necessity, and how they contribute to quality of life: parks, vegetation, and more. I think this is only a small part of what we could work on. I'm still concerned about our scope, because even focusing on a neighborhood could involve many things: green spaces, crime, housing prices and availability, household income, etc. Broadening to a city would bring in significantly more data, such as transportation, crime rates, and income rates, and it would also be easier to find said data. I wonder when we should start looking at data; the scope we decide on will likely determine that. Furthermore, what do we want to truly accomplish with our project? Will it be in parts or an overarching idea? How would we represent that solution?

10/22-10/28 - Week 8

This morning I found a paper on the workings of a city (https://www.princeton.edu/~rbenabou/papers/QJE1993.pdf); it implied that it would be discussing the link between residential choice, educational investment, and production in a city. As I skimmed it, it looked like a lot of logical math, so I will be spending this week focusing on that and the remainder of my findings from last week, perhaps looking into more green spaces or other aspects that reach a sustainability goal, though I believe we should have a unified definition of sustainability before I can progress with that. Still, it doesn't hurt to know what solutions for sustainability are out there. During today's meeting we actually addressed the need to choose a focus and direction. We chose to do a real estate application: something that would serve those who move locally searching for better locations, relocators, and other people seeking to move to Arizona, specifically Maricopa County. We chose Maricopa because we are most familiar with it and it encompasses a wide range of cities such as Mesa, Tempe, Phoenix, and more. So for the remainder of the week we all focused on finding datasets. I ended up looking at the city level for datasets instead of county or overarching datasets (like the US census). It was really interesting to find that the major cities in the county were moving toward open data policies or had something set up to be more open to the public. Unfortunately, a vast majority of the smaller cities (Surprise, Goodyear, Paradise Valley, Avondale, Cave Creek, Fountain Hills, Litchfield Park, Carefree, Tolleson, El Mirage, Laveen, Youngtown, Wickenburg, Guadalupe, Anthem, Gila Bend, Tonopah, Wittman, Central City, Fort McDowell, Alhambra, Morristown, Aguila, Arlington, Komatke, Circle City, Wintersburg, Theba, Gila Crossing, Deer Valley, Wranglers Roost, Citrus Park, Maricopa Colony) don't have open data, or if they do, it is through third-party websites that do not offer complete access to the data (so it's open to the service being used and visualized pleasantly, but not open to the public). Alternatively, some data was pretty limited, like Chandler, whose police department was the only source of datasets a shallow search turned up. However, I did see a familiar name, ArcGIS, which is a mapping tool (see? data! -> http://azgeo-azland.opendata.arcgis.com/). I think there may be some benefit in exploring that application, as we may find more data through it, or at least another way to visualize data. Beyond that, a lot of the common websites were OpenGov, OpenBooks (Arizona's financial history and plans, I believe), and Trulia (which had heat maps showing crime rate and more). I wonder if, as we explore more, we'll be able to find information through them or by using them. Open Data Network was another site I came across that may help with the smaller towns, and its API appears to just return JSON files to parse. I'll have to look into that more.

Sites to revisit: Open Data Network, OpenGov, OpenBooks, Trulia, and http://azgeo-azland.opendata.arcgis.com/.
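
Since these portals generally expose their datasets as JSON over HTTP, here is a minimal sketch of what pulling and reading one of those files might look like in Java. The endpoint URL and the field names ("city", "count") are made-up placeholders for illustration, not a real dataset we have committed to; each portal will have its own schema.

```java
// Minimal sketch: fetch a JSON dataset over HTTP and read a couple of fields.
// The URL and field names are hypothetical placeholders.
import org.json.JSONArray;
import org.json.JSONObject;

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class OpenDataFetch {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.org/data/crime.json"); // placeholder endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        try (InputStream in = conn.getInputStream();
             Scanner scanner = new Scanner(in).useDelimiter("\\A")) {
            String body = scanner.hasNext() ? scanner.next() : "";
            JSONArray records = new JSONArray(body);   // assuming the portal returns a JSON array
            for (int i = 0; i < records.length(); i++) {
                JSONObject record = records.getJSONObject(i);
                // "city" and "count" are invented field names for illustration
                System.out.println(record.optString("city") + ": " + record.optInt("count"));
            }
        } finally {
            conn.disconnect();
        }
    }
}
```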

10/29-11/4 - Week 9

With the data we found last week and the papers we'd read the previous weeks, we came together to determine what our end result would be aimed toward. The datasets we'd found ranged from crime, to locations of historical and vacant buildings, census data, income data, and much more. The papers we'd read helped us understand how we wanted to determine sustainability (or at least gave us scope for how we would define it). We decided a real estate application would be best: we wanted to reach the largest number of people, and the best way to show the sustainability of a place would be something aimed at relocating people, whether local, long-distance, or even international. I worry that we may run into information overload, but I also feel we may be able to cover a lot of people's interests or requirements by having so much data. With it we can reach families, relocators, people who move locally to be in better school districts, people starting out, people looking for jobs, etc. We then decided we needed to create an outline of our intended ontology, or at least define how we wanted to organize our datasets and how they will eventually add up to a measure of sustainability. On Sunday, we met up to do this. We have set up a spreadsheet (linked here) which details how we intend to organize the levels of information that contribute to an overall sustainability index. We begin with our three indices {social, ecological, environmental} and break each down into factors we found through the 2016 sustainable cities document, which details the factors and indicators used to determine a level of sustainability. We are guided by it and use many aspects of it, but we also combine and add other factors and indicators, such as delving into more specific types of health {mental, physical, rehabilitation}. As a result of having guided factors and indicators, we are able to find data specific to the indicators. While this week and next week will consist of that, I'm wondering how we will be able to normalize the data to a numerical format in order to determine an overall sustainability index. I still have to return to the paper I mentioned a few weeks back, and I believe that might add some insight into what we might consider doing.
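
As a placeholder for my own thinking on the normalization question, one common option is min-max scaling of each indicator to [0, 1] followed by a weighted average into a subindex. The sketch below is only an illustration of that idea, not a method we have agreed on, and all of the numbers and weights in it are invented.

```java
// Sketch of one possible normalization scheme (min-max scaling to [0, 1]
// followed by a weighted average). Not our agreed-upon method; sample values
// and weights are invented.
public class IndexSketch {

    // Scale a raw indicator value into [0, 1] given the min and max observed
    // across the cities being compared.
    static double minMaxNormalize(double value, double min, double max) {
        if (max == min) return 0.0;           // avoid division by zero
        return (value - min) / (max - min);
    }

    // Combine already-normalized indicator scores into a single subindex
    // using caller-supplied weights that sum to 1.
    static double weightedIndex(double[] normalizedScores, double[] weights) {
        double index = 0.0;
        for (int i = 0; i < normalizedScores.length; i++) {
            index += weights[i] * normalizedScores[i];
        }
        return index;
    }

    public static void main(String[] args) {
        // Hypothetical numbers: one indicator normalized against the range
        // seen across the cities under comparison.
        double parks = minMaxNormalize(12.0, 2.0, 20.0);   // ~0.56
        double transit = minMaxNormalize(0.3, 0.0, 1.0);   // 0.3
        double subindex = weightedIndex(new double[]{parks, transit},
                                        new double[]{0.5, 0.5});
        System.out.println("Example subindex: " + subindex);
    }
}
```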

11/5-11/11 - Week 10

At this week's meeting we came to the conclusion that the semester was picking up and that next week's meeting would be our last for the semester. Essentially, we will need to figure out what to do on our own until we meet again next semester. With that in the back of my head, I progressed through this week focusing on looking for databases again and trying to see where they would fit with our described ontology. I realized I actually have no experience working with databases, and I wonder if I can use some of these as practice; I'm going to put that on my to-do list for over the break. Moving on, I realized there was the issue of data trickling down, where detail is missing because data is available for most large cities but not for smaller ones. I don't think accounting for that will be all that tricky if it's as simple as a trickle-down. But what about people wanting the surrounding area, or an overview at a higher level? That leads me to wonder about our target audience and users… Lots to think about this week, honestly. And to be fair, my week is fairly busy, as I have project milestones due next week in my capstone class, my educational gaming class, and my honors thesis, so this week has also been an exercise in time management. As for next week, I want to continue the dataset search, but also look and see if there are papers that address normalizing data. We keep saying we're building toward a sustainability rating, and while a ranking system is the obvious method, I wonder about other ways we could do this. I also wonder how we're marketing this to people (continuing off the earlier thought about the intended users). Should it be a website? A mobile application? Something else? A mobile app would be an intriguing way to represent the information, and it would be highly accessible in this day and age. Furthermore, thanks to my capstone and thesis, I will have some pretty decent Android experience when the time comes. I might also look into existing applications (mobile, web, or otherwise) in which ontologies are represented, in order to be informed when the time comes so we can hit the ground running.

11/12-11/18 - Week 11

At this week's meeting we mostly went over the newly built ontology and minor questions. We've decided on a ranking system for now, and resolved the problem of missing information: we will either look at a bigger scope (such as going from a city level to a county or state level) or at the surrounding areas and possibly average them. As for future work, we will conduct meetings or post updates over Slack. The latest problem is datasets that do not have APIs and require repeatedly downloading files for updated information, or datasets that are released as new files. I want to take a look at how we can maybe solve that programmatically, or at least make it a little easier for us to handle. There are also the datasets mentioned that have thousands of variables. As the semester calms down (when it ends) we will probably be able to dedicate some time to manually going through them, which I'm definitely okay with, as we determine which datasets we need to do so with. I still want to continue with my ideas and plans from the prior week; they seem like interesting paths to follow and a way to round out my knowledge. I unfortunately still feel like our project, while defined and given direction, is missing pieces. I'm not sure what specifically, but as I determine them I will bring them up in future conversations.

11/19-11/25 - Week 12

I tried to understand the paper I had found a while back that mentioned what relocating people are looking for, but I wasn't able to get very far between it being Thanksgiving and tests going on this week.

11/26-12/2 - Week 13

Unfortunately this week contained a lot of project due dates and I personally have a research paper due for WACV that I am trying to complete for the 12/2 deadline.

12/3-12/9 - Week 14

This week was finals and more deadlines and I am working on the supplementary aspect of the WACV submission for 12/8.

To make up for the slower pace of the last three weeks, I planned to spend much of my winter break learning web scraping (for the datasets that are not easily accessible), learning about databases and how to use them (and how to store data in them for web or mobile), and practicing developing an Android application. I have already been looking through tutorials and had to develop an application for another class (though rudimentary at best), and I intend to improve it into a functional and efficient application. I will also be reviewing web development so I am prepared for whichever direction we decide to go. My hope is that most of these skills will be applied to our project once we decide how we will approach the presentation of our data.

12/10-1/20 - Break until first meeting

From this point it was winter break, and the only work done was on learning Android development. We did not have our first meeting of the semester until the week following this long break.

1/21-1/27 - First meeting of the semester

This week we had our first meeting of the semester. We decided we wanted to develop a poster and begin the paper, and we highlighted tasks for the week: we would meet on Sunday to finish the ontology, lay out the paper, and decide on the UI and how it might look. My task for the following week is to digitize the wireframes we came up with for both the web and mobile applications, and then put them through Photoshop to give them life and substance. I'm a lot more interested now that we're finally working toward development work. I know I need to look into database management for Android, though; Julia is supposed to look into what she wants to use for the web application, and from there I can look into it from an Android standpoint and work toward developing a parallel application. I'm really excited.

1/28-2/3

I got the wireframes and UIs done! Next is a prototype for the Android app. I'm wondering if the colors I'm using for the circles need to be consistent with an existing standard online, but I don't know if there is one and will ask during next week's meeting. We weren't able to meet separately this Sunday just because we all had group project work going on, so things are a little behind in that regard. Wireframes - https://drive.google.com/a/asu.edu/file/d/19KFKmNGSbcVGJzh7nHVMtTwJgfvDI1eE/view?usp=sharing UI designs - https://drive.google.com/a/asu.edu/file/d/1sxeW02XdDglblI2XsdwoOyj1L41qChQX/view?usp=sharing

2/4-2/10

This week I'll be making edits to the UIs I've made and hopefully begin the Android prototype so we have something tangible to play with. Based on our meeting, it looks like I should add a dropdown menu in the toolbar and an app title for the mobile application. As for the web design, I just need to add labels to the circles. As for my colors question from last week, Vatricia mentioned there is a consistency, but it looks like it's more of a social convention than a declared standard; I'll have to look into it regardless. Professor Bansal also sent out some links for us to look over, which I'll be doing this weekend so I can start working on getting data.

2/11-2/17

Admittedly, last week was quite busy between course homework (from AI to game development using shaders to Android development; my brain is being stretched in a lot of directions and it can be a bit much), my honors thesis (more Android development), and other volunteer commitments (K-12 outreach). I'm also trying to find a research lab to work in for my Master's year, so time has been scarce. But thankfully I was able to at least start working on a general prototype for the mobile application; it's now on my GitHub! I've only gotten the skeleton of how things will be displayed so far. Professor Bansal also sent us some information to read on Linked Data and examples of implementations. As a result, I had to refine how I envision the app and how I'm going to develop it. My initial approach was just pulling from a database (another thing I need to look at, but I'm waiting for the team to decide which database to work with before I learn how to use it with Android) and populating a list. But I realize now that I'll have to get the list display of information working with some sample values before I can even work with the eventual database; it would introduce too many unpredictable errors if I were developing the display along with real-time data. I think by doing this I'll also be able to stagger my development so that it's done incrementally, and I can still help with data scraping/gathering and adding to the database we'll eventually create.
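
As a rough sketch of that "sample values first" idea, the Android snippet below backs a simple list with hard-coded placeholder entries so the layout can be finished before any database or server exists. The class name and the placeholder strings are my own for illustration, not part of the actual prototype.

```java
// Sketch: back the list with hard-coded placeholder entries so the layout can
// be built and tested before real data is available. Names are placeholders.
import android.app.Activity;
import android.os.Bundle;
import android.widget.ArrayAdapter;
import android.widget.ListView;

public class IndicatorListActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Placeholder indicator entries; eventually replaced by rows pulled
        // from whatever database/endpoint the team settles on.
        String[] sampleIndicators = {
                "Crime rate: sample value",
                "Green space per capita: sample value",
                "Median household income: sample value"
        };

        ListView list = new ListView(this);
        list.setAdapter(new ArrayAdapter<>(this,
                android.R.layout.simple_list_item_1, sampleIndicators));
        setContentView(list);
    }
}
```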

2/18 - 2/24

This week has been a reading week, since next week is midterms and I have some particularly hard ones that I'm quite worried about. I spent my time studying SPARQL in preparation for us having data and a database to query. It seems like we're using Jena, and Professor Bansal pointed us to the query language that works with it (SPARQL). I've never worked with query languages before, so it's taking longer than I had expected to understand; I'm mostly just trying to wrap my head around its possible functions and how we might query for data. I have also spent some time looking at Android documentation again so I can continue with the next stage of the prototype I'm building, or at least one functional part. Given that I've never even touched some of the pieces I've decided to use, I'd like to experiment a bit before committing to a single UI approach. Unfortunately, the rest of my week is dedicated to studying for my AI midterm; the math has me struggling quite a bit.
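
To make the SPARQL studying more concrete, here is a sketch of the shape of a SELECT query and how Jena's query API runs one against an endpoint. The endpoint URL assumes a local Fuseki server with a dataset named "ds", and the prefix and property names in the query are hypothetical; our real ontology will define its own vocabulary.

```java
// Sketch of running a SPARQL SELECT query with Apache Jena. Endpoint and
// property names are hypothetical.
import org.apache.jena.query.Query;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;

public class SparqlPractice {
    public static void main(String[] args) {
        String queryString =
                "PREFIX ex: <http://example.org/sustainability#>\n" +
                "SELECT ?city ?crimeRate WHERE {\n" +
                "  ?city ex:hasIndicator ?indicator .\n" +
                "  ?indicator ex:crimeRate ?crimeRate .\n" +
                "}";

        Query query = QueryFactory.create(queryString);

        // Assumes a Fuseki server running locally with a dataset named "ds".
        try (QueryExecution exec = QueryExecutionFactory.sparqlService(
                "http://localhost:3030/ds/query", query)) {
            ResultSet results = exec.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.get("city") + " -> " + row.get("crimeRate"));
            }
        }
    }
}
```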

2/25 - 3/3

Unfortunately, this week consisted entirely of studying for my midterms and completing assignments. However, once this week is up we are on spring break, which I plan on using to work on the Android application and continue understanding SPARQL. I have other responsibilities for other assignments, but because they are also Android applications, my hope is to do work on one and also be able to do work on this one if they share the same components, cutting some development time down.

3/4 - 3/10

It is currently spring break for us, and we are trying to submit a poster to GHC in time for the due date, so today (3/4) I am working on reviewing the paper we have up to this point and adding the current development process to it. For the remainder of the week I worked on our Android application, linking the activities and understanding swipe tab views. From there I focused on finding the information needed to adjust a lot of the visual aspects of our application; for example, the buttons should be rounded and not square, and thankfully it's a decently easy fix. However, I'm still having trouble getting swipe tabs to work properly, so I'm also looking into buttons linking to individual tabs (so the user can jump to a subindex and still be able to swipe through its components). I realize that I don't have much of a sample of what our output could be, and I think I'll spend next week focusing on understanding how the server will come into play in the application.

3/11 - 3/17

During this week we were unable to have our meeting; however, I continued to work on the application and also realized that I should start looking into accessing our database. Unfortunately, Jena-Fuseki doesn't quite mesh well with Android; there are a few fixes, but none are recent or recently updated. For now, I will put that research aside to focus on at least parsing the JSON files that will be received from Jena-Fuseki. I've created a sample file that can be adjusted but has the general structure of a SPARQL query result. My time management will need to be better this week, as the following two weeks are quite busy preparing for my thesis defense, midterms, and other homework and large projects.
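
For reference, Fuseki returns SELECT results in the standard SPARQL 1.1 JSON results format, with a "head.vars" list and a "results.bindings" array. Below is a sketch of pulling the bound values out with org.json (which ships with Android); the embedded sample string and its variable names and values are made up for illustration, not our actual data.

```java
// Sketch of parsing the standard SPARQL JSON results format
// ({"head":{"vars":[...]},"results":{"bindings":[...]}}) with org.json.
// The sample string, variable names, and values are invented.
import org.json.JSONArray;
import org.json.JSONObject;

public class SparqlJsonParse {
    public static void main(String[] args) {
        String sample =
                "{ \"head\": { \"vars\": [\"city\", \"crimeRate\"] },"
              + "  \"results\": { \"bindings\": ["
              + "    { \"city\": { \"type\": \"literal\", \"value\": \"SampleCity\" },"
              + "      \"crimeRate\": { \"type\": \"literal\", \"value\": \"3.2\" } }"
              + "  ] } }";

        JSONObject root = new JSONObject(sample);
        JSONArray vars = root.getJSONObject("head").getJSONArray("vars");
        JSONArray bindings = root.getJSONObject("results").getJSONArray("bindings");

        // Each binding maps a variable name to an object whose "value" field
        // holds the actual result string.
        for (int i = 0; i < bindings.length(); i++) {
            JSONObject row = bindings.getJSONObject(i);
            for (int v = 0; v < vars.length(); v++) {
                String var = vars.getString(v);
                System.out.println(var + " = " + row.getJSONObject(var).getString("value"));
            }
        }
    }
}
```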

3/18 - 3/24

I decided I would take a break from Android development this week (I've had to do a lot between capstone and thesis alone and would like to clear my head a bit), or at least push my efforts on it to later this week. Instead, I've been focusing on finding resources on linking Jena-Fuseki and Android. There were a few I found while adding the development process to the paper that I was unable to look through at the time, so I sat down and read them. Unfortunately, my understanding of servers is limited, as the last time I took a related class was two years ago and quite a lot has changed between then and now. I plan on skimming through Julia's blog to see if she wrote about any insights or best practices while getting the server working on her side, so I can speed up my learning. As can be seen below, there are a couple of different references, but it seems like updates aren't coming in the near future, and I should probably be prepared to figure out a lot of the weird things myself. At the very least, I understand that a majority of the incompatibilities come from different Java compilers, the packages used, and the many changes needed just to get the program to cooperate. However, based on a project from 5-7 years ago, SPARQL queries may not be an easy thing to work with, and I need to delve into that to understand what it's referring to. It seems there is beta work on porting ARQ to Android; whether that has changed or not, I haven't delved into yet.

http://users.jena.apache.narkive.com/N6pRlaOh/apache-jena-for-android https://github.com/sbrunk/jena-android https://github.com/lencinhaus/androjena

3/25 - 3/31

This week I was prepping for my thesis defense at the end of the week. However, I focused on making the Android application adjust better to different API levels, because it turns out the layout can be overcrowded depending on the API level. Vatricia uploaded the finalized ontology, so I met up with her to make sure I understood enough about the layout to format the queries. We did run into problems, though: the server isn't quite set up to test queries, and the queries can't be properly formatted or tested just yet due to the lack of a server and of data in the ontology. Instead, I focused on fitting the general layout of the ontology to a basic SPARQL query to be used to gather information as a testing procedure once the server is set up. From there I felt we could expand what we need in the query incrementally. It is at this point we agreed to join the Innovation Showcase (a poster and product showcase held by our college) to display our work over the semester.

4/1 - 4/7

We have a poster session coming up at the ASU Innovation Showcase, and we need to complete the poster submission in time, so our weekly meeting briefly covered how to approach the poster. We decided upon a three-column approach: information relevant to the project would be covered in the first column, the second would house visuals of the project (a subsection of the ontology, a diagram of how the work is connected, etc.), and the third would discuss the algorithms we developed to shape the ranking and rating system, along with future work. For the next portion of the meeting we looked at some points of data that were questionable; by that I mean we weren't sure how we could accurately rate the data being returned. We resolved one such instance concerning age diversity, but struggled to come up with how to represent cultural diversity until Vatricia proposed a solution. For age diversity, we would look at the age ranges and their population counts and compare their distributions to determine whether they indicate stable population growth (a relatively even distribution of ages). For cultural diversity, Vatricia suggested that the diversity of a town should be compared against the overall diversity of the US: the closer a town's distribution is to the national one, the more culturally diverse it is on a local stage.
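
One way I can imagine turning "how close is the town's distribution to the national one" into a number is summing the absolute differences between the two sets of proportions and mapping that distance to a 0-1 similarity score. The sketch below is only my reading of the idea, not the algorithm we finalized, and all of the sample proportions are invented.

```java
// Sketch: compare a town's demographic proportions to a national reference
// distribution using total variation distance, then flip it so 1.0 means a
// perfect match. Illustrative only; sample proportions are invented.
public class DiversityScoreSketch {

    // Proportions in each array should sum to 1.0 and share the same category order.
    static double similarity(double[] localShare, double[] referenceShare) {
        double distance = 0.0;
        for (int i = 0; i < localShare.length; i++) {
            distance += Math.abs(localShare[i] - referenceShare[i]);
        }
        // Total variation distance is distance / 2, which lies in [0, 1];
        // subtract from 1 so that 1.0 means "matches the reference exactly".
        return 1.0 - distance / 2.0;
    }

    public static void main(String[] args) {
        double[] town = {0.60, 0.25, 0.10, 0.05};      // invented local shares
        double[] national = {0.55, 0.20, 0.15, 0.10};  // invented reference shares
        System.out.println("Cultural diversity score: " + similarity(town, national));
    }
}
```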

4/8 - 4/14

For this week's meeting it was just myself and Professor Bansal. I asked for a more hands-on approach to working with SPARQL queries, because I was still having trouble applying what I'd learned (theoretically) to our current work; she was able to send me materials by the end of the week. Before then, though, I had a poster to put together. I ended up filling in most of the content as placeholders while I waited for Julia and Vatricia to fill in their parts, and tried to make sure it was nice design-wise. Everything was put together by Saturday, although most of Sunday (the due date) was spent on revisions: more graphic changes, and in some places we needed to add more content. Professor Bansal stepped in later to offer more critique. Admittedly, it wasn't my best work; I'm really exhausted from having to handle a lot of capstone work the last two days since our sprint ends tomorrow. After a lot more revisions it's done! You can find it here. I wanted to include screenshots of the application, but I think they would detract from the color scheme and professionalism of the poster, so I only showed the design. The app should look pretty similar to the design by the time the showcase occurs. Poster - https://drive.google.com/a/asu.edu/file/d/1uDnX6adebOXyfZKujruxGSpk_rGVMIn5/view?usp=sharing

4/15 - 4/21

This week we focused on the ranking system during the meeting. There was a lot of confusion last week about how we were ranking things, as it seemed convoluted. As is the case with research, things are always in flux and subject to change (or iterative development). I spent the majority of my time this week looking into how to better structure the queries for the ontology so we can grab data.

4/22 - 4/28

This week is the Innovation Showcase. I wasn't able to go to the regular meeting this week, but we had a make-up meeting that evening to discuss how we wanted to approach the showcase. We discussed the types of people we might interact with and how we could best address them. For example, those with no experience may ask vague questions about our interest in the project, our difficulties, etc., whereas those with plenty of experience can be conversed with and prompted to give insight on areas where we had trouble due to lack of expertise. Furthermore, I spent this week finishing up the app enough that we could have a demo for the showcase. Since there has been trouble getting data and querying the ontology from the server, and there will undoubtedly be problems with the app connecting to the server, I opted to change the application a little bit: it will serve as a talking point for explaining the ontology on one of its pages, and it will also show an example of most of the app's functionality in action. The idea is to give us a way to explain our project better with visuals, and also to let visitors enjoy learning about our project.

4/29 - 5/6

This week is finals week for the university, and I am quite occupied finishing the remaining projects and assignments that are also due this week so I can study for my exams. However, we all spent time doing the final write-up for the CREU this week. I would like to discuss instead how well the application worked out for the Innovation Showcase. It served as an integral and immersive way to explain our year's work and where it is intended to go next. There are minor revisions that should be made to the app, such as adding labels to the buttons and some way to indicate that they should be tapped; I noticed people weren't able to figure out that they were buttons. There should also be labels at the head of each tab to indicate which subindex the factors/indicators on the page belong to. Aside from those, I felt it was a great prototype. All that remains to be implemented is the connection to the server, the queries to send, and the parsing of returned data for the app.

End

Thank you for reading, I really enjoyed this research project. It was an adventure into the unknown, and a fantastic experience with great team members.