<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:media="http://search.yahoo.com/mrss/"
	>

<channel>
	<title>bourai</title>
	<link>https://abourai.com</link>
	<description>bourai</description>
	<pubDate>Fri, 07 Dec 2018 07:14:08 +0000</pubDate>
	<generator>https://abourai.com</generator>
	<language>en</language>
	
		
	<item>
		<title>Multimodal QA</title>
				
		<link>http://abourai.com/Multimodal-QA</link>

		<comments></comments>

		<pubDate>Fri, 07 Dec 2018 07:14:08 +0000</pubDate>

		<dc:creator>bourai</dc:creator>
		
		<category><![CDATA[]]></category>

		<guid isPermaLink="false">301605</guid>

		<description>Multimodal Question Answering
This was a research collaboration between Oath (the artist formerly known as Yahoo!) and the Language Technologies Institute at Carnegie Mellon, where I worked in a group headed by Professors Alexander Hauptmann, Robert Frederking, and Eric Nyberg. Below is the research abstract, but I think you’ll find the poster more enjoyable (here). I am a big proponent of interactive machine learning systems, and this project was one of many such research projects I was able to work on during my time at LTI.
We present an integrated multimodal question answering (QA) system through two case studies. The first focuses on Flickr data, where the answers rely on the text description, on the actual content of the photos or videos, or on both modalities. The second is a more general question answering system relying on Yahoo! Answers. The front end integrates both of these services into an Android app capable of receiving a question in written or spoken form. The multimodal system analyzes the question through a pipeline that resides on an external server and answers back with the most likely answers from both modalities. The best candidate answers are presented to the user as a list of cards containing the answer text and the associated multimedia. We present cases where the right answer is contained in the multimedia description, a case where the correct answer lies within the content of the multimedia because no description is available, and a final case where both modalities enrich the answer to the user. The Yahoo! Answers QA part of the system follows a similar approach but does not rely on modalities outside of text. Finally, we demonstrate a user feedback system that lets the user swipe away incorrect or irrelevant answers to improve our models.
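For a sense of how the pieces fit together, here is a rough Python sketch of the merge-and-feedback step (purely illustrative; the class, function names, and scoring are hypothetical, not the code we actually deployed):

from dataclasses import dataclass

@dataclass
class AnswerCard:
    text: str         # answer text shown on the card
    media_url: str    # associated photo or video, if any
    score: float      # relevance score from its retrieval model
    source: str       # "flickr" or "yahoo_answers"

def merge_answers(flickr_hits, yahoo_hits, top_k=5):
    # Pool candidates from both modalities and keep the best-scoring cards.
    cards = flickr_hits + yahoo_hits
    cards.sort(key=lambda c: c.score, reverse=True)
    return cards[:top_k]

def record_swipe(card, kept):
    # User feedback: a swiped-away card becomes a negative training example,
    # a kept card a positive one, for later re-training of the rankers.
    return {"source": card.source, "text": card.text, "label": 1 if kept else 0}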
</description>
		
		<excerpt>Multimodal Question Answering This was a research collaboration between Oath (the artist formerly known as Yahoo!) and the Language Technologies Institute at...</excerpt>

		<!--<wfw:commentRss></wfw:commentRss>-->

	</item>
		
		
	<item>
		<title>fMRI Visualization</title>
				
		<link>http://abourai.com/fMRI-Visualization</link>

		<comments></comments>

		<pubDate>Fri, 07 Dec 2018 06:38:35 +0000</pubDate>

		<dc:creator>bourai</dc:creator>
		
		<category><![CDATA[]]></category>

		<guid isPermaLink="false">301598</guid>

		<description>Interactive Web-Based fMRI Visualization Tool
My first-ever large-scale software project is still the one I hold dearest to my heart. I worked with an incredible team at the NASA Jet Propulsion Laboratory’s Human Interfaces Group to create a web-based tool for neuroscientists at Caltech to upload fMRI data and visualize their results. I was the software lead on this project and worked closely with both an interaction designer and neuroscience faculty to complete this visualization tool. I’ve attached a short video of the tool below, but please do read the excellent summary my design partner, Sarah Churng, created here.


</description>
		
		<excerpt>Interactive Web-Based fMRI Visualization Tool&#38;nbsp; My first ever large-scale software project is still the one I hold dearest to my heart. I worked with an...</excerpt>

		<!--<wfw:commentRss></wfw:commentRss>-->

	</item>
		
		
	<item>
		<title>Dinosaurs</title>
				
		<link>http://abourai.com/Dinosaurs</link>

		<comments></comments>

		<pubDate>Fri, 24 Nov 2017 19:19:50 +0000</pubDate>

		<dc:creator>bourai</dc:creator>
		
		<category><![CDATA[]]></category>

		<guid isPermaLink="false">232198</guid>

		<description>I Taught a Class on Dinosaurs Once at Carnegie Mellon
I don’t have much more to say than that. I had fun, the students had fun, and we got to visit a natural history museum to see some behind-the-scenes fossils. Oh, and we had guest lectures and got to hang out with an awesome Pittsburgh paleontologist who can’t stop finding new species. Don’t believe me? Here’s the syllabus.</description>
		
		<excerpt>I Taught a Class on Dinosaurs Once at Carnegie Mellon I don’t have much more to say than that. I had fun, the students had fun, and we got to visit a natural...</excerpt>

		<!--<wfw:commentRss></wfw:commentRss>-->

	</item>
		
		
	<item>
		<title>UDBS</title>
				
		<link>http://abourai.com/UDBS</link>

		<comments></comments>

		<pubDate>Fri, 24 Nov 2017 00:30:42 +0000</pubDate>

		<dc:creator>bourai</dc:creator>
		
		<category><![CDATA[]]></category>

		<guid isPermaLink="false">232079</guid>

		<description>Using Virtual Reality Systems to Improve Architecture Design&#60;img width="1500" height="844" width_o="1500" height_o="844" src_o="https://cortex.persona.co/t/original/i/cd604393370dcff5c0b8405d6824075656663c5e12807164d4546d7423afd572/NavADAPT_2.3_Hydra-Data-Overlay.jpg" data-mid="330305" border="0" data-scale="85"/&#62;
Motion data captured from our Kinect system allows architects to have a clearer understanding of user needs, especially for underserved groups such as disabled patrons (image from UDBS site).
During my senior year, I was involved in the first prototype of the CMU School of Architecture’s ADAPTIVE KITCHEN project. This was a partnership between the Urban Design Build Studio and a collection of reality computing students. From the UDBS site: “The ADAPTIVE KITCHEN seeks to serve those with lower limb loss, visual impairment, and limited mobility, which are some of the most prominent injuries among veterans. As a working laboratory exploring the potentials of reality computing and emerging technologies in the design of human-centered environments, the NavADAPT LAB was developed to iteratively prototype and test components of adaptive and augmented spaces for application into the ADAPTIVE Kitchen.”
My core contributions were twofold:
1. Implementing the Kinect motion capture system to create heatmaps of user activity within a space
2. An augmented reality tool to allow architects and urban planners to explore neighborhoods and structures using Google’s Project Tango environment
Motion Capture System:
  Activity heatmap generation examples can be seen above. We conducted a series of studies where users were instructed to complete tasks in our kitchen prototype under different conditions.
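As a simplified illustration of the heatmap generation (hypothetical code; it assumes floor-plane (x, z) positions in meters have already been extracted from the Kinect skeleton stream, and the room extent is made up):

import numpy as np

def activity_heatmap(positions, extent=(0.0, 4.0, 0.0, 3.0), bins=(80, 60)):
    # positions: list of (x, z) floor-plane coordinates, one per tracked frame.
    xs = [p[0] for p in positions]
    zs = [p[1] for p in positions]
    # Accumulate how often each floor cell was occupied over the session.
    heat, _, _ = np.histogram2d(
        xs, zs, bins=bins,
        range=[[extent[0], extent[1]], [extent[2], extent[3]]])
    # Normalize so sessions of different lengths are comparable.
    return heat / max(heat.sum(), 1.0)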

Tango Tablet for Virtual Exploration
&#60;img width="1920" height="1080" width_o="1920" height_o="1080" src_o="https://cortex.persona.co/t/original/i/ee40f390962cc55a4ddbcd41ce4e567691650f77792de3e6be06004e7ae6684b/pointcloud-screenshot-1.png" data-mid="330306" border="0" data-scale="54"/&#62;One of the generated pointclouds from Autodesk

One of our collaborators was Autodesk, who donated their time and instruments to help us generate extremely detailed point clouds of the buildings within our target neighborhood. Using a Project Tango (RIP) tablet, I prototyped a tool that allows architects, city planners, and other stakeholders to take a virtual walk-through of any pre-mapped location. I was also able to extract the building renderings my architect colleagues were designing and add them into the mapped neighborhood, so they could “experience” the impact their proposals would have on the area.
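A minimal sketch of the walk-through idea (hypothetical: it assumes the neighborhood scan and a proposed building model exist as files, and uses Open3D on a desktop rather than the original Tango stack):

import open3d as o3d

# Load the pre-mapped neighborhood point cloud and a proposed building model.
neighborhood = o3d.io.read_point_cloud("neighborhood_scan.ply")  # hypothetical file
proposal = o3d.io.read_triangle_mesh("proposed_building.obj")    # hypothetical file
proposal.compute_vertex_normals()
proposal.translate((12.0, 0.0, -4.0))  # place the design at its planned site

# View both together so planners can "experience" the proposal in context.
o3d.visualization.draw_geometries([neighborhood, proposal])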
The final prototype in use!
</description>
		
		<excerpt>Using Virtual Reality Systems to Improve Architecture Design Motion data captured from our Kinect system allows architects to have a clearer understanding of user...</excerpt>

		<!--<wfw:commentRss></wfw:commentRss>-->

	</item>
		
		
	<item>
		<title>Tell Me a Story</title>
				
		<link>http://abourai.com/Tell-Me-a-Story</link>

		<comments></comments>

		<pubDate>Wed, 22 Nov 2017 04:53:58 +0000</pubDate>

		<dc:creator>bourai</dc:creator>
		
		<category><![CDATA[]]></category>

		<guid isPermaLink="false">231598</guid>

		<description>Tell Me a Story
&#60;img width="2048" height="1365" width_o="2048" height_o="1365" src_o="https://cortex.persona.co/t/original/i/6279c525d885439f8e835cc1caa6f1117bc5b9de61cc896773757402029a2416/12314656_10153717993212567_8577862639903252538_o.jpg" data-mid="599635" border="0" /&#62;Storytelling is a central part of human history, spanning from the oral tales of the griots to cave paintings to traveling bards. The human brain is incredibly adept at hearing or reading these stories and recreating the sensations needed to truly experience the vivid details of a masterfully-told story. This exhibit attempts to capture the user's story, related orally, and project back the regions of the brain activated by their tale. Specifically, it was designed for pre-kindergartners from Carnegie Mellon’s Children’s School who were learning about the brain.&#38;nbsp;
&#60;img width="1512" height="692" width_o="1512" height_o="692" src_o="https://cortex.persona.co/t/original/i/88252fae3a0ff761117986e52cd05d897b4575974ed64c1590f4881464b62014/Screen-Shot-2017-11-21-at-11.55.46-PM.png" data-mid="599636" border="0" /&#62;
A 3D brain mesh floats (you may recognize it from the background of this site!) as the child begins to retell their busy and exciting weekend. Perhaps they ran into Jenny at the park, or had a scary dream about a coyote (true story), or, better yet, adopted a puppy. The exhibit was able to pick up on a few key features: the content of their story and the pitch contours of their voice. If the child mentioned they were scared, the amygdala would be highlighted (the amygdala is involved in fear modulation). If they were retelling a story about the park, the hippocampus would light up (the hippocampus is heavily linked to spatial memory). This same simple association was done with other active verbs, pronouns, etc., to map these stories to regions of the brain. A simple word embedding was used for this. For the “excitement” detection, I extracted the pitch contour from the audio and looked at very simple features like the range of the pitch (the wider the pitch range, the more excited the voice sounds). If you find this type of analysis exciting, check out this paper I worked on 9 months after this project.
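As a rough illustration of the pitch-range feature and the keyword-to-region mapping (a hypothetical sketch using librosa, not the exhibit's actual code; the keyword table is made up):

import librosa
import numpy as np

def excitement_score(wav_path):
    # Load the recorded story and estimate the pitch (F0) contour with pYIN.
    y, sr = librosa.load(wav_path, sr=16000)
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)
    f0 = f0[~np.isnan(f0)]  # keep only voiced frames
    # A crude excitement proxy: the range of the pitch contour in Hz.
    return float(f0.max() - f0.min()) if f0.size else 0.0

# Hypothetical keyword-to-region table; the real exhibit used word embeddings
# to generalize beyond exact keyword matches.
REGIONS = {"scared": "amygdala", "park": "hippocampus", "ran": "motor cortex"}

def regions_for(transcript):
    words = transcript.lower().split()
    return sorted({REGIONS[w] for w in words if w in REGIONS})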

As any educator can tell you, there is something special about watching a student’s eyes widen as they grasp a concept. While I can’t pretend that these 3- to 5-year-olds could suddenly point out where the hippocampus was with pinpoint precision, nor can I prove my model was that precise (it hilariously mapped a friend’s story about buying yogurt from Trader Joe’s to fear), the children were asking in-depth questions about an organ they had seldom thought about before.

</description>
		
		<excerpt>Tell Me a Story Storytelling is a central part of human history, spanning from the oral tales of the griots to cave paintings to traveling bards. The human brain is...</excerpt>

		<!--<wfw:commentRss></wfw:commentRss>-->

	</item>
		
		
	<item>
		<title>Portfolio</title>
				
		<link>http://abourai.com/Portfolio</link>

		<comments></comments>

		<pubDate>Wed, 22 Nov 2017 04:28:43 +0000</pubDate>

		<dc:creator>bourai</dc:creator>
		
		<category><![CDATA[]]></category>

		<guid isPermaLink="false">231593</guid>

		<description>Portfolio
A selection of works I’ve done over the years
&#60;img width="2048" height="1365" width_o="2048" height_o="1365" src_o="https://cortex.persona.co/t/original/i/7c33fd90994ffbb6491dd953cca93838bfa148897d893ada8a85e8359d3cc737/12304315_10153717992472567_8026705207253553050_o.jpg" data-mid="329517" border="0" /&#62;
Tell Me a Story&#60;img width="2858" height="1500" width_o="2858" height_o="1500" src_o="https://cortex.persona.co/t/original/i/dfd483bf2f46b788778f5afdaa95e103a2dc37e7f6f5f9af365005db5a042c9a/brain_viz.png" data-mid="329526" border="0" data-scale="80"/&#62;
Interactive fMRI Visualization Tool
&#60;img src="https://cortex.persona.co/w/1500/i/cd604393370dcff5c0b8405d6824075656663c5e12807164d4546d7423afd572/NavADAPT_2.3_Hydra-Data-Overlay.jpg"&#62;

UDBS

&#60;img src="https://lh4.googleusercontent.com/265Coh7avbo4Pc8SXPW_1ZHcU7bck9061b0MWFkuuMwI1rPeUNcA0FdEL2zpW74W2OIdp6braywcFj2SVeiv6u1AyhHEUb3k-2Gkcr85vnsj8SBcSmg2sx7qvxSRXjvgHg" width="206" height="414" style="width: 206px; height: 414px;"&#62; &#38;nbsp; &#38;nbsp;&#60;img src="https://lh5.googleusercontent.com/aNeQ72_A1Yy_oCTzH9ByKq32NUZFEf1LNfpfZDuyqekOgAoTR2uQGb6_SITfJCMCg9-L3Sz6QrUwBdYN1aLLtNRnJeOM3WSlPr5QkzjoxnUNn8J6LA0tFtfu2Y7R_yrrrw" width="207" height="421" style="width: 207px; height: 421px;"&#62; &#38;nbsp;&#38;nbsp; &#60;img src="https://lh6.googleusercontent.com/Y8vA-Dwh_f7CYfskQ0FKvE78zqBqRYJRxF5IIdfbMOt7voyvh_mq7Nk4SSgj_7vvAY2rbY7zeVzthTqVKKyCG1nn1wn7SW3mSAmRBR4kFuwTMrt8aadCmDeGmJATCB8HBw" width="203" height="408" style="width: 203px; height: 408px;"&#62;

Multimodal Question Answering


&#60;img width="1892" height="929" width_o="1892" height_o="929" src_o="https://cortex.persona.co/t/original/i/f684f5af4053fc7fbb20b84e6249f4898f0359754419ff9b1546b3c77d92ed48/Screen-Shot-2017-11-22-at-12.14.23-AM.png" data-mid="329527" border="0" data-scale="80"/&#62;
Facebook Live Video Highlight Generation



&#60;img width="1911" height="716" width_o="1911" height_o="716" src_o="https://cortex.persona.co/t/original/i/d8bed5f516346f3eb5e5c704b1444fb766ad1e740e3a1e8fce7eeabd68f9f3ef/Screen-Shot-2017-11-21-at-11.47.08-PM.png" data-mid="329522" border="0" data-scale="80"/&#62;
Agnosia Exhibit
</description>
		
		<excerpt>Portfolio A selection of works I’ve done over the years  Tell Me a Story Interactive fMRI Visualization Tool   UDBS   &#38;nbsp; &#38;nbsp; &#38;nbsp;&#38;nbsp;   Multimodal...</excerpt>

		<!--<wfw:commentRss></wfw:commentRss>-->

	</item>
		
		
	<item>
		<title>About</title>
				
		<link>http://abourai.com/About</link>

		<comments></comments>

		<pubDate>Wed, 22 Nov 2017 03:57:52 +0000</pubDate>

		<dc:creator>bourai</dc:creator>
		
		<category><![CDATA[]]></category>

		<guid isPermaLink="false">231589</guid>

		<description>&#60;img width="300" height="449" width_o="300" height_o="449" src_o="https://cortex.persona.co/t/original/i/fbd96ffe42a3ddcded745d3ae934d030ff0c73fe2ccc99e859e7491905b97196/12033113_709567245839929_2356672398304477429_n-1.jpg" data-mid="329516" border="0" /&#62;</description>
		
		<excerpt></excerpt>

		<!--<wfw:commentRss></wfw:commentRss>-->

	</item>
		
		
	<item>
		<title>Research</title>
				
		<link>http://abourai.com/Research</link>

		<comments></comments>

		<pubDate>Wed, 22 Nov 2017 02:57:44 +0000</pubDate>

		<dc:creator>bourai</dc:creator>
		
		<category><![CDATA[]]></category>

		<guid isPermaLink="false">231583</guid>

		<description>Research

Abdelwahab Bourai, Jaime Carbonell. 2018. I Know What You Don't Know: Proactive Learning through Targeted Human Interaction. In Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems (AAMAS Oral 2018). July 2018, Stockholm, Sweden. pdf

Abdelwahab Bourai, Tadas Baltrušaitis, and Louis-Philippe Morency. 2017. Automatically Predicting Human Knowledgeability through Non-verbal Cues. In Proceedings of the 19th ACM International Conference on Multimodal Interaction (ICMI Oral 2017). ACM, New York, NY, USA, 60-67. pdf

Fatima Al-Raisi, Abdelwahab Bourai, Weijian Lin. 2017. Neural and Symbolic Arabic Paraphrasing with Automatic Evaluation. LTI Student Research Symposium. August 2017. pdf

</description>
		
		<excerpt>Research  Abdelwahab Bourai, Jaime Carbonell. 2018. I Know What You Don't Know: Proactive Learning through Targeted Human Interaction. In Proceedings of the 17th...</excerpt>

		<!--<wfw:commentRss></wfw:commentRss>-->

	</item>
		
		
	<item>
		<title>Home Page</title>
				
		<link>http://abourai.com/Home-Page</link>

		<comments></comments>

		<pubDate>Wed, 02 Dec 2015 20:09:56 +0000</pubDate>

		<dc:creator>bourai</dc:creator>
		
		<category><![CDATA[]]></category>

		<guid isPermaLink="false">231576</guid>

		<description>My name is Abdelwahab Bourai, but you can call me Abdel. I’m from the beautiful country of Algeria, but now I work on self-driving cars in Pittsburgh. I studied computer science with a heavy dose of cognitive science at Carnegie Mellon, where I was blessed to work on engaging projects with equally engaging people. I also did some graduate research in the Language Technologies Institute. For some reason, I grew up really loving brains (a lot) and paleontology.
Resume/CV&#38;nbsp; &#38;nbsp; Portfolio&#38;nbsp; &#38;nbsp; Research&#38;nbsp; &#38;nbsp; </description>
		
		<excerpt>My name is Abdelwahab Bourai, but you can call me Abdel. I’m from the beautiful country of Algeria, but now I work on self-driving cars in&#38;nbsp;Pittsburgh. I...</excerpt>

		<!--<wfw:commentRss></wfw:commentRss>-->

	</item>
		
	</channel>
</rss>