Sravya Kondrakunta

I am a Ph.D. student in Computer Science at Wright State University. My research focuses on developing intelligent autonomous agents using several Artificial Intelligence methodologies. Specifically, my research interests are goal reasoning, machine learning, natural language processing, automated planning and monitoring, explainable AI, anomaly detection and handling, meta-cognition, and cognitive architectures.

Education

Wright State University

Ph.D. in Computer Science August 2017 - Present

Wright State University

Master's Degree in Computer Science August 2015 - July 2017

KL University

Bachelor's in Electronics and Communication Engineering May 2011 - April 2015

Work

WSU: Collaboration and Cognition Laboratory

Graduate Research Assistant May 2016 - Present

Research on goal reasoning and decision dynamics that enable autonomous agents to manage their own goals. Experience with probabilistic and statistical models to detect and respond to anomalies. Publications on goal reasoning and goal operations at international conferences.

WSU: Data Semantics Laboratory

Independent Study April 2016 - July 2016

Natural language processing and data mining of reviews from the Rate My Professor website. Processed the collected data with the Stanford NLP parser in R to detect gender bias.

Teaching

Wright State University

Instructor August 2019 - December 2019

Designed and delivered the coursework for Introduction to Computer Programming (CS 1160). Mentored several undergraduates and two GTA students.

Notable Projects

Fish Habitat Surveillance using Unmanned Underwater Vehicles (UUV)

Team member January 2019 - December 2021

An autonomous underwater agent (a Slocum glider) named Grace tries to find the highest density of tagged fish (hotspots) in the entire region. The gray area around Grace is the range of its acoustic receiver, and the red dots are tagged fish. Each tagged fish emits an acoustic signal at a specific frequency. However, Grace cannot achieve such a complex goal of surveying and finding hotspots through its control architecture alone, which only manages its physical system. It also needs high-level cognition to survey the region (Gray's Reef National Marine Sanctuary). ...

While Grace surveys the region to find hotspots, it encounters several anomalies in the domain, including blockades and remora attacks. Blockades hinder the agent's movement by blocking its path either partially or fully. The agent can pass through partial blockades (green lines), either by diving up or down in the water, but it cannot pass through full blockades (red lines). Remoras are fish that latch onto Grace and reduce its forward speed, hindering Grace's ability to achieve its goal of finding a hotspot.

Grace can only be considered smart when it handles such anomalies with minimal to no human intervention. Grace therefore performs high-level decision-making through the MIDCA cognitive architecture while also communicating with its control architecture. The cognitive architecture receives percepts from the control signals, uses those percepts to perform state-space planning, and generates actions for Grace. Finally, since Grace's physical system cannot interpret planning actions directly, we convert them into control signals that Grace executes in the real world.
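The percept-to-control pipeline described above can be sketched as follows. This is a hypothetical toy illustration: the grid-based planner, action names, and control-signal mapping are invented for clarity and are not the actual MIDCA or glider code.

```python
# Toy sketch of a percept -> plan -> control-signal pipeline.
# All names and the planning scheme are illustrative, not the real system.

def plan(state, goal):
    """Toy state-space 'planner': emit one action per grid step toward the goal."""
    x, y = state          # current (horizontal, depth) cell
    gx, gy = goal
    actions = []
    while (x, y) != (gx, gy):
        if x != gx:
            x += 1 if gx > x else -1
            actions.append("surge")   # move forward
        elif gy > y:
            y += 1
            actions.append("climb")   # decrease depth
        else:
            y -= 1
            actions.append("dive")    # increase depth
    return actions

def to_control_signal(action):
    """Translate a symbolic planning action into a low-level control command."""
    return {"surge": ("thruster", +1.0),
            "climb": ("ballast", +1.0),
            "dive": ("ballast", -1.0)}[action]
```

The key point mirrored here is the final translation step: the planner reasons over symbolic actions, and a separate mapping turns each action into something the physical system can execute.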

Finally, in the demo, we can observe Grace's responses to blockades at 0:18, 0:35, and several other instances throughout the video. In comparison, we observe the responses to remoras as long pauses at 0:22, 0:43, and several other instances.

Mine Clearance using Unmanned Underwater Vehicles (UUV)

Team member August 2017 - December 2018

An autonomous agent (a Remus 100) named Remus tries to clear all the mines (green triangles) in specified locations (octagons / green areas). The gray area around Remus is the range of its sonar sensor, which Remus uses to detect mines. However, the green areas designated by humans are incomplete: mines are also present in other important areas (wide rectangle) beyond the octagons. Therefore, Remus needs to generate new goals (from data obtained through its perception) in addition to the goals provided by a human. ...

Remus must be judicious in generating such new goals due to its resource limitations (battery life). It must not generate a new goal for every anomalous mine. Instead, Remus reasons about each anomalous mine, postulates potential causes for its presence in the area (e.g., an enemy submarine, enemy ship, or aerial vehicle), assesses potential threats from the mine to itself and other friendly agents in the environment, and finally generates goals to remove the mines it perceives to be threats.
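The resource-bounded goal formulation described above can be sketched roughly as follows. The threat likelihoods, battery cost, and threshold here are invented placeholders for illustration, not values from the actual Remus system.

```python
# Illustrative sketch of resource-bounded goal formulation: generate
# clear-mine goals only for mines explained as likely threats, and only
# while enough battery remains. All numbers are hypothetical.

# Assumed prior belief that each explanation implies a threat to friendly agents.
THREAT_LIKELIHOOD = {"enemy submarine": 0.9, "enemy ship": 0.6, "aerial vehicle": 0.3}

def formulate_goals(anomalous_mines, battery, cost_per_goal=10.0, threshold=0.5):
    """anomalous_mines: list of (mine_id, explanation) pairs."""
    goals = []
    for mine_id, explanation in anomalous_mines:
        if battery < cost_per_goal:
            break  # resource limit reached: stop generating new goals
        if THREAT_LIKELIHOOD.get(explanation, 0.0) >= threshold:
            goals.append(("clear-mine", mine_id))
            battery -= cost_per_goal
    return goals
```

The point of the sketch is the filtering: explanation and threat assessment gate goal generation, so the agent does not spend its limited battery on every anomaly it perceives.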

In this demo, Remus combines percepts from the real world, a state-space planner, explanation, goal reasoning, and control signals. We observe Remus generating new goals for mines outside the green areas at several points in the video.

Baxter Robot Playing Tic-Tac-Toe with a Human

Team member Jan 2017 - July 2017

Any robot working in the real world must perform three types of actions. First, it should perceive the world around it. Second, from those percepts, it should try to understand the world. Third, it should execute specific tasks in the world as its abilities allow. The robot performing all three actions in the demo is a Baxter. ...

The Baxter robot receives sensory signals from the real world using cameras and microphones. It has built-in cameras on its face and in both palms, plus an external Asus Kinect camera at its waist. From these cameras we receive 2-dimensional images and 3-dimensional point clouds. We use convolutional neural networks to process this data and detect real-world objects. In addition, Baxter has a built-in microphone, and we use Sphinx to convert speech commands to text.

From the percepts, Baxter attempts to perform mental actions, which it does using a cognitive architecture called MIDCA. In this scenario, it tries to understand the tic-tac-toe game. Its initial mental motive is to win, but it changes its goal and settles for a draw when winning becomes impossible.

Finally, Baxter performs actions in the real world using its arms. Its arms have seven degrees of freedom. Baxter uses Robot Operating System (ROS) and Inverse Kinematics to perform real-world actions using its end effectors.

Baxter Robot Working with Various Colored Blocks

Team member May 2016 - December 2016

Any robot working in the real world must perform three types of actions. First, it should perceive the world around it. Second, from those percepts, it should try to understand the world. Third, it should execute specific tasks in the world as its abilities allow. The robot performing all three actions in the demo is a Baxter. ...

The Baxter robot receives sensory signals from the real world using cameras and microphones. It has built-in cameras on its face and in both palms, plus an external Asus Kinect camera at its waist. From these cameras we receive 2-dimensional images and 3-dimensional point clouds. We use convolutional neural networks to process this data and detect real-world objects. In addition, Baxter has a built-in microphone, and we use Sphinx to convert speech commands to text.

From the percepts, Baxter attempts to perform mental actions, which it does using a cognitive architecture called MIDCA. In this scenario, it tries to stack the red block on the green block. When the world changes and Baxter perceives that it can no longer pick up the red block, the agent revises its initial plan and still achieves the goal.

Finally, Baxter performs actions in the real world using its arms. Its arms have seven degrees of freedom. Baxter uses Robot Operating System (ROS) and Inverse Kinematics to perform real-world actions using its end effectors.

Metacognitive Integrated Dual-Cycle Architecture

Team member March 2016 - December 2021

MIDCA Cognitive Architecture

The figure above depicts a cognitive architecture called the Metacognitive Integrated Dual-Cycle Architecture (MIDCA). It has two layers: a cognitive layer that operates on the real world and a metacognitive layer that operates on the cognitive layer. Each layer has six modules that perform six distinct operations. ...

The cognitive layer observes the world in the Perceive phase. It then attempts to understand the percepts and generate goals from them in the Interpret phase. Next, it tracks the completion of goals in the Evaluate phase. If there are multiple goals, the agent chooses and prioritizes among them in the Intend phase. It then plans for the selected goals in the Plan phase. Finally, it takes each action in the plan and applies it in the real world in the Act phase.
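The six-phase cycle above can be sketched in code. This is a minimal toy sketch of such a cycle in a block-stacking domain, with placeholder phase bodies; it is not MIDCA's actual implementation.

```python
# Minimal sketch of a six-phase cognitive cycle (Perceive, Interpret,
# Evaluate, Intend, Plan, Act). Phase bodies are toy placeholders.

class CognitiveCycle:
    def __init__(self, world):
        self.world = world          # external state: block -> stacked? (bool)
        self.goals = []
        self.plan_steps = []

    def perceive(self):
        return dict(self.world)     # snapshot of the world as percepts

    def interpret(self, percepts):
        # Generate a goal for any block not yet stacked.
        self.goals = [("stacked", b) for b, done in percepts.items() if not done]

    def evaluate(self):
        # Drop goals that are already achieved.
        self.goals = [(p, b) for p, b in self.goals if not self.world[b]]

    def intend(self):
        # Commit to one goal (here: simply the first pending one).
        return self.goals[0] if self.goals else None

    def plan(self, goal):
        self.plan_steps = [("stack", goal[1])] if goal else []

    def act(self):
        for op, block in self.plan_steps:
            self.world[block] = True    # apply the action to the world

    def run_once(self):
        percepts = self.perceive()
        self.interpret(percepts)
        self.evaluate()
        self.plan(self.intend())
        self.act()
```

A metacognitive layer would wrap the same six phases around this object, perceiving and modifying the cognitive cycle's own state rather than the world.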

The metacognitive phases mirror the cognitive phases, but they operate on the cognitive layer instead of the real world.

Awards & honors

Grants 2016 - 2021 Research supported by several prestigious grants: NSF 1849131; ONR N00014-18-1-2009; AFOSR FA2386-17-1-4063.

Startup 2018 Our startup idea, SquadUp, won the October 2018 hackathon conducted by Y Combinator, with 250 participants across 80 projects.

Publications

Journal, Conference and Workshop Publications

[FLAIRS 2020] Gogineni, V., Kondrakunta, S., Molineaux, M., Cox, M. T. (2020, May). Case-Based Explanations and Goal Specific Resource Estimations. In the Thirty-Third Florida Artificial Intelligence Research Society Conference, North America (pp.407-412). AAAI Press.

[MIDCA Workshop 2019] Dannenhauer, D., Schmitz, S., Eyorokon, V., Gogineni, V. R., Kondrakunta, S., Williams, T., & Cox, M. T. (2019). MIDCA Version 1.4: User manual and tutorial for the Metacognitive Integrated Dual-Cycle Architecture (Tech. Rep. No. COLAB2-TR-3). Dayton, OH: Wright State University, Collaboration and Cognition Laboratory.

[ACS 2019] Kondrakunta, S., Gogineni, V. R., Brown, D., Molineaux, M., & Cox, M. T. (2019). Problem recognition, explanation and goal formulation. In Proceedings of the Seventh Annual Conference on Advances in Cognitive Systems (pp. 437-452). Cognitive Systems Foundation.

[ICCBR 2019] Gogineni, V. R., Kondrakunta, S., Brown, D., Molineaux, M., & Cox, M. T. (2019, September). Probabilistic Selection of Case-Based Explanations in an Underwater Mine Clearance Domain. In International Conference on Case-Based Reasoning (pp. 110-124). Springer, Cham.

[GRW: IJCAI 2018] Kondrakunta, S., Gogineni, V. R., Molineaux, M., Munoz-Avila, H., Oxenham, M., & Cox, M. T. (2018). Toward problem recognition, explanation and goal formulation. In Working Notes of the 2018 IJCAI/FAIM Goal Reasoning Workshop, Stockholm, Sweden. IJCAI.

[XCBR: ICCBR 2018] Gogineni, V., Kondrakunta, S., Molineaux, M., & Cox, M. T. (2018). Application of case-based explanations to formulate goals in an unpredictable mine clearance domain. In Proceedings of the ICCBR-2018 Workshop on Case-Based Reasoning for the Explanation of Intelligent Systems, Stockholm, Sweden (pp. 42-51). Springer, Cham.

[GRW: IJCAI 2017] Dannenhauer, D., Munoz-Avila, H., & Kondrakunta, S. (2017). Goal-Driven Autonomy Agents with Sensing Costs. In Working Notes of the 2017 IJCAI Goal Reasoning Workshop, Melbourne, Australia. IJCAI.

[GRW: IJCAI 2017] Kondrakunta, S., & Cox, M. T. (2017, July). Autonomous goal selection operations for agent-based architectures. In Working Notes of the 2017 IJCAI Goal Reasoning Workshop, Melbourne, Australia. IJCAI.

[AAAI 2017] Cox, M., Dannenhauer, D., & Kondrakunta, S. (2017, February). Goal operations for cognitive systems. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 31, No. 1). AAAI Press.

[IEEE 2015] Kishore, P. V. V., Rahul, R., Kondrakunta, S., & Sastry, A. S. C. S. (2015, August). Crowd density analysis and tracking. In 2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI) (pp. 1209-1213). IEEE.

Thesis

[Master's Dissertation 2017] Kondrakunta, S. (2017, August). Implementation and Evaluation of Goal Selection in a Cognitive Architecture. Browse all Theses and Dissertations. 1811.

Conferences

ACS 2020 Eighth Annual Conference on Advances in Cognitive Systems. Palo Alto, California, USA.

ACS 2019 Seventh Annual Conference on Advances in Cognitive Systems. Massachusetts Institute of Technology, Massachusetts, USA. Poster presentation on problem recognition.

MIDCA 2018 Second Annual MIDCA Workshop. Wright State University, Ohio, USA. Oral presentation on goal operations in cognitive architecture.

AAMAS 2018 The 17th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-2018). Stockholmsmässan, Stockholm, Sweden.

ICML 2018 Thirty-fifth International Conference on Machine Learning. Stockholmsmässan, Stockholm, Sweden.

ECAI 2018 The 23rd European Conference on Artificial Intelligence. Stockholmsmässan, Stockholm, Sweden.

IJCAI 2018 The 27th International Joint Conference on Artificial Intelligence. Stockholmsmässan, Stockholm, Sweden.

ICCBR 2018 The 26th International Conference on Case-Based Reasoning. Stockholmsmässan, Stockholm, Sweden.

GRW: IJCAI 2018 The 6th Goal Reasoning Workshop. Stockholmsmässan, Stockholm, Sweden. Oral presentation on goal selection operations.

MIDCA 2017 First Annual MIDCA Workshop. Wright State University, Ohio, USA. Oral presentation on MIDCA Architecture.

Hackathons

COMTOR DerbyHacks 3, University of Louisville, KY.

  • Technologies used: Flask, OpenCV, Tensorflow, Convolutional Neural Networks.
  • A full-stack application trained to recognize several actions and alert the user or an organization when they are detected.
  • Evaluated on multiple real-world actions and obtained an average F1 score of 0.87.
HACK-STATA Hack-CWRU, Case Western Reserve University, OH.

  • Technologies used: Python, Neural Networks, Scikit-learn, Tableau.
  • A web application to visualize all the statistics related to a hackathon and recommend hackathons based on the user's profile.
  • Web-scraped data from more than 10,000 hackathons.
YOUR VIRTUAL DOCTOR SpartahackIV, Michigan State University, MI.

  • Technologies used: Flask, SQLite, Google Maps API, Bayesian Model, Machine Learning.
  • A full-stack application that acts as a personalized doctor for any internet user.
  • Implemented a recommendation system that suggests the nearest hospital based on the user's symptoms.
Academic

GRW 2021 Chair for the 9th Goal Reasoning Workshop (GRW) held at the 9th Conference on Advances in Cognitive Systems.

INTEX 2021 PC member for the Integrated Execution (IntEx) / Goal Reasoning (GR) workshop held at the 31st International Conference on Automated Planning and Scheduling.

INTEX 2020 PC member for the Integrated Execution (IntEx) / Goal Reasoning (GR) workshop held at the 30th International Conference on Automated Planning and Scheduling.

ECAI 2020 Sub-reviewer for the 24th European Conference on Artificial Intelligence.

MITW 2019 Organized the annual Make-IT-Wright Hackathon at Wright State University to encourage undergraduate students to code.