
Implementation of an Interface between ROS and OMF


This report is the written documentation of a student project carried out in the winter term 2013/14 at the TKN group of TU Berlin. It covers all relevant preliminary considerations and abstractions as well as the documentation of the implementation of an interface between the robot's operating system (ROS) and the experiment controller (OMF).

The goal of this project is the implementation of an interface between the software for controlling mobile robots (ROS) and the software for controlling experiment procedures (OMF). At present both systems can only be operated separately, so there is no synchronization that would allow experiments to be carried out automatically.


The objective of this student project is the implementation of an interface between the controller of mobile devices (ROS) - in our case turtlebots - and a measurement controller that administers measurements (OMF).

Currently both systems operate independently, which means there is no possibility of synchronization between them. The devices therefore have to be moved or placed using ROS before an experiment is executed. This documentation presents all relevant abstractions as well as the implementation used to achieve the objective.

1 Introduction

This section begins with the motivation and objective of the project. Furthermore the assumptions and problems will be stated.


Currently the administration of the measurement controller (framework) and the controller of a mobile device (robots in general and especially in this project turtlebots) has to be done manually. This means that the path a turtlebot needs to take while executing a measurement can only be set separately from the measurement setup. If there is a malfunction on one of these units, the other one will not be informed about it. This leads to a waste of time, because the experiment could have been aborted earlier. To simplify the control of measurements with mobile devices both entities need to be synchronized.

A detailed abstraction and a precise definition of the functionalities are necessary for an interface between the experiment controller and the robot's controller.

The Robot Operating System, which is used to control and administrate the turtlebots, allows access to the whole device, including the movement control as well as the camera, which is used to react to obstacles that might be placed in front of the device.


The system consists of two parts. First, the experiment controller, which is able to run experiments based on a user's predefined script; it also includes all relevant settings for data storage and the resources needed during a measurement. Second, the controller for the mobile platform, which is able to move the device from an initial point to a specified destination. This functionality is provided by ROS, the Robot Operating System. Each device is thus controlled by an operating system that manages the individual behavior of that type of robot.


The aim of the project is to implement an interface that allows the experiment controller to control the robot. The focus of our work is the movement control. At the end of this project the experiment controller should be able to access the movement algorithms of a device, so that the user only needs to configure the whole experiment at one single point. To achieve this, we first have to specify all relevant parameters needed for communication, for both the mobile platform controller and the experiment controller. In the following section we address these issues and also specify our definitions of a waypoint and a path.

2 Abstraction

In this section we discuss all parameters and factors that are highly relevant for the implementation of the interface. We start with high-level abstractions describing the devices and measurement tools used. We also address the behavior in case of unexpected or infeasible commands as well as error handling.

Robot and Waypoint

A robot in general is a device that can receive commands and execute them in a previously defined way. The robot may also be able to provide information about its current status to the user or to other processes, including warnings and errors, pending tasks, etc. For mobile devices that move autonomously, some of the most important status information is the current position, orientation and speed. The turtlebots used in our project provide these basic capabilities.

The exact position of a rigid body in a room is fully described by its coordinates including the orientation. The body can have six degrees of freedom: three translational (x, y, z) and three rotational (yaw, pitch, roll). Since the turtlebot is restricted in its translational and rotational components, only three of the six degrees (x, y, yaw) are needed to determine its exact position: neither the height nor roll or pitch can be modified directly.
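Since only the yaw angle among the rotational components can change, the robot's orientation is a pure rotation about the vertical (z) axis. As an illustration (plain Python, not part of the project code), such a yaw angle maps onto the quaternion representation that ROS uses for poses:

```python
import math

def yaw_to_quaternion(yaw):
    """Convert a yaw angle (rotation about the z-axis, in radians)
    into a unit quaternion (x, y, z, w)."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

# Facing along the positive x-axis (yaw = 0):
print(yaw_to_quaternion(0.0))  # (0.0, 0.0, 0.0, 1.0)
```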

We will call the conglomeration of these three position-components including parameters for tolerance margins and optional time specification a waypoint (WP).

When the user enters a single waypoint (the robot receives a message/command), the robot starts moving to the given point (executes the command). While moving, the current position is announced periodically to all related processes (status information is sent).

The operating system (robot controller) combines the information of all available components and sensors of the device, i.e. in addition to the move_base also information from, for instance, the camera, the floor map, etc., to evaluate the current status and initiate further actions (e.g. detect obstacles and sidestep them, or identify the shortest possible track to the given waypoint). Note: the report of the current position or the notification by ROS that the given waypoint has been reached might be incorrect compared to the real, physical position. The user is only informed about the position ROS believes the device is currently at.


As described in section 2.1, only "x,y,yaw" can be modified for the kind of robots we are using; a waypoint is therefore the conglomeration of parameters for position, orientation, tolerances and an optional time: "x,y,z,yaw,r,d_yaw,time".

Now that we know which components define a waypoint, we need to define a path. In general a path is a function of waypoints and time, \(\overrightarrow P = f(\overrightarrow{WP}(t))\). We can therefore define an intermediate point as the combination of one waypoint element and one time element t. This additional parameter can be interpreted in two ways: either as the duration (in seconds) until the current waypoint should be reached, or as the absolute time at which this waypoint should be reached.
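As a hedged sketch (illustrative Python, not the project's actual message format), a waypoint and a path can be represented as follows; the field names follow the tuple "x,y,z,yaw,r,d_yaw,time" from section 2.1, and the flag distinguishing relative from absolute time is our own assumption:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Waypoint:
    x: float                      # position [m]
    y: float                      # position [m]
    z: float                      # position [m]; fixed for ground robots
    yaw: float                    # orientation [rad]
    r: float                      # position tolerance radius [m]
    d_yaw: float                  # orientation tolerance [rad]
    time: Optional[float] = None  # optional time element t
    time_is_absolute: bool = False  # interpret 'time' as absolute? (assumption)

# A path is an ordered list of waypoints.
Path = List[Waypoint]

path: Path = [
    Waypoint(0.0, 0.0, 0.0, 0.0, r=0.2, d_yaw=0.1),
    Waypoint(2.0, 1.0, 0.0, 1.57, r=0.2, d_yaw=0.1, time=10.0),  # within 10 s
]
```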

By default the move_base tries to reach a given waypoint via the shortest track, i.e. it drives along an imaginary straight line. If there is an obstacle in the way, which may be detected by sensor modules like the camera already supported in ROS, the turtlebot bypasses the obstacle and adapts its speed if needed to reach the next waypoint in time. If the turtlebot is not able to reach a waypoint within the given duration or at the given time, the user should be informed, and depending on the selected setup the respective waypoint is skipped or the measurement is paused until the user decides on further actions.

Experiment description

In this section we address how a user can describe an experiment in OMF. Basically there are three main aspects: resources (all devices used in the experiment, grouped if needed), applications (programs run on the resources described before, e.g. ping or iperf) and measurements (all results are collected and stored in a central database, with real-time availability).

This description can be seen as the input-script for the experiment controller. It is written in OEDL and contains all required resources and applications or in general actions that are needed to fulfill the entire experiment/measurement.

In order to simplify the configuration and provide a high-level abstraction, the robot as well as the path-definition should be set up as new resources. For a detailed list of all components of the implemented resources see section 3.

Accuracy is an important aspect when defining measurements. In addition to the desired position and orientation of each waypoint, the user should be able to set tolerance margins for every component. If the robot is not able to reach a waypoint within the tolerances given in the user's specification, this waypoint may either be skipped or, in case the waypoint is marked as critical, the experiment is terminated. The robot reports successful execution if all tolerance margins were met. As mentioned in 2.1, the position reported (i.e. assumed) by the operating system might differ from the real position.

Figure 1. Importance of accuracy of defining waypoints

Figure 1 illustrates the importance of accuracy with respect to the choice of tolerance margins and the number of waypoints. Note that accuracy here means how well the executed waypoints or paths match the planned ones, not whether the device reports its true physical position.


The communication between both controllers is done by secure message exchange (based on publisher/subscriber).

Both OMF and ROS use pub/sub mechanisms as means of communication between entities within their respective domain. Since OMF uses a more generally applicable communication protocol (FRCP, based on XMPP), this was favored over the TCP/UDP transports of ROS. A single message exchange between a control instance and a resource instance (usually done by proxy in the form of a resource controller) typically has the form of the control instance sending a message and the resource instance/controller replying upon successful or failed handling of that message.

FRCP (Federated Resource Control Protocol)

FRCP is a mostly agnostic protocol specifically designed to control resources in a testbed. To do so, there are five classes of messages: create, configure, inform, request and release, each of which fulfills a specific function within the lifecycle of a resource.
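A minimal sketch of these five message classes (plain Python for illustration only; real FRCP messages are XML documents exchanged over XMPP):

```python
# The five FRCP message classes and their role in a resource's lifecycle.
FRCP_CLASSES = {
    'create':    'instantiate a new resource (e.g. a robot or a path)',
    'configure': 'modify properties of an existing resource',
    'request':   'ask a resource for the value of some properties',
    'inform':    'notification or reply, e.g. (un)successful handling',
    'release':   'destroy a resource that is no longer needed',
}

def classify(message_type):
    """Return the lifecycle role of an FRCP message class."""
    return FRCP_CLASSES[message_type.lower()]

print(classify('CREATE'))
```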


By default, services and messages use TCPROS for data transmission, which is based on the commonly known stream sockets. A connection therefore persists until it is closed explicitly. Based on the information carried in the header, a message is routed either to a publisher or to a service. If both nodes support UDPROS, the unreliable UDP-based transport can be used instead; if one node does not support it, TCPROS is used. A message contains, among others, the following fields: callerid, topic, service, type, msg_definition, error.

3 Implementation

This section will give an overview of what signals/messages are sent from OMF to ROS and vice versa to accomplish the objective. The following figure shows all defined message-types that are used by OMF and ROS.

Figure 2 describes the lifecycle of an experiment which will include the usage of mobile devices like the Turtlebots. This diagram will also be used to describe all types of messages that may occur during an experiment.

Figure 2. Messages from and to OMF and ROS

There are two approaches to encoding the messages. One uses a prototype to encode all the information; the other allows service providers to define only the attributes they need and to discard the unused attributes. In this project, the default setup uses the second approach.
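The second approach can be sketched as follows, under the assumption that unset attributes are represented as None (the actual OMF encoding may differ):

```python
def encode_sparse(attributes):
    """Keep only the attributes a service provider actually set,
    discarding unused (None) entries -- the second encoding approach.
    Illustrative only; the real messages are FRCP/XML."""
    return {k: v for k, v in attributes.items() if v is not None}

full = {'x': 2.0, 'y': 1.0, 'z': None, 'speed': None}
print(encode_sparse(full))  # {'x': 2.0, 'y': 1.0}
```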

Basic Communication

By convention, the experiment controller (EC) first sends a create message to each required resource in order to set it up and configure it. In addition to the default resources (nodes in each room), all mobile devices (robots) and waypoints are resources as well. A resource controller (RC) is used to set up a resource, i.e. a robot and/or a waypoint; a generic controller then configures the desired resource type after creation is completed.

To modify a resource, configure messages are used. An example of a modification is the insertion or deletion of waypoints within a path.

If a controller (e.g. the EC) requests information from another controller (e.g. an RC), request messages are used. The resource controller (RC) sends request messages to ROS. These messages contain all relevant movement settings, such as the coordinates of waypoints, tolerance margins, and the time or duration until the next waypoint is to be reached.

For information of any kind (e.g. a reply to a request message), inform messages are used. The current position, speed, next waypoint, etc. are part of the inform messages the RC receives from ROS.

Once an experiment is finished, the EC can release all created RCs that are no longer needed by using release messages. OMF uses XMPP/FRCP for its internal messages.

Example (.rb):

The following source code written in OEDL contains one path (resource), which holds four waypoints. Each waypoint is characterized by its name (number of the WP, number of the path), a uid and a description. Note that this is not used in OMF version 6.

# -*- coding: utf-8 -*-

# auto-generated from XML file (start)
defProperty('wp2_p1', '12465546554', "Waypoint 2 of Path 1")
defProperty('wp3_p1', '78778787987', "Waypoint 3 of Path 1")
defProperty('wp4_p1', '12394908895', "Waypoint 4 of Path 1")

# Events to trigger when named waypoints are reached
defEvent :wp2_p1_reached do |state|
  state.find_all do |v|
    v[:type] == 'robot' && v[:last_waypoint_reached] == property['wp2_p1']
  end
end

defEvent :wp3_p1_reached do |state|
  state.find_all do |v|
    v[:type] == 'robot' && v[:last_waypoint_reached] == property['wp3_p1']
  end
end

defEvent :wp4_p1_reached do |state|
  state.find_all do |v|
    v[:type] == 'robot' && v[:last_waypoint_reached] == property['wp4_p1']
  end
end
# auto-generated from XML file (stop)

defProperty('uri', "xmpp://rosomf", "The XMPP server for communication")
defProperty('my_orb', "omf_ros_bridge", "Resource controller for the robot")
defProperty('my_app_server', "app_server", "A node that runs apps")

defApplication('ping_omf') do |app|
  app.description = "Example application"
  app.binary_path = "/path/to/binary/ping_omf"
  app.defProperty('target', 'Address to ping', '-a', {:type => :string})
  app.defProperty('count', 'Number of times to ping', '-c', {:type => :integer})
end

defRobot('my_robot', type: 'robot') do |rob|
  rob.description = "A robot"
  # Configuration
  rob.load_path = "/some/path/to/a/yaml/file.yaml"
end

defPath('my_path', type: 'path') do |path|
  path.description = "A simple path consisting of four waypoints"
  # Path specification
  path.load_path = "/some/path/to/a/xml/file.xml"
end

# bridge
defGroup('orb', 'my_orb') do |orb|
  orb.addPath('my_path') do |p|
    # The path is checked for feasibility before deployment,
    # i.e. tested without actually sending it to ROS.
  end
end

# example application
defGroup('pings', 'ping_omf') do |g|
  g.addApplication('ping_omf') do |app|
    app.setProperty('target', "test.com")
  end
end

group('orb').resources[type: "robot", name: "my_robot"].start_on_path('my_path')

# Waypoint 1 was not marked as a waypoint the RC was to inform the
# EC about, so it is ignored here.

# The path specification is such that the robot does not stop here,
# but informs the EC of reaching it.
onEvent :wp2_p1_reached do
  info ">>> Waypoint 2 was reached by robot"
  info ">>> Info '#{g.resources[type: 'robot'].last_waypoint_info}'"
  after 2.seconds do
    # ...
  end
end

# The path specification is such that the robot stops here and informs
# the resource controller.
onEvent :wp3_p1_reached do
  info ">>> Waypoint 3 was reached by robot"
  info ">>> Info '#{g.resources[type: 'robot'].last_waypoint_info}'"
  after 10.seconds do
    group('orb').resources[type: "robot", name: "my_robot"].continue_on_path
  end
end

# The path specification has no options; therefore the default behavior
# for the last waypoint applies: the robot stops.
onEvent :wp4_p1_reached do
  info ">>> Waypoint 4 was reached by robot"
  info ">>> Info '#{g.resources[type: 'robot'].last_waypoint_info}'"
  after 2.seconds do
    # ...
  end
end

after 20.seconds do
  info ">>> Release engines"
  g.resources[type: 'path'].release
  g.resources[type: 'robot'].release
end

Some extensions to OEDL were made to facilitate the control of a robot for an experiment.

The methods defRobot and defPath allow for the definition of robots and paths, respectively. To simplify the configuration of paths in particular (direct interaction and modification by the EC itself is not supported yet) and, to some extent, of robots, the configuration can be automated by specifying a file with a default configuration. (A template for the turtlebot is part of the source code.)

The methods addPath and addRobot put paths and robots into groups, respectively. The configuration can thus be done in a style already known from applications.

Walkthrough of experiment

After parsing the OEDL file, initializing and setting up the data structures, establishing the connection to an XMPP server and subscribing to the required resource controllers, the resources need to be created.

Creation of path: For the creation of the path, a CREATE message is sent to the resource controller; it contains the path specification read from the XML file stored in the load_path property.

The RC sends an INFORM message to the EC, containing information about the (un)successful creation of the path. In case of unsuccessful creation, a reason why creation failed is included.

Regardless of successful or unsuccessful creation, it is possible to let the RC check the path for plausibility. Errors or contradictory specifications are then included in the feedback and should be visible in the log of the EC.

Creation of robot: For the creation of a robot resource, a CREATE message is sent to the resource controller; it contains the robot's name, (optionally) its specification (currently empty) and its type. The RC checks the requested type against the type of the connected robot; if they do not match, the creation fails and an INFORM message with that content is sent to the EC. The same applies if no connected robot is available.

All up and installed: After this event has indicated successful initialization, the first waypoint is approached by sending the appropriate message (start_path) to ROS. ROS periodically publishes messages about the current status, such as position, orientation, speed and feedback (e.g. done, waypoint reached). If the waypoint is reached within the given (optional) time, the next waypoint is sent to ROS (next_WP), which immediately starts the movement in that direction. Again, if there are obstacles in the way and the turtlebot needs to bypass them in order to reach the next waypoint, its speed may increase to satisfy the time criterion. Nevertheless the maximum speed of these devices is bounded (approx. 1.2 km/h for the turtlebot). If a waypoint is not reachable (within the optionally given time or duration, or at all) and it is not marked as critical, the waypoint is skipped and the next waypoint (if one exists) becomes the current goal.
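The skip-or-abort logic described above can be sketched as follows (illustrative Python; try_reach is a hypothetical stand-in for the ROS round-trip, and the critical flag corresponds to the critical-waypoint marking from section 2):

```python
def run_path(path, try_reach):
    """Sequence waypoints: each waypoint is tried in turn; an
    unreachable non-critical waypoint is skipped, an unreachable
    critical one aborts the experiment. try_reach(wp) stands in for
    the actual ROS round-trip and returns True if the waypoint was
    reached within its (optional) time limit."""
    log = []
    for wp in path:
        if try_reach(wp):
            log.append(('reached', wp['id']))
        elif wp.get('critical', False):
            log.append(('aborted', wp['id']))
            break
        else:
            log.append(('skipped', wp['id']))
    return log

path = [{'id': 1}, {'id': 2, 'critical': True}, {'id': 3}]
print(run_path(path, lambda wp: wp['id'] != 3))
# [('reached', 1), ('reached', 2), ('skipped', 3)]
```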

Finishing, releasing: Once the last waypoint has been reached and reported by ROS to the RC as an inform message (feedback), this information is transmitted to the EC (inform message). The EC then releases both resources, path and robot, and ends the experiment. Upon release, the robot might either stop where it finished or return to where it originally started.

Path realization

The inputs from the user through the web interface produce an XML-based file which consists of the following entries:


Table 1. Elements of OMF messages, path

  uid (int)           unique path-ID
  pid (int)           version or type-ID
  attributes (path)   path-wide options, path specification

Table 2. Elements of OMF messages, waypoint

  uid                        unique WP-ID
  d_yaw                      tolerance of orientation
  r                          tolerance of position (radius)
  opts (options):
    continue                 continue to next WP without informing the EC
    continue_and_inform      continue to next WP and inform the EC
    stop_and_inform          stop at current WP and inform the EC
    continue_on_path         load another path
    at_time_rel_or_sooner    be at WP, relative to previous WP (or sooner)
    at_time_rel_exact        be at WP, relative to previous WP (exactly)
    at_time_abs_or_sooner    be at WP, absolute timestamp (or sooner)
    at_time_abs_exact        be at WP, absolute timestamp (exactly)
    move_at_speed            move at given speed
  params                     parameters for the options


There are several possible approaches to achieving the objective: first, the implementation of a node only in ROS; second, an adaptation only on the OMF side; or third, a bridge that requires adaptation on both sides.

Figure 3. Possibilities of interface implementations
Figure 4. Adaption on ROS-side
Figure 5. Adaption on OMF-side

Figure 4 and Figure 5 show possible realizations of the first two approaches: either the RC in OMF is adapted to communicate with different nodes of ROS, or one ROS node handles different resource controllers of OMF. We decided to pursue the first approach, i.e. the resource controller communicates with, or even accesses, the robot in order to control its movement. This is done using the ActionClient/ActionServer model of ROS, where a remote process can connect to the ActionServer of ROS. The desired node (move_base) has to be stated, as well as the desired topic.

#!/usr/bin/env python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

# Initialize our own client node and connect to the move_base ActionServer
rospy.init_node('move_base_client', anonymous=True)
move_base_client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
move_base_client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.get_rostime()
goal.target_pose.pose.position.x = 2    # self.params['x']
goal.target_pose.pose.position.y = 2    # self.params['y']
goal.target_pose.pose.position.z = 7    # self.params['z']
# Orientation as a quaternion (should be normalized in practice)
goal.target_pose.pose.orientation.x = 0    # self.params['phi']
goal.target_pose.pose.orientation.y = 0    # self.params['rho']
goal.target_pose.pose.orientation.z = 0    # self.params['theta']
goal.target_pose.pose.orientation.w = 0.8  # self.params['w']

# Send the goal to move_base and wait for the result
move_base_client.send_goal(goal)
move_base_client.wait_for_result()


On the side of the robot controller, only minor changes had to be made, as ROS supports a variety of external communication mechanisms, e.g. web services via rosbridge. It is also possible to subscribe to an already opened topic or to publish a message to any open topic. The only condition that has to be fulfilled is that the messages sent to a specific node/topic have a valid structure.

Figure 6. Common ROS nodes incl. dependencies

The basic nodes needed for moving the robot are shown in Fig. 6. The move_base node handles the movement of the device with special emphasis on the given map, in which obstacles etc. are recorded. The node called AMCL (adaptive Monte Carlo localization) handles the localization of the device.

Fig. 6 shows no external publishers or subscribers. Fig. 7 shows how the rqt_graph changes when other processes set up a connection with the move_base.

Figure 7. ROS interaction with pub/sub



OMF also needs to be modified: at least one new resource controller has to be created to handle the communication with the robot controller. This RC sends messages to and receives messages from ROS.

Figure 8. Virtual resource controllers (RC)

There are at least three possible approaches to realize this on the OMF side. (A) provides only one RC for robot and path control; (B) splits this by adding another RC specifically for the path; and (C) is comparable to (B), with the difference that the added RC represents only a single waypoint instead of a whole path (a list of waypoints with time components).

The following table will show advantages and disadvantages of all possible approaches. Afterwards we will state our chosen approach.

Table 3. Possibilities of OMF-RC implementations

(A) only RC robot
  Pro: less communication with the EC needed; all-in-one solution
  Con: not quite true to the distributed-system idea

(B) robot, path
  Pro: less communication with the EC needed; extension of the (C)-model; easy configuration/modification; flexible path specification
  Con: at least 2 waypoints needed (but if only 1 is given, another can be set automatically); additional waypoint information needed (linear/polynomial, duration)

(C) robot, waypoint
  Pro: simple implementation; logical part in the EC; minimal waypoint data (coordinates only)
  Con: the EC directly controls the path and therefore has to call the RC for each next waypoint; potential pause sequences after each reached waypoint; size of the EC config file increases


After considering these possibilities we concluded that (C) is the best solution for us. Basically all three approaches provide the same functionality and differ only in the implementation. Although (B) requires the most parameters to be transmitted to the path resource controller, the load on the EC is reduced because most of the logic resides in the RC. In (C), in case of errors the EC has to trigger reactions based on events of the RC, and if an RC only contains one waypoint, the robot may stop after reaching it before the EC has chosen the next RC (waypoint).

Protocol and message description

The inputs from the user through the web interface produce an XML-based file which consists of the parameters denoted in section 3.2.

We have defined the content of the messages sent from the EC to the RC; now we concentrate on the content of the messages sent from ROS to the RC. Every waypoint is realized in OMF by a resource controller (RC). This RC connects to a certain node of ROS, the move_base. ROS sends/publishes feedback and status messages through the ActionServer to the connected RC. These messages consist of the following parameters: current position and orientation, and the status of the current goal. The RC evaluates the values of all relevant parameters; if the device enters the tolerance margin, the RC reports to the EC that this waypoint has been reached.
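The tolerance check the RC performs before reporting a waypoint as reached might look like this (a sketch in plain Python; the parameter names r and d_yaw follow the waypoint definition from section 2):

```python
import math

def within_tolerance(current, goal):
    """Check whether the reported pose lies inside the waypoint's
    tolerance margins: position within radius r, yaw within d_yaw.
    Poses are dicts with 'x', 'y', 'yaw'; the goal also carries
    'r' and 'd_yaw'."""
    dist = math.hypot(current['x'] - goal['x'], current['y'] - goal['y'])
    # Wrap the yaw difference into [-pi, pi] before comparing
    d_yaw = (current['yaw'] - goal['yaw'] + math.pi) % (2 * math.pi) - math.pi
    return dist <= goal['r'] and abs(d_yaw) <= goal['d_yaw']

goal = {'x': 2.0, 'y': 1.0, 'yaw': 0.0, 'r': 0.25, 'd_yaw': 0.2}
print(within_tolerance({'x': 2.1, 'y': 1.1, 'yaw': 0.1}, goal))  # True
```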

User interactions

This section defines what the end user needs to know before starting a measurement and gives an overview of all parameters the user is able to modify.

All inputs from the user and all relevant outputs from ROS and OMF are realized via a web interface. Based on plain HTML, jQuery, CSS3 and JavaScript, all user inputs are transformed into a valid XML structure before being transmitted to OMF. The user can enter the following parameters: the waypoint (x, y, z, phi), tolerance margins for each waypoint, and whether the next waypoint is to be reached after a given duration or at a specific time. In our case the device tries to reach the next waypoint by choosing the shortest track, i.e. a direct or linear one. To prepare for future work on more complex movement between two waypoints, there will be an option to define the type of function by which the waypoints are connected to each other, e.g. linear or polynomial.
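The transformation of user inputs into an XML path description might look like this (a sketch using Python's standard library; the element and attribute names are assumptions, as the project's actual schema may differ):

```python
import xml.etree.ElementTree as ET

def path_to_xml(waypoints):
    """Serialize user inputs from the web interface into an XML
    path description. Names like 'path' and 'waypoint' are
    illustrative placeholders."""
    root = ET.Element('path', uid='1')
    for i, wp in enumerate(waypoints, start=1):
        ET.SubElement(root, 'waypoint', id=str(i),
                      x=str(wp['x']), y=str(wp['y']),
                      z=str(wp.get('z', 0.0)), phi=str(wp['phi']))
    return ET.tostring(root, encoding='unicode')

xml = path_to_xml([{'x': 0.0, 'y': 0.0, 'phi': 0.0},
                   {'x': 2.0, 'y': 1.0, 'phi': 1.57}])
print(xml)
```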

In case of errors, we distinguish between two types: show-stoppers and less serious errors. Show-stoppers can only occur before a measurement starts: if relevant parameters are not entered by the user and no default values are available, an error message is presented and the user has to fix the input manually. Less serious errors can occur during an experiment, for example if the next waypoint cannot be reached within the time criterion or can never be reached (z does not match the device-specific height). This kind of error can also be seen as a warning that may require a decision from the user: skip that waypoint and try to reach the next one, abort the whole experiment, or try to reach it anyway regardless of the time criterion.

User controller

As described in section 3.7, user inputs and system outputs are realized through a web interface. Waypoints are created dynamically, so the user can create a path of hundreds of points. Via drag & drop the order can be changed easily. If the user leaves relevant parameters empty, the measurement does not start; a notification is shown to the user, or default values are used instead. Waypoints can be stored to repeat an experiment. Because waypoints are independent of the device, a path can also be used for other measurements with different robots.

Figure 9. Screenshot of the web-interface

4 Used Tools

In this project we used two programs: ROS and OMF. In this section we will present these tools.


Our mobile devices are controlled by ROS, the Robot Operating System. This tool was originally developed at the Stanford Artificial Intelligence Laboratory in 2007. The open-source software consists of different frameworks, written mostly in C++ and Python.

The system is set up from different nodes. A node is a process that performs some kind of computation and may be connected to other nodes. The whole system, or components of it, can be displayed as a graph; see also Figure 6 and Figure 7.

The communication between nodes can be done in two ways: via topics or via services. A topic is a named bus that allows unidirectional message exchange following a classical publisher/subscriber model; this is very flexible but not appropriate for RPC-style interactions. Services cover that case: a service always consists of a pair of messages, a request and a reply.
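Both patterns can be illustrated with a minimal in-memory sketch (plain Python, deliberately not ROS code):

```python
class Bus:
    """Minimal illustration of the two ROS communication patterns:
    unidirectional topics (publish/subscribe) and request/reply
    services."""
    def __init__(self):
        self.subscribers = {}   # topic name -> list of callbacks
        self.services = {}      # service name -> handler

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, msg):
        for cb in self.subscribers.get(topic, []):  # one-way fan-out
            cb(msg)

    def advertise_service(self, name, handler):
        self.services[name] = handler

    def call(self, name, request):                  # request/reply pair
        return self.services[name](request)

bus = Bus()
received = []
bus.subscribe('/odom', received.append)
bus.publish('/odom', {'x': 1.0, 'y': 2.0})

bus.advertise_service('/get_plan', lambda req: {'length': 3})
reply = bus.call('/get_plan', {'goal': (2, 1)})
print(received, reply)  # [{'x': 1.0, 'y': 2.0}] {'length': 3}
```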

A message itself can be represented by a simple data structure.

It is also possible to communicate using web services via rosbridge. In that case a new node provides a socket that accepts incoming messages and processes them (forwarding them to the appropriate node).


The second tool necessary for our project is the control and management framework, also referred to as the experiment control framework, OMF.

It was originally developed for the ORBIT wireless testbed at Rutgers University but is now an open source framework which supports both heterogeneous wireless and wired resources.

The architecture consists of three parts: control, measurement and management. The first is used by the user to set up the experiment, the second is used to collect and store data, and the last is used to configure all resources that run during an experiment.

To start an experiment the user has to build a script that contains the description of the desired experiment. This generated OEDL file (OMF Experiment Description Language) will then be loaded into the Experiment Controller (EC) to execute the experiment. At the beginning of an experiment the EC will first connect to every Resource Controller (RC).

Figure 10. OMF components and architecture, http://omf.mytestbed.net/projects/omf/wiki/Introduction

5 Tests

The implemented functionality was validated locally. The Robot Operating System includes a simulator with devices that behave like the turtlebots used at TKN. There are also maps (a floor with obstacles to model the environment of a building).

By the day the project results are handed over, the locally tested implementation should also run on the real devices in the TKN testbed.

6 Conclusion and Outlook

We focused our work on the movement control of devices (especially turtlebots). The experiment controller of the measurement framework is now able to send a device to the given waypoint. It receives the current position of the device and decides (based on the settings of the user) when to start the desired script e.g. ping or iperf.

Instead of using OMF to handle the whole communication with all components, another approach might be to integrate the web interface more deeply than it is now. As ROS supports web services through rosbridge, the web interface could also be used to send goals or to track the current position of each device. However, this interface must then never be disconnected; in other words, if the server breaks down, communication with ROS is no longer possible and the running experiments cannot finish properly.


Matthias Dietsch
Garrit Honselmann


Jasper Büsch
Mikolaj Chwalisz