I am looking at using SUMO as the mobility server, and I would like to map an OpenDS vehicle object onto the SUMO simulation by interfacing with SUMO via TraCI. A human user would interact with OpenDS through a driving simulator rig, the aim being a human-in-the-loop simulation. Mobility information would be transmitted bidirectionally between SUMO and OpenDS.
I am relatively new to this area and I was not able to find suitable references. I would appreciate pointers to any relevant documentation or projects.
Thanks in advance.
There are several projects coupling SUMO to driving simulators: the PARCOURS project has been described here, and the rFPro solution here (also look at the proceedings), but unfortunately I do not know of any available open-source implementation. They usually couple via the TraCI interface, with the most important command being vehicle.moveToXY, which synchronizes the position of the user-driven vehicle with its counterpart in SUMO.
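As a rough illustration, the core of such a coupling loop might look like the Python sketch below, using SUMO's `traci` client. The helper `opends_to_sumo` and its axis mapping are assumptions (the correct mapping depends on how your road network was exported), and the OpenDS-side calls are left out:

```python
def opends_to_sumo(x, y, z):
    """Map an OpenDS (3D, y-up) position onto SUMO's 2D plane.
    Dropping the height axis and negating z is an ASSUMPTION --
    the real mapping depends on your network export."""
    return x, -z

def coupling_step(traci, ego_id, ego_xyz, ego_heading):
    """One loop iteration: push the human-driven pose into SUMO,
    advance the simulation, pull the other vehicles back for OpenDS."""
    sx, sy = opends_to_sumo(*ego_xyz)
    # keepRoute=2 lets the vehicle be placed freely, off its SUMO route.
    traci.vehicle.moveToXY(ego_id, "", 0, sx, sy,
                           angle=ego_heading, keepRoute=2)
    traci.simulationStep()
    # Positions of all other vehicles, to be rendered inside OpenDS.
    return {vid: traci.vehicle.getPosition(vid)
            for vid in traci.vehicle.getIDList() if vid != ego_id}
```

In practice you would call `coupling_step` once per OpenDS frame (or on a fixed timer matched to SUMO's step length) after opening the TraCI connection with `traci.start(...)`.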
Related
If I would like to interconnect two Smalltalks, namely Smalltalk/X with GemStone/S, what approach would you recommend? I would like to have an application in Smalltalk/X with persistent objects in GemStone/S.
Prior to any development I tried to investigate the issue. I have found some open-source implementations; I like to learn from others' mistakes so I don't repeat them.
I have found an implementation for Pharo: gt4gemstone, a Glamorous Toolkit extension for remote work with GemStone/S.
I have also found Jade, from James Foster, which achieves more, as it is an alternative development environment (IDE) for GemStone/S that runs on Microsoft Windows.
Where would you recommend I start? Would it be to read the gt4gemstone or Jade sources and then come up with a similar way to interconnect Smalltalk/X with GemStone/S?
Glad to hear of your interest in GemStone (one of my passions!). The key to interoperability with GemStone is to provide a wrapper for the GemStone C Interface (GCI), a C library used to connect to GemStone. This is the method used by every GemStone client (whether C, Smalltalk, or something else) to communicate with the system.
For a Smalltalk example, see GciLibrary* and GciSession in Jade.
For a couple of other recent examples that might be cleaner starting points, see GciForJavaScript and GciForPython.
For an older (Ruby) example, see gemstone_ruby.
So, I'd suggest that you investigate what Smalltalk/X has for a Foreign Function Interface (FFI), then follow the examples above to connect to GemStone.
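To make the FFI pattern concrete, here is a hypothetical sketch in Python's ctypes, standing in for Smalltalk/X's FFI (which follows the same load-library-then-declare-signatures pattern). The library name `libgcirpc.so` is an assumption, and the real GCI function names and signatures must be taken from the GCI headers:

```python
import ctypes

def load_gci(path="libgcirpc.so"):
    """Try to load the GCI shared library (name is an ASSUMPTION);
    return None if it is not installed on this machine."""
    try:
        return ctypes.CDLL(path)
    except OSError:
        return None

gci = load_gci()
if gci is not None:
    # Before calling anything, declare each function's signature
    # exactly as given in the GCI headers -- function names like
    # GciLogin/GciExecuteStr are examples, not verified signatures.
    ...
```

Whatever FFI Smalltalk/X provides, the work is the same three steps: locate and load the GCI shared library, declare the C signatures, then wrap the raw calls in a session object (as Jade's GciSession does).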
I have a project just starting up that requires the kind of expertise I have none of (yet!). Basically, I want to be able to use the user's webcam to track the position of their index finger, and make a particular graphic follow their finger around, including scaling and rotating (side to side of course, not up and down).
As I said, this requires the kind of expertise I have very little of; my experience consists mostly of PHP and some JavaScript. Obviously I'm not expecting anyone to solve this for me, but if anyone could direct me to a library or piece of software that could get me started, I'd appreciate it.
Cross compatibility is of course always preferred but not always possible.
Thanks a lot!
I suggest you start by reading about OpenCV:
OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision, developed by the Intel Russia research center in Nizhny Novgorod, and now supported by Willow Garage and Itseez. It is free for use under the open-source BSD license. The library is cross-platform. It focuses mainly on real-time image processing.
But first, since PHP and JavaScript are mainly used for web development, you should start by learning a programming language supported by OpenCV, such as C, C++, Java, Python, or even C# (using EmguCV).
Also, here are some nice tutorials to get you started with hand gesture recognition using OpenCV. Link
Good luck!
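To give a taste of what an OpenCV-based approach looks like, here is a minimal Python sketch of skin-color-based fingertip tracking. The HSV range, the camera index, and the "topmost contour point is the fingertip" heuristic are all rough assumptions that need tuning for lighting and skin tone; it assumes the `opencv-python` package and OpenCV 4's `findContours` return signature:

```python
def topmost_point(contour_points):
    """Pick the point with the smallest y coordinate -- for an upright
    hand the extended index fingertip is usually the topmost point."""
    return min(contour_points, key=lambda p: p[1])

def track(camera_index=0):
    import cv2  # imported lazily so the helper above stays standalone
    cap = cv2.VideoCapture(camera_index)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Rough skin-tone range in HSV -- an ASSUMPTION, tune per setup.
        mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            hand = max(contours, key=cv2.contourArea)
            tip = topmost_point([tuple(p[0]) for p in hand])
            # Draw the "graphic" at the fingertip; replace with your own.
            cv2.circle(frame, tip, 8, (0, 255, 0), -1)
        cv2.imshow("tracker", frame)
        if cv2.waitKey(1) == 27:  # Esc quits
            break
    cap.release()
```

A production version would use something more robust than color thresholding (e.g. a trained hand/landmark model), but this shows the capture-segment-locate-draw loop all such trackers share.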
I have a basic firmware question. I am looking to program an nRF51822 IC and integrate it on my own PCB. The evaluation kit seems to already have the IC soldered onto it. Is there a way I can program just the nRF51822 and get it ready for use elsewhere?
Get yourself one of these J-LINK LITE CortexM:
and hook up your connection header like this to your microcontroller (SWDIO, SWCLK, VCC and GND are the only ones needed):
Then, use Keil or nRFGo Studio to program your device.
You don't need a J-Link at all. Any ST-Link/V2 board will work, like the ones on STM32 dev boards. But even nicer are these cheap Chinese programmers: http://www.aliexpress.com/item/FREE-SHIPPING-ST-Link-V2-stlink-mini-STM8STM32-STLINK-simulator-download-programming-With-Cover/32247200104.html
All you need to do is connect the VCC, ground, SWDIO, and SWCLK lines from your board/chip to the programmer, so make sure those pins are broken out and easy to get to.
There are some good instructions on how to do that here: https://github.com/RIOT-OS/RIOT/wiki/Board:-yunjia-nrf51822
I've built Linux workstations for assembly-line workers to use with this method; it just loops over and over for new boards, so they don't even need to touch the PC. They can just place a board on the jig or connect a header and it's all automatic.
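For illustration, that kind of loop can be sketched in Python around Nordic's `nrfjprog` command-line tool (part of the nRF command-line tools; treat the exact flag set as an assumption to verify against your installed version):

```python
import subprocess
import time

def flash_command(hex_path, family="NRF51"):
    """Build the nrfjprog invocation: chip-erase, program, verify, reset.
    Flag names are taken from Nordic's nRF command-line tools docs --
    double-check them against the version you have installed."""
    return ["nrfjprog", "--family", family, "--chiperase",
            "--program", hex_path, "--verify", "--reset"]

def production_loop(hex_path):
    """Flash board after board; the operator only swaps boards."""
    while True:
        input("Place the next board on the jig and press Enter...")
        result = subprocess.run(flash_command(hex_path))
        print("OK" if result.returncode == 0 else "FAILED -- retry")
        time.sleep(0.5)
```

A fancier jig could poll for target presence instead of waiting on Enter, which is how the fully hands-off setups work.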
You will need a programming device, such as a Segger Jlink. The eval kit has an on-board Segger programmer on it (that big chip with the Segger sticker on it).
I'm working through this process myself at the moment. I read somewhere that some people were successful at 'hacking' the eval kit to bring SWDIO and SWCLK over to their custom board, but that really isn't the right way to go about it.
Instead, purchase an actual programmer and put a programming header on your custom circuit board.
While I am also still in the research phase, it looks like there are 4-5 pins to connect from the programmer to your custom target board. The nRF documentation seems rather lacking in its definition of the programming setup, but look under the debugging category and take a look at the Segger documentation as well.
If going into mass production there are ways to pre-program the chip before assembly, but I haven't had a chance to learn about that just yet.
Can you advise on these points related to AUTOSAR, taking into consideration that I am a software developer who can write some software in C?
Say I develop a functionality in C that has to read some ECU-specific data, process it, and update some ECU-specific data (which can be some variable or I/O signal).
Now, how will I be using the AUTOSAR RTE and the virtual functional bus?
What will their use be to a software developer?
Also, AUTOSAR talks about "standardization of interfaces"; what does that mean? Does it mean that if someone else anywhere in the world is also developing the same functionality (in C, like me), we will both be using the same API names for those I/O signals?
How will the RTE help me in unit testing? And what is the RTE really doing from a software developer's point of view?
http://www.autosar.org/gfx/AUTOSAR_TechnicalOverview_b.jpg
I have read a lot of technical terms... but as a software developer these points are important for me to know. Can you explain them a bit to me?
Your reply will be appreciated.
I don't think it is going to be that easy...
I believe that you are developing an AUTOSAR SWC (software component).
I would recommend that you develop a portable C module with very clear inputs, outputs, and execution requirements (check AUTOSAR runnables).
Remember that an AUTOSAR ECU includes an RTOS, so your module will be part of an OS task.
When and if you come to the point of building an AUTOSAR ECU, you will be able to wrap the module and connect its inputs/outputs to AUTOSAR virtual functional bus signals. For that you will need an AUTOSAR framework and probably configuration tools, which are complex and expensive.
Unit test the module the usual way you test a C module.
Good luck.
P.S. The RTE is just the "glue" code generated automatically by the configuration tools according to the configuration of the ECU's BSW and the system extract for that ECU. You will worry about it during wrapping.
The idea behind dividing the functionality into AUTOSAR SWCs and basic software is to make application SW development independent of any platform. To answer your questions:
The RTE gives the application a signal-based interface: you expect the other SW components (inter-ECU or intra-ECU) to provide the required data in the form of signals, and you don't care about the platform or the type of communication medium.
Yes, by standardizing the interfaces (all kinds of interactions), a software component or any basic software module can be slotted into the SW architecture. Read more about the different types of AUTOSAR interfaces.
Refer to answer 1
RTE is there as a layer to 'abstract' the inner components of the system. For example, if you need to get access to the system's installed flash memory, you have to use the RTE-related memory functions.
You are correct. You only need to read the specifications and use the corresponding functions to get your desired result in an AUTOSAR system.
RTE makes sure that the developers of the software components and the middle-layer systems would work properly with minimal interaction between them. SWC developers just need to read the AUTOSAR standard and follow it to ensure compatibility with the middle-layer systems, since it is expected that the middle-layer system developers would follow that same standard in providing functionalities on their side. It also helps developers with the portability of their software.
I think all your questions can be answered by reading the AUTOSAR standard documents on the AUTOSAR website; that is where I got most of my own limited knowledge of AUTOSAR development (I started reading about it close to a month ago).
I am a software developer who developed a console application tool for the AUTOSAR RTE, test case generation for the RTE, and unit testing scripts for the tool I created.
I developed these using C# and the NUnit framework. The same can be developed using C or Java or any other language. The ultimate goal is to generate AUTOSAR modules (.c and .h files) based on the requirements.
1. Software Developer Scope
As a software developer, I had the task of implementing a complete RTE and test applications for the implemented RTE code.
Inputs and Outputs:
Basically, our inputs were software component files and the ECU extract, both in ARXML format, and our outputs were RTE and test application source and header files (.c and .h) created based on the requirements.
Tasks as a developer:
Here, as developers, we need to perform input parsing from ARXML into our own data structures, schema validation, model validation, file generation, etc.
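As a toy illustration of this kind of generation step, the sketch below turns a drastically simplified component description into `Rte_Read`/`Rte_Write` prototypes. The XML shape here is invented for brevity; real inputs are full ARXML files, and the real RTE specification defines far more than the naming scheme shown:

```python
import xml.etree.ElementTree as ET

# Made-up, minimal stand-in for a software-component description.
ARXML = """<SWC name="SpeedMonitor">
  <RPORT name="VehSpeed" type="uint16"/>
  <PPORT name="SpeedWarn" type="boolean"/>
</SWC>"""

def generate_rte_header(arxml_text):
    """Emit Rte_Read_<port>_<element>/Rte_Write_... prototypes for each
    require/provide port, following the RTE API naming convention."""
    swc = ET.fromstring(arxml_text)
    lines = [f"/* Rte_{swc.get('name')}.h -- generated */"]
    for p in swc.findall("RPORT"):
        lines.append(f"Std_ReturnType Rte_Read_{p.get('name')}_value"
                     f"({p.get('type')} *data);")
    for p in swc.findall("PPORT"):
        lines.append(f"Std_ReturnType Rte_Write_{p.get('name')}_value"
                     f"(const {p.get('type')} data);")
    return "\n".join(lines)

print(generate_rte_header(ARXML))
```

A real generator additionally validates the input against the ARXML schema and the configured system extract before emitting anything, which is where most of the work described above goes.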
2. Standardization
Yes, the AUTOSAR architecture provides standardized interfaces. Irrespective of the implementation strategy, the API structure remains the same, which eases usage. It acts as a generalized library: you can use an already-developed module, or implement the module in your own way against the API specification. All you need to do is follow the specifications provided for every module you use.
Requirements vary from company to company, but the way of using the APIs remains the same.
3. Unit Testing
Unit testing has nothing to do with the RTE or AUTOSAR modules. You will be testing the units of your own code. By your code I mean the code you wrote to generate a particular module (e.g. Rte.c), not the generated module itself. Your source code is not part of the RTE or any other module implementation; it is the tool which generates that module implementation.
Overview:
Software developers have various scopes in generating AUTOSAR modules, depending on the requirements.
You can develop a tool which will generate AUTOSAR modules.
You can develop an editor which is used to edit/create AUTOSAR XML files (e.g. Artop).
Developing might sound complex, as we do not get direct resources other than the specifications, but once you are in, you will learn a lot.
To answer your question
If you go through the layered architecture of AUTOSAR, you will see that this architecture is followed to minimize the dependency of each module (layer) on the layers below it.
Again, the RTE is like a wrapper that separates out the lower-layer dependencies, which enables working on each layer independently. Most of the virtual buses are mapped via the RTE; in my experience I have worked on the IOC, which is allowed to map to the RTE and which communicates with other SWCs across memory and core boundaries. To the OS developer, it is the route to the application layer and the mapped software partitions.
The standard is used to maintain uniformity across all software layers; to meet their requirements, developers may have different implementations and designs, but the APIs and requirements will be universal.
This is useful for standardized interfacing too.
For unit testing of the developer's OS design and implementation, the RTE works as an abstraction module.
Reading the specs for the different modules will resolve most doubts.
Is there a way (a hardware/software combination) that I can use to control one or more Philips LivingColors lamps from a PC, e.g. a USB stick that acts as the remote? That way I could control the lamp through software (e.g. a web app, over iPhone/remotely) or even recreate what Philips builds into some of their TVs and calls "Ambilight" (the graphics driver detecting the dominant on-screen color to control the lamp).
I guess this is more of a hardware than a software question, but I couldn't find anything about this online, and I'm sure I'm not the first to have come up with this idea right after unpacking my LivingColors lamp yesterday ;)
There are two versions of the LivingColors lamp: the Gen1 lamp can be controlled with a small kit; as far as I know, the Gen2 cannot be controlled with third-party products.
There is an Arduino shield that can control the Gen1 lamps; the article describing it is in Dutch. In short: the shield, and by extension the lamp, can be controlled via serial-over-USB. Google Translate may help:
The hardware : http://www.knutsel.org/2010/04/11/assembling-the-cc2500-arduino-shield/
The link to the software is at the end of the post. (I can only post one link.)
There is a schematic and software: enough information to build your own controller for Gen1 lamps.
Some remarks:
I am the author of these posts.
The shields are sold as a kit in the Netherlands and Belgium (hence the Dutch blog post).
The Gen2 uses IEEE 802.15.4 (it says so in the manual) and is said to use encrypted ZigBee. Both plain and encrypted ZigBee use IEEE 802.15.4.
I should probably make a better translation of the posts.
[ 11 April 2010 edit : made translations of the blogposts in English and changed the links here ]
LivingColors uses an implementation of 802.15.4, the ‘ZigBee’ mesh-network wireless protocol designed for consumer appliances.
The second-gen LivingColors lamps can be persuaded to talk to the Philips Hue wireless bridge and integrate with a Hue setup. Much anecdotal information about how this is done can be found here:
http://www.everyhue.com/?page_id=38#/discussion/7/hue-and-living-colors
... for your purposes, integrating with Hue is your best bet, as the bridge exposes a comprehensive (as yet unofficial) RESTful JSON API, which is easily scripted. One of the better resources on using this API can be found here:
http://rsmck.co.uk/hue
I personally have had a good deal of fun doing what you are trying to do, with the Hue bridge and LivingColors lamps. Good luck!
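To give a flavour of scripting the bridge, here is a small Python sketch using only the standard library. The bridge IP and username below are placeholders (you obtain a whitelisted username when pairing with the bridge); the `/lights/<id>/state` resource and the `on`/`hue`/`sat`/`bri` fields are part of the Hue JSON API described at the links above:

```python
import json
from urllib.request import Request, urlopen

BRIDGE_IP = "192.168.1.2"   # placeholder: your bridge's address
USERNAME = "newdeveloper"   # placeholder: your whitelisted API user

def light_state(on=True, hue=46920, sat=254, bri=254):
    """Build the JSON body for /lights/<id>/state.
    hue is 0-65535; sat and bri are 0-254."""
    return json.dumps({"on": on, "hue": hue, "sat": sat, "bri": bri})

def set_light(light_id, body):
    """PUT the new state to the bridge (needs a real bridge on the LAN)."""
    url = f"http://{BRIDGE_IP}/api/{USERNAME}/lights/{light_id}/state"
    req = Request(url, data=body.encode(), method="PUT")
    return urlopen(req).read()
```

From there, an "Ambilight"-style effect is just a loop that samples the dominant screen color, converts it to hue/sat/bri, and calls `set_light` a few times per second.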
I would also be interested in controlling my LivingColors from a computer through a 2.4 GHz USB transmitter (mainly just for fun ;)
I have two LivingColors lamps, a Generation 1 and a Generation 2, and the bad news is that the remote hardware and (maybe) the protocol were completely changed by Philips in between (probably to add the fading effects of the second generation). So it's even more complicated now: such a transmitter would have to handle both protocols.
Some more links about what's inside the official controllers (in addition to the Elektor article given above):
Gen 1 : http://www.knutsel.org/2009/01/01/livingcolors-1st-generation/
Gen 2 : http://www.knutsel.org/2009/12/01/philips-redesigns-livingcolors-breaks-compatibility/
Elektor (reverse engineering of the protocol): http://www.ideetron.nl/Livcol/UK2008110661.pdf
I checked the Philips website where you can download the user documentation. The following trouble-shooting tip provides a clue:
LivingColors doesn't respond quickly to the remote control.
- The communication between the remote control and the LivingColors can be affected by heavy traffic on a wireless data network, for example a wireless router. You should move LivingColors away from the wireless access point and switch your wireless router to channels 8-11 for minimum interference.
So the controller uses wireless communication, and it is clearly quite a sophisticated communication link: one controller can control up to 6 lights.
Unless it is a full WiFi link, getting a computer to control the light would necessitate some heavy hardware hacking. Should it be a WiFi link, it would be possible to write a driver.
If anyone has one of these, could they do a WiFi scan to see if the light and controller show up?