I am currently developing a project document for my assignment. The title of my project is 'Systematic Train Tracker'.
Let me describe my system:
The GPS receiver on the train will collect the train's position information and pass it to the control server over the GSM network. The control server will then forward the information to the train administrative office for monitoring purposes. The information will also be passed to the relevant stations to be displayed to passengers.
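To make the data flow concrete, here is a minimal, purely illustrative Python sketch of the kind of position report a train-side unit might assemble before handing it to the GSM link; all field names and values are hypothetical, not part of any specification, and would be fixed as part of the functional requirements.

import json
import time

def build_position_report(train_id, latitude, longitude, speed_kmh):
    # Assemble one position report; the field names here are illustrative only.
    return {
        "train_id": train_id,
        "timestamp": int(time.time()),   # time of the GPS fix, UTC epoch seconds
        "latitude": latitude,
        "longitude": longitude,
        "speed_kmh": speed_kmh,
    }

# Serialise the report before handing it to the GSM modem layer; the control
# server would parse it and forward it to the administrative office and stations.
report = build_position_report("TRAIN-042", 6.9271, 79.8612, 72.5)
print(json.dumps(report))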
So what would the functional and non-functional requirements for my project be? And what might the possible constraints be?
Please help.
Many of the articles I've read that describe using Tacotron2 to train models for speech synthesis seem to use very high-end NVIDIA GPUs. I realize there is a great deal of data that needs to be analysed, but is it possible to apply the various available tools on, say, a desktop CPU with a lower-end card? I have a project in mind but don't really have access to some of the more expensive hardware described by various authors.
For a PoC project, we would like to build a digital twin of a physical device, e.g. a coffee machine. We would like to know which open source software components can be used for this. Some software components, based on the information available, are listed below:
Eclipse Hono as the IoT platform for the IoT gateway
Eclipse Vorto for describing information models
Eclipse Ditto for the digital twin representation. It provides an abstract representation of the device's last state in the form of HTTP or WebSocket APIs
Blender / Unreal Engine for 3D models
Protege for the ontology editor
I have the following questions:
Are we missing any software components needed to create a digital twin of a physical asset?
Assuming we have 3D models available and sensor data is also available, how can we feed live sensor data to the 3D models, e.g. changing the color of a water tank based on the real sensor data for the water tank level? We are not able to understand how real-time sensor data will be connected to the 3D models.
How will an ontology be helpful in creating the 3D models?
So you have a 3D model and sensor information, and you want to change some properties of the 3D model to reflect the sensor information? You shouldn't need to use 5 different tools for something like that. I would suggest looking into video game development tools like Unity3D or Unreal Engine 4.
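To make the bridging idea more concrete, here is a minimal sketch of one common pattern: poll the twin's last state from Eclipse Ditto over HTTP and map a tank level to a colour that the game-engine side (Unity3D, Unreal) can then apply to the 3D model. The thing ID, the waterTank feature layout and the credentials are hypothetical placeholders, and the /api/2/things path assumes a default Ditto setup.

import time
import requests

# Hypothetical twin and credentials; adjust to your Ditto instance.
DITTO_URL = "http://localhost:8080/api/2/things/org.example:coffee-machine"
AUTH = ("ditto", "ditto")

def level_to_rgb(level_percent):
    # Simple mapping: empty tank -> red, full tank -> blue.
    fraction = max(0.0, min(1.0, level_percent / 100.0))
    return (int(255 * (1 - fraction)), 0, int(255 * fraction))

while True:
    twin = requests.get(DITTO_URL, auth=AUTH, timeout=5).json()
    level = twin["features"]["waterTank"]["properties"]["level"]  # assumed feature layout
    # Hand the colour to the rendering side (local socket, OSC, a game-engine
    # scripting API, ...); here we only print it.
    print(f"water tank level {level}% -> colour {level_to_rgb(level)}")
    time.sleep(1.0)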
I am working on an IoT gateway such as a Raspberry Pi, and I want to use Brickschema for data modelling and normalisation of sensor data. I have gone through a paper (link) and found some theory about Brickschema, but it doesn't cover the implementation. Could you tell me how I can start working with Brickschema using Python libraries, and what I need to integrate with Brick to model the sensor data? Please share an example of implementing Brickschema for my IoT gateway. Thanks in advance.
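Not an authoritative answer, but a minimal sketch of one way to start: Brick is an RDF ontology, so a small metadata model can be built directly with rdflib (the brickschema PyPI package builds on rdflib and adds loading and inference helpers). The building and sensor names below are invented for illustration, and the Brick namespace URI assumes Brick 1.1 or later.

from rdflib import Graph, Namespace, RDF

BRICK = Namespace("https://brickschema.org/schema/Brick#")
BLDG = Namespace("http://example.com/mybuilding#")   # invented namespace for this gateway

g = Graph()
g.bind("brick", BRICK)
g.bind("bldg", BLDG)

# Metadata model: a temperature sensor that is a point of a specific room.
g.add((BLDG.room_101, RDF.type, BRICK.Room))
g.add((BLDG.temp_sensor_1, RDF.type, BRICK.Temperature_Sensor))
g.add((BLDG.temp_sensor_1, BRICK.isPointOf, BLDG.room_101))

# The gateway would keep the actual readings in a time-series store keyed by the
# sensor's URI (bldg:temp_sensor_1); the Brick graph only normalises the metadata.
print(g.serialize(format="turtle"))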
I am studying the tensorflow-federated API in order to do federated learning with multiple real machines.
But I found an answer on this site saying that it does not support federated learning across multiple real machines.
Is there no way to do federated learning with multiple real machines?
Even if I build a network structure for federated learning with 2 client PCs and 1 server PC, is it impossible to set up that system using the TensorFlow Federated API?
Or even if I adapt the code, can't I build the system I want?
If the code can be modified to configure this, can you give me a tip? If not, when will there be an example showing how to configure it on real computers?
In case you are still looking for something: if you're not bound to TensorFlow, you could have a look at PySyft, which uses PyTorch. Here is a practical example of an FL system built with one server and two Raspberry Pis as clients.
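If it helps, a minimal sketch of the PySyft idea, assuming the 0.2.x-era API with TorchHook and VirtualWorker; for physically separate machines such as the Raspberry Pis in the linked example, the virtual workers would be replaced by PySyft's websocket-based workers.

import torch
import syft as sy

hook = sy.TorchHook(torch)                  # extends torch tensors with .send()/.get()
alice = sy.VirtualWorker(hook, id="alice")  # in-process stand-in for client 1
bob = sy.VirtualWorker(hook, id="bob")      # in-process stand-in for client 2

x = torch.tensor([1.0, 2.0, 3.0]).send(alice)  # the data now "lives" on alice
y = torch.tensor([4.0, 5.0, 6.0]).send(alice)
z = x + y            # computed on alice; locally z is only a pointer tensor
print(z.get())       # pull the result back to the local worker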
TFF is really about expressing the federated computations you wish to execute. In terms of physical deployments, TFF includes two distinct runtimes: one, the "reference executor", simply interprets the syntactic artifact that TFF generates, serially, all in Python and without any fancy constructs or optimizations; the other, still under development but demonstrated in the tutorials, uses asyncio and hierarchies of executors to allow for flexible executor architectures. Both of these are really about simulation and FL research, not about deploying to devices.
In principle, this may address your question (in particular, see tff.framework.RemoteExecutor). But I assume that you are asking more about deployment to "real" FL systems, e.g. data coming from sources that you don't control. This is really out of scope for TFF. From the FAQ:
Although we designed TFF with deployment to real devices in mind, at this stage we do not currently provide any tools for this purpose. The current release is intended for experimentation uses, such as expressing novel federated algorithms, or trying out federated learning with your own datasets, using the included simulation runtime.
We anticipate that over time the open source ecosystem around TFF will evolve to include runtimes targeting physical deployment platforms.
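For concreteness, the canonical temperature-averaging example from the TFF tutorials shows what "expressing a federated computation" looks like; invoked as below, it runs on the local simulation runtime, with a plain Python list standing in for values held by clients.

import tensorflow as tf
import tensorflow_federated as tff

# Declare a computation over float values placed at CLIENTS and average them.
@tff.federated_computation(tff.FederatedType(tf.float32, tff.CLIENTS))
def get_average_temperature(client_temperatures):
    return tff.federated_mean(client_temperatures)

# Under the simulation runtime, a Python list plays the role of the per-client
# values; in a real deployment each value would live on a separate device.
print(get_average_temperature([68.5, 70.3, 69.8]))  # roughly 69.53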
I'm currently working with LabVIEW 2012 and I'm about to begin a project using 3D sensor mapping (the Sensor Mapping Express VI) in LabVIEW.
I've read about it, and most of the material talks about NI-DAQmx tasks, but for my project I'd like to use data from shared variables that I have written.
Does anyone know whether it is possible and/or very difficult to do that? I also see that we can add "free sensors to represent data you wire to the Express VI." So is that the answer to my question?
Yes, you can connect your data via the free sensor input; this should not give you a problem. You can even connect simulated channels if need be.