Does anyone know what kind of logic gates Graphcore uses? - ipu

Does anyone know what kind of logic gates Graphcore uses for their IPUs? Specifically the GC200, not the server racks (MK2000).

Related

Omnet++ with Reinforcement Learning Tools [ML]

I am currently failing to find an easy, modular framework that links OpenAI Gym, TensorFlow, or Keras with OMNeT++ in such a way that the tools can communicate and support online learning.
There are tools like omnetpy and veins-gym, but one is very rigid and does not inspire confidence (and there is no certainty it can bridge to OpenAI Gym, for example), while the other is so poorly documented that one person cannot work out how it is supposed to be incorporated into a project.
OMNeT++ being such a big project, how is it possible that it is so disconnected from the ML world like this?
On top of that, I will also need to use federated learning, so a custom, scrappy solution would be even more difficult.
I have found various articles that say "we have used OMNeT++ and Keras or TensorFlow", etc., but none of them shared their code, so it is rather mysterious how they did it.
Alternatively, I could use NS-3, but as far as I know it has a very steep learning curve. Some ML tools are apparently well documented for NS-3, but since I have not tried to implement anything in NS-3 with those tools, I cannot know for sure. OMNeT++ was easy to learn for what I need, so switching to NS-3 still seems a burden with no clear guarantees.
I would like to ask for help in both directions:
if you have links to good middleware between OMNeT++ and OpenAI Gym, Keras, or the like, and you have used them, please share them with me.
if you have experience linking NS-3 with OpenAI Gym, Keras, and so on through ML middleware, please share that with me.
I will only be able to finish my PoC if I manage to use reinforcement learning tooling online in an OMNeT++ simulation (i.e., the agent decides at simulation runtime which actions to take).
My project is actually complex, but the PoC may be simple. I am relying on these tools because I do not have enough experience to build a complex system translating one domain into another, so any help would be appreciated.
Thank You.
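The online-learning loop asked for above (an agent choosing actions while the simulation runs) can be sketched independently of any particular middleware as a Gym-style interface. Everything below, including the toy "simulator", is hypothetical; it only illustrates the reset/step contract an OMNeT++ bridge would have to satisfy:

```python
import random

class ToySimulator:
    """Stand-in for a running OMNeT++ simulation (hypothetical)."""
    def __init__(self):
        self.queue_len = 0

    def apply_action(self, action):
        # action 0 = hold traffic, action 1 = forward traffic
        self.queue_len = max(0, self.queue_len + random.randint(-1, 2) - action)
        return self.queue_len

class SimEnv:
    """Gym-style wrapper: an RL agent interacts while the sim runs."""
    def __init__(self):
        self.sim = ToySimulator()
        self.steps = 0

    def reset(self):
        self.sim = ToySimulator()
        self.steps = 0
        return self.sim.queue_len          # observation

    def step(self, action):
        obs = self.sim.apply_action(action)
        reward = -obs                      # shorter queues are better
        self.steps += 1
        done = self.steps >= 10            # episode horizon
        return obs, reward, done, {}

# Online loop: the agent decides at simulation runtime.
env = SimEnv()
obs = env.reset()
done = False
while not done:
    action = 1 if obs > 0 else 0           # trivial placeholder policy
    obs, reward, done, _ = env.step(action)
```

A real bridge would replace `ToySimulator` with IPC (e.g., sockets or shared memory) into the running OMNeT++ process, but the reset/step surface stays the same.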

Compare SDN Mininet results to traditional network results

My topic is: comparative performance analysis of an SDN-based network versus a traditional network. So I decided to use Mininet and already know how to perform some tests. However, I am wondering which tests would be better to choose (throughput, jitter, packet delivery ratio, latency, end-to-end packet delay, etc.), and how/where I can actually run tests for the traditional network. NS-2? What would be your suggestions? Maybe some useful links/tutorials?
Many thanks,
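Whichever simulator produces the traces, the metrics listed above can all be computed from per-packet send/receive timestamps. A minimal sketch (the log format and payload size are invented for illustration):

```python
def analyze(sent, received, payload_bytes=1000):
    """Compute basic performance metrics from packet logs.

    sent:     {packet_id: send_time_s}
    received: {packet_id: recv_time_s}  (lost packets are absent)
    """
    delivered = [p for p in sent if p in received]
    pdr = len(delivered) / len(sent)                     # packet delivery ratio
    delays = [received[p] - sent[p] for p in delivered]  # one-way latency
    avg_delay = sum(delays) / len(delays)
    # Jitter: mean absolute difference between consecutive delays (simplified).
    jitter = (sum(abs(a - b) for a, b in zip(delays, delays[1:]))
              / max(1, len(delays) - 1))
    duration = max(received[p] for p in delivered) - min(sent[p] for p in delivered)
    throughput_bps = len(delivered) * payload_bytes * 8 / duration
    return {"pdr": pdr, "avg_delay": avg_delay,
            "jitter": jitter, "throughput_bps": throughput_bps}

sent = {1: 0.00, 2: 0.10, 3: 0.20, 4: 0.30}
received = {1: 0.05, 2: 0.17, 3: 0.26}       # packet 4 was lost
metrics = analyze(sent, received)
```

Running the same analysis script over logs from both the SDN and the traditional topology keeps the comparison apples-to-apples.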
You should first use the same simulator to simulate both network types. Traditional and SDN networks are almost the same; the main differences are the management view and the flexibility.
You first need to:
Set your goals for the study. Why are you performing it? Has someone done it before? Search Google Scholar and check.
If some people have already done this, think about the metrics or objectives they were missing, and then think about how to highlight them.
A good starting point for SDN research is always this paper (http://ieeexplore.ieee.org/document/6994333/).
Feel free to comment to let us know more in case this is not sufficient. I'm doing my PhD in SDN, so I would be happy to help and exchange knowledge.

Optimization algorithms optimizing an existing system connections

I am currently working on an existing infrastructure with about 1000 customer sites connected to about 5 different hubs. A customer site may connect to one or two hubs to ensure reliability, but each customer site is connected to at least one hub. I want to check whether the current system is optimal, or whether it can be optimized to give better connections from customer sites to hubs, to improve connectivity and reliability. Can you suggest good optimization algorithms to look into? Thank you.
Sounds like you're doing some variation of the facility location problem.
This is a well-known problem, and while there are exact approaches that can find the global optimum (e.g., dynamic programming or integer programming formulations), they do not scale well (i.e., you run into the curse of dimensionality). You could try this, but 1000 sites already sounds pretty big (though it depends on your exact problem formulation).
I'd recommend taking a look at the Coursera MOOC Discrete Optimization. You don't have to take the whole course, but in the "Assignments" section of the video lectures, the instructor also explains a variant of the facility location problem and some possible approaches to consider; once you've decided which one you want to use, you can look deeper into that particular approach.
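To make the facility-location framing concrete, here is a minimal greedy-assignment sketch. The cost matrix and capacity are invented, and this is only a heuristic baseline to refine with local search or an exact solver, not the optimum:

```python
def greedy_assign(costs, hub_capacity):
    """Assign each site to its cheapest hub that still has capacity.

    costs:        costs[site][hub] = connection cost (e.g., distance)
    hub_capacity: maximum number of sites per hub
    """
    n_hubs = len(costs[0])
    load = [0] * n_hubs
    assignment = []
    for site_costs in costs:
        # Try hubs in order of increasing cost for this site.
        for hub in sorted(range(n_hubs), key=lambda h: site_costs[h]):
            if load[hub] < hub_capacity:
                assignment.append(hub)
                load[hub] += 1
                break
        else:
            raise ValueError("no hub capacity left")
    total = sum(costs[s][h] for s, h in enumerate(assignment))
    return assignment, total

costs = [[1, 4], [2, 1], [3, 2], [5, 1]]   # 4 sites, 2 hubs
assignment, total_cost = greedy_assign(costs, hub_capacity=2)
```

The greedy result depends on site order, which is exactly the kind of weakness a local-search pass (swap two sites' hubs, keep the swap if total cost drops) can fix at your scale.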

Semantic techniques in IOT

I am trying to use semantic technologies in IoT. For the last two months I have been doing a literature survey, and during this time I have come to know some of the tools required, such as Protégé and Apache Jena. Now, along with reading papers, I want to play with semantic techniques like annotation, data linking, etc., so that I can get a better understanding of the concepts involved. To that end, I have laid out the following roadmap:
Collect data manually (using sensors) or use some dataset already on the web.
Annotate the dataset and possibly use an ontology (not sure).
Apply Linked Open Data principles.
I am not sure whether this roadmap is correct, so I am asking for suggestions on the following points:
Is this roadmap correct?
How should I approach steps 2 and 3? In other words, which tools should I use for them?
I hope you can help me find a proper way to handle this. Thanks
Semantics and IoT (or the semantic sensor web [1]) is a hot topic. Congratulations on choosing an interesting and worthwhile research topic.
In my opinion, your three-step approach looks good. I would recommend building a quick prototype so you can discover the likely challenges early.
In addition to the implementation technologies (Protégé, etc.), there are some important works that might be useful for you:
Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) [2]. It is an important body of work for sharing and exchanging sensor observation data. Many large organizations (NOAA, NASA, NRCan, AAFC, ESA, etc.) have adopted this standard. It defines a conceptual data model/ontology (O&M, ISO 19156). Note: it is a very comprehensive standard, hence very BIG and time-consuming to read; I recommend reading #2 below instead.
OGC SensorThings API (http://ogc-iot.github.io/ogc-iot-api/), an IoT cloud API standard based on the OGC SWE. This might be the most relevant to you. It is a lightweight protocol in the SWE family, designed specifically for IoT. Some early research has been done on using JSON-LD to annotate SensorThings.
W3C Spatial Data on the Web (http://www.w3.org/2015/spatial/wiki/Main_Page). This is an ongoing joint effort between W3C and OGC. Part of its goal is to mature the SSN (Semantic Sensor Network) ontology. Once it is ready, the new SSN could be used to annotate the SensorThings API, for example. A work worth monitoring.
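As a tiny illustration of the annotation step, a raw sensor reading can be wrapped in a JSON-LD document using SOSA/SSN terms. The SOSA namespace below is the standard W3C one; the sensor ID, example.org IRIs, and values are invented:

```python
import json

# A raw reading as it might come off a sensor (step 1 of the roadmap).
raw = {"sensor": "dht22-01", "temp_c": 21.4, "time": "2016-05-01T10:00:00Z"}

# Step 2: annotate it as JSON-LD using SOSA (Sensor, Observation, Sample,
# and Actuator) vocabulary terms.
observation = {
    "@context": {"sosa": "http://www.w3.org/ns/sosa/"},
    "@id": "http://example.org/obs/1",                 # hypothetical IRI
    "@type": "sosa:Observation",
    "sosa:madeBySensor": {"@id": "http://example.org/sensor/" + raw["sensor"]},
    "sosa:observedProperty": {"@id": "http://example.org/prop/temperature"},
    "sosa:hasSimpleResult": raw["temp_c"],
    "sosa:resultTime": raw["time"],
}

doc = json.dumps(observation, indent=2)
```

Step 3 (linking) then amounts to replacing the example.org IRIs with dereferenceable ones that point into existing vocabularies and datasets.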
[1] Sheth, Amit, Cory Henson, and Satya S. Sahoo. "Semantic sensor web." Internet Computing, IEEE 12.4 (2008): 78-83.
[2] Bröring, Arne, et al. "New generation sensor web enablement." Sensors 11.3 (2011): 2652-2699.

What is the application of automata?

In other words, why should I learn about it? When am I going to say, "Oh, I need to know about pushdown automata or Turing machines for this"?
I am not able to see the applications of the material.
Thanks
You should learn automata theory because it will help you understand what is computationally possible in a given system. People who understand the difference between a pushdown automaton and a universal Turing machine understand why trying to parse HTML with regular expressions is a bad idea. People who don't understand it think parsing HTML with REs is just fine.
There are problems that are a nice fit for this kind of solution, some of which are:
parsers
simulations of stateful systems
event-driven problems
There are probably many others. If you start writing code with an ad-hoc state variable on which various functions depend to do this or that, you can probably benefit from a proper FSA.
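A minimal sketch of what a "proper FSA" looks like in code: the transitions are data, not if-statements scattered across functions. The states and events here are invented for illustration:

```python
# Transition table: (current state, event) -> next state.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
    ("paused", "stop"): "idle",
}

class Machine:
    def __init__(self, state="idle"):
        self.state = state

    def handle(self, event):
        try:
            self.state = TRANSITIONS[(self.state, event)]
        except KeyError:
            # Invalid transitions are impossible by construction,
            # instead of silently corrupting an ad-hoc state variable.
            raise ValueError(f"event {event!r} not valid in state {self.state!r}")
        return self.state

m = Machine()
m.handle("start")   # idle -> running
m.handle("pause")   # running -> paused
m.handle("stop")    # paused -> idle
```

Because the whole behavior lives in one table, you can enumerate, test, and draw it, which is exactly what the state-diagram view of automata buys you.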
First off, it's my position that some things are worth learning not because they're immediately useful, but because they are inherently valuable. A great failing of modern education is that it does nothing to convince students of this while they're still impressionable.
That being said, automata theory is both inherently valuable and incredibly useful. Parsing text, compiling programs, and the capabilities of computing devices can only really be understood using the kinds of tools automata theory gives us, and getting the most out of computational systems requires deep understanding. Automata theory lets us answer some of the most fundamental questions we can ask about computation: What resources do we need to compute? Given certain resources, what can we solve? Are there problems that cannot be solved no matter how many resources we possess? Not to mention that complexity theory, which deals with the efficiency of computations, requires automata theory in order to be meaningfully defined.
Learning about automata (which are nothing but abstract machines) gives you an idea of the limits of computation. When an automaton does not accept a string, it means the machine cannot take that string as input. State diagrams give the possible outcomes for each input, which helps us build parsers/machines.
A good example is checking the format of an email address. Software will not accept an email address in a form if the format is not valid; that is, the software accepts addresses only in a specific format. We are able to build such software by first working the problem out theoretically using automata and state machines.
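A toy version of that idea: an explicit DFA accepting the simplified email shape letters '@' letters '.' letters (real address validation is far more involved than this):

```python
def accepts_email(s):
    """DFA for the toy language: letter+ '@' letter+ '.' letter+."""
    # States: need_local, local, need_host, host, need_tld, tld.
    # Any unexpected character sends us to an implicit dead state (reject).
    state = "need_local"
    for ch in s:
        if state in ("need_local", "local"):
            if ch.isalpha():
                state = "local"
            elif ch == "@" and state == "local":
                state = "need_host"
            else:
                return False
        elif state in ("need_host", "host"):
            if ch.isalpha():
                state = "host"
            elif ch == "." and state == "host":
                state = "need_tld"
            else:
                return False
        else:  # need_tld / tld
            if ch.isalpha():
                state = "tld"
            else:
                return False
    return state == "tld"   # only the 'tld' state is accepting
```

A production regex engine compiles a pattern into essentially this kind of machine; writing one by hand once makes it obvious what the theory is describing.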