I'd like to start out with the Arduino to make something that will (preferably) dim my room lights and turn on some recessed lighting for my computer when a button or switch is activated.
First of all, is this even possible with the Arduino?
Secondly, how would I switch on and off real lights with it? Some sort of relay, maybe?
Does anyone know of a good tutorial or something where at least parts of this are covered? I'll have no problems with the programming, just don't know where to start with hardware.
An alternative (and safer than playing with triacs – trust me I've been shocked by one once and that's enough!) is to use X-10 home automation devices.
There is a PC (RS232) device (CM12U UK or CM11 US) you can get to control the others. You can also get lamp modules that fit between your lamp and the wall outlet, which allow you to dim the lamp by sending signals over the mains, and switch modules, which simply switch loads on and off.
The Arduino has a TTL-level RS232 connection (it's basically what the USB connection uses) on pins 0 and 1 of the Diecimila, so you could use that: connect it through a level converter, which you can buy or make, to the X-10 controller. There are instructions on the Arduino website for making an RS232 port.
Alternatively you could use something like the FireCracker for X-10, which uses 310 MHz (US) or 433 MHz (UK), and have your Arduino send out RF signals which the TM12U converts into proper X-10 mains signals for the dimmers etc.
In the US the X-10 modules are really cheap as well (sadly not the case in the UK).
Most people do it using triacs. A triac is like two diodes in anti-parallel (in parallel, but with their polarity reversed) with a trigger pin. A triac conducts current in either direction only when it's triggered. Once triggered, it acts like a regular diode: it continues to conduct until the current drops below its holding threshold.
You can see it as a bi-directional switch on an AC line, and you can vary the mean current by triggering it at different moments relative to the point where the AC sine wave crosses zero.
Roughly, it works like this: at the AC sine-wave zero crossing, the triac turns off and your lamp doesn't get any power. If you trigger it, say, halfway through the sine's swing, your lamp gets half the current it normally would, so it lights at half its power, until the sine wave crosses zero again. At that point you start over.
If you trigger the triac sooner, your lamp will get current for a longer time interval, glowing brighter. If you trigger it later, your lamp glows fainter.
The same applies to any AC load.
It is almost the same principle as PWM for DC: you turn your current source on and off faster than your load can react, and the fraction of time it is turned on determines the average current your load will receive.
How do you do that with your Arduino?
In simple terms, you must first detect the zero crossing of the mains, then set up a timer/delay, and at its end trigger the triac.
To detect the zero crossing one normally uses an optocoupler. You connect the LED side of the coupler to the mains and the transistor side to an interrupt pin of your Arduino.
You can connect your Arduino IO pins directly to the triacs' triggers, but I would use another optocoupler just to be on the safe side.
When the sine-wave approaches zero, you get a pulse on your interrupt pin.
In this interrupt you set up a timer: the longer the timer, the less power your load will get. You also reset the state of your triac pins.
When that timer's interrupt fires, you set your IO pins to trigger the triacs.
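A minimal Arduino-style sketch of that structure, for reference. It polls micros() in loop() instead of using a second hardware timer, just to keep it short; the pin numbers, the RISING edge from the zero-cross optocoupler, and the 50 Hz half-cycle length are all assumptions you would adjust to your own circuit.

    // Minimal phase-angle dimmer sketch (illustrative values only).
    // Assumes: zero-cross optocoupler output on pin 2, opto-triac driver on pin 9, 50 Hz mains.
    const byte ZERO_CROSS_PIN = 2;               // must be an external-interrupt-capable pin
    const byte TRIAC_PIN      = 9;
    const unsigned long HALF_CYCLE_US = 10000;   // 10 ms half-cycle at 50 Hz; use 8333 for 60 Hz

    volatile unsigned long lastZeroCross = 0;    // micros() timestamp of the last zero crossing
    volatile bool fired = false;                 // already fired in this half-cycle?

    unsigned long firingDelayUs = 5000;          // 0 = full brightness, HALF_CYCLE_US = off

    void onZeroCross() {
      lastZeroCross = micros();
      fired = false;
      digitalWrite(TRIAC_PIN, LOW);              // drop the gate; the triac turns off at zero current anyway
    }

    void setup() {
      pinMode(TRIAC_PIN, OUTPUT);
      pinMode(ZERO_CROSS_PIN, INPUT_PULLUP);
      attachInterrupt(digitalPinToInterrupt(ZERO_CROSS_PIN), onZeroCross, RISING);
    }

    void loop() {
      noInterrupts();                            // copy ISR-shared variables atomically
      unsigned long zc = lastZeroCross;
      bool done = fired;
      interrupts();

      if (!done && (micros() - zc) >= firingDelayUs) {
        digitalWrite(TRIAC_PIN, HIGH);           // gate pulse: triac conducts until the next zero crossing
        fired = true;
      }
    }

A smaller firingDelayUs fires the triac earlier in each half-cycle and the lamp glows brighter; a larger one fires it later and the lamp glows fainter, exactly as described above.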
Of course you must understand a little about the hardware side so you don't fry your board or burn your house down.
And it goes without saying that you must be careful not to kill yourself when dealing with mains AC =).
HERE is the project that got me started some time ago.
It uses AVRs so it should be easy to adapt to an arduino.
It is also quite complete, with schematics.
Their software is a bit on the complex side, so you should start with something simpler.
There is just a ton of this kind of stuff at the Make magazine site. I think you can even find some examples of similar hacks.
I use a MOSFET for dimming 12 V LED strips with an Arduino. I chose the IRF3710 for my project, with a heat sink to be safe, and it works fine. I tested it with a 12 V halogen lamp, and that worked too.
I connect a PWM output pin from the Arduino directly to the MOSFET's gate pin and use analogWrite in the code to control brightness.
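For reference, the sketch side of that is only a few lines. The pin number is just an example, and the fade loop exists only to show analogWrite doing the dimming:

    // Dimming a 12 V LED strip through a MOSFET (gate driven from a PWM pin).
    // Pin 9 is only an example; any PWM-capable pin works.
    const byte MOSFET_GATE_PIN = 9;

    void setup() {
      pinMode(MOSFET_GATE_PIN, OUTPUT);
    }

    void loop() {
      // Fade up and back down; the analogWrite duty cycle (0-255) sets the brightness.
      for (int duty = 0; duty <= 255; duty++) {
        analogWrite(MOSFET_GATE_PIN, duty);
        delay(10);
      }
      for (int duty = 255; duty >= 0; duty--) {
        analogWrite(MOSFET_GATE_PIN, duty);
        delay(10);
      }
    }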
Regarding the second question about switching lights, you can switch 220 V on and off using relays, as partially seen in my photo. There are many boards for this; I chose this one:
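The relay side is just one digital output per channel from the Arduino's point of view. A rough sketch, assuming an active-low relay board on pin 7 and a momentary button to ground on pin 4 (both pin numbers and the active level are assumptions; check your particular board):

    // Switching a mains load through a relay module.
    // Many cheap relay boards are active-LOW, i.e. LOW energises the relay.
    const byte RELAY_PIN  = 7;
    const byte BUTTON_PIN = 4;              // momentary button to ground, internal pull-up

    void setup() {
      pinMode(RELAY_PIN, OUTPUT);
      digitalWrite(RELAY_PIN, HIGH);        // relay off (active-low board)
      pinMode(BUTTON_PIN, INPUT_PULLUP);
    }

    void loop() {
      bool pressed = (digitalRead(BUTTON_PIN) == LOW);
      digitalWrite(RELAY_PIN, pressed ? LOW : HIGH);   // energise the relay while the button is held
    }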
As a quick start, you can get yourself one of those dimmer packs (50-80 € for four lamps).
Then build the electronics for the Arduino to send DMX control signals:
Arduino DMX shield
You'll get both the Arduino experience and a good chance of not frying your surroundings with the higher voltages.
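If you go this route, the Arduino side is usually handled by a DMX library such as DmxSimple (check which library your particular shield ships with). A small sketch along these lines, where the output pin and the channel numbers are just examples:

    #include <DmxSimple.h>

    // Driving a 4-channel dimmer pack over DMX with the DmxSimple library.
    // The output pin and channel numbers depend on your shield and dimmer addressing.
    void setup() {
      DmxSimple.usePin(3);        // DMX output pin used by many simple shields
      DmxSimple.maxChannel(4);    // we only talk to the first four channels
    }

    void loop() {
      DmxSimple.write(1, 255);    // channel 1: room light full on
      DmxSimple.write(2, 64);     // channel 2: recessed lighting dimmed to roughly 25%
      delay(1000);
    }

Each DMX channel value (0-255) maps directly to a dimmer level on the pack.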
This is probably a long-shot question, but I'll try it anyway.
I'm developing hardware using PIC microcontrollers (Microchip). Communication is done through a full-speed (FS) USB 2.0 link.
I connect the microcontrollers to a PC running Windows 10 Home, version 21H1, build 19043.1826. The processor is an AMD Ryzen 5 3600 6-core processor.
First I used the PIC18F45K50, for which everything worked fine from day one. But due to the shortages on the market, I am now experimenting with the PIC18F47J53. Both microcontrollers are working fine, as I can (for example) control a MAX7219-driven display (3 x 7-segment) and also control a bunch of LEDs using an STP08CP05TTR. Clock timings also seem OK - I measured them with an oscilloscope.
These 2 microcontrollers are pretty much the same, at least for the core functionality such as USB. The differences that are relevant for the issue I'm reporting here are:
The PIC18F45K50 uses an internal clock of 8 MHz and has on-board correction logic to keep the clock synced for full-speed USB - this is a 5 V device.
The PIC18F47J53 uses a 16 MHz crystal, so everything should be within the USB 2.0 specs - this is a 3.3 V device.
I'm using the MPLAB X IDE v5.45 with MCC (the MPLAB Code Configurator), in which I set up the System Module (to set the correct clock frequencies, including the 48 MHz for USB) and where I configure the USB.
In both microcontrollers, the setup of the USB is exactly the same. I even checked the 4 files that are automatically generated by MCC, and except for the descriptors (I used different names), all is exactly the same.
When I connect the USB to my PC (same port), the PIC18F45K50 works perfectly. But the PIC18F47J53 gives error code 10.
This does not happen every time. For example, if I try 10 times (connect/disconnect the cable), I get the error about 7 times. Once the device didn't even appear, and 2 other times I read "This device is working properly." Although, in the latter case, my software that communicates with the controller still isn't working, so there is still something wrong.
Based on the above, the first thing I would think of is some hardware issue. The strange thing, though, is that things like the vendor ID (0x4D8), product ID (0xA), BCD device release (0x100), serial number (12345678), etc. always seem to be read out correctly. If there were a hardware problem, shouldn't I see more random issues with these as well? Or is this data read out in a slower mode than full speed (because that could of course explain it)?
Below are screenshots via "Device Manager / Ports (COM & LPT) / my serial device", then selecting the property in the Details.
If I compare the properties from the working microcontroller (PIC18F45K50) with the not working one (PIC18F47J53), it looks like all are exactly the same.
I also tried to compare the D- (CH1) and D+ (CH2) signals between the 2 microcontrollers with my oscilloscope. My USB knowledge is not detailed enough to interpret the signals, but what I can tell is that both look exactly the same to me, both timing wise and voltage level wise. Be aware that the CH2 signal on the PIC18F47J53 (D+), the second screenshot, is clipping in the picture below, but I measured it later and it shows the same voltage level as for the PIC18F45K50.
Does anybody have a clue where I should look first? The good news is that I have a working and a non-working version, so I can debug step by step and compare. But some hints on where to start would be appreciated.
EDIT 24JUL2022
I did the measurement with my oscilloscope again. This time I soldered 2 wires to the USB port so I could easily attach my probes. Now both the D- and D+ signals have a Vpp of about 3.3 V. I put some cursors on it, which show a pulse width of about 84 ns; that correlates with the USB full-speed bit rate of 12 MHz (should be 83.33 ns).
I found the issue. The Vusb pin on my PIC18F47J53 had a bad connection (or possibly none at all). I gave it another touch with the soldering iron, and bingo! The "error 10" has disappeared completely; each time I connect/disconnect, it now reports "This device is working properly." I also see a continuous signal on my oscilloscope now - not one that disappears after a while. And I could already send and receive some commands.
As explained here:
http://www.edn.com/design/sensors/4407580/Brushless-DC-Motors-Part-II--Control-Principles
switching the motor windings should occur when the back-EMF voltage crosses half the DC bus voltage (1/2 VDC). How can I do that effectively on an STM32F4, which doesn't have an embedded comparator module?
It seems the only way is to use the analog watchdog, selecting the next single channel to watch each time the interrupt fires?
And what if I want to drive 4 BLDC motors from a single STM32 chip?
There are a few ways you can achieve this. The most popular way with STM32s is sensing the floating phase. The technique is a little different from what your link is suggesting; nevertheless, there is plenty of example code to get this going.
Here is a nice explanation of ST's patented 3-resistor BLDC position-sensing method (and some other techniques).
A nice starting point would be this manual.
The STM32 has two motor-control timers (TIM1 and TIM8), which you can use to drive 2 BLDC motors. That does not stop you from using the other timers in combination to drive more BLDCs, but it will demand some additional programming complexity.
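To make the floating-phase approach concrete, here is a rough C++ sketch of the decision logic only: sample the floating phase, compare it against half the DC bus voltage, and commutate shortly after the crossing. Every hardware-facing function in it (adc_read_phase, adc_read_vbus, apply_step, delay_us) is a hypothetical placeholder for the ADC/TIM1 code you would write following the ST documents above, and the step tables depend on your winding order and drive sequence:

    // Sensorless six-step commutation: decision logic only.
    // All hardware functions below are placeholders, not real HAL/LL calls.
    #include <cstdint>

    static uint16_t adc_read_phase(int phase) { (void)phase; return 0; }  // ADC sample of one motor phase
    static uint16_t adc_read_vbus()           { return 4095; }            // ADC sample of the DC bus
    static void     apply_step(int step)      { (void)step; }             // set TIM1 PWM outputs for this step
    static void     delay_us(uint32_t us)     { (void)us; }

    // For each of the six steps: which phase is floating, and whether its
    // back-EMF is rising or falling during that step (depends on your sequence).
    static const int  floating_phase[6] = { 2, 1, 0, 2, 1, 0 };
    static const bool bemf_rising[6]    = { false, true, false, true, false, true };

    void run_six_step() {
        int step = 0;
        apply_step(step);

        for (;;) {
            uint16_t half_vbus = adc_read_vbus() / 2;
            uint16_t bemf      = adc_read_phase(floating_phase[step]);

            bool crossed = bemf_rising[step] ? (bemf > half_vbus)
                                             : (bemf < half_vbus);
            if (crossed) {
                // In a real drive you wait roughly 30 electrical degrees here,
                // since the zero crossing leads the commutation point.
                delay_us(100);
                step = (step + 1) % 6;
                apply_step(step);
            }
        }
    }

The analog-watchdog variant from the question replaces the polling of adc_read_phase with an interrupt: you point the watchdog at the currently floating channel with thresholds around Vbus/2, and re-target it after every commutation.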
I'm working on an arcade cabinet that will be able to play various video game consoles (real hardware, not emulated.) There will be a PC inside to run a selection menu. I'll have to write that myself. I'll also need to program a PLC which will do various things like control the relays which switch audio/video/controls between the PC and the various consoles, etc. I'll need help with those two tasks in time, but they are not what I'm working on right now.
What I'm working on as a starting point has to do with the controller encoding. Basically, the controls for each player consist of a few buttons and a joystick. These use momentary, normally-open contact switches, one for each button, and one for each cardinal direction on the joystick. Pressing the button, or joystick direction, closes the switch. The state of the buttons is then communicated to the console by an encoder.
The encoder has a connection for each button and joystick direction which is connected to 5 volts ("high") through a pull-up resistor. When a button or direction is pressed, a connection to ground is made through the momentary switch. When the encoder reads ground ("low") on a button connection, it knows that a button has been pressed and it communicates this to the console.
I already have all this working with the various consoles, but I've thought of some features that would be nice to add. This is where my current task comes in.
The first feature is button remapping. Some of these games were designed with controllers in mind, so when you use them with an arcade control panel, some of the buttons may not be where you want them. Some games allow buttons to be remapped via software, but others do not. My idea is to add a PLC in between the joystick and buttons and the encoder. I'll call this PLC a "pre-encoder."
The pre-encoder would read the states of the buttons on some input pins, then write these states back to some output pins, relaying them to the encoder. The advantage is that its programming could associate any input pin with any output pin, effectively remapping the buttons. Whenever a console is selected via the computer's menu, a button-mapping profile associated with a particular game could be selected as well, and forwarded to the pre-encoder.
Of course, the pre-encoder's routine which reads the buttons and relays their states to the encoder must repeat very quickly for smooth control. These games will be running at about 50 to 60 Hz, meaning a new video frame every 16.67 ms or less. Ideally, the pre-encoder will be able to repeat this routine many, many times per frame to ensure the absolute minimum input lag. I want to ensure that the code and hardware selection is optimized to run as fast as possible.
The second feature is turbo buttons. Some games, especially arcade games, require a fire button to be pressed repeatedly every time you want to fire your gun, or your ship's cannons, etc, even if you have unlimited ammo. This seems unnecessary, and it will tire your fingers out pretty quickly. A turbo button is one that can be held down continuously, yet the game is being told that you are rapidly pressing and releasing it. This could be done in software for anything running on the PC, or with an analog solution like a 555 timer, but the best method is to synchronize the turbo button timing with the video refresh rate. By feeding the vertical sync pulse from the PC or video game console's video output to a PLC, it will know exactly how often a frame of video is rendered. Turbo button timing can then be controlled by defining, in numbers of frames, the periods when the button should be pressed and released. Timing information could also be included with the game-specific button profiles.
The third feature is slow buttons. Actually, this would probably only be applied to the joystick, but I'm referring to the switches for its cardinal directions as buttons. In certain games (it will probably only be used in shmups) it is sometimes needed to move your character (ship/plane) through very tight spaces. If movement is too fast in response to even minimal joystick input, you may go too far and crash. The idea is that, while a slow activation button is held, the joystick will be made less responsive by rapidly activating and deactivating it in the same manner as the turbo buttons.
I'm not sure if I want the pre-encoder itself to be watching the vertical sync pulse or if it will slow it down too much. My current thinking is that a separate PLC will be responsible for general management of the cab itself; watching the "on" button, switching relays, communicating directly with the PC, watching the vertical sync pulse, etc. This will free up the pre-encoder to run more quickly.
Here is some example "code" for the pre-encoder. Obviously, it's just a rough outline of what I have in mind, as I don't even know what language it will be. This example assumes that a dedicated PLC will be used just as the pre-encoder. A separate PLC will be responsible for watching the vertical sync pulse, in addition to other tasks, like getting a game profile from the computer and passing some of that info to the pre-encoder. That PLC will know what the frame timing should be for turbo and slow functions, it will count frames, and during frames when turbo buttons should be disabled, it outputs high to a pin on the pre-encoder PCB, letting it know to disable turbo buttons. During frames when it should be enabled, it outputs low to that pin. Same idea with the slow buttons. There is also a pin which the pre-encoder checks at the end of its routine, so it can be told to stop and await a different game profile.
get info from other PLC (which got it from the computer, from a user-selected game profile):
    array containing list of turbo buttons (buttons are identified by what input pin they are connected to)
    array containing list of slow buttons (will probably only be the joystick directions, if any)
    array containing list of slow activation buttons (should normally be only one button, if any)
    array containing list of normal buttons (not turbo or slow)
    array containing which output pin to use for each button (this determines remapping)

Begin Loop
    if turbo pin is high
        for each turbo button
            output pin = high
        next
    else
        for each turbo button
            output pin = input pin
        next
    end if

    if slow pin is high and slow activation button is pressed
        for each slow button
            output pin = high
        next
    else
        for each slow button
            output pin = input pin
        next
    end if

    for each normal button
        output pin = input pin
    next
Restart Loop unless stop pin is low
If you've read all this, thank you for your time. So (finally), here are my questions:
What are your overall thoughts; on my idea in general, feasibility, etc.?
What kind of PLC should I use for the pre-encoder? I was originally thinking of trying an Arduino, but my reading indicates that it will be much too slow, due to its use of high-level programming libraries. I don't have a problem building my own board around another PLC.
What language should I use to program the PLC? I don't mind learning a new language. There's no time limit on this project, and I'll put in whatever it takes to get the pre-encoder running as fast as possible.
What will I need to flash my program onto the PLC?
At run-time, how should these PLCs communicate with each other, and with the PC?
Am I asking in the right place; right forum, right section, etc.? Anywhere else I should ask?
Awaiting your response eagerly,
-Rob
I have some thoughts that might be useful to you:
What are your overall thoughts; on my idea in general, feasibility, etc.?
This project sounds like you want to cheat at Defender, like I used to do with a 555 timer chip in my Atari joystick when I was a kid.
The project is feasible but you will need a pretty fast PLC.
You might spend a lot of time making this work, like a quest.
What kind of PLC should I use for the pre-encoder? I was originally thinking of trying an Arduino, but my reading indicates that it will be much too slow, due to its use of high-level programming libraries. I don't have a problem building my own board around another PLC.
As I thought of what PLC might be fast enough, a few things came to mind.
If you use a PLC that has a task architecture, you can use an event to trigger a task on the v-sync pulse, and another event to trigger on console activity. If you use a PLC without a task architecture, the user might recognize the variable latency that will occur as the program scan moves in and out of phase with the v-sync and the activity in the game. This might not be true if the PLC is fast enough, say 1ms scan time.
Most inexpensive PLCs are never going to make it. The overhead and performance will keep most PLCs around 5-10ms per scan. However, a PC-based PLC might work well. So maybe a Beckhoff controller will work nicely. If you use something like a CX2000, it has Windows 7, USB, DVI for the user interface, and an Ethercat bus on the side to attach physical I/O cards for the controller and console connections. See about the software below. There are many non-PC-based PLCs that would work fine, but these will likely be expensive and harder to integrate.
The Arduino solution should work if you are using a fast enough model. But your development time will be higher because it doesn't come with anything but a blank screen and a bunch of libraries. Troubleshooting is much more of a pain in the neck than with PLCs, which really shine there. You'll need to plan carefully to get the Arduino to work. Also, hardware interfacing a microcontroller is harder, and you'll have to manage debouncing the switches in your code. Every PLC has filtering on its inputs, and the variety of I/O makes design easy. But the Arduino or another microcontroller is really the choice if money is an issue. A fast PLC can be really expensive ($800 to $20k, think around $1500). If you are going to build more than a few systems, the Arduino might be better.
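To put a rough number on the Arduino option, here is a minimal sketch of the pre-encoder loop. Everything in it is a placeholder: the pin assignments, the eight-button count, and the single turbo-enable input driven by your supervisory controller. A real version would read whole ports at once and add the debouncing discussed above:

    // Minimal pre-encoder: read each button input, apply the remap table and the
    // turbo-enable line, and drive the encoder outputs. All numbers are placeholders.
    const int NUM_BUTTONS = 8;
    const byte inputPin[NUM_BUTTONS]  = { 2, 3, 4, 5, 6, 7, 8, 9 };          // from joystick/buttons
    const byte outputPin[NUM_BUTTONS] = { 10, 11, 12, 13, A0, A1, A2, A3 };  // to the encoder
    byte remap[NUM_BUTTONS]   = { 0, 1, 2, 3, 4, 5, 6, 7 };  // remap[i]: which input feeds output i
    bool isTurbo[NUM_BUTTONS] = { false };                    // set from the game profile
    const byte TURBO_ENABLE_PIN = A4;     // driven high by the supervisory controller on "release" frames

    void setup() {
      for (int i = 0; i < NUM_BUTTONS; i++) {
        pinMode(inputPin[i], INPUT_PULLUP);   // buttons pull the line to ground when pressed
        pinMode(outputPin[i], OUTPUT);
        digitalWrite(outputPin[i], HIGH);     // released = high, matching the encoder's pull-ups
      }
      pinMode(TURBO_ENABLE_PIN, INPUT);
    }

    void loop() {
      bool turboBlocked = (digitalRead(TURBO_ENABLE_PIN) == HIGH);  // release turbo buttons this frame
      for (int i = 0; i < NUM_BUTTONS; i++) {
        int state = digitalRead(inputPin[remap[i]]);                // LOW = pressed
        if (isTurbo[i] && turboBlocked) state = HIGH;               // force released on off-frames
        digitalWrite(outputPin[i], state);
      }
    }

Even with plain digitalRead/digitalWrite, a pass over eight buttons takes on the order of 100 microseconds on a 16 MHz AVR, so well over a hundred scans per video frame; direct port manipulation would cut that to a few microseconds.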
What language should I use to program the PLC? I don't mind learning a new language. There's no time limit on this project, and I'll put in whatever it takes to get the pre-encoder running as fast as possible.
IEC 61131 is a standard for PLC programming languages. In the USA most PLCs are programmed in ladder logic because it is really easy to learn and quicker to troubleshoot and maintain in machinery. Structured text has its advantages too, particularly in performance. It looks like some amalgamation of BASIC/C/Java, is easy to learn, and looks almost like your pseudocode example. As for your project, I think it could be programmed in either language. I would never use the other IEC 61131 languages for this task.
Beckhoff TwinCAT3 uses MS Visual Studio as the IDE, where you can write both the selection menu (in VB/C++/C#) and the PLC code (in IEC61131) in the same project. The runtime license for TwinCAT (on the CX2000 unit) runs in kernel mode, providing processing performance to Windows 7 whenever it is not doing something else more important. I've used a few CX1020 models and they were great performers. The scan times were around 5ms with a significant amount of code. Faster units will scan <1ms.
What will I need to flash my program onto the PLC?
PLCs don't "flash" like microcontrollers. Whatever software you use to write the software will have a way to connect to the controller. The term "go online" makes the connection. The terms "download" and "upload" refer to transferring the program between the development computer and the PLC. The term "online edit" refers to making code changes while the PLC is executing the code. When modern PLCs are powered down, they use a battery to copy program and user RAM to flash. When they power up, they copy the flash back to RAM. To make a connection to any modern PLC, you will use a USB or Ethernet cable.
At run-time, how should these PLCs communicate with each other, and with the PC?
You're planning more than one PLC? A PLC connection to a PC is a complicated subject. The term "OPC server" refers to some [expensive] software that lets your custom Windows PC application access memory in PLCs. The Beckhoff solution glues all that together nicely without buying more stuff. PLC-to-PLC communication is easier. The method is usually Ethernet, and the details vary widely.
Am I asking in the right place; right forum, right section, etc.? Anywhere else I should ask?
Sure, there is some PLC activity on this forum, which appears to tend toward hardcore PC/Web/Mobile development. I come here for awesomely intelligent answers to my deeper software questions.
You could try plctalk.net, a forum that is a little more geared toward nuts-and-bolts engineers and service techs with wild connectivity and compatibility questions related to machinery and automation. You might get some blank stares about vertical sync pulses. Their skill sets revolve around an industrial paradigm, where reliability is probably their highest calling.
You might also ask questions about performance on an Arduino or Microchip/Atmel/ARM forums. If you tell them that a PLC is faster than their hardware, that will rile them up real good! They might tell you that you can get microsecond performance numbers, which you can if you are using hardware interrupts and lots of physical circuitry to make that a reality, and you are able to cope with the sleepless nights of troubleshooting.
-Dennis
Hi, I'm using the following RF module:
http://www.apogeekits.com/rf_receiver_module_rx433.htm
on an embedded board with the PIC16F628A. Sadly, I realized that the signal strength output is analog, and I couldn't come up with any ideas for getting the RSSI reading off the pin because, well, my PIC is digital. DUH!
My basic idea was
To get the RSSI value from my Receiver
Send it to the PIC
Link the PIC to a PC via RS232
Plot a graph of time vs RSSI of the receiver (so I can make out how close my TX is to my RX)
I thought it was bloody brilliant at first, but I've hit a dead end here. Any ideas on getting the RSSI data from this receiver to my PC would be nice.
Thanks in Advance
You can get a PIC that has an integrated ADC for sampling the analog signal. Or, you can use an external ADC chip to do the conversion. You would connect that to your PIC using SPI or I2C.
The simplest thing to do is obviously to use a more appropriate microcontroller - one with an ADC! There are many (most), including PICs (though that wouldn't be my first choice).
Attaching an external SPI or I2C ADC might be a bit tedious since, having no SPI or I2C peripheral on your part, you'd have to bit-bang it. If you do that, use an SPI part - it's simpler. Your sample rate will suffer and may end up being a bit jittery if you are not careful.
Another solution is to use a voltage-controlled PWM, then use the timer's input capture to time the pulse width. That will give you good regularity and potentially good resolution. You can get a chip (example) to do that, or roll your own. That last option requires a triangle-wave input as well as the measured (control) voltage, but on the same site...
In a similar vein, you could use a low-frequency VCO (example) and use its output to clock one of the timers, then use a second timer to periodically sample the first and reset it. The count will relate to the voltage, though not necessarily linearly; linearisation could be done on the PIC or at the receiving PC - I'd go for the latter, since your micro will suck at arithmetic (performance-wise), even integer arithmetic, especially if it involves division.
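Whichever route you take, the firmware side boils down to sample, timestamp, send. As an illustration of the scale of the job, this is what it looks like on an Arduino-class AVR with its on-chip ADC (the same three steps apply on a PIC with an ADC, only the peripheral setup differs); the pin, baud rate and sample rate are arbitrary:

    // RSSI logger: sample the analog RSSI pin and stream "time,value" lines to the PC.
    const byte RSSI_PIN = A0;            // receiver's analog RSSI output (keep it within 0-5 V)

    void setup() {
      Serial.begin(9600);                // serial link to the PC
    }

    void loop() {
      int rssi = analogRead(RSSI_PIN);   // 10-bit sample, 0-1023
      Serial.print(millis());            // timestamp in ms so the PC can plot time vs. RSSI
      Serial.print(',');
      Serial.println(rssi);
      delay(20);                         // roughly 50 samples per second
    }

On the PC side you then just read the serial port and plot the two columns (time, RSSI).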
I'm not interested in a hardware solution; I want to know about software that may "read" a modulated signal received through the power supply - some sort of low-level driver that would access the power signal at a convenient place and demodulate it.
Is there a way to receive a signal from the computer's power supply? I'm interested in an API or library that would allow the computer to be seen as a node in a Power Line Communication network and receive data directly through the power cable, without the need for a converter. Is there any active research in this field?
Edit:
There is software that monitors and displays internal component voltages - the DC voltage after it has been converted and filtered by the power supply. What I need now is a method of data encoding that would be invariant to conversion and filtering, so that the original signal embedded in the AC is still present in some form within the converted DC signal.
This is not possible, as described in the question. Yes, with extra hardware you can do it. No, with the standard hardware in a PC, you could not.
As others have noted, among other problems, the only information you can get from a generic PC is a bit of voltage info for the CPU. It's not going to give a picture of the AC signal, nor any signal modulated on top of it. You'll be watching a few highly regulated DC signals deep inside the computer, probably converted at a relatively low rate too. Almost by definition, if you could see external information on any of those signals, your machine is already suffering a hardware failure and chances are the CPU will be crashing soon...
*blink* No...
Edit: I mean, there's the possibility of using the power lines as network cables, but only with special adapters. And that is designed only for home networks.
Edit2: You can't read something from the power supply of a computer...it's not designed for that. You would have to create your own component/adapter for this.
Am I misreading this? Wouldn't this be a pure hardware solution?
This is highly improbable without adding some hardware.
You see, the power supplies in a regular PC are switching power supplies which effectively decouple the AC input from the supplied DC voltage needed on the PC side. The AC side just basically provides power that fuels the high-speed power switching circuitry.
Also, a DC signal, by definition, doesn't provide a signal per se: it is a "static" power level (and yes the power level does vary a bit in the time domain but not as an easy to leverage function).
Yes there can be an AD (Analog to Digital) monitoring chip that can be used on the PC side to read the voltage of the DC component supplied to the motherboard etc., but that doesn't mean there is still a signal that can be harvested: the original power line "signal" might have been through enough filters that there isn't a "signal" left to be processed.
Lastly, one needs to consider that power supply design varies from company to company; this fact will undoubtedly affect any possible design of a communication solution.
What you describe is possible, but unfortunately you need an adapter to convert the signal running on the power lines into sensible network traffic.
The power line acts as a physical medium, and thus sits at the lowest level of the OSI stack. Conversion from an electrical signal to sensible network traffic requires a hardware adapter, just as it does for an Ethernet adapter. Your computer is unable to understand this traffic because its power supply was not built to transmit that information. But note that you can easily find an adapter, and it will work the same as an Ethernet adapter, that is, be accessible through the standard BSD socket library.
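To illustrate that last point: once a pair of power-line adapters is in place, application code is ordinary BSD-socket code and never sees the physical medium. The address and port below are placeholders:

    // Ordinary BSD-socket client; the fact that the last hop runs over the mains wiring
    // through a power-line adapter is invisible at this level.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        sockaddr_in peer{};
        peer.sin_family = AF_INET;
        peer.sin_port   = htons(7);                           // placeholder port
        inet_pton(AF_INET, "192.168.1.50", &peer.sin_addr);   // device on the far side of the power-line link

        if (connect(fd, reinterpret_cast<sockaddr*>(&peer), sizeof(peer)) < 0) {
            perror("connect");
            return 1;
        }

        const char msg[] = "hello over the mains\n";
        send(fd, msg, sizeof(msg) - 1, 0);
        close(fd);
        return 0;
    }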
This is ENTIRELY possible, although you would need to either buy or build some hardware to make it happen. In addition, the software solution would be very, very complex.
The computer's power supply would be out of the picture for the most part. You need to read data straight from the wall with as little extraneous noise as possible. From the electrical engineering perspective, this is a very thoroughly covered topic. In the end, all you're really doing is an analog to digital conversion, and the rest keeps your circuit from being fried.
The software solution would basically be eliminating random noise and looking for embedded signals. The math behind analog signal analysis is very complex, and you can spend a few semesters in college covering the topic, and the rest of your career trying to master it. If you're good at it, there's a cushy job for you on Wall Street predicting the stock market.
And that only covers reading incoming signals. Transmitting is a whole 'nother sport.
Now, it also sounds like you might be interested in a hack. That is...
You could buy a commercial-off-the-shelf power-line Ethernet adapter and tear it apart. They have two prongs that plug into a standard wall outlet. You could remove these and wire them to the INSIDE of a power supply. To do that, you'd have to tear apart a power supply as well, which is incredibly dangerous and I hereby warn you and anyone else to NEVER attempt this.
The entire Ethernet adapter could be tucked into the power supply and you could basically have an Ethernet port on the surface of your power supply (either inside or outside the computer). Simply wire that to a standard Ethernet adapter and voila (!), you have nothing but a power cable connecting your computer to the wall outlet, AND you magically have Ethernet!
Note that there also has to be another power-line Ethernet adapter somewhere else for you to establish a network and make the whole project useful.
How can you read modulated data from the power supply? You are talking about voltage and ohms, and apart from a possible electrical shock (which would be just shocking :) ) there is nothing there for software to read directly. There are specialized electrical plugs with Ethernet jacks in them that you can use.
I'd hazard a guess that this is totally transparent, as per Adrien Plisson's answer, i.e. you would have all of the OSI layers and it would be no different from any other network. You can write code to read from the sockets.
AFAIK no company that produces these electrical plugs would ever open up the API, for competition reasons. It is still in the early stages, as adoption is low because it is obviously very expensive (120 euro here in my country for a pair of them), and it does not deliver the quoted speed: a 100 Mbps power plug may get maybe 85 Mbps due to varying situations and phenomena with the power (think surges, brownouts, interference).
My 2cents.
Hope this helps,
Best regards,
Tom.