I intend to build a set of skills for Amazon Alexa that will integrate with a custom software suite running on a Raspberry Pi in my home.
I am struggling to figure out how I can make the Echo / Dot itself make an API call to the Raspberry Pi directly, without going through the internet. The target device will have nothing more than an intranet connection: it will be able to receive commands from devices on the local network, but is not accessible from the outside world.
From what I have read, the typical workflow is as follows:
Echo -> Alexa Service -> Lambda
where a Lambda function will return a blob of data to the Smart Home device.
Using this return value, is it possible (and how) to make the Alexa device itself issue an API request to a device on the local network after receiving the response from Lambda?
I have the same problem and my solution is to use SQS as the message bus so that my RaspberryPi doesn't need to be accessible from the internet.
Echo <-> Alexa Service <-> Lambda -> SQS -> RaspberryPi
                             ^                  |
                             +------ SQS <------+
This works fine as long as:
you enable long polling (20sec) of SQS on the RaspberryPi and set the max messages per request to 1
you don't have concurrent messages going back and forth between Alexa and the RaspberryPi
This gives the following benefits:
with max messages per request set to 1, the SQS call returns as soon as one message is available in the queue, even before the long-poll timeout is reached
with only one long poll in flight at a time, a whole month of polling fits under the SQS free tier of 1 million requests
no special firewall permissions are needed to reach your RaspberryPi from the internet, so the path between the Lambda and the RaspberryPi always "just works"
it is more secure than exposing your RaspberryPi to the internet, since there are no open ports for malicious programs to attack
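For illustration, here is a minimal sketch of the RaspberryPi-side consumer in Python using boto3; the queue name and region are placeholders, and it assumes AWS credentials are already configured on the Pi:

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.get_queue_url(QueueName="alexa-commands")["QueueUrl"]

while True:
    # Long poll: returns as soon as 1 message is available, or after 20 sec
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=1,
        WaitTimeSeconds=20,
    )
    for msg in resp.get("Messages", []):
        print("Command from Alexa:", msg["Body"])
        # ... dispatch to your home automation code here ...
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])

Because the Pi only ever makes outbound requests, no inbound port needs to be opened.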
You could try using AWS IoT:
Echo <-> Alexa Service <-> Lambda <-> IoT <-> RaspberryPi
I thought about using this for my Alexa RaspberryPi project but abandoned the idea since AWS IoT doesn't offer a permanent free tier. The free tier is no longer a concern, though, since Amazon now offers Alexa AWS promotional credits:
https://developer.amazon.com/alexa-skills-kit/alexa-aws-credits
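If you go the IoT route, the Pi-side subscriber could look roughly like the following, using the AWS IoT Device SDK for Python; the endpoint, certificate paths, and topic name are all placeholders you would substitute from your own AWS IoT thing registration:

import time
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

client = AWSIoTMQTTClient("raspberrypi")
client.configureEndpoint("xxxxxxxxxxxx.iot.us-east-1.amazonaws.com", 8883)
client.configureCredentials("root-ca.pem", "private.key", "certificate.pem")
client.connect()

def on_command(client_, userdata, message):
    print("Command on %s: %s" % (message.topic, message.payload))

# The Lambda publishes to this (hypothetical) topic; the Pi reacts immediately
client.subscribe("home/alexa/commands", 1, on_command)

while True:
    time.sleep(1)  # keep the process alive; the SDK delivers messages on its own thread

Unlike the SQS approach, messages are pushed to the Pi over a persistent MQTT connection, so there is no polling latency.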
One possibility is to install Node-RED on your rPi. Node-RED has plugins (https://flows.nodered.org/node/node-red-contrib-alexa-local) that simulate a Philips Hue bridge so that Alexa talks to it directly, with instant responses. The downside is that it only works for three commands: on, off, and set to x%. It works great for software/devices that control lights, shades and air-con.
This was answered in this forum a while ago, and I'm afraid the situation hasn't changed since:
Alexa is cloud based and requires access to the internet / Amazon servers to function, so you cannot use it only within the intranet without external access.
There are a couple of workaround methods I've seen used.
The first method is one that I've used:
I set up If This Then That (IFTTT) to listen for a specific phrase from Alexa, then transmit commands through the Telegram secure chat/messaging service, where a "chat bot" running on my Raspberry Pi reads and acts on those messages.
The second method I saw most recently uses IFTTT to add rows to a Google spreadsheet, which the Raspberry Pi can monitor and act on.
I wasn't particularly happy with the performance/latency of either of these methods, but if I wrote a custom Alexa service using a similar methodology it might at least eliminate the IFTTT delay. A sketch of the first method follows.
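As a rough sketch of the first method, the Pi-side "chat bot" can just long-poll the Telegram Bot API over plain HTTP; the token below is a placeholder you would get from @BotFather:

import requests

TOKEN = "123456:ABC-your-bot-token"
API = "https://api.telegram.org/bot" + TOKEN

offset = 0
while True:
    # Long poll for up to 30 sec; IFTTT posts the Alexa phrase into the chat
    updates = requests.get(API + "/getUpdates",
                           params={"timeout": 30, "offset": offset}).json()
    for u in updates.get("result", []):
        offset = u["update_id"] + 1
        text = u.get("message", {}).get("text", "")
        print("Received command:", text)
        # ... act on the command on the Pi here ...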
Just open a tunnel to your rPi with a service like https://ngrok.com/ and then communicate with that, either as your endpoint or from the Lambda.
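As a sketch, assuming a small Flask app on the rPi exposed with ngrok, the Lambda can then POST to the public ngrok URL; the /command route and port are made up for illustration:

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/command", methods=["POST"])
def command():
    # The Lambda POSTs here via the public URL, e.g. https://<random>.ngrok.io/command
    payload = request.get_json(force=True)
    print("Command from Alexa:", payload)
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(port=5000)  # then expose it with: ngrok http 5000

Note that this does expose the Pi to the internet (through the tunnel), unlike the SQS approach above.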
You can achieve this by using a proxy. BST has a tool for that, which I currently use: http://docs.bespoken.tools/en/latest/commands/proxy/
So rather than using a Lambda, you can use your local machine.
Essentially it becomes: Echo -> Alexa Service -> Local Machine
Install the bespoken-tools npm package on your local machine (https://www.npmjs.com/package/bespoken-tools):
npm install bespoken-tools --save
Go to your project's index.js folder and run the proxy command:
bst proxy lambda index.js
This will give you a URL like the following:
https://proxy.bespoken.tools?node-id=xxx-xxx-xxx-xxx-xxxxxxxx
Now go to your Alexa skill on developer.amazon.com and click to configure the skill.
Choose HTTPS as your service endpoint type and enter the URL printed out by BST.
Then click save, and boom: your local machine becomes the final endpoint.
Related
I am setting up a home SIEM lab using Splunk. I am looking for sources that can provide sample logs for various devices, including but not limited to the ones below.
Windows Logs
IIS Logs
IDS/IPS Logs
Based on these logs, I am planning to build search queries for various events and then use them to build detection rules.
It is not clear why you need external log sources when you can generate these yourself. For example, you can set up a VM with Windows Server and install an agent like NXLog (or any log collection agent that can forward logs via TCP, UDP, TLS, or HTTP) to get logs into Splunk.
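If you just need raw material to search on, a small script can also fabricate plausible log lines; this sketch approximates IIS-style W3C extended format (field order simplified) and appends to a file that a Splunk forwarder could monitor:

import random, time
from datetime import datetime, timezone

methods = ["GET", "POST"]
paths = ["/", "/login", "/admin", "/api/data"]
statuses = [200, 200, 200, 302, 404, 500]

with open("iis_sample.log", "a") as f:
    for _ in range(100):
        now = datetime.now(timezone.utc)
        # date time c-ip cs-method cs-uri-stem sc-status time-taken (simplified)
        f.write("%s %s 192.168.1.%d %s %s %d %d\n" % (
            now.strftime("%Y-%m-%d"), now.strftime("%H:%M:%S"),
            random.randint(2, 254), random.choice(methods),
            random.choice(paths), random.choice(statuses),
            random.randint(1, 500)))
        time.sleep(0.01)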
Check out the Montgomery County Data Portal. It's free:
https://data.montgomerycountymd.gov/
You could also connect to a crypto exchange API and have lots of data flowing in in real time.
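For example, here is a hedged sketch that polls CoinGecko's public price endpoint (an aggregator rather than an exchange, but the idea is the same, and no API key is needed) and prints one line per poll, which you could redirect to a file that Splunk monitors:

import time
import requests

URL = "https://api.coingecko.com/api/v3/simple/price"

while True:
    prices = requests.get(URL, params={"ids": "bitcoin,ethereum",
                                       "vs_currencies": "usd"}).json()
    # e.g. {'bitcoin': {'usd': 43000.0}, 'ethereum': {'usd': 2300.0}}
    print(time.strftime("%Y-%m-%dT%H:%M:%S"), prices)
    time.sleep(60)  # stay well within the public rate limits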
I am using the Azure Container Service with Kubernetes orchestrator and have an app deployed on a cluster with 3 nodes. It has 5 replicas. How can I verify load balancing in action e.g. I want to be able to see that every time I hit the external IP I am being routed to perhaps a different node. Thanks.
The simplest solution is to connect (over SSH, for example) to the 3 nodes and run WinDump on each. If everything is working properly, you will be able to see what happens on every node.
Also here is Microsoft documentation for testing a load balancer:
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/tutorial-load-balancer#test-load-balancer
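A lighter-weight check than packet capture: if your app can return something that identifies the serving pod or node (the /hostname endpoint below is an assumption about your app, not a Kubernetes feature), a short loop makes the round-robin behavior visible:

from collections import Counter
import requests

URL = "http://<external-ip>/hostname"  # placeholder: an endpoint echoing the pod's hostname

hits = Counter()
for _ in range(50):
    hits[requests.get(URL, timeout=5).text.strip()] += 1

print(hits)  # several distinct hostnames => requests are being spread across replicas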
The default load balancer available to your Windows Azure Web and Worker roles is a software load balancer and is not very configurable; however, it does work in a round-robin fashion. If you want to test this behavior, this is what you need to do:
Create two (or more) instances of your service with RDP access enabled so you can RDP to both instances.
RDP to both instances and run NETMON or any network monitoring solution there.
Now access your Windows Azure web application from your desktop. Understand that once a network connection is made from your desktop, it is kept alive based on network settings (default 60 seconds), so you need to wait until that default timeout has passed before accessing your Windows Azure web application again.
When you access your Windows Azure web application again, you can verify that the second request went to the next instance. Be sure to wait out the connection timeout, otherwise your request will keep being handled by the same instance.
Note: If you don't want to use RDP, you can also create a test ASP.NET page that writes out something specific to the running instance, which will show you which instance served the page. The best way to do this is to read the instance ID as below:
string instanceID = RoleEnvironment.CurrentRoleInstance.Id;
If you want more control over Windows Azure load balancing, I would suggest using Windows Azure Traffic Manager, which will help you route traffic to your site via round-robin, performance, or failover scenarios. More info on using Traffic Manager is in this article.
I have read tutorials on Laravel queues using Beanstalkd etc., and the idea of using a queue is fantastic because in my current project, sending a welcome mail to a registered user takes up to 10 seconds to process because of the attachment of a logo. I can imagine what will happen if many users register at once. So, using a queue for this will speed things up.
On the shared server I am working on, I have no SSH access. So, setting up the queue according to the tutorials is not feasible.
I want to know if there is a way to set up a Laravel queue without SSH access; if there is, I need a guide.
You can't use Beanstalkd on a shared server because you can't install the service, and I don't know of any hosting company that offers it for shared hosting. However, you could use IronMQ, which is a remotely hosted service (so you don't need to install anything on the server). The Laravel queue API is the same for every queue service, so you can just use Queue::push as you would with Beanstalkd.
Here's a great video on setting this up by Taylor Otwell, the creator of Laravel:
http://vimeo.com/64703617. You can also read this tutorial, which explains how to use IronMQ with Laravel in more detail.
IronMQ is a paid service, but it does have a Free Plan for developers which offers 1 million API requests per month.
Instead of running artisan queue:listen as you would for Beanstalkd, you just define a route for IronMQ to call when processing each job on the queue:
// Push-queue endpoint: IronMQ POSTs each queued job to this route,
// and Queue::marshal() fires the corresponding job handler.
Route::post('queue/receive', function()
{
    return Queue::marshal();
});
Explanation:
I have one executable JAR deployed on one EC2 instance (A) which can be run manually to listen on port 80 for proxy traffic
I have one Spring application on another EC2 instance (B) which hits a website on a third-party server
Connection between these two machines:
The Spring application on B tells the third-party server to open a website and use A as a proxy, which generates network-call logs on A.
What I want to do: for every request I send from B to the third-party server, I want the network logs generated on A to be transferred to B.
What I tried:
One way is to rotate the logs on A, write them to S3, and then have the application pick them up from S3 and process them.
Another is to SSH into A and grep the log file, but this stops the JAR from listening for new traffic and it gets stuck.
What I am looking for:
A realtime solution: as soon as logs show up on A, I want them shipped to B without interrupting A's listening job.
I am not sure what OS you are running, but if you are running a *nix variant, you can install syslog-ng (instead of plain syslog) or rsyslog, both of which are capable of logging local and remote events. In this case I would set up a central logging server that listens for logs from server A and server B.
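If you want something more minimal than a full syslog setup, a tail-and-forward sketch like the following (the host, port, and log path are placeholders) streams new lines from A's log file to a TCP listener on B as they appear:

import socket
import time

COLLECTOR = ("b.internal", 5140)  # placeholder host/port of the listener on B
LOGFILE = "/var/log/proxy/access.log"  # placeholder path on A

sock = socket.create_connection(COLLECTOR)
with open(LOGFILE) as f:
    f.seek(0, 2)  # start at end of file, like tail -f
    while True:
        line = f.readline()
        if not line:
            time.sleep(0.2)  # no new data yet
            continue
        sock.sendall(line.encode())

This runs alongside the JAR without touching it, so A keeps listening for traffic.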
Another alternative, if syslog-ng is not what you're looking for, is to install Splunk on a server and have it pick up the logs from Splunk forwarders on each server you want to log centrally.
Hope this helps.
As Kevin mentioned, you could set up a Splunk indexer on an EC2 instance and use it to aggregate the logs from A and B and any other sources, then use the Splunk search language to search over this log data in near real time, correlate events across your various systems, create custom dashboards, set up proactive alerting, etc.
http://www.splunk.com/
As for the mechanisms for getting this data from your systems to the Splunk indexer:
1) Use a Splunk Universal Forwarder to monitor log output and forward it to your Splunk indexer: http://www.splunk.com/download/universalforwarder
2) As your systems are Java based, SplunkJavaLogging has log4j/logback/jdk appenders that you can seamlessly wire into your logging config to forward log events to your Splunk indexer: https://github.com/damiendallimore/SplunkJavaLogging
3) Use the Splunk Java SDK (http://dev.splunk.com/view/java-sdk/SP-CAAAECN) to send log events to your Splunk indexer via HTTP REST or raw TCP.
I've looked but can't see an answer to this one:
I have an application that passes Azure messages between a VM role and a worker role. Before I load this into Azure I'd like to test that both work correctly by using the Azure emulator.
Does anyone know if the Azure emulator will accept messages that originate from the VM role and will it allow me to send messages to the VM? Is there a workaround or solution to this?
Both the emulator and the VM will be running on the same host server in my case.
The queues are accessed as HTTP endpoints, so you need to ensure that both components you want to test can access the queue.
If you want to test your application using the storage emulator (an HTTP endpoint provisioned on your local machine, normally http://127.0.0.1:10001/ for queues), then you will need to ensure that the VM role can reach that address.
I would recommend testing with the real storage service. There are differences between the emulator and the actual service, so it's better to test against the real deal (you can always create a test queue).
In this case the endpoint will be on the internet (i.e. http://myaccount.queue.core.windows.net/).
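This question predates the current SDKs, but as an illustration in Python with the azure-storage-queue package, the same code can target the emulator (using the well-known development storage account below) or the live service just by swapping the connection string:

from azure.storage.queue import QueueClient

# Well-known development storage account for the local emulator/Azurite
conn = ("DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;"
        "AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;"
        "QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;")
# For the real service: "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=..."

queue = QueueClient.from_connection_string(conn, queue_name="test-queue")
queue.create_queue()  # raises ResourceExistsError if it already exists
queue.send_message("ping from VM role")

for msg in queue.receive_messages():
    print("worker got:", msg.content)
    queue.delete_message(msg)

Both roles just need network reachability to whichever endpoint the connection string names.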