I am currently building a server using Debian Squeeze. It will be administered by myself and a friend, and I am wondering if it is possible to forward the output from ONE process to TWO clients.
As an example, process abc is running on the server. It uses X for graphical output. I would like to be able to view and control its X window on both client computers at the same time. This way, both my friend and I would be able to see the status of process abc and send commands to it from our remote computers. The only solutions that I have found so far indicate that the process can only connect to one X instance at a time, requiring that for me to see the X output, my friend would have to disconnect. Is there a way for both of us to connect and control the same process at the same time?
I am afraid that X forwarding allows only one connection. However, you can try VNC to achieve your desired setup.
You may be able to use VNC to control the remote desktop (the Arch Linux wiki states that you may be able to set x11vnc to forward the root desktop, which you could both access at the same time; I haven't tried it, so I can't say for certain).
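For reference, a minimal sketch of what a shared x11vnc session could look like (the display number and password file path are assumptions for your setup; `-auth guess` asks x11vnc to locate the X authority file itself):

```shell
# Export the real display :0 so the live session is visible remotely.
# -shared lets more than one VNC viewer connect at the same time;
# -forever keeps x11vnc running after a viewer disconnects.
x11vnc -display :0 -auth guess -shared -forever -rfbauth ~/.vnc/passwd

# Each of you then connects with any VNC viewer, e.g.:
vncviewer server.example.com:0
```

The key flag for your use case is `-shared`; without it, a second viewer would kick the first one off.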
I am trying to update a legacy system's SQL solution to use the cloud.
The solution today involves a Windows SQL Server installed at the customer's site; various machines are configured to connect to that IP address / port / server name. When they connect, the machines set up any tables that are missing and regularly send their data. Data rates are low for an individual machine: roughly one write request every 10 seconds (it varies a lot), with no more than 2-3k of information on each write request.
Moving this to the cloud is tricky, mostly because the machines do not have unique identifiers. The good news is that each legacy machine is connected to an IoT gateway (just think RPi) that knows a unique machineId. Furthermore, the IOTG is a full-fledged computer, but not a very powerful one, and its disk is an SD card.
New and Old Network Layout
So far I have had a few things fall on their face.
1) Setting the machine to think the DB's IP/port is that of the IoT gateway, setting up an Express server on the IOTG to listen, then injecting the unique ID into the queries I'd proxy up to the cloud. I may have had a bug, but for some reason I couldn't even see the requests coming in on the port. Even if I could, I'd still have to figure out how to decode them. Shouldn't I at least be able to see these requests coming in?
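One thing worth knowing here: SQL Server clients speak the binary TDS protocol, not HTTP, so an Express server would never get a parseable request out of the connection. A raw TCP listener would at least confirm whether the packets arrive. A minimal sketch in Python (host, port, and byte count are assumptions; 1433 is the standard SQL Server port):

```python
import socket

def sniff_once(host="0.0.0.0", port=1433, max_bytes=4096):
    """Accept one connection and return (client address, first raw bytes).

    SQL Server clients open with a binary TDS "prelogin" packet, so even
    without decoding TDS you should see *something* here if the machine
    is really pointed at this host/port.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            data = conn.recv(max_bytes)
        return addr, data
```

If this shows bytes arriving but Express showed nothing, the problem is the protocol mismatch rather than the network path.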
2) Started looking into SQLite. The idea was to have SQLite listen on the port as an actual DB, then have a process on the IOTG query data out of SQLite, append the unique ID, and send it to the cloud. Unfortunately, SQLite does not listen on a port.
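For what it's worth, the "side process" half of that idea is straightforward; it's only the listening half that SQLite lacks. A sketch of the drain-and-tag step, assuming a hypothetical `readings` table with `id` and `payload` columns:

```python
import json
import sqlite3

def drain_readings(db_path, machine_id):
    """Read buffered rows from a local SQLite file, tag each with the
    gateway's unique machine id, and return JSON payloads ready to send
    to the cloud. Table and column names here are illustrative.
    """
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    rows = conn.execute("SELECT id, payload FROM readings").fetchall()
    payloads = [
        json.dumps({"machineId": machine_id,
                    "id": r["id"],
                    "payload": r["payload"]})
        for r in rows
    ]
    # Delete only the rows we read, so anything written meanwhile survives.
    conn.executemany("DELETE FROM readings WHERE id = ?",
                     [(r["id"],) for r in rows])
    conn.commit()
    conn.close()
    return payloads
```

A file-backed buffer like this also gives you retry-on-failure for free: if the cloud upload fails, just don't drain.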
I am starting to look at just installing a whole SQL Server on the device, but I'd really like to avoid that. I'm pretty sure it's fairly large, and writing to disk is not advisable for a small embedded system like the one I'm running.
Generally my questions boil down to:
1) Should I be able to see SQL queries in an Express server?
2) Should I be using a different tech? I failed to find a more SQL-specific proxy.
3) Am I correct to think that the SQLite path is dead? Even if I could find a way to attach it to a port, there still won't be any sort of response from SQLite when the clients try to make a connection.
4) Am I wrong to fear the local server? Diving into some documentation for making Express work with DBs gets me to https://www.microsoft.com/en-us/sql-server/developer-get-started/node/ubuntu/ which suggests 4 GB of memory; we're working with 0.5 GB.
Any other thoughts on how to approach this would be great.
I'm working on a project that requires something similar to connecting two or more USB Internet modems to one PC and making them all work together at the same time. I want to know whether I can use two USB Internet modems on a single PC and make them work simultaneously.
Thank you.
Combining multiple Internet or WAN connections is definitely possible, although there are lots of caveats. Google '3G multihoming' for some more background.
There are actually relatively cheap (approx. $50) routers which will support load balancing over multiple WAN ports - look for 'Dual WAN' or 'multi-homing' in the tech specs.
You can also 'trick' windows into using more than one internet connection - see some links below.
However, unless you also use a service on the server side to help split the load, you need to be aware of some limitations:
If your use case has lots of requests that can run in parallel, as many web page downloads do, then this technique should work well.
If you are downloading a single large file in one request, then it will likely not make much difference, although if the setup is smart enough to move all other requests to another connection, it may help a little.
Some links for Windows dual connection use (note I have not tested these personally):
http://www.wikihow.com/Combine-Two-Internet-Connections (be wary of the 'double speed' comment)
http://www.techkhoji.com/how-to-combine-multiple-internet-connections/ (see 'Method 2')
My name is Josue
I need your help with this:
Is there any way to audit or monitor the server processes that connect to the Advantage Database Server?
Is there a log of running processes?
Thanks
There is no existing log of processes that use Advantage Database Server. Because it is a client/server architecture, there is no mechanism that I am aware of that can easily associate a connection on the server to a specific process.
However, it would be possible to use the system procedure sp_mgGetConnectedUsers() to obtain some of this information. It might be possible to use it to obtain the information you are looking for at a given point in time (a snapshot).
The output of that procedure includes three fields that you might be interested in. The Address column gives the address of the machine that connected to Advantage. It is typically the IP address of the client application. But it can also be of the form "IPC Connection N", which indicates that it is using shared memory for communications; this means that the client process is running on the same machine as the server.
The TSAddress column might also be of interest. If the connection is made by a client that is running through terminal services (e.g., a remote desktop), then that column contains the IP address of the client machine. If you are interested in knowing processes that originate from the server machine itself, then you would need this field to differentiate between those and clients that connected through terminal services.
The other column of potential interest would be ApplicationID. By default, that field contains the process name (e.g., the executable) of the client application. This could help identify the actual process. It is not guaranteed, though. The application itself can change that value through mechanisms such as sp_SetApplicationID.
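Putting those columns together, a periodic snapshot could approximate the log you're after. A sketch (the derived-table form is supported by recent Advantage versions; check yours):

```sql
-- Point-in-time snapshot of current connections; poll this on a
-- schedule and store the results to build an approximate history.
SELECT Address, TSAddress, ApplicationID
FROM (EXECUTE PROCEDURE sp_mgGetConnectedUsers()) conns;
```

If your version doesn't allow selecting from a procedure, a plain `EXECUTE PROCEDURE sp_mgGetConnectedUsers();` returns the full result set and you can filter the columns client-side.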
Other than the possible lag issues, has anyone tried this? What are the pros or cons associated with this?
A lot of times for me it's the limitations of the remote desktop connection, be it VNC or RDP or whatever. For example:
1) My workstation has two monitors. Remotely viewing my workstation reduces it to one.
2) Lag is tolerable in the IDE, but not with anything image-heavy. Everything from photoshopping to web browsing is done locally, not on the remote machine.
3) Adding to #2, when splitting up tasks between the local and remote machines, there's that extra layer of getting the two to play nice together, which adds a little bit of overhead per task and adds up to a lot overall. Something as simple as saving a file from the web browser and opening it in the IDE takes more steps.
(I may think of more and add them later.)
All in all, it's fine if the setup can be adjusted properly. In my experience, the companies I've worked for have defined their remote connection capabilities by the needs of someone other than the software developers, and thus leave us with little pet peeves that make the process just slightly more difficult than it needs to be.
Here is my take on it from my experience:
PROS: Single dev environment, only need to license one set of tools (if applicable)
CONS: The lag got the best of me. Typing, only to have it show up 1-3 seconds later... sometimes; other times it works great. In VS, the popup notifications sometimes take forever to display as well. Other cons include having to share your desktop with another employee, and moving files to/from the dev machine, as RDP does not natively allow you to drag and drop files.
Same as the other posters - lag when using tools that affect screen painting in Visual Studio (ReSharper, CodeRush) is a real problem, and some things involving the mouse (dragging grid columns) are very difficult to use.
I'd add that about every 10-15 times I go to log back in on the physical workstation at work, it takes the stupid thing about 2 minutes to finally succeed in refreshing the displays.
I have the following situation:
- I have a server A hooked up to a piece of hardware that sends out values and information every second. Programs on the server machine can read these values. Server A is in a very remote location, so the Internet connection is very slow and not reliable, but the connection does exist. Let's say it's a weather station in the Arctic.
- Users from the home location want to monitor the weather values somehow. The users could use a remote desktop connection to server A, but that would be far too slow.
My idea is to have a website on a web server (let's call the web server B; B is in the home location), have server A connect to server B and send values somehow, and have the web application read the values and display them... but how do I build such a system? I know I can use MySQL: have server A connect to a SQL server on server B and send INSERT queries, and have the web application running on server B constantly read from the SQL server. But I think that would be way too slow, and I think there has to be a better solution.
Any ideas?
BTW, the users should be able to send information to the weather station from the website as well (so an ADMIN user should be allowed to shut down the weather station from the website, or whatever).
Best regards,
MadSeb
Ganglia (http://ganglia.sourceforge.net/) is a popular monitoring tool that supports collecting arbitrary statistics using the gmetric tool. You might be able to build something around that.
If you do need to roll your own solution then you could have a persistent message queue at A (I'm a fan of RabbitMQ) to which you could log your metrics. You could then have something at B which listens for messages on the queue and saves the state at B.
This approach means you don't lose data when the connection drops.
The messages might be simple compressed data values, say CSV or JSON, so they should be fine on low-bandwidth connections.
All the work (parsing the CSV or JSON and saving the data to a database, for example) is done at B, where you don't have those limitations.
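As an illustration of how small such a message can be, here's a sketch (Python standard library only) of packing a reading as compressed JSON at A and unpacking it at B; the field names are made up:

```python
import json
import zlib

def pack_reading(reading):
    """At station A: serialize a reading dict to JSON and compress it.
    The resulting bytes would be the body of the queued message."""
    return zlib.compress(json.dumps(reading).encode("utf-8"))

def unpack_reading(blob):
    """At server B: decompress and parse a message body, ready to be
    saved to a database and shown on the website."""
    return json.loads(zlib.decompress(blob).decode("utf-8"))
```

Because the queue only ever carries a few compressed bytes per reading, even a slow, flaky Arctic link only has to move tiny messages, and the queue absorbs any outage.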