SSL equivalent of givedescriptor() and takedescriptor()

I am converting an old TCP-only server to use SSL (via IBM's GSKit), and one of the problems is getting the SSL handle into the spawned program. The original code passes the raw socket in via a call to givedescriptor(), and the spawned program then uses takedescriptor() to retrieve and use the passed-in socket.
Is there a GSKit/SSL equivalent of the give/take descriptor methods?
givedescriptor() API documentation
UPDATE:
The issue is that the socket and the SSLHandle are created in one process, which initialized the SSL environment, and then need to be passed on to another process entirely; hence the need for give/take descriptor, as the socket and SSLHandle need to be 'given' to the new process (it is actually an RPG program that is submitted and runs independently from the original program).
UPDATE 2:
Looks similar to this question, so I'll have a read of that as well.
From the other question (which doesn't have a code-based answer, but a written solution):
"It looks like the session handles are just pointers to some storage
in heap. Due to the design of Single Level Store, you could copy them
via shared memory (memmap, shmget/shmat, ...). You just have to ensure
that the process that opened the GSK environment doesn't die or the
activation group will get cleaned up and those pointers will become
invalid. You also will probably need to put a mutex or some other
locking primitive around them if you're going to have multiple threads
accessing the shared data structure."
UPDATE 3:
This is the example I am using to share the memory between processes: Example: Using semaphore set and shared memory functions. It hasn't completely solved the issue yet, though.
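Following the quoted suggestion, a minimal sketch of the shared-memory handoff might look like the following (the key, the struct layout and the assumption that the handle value stays usable once copied are mine, not from the IBM example):

/*
 * Parent side: publish the SSL handle (per the quote above, just a pointer
 * into the parent's heap) in a System V shared memory segment so the
 * submitted job can pick it up.  The parent must stay alive, or the
 * pointer becomes invalid.
 */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

struct handoff {
    void *ssl_handle;   /* e.g. the handle returned by the GSKit open calls */
    int   socket_fd;    /* the descriptor still has to be passed separately */
};

int share_handle(void *ssl_handle, int socket_fd)
{
    key_t key = ftok("/tmp/myserver", 'S');   /* agreed-upon key; the file must exist */
    int shmid = shmget(key, sizeof(struct handoff), IPC_CREAT | 0666);
    if (shmid == -1) { perror("shmget"); return -1; }

    struct handoff *h = shmat(shmid, NULL, 0);
    if (h == (void *)-1) { perror("shmat"); return -1; }

    h->ssl_handle = ssl_handle;
    h->socket_fd  = socket_fd;
    shmdt(h);
    return 0;
}

/* The submitted job attaches with the same key (shmget + shmat) and
   reads the two fields back out. */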
UPDATE 4:
I thought I'd add more detail on why I need to ask the question. I am changing a non-blocking TCP server that is used as a connection point to an IBM i. It has the 'standard' mechanism for handling connections as they come in: creating threads and negotiating the connections in those threads. The threads then create an independent process (via SBMJOB). In the vanilla TCP version we can then give the running job the handle of the socket via the give/takedescriptor functions, and it will merrily write to and read from the socket (a rough sketch of that handoff is below).
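For reference, the plain-TCP handoff looks roughly like this (signatures as given in the givedescriptor() documentation linked above; the job-identifier lookup and most error handling are simplified):

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>   /* on IBM i this declares givedescriptor()/takedescriptor() */

/* In the listening job, after SBMJOB has started the worker: */
void hand_off(int client_fd, char *target_job_id)
{
    /* target_job_id is the worker's internal job identifier,
       obtained from a job-information API (see the linked docs) */
    if (givedescriptor(client_fd, target_job_id) == -1)
        perror("givedescriptor");
    close(client_fd);               /* the worker now has its own copy */
}

/* In the worker job: */
int pick_up(void)
{
    int fd = takedescriptor(NULL);  /* NULL = accept a descriptor from any job */
    /* fd is a normal socket descriptor again; send()/recv() work on it */
    return fd;
}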
So I need an equivalent way of getting the independently running program to write to the SSL connection.
It may be that this is not possible with the current mechanism.

There is no such thing as an 'SSL handle' known to the operating system and inheritable by child processes or transferable to other processes. The 'SSL handle' will inevitably be a pointer into some opaque data structure in the originating process, as SSL is an application layer protocol, and therefore implemented in the process, not in the kernel. So you can't 'give' an 'SSL handle' to another process and expect it to work.
EDIT
The answers here don't really answer the underlying question, which is how I should do this, so although the bounty has been awarded, I can't accept the only answer.
The answer is that you can't do it.
It may be that this is not possible with the current mechanism.
Correct. As you've foreseen this possibility in your question, it is difficult to understand why you can't accept it in an answer.

In principle your idea is not impossible! If you believe it is possible, try to find the answer!
If every answerer on SO says it is impossible, that is not always the truth!
For example: 15 years ago I tried to find out how to write a Java applet which could read and write images on a server. Everybody told me it was impossible, but I did not believe it. I tried to find my answer again and again. And I found it: I disassembled an online applet written by a specialist, and in its source code I found my answer: with a PHP server it can be done. I asked the owner of that applet about the details of the communication between the Java applet and the PHP server, and he helped me.
You have to find your specialist. That is the first rule for finding the correct answer. Maybe on an IBM forum you will find someone.
The second rule is to read a lot of books by specialists on the subject. Not only one book; maybe three of them or more.
I would also recommend that you read How do I ask a good question?, because your question does not mention any programming language. And I think there is a specialist on SO who could give you the correct answer.
The first rule on SO for finding the correct specialist is to set the correct tags. Without correct tags only a few people will see your question, and it is only a matter of luck whether one of them is the right specialist for you.
Be optimistic and try to believe in yourself! Good luck and success!

Related

Making API calls with Try/Catch Blocks

This may be a dumb question, but do you always have to use a try/catch block when you make an API call from the internet? I can't really find an answer to this question on the net.
No, they are not required.
They are useful for debugging erroneous code, or for handling a situation where internet connectivity is lost, the server is down, or (for example) a cross-domain request is not allowed.
APIs use them in example code to help developers and to spare users a totally broken experience should the server be down.
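In a language without exceptions the same idea is simply "check the result and handle the failure path". A hypothetical C sketch using libcurl (the URL and timeout are arbitrary) that survives a dead server or lost connectivity instead of assuming success:

#include <stdio.h>
#include <curl/curl.h>

int fetch(const char *url)
{
    CURL *curl = curl_easy_init();
    if (!curl)
        return -1;

    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_TIMEOUT, 10L);    /* don't hang forever */

    /* The "catch block": inspect the result instead of assuming success. */
    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
    return (res == CURLE_OK) ? 0 : -1;
}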

How to service HTTP requests on web server

Alright. I know this may draw some heat as "not a good question", etc., but I haven't found anywhere that describes the process in particular (all the resources I've found describe the client-side requesting, not the server-side responses).
I'm going to be working on writing an iOS app in the next coming months necessitating the use of a web server. There are many resources on how to set these up, get them a static IP, etc. but I haven't found any clear ones (and by clear, I mean intelligible by someone not already experienced in it) on how to write a program for such a server that actually responds to the HTTP or client request.
Suppose I have a dummy app and web server combo where the app posts an HTTP request for the time. How would I write an app for the server to bounce the time back when the request comes in? Ideally, I'd like to write this in Objective-C as it's the language I've had the most experience in (whether forced or by choice).
Again, I apologize if it isn't a good question or very clear - I just haven't found any resources that are able to give me much of a place to start.
Your question could probably be described as 'too broad', but I will give it a shot anyway. Disclaimer: I haven't written much server-side code but I have been programming in objc for years now.
The reason you haven't found (m)any resources to help you do what you want to do is that Objective-C is rarely used for writing server-side code. Exactly why that is the case is no doubt a long story, but essentially the answer is that many of the dominant technologies out there (PHP, Python, C#, Java, to name only the prevailing languages) have features and associated frameworks that are better suited for that purpose.
In other words, although it can doubtless be done, you are probably better off using something other than Objective-C for the task because:
You will have many more resources available to help you get your job done.
You will have a much larger community that you can query for assistance when (not if) you encounter an obstacle.
You will not have to do many things the hard way because there will be existing tools to make it easier.
I would also recommend using PHP as the server-side programming language.
Some months ago I was in the same situation as you. We planned to write an app (Android) which loads some data from a web server. I had never programmed server-side code before the start of the project, so it was quite interesting and new for me.
We chose PHP as the server-side language.
All I can say is that it was really easy to learn and to write your first scripts that respond to an HTTP request. The use of MySQL as the database is also really easy, and it works well with PHP.
PHP is a standard. You can find a huge amount of documentation and examples, and of course tutorials and good books ... ;)
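Just to make the "bounce the time back" scenario from the question concrete, here is roughly what a server has to do at the socket level, sketched in plain C with no request parsing and one connection at a time. The point of the answers above is that PHP or another server-side stack handles all of this plumbing for you:

#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);

    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 5);

    for (;;) {
        int client = accept(srv, NULL, NULL);     /* wait for an HTTP request */
        time_t now = time(NULL);
        char body[64], reply[256];
        snprintf(body, sizeof body, "%s", ctime(&now));
        snprintf(reply, sizeof reply,
                 "HTTP/1.1 200 OK\r\n"
                 "Content-Type: text/plain\r\n"
                 "Content-Length: %zu\r\n"
                 "\r\n%s",
                 strlen(body), body);
        write(client, reply, strlen(reply));      /* bounce the time back */
        close(client);
    }
}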

Any Multi-drop bus development help available?

Not that I can find any by googling, but ... does anyone know of any open source code/development frameworks/test software/etc for the Multidrop Bus commonly used in vending machines?
In my opinion there isn't a free framework for MDB, as this bus is only used by profit-oriented companies and nobody makes their own code open source (me included).
But the MDB protocol itself isn't very complex; it's the error handling for the various devices that is a bit complicated, as it has to be 100% safe.
And today it can be tricky to implement the 9-bit serial layer, as this isn't standard; many MCUs no longer even support it.
Edit: How I would implement it today
Follow the whole specification, especially the timings/timeouts (e.g. the NAK timeout of 5 ms).
I would use state machines to collect the configuration data, set the normal mode of operation, apply settings, and do everything else (a sketch follows below).
In the first step (not later), plan to build error handling into every state: what should happen if the communication is lost, or you get an unexpected answer?
I would also implement as much logging as possible, as sometimes money will get lost and you will have to explain why.
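Here is a rough sketch of that state-machine idea for a single peripheral, with a timeout branch in every state. The state names, command bytes, the bus primitives mdb_send()/mdb_poll_response() and the 5 ms figure are illustrative assumptions, not taken from the MDB specification:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical 9-bit bus primitives supplied by the hardware layer. */
void mdb_send(uint8_t addr, const uint8_t *cmd, int len);
int  mdb_poll_response(uint8_t *buf, int maxlen, uint32_t timeout_ms);

enum dev_state { ST_RESET, ST_SETUP, ST_ENABLED, ST_ERROR };

struct device {
    enum dev_state state;
    uint8_t        address;
};

/* One pass of the master loop for one peripheral.  Every state has an
   explicit "no / unexpected answer" branch, as the answer recommends. */
void service_device(struct device *d)
{
    uint8_t resp[36];
    int n;

    switch (d->state) {
    case ST_RESET:
        mdb_send(d->address, (const uint8_t[]){0x00}, 1);   /* RESET (illustrative) */
        d->state = ST_SETUP;
        break;
    case ST_SETUP:
        mdb_send(d->address, (const uint8_t[]){0x01}, 1);   /* SETUP (illustrative) */
        n = mdb_poll_response(resp, sizeof resp, 5);         /* 5 ms timeout */
        d->state = (n > 0) ? ST_ENABLED : ST_ERROR;
        break;
    case ST_ENABLED:
        mdb_send(d->address, (const uint8_t[]){0x02}, 1);    /* POLL (illustrative) */
        n = mdb_poll_response(resp, sizeof resp, 5);
        if (n < 0)
            d->state = ST_ERROR;        /* lost communication: log and recover */
        break;
    case ST_ERROR:
        /* log the failure (money may be involved!) and start over */
        d->state = ST_RESET;
        break;
    }
}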

Why use AMQP/ZeroMQ/RabbitMQ

as opposed to writing your own library.
We're working on a project here that will be a self-dividing server pool: if one section grows too heavy, the manager will divide it and put it on another machine as a separate process. It will also alert all affected clients to connect to the new server.
I am curious about using ZeroMQ for inter-server and inter-process communication. My partner would prefer to roll his own. I'm looking to the community to answer this question.
I'm a fairly novice programmer myself and just learned about message queues. As I've googled and read, it seems everyone is using message queues for all sorts of things, but why? What makes them better than writing your own library? Why are they so common and why are there so many?
what makes them better than writing your own library?
When rolling out the first version of your app, probably nothing: your needs are well defined and you will develop a messaging system that fits them, with a small feature list, small source code, etc.
Those tools are very useful after the first release, when you actually have to extend your application and add more features to it.
Let me give you a few use cases:
your app will have to talk to a big-endian machine (SPARC/PowerPC) from a little-endian machine (x86, Intel/AMD). Your messaging system made some byte-ordering assumptions: go and fix it
you designed your app around a non-binary protocol/messaging system and now it is very slow because you spend most of your time parsing messages (the number of messages increased and parsing became a bottleneck): adapt it so it can transport binary/fixed encoding
at the beginning you had 3 machines inside a LAN, with no noticeable delays and everything reaching every machine. Your client/boss/pointy-haired-devil-boss shows up and tells you that you will install the app on a WAN you do not manage, and then you start having connection failures and bad latency; you need to store messages and retry sending them later on: go back to the code and plug this stuff in (and enjoy)
messages sent need to have replies, but not all of them: you send some parameters in and expect a spreadsheet as a result, instead of just sends and acknowledges: go back to the code and plug this stuff in (and enjoy)
some messages are critical and their reception/sending needs proper backup/persistence. Why, you ask? Auditing purposes
And many other use cases that I forgot ...
You can implement it yourself, but do not spend much time doing so: you will probably replace it later on anyway.
That's very much like asking: why use a database when you can write your own?
The answer is that using a tool that has been around for a while and is well understood in lots of different use cases, pays off more and more over time and as your requirements evolve. This is especially true if more than one developer is involved in a project. Do you want to become support staff for a queueing system if you change to a new project? Using a tool prevents that from happening. It becomes someone else's problem.
Case in point: persistence. Writing a tool to store one message on disk is easy. Writing a persistor that scales and performs well and stably, in many different use cases, and is manageable, and cheap to support, is hard. If you want to see someone complaining about how hard it is then look at this: http://www.lshift.net/blog/2009/12/07/rabbitmq-at-the-skills-matter-functional-programming-exchange
Anyway, I hope this helps. By all means write your own tool. Many many people have done so. Whatever solves your problem, is good.
I'm considering using ZeroMQ myself - hence I stumbled across this question.
Let's assume for the moment that you have the ability to implement a message queuing system that meets all of your requirements. Why would you adopt ZeroMQ (or other third party library) over the roll-your-own approach? Simple - cost.
Let's assume for a moment that ZeroMQ already meets all of your requirements. All that needs to be done is integrating it into your build, read some doco and then start using it. That's got to be far less effort than rolling your own. Plus, the maintenance burden has been shifted to another company. Since ZeroMQ is free, it's like you've just grown your development team to include (part of) the ZeroMQ team.
If you ran a Software Development business, then I think that you would balance the cost/risk of using third party libraries against rolling your own, and in this case, using ZeroMQ would win hands down.
Perhaps you (or rather, your partner) suffer, as so many developers do, from the "Not Invented Here" syndrome? If so, adjust your attitude and reassess the use of ZeroMQ. Personally, I much prefer the benefits of a Proudly Found Elsewhere attitude. I'm hoping I can be proud of finding ZeroMQ... time will tell.
EDIT: I came across this video from the ZeroMQ developers that talks about why you should use ZeroMQ.
what makes them better than writing your own library?
Message queuing systems are transactional, which is conceptually easy to use as a client, but hard to get right as an implementor, especially considering persistent queues. You might think you can get away with writing a quick messaging library, but without transactions and persistence, you'd not have the full benefits of a messaging system.
Persistence in this context means that the messaging middleware keeps unhandled messages in permanent storage (on disk) in case the server goes down; after a restart, the messages can be handled and no retransmit is necessary (the sender does not even know there was a problem). Transactional means that you can read messages from different queues and write messages to different queues in a transactional manner, meaning that either all reads and writes succeed or (if one or more fail) none succeeds. This is not really much different from the transactionality known from interfacing with databases and has the same benefits (it simplifies error handling; without transactions, you would have to assure that each individual read/write succeeds, and if one or more fail, you have to roll back those changes that did succeed).
Before writing your own library, read the 0MQ Guide here: http://zguide.zeromq.org/page:all
Chances are that you will either decide to install RabbitMQ, or else you will make your library on top of ZeroMQ since they have already done all the hard parts.
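To give a feel for how much of the hard part is already done for you, a minimal ZeroMQ reply server in C looks roughly like this (modelled on the guide's hello-world pattern; the port and message contents are arbitrary):

#include <stdio.h>
#include <zmq.h>

int main(void)
{
    void *ctx = zmq_ctx_new();
    void *responder = zmq_socket(ctx, ZMQ_REP);
    zmq_bind(responder, "tcp://*:5555");

    for (;;) {
        char request[256];
        int n = zmq_recv(responder, request, sizeof request - 1, 0);
        if (n < 0)
            break;                          /* interrupted or context terminated */
        if (n > (int)sizeof request - 1)
            n = (int)sizeof request - 1;    /* oversized message was truncated */
        request[n] = '\0';
        printf("received: %s\n", request);

        /* framing, reconnects, fair queuing etc. are handled by the library */
        zmq_send(responder, "ack", 3, 0);
    }

    zmq_close(responder);
    zmq_ctx_destroy(ctx);
    return 0;
}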
If you have a little time, give it a try and roll out your own implementation! The lessons of that exercise will convince you of the wisdom of using an already tested library.

How do you test your applications for reliability under badly behaving I/O

Almost every application out there performs I/O operations, either with the disk or over the network.
As my applications work fine in the development environment, I want to be sure they will still do so when the Internet connection is slow or unstable, or when the user attempts to read data from a badly written CD.
What tools would you recommend to simulate:
slow i/o (opening files, closing files, reading and writing, enumeration of directory items)
occasional i/o errors
occasional 'access denied' responses
packet loss in tcp/ip
etc...
EDIT:
Windows:
The closest solution for doing the job as described seems to be Holodeck, a commercial product (>$900).
Linux:
An open solution hasn't been found so far, but the same effect can be achieved as described by smcameron and krosenvold.
The decorator pattern is a good idea.
It would require wrapping my I/O classes, but would result in a testing framework.
The only remaining untested code would be in 3rd-party libraries.
Still, I decided not to go this way, but to leave my code as it is and simulate I/O errors from the outside.
I now know that what I need is called 'fault injection'.
I thought it was a common production-line practice with plenty of solutions I just didn't know about.
(By the way, another similar good idea is 'fuzz testing'; thanks to Lennart.)
To my mind, the problem is still not worth $900.
I'm going to implement my own open-source tool based on hooks (targeting Win32).
I'll update this post when I'm done with it. Come back in 3 or 4 weeks or so...
What you need is a fault-injection testing system. James Whittaker's 'How to Break Software' is a good read on this subject and includes a CD with many of the tools needed.
If you're on Linux you can do tons of magic with iptables:
iptables -I OUTPUT -p tcp --dport 7991 -j DROP
You can simulate connections going up and down as well. There are lots of tutorials out there.
Check out "Fuzz testing": http://en.wikipedia.org/wiki/Fuzzing
At a programming level many frameworks will let you wrap the I/O stream classes and delegate calls to the wrapped instance. I'd do this and add a couple of wait calls in the key methods (writing bytes, closing the stream, throwing I/O exceptions, etc.). You could write a few of these with different failure or issue types and use the decorator pattern to combine them as needed.
This should give you quite a lot of flexibility with tweaking which operations would be slowed down, inserting "random" errors every so often etc.
The other advantage is that you could develop it in the same code as your software so maintenance wouldn't require any new skills.
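In a plain C codebase the same decorator idea can be done with a struct of function pointers: the "flaky" reader delegates to the wrapped one but injects delays and occasional errors. A rough sketch with made-up names:

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

/* Minimal "stream" interface the application codes against. */
struct reader {
    ssize_t (*read)(struct reader *self, void *buf, size_t len);
    void *impl;                               /* wrapped object or raw fd */
};

/* Real implementation: just reads from a file descriptor. */
static ssize_t fd_read(struct reader *self, void *buf, size_t len)
{
    return read((int)(intptr_t)self->impl, buf, len);
}

/* Decorator: delegates to the wrapped reader, but is slow and flaky. */
struct flaky_reader {
    struct reader  base;
    struct reader *inner;
};

static ssize_t flaky_read(struct reader *self, void *buf, size_t len)
{
    struct flaky_reader *f = (struct flaky_reader *)self;
    usleep(50 * 1000);                        /* simulate a slow device */
    if (rand() % 20 == 0) {                   /* roughly 5% of reads fail */
        errno = EIO;
        return -1;
    }
    return f->inner->read(f->inner, buf, len);
}

static void make_flaky(struct flaky_reader *f, struct reader *inner)
{
    f->base.read = flaky_read;
    f->base.impl = NULL;
    f->inner     = inner;
}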
You don't say what OS, but if it's Linux or Unix-ish, you can wrap open(), read(), write(), or any other library or system call, with an LD_PRELOAD-able library to inject faults.
Along these lines:
http://scaryreasoner.wordpress.com/2007/11/17/using-ld_preload-libraries-and-glibc-backtrace-function-for-debugging/
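A minimal sketch of that approach: build the following as a shared library and run the program under test with LD_PRELOAD pointing at it, so a small percentage of read() calls fail with EIO (the 1% rate and the choice of read() are arbitrary):

#define _GNU_SOURCE
#include <dlfcn.h>
#include <errno.h>
#include <stdlib.h>
#include <unistd.h>

/* Interposed read(): usually delegates to the real libc read(),
   but occasionally injects an I/O error. */
ssize_t read(int fd, void *buf, size_t count)
{
    static ssize_t (*real_read)(int, void *, size_t);
    if (!real_read)
        real_read = (ssize_t (*)(int, void *, size_t))dlsym(RTLD_NEXT, "read");

    if (rand() % 100 == 0) {       /* roughly 1% of calls fail */
        errno = EIO;
        return -1;
    }
    return real_read(fd, buf, count);
}

/* Build:  gcc -shared -fPIC -o libfault.so fault.c -ldl
   Use:    LD_PRELOAD=./libfault.so ./program_under_test */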
I didn't end up writing my own file system filter, as I initially thought I would, because there's a simpler solution.
1. Network I/O
I've found at least two ways to simulate I/O errors here.
a) Running a virtual machine (such as VMware) allows you to configure bandwidth and packet loss rates. VMware supports on-machine debugging.
b) Running a proxy on the local machine and tunneling all the traffic through it. For UDP/TCP communications a proxifier (e.g. WideCap) can be used.
2. File I/O
I've managed to reduce this scenario to the previous one by mapping a drive letter to a network share which resides inside the virtual machine. The file I/O will be slow.
A cheaper alternative exists: set up a local FTP server (e.g. FileZilla), configure its speed limits, and use Novell's NetDrive to access it.
You'll want to set up a test lab for this. What type of application are you building, anyway? Are you really expecting the application to be fed corrupt data?
A test technique I know the Microsoft Exchange Server people tried was sending noise to the server: basically feeding every possible input with seemingly random data. They managed to crash the server quite often this way.
But still, if you can't trust input that hasn't been signed, then the general rules apply. Track every operation which could potentially be untrusted (a result of corrupt data) and you should be able to handle most problems gracefully.
Just test your application's behavior on random input; that should catch most problems, but you'll never be able to fully protect yourself from corrupt data. That's just not possible, as the data could be part of some internal buffer being handed off within the application itself.
Be mindful of when and how you decode data. That is all.
The first thing you'll need to do is define what "correct" means under these circumstances. You can only test against a definition of what behaviour is intended.
The tactics of testing will depend on technology. In the context of automated unit testing, I have found it very useful, in OO languages such as Java, to use various flavors of "mocking" or "stubbing" to pass e.g. misbehaving InputStreams to parts of my code that used file I/O.
Consider Holodeck for some of the fault injection. If you have access to spare hardware, you can simulate network impairment using Netem, or a commercial product based on it, the Mini-Maxwell, which is much more expensive than free but possibly easier to use.