So I have this online program that writes data to a PDS member (which is used/accessed by other programs too).
Does the OPEN verb in COBOL "lock" the PDS for exclusive use, kind of like IBM's ENQ/DEQ services, so that other programs can't write to it while I'm using it? I'm on mainframe z/OS, btw.
I already tested this, but I'm skeptical of my test setup, which is...
JCL > COBOL01 > COBOL02
... where COBOL01 opens the PDS and calls COBOL02, which opens the same PDS and then WRITEs to it. The result was that COBOL02 couldn't write to the PDS.
But that's a single call chain; what happens if it's an online transaction?
The ENQ is part of the file allocation, which in your case is happening as a result of the status subparameter of the DISP parameter on your DD statement in your JCL.
DISP=(status,normal-termination,abnormal-termination)
...or, alternatively...
DISP=status
If you code DISP=OLD, you have exclusive control of the dataset. Check in the IBM Knowledge Center for differences between PDS and PDSE behavior.
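For example, a minimal sketch (MY.TEST.PDS is a made-up name):

//* Exclusive: no other job can allocate the dataset until this step ends
//OUTPDS   DD  DSN=MY.TEST.PDS,DISP=OLD
//* Shared: other jobs may allocate it concurrently
//INPDS    DD  DSN=MY.TEST.PDS,DISP=SHR

As I recall, DISP=NEW and DISP=MOD also imply exclusive use.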
The same applies to an online transaction, but your allocation may be done differently. If you're running as part of an ISPF dialog, the allocation may be done via an ALLOCATE command. If you're running in CICS, allocation may be done dynamically or, more commonly in the case of a PDS, via the JCL for the CICS region.
You say you opened the PDS twice; are you certain the second open actually worked? If you have a FILE STATUS clause coded, did you check the associated data name? Were there error messages in JESMSGLG?
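For reference, a minimal COBOL sketch of checking FILE STATUS after an OPEN (all names here are illustrative):

           SELECT PDS-FILE ASSIGN TO PDSDD
               FILE STATUS IS WS-FILE-STATUS.
      * ...
       01  WS-FILE-STATUS        PIC XX.
      * ...
           OPEN OUTPUT PDS-FILE
           IF WS-FILE-STATUS NOT = '00'
               DISPLAY 'OPEN FAILED, FILE STATUS ' WS-FILE-STATUS
           END-IF

A status of '00' means success; anything else means the OPEN didn't do what you expected.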
I always like to dig a little deeper into these types of questions, in the hope that some might appreciate the parts of mainframe operating system design that are sometimes difficult to see from the outside...
As you surmise, at the core of "locking", you'll find the z/OS ENQ/DEQ system services. These functions provide a simple way to serialize just about any resource, and there are always two parameters: a "QNAME" and an "RNAME". The QNAME identifies the class of resources...in your example, it would be SYSDSN - a dataset enqueue. The RNAME is the resource name, and in your example, it would be the actual dataset name (not including the member name, if the dataset is a PDS).
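In assembler terms, the request boils down to something like this sketch (the dataset name is made up; the RNAME is the dataset name padded to 44 bytes):

         ENQ   (QDSN,RDSN,E,44,SYSTEMS)   EXCLUSIVE, ACROSS THE GRS COMPLEX
QDSN     DC    CL8'SYSDSN'                QNAME: THE CLASS OF RESOURCE
RDSN     DC    CL44'MY.TEST.PDS'          RNAME: THE DATASET NAME

The E requests EXCLUSIVE use; S would request SHARED.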
As cschneid explains above, when you allocate a conventional mainframe dataset, the system allocation routines issue ENQ on SYSDSN with the dataset name, requesting either SHARED or EXCLUSIVE access depending on whether you coded DISP=SHR or something else in your allocation. If there's a conflict (say, you requested SHARED but another task holds EXCLUSIVE), the ENQ can't be granted immediately and you'll see a message on the console about your job "WAITING FOR DATASETS". Otherwise, the allocation proceeds, and the ENQ protects you according to the DISP you specified in your JCL.
There are a few other situations to think about...
PDS datasets are a little unusual because the ENQ is at the full dataset level, not the member level: there's generally no way to lock an individual member. Applications like the ISPF editor try to get around this by creating their own ENQs on a different resource. The ISPF editor uses QNAME=SPFEDIT with an RNAME that includes the member name to detect two users editing the same member at the same time, but this only provides protection between ISPF users, not between an ISPF user and another application outside of ISPF.
VSAM has its own notion of sharing, controlled by the VSAM SHAREOPTIONS set when the VSAM file is defined. This makes the type of sharing a function of the file, rather than of the application allocating the file.
ENQ can be single-system or can span multiple systems. The GRS (Global Resource Serialization) service is normally configured to propagate SYSDSN ENQs across all systems that might have physical access to the dataset.
Note that in the ENQ I described, there's no disk volume information. That is, if you have a dataset called XYZ on two different volumes, allocating either one generally serializes BOTH of them, no matter which you allocate. This can be a feature or a problem, depending on how you look at it.
JCL allocation is a little different from dynamic allocation. With dynamic allocation, most applications aren't allowed to wait for resources, so if there's a conflict, your call to DYNALLOC fails with a "resource not available" return code. With JCL allocation, you normally just wait for the resource contention to clear, and then your job runs.
There are a few popular vendor products that change this flow a bit... CA's MIM product, for instance, automatically detects ENQ conflicts and reschedules the waiting job for later execution when its resources become available. This improves overall system throughput since another task can run instead.
Sophisticated applications sometimes use the RESERVE/RELEASE service to serialize access to files. RESERVE is a hardware feature of most mainframe devices that in essence serializes I/O to a given device until a corresponding RELEASE. It usually requires low-level I/O programming, but it can be faster than ENQ/DEQ, especially in a setting where there are many systems sharing resources.
Most system performance monitors have a function that lets you see and monitor ENQ activity...you can learn a lot by watching the flow of ENQs and how the system handles conflicts.
We want to migrate bulk files (e.g. VSAM) from the mainframe to Azure at the beginning of the project. How can that be achieved?
Is there any utility for this, or do we need to write our own scripts?
I suspect there are some utilities out there, but most or all of them are likely priced products. Since VSAM datasets are not defined using a language construct like DDL, you will likely have to do most of the heavy lifting yourself, either writing your own programs or custom scripts. You didn't mention the operating system, but I assume you're working on z/OS.
Here are some things to consider:
The structure of the VSAM dataset is basically record oriented. There are three basic types you’ll run into that host application data:
Key Sequenced Datasets (KSDS)
Entry Sequenced Datasets (ESDS)
Relative Record Datasets (RRDS)
Familiarize yourself with the means of defining the datasets, as it will give you some insight into the dataset specifics. DFSMS Access Method Services Commands will show the utilities used to create them and to get information like the key length and offset of the key. DEFINE CLUSTER is the command that creates the dataset. You mentioned you are moving the data to Azure, but this will help you understand the characteristics of the data you are moving.
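For reference, a typical IDCAMS DEFINE CLUSTER for a KSDS looks something like this sketch (all names and numbers are illustrative): INDEXED makes it a KSDS, KEYS(10 0) describes a 10-byte key at offset 0, and RECORDSIZE gives the average and maximum record lengths.

//DEFKSDS  EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DEFINE CLUSTER (NAME(MY.APP.CLIENTS) -
      INDEXED -
      KEYS(10 0) -
      RECORDSIZE(200 400) -
      SHAREOPTIONS(2 3) -
      CYLINDERS(5 1))
/*

LISTCAT ENTRIES(MY.APP.CLIENTS) ALL will show the same attributes for an existing dataset, which is handy when you're taking inventory for a migration.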
Since there is no DDL for VSAM datasets you will generally find the structure in the programs that manipulate them like COBOL Copybooks, HLASM DSECTs and similar constructs. This is the long pole in the tent for you.
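As an illustration, here is the kind of record layout you might dig out of a copybook (the fields are made up); it also shows the data types you'll have to convert later:

       01  CLIENT-RECORD.
           05  CLIENT-ID        PIC X(10).            *> EBCDIC text
           05  CLIENT-NAME      PIC X(30).            *> EBCDIC text
           05  CLIENT-BALANCE   PIC S9(7)V99 COMP-3.  *> packed decimal
           05  INVOICE-COUNT    PIC S9(4)    COMP.    *> binary halfword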
Consider the semantics of accessing the data. VSAM as an access method does have some ability to control read/write access at a macro level using a DEFINE CLUSTER option called SHAREOPTIONS. The SHAREOPTIONS instruct the operating system how to handle the VSAM buffers for reading and writing so that multiple processes can access the same data. It's primitive compared to shared file systems like NFS. VSAM also allows the application to control access (serialization) using the ENQ/DEQ functions, which enable applications to express intent about a VSAM file and coordinate their own activities.
You might find that converting a VSAM file to a relational form like Db2 is better for you. Again, you’ll have to create the DDL to describe the tables, data formats and the like.
Another consideration is data conversion. You'll find character data that is most likely EBCDIC and needs to be converted to a new code page. Numeric data can be packed decimal, binary, or even text, and will need to be converted as well.
The short answer is there isn't an "Easy Button" for what you want. The data itself is only one of the questions that needs answering: there's also serialization of and access to the data, code page conversion, and, if you are moving some data but not all of it, whether you'll need to map some of the converted data back to data still on the mainframe.
Consider exploring IBM CDC Classic replication. You can achieve it with a few clicks.
I have not done this for Azure, though, so I'm not sure about support.
I'm doing some moderately low-level programming of an embedded device that has some NVRAM we plan to use for retaining values between runs of a program. We'd like to abstract the operations into an API over a driver or talking to a daemon. This is lower-level than the serialization semantics I've seen here and there. Basically we want a process or function to be able to reserve some space (with some name or other identifier), store a value (arbitrary byte sequence) in that reserved space, retrieve the value later, and surrender the reservation if it no longer needs to use it. This feels a lot like malloc, write, read, and free. I'm tempted to implement nvAlloc() (or something) and so on. Or am I missing something obvious? Maybe security: another process getting a handle and accessing or corrupting the value.
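Roughly, the API I have in mind looks like this (the names and signatures are just placeholders for discussion):

/* Sketch of the proposed NVRAM API; names and signatures are placeholders. */
#include <stddef.h>

typedef struct nv_handle nv_handle;  /* opaque handle to a named reservation */

/* Reserve (or reopen) a named region of NVRAM of the given size. */
nv_handle *nv_alloc(const char *name, size_t size);

/* Store/retrieve an arbitrary byte sequence within the reservation. */
int nv_write(nv_handle *h, const void *buf, size_t len, size_t offset);
int nv_read(nv_handle *h, void *buf, size_t len, size_t offset);

/* Surrender the reservation when it is no longer needed. */
int nv_free(nv_handle *h);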
It seems http://pramfs.sourceforge.net/ and normal file system access are the right answer.
First of all, thanks in advance for your help.
I've decided to ask for help in forums like this one because, after several months of hard work, I couldn't find a solution to my problem.
It can be described as: "Why is an object created in VB.NET not released by the GC when it is disposed, even when the GC is forced to run?"
Please consider the following piece of code. Obviously my project is much more complex, but I was able to isolate the problem:
Imports System.Data.Odbc
Imports System.Threading

Module Module1

    Sub Main()
        'Declarations-------------------------------------------------
        Dim connex As OdbcConnection 'Connection to the DB
        Dim db_Str As String         'ODBC connection string

        'Sentences----------------------------------------------------
        db_Str = "My ODBC connection String to my MySQL database"

        While True
            'Condition: infinite loop.
            connex = New OdbcConnection(db_Str)
            connex.Open()
            connex.Close()

            'Release the created object
            connex.Dispose()

            'Force the GC to run
            GC.Collect()

            'Send the application to sleep for half a second
            System.Threading.Thread.Sleep(500)
        End While
    End Sub

End Module
This simulates a multithreaded application making connections to a MySQL database. As you can see, the connection is created as a new object and then released; finally, the GC is forced to run. I've seen this pattern in several forums and also in the MSDN online help, so as far as I can tell, I'm not doing anything wrong.
The problem begins when the application is launched. The object created is disposed within the code, but after a while the available memory is exhausted and the application crashes.
Of course, this problem is hard to see in this little version, but in the real project the application runs out of memory very quickly (due to the number of connections made over time) and, as a result, the uptime is only two days. Then I need to restart the application again.
I installed a memory profiler on my machine (SciTech .NET Memory Profiler 4.5, downloadable trial version here). There is a section called 'Investigate memory leaks'. I was absolutely astonished when I saw the 'Real Time' tab. If I am correct, this graphic is telling me that none of the objects created in the code have actually been released:
The surprise was even bigger when I saw this other screen. According to it, all the undisposed objects are of type System.Transactions, which I assume are managed internally within the .NET libraries, as I am not creating any objects of this type in my code. Does this mean there is a bug in the VB.NET standard libraries?
Please notice that in my code I am not executing any query. If I do, the OdbcDataReader object won't be released either, even if I call the .Close() method (surprisingly enough, the number of unreleased objects of this type is exactly the same as the number of unreleased System.Transactions objects).
Another important thing is the GC.Collect() statement. This is used by the memory profiler to refresh the information to be displayed. If you remove it from the code, the profiler won't update the real-time diagram properly, giving you the false impression that everything is correct.
Finally, if you omit the connex.Open() statement, screenshot #1 renders a flat line (meaning all the objects created have been successfully released), but unfortunately we can't make any query against the database if the connection hasn't been opened.
Can someone find a logical explanation to this and also, a workaround for effectively releasing the objects?
Thank you all folks.
Nico
Dispose has nothing to do with garbage collection. Garbage collection is exclusively about managed resources (memory). Dispose has no bearing on memory at all, and is only relevant for unmanaged resources (database connections, file handles, GDI resources, sockets... anything that isn't memory). The only relationship between the two has to do with how an object is finalized, because many objects are implemented such that disposing them suppresses finalization and finalizing them calls .Dispose(). Explicitly Disposing() an object will never cause it to be collected.[1]
Explicitly calling the garbage collector is almost always a bad idea. .NET uses a generational garbage collector, and so the main effect of calling it yourself is that you'll hold onto memory longer, because by forcing the collection earlier you're likely to check the items before they are eligible for collection at all, which sends them into a higher-order generation that is collected less often. These items otherwise would have stayed in the lower generation and been eligible for collection when the GC next ran on its own. You may need to use GC.Collect() now for the profiler, but you should try to remove it from your production code.
You mention your app runs for two days before crashing, and are not profiling (or showing results for) your actual production code, so I also think the profiler is in part misleading you here. You've pared down the code to something that produced a memory leak, but I'm not sure it's the memory leak you are seeing in production. This is partly because of the difference in time to reproduce the error, but it's also "instinct". I mention that because some of what I'm going to suggest might not make sense immediately in light of your profiler results. That out of the way, I don't know for sure what is going on with your lost memory, but I can make a few guesses.
The first guess is that your real code has a try/catch block. An exception is thrown... perhaps not on every connection, but sometimes. When that happens, the catch block allows your program to keep running, but you skipped over the connex.Dispose() line and therefore left an open connection hanging around. These connections will eventually create a denial-of-service situation for the database, which can manifest itself in a number of ways. The correction here is to make sure you always use a finally block for anything you .Dispose(). This is true whether or not you currently have a try/catch block, and it's important enough that I would say the code you've posted so far is fundamentally wrong: you need a try/finally. There is a shortcut for this, via a Using block.
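In VB.NET the shortcut looks like this (a sketch based on the code in the question):

Using connex As New OdbcConnection(db_Str)
    connex.Open()
    'Run your queries here...
End Using 'Dispose() runs here, even if an exception was thrown

The End Using compiles down to a finally block that calls Dispose() for you.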
The next guess is that some of your real commands end up fairly large, possibly with large strings or image (byte[]) data involved. In this case, items end up on a special garbage collector generation called the Large Object Heap (LOH). The LOH is rarely collected, and almost never compacted. Think of compaction as analogous to what happens when you defrag a hard drive. If you have items going to the LOH, you can end up in a situation where the physical memory itself is freed (collected), but the address space within your process (you are normally limited to 2GB) is not freed (compacted). You have holes in your memory address space that will not be reclaimed. The physical RAM is available to your system for other processes, but over time this still results in the same kind of OutOfMemory exception you're seeing. Most of the time this doesn't matter: most .Net programs are short-lived user-facing apps, or ASP.Net apps where the entire thread can be torn down after a page is served. Since you're building something like a service that should run for days, you have to be more careful. The fix may involve significantly re-working some code, to avoid creating the large objects at all. That may mean re-using a single or small set of byte arrays over and over, or using streaming techniques instead of string concatenation or string builders for very large sql queries or sql query data. It may also mean you find this easier to do as a scheduled task that runs daily and shuts itself down at the end of the day, or a program that is invoked on demand.
A final guess is that something you are doing results in your connection objects still being in some way reachable by your program. Event handlers are a common source of mistakes of this sort, though I would find it strange to have event handlers on your connections, especially as this is not part of your example.
[1] I suppose I could contrive a scenario that would make this happen. A simple way would be to build an object that assumes a global collection for all objects of that type... the objects add themselves to the collection at construction and remove themselves at disposal. In this way, the object could not be collected before disposal, because before that point it would still be reachable... but that would be a very flawed program design.
Thank you all guys for your very helpful answers.
Joel, you're right. This code produces 'a leak', which is not necessarily the same as 'the leak' problem I have in my real project, though they show the same symptoms; that is, the number of unreleased objects keeps growing (and will eventually exhaust the memory) in the code above. So I wonder what's wrong with it, as everything seems to be properly coded. I don't understand why the objects are not disposed/collected. According to the profiler, they are still in memory and will eventually prevent new objects from being created.
One of your guesses about my 'real' project hit the nail on the head. I've realized that my 'catch' blocks didn't dispose of the objects, and this has now been fixed. Thanks for your valuable suggestion. However, I implemented the 'Using' clause in the example code above and it didn't actually fix the problem.
Hans, you are also right. After posting the question, I changed the libraries in the code above to make connections to MySQL.
The old libraries (in the example):
System.Data.Odbc
The new libraries:
System.Data
Microsoft.Data.Odbc
With the new ones, the profiler rendered a flat line without any further changes to the code, which is what I was looking for. So my conclusion is the same as yours: there may be some internal error in the old ones that makes this happen, which makes them a real 'troublemaker'.
Now I remember that I originally used the new ones in my project (System.Data and Microsoft.Data.Odbc) but soon switched to the old ones (System.Data.Odbc) because the new ones don't allow Multiple Active Result Sets (MARS). My application makes a huge number of queries against the MySQL database, but unfortunately the number of connections is limited. So I initially implemented my real code so that it made only a few connections and shared them across the code (passing the connection between functions as a parameter). This was great because (for example) I needed to retrieve a recordset (let's say clients) and make a lot of checks at the same time (e.g. the client has at least one invoice, the client has a duplicated email address, etc., which involves a lot of side queries). With the 'old' libraries, the same connection allowed me to create multiple commands and execute different queries.
The 'new' libraries don't allow MARS: I can only create one command (that is, execute one query) per session/connection. If I need to execute another one, I have to close the previous recordset first (which isn't actually possible while I am iterating over it) and then make the new query.
I had to find a balance between both problems. So I ended up using the 'new' libraries because of the memory problems, and I recoded my application not to share connections (each procedure creates a new one when needed), as well as reducing the number of simultaneous connections so as not to exhaust the connection pool.
The solution is far from ideal, as it introduces spurious logic into the application (the ideal scenario would be to migrate to SQL Server), but it is giving me better results and the application is more stable, at least in the early stages of the new version.
Thanks again for your suggestions; I hope you find mine useful too.
Cheers.
Nico
I'm coming into an existing (game) project whose server component is written entirely in Erlang. At times it can be excruciating to get a piece of data (say, how many widgets player 56 has) out of the process that owns it. Assuming I can find the process that owns the data, I can pass a message to that process and wait for it to pass a message back, but this does not scale well to multiple machines and it kills response time.
I have been considering replacing many of the tasks that exist in this game with a system where information that is frequently accessed by multiple processes would be stored in a protected ets table. The table's owner would do nothing but receive update messages (the player has just spent five widgets) and update the table accordingly. It would catch all exceptions and simply go on to the next update message. Any process that wanted to know if the player had sufficient widgets to buy a fooble would need only to peek at the table. (Yes, I understand that a message might be in the buffer that reduces the number of widgets, but I have that issue under control.)
I'm afraid that my question is less of a question and more of a request for comments. I'll upvote anything that is both helpful and sufficiently explained or referenced.
What are the likely drawbacks of such an implementation? I'm interested in the details of lock contention that I am likely to see in having one-writer-multiple-readers, what sort of problems I'll have distributing this across multiple machines, and especially: input from people who've done this before.
First of all, the default ETS behaviour is consistent, as you can see in the documentation: Erlang ETS.
It provides atomicity and isolation for single operations, and for multiple updates/reads if they're done in the same function call (remember that in Erlang a function call is roughly equivalent to a reduction, the unit of measure the Erlang scheduler uses to share time between processes, so an ETS operation spread over multiple function calls could be split into parts, creating a possible race condition).
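For example, ets:update_counter/3 does a read-modify-write as one atomic operation, which is the safe way to spend widgets (the table and key names are made up):

%% Owner process: 'protected' lets any local process read; only the owner writes.
Tab = ets:new(widgets, [set, protected, named_table]),
true = ets:insert(widgets, {player_56, 100}),

%% The owner atomically subtracts 5 and gets the new count back in one operation:
NewCount = ets:update_counter(widgets, player_56, -5),

%% Any other local process can simply peek:
[{player_56, N}] = ets:lookup(widgets, player_56).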
If you are interested in a multi-node ETS architecture, you should perhaps take a look at Mnesia, which gives you out-of-the-box multi-node concurrency on top of ETS: Mnesia.
(Hint: I'm talking specifically about ram_copies tables and the add_table_copy and change_config functions.)
That being said, I don't understand the problem with a process (possibly backed by an unnamed ETS table).
Let me explain better: the main problem with your project is the first, basic assumption.
It's simple: you don't have a single writing process!
Every time a player takes an object, hits another player and so on, it calls a function with side effects that updates the game state, so even if you have a single process managing the game state, it must also tell the other player clients 'hey, you remember that object there? Just forget it!'; this is why the main problem with many multiplayer games is lag: when networking itself is not the main issue, lag is often due to blocking send/receive routines.
From this point of view, directly using an ETS table, a persistent table, a process dictionary (BAD!!!) and so on are all the same thing, because you have to consider synchronization issues, just as in object-oriented languages using shared memory (Java, anyone?).
In the end, you should have just ONE main concern while developing your application: consistency.
Only after a consistent application has been developed should you concern yourself with performance tuning.
Hope it helps!
Note: I've talked about something like an MMORPG server because I thought you were talking about something similar.
An ETS table would not solve your problems in that regard. Your code (that wants to get or set the player widget count) will always run in a process and the data must be copied there.
Whether that is from a process heap or an ETS table makes little difference (that said, reading from ETS is often faster because it's well optimized and doesn't perform any other work than getting and setting data), especially when getting the data from a remote node. For multiple readers, ETS is most likely faster, since a process would handle the requests sequentially.
What would make a difference, however, is whether the data is cached on the local node or not. That's where self-replicating database systems, such as Mnesia, Riak or CouchDB, come in. Mnesia is in fact implemented using ETS tables.
As for locking, the latest version of Erlang comes with enhancements to ETS which enable multiple readers to simultaneously read from a table plus one writer that writes. The only locked element is the row being written to (thus better concurrent performance than a normal process, if you expect many simultaneous reads for one data point).
Note, however, that all interaction with ETS tables is non-transactional! That means that you cannot rely on writing a value based on a previous read, because the value might have changed in the meantime. Mnesia handles that using transactions. You can still use the dirty_* functions in Mnesia to squeeze near-ETS performance out of most operations, if you know what you're doing.
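Roughly, the contrast looks like this sketch (the player table and its two-field records are assumed to be defined elsewhere):

%% Transactional: the read and the write are isolated from concurrent writers.
spend_widgets(Id, N) ->
    mnesia:transaction(fun() ->
        [{player, Id, Widgets}] = mnesia:read(player, Id),
        mnesia:write({player, Id, Widgets - N})
    end).

%% Dirty: near-ETS speed, but the read-modify-write is no longer isolated.
spend_widgets_dirty(Id, N) ->
    [{player, Id, Widgets}] = mnesia:dirty_read(player, Id),
    mnesia:dirty_write({player, Id, Widgets - N}).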
It sounds like you have a bunch of things that can happen at any time, and you need to aggregate the data in a safe, uniform way. Take a look at the Generic Event behaviour (gen_event). I'd recommend using it to create an event server and have all these processes share this information via events to your server; at that point you can choose to log it or store it somewhere (like an ETS table). As an aside, ETS tables are not good for persistent data like how many "widgets" a player has - consider Mnesia, or an excellent crash-only DB like CouchDB. Both of these replicate very well across machines.
You bring up lock contention - you shouldn't have any locks. Messages are processed sequentially, in the order they are received, by each process. In fact, the entire point of the message-passing semantics built into the language is to avoid shared-state concurrency.
To summarize: normally you communicate with messages, from process to process. This is hairy for you because you need information from processes scattered all over the place, so my recommendation is based on the idea of concentrating all information that is "interesting" outside of the originating processes into a single, real-time source.
I created this simple textpad program in WPF/VB.NET 2008 that automatically saves the content of the form to an XML file on every keystroke.
Now I'm trying to make the program see changes to the XML file in real time. For example, if I open two of my textpads, when I write in the first one, it automatically shows up in the other textpad.
How can I do this?
One of my colleagues told me to read about INotifyPropertyChanged (which I did), but how can I apply it to my application?
:( help~
btw, I got the idea from a Google Wave demo, and I'm actually trying to do something bigger..
Note - this approach will be really, really expensive in terms of disk I/O, memory usage and CPU time. Why are you using XML? Is that the native format of the data you are editing? You may want to look at a more compact format - one that will use less memory, generate fewer I/Os and use less CPU.
Also note that your writer may need to flush the file for the watcher to notice any changes. This is expensive as well - especially if you're doing it on every keystroke.
Be sure to use the correct file open attributes (sharing, reading and writing).
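In VB.NET that means opening the file with an explicit FileShare value, for example (a sketch; xmlPath is your file, and this assumes Imports System.IO):

'Reader side: let the other instance keep reading and writing the same file.
Using fs As New FileStream(xmlPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite)
    Using reader As New StreamReader(fs)
        Dim content As String = reader.ReadToEnd()
    End Using
End Using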
You may want to consider using shared memory to communicate between your processes. This will be less expensive. You can avoid large amounts of disk I/O by only writing changes to disk when the user asks to commit them, or when there is a hint to do so. I suggest avoiding doing this on every keystroke.
Remember, your app needs to be a good system citizen and consume a reasonable amount of system resources. This is especially true running on netbooks and other 'low spec' systems.
You will probably need to use the FileSystemWatcher to watch the file on the disk rather than a property in the running instance of the application.
Or you could use some custom message passing between different instances of your application.
INotifyPropertyChanged isn't going to work for your application. That interface is used when data-binding some element to a UI object.
Your best bet is going to be to attach a FileSystemWatcher to the file when you open it for editing. You can then use the change events to reload the file as needed in each instance of your application.
This will also load changes made from external editors.
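A minimal VB.NET sketch (the folder, file name and ReloadDocument are all made up):

'Somewhere in your startup code (assumes Imports System.IO):
Dim watcher As New FileSystemWatcher("C:\MyTextpad", "notes.xml")
watcher.NotifyFilter = NotifyFilters.LastWrite
AddHandler watcher.Changed, AddressOf OnXmlChanged
watcher.EnableRaisingEvents = True

Private Sub OnXmlChanged(ByVal sender As Object, ByVal e As FileSystemEventArgs)
    'Changed can fire more than once per save, and it arrives on a worker
    'thread, so debounce and marshal to the WPF Dispatcher before touching UI.
    ReloadDocument(e.FullPath)
End Sub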
It sounds like you are using file I/O as a form of interprocess communication. If so, IMO you need to rethink your design, especially if you are doing something "bigger" than Google Wave (whatever "bigger" means in this context), as what you are proposing is terribly inefficient.
Do some searching on interprocess communication and you will get a whole bunch of ideas; #foredecker's idea (+1) of shared memory is a good possibility, for example.