Can someone explain the LOOKUP operation in NFS v3 in detail, covering the operations that occur on the client side and on the server side?
This is something you should probably read up on, and it is too generic for Stack Overflow. The RFC says the following:
The LOOKUP procedure is used by the client to traverse
multicomponent file names (pathnames). Each call to LOOKUP is
used to resolve one segment of a pathname. There are two reasons
for restricting LOOKUP to a single segment: it is hard to
standardize a common format for hierarchical file names and the
client and server may have different mappings of pathnames to
file systems. This would imply that either the client must break
the path name at file system attachment points, or the server
must know about the client's file system attachment points. In
NFS version 3 protocol implementations, it is the client that
constructs the hierarchical file name space using mounts to
build a hierarchy. Support utilities, such as the Automounter,
provide a way to manage a shared, consistent image of the file
name space while still being driven by the client mount
process.
See https://www.ietf.org/rfc/rfc1813.txt for more information.
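As a rough illustration of that per-component traversal, here is a sketch of the loop a client-side resolver runs. nfs_lookup is a made-up stand-in for the LOOKUP RPC (directory file handle plus one name in, child file handle out), not a real API:

# Sketch of client-side pathname resolution over NFSv3 LOOKUP.
# nfs_lookup(dir_fh, name) stands in for the LOOKUP RPC: the server resolves
# exactly one name inside the directory identified by dir_fh and returns the
# child's file handle (attributes omitted here).

def nfs_lookup(dir_fh, name):
    raise NotImplementedError("stand-in for the LOOKUP RPC")

def resolve_path(root_fh, path):
    """Resolve a slash-separated path, one LOOKUP call per component."""
    fh = root_fh
    for component in path.strip("/").split("/"):
        if component:
            # One round trip per component; the server never sees the full
            # path, so client-side mount points stay the client's business.
            fh = nfs_lookup(fh, component)
    return fh

The server side of each call is comparatively simple: it validates the directory file handle, looks up the single name in that directory, and returns the child's file handle and attributes (or an error such as NFS3ERR_NOENT).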
I need to build tables to manage documents such as jpg, doc, msg and pdf using SQL Server 2008.
As far as I know, SQL Server supports .jpg images, so my question is whether it's possible to upload other kinds of files into a DB.
This is an example of the table (it could be redefined if needed).
Document : document_id int(10)
name varchar(10)
type image (I don't know how this might work)
Those are the initial columns for the table, but I don't know how to make it work for any file type.
PS: do I need to assign a directory on the server to save these documents?
You can store almost any file type in a SQL Server table... if you do, you will almost certainly regret it.
Store metadata and a pointer to the file in your database instead, and store the files themselves directly on disk where they belong.
Your database size - and thus hardware required to run it - will grow very rapidly, so you will be incurring large costs that you do not need to incur.
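To illustrate the pointer approach: the table holds only metadata plus the location of the file, and the bytes stay on disk. This sketch uses Python's built-in sqlite3 purely for illustration (the question targets SQL Server 2008, but the schema idea is the same); the directory name is made up.

# Minimal sketch of the "store a pointer, not the file" pattern.
import os, shutil, sqlite3, uuid

STORAGE_DIR = "file_store"              # illustrative directory on the server's disk

conn = sqlite3.connect("documents.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS document (
        document_id TEXT PRIMARY KEY,   -- generated key
        name        TEXT NOT NULL,      -- original file name
        mime_type   TEXT NOT NULL,      -- e.g. application/pdf
        file_path   TEXT NOT NULL       -- where the bytes actually live
    )""")

def save_document(src_path, mime_type):
    os.makedirs(STORAGE_DIR, exist_ok=True)
    doc_id = str(uuid.uuid4())
    dest = os.path.join(STORAGE_DIR, doc_id)
    shutil.copyfile(src_path, dest)                     # the file goes to disk...
    conn.execute("INSERT INTO document VALUES (?, ?, ?, ?)",
                 (doc_id, os.path.basename(src_path), mime_type, dest))
    conn.commit()                                       # ...only the pointer goes to the DB
    return doc_id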
Use Filestream
https://learn.microsoft.com/en-us/sql/relational-databases/blob/filestream-sql-server
I know that a link-only answer is not an answer, but I can't believe no one has mentioned it yet.
The proper database design pattern is not to save files in the DBMS. You should develop a kind of File Manager Subsystem to manage files for all of your projects.
File Manager Subsystem
This subsystem should be reusable, extensible, secure, and so on. All of your projects that need to save files can use it.
Files can be saved anywhere: local disks, network drives, external drives, the cloud, and so on. The subsystem should therefore be designed to support all of these storage targets.
(You can improve the subsystem by adding features to it, for example duplicate-file detection.)
The subsystem should generate a unique key for each file; after a file is uploaded and saved, the subsystem returns that key.
You then store this unique key in the database (instead of the file). Whenever you need the file, you read the unique key from the database and request the file from the subsystem by that key.
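A rough sketch of what such a subsystem's interface could look like, with a local directory as one possible backend (all names here are illustrative, not an existing library):

# Illustrative file-manager subsystem: callers get back a unique key,
# and only that key is stored in the application database.
import os, shutil, uuid

class FileManager:
    """One possible backend: a local directory. The same interface could
    front a network share, an external drive or a cloud bucket."""

    def __init__(self, base_dir):
        self.base_dir = base_dir
        os.makedirs(base_dir, exist_ok=True)

    def store(self, src_path):
        key = uuid.uuid4().hex                 # unique key generated per file
        shutil.copyfile(src_path, self._path(key))
        return key                             # the caller saves this key in the DB

    def open(self, key):
        return open(self._path(key), "rb")     # fetch the file back by its key

    def _path(self, key):
        return os.path.join(self.base_dir, key)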
I have a task to do. I need to build a WCF service that allows a client to import a file into a database using the server backend. To do this, I need to communicate to the server the settings, the events needed to start and configure the import and, most importantly, the file to import. The problem is that these files can be extremely large (much bigger than 2 GB), so it's not possible to send them via the browser as they are. The only thing that comes to my mind is to split these files and send the pieces to the server one by one.
I also have another requirement: I need to be 100% sure that these files are not corrupted, so I also need to implement some sort of policy for error detection and, if possible, recovery.
Do you know of an API or DLL that can help me achieve these goals, or is it better to write the code myself? And in that case, what would be the optimal size of the chunks?
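To make the idea concrete, this is roughly the kind of chunking-plus-checksum scheme I have in mind (the chunk size and hash are just examples, and send_chunk stands in for whatever WCF operation would receive the data):

# Rough sketch of a chunked upload with per-chunk integrity checks.
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024        # 4 MB, an arbitrary example value

def upload_in_chunks(path, send_chunk):
    whole_file = hashlib.sha256()
    with open(path, "rb") as f:
        index = 0
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            whole_file.update(chunk)
            # Ship each chunk with its own digest so the server can detect
            # corruption and ask for a resend of just that piece.
            send_chunk(index, chunk, hashlib.sha256(chunk).hexdigest())
            index += 1
    return whole_file.hexdigest()   # compare with the server's digest at the end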
On my BizTalk server I use several different credentials to connect to internal and external systems. There is an upcoming task to change the passwords for a lot of systems and I'm searching for a solution to simplify this task on my BizTalk server.
Is there a way to adjust the File/FTP adapters to extract this information from an XML file, so that I only have to change it in the XML file and everything is updated, or is there an alternative I could use, such as PowerShell?
Has anyone else had this task as well?
I'd rather not create a custom adapter, but if there is no alternative I will go for that. Using dynamic credentials for the send port can be solved with an orchestration, but I need this for the receive port as well.
You can export the bindings of all your applications. All the passwords for the FTP and File adapters will be masked out with a series of * (asterisks).
You could then trim your bindings down to just those ports you want to update, replace the masked-out passwords with the correct passwords, and, when you want the passwords changed, import them.
Unfortunately, unless you have already prepared tokenised binding files, the above is a manual effort.
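As a small illustration of the tokenised approach: keep a template binding file with placeholders instead of passwords and fill it in just before import (the import itself still happens through the normal tooling, BTSTask or the admin console). The placeholder syntax, file names and the JSON credential store below are made up for the example:

# Sketch: fill password placeholders in a tokenised binding file before import.
import json

def render_bindings(template_path, secrets_path, output_path):
    with open(secrets_path) as f:
        secrets = json.load(f)                 # e.g. {"FTP_SEND_PASSWORD": "..."}
    with open(template_path, encoding="utf-8") as f:
        bindings = f.read()
    for name, value in secrets.items():
        bindings = bindings.replace("{{%s}}" % name, value)
    with open(output_path, "w", encoding="utf-8") as f:
        f.write(bindings)

render_bindings("MyApp.BindingInfo.template.xml", "passwords.json",
                "MyApp.BindingInfo.xml")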
I was going to recommend that you take a look at Enterprise Single Sign-On, but on second thoughts, I think you probably just need to 'bite the bullet' and make the change in the various Adapters.
ESSO would be beneficial if you have a single Adapter with multiple endpoints/credentials, but I infer from your question that isn't the case (i.e. you're not just using a single adapter). I also don't think re-writing the adapters to include functionality to read usernames/passwords from a file is feasible - just changing the passwords would be much faster, by an order of weeks or months ;-)
One option that is available to you, however, depending on which direction the adapter is being used: if you need to change credentials on Send Adapters, you should consider setting usernames/passwords at runtime via the various Adapter Property Schemas (see http://msdn.microsoft.com/en-us/library/aa560564.aspx for the FTP Adapter Properties, for example). You could then easily create an encoding Send Pipeline Component that reads an XML file containing credentials and updates the message context properties accordingly; the message would then be sent with the appropriate credentials to the required endpoint.
There is also the option of using ESSO as your (encrypted) config store instead of XML files / a database etc. Richard Seroter has a really good post on this from way back in 2007 (it's still perfectly valid, though).
I'm running a very computationally intensive scientific job that spits out results every now and then. The job is basically to just simulate the same thing a whole bunch of times, so it's divided among several computers, which use different OSes. I'd like to direct the output from all these instances to the same file, since all the computers can see the same filesystem via NFS/Samba. Here are the constraints:
Must allow safe concurrent appends. Must block if some other instance on another computer is currently appending to the file.
Performance does not count. I/O for each instance is only a few bytes per minute.
Simplicity does count. The whole point of this (besides pure curiosity) is so I can stop having every instance write to a different file and manually merging these files together.
Must not depend on the details of the filesystem. Must work with an unknown filesystem on an NFS or Samba mount.
The language I'm using is D, in case that matters. I've looked, there's nothing in the standard lib that seems to do this. Both D-specific and general, language-agnostic answers are fully acceptable and appreciated.
Over NFS you face some problems with client-side caching and stale data. I have written an OS-independent lock module to work over NFS before. The simple idea of creating a [datafile].lock file does not work well over NFS. The basic workaround is to create a lock file [datafile].lock whose presence means the file is NOT locked; a process that wants to acquire the lock renames the file to a different name such as [datafile].lock.[hostname].[pid]. The rename is an atomic enough operation that it works well over NFS and guarantees exclusivity of the lock. The rest is basically a bunch of fail-safes, loops, error checking and lock recovery in case the process dies before releasing the lock and renaming the lock file back to [datafile].lock.
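A minimal sketch of that rename scheme (Python for illustration; the initial creation of the unlocked file, stale-lock recovery and most error handling are left out):

# The presence of "<datafile>.lock" means UNLOCKED; acquiring the lock
# renames it to a per-process name, and rename is the atomic step.
import os, socket, time

def acquire(datafile):
    free = datafile + ".lock"
    mine = "%s.lock.%s.%d" % (datafile, socket.gethostname(), os.getpid())
    while True:
        try:
            os.rename(free, mine)      # only one renamer can win
            return mine
        except OSError:
            time.sleep(0.5)            # someone else holds it (or it's stale); retry

def release(token, datafile):
    os.rename(token, datafile + ".lock")    # put the "unlocked" name back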
The classic solution is to use a lock file, or more accurately a lock directory. On all common OSs creating a directory is an atomic operation so the routine is:
try to create a lock directory with a fixed name in a fixed location
if the create failed, wait a second or so and try again - repeat until success
write your data to the real data file
delete the lock directory
This has been used by applications such as CVS for many years across many platforms. The only problem occurs in the rare cases when your app crashes while writing and before removing the lock.
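In Python-ish terms the routine above looks something like this (the names and the retry delay are just for illustration):

# mkdir is atomic on common OSes, so whoever creates the directory owns the lock.
import os, time

LOCK_DIR = "results.lockdir"            # fixed name in a fixed location

def append_result(line, datafile="results.txt"):
    while True:
        try:
            os.mkdir(LOCK_DIR)          # 1. try to create the lock directory
            break
        except OSError:
            time.sleep(1)               # 2. taken; wait a second and retry
    try:
        with open(datafile, "a") as f:  # 3. write your data to the real file
            f.write(line + "\n")
    finally:
        os.rmdir(LOCK_DIR)              # 4. delete the lock directory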
Why not just build a simple server which sits between the file and the other computers?
Then if you ever wanted to change the data format, you would only have to modify the server, and not all of the clients.
In my opinion building a server would be much easier than trying to use a Network file system.
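As a toy sketch of that idea (port and file name made up): only the server process ever touches the file, so the clients need no locking at all.

# Single-writer append server: clients send lines over TCP, and only this
# process writes results.txt, so no file locking is needed anywhere.
import socketserver

class AppendHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:                    # one result per line
            with open("results.txt", "ab") as f:
                f.write(line)

if __name__ == "__main__":
    # socketserver.TCPServer handles one connection at a time by default,
    # which is exactly what we want here.
    socketserver.TCPServer(("", 5000), AppendHandler).serve_forever()

The clients then just open a socket and write their few bytes per minute.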
Lock File with a twist
Like other answers have mentioned, the easiest method is to create a lock file in the same directory as the datafile.
Since you want to be able to access the same file from multiple PCs, the best solution I can think of is to include in the lock file the identifier of the machine currently writing to the data file.
So the sequence for writing to the data file would be:
Check if there is a lock file present
If there is a lock file, see if I'm the one owning it by checking that its content has my identifier.
If that's the case, just write to the data file then delete the lock file.
If that's not the case, just wait a second or a small random length of time and try the whole cycle again.
If there is no lock file, create one with my identifier and try the whole cycle again to avoid race condition (re-check that the lock file is really mine).
Along with the identifier, I would record a timestamp in the lock file and check whether it's older than a given timeout value.
If the timestamp is too old, then assume that the lock file is stale and just delete it, as it would mean that one of the PCs writing to the data file has crashed or lost its connection.
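Put together, the sequence plus the stale-lock timeout might look roughly like this (identifiers, file names and the timeout value are only examples):

# Lock file carrying the writer's identifier and a timestamp, so stale
# locks from crashed writers can be detected and removed.
import os, socket, time

LOCK = "results.txt.lock"
ME = "%s:%d" % (socket.gethostname(), os.getpid())
STALE_AFTER = 60                              # seconds, illustrative timeout

def write_result(data, datafile="results.txt"):
    while True:
        try:
            with open(LOCK) as f:
                owner, stamp = f.read().split("|")
        except FileNotFoundError:
            # No lock file: claim it, then loop so we re-check it is really ours.
            with open(LOCK, "w") as f:
                f.write("%s|%f" % (ME, time.time()))
            continue
        if owner == ME:
            with open(datafile, "a") as f:    # it's ours: write, then release
                f.write(data)
            os.remove(LOCK)
            return
        if time.time() - float(stamp) > STALE_AFTER:
            os.remove(LOCK)                   # assume the owner crashed; retry
            continue
        time.sleep(1)                         # someone else owns it; wait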
Another solution
If you are in control of the format of the data file, another solution could be to reserve a structure at the beginning of the file to record whether it is locked or not.
If you just reserve a byte for this purpose, you could assume, for instance, that 00 would mean the data file isn't locked, and that other values would represent the identifier of the machine currently writing to it.
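A sketch of that in-file lock byte (the machine ID is made up, and note that the read-check-write below is not atomic by itself, so the NFS caveats in the next section still apply):

# First byte of the data file: 0 = unlocked, anything else = ID of the
# machine currently writing.
MY_ID = 7                                   # illustrative machine identifier (1-255)

def try_lock(path):
    with open(path, "r+b") as f:
        if f.read(1) != b"\x00":
            return False                    # someone else is writing
        f.seek(0)
        f.write(bytes([MY_ID]))             # mark the file as ours
        return True

def unlock(path):
    with open(path, "r+b") as f:
        f.write(b"\x00")                    # back to "unlocked"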
Issues with NFS
OK, I'm adding a few things because Jiri Klouda correctly pointed out that NFS uses client-side caching that will result in the actual lock file being in an undetermined state.
A few ways to solve this issue:
Mount the NFS directory with the noac or sync options. This is easy, but it doesn't completely guarantee data consistency between client and server, so there may still be issues, although in your case it may be OK.
Open the lock file or data file using the O_DIRECT, O_SYNC or O_DSYNC flags. This is supposed to disable caching altogether.
This will lower performance but will ensure consistency.
You may be able to use flock() to lock the data file, but its implementation is spotty and you will need to check whether your particular OS actually uses the NFS locking service; otherwise it may do nothing at all (there is a rough sketch of this further down).
If the data file is locked, then another client opening it for writing will fail.
Oh yeah, and it doesn't seem to work on SMB shares, so it's probably best to just forget about it.
Don't use NFS and just use Samba instead: there is a good article on the subject and why NFS is probably not the best answer to your usage scenario.
You will also find in this article various methods for locking files.
Jiri's solution is also a good one.
Basically, if you want to keep things simple, don't use NFS for frequently-updated files that are shared amongst multiple machines.
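For completeness, the flock() route mentioned earlier looks roughly like this in Python (Unix only; whether the lock actually holds across NFS or SMB depends entirely on the points above):

# flock() on the data file; on NFS this only works if the client and server
# lock services cooperate, otherwise it may silently do nothing.
import fcntl

def append_with_flock(path, data):
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)       # blocks until we get the exclusive lock
        try:
            f.write(data)
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)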
Something different
Use a small database server to save your data into and bypass the NFS/SMB locking issues altogether, or keep your current multiple-data-file system and just write a small utility to concatenate the results.
It may still be the safest and simplest solution to your problem.
I don't know D, but I think using a mutex file to do the job might work. Here's some pseudo-code you might find useful:
do {
// Try to create a new file to use as mutex.
// If it already exists, the call fails and returns null.
mutex = create_file_for_writing('lock_file');
} while (mutex == null);
// Open your log file and write results
log_file = open_file_for_appending('the_log_file');
write(log_file, data);
close_file(log_file);
close_file(mutex);
// Free mutex and allow other processes to create the same file.
delete_file(mutex);
So, all processes will try to create the mutex file but only the one who wins will be able to continue. Once you write your output, close and delete the mutex so other processes can do the same.
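For what it's worth, here is roughly the same thing in Python, where opening the lock file with O_CREAT | O_EXCL is the atomic "winner takes it" step (how reliable O_EXCL is over NFS depends on the client, as other answers point out); the names are just examples:

# Python version of the mutex-file idea: creating the lock file fails if it
# already exists, so exactly one process wins each round.
import os, time

def append_result(data, log_path="the_log_file", lock_path="lock_file"):
    while True:
        try:
            fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            break                            # we created the file: mutex acquired
        except FileExistsError:
            time.sleep(0.1)                  # someone else holds it; try again
    try:
        with open(log_path, "a") as log:     # open the log for appending and write
            log.write(data)
    finally:
        os.close(fd)
        os.remove(lock_path)                 # free the mutex for the next process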
There are parameters that I would not want to be transferred from the production environment to the QA system - stuff like network paths and URLs. The problem is that in ABAP everything is in the database, and when the database is copied to the QA system you have to change those parameters manually, which is prone to errors.
Is there a way to store configuration information in a way that won't get transferred with the database?
Thanks.
In short: no - at least that would be very unusual in a SAP environment.
If your QA system is set up as a system copy of your production environment (which is the usual path), there are quite a few steps to take to make the system work correctly. This includes some configuration, which can be as simple as file paths such as you mention, but also the addresses and names of "partner systems". For example, one of my customers is a bank, so when copying his production system, he makes triply sure that no activity on the QA side accidentally trickles through to the production side. Some other changes are made as well, for example obscuring people's names and addresses so no mail gets sent accidentally, etc.
There are a few ways to make applying these changes as easy as possible (look for some SAP documentation or books on SAP transport and change management; I had one by Sue McFarland Metzger or so that was quite good). From what I've seen, there is usually a set of transports that changes the configuration, customizing etc. on the QA system to the appropriate values.
Hope that helps.
You cannot prevent the configuration stored in the database from being copied to the cloned instance. However, you can design the configuration storage in a way that will prevent the copied entries from being used. You should check with your basis administrators if they can guarantee that the cloned system will get a new system ID (SID). If this is the case, then you can simply use the SID as key field in your configuration table. After the system copy, the SID will be changed and the cloned system will no longer access the original entries.
Your question is not clear. Are you talking about standard or custom configuration?
Greetings. Assuming you are storing these paths in a Z table, some shops put the sy-sysid (system ID) in one of the columns. You maintain the entries for all systems in your development system and transport them to production. This becomes painful after a while, so I would only suggest it for information that does not change a lot (file paths might be a good fit).
T.