We are looking to change the address part of all of our users' directory entries. Has anyone accomplished anything similar? I'm looking to see if there are any tables that hold this information, along the lines of how you can retrieve and change normal user profiles. Even a way to read through the directory entries in a CL program and then run RNMDIRE on each user would work.
TIA
You can read the directory entries from the table QAOKL02A:
select * from qusrsys.qaokl02a;
Do not update the table directly; use system commands to make the changes, as you suggested in your post.
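For example, once you have the user IDs and addresses from that table, the change itself is one system command per entry. A rough sketch (JOHN, OLDADR, and NEWADR are placeholders; verify the RNMDIRE parameter names with the command prompter on your release):

RNMDIRE USRID(JOHN OLDADR) NEWUSRID(JOHN NEWADR)

A CL program can loop over the query results and issue that command for each user.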
I came here today to see if someone could give me a suggestion to improve the way I update my database.
Here is the problem: I have one file to which I add new scripts every time I need to change something. For instance, let's say I need to add a new column to a table. I would add the following statement to my file called script1.sql:
alter table CLIENTS
add AGE integer
After doing that, I am going to send it to a client with an updated application, and ask him to run script1.sql on his database. That works just fine for me.
The problem shows up when this file starts to get bigger, and the client needs to receive the new updates.
The client would run the script1.sql file again, but now with more updates. He will get errors indicating that a column named AGE already exists in the database.
The biggest problem is when I change the version of my application. If I update my application from Application1 to Application2, I also change the script from script1.sql to script2.sql.
Now, my client will need to run both to get to the correct version without conflicts. He will also get lots of errors, since almost everything from script1.sql was already processed in his database.
What I want is to eliminate the chance of facing conflicts. This process has been working for me, but it always causes some sort of trouble. So if anyone has any idea how I could make it work better, please help me out.
SQL usually provides something called IF EXISTS (and also IF NOT EXISTS), so, for example, you can write a statement such as:
CREATE TABLE IF NOT EXISTS users ...
which will only create the users table if it doesn't already exist.
There is usually a variant of this that can be added to all your statements (including updates such as renaming columns etc).
If the table has already been created (or the column already renamed, etc.), it won't try to run that SQL command again, which means you can run the same file over and over, as many times as you like.
(Note: this is called idempotency)
You will need to google the details of how to use EXISTS for SQL Server.
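For example, SQL Server's ALTER TABLE ... ADD has no IF NOT EXISTS clause of its own, but you can get the same effect by checking the catalog first. A minimal sketch using the CLIENTS/AGE example from the question:

IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
               WHERE TABLE_NAME = 'CLIENTS' AND COLUMN_NAME = 'AGE')
BEGIN
    -- Only runs the first time; re-running the script skips this block.
    ALTER TABLE CLIENTS ADD AGE integer;
END

Guarding every statement this way makes the whole script idempotent, so the client can run the latest file as many times as needed.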
On Ubuntu 14.04 LTS, running this osqueryi command:
osquery> SELECT * FROM file LIMIT 10;
returns no rows. Other tables like users are populated.
Do I need to "activate" something to populate the file table? Is there another table, or something like the ls command?
There is no need to "activate" anything to populate the file table. Test with:
SELECT * FROM file WHERE path = '/etc/group';
The required WHERE clause is just a somewhat ugly way to send parameters to tables like file, device_file, device_partitions, etc., which are flagged at osquery.io/docs/tables with the "required in WHERE clause" icon on some column.
The developers intend to address the confusion with an error message, and perhaps better documentation; see the issue discussion for more details.
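As for something like ls, you can constrain the directory column instead and pick the columns you care about. A small sketch (filename, size, and type are columns of the file table, though the exact schema can vary by osquery version):

SELECT filename, size, type FROM file WHERE directory = '/etc';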
I'm trying to use Oracle external tables to load flat files into a database, but I'm having a bit of an issue with the LOCATION clause. The names of the files we receive are appended with several pieces of information, including the date, so I was hoping to use wildcards in the LOCATION clause, but it doesn't look like I'm able to.
I think I'm right in assuming I'm unable to use wildcards. Does anyone have a suggestion for how I can accomplish this without writing large amounts of code per external table?
Current thoughts:
The only way I can think of doing it at the moment is to have a shell watcher script and a parameter table. The user can specify the input directory, file mask, external table, etc. Then, when a file is found in the directory, the shell script generates a list of files matching the file mask. For each file found, it issues an ALTER TABLE command to change the location on the given external table to that file and launches the rest of the PL/SQL associated with that file. This can be repeated for each file matching the mask. I guess the benefit of this is that I could also add the date to the end of the log and bad files after each run.
I'll post the solution I went with in the end, which appears to be the only way.
I have a file watcher that looks for files in a given input directory with a certain file mask. The lookup table also includes the name of the external table. I then simply issue an ALTER TABLE on the external table with the list of new file names.
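The statement itself is short. A sketch with placeholder names (ext_orders is the external table; the file list is whatever the watcher matched for the current mask):

ALTER TABLE ext_orders LOCATION ('orders_20150101.dat', 'orders_20150102.dat');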
For me this wasn't much of an issue as I'm already using shell for most of the file watching and file manipulation. Hopefully this saves someone searching for ages for a solution.
I've got an Oracle database with several users (Other Users?), and I would like to import a schema which is in a .sql file.
My question is how to specify in my .sql file that the import is for a specific user.
Thank you in advance.
Examine your sql file. If the commands in there specify a schema name, then you'll need to modify it before you can import it into a different schema.
For example, does it have commands like this:
CREATE TABLE scott.mytable (...)
or like:
CREATE TABLE mytable (...)
If the schema name (e.g. "scott") has been hard-coded, then you'll need to edit your sql script to carefully remove it.
If not, then you just need to log in as the target username and run your sql script.
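For example, in SQL*Plus (a sketch; scott/tiger and script1.sql are placeholders):

CONNECT scott/tiger
@script1.sql

Alternatively, a suitably privileged account can run ALTER SESSION SET CURRENT_SCHEMA = scott; first, so that unqualified DDL resolves to that schema, though creating objects in another schema this way requires the corresponding ANY privileges (e.g. CREATE ANY TABLE).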
That depends on the content of your SQL file. You're not doing an import; you are running an SQL file, and that is a bit like "running a script": it can contain anything. So it's hard for us to tell you how to run a file whose content we can't see. There are many ways of defining the owner of an object; it can be done explicitly or implicitly. So that's the first thing to check: is a user (schema) specified in the script? If it is, where and how is it specified?
In the simplest case, people just write a script that connects and installs objects in the current schema, sometimes even without the connect. In that case, you can call the script as whatever user you want the objects to be created in.
At the other extreme, you can have a script where a given owner is specified at each object reference. In that case, you'll probably end up doing a global search and replace.
So, let us know how your script works, and we can go into more detail.
I've never touched Pervasive SQL before, and now I have a bunch of .ddf and .btr files. I read that all I had to do was create a new database in the control center and point to the folder that contains these files.
When I do this and look at the database, there is nothing in it. Since I am new to Pervasive, I'm more than likely doing something wrong.
EDIT: Added a screen shot after running command prompt
To create a database name in the PCC, you need to connect to the engine then right click the engine name and select New then Database. Once you do that, the following dialog should be displayed:
Enter the database name and path, the path being the directory where the DDFs are located. In most cases the default options are sufficient.
A longer process is documented at http://docs.pervasive.com/products/database/psqlv11/wwhelp/wwhimpl/js/html/wwhelp.htm#href=uguide/using.02.5.html.
If you pointed to a directory that had DDF files (FILE.DDF, FIELD.DDF, and INDEX.DDF) when you created the database name, you should see tables listed.
If you pointed to a directory that does not have DDF files, the database will still be created but will have no tables defined. You'll either need to get DDFs from the vendor, create the table entries using CREATE TABLE (with IN DICTIONARY clauses), or use DDF Builder to add table entries.
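If you do know the record layout of a .btr file, the CREATE TABLE form looks roughly like this (a sketch; the table name, file name, and columns are placeholders and must match the actual Btrieve record layout):

CREATE TABLE clients IN DICTIONARY USING 'clients.btr' (
    id INTEGER,
    name CHAR(30)
);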
Based on your screen shot, you only have 10 records in FILE.DDF. This is not enough; there are minimum system tables required (X$FILE, X$FIELD, X$INDEX, and a few others). It appears your DDFs are not a valid set. Contact the client/vendor that provided the DDFs and ask for a set that includes all of the table definitions.
Once you have tables listed in your Database Name, you can use ODBC to access the data.