How can we move data in the middle of a PS file to the left on the mainframe? Is there a shortcut command for this?
I have a data set with data at column 13 that has to be moved to column 11. Is there a shortcut key to move it?
I want to align the rest of the rows in the data set with the 1st row.
BROWSE OSMDEV.ITALY3.DATA
Command ===>
----+----1----+----2----+----3----+----4----+-
758 200510 4323T
758 2005 10 4323N
758 2005 10 51149
758 2005 10 51154
758 2005 10 6758E
758 2005 13 34437
758 2005 13 34441
758 2005 13 53445
Use the ISPF 'BNDS' line command and set the bounds (via the '<' and '>' characters) to column 11 and column 20.
(Bounds are used to constrain scrolling, the shift line commands (">", "<", ")", "("), the text line commands ("TS", "TF", "TE"), and the FIND, CHANGE, EXCLUDE and SORT commands.)
Now use the '(' (Shift left) command to shift the data 2 characters to the left.
(We will use the '((' form of the '(' command to indicate that we are applying it to a block of rows; the default shift value is 2, which is what you want, so we don't have to specify a value.)
The previously set bounds will ensure that only data in columns 11 through to 20 will be moved:
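A rough sketch of what the edit session might look like before pressing Enter (the sequence numbers are illustrative and the column positions approximate; '=BNDS>' is the bounds line with '<' under column 11 and '>' under column 20 of the data, and the '((' pair marks the first and last rows of the block to shift):

EDIT   OSMDEV.ITALY3.DATA
Command ===>
=BNDS>           <        >
((0006 758 2005 13 34437
000007 758 2005 13 34441
((0008 758 2005 13 53445

Pressing Enter then shifts only the bounded portion (columns 11 through 20) of those rows two characters to the left.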
Assuming that you want to remove the two spaces before 2005 and the two after 2005 and move the rest of the line four spaces to the left, you may follow these steps:
Label 1st line to change .A
Label last line to change .B
Issue the command CHANGE ALL '  ' '' 13 20 .A .B
(note that the above command has two spaces between the first pair of quotes and no spaces between the second pair).
We have a process that reads an XML file into our database and inserts any rows that aren't currently in another table to that table.
This process also has a trigger to write to an audit table and a nightly snapshot is also held in another table.
In the XML holding table a field looks like 1234567890123456 but it exists on our live table as 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6. Those spaces will not be removed by any combination of REPLACE functions. We have tried all CHAR values and it does not recognise the character. The audit table and nightly snapshot, however, contain the correct values.
Similarly, if we run a comparison between SELECT CASE WHEN '1234567890123456' = '1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 ' THEN 1 ELSE 0 END, this returns 1, so they match. However LEN('1234567890123456') is 16 and LEN('1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 ') is 32.
We have run some queries to loop through the characters in the field and output the ASCII and Unicode values for the characters. The digits return the correct ASCII/Unicode values, but this random whitespace character does not return a value.
An example of the incorrectly displayed one is 0x35000000320000003800000036000000380000003300000039000000370000003800000037000000330000003000000035000000340000003000000033000000 and a correct one is 0x3500320038003600380033003200300030003000360033003600380036003000. Both were added by the same means on the same day. One has the extra bytes, the other is fine.
How can we identify this character and get rid of it? Is there a reason this would have been inserted originally? How can we avoid this in future?
Data entry
It looks like some null (i.e. Char(0)) characters have got into the data.
If the data was supposed to be ASCII when it was entered but UTF-16 data got in, then it could be:
Entered character codes: 48 00
Sent to database: 48 00 00 00
To avoid that, remove disallowed characters as the first step in processing the input, say by using a regex to replace [\x00-\x1F] with an empty string.
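The cleanup really belongs in the importing application (a simple regex replace of [\x00-\x1F] with an empty string), but if it has to happen in the database instead, a T-SQL loop can strip the same range, since SQL Server has no native regex. A rough, untested sketch; @value stands in for the incoming field:

-- Strip control characters (codes 0-31) from an incoming value.
DECLARE @value NVARCHAR(4000) = N'1' + NCHAR(0) + N'2' + NCHAR(0) + N'3';  -- demo value
DECLARE @i INT = 0;
WHILE @i <= 31
BEGIN
    -- a binary collation stops the comparison from ignoring the control character
    SET @value = REPLACE(@value COLLATE Latin1_General_BIN, NCHAR(@i), N'');
    SET @i += 1;
END;
SELECT @value;  -- returns '123'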
Data repair
Search for entries which have a Char(0) in them to confirm that they can be found that way.
If so, replace the Char(0) with an empty string.
If that doesn't work, you could convert the data to the format '0x35000000320000003800000036000000380000003300000039000000370000003800000037000000330000003000000035000000340000003000000033000000', replace '000000' with '00', and then convert back.
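A minimal T-SQL sketch of the first two steps, assuming an NVARCHAR column and a table/column named dbo.XmlHolding / FieldValue (both names are hypothetical):

-- Step 1: confirm the embedded NCHAR(0) characters can be found this way
SELECT *
FROM   dbo.XmlHolding
WHERE  CHARINDEX(NCHAR(0), FieldValue) > 0;

-- Step 2: if they are found, strip them out
UPDATE dbo.XmlHolding
SET    FieldValue = REPLACE(FieldValue, NCHAR(0), N'')
WHERE  CHARINDEX(NCHAR(0), FieldValue) > 0;

-- If the column's collation ignores NCHAR(0) in comparisons, forcing a binary
-- collation may help, e.g.
-- WHERE CHARINDEX(NCHAR(0), FieldValue COLLATE Latin1_General_BIN) > 0;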
I'm trying to use gnuplot 4.6 patchlevel 6 to visualize some data from a file test.dat which looks like this:
#Pkg 1
type min max avg
small 1 10 5
medium 5 15 7
large 10 20 15
#Pkg 2
small 3 9 5
medium 5 13 6
large 11 17 13
(Note that the values are actually separated by tabs even though it shows as spaces here.)
My gnuplot commands are
reset
set datafile separator "\t"
plot 'test.dat' index 0 using 2:xticlabels(1) title col, '' using 3 title col, '' using 4 title col
This works fine as long as there is only a single data block in test.dat. When I add the second block spurious data points appear. Why is that and how can it be fixed?
YFTR: Using stat on the file yields only expected results. It reports two data blocks for the full file and correct values (for min, max and sum) when I specify one of the two using index
As mentioned in the comments on the question, one has to explicitly repeat the index 0 specification in every part of the plot command, as
plot 'test.dat' index 0 using 2, '' index 0 using 3, ...
otherwise '' refers to all blocks in the data file.
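Applied to the plot command from the question, the fix looks like this (untested sketch):

plot 'test.dat' index 0 using 2:xticlabels(1) title col, \
     '' index 0 using 3 title col, \
     '' index 0 using 4 title col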
After running a macro on my Excel file (.xlsx) I have output in which the first 3 columns of every row are empty.
When I then save this as Text (Tab delimited), I get the .txt output, but without those first 3 empty columns:
other empty cells are written out properly as tabs, but these first 3 columns are somehow dropped. In my case I need them.
Is there any way to avoid this situation? Adding them manually is not a solution, because I have huge amounts of data.
Thanks.
In the first row, enter a dummy special character such as "#" in each of the first 3 columns.
Example:
# # # 1 999 999 2 10 3
Just enter these # symbols in the first row, then save the workbook as a tab-delimited text file. I get the output below.
Output:
# # # 1 999 999 2 10 3
1 999 999 2 10 3
1 999 999 2 10 3
1 999 999 2 10 3
Hope this solves the problem in this case. If the empty rows or columns are not consistent, then the code on Alex's page can be used.
Put a formula that evaluates to empty (e.g. ="") in the last columns of the rows that are empty, and then export.
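If changing the data itself (dummy characters or ="" formulas) is not acceptable, another option is to have the macro write the tab-delimited file directly, so every cell, empty or not, gets its tab. A rough sketch in VBA; the output path and the use of the active sheet are assumptions:

Sub ExportTabDelimited()
    ' Write the sheet as a tab-delimited text file, always starting at
    ' column A so that leading empty columns are kept as empty fields.
    Dim ws As Worksheet, r As Long, c As Long, rec As String
    Dim lastRow As Long, lastCol As Long
    Set ws = ActiveSheet
    lastRow = ws.UsedRange.Row + ws.UsedRange.Rows.Count - 1
    lastCol = ws.UsedRange.Column + ws.UsedRange.Columns.Count - 1
    Open "C:\output.txt" For Output As #1        ' output path is an assumption
    For r = 1 To lastRow
        rec = ""
        For c = 1 To lastCol
            If c > 1 Then rec = rec & vbTab
            rec = rec & ws.Cells(r, c).Text
        Next c
        Print #1, rec
    Next r
    Close #1
End Sub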
OSX v10.6.8 and Gnuplot v4.4
I have a data file with 8 columns. I would like to take the first value from the 6th column and make it the title. Here's what I have so far:
#m1 m2 q taua taue K avgPeriodRatio time
#1 2 3 4 5 6 7 8
K = #read in data here
graph(n) = sprintf("K=%.2e",n)
set term aqua enhanced font "Times-Roman,18"
plot file using 1:3 title graph(K)
And here is what the first few rows of my data file looks like:
1.00e-07 1.00e-07 1.00e+00 1.00e+05 1.00e+04 1.00e+01 1.310 12070.00
1.11e-06 1.00e-07 9.02e-02 1.00e+05 1.00e+04 1.00e+01 1.310 12070.00
2.12e-06 1.00e-07 4.72e-02 1.00e+05 1.00e+04 1.00e+01 1.310 12070.00
3.13e-06 1.00e-07 3.20e-02 1.00e+05 1.00e+04 1.00e+01 1.310 12090.00
I don't know how to correctly read in the data or if this is even the right way to go about this.
EDIT #1
Ok, thanks to mgilson I now have
#m1 m2 q taua taue K avgPeriodRatio time
#1 2 3 4 5 6 7 8
set term aqua enhanced font "Times-Roman,18"
K = "`head -1 datafile | awk '{print $6}'`"
print K+0
graph(n) = sprintf("K=%.2e",n)
plot file using 1:3 title graph(K)
but I get the error: Non-numeric string found where a numeric expression was expected
EDIT #2
file = "testPlot.txt"
K = "`head -1 file | awk '{print $6}'`"
K=K+0 #Cast K to a floating point number #this is line 9
graph(n) = sprintf("K=%.2e",n)
plot file using 1:3 title graph(K)
This gives the error--> head: file: No such file or directory
"testPlot.gnu", line 9: Non-numeric string found where a numeric expression was expected
You have a few options...
FIRST OPTION:
use columnheader
plot file using 1:3 title columnheader(6)
I haven't tested it, but this may prevent the first row from actually being plotted.
SECOND OPTION:
use an external utility to get the title:
TITLE="`head -1 datafile | awk '{print $6}'`"
plot 'datafile' using 1:3 title TITLE
If the variable is numeric and you want to reformat it, in gnuplot you can cast strings to a numeric type (integer/float) by adding 0 to them, e.g.:
print "36.5"+0
Then you can format it with sprintf or gprintf as you're already doing.
It's weird that there is no float function. (int will work if you want to cast to an integer).
EDIT
The script below worked for me (when I pasted your example data into a file called "datafile"):
K = "`head -1 datafile | awk '{print $6}'`"
K=K+0 #Cast K to a floating point number
graph(n) = sprintf("K=%.2e",n)
plot "datafile" using 1:3 title graph(K)
EDIT 2 (addresses comments below)
To expand a variable in backticks, you'll need macros:
set macro
file="mydatafile.txt"
#THE ORDER OF QUOTES (' and ") IS CRUCIAL HERE.
cmd='"`head -1 ' . file . ' | awk ''{print $6}''`"'
# . is string concatenation. (this string has 3 pieces)
# to get a single quote inside a single quoted string
# you need to double. e.g. 'a''b' yields the string a'b
data=@cmd    # with "set macro", @cmd expands to the backtick expression above
To address your question 2, it is a good idea to familiarize yourself with shell utilities -- sed and awk can both do it. I'll show a combination of head/tail:
cmd='"`head -2 ' . file . ' | tail -1 | awk ''{print $6}''`"'
should work.
EDIT 3
I recently learned that in gnuplot, system is a function as well as a command. To do the above without all the backtick gymnastics,
data=system("head -1 " . file . " | awk '{print $6}'")
Wow, much better.
This is a very old question, but here's a nice way to get access to a single value anywhere in your data file and save it as a gnuplot-accessible variable:
set term unknown #This terminal will not attempt to plot anything
plot 'myfile.dat' index 0 every 1:1:0:0:0:0 u (var=$1):1
The index number allows you to address a particular dataset (datasets are separated by two blank lines), while every allows you to specify a particular line.
The colon-separated numbers after every should be of the form 1:1:<line_number>:<block_number>:<line_number>:<block_number>, where the line number is the line within the block (starting from 0) and the block number is the number of the block (blocks are separated by a single blank line, again starting from 0). The first and second numbers say plot every line and every data block, the third and fourth say start from line <line_number> and block <block_number>, and the fifth and sixth say where to stop. This allows you to select a single line anywhere in your data file.
The last part of the plot command assigns the value in a particular column (in this case, column 1) to your variable (var). There needs to be two values to a plot command, so I chose column 1 to plot against my variable assignment statement.
Here is a less 'awk'-ward solution which assigns the value from the first row and 6th column of the file 'Data.txt' to the variable x16.
set table
# Syntax: u 0:($0==RowIndex?(VariableName=$ColumnIndex):$ColumnIndex)
# RowIndex starts with 0, ColumnIndex starts with 1
# 'u' is an abbreviation for the 'using' modifier
plot 'Data.txt' u 0:($0==0?(x16=$6):$6)
unset table
A more general example for storing several values is given below:
# Load data from file to variable
# Gnuplot can only access the data via the "plot" command
set table
# Syntax: u 0:($0==RowIndex?(VariableName=$ColumnIndex):$ColumnIndex)
# RowIndex starts with 0, ColumnIndex starts with 1
# 'u' is an abbreviation for the 'using' modifier
# Example: Assign all values according to: xij = Data33[i,j]; i,j = 1,2,3
plot 'Data33.txt' u 0:($0==0?(x11=$1):$1),\
'' u 0:($0==0?(x12=$2):$2),\
'' u 0:($0==0?(x13=$3):$3),\
'' u 0:($0==1?(x21=$1):$1),\
'' u 0:($0==1?(x22=$2):$2),\
'' u 0:($0==1?(x23=$3):$3),\
'' u 0:($0==2?(x31=$1):$1),\
'' u 0:($0==2?(x32=$2):$2),\
'' u 0:($0==2?(x33=$3):$3)
unset table
print x11, x12, x13 # Data from first row
print x21, x22, x23 # Data from second row
print x31, x32, x33 # Data from third row
I have a CSV file which I want to import into SQL Server 2008 using BULK INSERT. The file has 80 columns, and some values contain commas; for example, the state column holds values like NY,NJ,AZ,TX,AR,VA,MA, and there are a few million rows.
So I enclosed the state column in double quotes using a custom format in Excel, so that the column would be treated as a single value and not split at the embedded commas. But the import is still not successful; it still splits at the commas. Can anyone suggest how to successfully import columns that contain commas using BULK INSERT?
I am using this code
bulk insert test from 'C:\test.csv'
with (
fieldterminator=',', rowterminator='\n'
)
go
I saw a similar question asked here previously, but I don't know Visual Basic well enough to apply that code. Is there any other option for modifying the file in Excel?
Is there any other option for modifying the file in Excel?
It turns out there is, at least in Windows.
Go to Start Menu > Control Panel > Regional and Language Options.
In the Regional Options tab, click the Customize Button.
In the List Separator field, replace the , with a |. Click OK.
Saving a file as a .CSV through Excel will now create a pipe-separated value file. Be sure to undo this change to the Regional Options setting, as Excel uses the list separator for other things like functions.
Then you can do as datagod suggests and bulk upload the file using | as the column delimiter.
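For example, after re-saving the file with | as the list separator, the command from the question only needs a different field terminator (same table and path assumed):

bulk insert test from 'C:\test.csv'
with (
fieldterminator = '|', rowterminator = '\n'
)
go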
You should create a format file: http://msdn.microsoft.com/en-us/library/ms191516.aspx
If your data contains commas, I would choose a different delimiter. You can specify "|" as the delimiter in the format file.
Example:
10.0
4
1 SQLCHAR 0 100 "|" 1 Col1 SQL_Latin1_General_CP1_CI_AS
2 SQLCHAR 0 100 "|" 2 Col2 SQL_Latin1_General_CP1_CI_AS
3 SQLCHAR 0 100 "|" 3 Col3 SQL_Latin1_General_CP1_CI_AS
4 SQLCHAR 0 7000 "\r\n" 4 Col11 SQL_Latin1_General_CP1_CI_AS
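Assuming the format file above is saved as C:\test.fmt (the path is an assumption), the load would then reference it instead of inline terminators:

bulk insert test from 'C:\test.csv'
with (formatfile = 'C:\test.fmt')
go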