GVim not recognizing commands in plugin - sql

How do I get gvim to recognize sqlcomplete.vim commands?
I'm unable to use the sqlcomplete.vim plugin. When running :version I get the following output:
and scrolling all the way to the bottom here is the rest of the output:
and the env variables:
:echo $VIM
c:\users\me\.babun\cygwin\etc\
:echo $HOME
H:\
Here is the output of :scriptnames:
When running a sqlcomplete.vim command such as :SQLSetType sqlanywhere, the output I get is:
How do I get gvim to recognize sqlcomplete.vim commands?
Another piece of helpful information is the output of :echo &rtp :
H:\vimfiles,H:\.vim\bundle\Vundle.vim,H:\.vim\bundle\dbext.vim,H:\.vim\bundle\SQLComplete.vim,C:\Users\me\.babun\cygwin\etc\vimfiles,C:\Users\me\.babun\cygwin\etc\,C:\Users\me\.babun\cygwin\etc\vimfiles/after,H:\vimfiles/after,H:\.vim/bundle/Vundle.vim,H:\.vim\bundle\Vundle.vim/after,H:\.vim\bundle\dbext.vim/after,H:\.vim\bundle\SQLComplete.vim/after

Some points you could check:
:scriptnames shows plugin\sqlcomplete.vim
But the link you provided points to .../vim/runtime/autoload/sqlcomplete.vim, there is no .../vim/runtime/plugin/sqlcomplete.vim, and the version at vim.org also doesn't contain a /plugin file:
install details
Copy sqlcomplete.vim to:
.vim/autoload/sqlcomplete.vim (Unix)
vimfiles\autoload\sqlcomplete.vim (Windows)
For documentation:
:h sql.txt
Maybe you have installed it incorrectly.
The file at your link has version 12 in its header, while the latest version is 15. Try updating to the latest version.
Note that this plugin does not define the SQLSetType command.
You could check that by simply searching the file on the link. And in its header:
" Vim OMNI completion script for SQL
" Language: SQL
" Maintainer: David Fishburn <dfishburn dot vim at gmail dot com>
" Version: 15.0
" Last Change: 2013 May 13
" Homepage: http://www.vim.org/scripts/script.php?script_id=1572
" Usage: For detailed help
" ":help sql.txt"
" or ":help ft-sql-omni"
" or read $VIMRUNTIME/doc/sql.txt
Following :help sql.txt:
2.1 SQLSetType *sqlsettype* *SQLSetType*
--------------
For the people that work with many different databases, it is nice to be
able to flip between the various vendors rules (indent, syntax) on a per
buffer basis, at any time. The ftplugin/sql.vim file defines this function: >
SQLSetType
And :scriptnames is not listing ftplugin/sql.vim.
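One way to double-check what is actually installed is a quick look from the babun/cygwin shell; a sketch could be (the paths just mirror the &rtp output above and are assumptions, so adjust the /cygdrive prefix to your mounts):
# which SQL-related files does the bundle actually ship?
ls /cygdrive/h/.vim/bundle/SQLComplete.vim/autoload/sqlcomplete.vim   # omni-completion script only, no :SQLSetType
ls /cygdrive/h/.vim/bundle/SQLComplete.vim/ftplugin/sql.vim           # the file that defines :SQLSetType
If the second file is missing, that matches :scriptnames not listing ftplugin/sql.vim.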

How do I get Source Extractor to Analyze an Image?

I'm relatively inexperienced in coding, so right now I'm just familiarizing myself with the basics of how to use SE, which I'll need to use in the near future.
At the moment I'm trying to get it to analyze a FITS file on my computer (which is a Mac). I'm sure this is something obvious, but I haven't been able to get it to do that. Following the instructions in Chapters 6 and 7 of Source Extractor for Dummies (linked below), I input the following:
sex MedSpiral_20deg_Serl2_.45_.fits.fits -c configuration_file.txt
And got the following error message:
WARNING: configuration_file.txt not found, using internal defaults
----- SExtractor 2.19.5 started on 2020-02-05 at 17:10:59 with 1 thread
Setting catalog parameters
ERROR: can't read default.param
I then tried entering parameters manually:
sex MedSpiral_20deg_Ser12_.45_.fits.fits -c configuration_file.txt -DETECT_TYPE CCD -MAG_ZEROPOINT 2.5 -PIXEL_SCALE 0 -SATUR_LEVEL 1 -SEEING_FWHM 1
And got the same error message. I tried referencing default.sex directly:
sex MedSpiral_20deg_Ser12_.45_.fits.fits -c default.sex
And got the same error message again, substituting "configuration_file.txt not found" with "default.sex not found" (I checked that default.sex was on my computer, it is). The same thing happened when I tried to use default.param.
Here's the link to SE for Dummies (Chapter 6 begins on page 19):
http://astroa.physics.metu.edu.tr/MANUALS/sextractor/Guide2source_extractor.pdf
If you run the command "sex MedSpiral_20deg_Ser12_.45_fits.fits -c default.sex" from inside the config folder (within the sextractor folder), it will work.
However, I wonder how I can run the sextractor command from any folder on my computer?
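A minimal sketch of that workaround (the sextractor install location below is just an assumption, replace it with wherever your config folder actually lives):
# run SExtractor from inside its config folder so default.sex and friends are found
cd ~/sextractor/config
sex /path/to/MedSpiral_20deg_Ser12_.45_fits.fits -c default.sex
As for running it from any folder: presumably copying default.sex, default.param and default.conv next to the image (or pointing the entries inside default.sex at their full paths) would work too, since those file names are resolved relative to the current directory.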

How to get information on latest successful pod deployment in OpenShift 3.6

I am currently working on making a CICD script to deploy a complex environment into another environment. We have multiple technologies involved, and I currently want to optimize this script because it takes too much time to fetch information on each environment.
In the OpenShift 3.6 section, I need to get the last successful deployment for each application in a specific project. I tried to find a quick way to do so, but so far I have only found this solution:
oc rollout history dc -n <Project_name>
This will give me the following output
deploymentconfigs "<Application_name>"
REVISION STATUS CAUSE
1 Complete config change
2 Complete config change
3 Failed manual change
4 Running config change
deploymentconfigs "<Application_name2>"
REVISION STATUS CAUSE
18 Complete config change
19 Complete config change
20 Complete manual change
21 Failed config change
....
I then take this output and parse each line to find the latest revision that has the status "Complete".
In the above example, I would get this list:
<Application_name> : 2
<Application_name2> : 20
Then for each application and each revision I do :
oc rollout history dc/<Application_name> -n <Project_name> --revision=<Latest_Revision>
In the above example, the Latest_Revision for Application_name is 2, which is the latest Complete revision (revision 3 failed and revision 4 is still running).
This gives me the output with the information I need, namely the version of the ear and the version of the configuration that were used to build the image for this successful deployment.
But since I have multiple applications, this process can take up to 2 minutes per environment.
Would anybody have a better way of fetching the information I require?
Unless I am mistaken, it looks like there is no one-liner that can get this information for the currently running and accessible applications.
Thanks
Assuming that the currently active deployment is the latest successful one, you may try the following:
oc get dc -a --no-headers | awk '{print "oc rollout history dc "$1" --revision="$2}' | . /dev/stdin
It gets the list of deployment configs, feeds it to awk to extract the name ($1) and revision ($2), builds the command to extract the details, and finally pipes it to standard input for execution. It may be frowned upon for not using xargs or the like, but I found it easier for debugging (just drop the last part and see the commands printed out).
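If you do prefer xargs, a hedged equivalent of the same pipeline (same assumption that the latest revision is the successful one) would be:
# one 'oc rollout history' call per deployment config: name plus its latest revision
oc get dc -a --no-headers | awk '{print $1, "--revision="$2}' | xargs -L1 oc rollout history dc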
UPDATE:
On second thoughts, you might actually like this one better:
oc get dc -a -o jsonpath='{range .items[*]}{.metadata.name}{"\n\t"}{.spec.template.spec.containers[0].env}{"\n\t"}{.spec.template.spec.containers[0].image}{"\n-------\n"}{end}'
The example output:
daily-checks
[map[name:SQL_QUERIES_DIR value:daily-checks/]]
docker-registry.default.svc:5000/ptrk-testing/daily-checks#sha256:b299434622b5f9e9958ae753b7211f1928318e57848e992bbf33a6e9ee0f6d94
-------
jboss-webserver31-tomcat
registry.access.redhat.com/jboss-webserver-3/webserver31-tomcat7-openshift#sha256:b5fac47d43939b82ce1e7ef864a7c2ee79db7920df5764b631f2783c4b73f044
-------
jtask
172.30.31.183:5000/ptrk-testing/app-txeq:build
-------
lifebicycle
docker-registry.default.svc:5000/ptrk-testing/lifebicycle#sha256:a93cfaf9efd9b806b0d4d3f0c087b369a9963ea05404c2c7445cc01f07344a35
You get the idea, with expressions like .spec.template.spec.containers[0].env you can reach for specific variables, labels, etc. Unfortunately the jsonpath output is not available with oc rollout history.
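As a small variation on the above, the DC status also records the latest revision number, so a sketch like this (field names taken from the DeploymentConfig schema; adjust to taste) pairs each name with its latest revision in one call:
oc get dc -a -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.latestVersion}{"\n"}{end}'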
UPDATE 2:
You could also use post-deployment hooks to collect the data, if you can set up a listener for the hooks. Hopefully the information you need is inherited by the PODs. More info here: https://docs.openshift.com/container-platform/3.10/dev_guide/deployments/deployment_strategies.html#lifecycle-hooks
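For illustration, attaching such a hook from the CLI could look roughly like this (a sketch assuming your oc client ships the 'oc set deployment-hook' subcommand; the container name and script are placeholders for your own reporter):
oc set deployment-hook dc/<Application_name> -n <Project_name> --post -c <container_name> -- /usr/bin/report-deployment.sh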

How to generate executable TPC-DS queries?

I have downloaded the DSGEN tool from the TPC-DS web site and already generated the tables and loaded the data into Oracle XE.
I am using the following command to generate the SQL statements :
dsqgen -input ..\query_templates\templates.lst -directory ..\query_templates -dialect oracle -scale 1
However, no matter how I adjust the command, I always get this error message:
ERROR: A query template list must be supplied using the INPUT option
Can anybody help?
Apparently you need to use / rather than - for the flags for the Windows executable:
dsqgen /input ..\query_templates\templates.lst /directory ..\query_templates /dialect oracle /scale 1
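For what it's worth, the same invocation with an explicit output directory would be something along these lines (the /output_dir value is an assumption; dsqgen typically writes the whole generated stream into a single query_0.sql file there):
dsqgen /input ..\query_templates\templates.lst /directory ..\query_templates /dialect oracle /scale 1 /output_dir ..\generated_queries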

opensplice dds Hello Word Example

I am posting here after asking the question on the OpenSplice DDS forum and not receiving any reply. I am trying to use OpenSplice DDS on an Ubuntu machine. I am not sure if it serves as proof of a proper installation, but I have pasted my release.com file below. Now, I was able to run the ping pong example just fine. But when I ran the executable sac_helloworld_pub (the HelloWorld example in the C programming language), I got the following error:
vishal#expmach:~/HDE/x86.linux2.6/examples/dcps/HelloWorld/c/standalone$ ./sac_helloworld_pub
Error in DDS_DomainParticipantFactory_create_participant: Creation failed: invalid handle
I did some searching, and it looks like I need to be running the ospl start command from the terminal. But when I do so, I get a 'No command ospl found' message. Below are the contents of the release.com file:
echo "<<< OpenSplice HDE Release V6.3.130716OSS For x86.linux2.6, Date 2013-07-30 >>>"
if [ "${SPLICE_ORB:=}" = "" ]
then
SPLICE_ORB=DDS_OpenFusion_1_6_1
export SPLICE_ORB
fi
if [ "${SPLICE_JDK:=}" = "" ]
then
SPLICE_JDK=jdk
export SPLICE_JDK
fi
OSPL_HOME="/home/vishal/HDE/x86.linux2.6"
OSPL_TARGET=x86.linux2.6
PATH=$OSPL_HOME/bin:$PATH
LD_LIBRARY_PATH=$OSPL_HOME/lib${LD_LIBRARY_PATH:+:}$LD_LIBRARY_PATH
CPATH=$OSPL_HOME/include:$OSPL_HOME/include/sys:${CPATH:=}
OSPL_URI=file://$OSPL_HOME/etc/config/ospl.xml
OSPL_TMPL_PATH=$OSPL_HOME/etc/idlpp
. $OSPL_HOME/etc/java/defs.$SPLICE_JDK
export OSPL_HOME OSPL_TARGET PATH LD_LIBRARY_PATH CPATH OSPL_TMPL_PATH OSPL_URI
$#
Sorry for the holidays-driven lack of 'reactivity' on the OpenSplice forum .. I've answered your question there though ..
Here's that same answer for completeness:
For the 6.3 community edition, the deployment model changed from shared-memory (v5.x) to the so-called single-process standalone deployment mode, where the middleware is simply linked (as libraries) with the application, so you don't need to start any daemons first (as was the case for the federated 'shared-memory' mode that was the default in V5).
So it's OK that you get the error when trying to call 'ospl', as that's not used anymore and so isn't in the distribution.
Now to your issue: your release.com looks OK to me, but perhaps you didn't actually 'source' it in your environment, i.e. call it with a '.' in front of it:
prompt> . release.com
You can verify that by doing an 'echo $OSPL_HOME' in your shell and seeing if it actually shows the value of the env variable as set by release.com.
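Put together, a minimal sketch of that check and of re-running the example (paths taken from the release.com above):
cd ~/HDE/x86.linux2.6
. ./release.com                            # source it so the variables land in the current shell
echo $OSPL_HOME                            # should print /home/vishal/HDE/x86.linux2.6
cd examples/dcps/HelloWorld/c/standalone
./sac_helloworld_pub                       # with OSPL_URI set, the participant should now be created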
Hope that helps,
-Hans

Generating TPC-DS database for sql server

How do I populate the Transaction Processing Performance Council's TPC-DS database for SQL Server? I have downloaded the TPC-DS tool but there are few tutorials about how to use it.
In case you are using Windows, you need Visual Studio 2005 or later. Unzip dsgen; in the tools folder there is a dsgen2.sln file. Open it using Visual Studio and build the project, and it will generate the tables for you. I've tried that and loaded the tables manually into SQL Server.
I've just succeeded in generating these queries.
There are some tips that may not be the best, but they are useful.
cp ${...}/query_templates/* ${...}/tools/
add define _END = ""; to each query.tpl (a loop for this is sketched after this list)
${...}/tools/dsqgen -INPUT templates.lst -OUTPUT_DIR /home/query99/
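A quick way to apply the second tip to every template at once, assuming you run it inside the tools folder after the cp above (it just appends the define to each query*.tpl):
for tpl in query*.tpl; do
    echo 'define _END = "";' >> "$tpl"
done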
Let's describe the base steps:
Before going to the next steps, double-check that the required TPC-DS Kit has not already been prepared for your DB
Download TPC-DS Tools
Build Tools as described in 'v2.11.0rc2\tools\How_To_Guide-DS-V2.0.0.docx' (I used VS2015)
Create DB
Take the DB schema described in tpcds.sql and tpcds_ri.sql (they are located in the 'v2.11.0rc2\tools\' folder) and adapt it to your DB if required.
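For example, against SQL Server the schema files could be applied with sqlcmd roughly like this (server, database and login are placeholders):
sqlcmd -S localhost -d tpcds -U sa -P <password> -i tpcds.sql      # tables
sqlcmd -S localhost -d tpcds -U sa -P <password> -i tpcds_ri.sql   # referential-integrity constraints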
Generate the data to be stored in the database
# Windows
dsdgen.exe /scale 1 /dir .\tmp /suffix _001.dat
# Linux
dsdgen -scale 1 -dir /tmp -suffix _001.dat
Upload data to DB
# example for ClickHouse
database_name=tpcds
ch_password=12345
for file_fullpath in /tmp/tpc-ds/*.dat; do
    filename=$(echo ${file_fullpath##*/})
    tablename=$(echo ${filename%_*})
    echo " - $(date +"%T"): start processing $file_fullpath (table: $tablename)"
    query="INSERT INTO $database_name.$tablename FORMAT CSV"
    cat $file_fullpath | clickhouse-client --format_csv_delimiter="|" --query="$query" --password $ch_password
done
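Since the question targets SQL Server, a hedged equivalent of the loop above using bcp (same placeholder credentials as in the sqlcmd sketch; dsdgen's '|'-delimited .dat files are loaded in character mode):
# example for SQL Server with bcp
database_name=tpcds
for file_fullpath in /tmp/tpc-ds/*.dat; do
    filename=$(echo ${file_fullpath##*/})
    tablename=$(echo ${filename%_*})
    echo " - $(date +"%T"): start processing $file_fullpath (table: $tablename)"
    # -c = character data, -t "|" = field terminator matching dsdgen's output
    bcp $database_name.dbo.$tablename in $file_fullpath -c -t "|" -S localhost -U sa -P <password>
done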
Generate queries
# Windows
set tmpl_lst_path="..\query_templates\templates.lst"
set tmpl_dir="..\query_templates"
set dialect_path="..\..\clickhouse-dialect"
set result_dir="..\queries"
set tmpl_name="query1.tpl"
dsqgen /input %tmpl_lst_path% /directory %tmpl_dir% /dialect %dialect_path% /output_dir %result_dir% /scale 1 /verbose y /template %tmpl_name%
# Linux
# see for example https://github.com/pingcap/tidb-bench/blob/master/tpcds/genquery.sh
To fix the error 'Substitution .. is used before being initialized' follow this fix.