I'm trying to use a message-driven bean in my webapp, but every time it throws this exception:
com.sun.messaging.jmq.jmsserver.util.BrokerException: [B4122]: Can not add message 1-127.0.1.1(b0:1a:c1:66:46:a9)-1-1336769823653 to destination PhysicalQueue [Queue]. The message size of 24968685 bytes is larger than the destination individual message byte limit (maxBytesPerMsg) of 10485760 bytes.
After some research, I found out that the default limit is -1, which should mean unlimited.
I've looked everywhere in Glassfish's admin console without finding a way to remove this limit.
Even the "new JMS resource" wizard doesn't ask anything about this parameter.
Is there any way to fix it?
Why is your message so large? You might want to reconsider how you're doing this.
You can update it via the imqcmd command. The value you want to change is maxBytesPerMsg.
From the Sun GlassFish Message Queue 4.4 Administration Guide (or the 4.2 guide):
Updating Physical Destination Properties
The subcommand imqcmd update dst changes the values of specified properties of a physical
destination:
imqcmd update dst -t destType -n destName
-o property1=value1 [ [-o property2=value2] ... ]
The properties to be updated can include any of those listed in Table 18–1 (with the exception of the isLocalOnly property, which cannot be changed once the destination has been created).
For example, the following command changes the maxBytesPerMsg property of the queue
destination curlyQueue to 1000 and the maxNumMsgs property to 2000:
imqcmd update dst -t q -n curlyQueue -u admin
-o maxBytesPerMsg=1000
-o maxNumMsgs=2000
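In your case the error message names the destination PhysicalQueue, so a minimal sketch of the fix (assuming it really is a queue and that you connect as the default admin user) would be to raise the limit, or set it back to -1 to make it unlimited:
imqcmd update dst -t q -n PhysicalQueue -u admin -o maxBytesPerMsg=-1
You can then confirm the new value with imqcmd query dst -t q -n PhysicalQueue.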
Are there any specific problems with running Microsoft's BCP utility (on CentOS 7, https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-migrate-bcp?view=sql-server-2017) in multiple threads? Googling did not turn up much, but I'm looking at a problem that seems to be related to just that.
I'm copying a set of large TSV files from HDFS to a remote MSSQL Server with some code of the form:
bcpexport() {
    filename=$1
    TO_SERVER_ODBCDSN=$2
    DB=$3
    TABLE=$4
    USER=$5
    PASSWORD=$6
    RECOMMEDED_IMPORT_MODE=$7
    DELIMITER=$8
    echo -e "\nRemoving header from TSV file $filename"
    echo -e "Current head:\n"
    echo $(head -n 1 $filename)
    echo "$(tail -n +2 $filename)" > $filename
    echo "First line of file is now..."
    echo $(head -n 1 $filename)
    # temp. workaround safeguard for NFS latency
    #sleep 5 #FIXME: appears to sometimes cause script to hang, workaround implemented below, throws error if timeout reached
    timeout 30 sleep 5
    echo -e "\nReplacing null literal values with empty chars"
    NULL_WITH_TAB="null\t" # WARN: assumes the first field is prime-key so never null
    TAB="\t"
    sed -i -e "s/$NULL_WITH_TAB/$TAB/g" $filename
    echo -e "Lines containing null (expect zero): $(grep -c "\tnull\t" $filename)"
    # temp. workaround safeguard for NFS latency
    #sleep 5 #FIXME: appears to sometimes cause script to hang, workaround implemented below
    timeout 30 sleep 5
    /opt/mssql-tools/bin/bcp "$TABLE" in "$filename" \
        $TO_SERVER_ODBCDSN \
        -U $USER -P $PASSWORD \
        -d $DB \
        $RECOMMEDED_IMPORT_MODE \
        -t "\t" \
        -e ${filename}.bcperror.log
}
export -f bcpexport
parallel -q -j 7 bcpexport {} "$TO_SERVER_ODBCDSN" $DB $TABLE $USER $PASSWORD $RECOMMEDED_IMPORT_MODE $DELIMITER \
::: $DATAFILES/$TARGET_GLOB
where $DATAFILES/$TARGET_GLOB constructs a glob that lists a set of files in a directory.
When running this code for a set of TSV files, I find that sometimes some (but not all) of the parallel BCP threads fail, i.e. some files successfully copy to MSSQL Server:
Starting copy...
5397376 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total : 154902 Average : (34843.8 rows per sec.)
while others output an error message:
Starting copy...
BCP copy in failed
Usually I see this pattern: a few successful BCP copy-in operations from the first few threads returned, then a bunch of failing threads return their output until we run out of files (GNU Parallel returns a job's output only when the whole job is done, so the output appears the same as if it ran sequentially).
Notice in the code there is the -e option to produce an error file for each BCP copy-in operation (see https://learn.microsoft.com/en-us/sql/tools/bcp-utility?view=sql-server-2017#e). When examining these files after observing the failing behavior, all of them are blank, with no error messages.
I have only seen this with the number of threads >= 10 (and only for certain sets of data, so I assume it has something to do with the total number of files and the file sizes, and yet...); no errors seen so far when using ~7 threads, which further makes me suspect this has something to do with multi-threading.
Monitoring system resources (via free -mh) shows that generally ~13GB of RAM is always available.
It may be helpful to note that the data bcp is trying to copy in may be ~500000-1000000 records long, with an upper limit of ~100 columns per record.
Does anyone have any idea what could be going on here? Note, I'm pretty new to using BCP as well as GNU Parallel and multi-threading.
No, there are no issues specific to the BCP program being run in multiple threads. You seem to be on the track of what I would say your issue is: system resources. Have you monitored system resources while increasing the number of threads? If anything, there is likely an issue with BCP executing properly when memory/CPU/network resources are low. Regarding the "-e" option, it is meant to output data errors. Login errors, bad table names and many other errors are not reported in the file created with the -e option. When you do get output using the "-e" option, you'll see info like "value truncated" and such; it will give you line numbers and sample data that was at issue.
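If you want a record of resource usage over the course of a run rather than spot checks, a minimal sketch (plain shell, nothing BCP-specific; the log file name is arbitrary) is to sample memory and load in the background while the parallel jobs run:
# sample memory and load every 30 seconds into sysload.log (hypothetical name)
( while :; do date; free -mh; uptime; echo; sleep 30; done ) > sysload.log 2>&1 &
MONITOR_PID=$!
# ... run the parallel / bcpexport command here ...
kill $MONITOR_PID
Comparing the timestamps in that log with the times at which jobs started failing can show whether the failures line up with a resource dip.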
TLDR: Adding more threads to run concurrently to have bcp copy in files of data seems to have the effect of overwhelming the endpoint MSSQL Server with write instructions, causing the bcp threads to fail (maybe timing out?). The point at which the number of threads becomes too many seems to depend on the size of the files getting copied in by bcp (i.e. both the number of records in the file and the width of each record (i.e. number of columns)).
Long version (more reasons for my theory):
1. When running a larger number of bcp threads and looking at the processes started on the machine (https://clustershell.readthedocs.io/en/latest/tools/clush.html) with
ps -aux | grep bcp
I see a bunch of sleeping processes (notice the S; see https://askubuntu.com/a/360253/760862), as shown below (newlines added for readability):
me 135296 14.5 0.0 77596 6940 ? S 00:32 0:01
/opt/mssql-tools/bin/bcp TABLENAME in /path/to/tsv/1_16_0.tsv -D -S MyMSSQLServer -U myusername -P -d myDB -c -t \t -e /path/to/logfile
These threads appear to sleep for a very long time. Further debugging into why they are sleeping suggests that they may in fact be doing their intended job (which would further imply that the problem may be coming from BCP itself; see https://stackoverflow.com/a/52748660/8236733). From https://unix.stackexchange.com/a/47259/260742 and https://unix.stackexchange.com/a/36200/260742:
A process in S state is usually in a blocking system call, such as reading or writing to a file or the network, or waiting for another called program to finish.
(e.g. writing to the MSSQL Server endpoint destination given to bcp in the ODBCDSN)
Your process will be in S state when it is doing reads and possibly writes that are blocking. Can also happen while waiting on semaphores or other synchronization primitives... This is all normal and expected, and not usually a problem... you don't want it to waste CPU while it's waiting for user input.
2. When running different sets of files of varying record count per file (e.g. ranges of 500000 - 1000000 rows/file) and record width per file (~10 - 100 columns/row), I found that in cases with either a very large data width or amount, running a fixed set of bcp threads would fail.
E.g. for a set of ~33 TSVs with ~500000 rows each, each row being ~100 columns wide, a set of 30 threads would write the first few OK, but then all the rest would start returning failure messages. Incorporating a bit from #jamie's answer, the fact that the failure messages returned are "BCP copy in failed" errors does not necessarily mean it has to do with the content of the data in question. Regarding the absence of any actual content written into the -e errorlog files from my process, #jamie's post says this:
Regarding the "-e" option, it is meant to output data errors. Login errors, bad table names and many other errors are not reported in the file created with the -e option. When you do get output using the "-e" option, you'll see info like "value truncated" and such; it will give you line numbers and sample data that was at issue.
Meanwhile, a set of ~33 TSVs with ~500000 rows each, each row being ~100 wide, still using 30 bcp threads, would complete quickly and without error (and would also be faster when reducing the number of threads or the file set). The only difference here was the overall size of the data being bcp copy-in'ed to the MSSQL Server.
All this while
free -mh
still showed that the machine running the threads still had ~15GB of free RAM remaining in each case (which is again why I suspect that the problem has to do with the remote MSSQL Server endpoint rather than with the code or local machine itself).
3. When running some of the tests from (2), I found that manually killing the parallel process (via Ctrl+C) and then trying to remotely truncate the testing table being written to, with /opt/mssql-tools/bin/sqlcmd -Q "truncate table mytable" from the local machine, would take a very long time (as opposed to manually logging into the MSSQL Server and executing a truncate mytable in the DB). Again this makes me think that this has something to do with the MSSQL Server having too many connections and just being overwhelmed.
Anyone with MSSQL Management Studio experience reading this (I have basically none): if you see anything here that makes you think my theory is incorrect, please let me know your thoughts.
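If the theory is right and the failures are transient overload on the server side, one thing worth experimenting with (a sketch only; the joblog name, retry count, delay and job limit here are arbitrary choices, not tested values) is letting GNU Parallel record exit codes and retry failed jobs at a lower concurrency:
# log each job's exit status, retry failures up to 3 times, stagger job starts, cap concurrency at 5
parallel --joblog bcp.joblog --retries 3 --delay 2 -q -j 5 \
    bcpexport {} "$TO_SERVER_ODBCDSN" $DB $TABLE $USER $PASSWORD $RECOMMEDED_IMPORT_MODE $DELIMITER \
    ::: $DATAFILES/$TARGET_GLOB
The joblog's Exitval column then shows which files failed, independently of bcp's (empty) -e error files.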
#!/bin/bash
#Oracle DB Info for NEXT
HOST="1.2.3.4"
PORT="5678"
SERVICE="MYDB"
DB_USER=$(whoami)
DB_PASS=$(base64 -d ~/.passwd)
DB_SCHEMA="my_db"
#Section for all of our functions.
function SQLConnection(){
sqlplus "$DB_USER"/"$DB_PASS"@"$HOST":"$PORT"/"$SERVICE"
}
function Connected(){
SQLConnection <<EOF
select sys_context('USERENV','SERVER_HOST') from dual;
EOF
}
function GetJMS(){
SQLConnection <<EOF
set echo on timing on lines 200 pages 100
select pd.destination from ${DB_SCHEMA}.pd_notification pd where pd.org_id = '$ORGID';
EOF
}
TODAY=$(date +"%A %B %d, %Y")
read -r -p $'\n\nWhat is the ORG ID? ' ORGID
read -r -p $'\n\nWhat is the REMOTE QUEUE MANAGER NAME? ' RQM
read -r -p $'\n\nWhat is the IP address of the REMOTE QUEUE MANAGER? ' CONN
read -r -p $'\n\nWhat is the PORT of the REMOTE QUEUE MANAGER? ' PORT
echo -en "* $(whoami)\n* $TODAY\n* MQ Setup $ORGID\n\nDEFINE +\n\tCHANNEL('$RQM.LQML') +\n\tCHLTYPE(SDR) +\n\tCONNAME('$CONN($PORT)') +\n\tXMITQ('BUF.2.$ORGID.XMQ')\n\tCHAUTH(TLS_RSA_WITH_AES_256_CBC_SHA256)\n\nDEFINE +\n\tCHANNEL('LQML.$RQM') +\n\tCHLTYPE(RCVR) +\n\tTRPTYPE(TCP)\n\nDEFINE +\n\tQLOCAL('$RQM') +\n\tTRIGDATA('LQML.$RQM') +\n\tINITQ('SYSTEM.CHANNEL.INITQ') +\n\tTRIGGER USAGE(XMITQ)\n\n" > ~/mqsetup.mqsc
CONNECTED=$(Connected | awk 'NR==16')
echo -en "\n\nHello From: $CONNECTED\n\n"
for JMSDESTINATION in $(GetJMS | awk 'NR>=16&&NR<=24{print $1}')
do
read -r -p $'\n\nWhich REMOTE QUEUE NAME matches with this ${JMSDESTINATION}?' RNAME
QDESC=$(echo "$JMSDESTINATION" | tr '.' ' ' | tr '[[:upper:]]' '[[:lower:]]')
echo -en "\n\nDEFINE +\n\tQR($JMSDESTINATION) +\n\t\tREPLACE DESCR('$ORGID $QDESC Queue') +\n\t\tREPLACE MAXDEPTH(5000) +\n\t\tXMITQ('BUF.2.$ORGID.XMQ') +\n\t\tRNAME('$RNAME') +\n\t\tRQMNAME('$RQM')" >> ~/mqsetup.mqsc
done
Here is the script I've built, hoping to automate the setup of IBM MQ Queues and Channels. My problem is that outside this script, I can establish an SQL Session without an issue, directly from the shell, provided I input the variables seen in the script. I can call the functions and everything returns just as I'd hope it would. When I run the exact same things from within the script, I get timeout errors ... the "Hello From" is blank, which tells me there is no DB connection.
I'm totally stumped as to why it all works great from outside the script, but inside it times out.
I appreciate the eyes and the help!
You're overwriting a variable value. You have this at the top of the script:
PORT="5678"
but then later on you do:
read -r -p $'\n\nWhat is the PORT of the REMOTE QUEUE MANAGER? ' PORT
which overwrites your 5678 value with whatever is entered there. That port may not be listening on the DB server at all, or may be doing something else, or if you don't enter a value it'll default to port 1521 when you connect. But either way the connection is going to fail, either quickly or slowly depending on the port state (e.g. slower maybe if a firewall blocks it).
If you test the connection by adding a Connected call before the read calls (as I initially did) then it seems to be working fine; but the connections after the reads don't work because the port value it tries to connect to is now wrong.
Use a different name for the two variables, e.g. RQ_PORT for the second one - both in its read command and the subsequent creation of the ~/mqsetup.mqsc file.
You may also find it useful to add the -l flag to your SQL*Plus call so that if the connection fails for some reason it won't re-prompt for credentials, which in some circumstances can make the script appear to hang until you hit enter a few times.
Not directly relevant to the problem, but when automating anything like this I usually also use the -s flag to suppress the banners (which can vary between environments); and if you're only interested in capturing query output I'd usually set headings and/or pagination off, and feedback off, and generally set SQL*Plus up to generate as little noise as possible - it makes parsing out the interesting bits easier.
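Putting those suggestions together, a minimal sketch of the changes (the name RQ_PORT and the exact flag placement are just illustrative):
# top of script: the DB port keeps its own variable
PORT="5678"

function SQLConnection(){
    # -l: fail instead of re-prompting on a bad login; -s: suppress banners
    sqlplus -l -s "$DB_USER"/"$DB_PASS"@"$HOST":"$PORT"/"$SERVICE"
}

# later: read the MQ listener port into a differently named variable
read -r -p $'\n\nWhat is the PORT of the REMOTE QUEUE MANAGER? ' RQ_PORT
# ...and use $CONN($RQ_PORT) in the CONNAME of the mqsetup.mqsc output
Note that with -s the banner lines disappear, so the awk 'NR==16' style parsing in Connected and GetJMS would need adjusting.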
I want to generate a trace file from disk IO, but the problem is that I need the actual input data along with the timestamp, logical address, access block size, etc.
I've been trying to solve the problem by using blktrace | blkparse with iozone on an Ubuntu VirtualBox environment, but it doesn't seem to work.
There is an option in blkparse for setting the output format to show the packet data, -f "%P", but it does not print anything.
Below are the commands that I use:
$> sudo blktrace -a issue -d /dev/sda -o - | blkparse -i - -o ./temp/blktrace.sda.iozone -f "%-12C\t\t%p\t%d\t%S:%n:%N\t\t%P\n"
$> iozone -w -e -s 16M -f ./mnt/iozone.dummy -i 0
In the printing format "%-12C\t\t%p\t%d\t%S:%n:%N\t\t%P\n", all other things are printed well, but the "%P" is not printed at all.
Does anyone know why the packet data is not displayed?
Or does anyone know another way to get the disk IO packet data with the actual input values?
As far as I know, blktrace does not capture the actual data; it just captures the metadata. One way to capture real data is to write your own kernel module. Some students at FIU did that in this paper:
"I/O deduplication: Utilizing content similarity to ..."
I would ask this question on the linux-btrace mailing list as well:
http://vger.kernel.org/majordomo-info.html
On the repository home page, I can see comments posted in recent activity at the bottom, but it only shows 10 comments.
I want to see all the comments posted since the beginning.
Is there any way to do that?
Comments of pull requests, issues and commits can be retrieved using bitbucket’s REST API.
However it seems that there is no way to list all of them at one place, so the only way to get them would be to query the API for each PR, issue or commit of the repository.
Note that this takes a long time, since Bitbucket has seemingly set a limit on the number of API accesses to repository data: I got Rate limit for this resource has been exceeded errors after retrieving around a thousand results, and then I could retrieve only about one entry per second elapsed from the time of the last rate-limit error.
Finding the API URL to the repository
The first step is to find the URL to the repo. For private repositories, it is necessary to get authenticated by providing username and password (using curl’s -u switch). The URL is of the form:
https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}
Running git remote -v from the local git repository should provide the missing values. Check the forged URL (below referred to as $url) by verifying that repository information is correctly retrieved as JSON data from it: curl -u username $url.
Fetching comments of commits
Comments of a commit can be accessed at $url/commit/{commitHash}/comments.
The resulting JSON data can be processed by a script. Beware that the results are paginated (a sketch for walking the pages is given at the end of this section).
Below I simply extract the number of comments per commit. It is indicated by the value of the member size of the retrieved JSON object; I also request a partial response by adding the GET parameter fields=size.
My script getNComments.sh:
#!/bin/sh
pw=$1
id=$2
json=$(curl -s -u username:"$pw" \
https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}/commit/$id/comments'?fields=size')
printf '%s' "$json" | grep -q '"type": "error"' \
&& printf "ERROR $id\n" && exit 0
nComments=$(printf '%s' "$json" | grep -o '"size": [0-9]*' | cut -d' ' -f2)
: ${nComments:=EMPTY}
checkNumeric=$(printf '%s' "$nComments" | tr -dc 0-9)
[ "$nComments" != "$checkNumeric" ] \
&& printf >&2 "!ERROR! $id:\n%s\n" "$json" && exit 1
printf "$nComments $id\n"
To use it, taking into account the possibility of the rate-limit error mentioned above:
A) Prepare input data. From the local repository, generate the list of commits as wanted (run git fetch -a beforehand to update the local git repo if needed); check out git help rev-list for how it can be customised.
git rev-list --all | sort > sorted-all.id
cp sorted-all.id remaining.id
B) Run the script. Note that the password is passed here as a parameter – so first assign it to a variable safely using stty -echo; IFS= read -r passwd; stty echo, in one line; also see security considerations below. The processing is parallelised onto 15 processes here, using the option -P.
< remaining.id xargs -P 15 -L 1 ./getNComments.sh "$passwd" > commits.temp
C) When the rate limit is reached, that is when getNComments.sh prints !ERROR!, kill the above command (Ctrl-C) and execute the commands below to update the input and output files. Wait a while for the request limit to increase, then re-execute the above command, and repeat until all the data is processed (that is, when wc -l remaining.id returns 0).
cat commits.temp >> commits.result
cut -d' ' -f2 commits.result | sort | comm -13 - sorted-all.id > remaining.id
D) Finally, you can get the commits which received comments with:
grep '^[1-9]' commits.result
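The counts above only need the size field, but if you want the comment bodies themselves you have to walk the paginated results. A minimal sketch (assuming the standard 2.0 pagination fields values and next, and reusing the rough grep-based parsing and the $pw / $url variables from above):
# follow the "next" links of a paginated endpoint and print every page of JSON
fetchAllPages() {
    next="$1"
    while [ -n "$next" ]; do
        page=$(curl -s -u username:"$pw" "$next")
        printf '%s\n' "$page"
        # crude extraction of the "next" link; empty once the last page is reached
        next=$(printf '%s' "$page" | grep -o '"next": *"[^"]*"' | cut -d'"' -f4)
    done
}
fetchAllPages "$url/commit/$id/comments"
A proper JSON parser (e.g. jq) would be more robust, but this stays within the plain shell tools used in the rest of the answer.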
Fetching comments of pull requests and issues
The procedure is the same as for fetching commits’ comments, but for the following two adjustments:
Edit the script to replace in the URL commit by pullrequests or by issues, as appropriate;
Let $n be the number of issues/PRs to search. The git rev-list command above becomes: seq 1 $n > sorted-all.id
The total number of PRs in the repository can be obtained with:
curl -su username $url/pullrequests'?state=&fields=size'
and, if the issue tracker is set up, the number of issues with:
curl -su username $url/issues'?fields=size'
Hopefully, the repository has few enough PRs and issues so that all data can be fetched in one go.
Viewing comments
They can be viewed normally via the web interface on their commit/PR/issue page at:
https://bitbucket.org/{repoOwnerName}/{repoName}/commits/{commitHash}
https://bitbucket.org/{repoOwnerName}/{repoName}/pull-requests/{prId}
https://bitbucket.org/{repoOwnerName}/{repoName}/issues/{issueId}
For example, to open all PRs with comments in firefox:
awk '/^[1-9]/{print "https://bitbucket.org/{repoOwnerName}/{repoName}/pull-requests/"$2}' PRs.result | xargs firefox
Security considerations
Arguments passed on the command line are visible to all users of the system, via ps ax (or /proc/$PID/cmdline). Therefore the bitbucket password will be exposed, which could be a concern if the system is shared by multiple users.
There are three commands getting the password from the command line: xargs, the script, and curl.
It appears that curl tries to hide the password by overwriting its memory, but it is not guaranteed to work, and even if it does, it leaves it visible for a (very short) time after the process starts. On my system, the parameters to curl are not hidden.
A better option could be to pass the sensitive information through environment variables. They should be visible only to the current user and root via ps axe (or /proc/$PID/environ); although it seems that there are systems that let all users access this information (do a ls -l /proc/*/environ to check the environment files’ permissions).
In the script simply replace the lines pw=$1 id=$2 with id=$1, then pass pw="$passwd" before xargs in the command line invocation. It will make the environment variable pw visible to xargs and all of its descendent processes, that is the script and its children (curl, grep, cut, etc), which may or may not read the variable. curl does not read the password from the environment, but if its password hiding trick mentioned above works then it might be good enough.
There are ways to avoid passing the password to curl via the command line, notably via standard input using the option -K -. In the script, replace curl -s -u username:"$pw" with printf -- '-s\n-u "%s"\n' "$authinfo" | curl -K - and define the variable authinfo to contain the data in the format username:password. Note that this method needs printf to be a shell built-in to be safe (check with type printf), otherwise the password will show up in its process arguments. If it is not a built-in, try with print or echo instead.
A simple alternative to an environment variable that will not appear in ps output in any case is via a file. Create a file with read/write permissions restricted to the current user (chmod 600), and edit it so that it contains username:password as its first line. In the script, replace pw=$1 with IFS= read -r authinfo < "$1", and edit it to use curl’s -K option as in the paragraph above. In the command line invocation replace $passwd with the filename.
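Putting the file variant together, a sketch of the relevant changes to getNComments.sh (variable names as used above; the rest of the script stays the same):
# $1 is now the path to a file (or pipe) whose first line is username:password
IFS= read -r authinfo < "$1"
id=$2
# feed -s and the credentials to curl via a config read on stdin (-K -), keeping them off the command line
json=$(printf -- '-s\n-u "%s"\n' "$authinfo" \
    | curl -K - https://api.bitbucket.org/2.0/repositories/{repoOwnerName}/{repoName}/commit/$id/comments'?fields=size')
Invoke it as before, passing the credentials file (or the named pipe described next) instead of the password.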
The file approach has the drawback that the password will be written to disk (note that files in /proc are not on the disk). If this too is undesirable, it is possible to pass a named pipe instead of a regular file:
mkfifo pipe
chmod 600 pipe
# make sure printf is a builtin, or use an equivalent instead
(while :; do printf -- '%s\n' "username:$passwd"; done) > pipe&
pid=$!
exec 3<pipe
Then invoke the script passing pipe instead of the file. Finally, to clean up do:
kill $pid
exec 3<&-
This will ensure the authentication info is passed directly from the shell to the script (through the kernel), is not written to disk and is not exposed to other users via ps.
You can go to Commits and see the top line for each commit; you will need to click on each one to see further information.
If I find a way to see all without drilling into each commit, I will update this answer.
I need to increase memory in WebLogic. I am new to this and I don't know how. I need to set -Xss=4096k. How can I do it?
Xss is the thread stack size; it is not the heap size.
You can change the memory size by changing the Xmx parameter.
The most important parameters are:
-Xms1536m -Xmx1536m -XX:MaxPermSize=512m
Xmx is the maximum size of the heap.
Xms is the initial size of the heap (give it the same value as Xmx).
XX:MaxPermSize is used to hold reflective data of the VM itself, such as class objects and method objects (it's independent of the heap size; give it 1/3 to 1/4 of the Xms size, depending on your classes' size).
Anyway, you can change Xss from config.xml, found at DOMAIN_NAME/config/config.xml.
Note that you have to shut down the admin server when you change something in config.xml.
Then edit the start arguments, or add them under <server> if they're not there:
<server-start>
<arguments>-Xms1536m -Xmx1536m -XX:MaxPermSize=512m -Xss4096k </arguments>
</server-start>
Or, you can change it from the admin console, which is easier:
access the admin console, then go to Environment >> Servers
choose the server you want to change
from Configuration >> Server Start
you will see a box called Arguments:
add -Xss4096k
Options for the JVM must be set on startup so you need to modify the startup script for WebLogic.
See here:
http://docs.oracle.com/cd/E13222_01/wls/docs100/server_start/overview.html#JavaOptions
Go to setDomainEnv.
Search for the comment below.
@REM IF USER_MEM_ARGS the environment variable is set, use it to override ALL MEM_ARGS values
Paste the line below (3072 for 3GB).
set USER_MEM_ARGS=-Xms128m -Xmx3072m %MEM_DEV_ARGS% %MEM_MAX_PERM_SIZE%
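That is the Windows setDomainEnv.cmd syntax; on Linux the equivalent edit in setDomainEnv.sh would be along these lines (a sketch; including -Xss4096k here is just to match what the question asks for):
# setDomainEnv.sh: when set, USER_MEM_ARGS overrides ALL MEM_ARGS values
USER_MEM_ARGS="-Xms128m -Xmx3072m -Xss4096k"
export USER_MEM_ARGS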
Another, simpler way is to use the setUserOverrides.sh script, or to create one if none exists. See the example below.
----setUserOverrides.sh script-----
#!/bin/bash
echo "Setting for UserOverrides.sh"
# global settings (for all managed servers)
export JAVA_OPTIONS="$JAVA_OPTIONS"
# customer settings for each Server
if [ "${SERVER_NAME}" = "AdminServer" ]
then
echo "Customizing ${SERVER_NAME}"
export JAVA_OPTIONS="$JAVA_OPTIONS -server -Xms2g -Xmx2g - Dweblogic.security.SSL.minimumProtocolVersion=TLSv1.1"
fi
if [ "${SERVER_NAME}" = "soa_server1" ]
then
echo "Customizing ${SERVER_NAME}"
export JAVA_OPTIONS="$JAVA_OPTIONS -client -Xms4g -Xmx4g - Dweblogic.security.SSL.minimumProtocolVersion=TLSv1.2"
fi
echo "End setting from UserOverrides.sh"