Can anyone advise me on how to track down a memory leak/issue in a background process on Heroku?
I have one worker dyno running a delayed_job queue, processing all sorts of different jobs. From time to time, I get a sudden jump in the memory consumed. Subsequent jobs then exceed the memory limit and fail, and all hell breaks loose.
The weird thing is that I can't tie the jump in memory to any particular job. Here's the sort of log I see:
Aug 15 07:13:25 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=load_avg_1m val=0.00
Aug 15 07:13:25 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=load_avg_5m val=0.01
Aug 15 07:13:25 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=load_avg_15m val=0.01
Aug 15 07:13:25 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_total val=133.12 units=MB
Aug 15 07:13:25 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_rss val=132.23 units=MB
Aug 15 07:13:25 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_cache val=0.88 units=MB
Aug 15 07:13:25 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_swap val=0.01 units=MB
Aug 15 07:13:25 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_pgpgin val=0 units=pages
Aug 15 07:13:25 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_pgpgout val=45325 units=pages
Aug 15 07:13:25 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=diskmbytes val=0 units=MB
Aug 15 07:13:31 vemmleads heroku/web.1: source=heroku.10054113.web.1.bf5d3fae-2b1b-4e1d-a974-01d9fa4644db measure=load_avg_1m val=0.15
Aug 15 07:13:31 vemmleads heroku/web.1: source=heroku.10054113.web.1.bf5d3fae-2b1b-4e1d-a974-01d9fa4644db measure=load_avg_5m val=0.07
Aug 15 07:13:31 vemmleads heroku/web.1: source=heroku.10054113.web.1.bf5d3fae-2b1b-4e1d-a974-01d9fa4644db measure=load_avg_15m val=0.17
Aug 15 07:13:31 vemmleads heroku/web.1: source=heroku.10054113.web.1.bf5d3fae-2b1b-4e1d-a974-01d9fa4644db measure=memory_total val=110.88 units=MB
Aug 15 07:13:31 vemmleads heroku/web.1: source=heroku.10054113.web.1.bf5d3fae-2b1b-4e1d-a974-01d9fa4644db measure=memory_rss val=108.92 units=MB
Aug 15 07:13:31 vemmleads heroku/web.1: source=heroku.10054113.web.1.bf5d3fae-2b1b-4e1d-a974-01d9fa4644db measure=memory_cache val=1.94 units=MB
Aug 15 07:13:31 vemmleads heroku/web.1: source=heroku.10054113.web.1.bf5d3fae-2b1b-4e1d-a974-01d9fa4644db measure=memory_swap val=0.01 units=MB
Aug 15 07:13:31 vemmleads heroku/web.1: source=heroku.10054113.web.1.bf5d3fae-2b1b-4e1d-a974-01d9fa4644db measure=memory_pgpgin val=2908160 units=pages
Aug 15 07:13:31 vemmleads heroku/web.1: source=heroku.10054113.web.1.bf5d3fae-2b1b-4e1d-a974-01d9fa4644db measure=memory_pgpgout val=42227 units=pages
Aug 15 07:13:31 vemmleads heroku/web.1: source=heroku.10054113.web.1.bf5d3fae-2b1b-4e1d-a974-01d9fa4644db measure=diskmbytes val=0 units=MB
Aug 15 07:13:35 vemmleads app/heroku-postgres: source=HEROKU_POSTGRESQL_CHARCOAL measure.current_transaction=1008211 measure.db_size=482260088bytes measure.tables=39 measure.active-connections=6 measure.waiting-connections=0 measure.index-cache-hit-rate=0.99996 measure.table-cache-hit-rate=1
Aug 15 07:13:45 vemmleads heroku/run.2472: source=heroku.10054113.run.2472.e811164e-4413-4dcf-8560-1f998f2c2b4e measure=load_avg_1m val=0.00
Aug 15 07:13:45 vemmleads heroku/run.2472: source=heroku.10054113.run.2472.e811164e-4413-4dcf-8560-1f998f2c2b4e measure=load_avg_5m val=0.00
Aug 15 07:13:45 vemmleads heroku/run.2472: source=heroku.10054113.run.2472.e811164e-4413-4dcf-8560-1f998f2c2b4e measure=load_avg_15m val=0.14
Aug 15 07:13:45 vemmleads heroku/run.2472: source=heroku.10054113.run.2472.e811164e-4413-4dcf-8560-1f998f2c2b4e measure=memory_total val=108.00 units=MB
Aug 15 07:13:45 vemmleads heroku/run.2472: source=heroku.10054113.run.2472.e811164e-4413-4dcf-8560-1f998f2c2b4e measure=memory_rss val=107.85 units=MB
Aug 15 07:13:45 vemmleads heroku/run.2472: source=heroku.10054113.run.2472.e811164e-4413-4dcf-8560-1f998f2c2b4e measure=memory_cache val=0.15 units=MB
Aug 15 07:13:45 vemmleads heroku/run.2472: source=heroku.10054113.run.2472.e811164e-4413-4dcf-8560-1f998f2c2b4e measure=memory_swap val=0.01 units=MB
Aug 15 07:13:45 vemmleads heroku/run.2472: source=heroku.10054113.run.2472.e811164e-4413-4dcf-8560-1f998f2c2b4e measure=memory_pgpgin val=0 units=pages
Aug 15 07:13:45 vemmleads heroku/run.2472: source=heroku.10054113.run.2472.e811164e-4413-4dcf-8560-1f998f2c2b4e measure=memory_pgpgout val=33609 units=pages
Aug 15 07:13:45 vemmleads heroku/run.2472: source=heroku.10054113.run.2472.e811164e-4413-4dcf-8560-1f998f2c2b4e measure=diskmbytes val=0 units=MB
Aug 15 07:13:46 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=load_avg_1m val=0.30
Aug 15 07:13:46 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=load_avg_5m val=0.07
Aug 15 07:13:46 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=load_avg_15m val=0.04
Aug 15 07:13:46 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_total val=511.80 units=MB
Aug 15 07:13:46 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_rss val=511.78 units=MB
Aug 15 07:13:46 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_cache val=0.00 units=MB
Aug 15 07:13:46 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_swap val=0.02 units=MB
Aug 15 07:13:46 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_pgpgin val=27303936 units=pages
Aug 15 07:13:46 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=memory_pgpgout val=154826 units=pages
Aug 15 07:13:46 vemmleads heroku/worker.1: source=heroku.10054113.worker.1.4589e3f4-8208-483a-a927-67c4c1cbee46 measure=diskmbytes val=0 units=MB
The memory usage of worker.1 jumps from 133.12 MB to 511.80 MB for no apparent reason. It doesn't look like any jobs were processed during that time, so it's hard to understand where that giant chomp of memory comes from. Somewhat later in the log, worker.1 hits the memory limit and fails.
I have New Relic Pro running. It doesn't help at all; in fact, it doesn't even create alerts for the repeated memory errors. The Heroku logs above give me no more information.
Any ideas or pointers about what to investigate next would be appreciated.
Thanks
Simon
There isn't enough information here to pinpoint what's going on.
The most common cause of memory bloat in Rails applications (especially in asynchronous background jobs) is failing to iterate through large database collections incrementally: for example, loading every User record at once with a statement like User.all.
If a background job goes through every User record in the database, you should use User.find_each or User.find_in_batches to process the records in chunks (ActiveRecord's default batch size is 1000).
This limits the working set of objects loaded into memory while still processing all of the records.
Look for unbounded database lookups that could be loading huge numbers of objects, as in the sketch below.
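A minimal sketch of the difference (do_something is a hypothetical stand-in for whatever your job does per record):
# Instantiates every User at once; on a large table this is one giant
# allocation, which looks exactly like the sudden RSS jump in the logs above.
User.all.each do |user|
  user.do_something
end

# Loads users in batches of 1000 (the default), keeping the working set small.
User.find_each do |user|
  user.do_something
end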
Related
I recently installed PDI 8.0 and do not see the usual JDBC file structure where the database driver jars are saved (pentaho/design-tools/data-integration/lib).
Below is what was extracted. Note that I tried dropping the jar into Data Integration.app, as well as recreating the structure from prior versions and restarting PDI, but with no success so far.
So my question is: where do the JDBC jar files go now in PDI 8.0?
douglas@localhost:~/pentaho$ ls -lth
total 11M
drwxr-xr-x 2 douglas douglas 4.0K Feb 18 12:51 logs
drwxr-xr-x 3 douglas douglas 4.0K Feb 18 12:34 design-tools
drwxr-xr-x 3 douglas douglas 4.0K Nov 5 16:49 Data Integration.app
drwxr-xr-x 4 douglas douglas 4.0K Nov 5 16:49 adaptive-execution
drwxr-xr-x 2 douglas douglas 4.0K Nov 5 16:49 Data Service JDBC Driver
drwxr-xr-x 28 douglas douglas 4.0K Nov 5 16:49 plugins
drwxr-xr-x 6 douglas douglas 4.0K Nov 5 16:49 libswt
drwxr-xr-x 2 douglas douglas 20K Nov 5 16:49 lib
-rw-r--r-- 1 douglas douglas 1.5K Nov 5 16:47 Carte.bat
-rwxr-xr-x 1 douglas douglas 1.3K Nov 5 16:47 carte.sh
drwxr-xr-x 2 douglas douglas 4.0K Nov 5 16:47 classes
drwxr-xr-x 3 douglas douglas 4.0K Nov 5 16:47 docs
-rw-r--r-- 1 douglas douglas 1.1K Nov 5 16:47 Encr.bat
-rwxr-xr-x 1 douglas douglas 1.1K Nov 5 16:47 encr.sh
-rw-r--r-- 1 douglas douglas 1.1K Nov 5 16:47 Import.bat
-rw-r--r-- 1 douglas douglas 2.3K Nov 5 16:47 import-rules.xml
-rwxr-xr-x 1 douglas douglas 1.2K Nov 5 16:47 import.sh
-rw-r--r-- 1 douglas douglas 1.1K Nov 5 16:47 Kitchen.bat
-rwxr-xr-x 1 douglas douglas 1.3K Nov 5 16:47 kitchen.sh
drwxr-xr-x 2 douglas douglas 4.0K Nov 5 16:47 launcher
-rw-r--r-- 1 douglas douglas 14K Nov 5 16:47 LICENSE.txt
-rw-r--r-- 1 douglas douglas 1.1K Nov 5 16:47 Pan.bat
-rwxr-xr-x 1 douglas douglas 1.2K Nov 5 16:47 pan.sh
-rw-r--r-- 1 douglas douglas 1.2K Nov 5 16:47 purge-utility.bat
-rwxr-xr-x 1 douglas douglas 1.3K Nov 5 16:47 purge-utility.sh
drwxr-xr-x 2 douglas douglas 4.0K Nov 5 16:47 pwd
-rw-r--r-- 1 douglas douglas 1.3K Nov 5 16:47 README.txt
-rw-r--r-- 1 douglas douglas 1.5K Nov 5 16:47 runSamples.bat
-rwxr-xr-x 1 douglas douglas 1.2K Nov 5 16:47 runSamples.sh
drwxr-xr-x 5 douglas douglas 4.0K Nov 5 16:47 samples
-rw-r--r-- 1 douglas douglas 5.0K Nov 5 16:47 set-pentaho-env.bat
-rwxr-xr-x 1 douglas douglas 4.5K Nov 5 16:47 set-pentaho-env.sh
drwxr-xr-x 2 douglas douglas 4.0K Nov 5 16:47 simple-jndi
-rw-r--r-- 1 douglas douglas 1.2K Nov 5 16:47 Spark-app-builder.bat
-rwxr-xr-x 1 douglas douglas 1.2K Nov 5 16:47 spark-app-builder.sh
-rw-r--r-- 1 douglas douglas 4.7K Nov 5 16:47 Spoon.bat
-rw-r--r-- 1 douglas douglas 220 Nov 5 16:47 spoon.command
-rw-r--r-- 1 douglas douglas 1.1K Nov 5 16:47 SpoonConsole.bat
-rw-r--r-- 1 douglas douglas 2.2K Nov 5 16:47 SpoonDebug.bat
-rwxr-xr-x 1 douglas douglas 1.9K Nov 5 16:47 SpoonDebug.sh
-rw-r--r-- 1 douglas douglas 362K Nov 5 16:47 spoon.ico
-rw-r--r-- 1 douglas douglas 745 Nov 5 16:47 spoon.png
-rwxr-xr-x 1 douglas douglas 7.1K Nov 5 16:47 spoon.sh
drwxr-xr-x 5 douglas douglas 4.0K Nov 5 16:47 system
drwxr-xr-x 3 douglas douglas 4.0K Nov 5 16:47 ui
-rwxr-xr-x 1 douglas douglas 1.7K Nov 5 16:47 yarn.sh
-rw-r--r-- 1 douglas douglas 11M Nov 5 14:13 PentahoDataIntegration_OSS_Licenses.html
Check the lib directory; if the jars are not there, run this command to find all jar files: find . -name '*.jar'
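For example, from the install root shown above ("postgresql" is just an illustrative driver name; substitute your own):
# Search the whole PDI tree for jars matching a driver name.
cd ~/pentaho
find . -name '*.jar' | grep -i postgresql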
I'm new to supporting WAS (WebSphere Application Server). Currently I'm having an issue with my WAS, which is installed on AIX across 2 servers/nodes.
While investigating, I found in our application log some activity called "Performing Cache Maintenance":
===================================
2017-01-14 01:31:52,619: [Cache Maintenance] com.ibm.srm.util.db.ServerCache refreshed
2017-01-14 01:31:53,314: [Cache Maintenance] Memory: available=[6884mb] used=[9500mb] %used avail=[58%] max=[16384mb] %used max=[58%] total=[16384mb] free=[6884mb] used by doMaintenance=[-251,201,392bytes] Time=[22,818ms]
2017-01-14 01:51:53,325: -------- Performing Cache Maintenance --------
2017-01-14 01:51:53,325: null : QN=319 Select * from perform.cache_timestamps where row_class_name not like '%Cache' and row_class_name not like '%(SRM 6.0)'
2017-01-14 01:51:53,333: Returning 19 data records, QN=319, 2 columns, Time: 8ms conn/query time: 5ms
2017-01-14 01:51:53,333: [Cache Maintenance] Memory: available=[5492mb] used=[10892mb] %used avail=[66%] max=[16384mb] %used max=[66%] total=[16384mb] free=[5492mb] used by doMaintenance=[532kb] Time=[8ms]
===================================
After this activity is triggered, the mpmstats value for 'bsy' keeps increasing until it reaches the MaxClients maximum, which is 4000:
===================================
[Sat Jan 14 01:38:58 2017] [notice] mpmstats: rdy 166 bsy 234 rd 0 wr 234 ka 0 log 0 dns 0 cls 0
[Sat Jan 14 01:38:58 2017] [notice] mpmstats: bsy: 234 in mod_was_ap22_http.c
[Sat Jan 14 01:48:58 2017] [notice] mpmstats: rdy 195 bsy 505 rd 0 wr 505 ka 0 log 0 dns 0 cls 0
[Sat Jan 14 01:48:58 2017] [notice] mpmstats: bsy: 505 in mod_was_ap22_http.c
[Sat Jan 14 01:58:58 2017] [notice] mpmstats: rdy 180 bsy 720 rd 0 wr 720 ka 0 log 0 dns 0 cls 0
[Sat Jan 14 01:58:58 2017] [notice] mpmstats: bsy: 720 in mod_was_ap22_http.c
[Sat Jan 14 02:08:59 2017] [notice] mpmstats: rdy 105 bsy 895 rd 1 wr 894 ka 0 log 0 dns 0 cls 0
[Sat Jan 14 02:08:59 2017] [notice] mpmstats: bsy: 894 in mod_was_ap22_http.c
[Sat Jan 14 02:18:59 2017] [notice] mpmstats: rdy 112 bsy 1088 rd 1 wr 1087 ka 0 log 0 dns 0 cls 0
[Sat Jan 14 02:18:59 2017] [notice] mpmstats: bsy: 1087 in mod_was_ap22_http.c
[Sat Jan 14 02:28:59 2017] [notice] mpmstats: rdy 158 bsy 1242 rd 1 wr 1241 ka 0 log 0 dns 0 cls 0
----
[Sat Jan 14 04:55:34 2017] [notice] mpmstats: rdy 0 bsy 4000 rd 0 wr 4000 ka 0 log 0 dns 0 cls 0
[Sat Jan 14 04:55:34 2017] [notice] mpmstats: bsy: 4000 in mod_was_ap22_http.c
[Sat Jan 14 04:57:04 2017] [notice] mpmstats: reached MaxClients (4000/4000)
[Sat Jan 14 04:57:04 2017] [notice] mpmstats: rdy 0 bsy 4000 rd 0 wr 4000 ka 0 log 0 dns 0 cls 0
[Sat Jan 14 04:57:04 2017] [notice] mpmstats: bsy: 4000 in mod_was_ap22_http.c
[Sat Jan 14 04:58:34 2017] [notice] mpmstats: reached MaxClients (4000/4000)
[Sat Jan 14 04:58:34 2017] [notice] mpmstats: rdy 0 bsy 4000 rd 0 wr 4000 ka 0 log 0 dns 0 cls 0
[Sat Jan 14 04:58:34 2017] [notice] mpmstats: bsy: 4000 in mod_was_ap22_http.c
===================================
It seems WAS is not processing the client requests, and the busy count climbs until it reaches the maximum.
The questions are:
Is there any log I can check to find out why WAS is not processing client requests until the count reaches the maximum?
Does the "Cache Maintenance" activity block WAS from processing client requests? According to our developer, this activity should not lead to this issue.
What procedure can I follow to identify and resolve this issue?
I'd appreciate any help, as this issue has been occurring for a long time and is still not resolved.
I have a requirement to group the data day-wise, treating 8:00 AM today through 7:59 AM the next day as one full day.
For example from the below table:
SoldDatetime Qty
Dec 20,2015 12:05 AM 1
Dec 20,2015 1:05 AM 2
Dec 20,2015 7:05 AM 3
Dec 20,2015 8:05 AM 4
Dec 20,2015 10:05 AM 5
Dec 20,2015 11:05 PM 6
Dec 21,2015 12:05 AM 7
Dec 21,2015 1:05 AM 8
Dec 21,2015 7:05 AM 9
Dec 21,2015 8:05 AM 10
Dec 21,2015 10:05 AM 11
Dec 21,2015 11:05 PM 12
Dec 22,2015 12:05 AM 13
Dec 22,2015 1:05 AM 14
Dec 22,2015 7:05 AM 15
Dec 22,2015 8:05 AM 16
Dec 22,2015 10:05 AM 17
Dec 22,2015 11:05 PM 18
Dec 23,2015 12:05 AM 19
Dec 23,2015 1:05 AM 20
Dec 23,2015 7:05 AM 21
Dec 23,2015 8:05 AM 22
Dec 23,2015 10:05 AM 23
Dec 23,2015 11:05 PM 24
Dec 24,2015 12:05 AM 25
Dec 24,2015 1:05 AM 26
Dec 24,2015 7:05 AM 27
Dec 24,2015 8:05 AM 28
Dec 24,2015 10:05 AM 29
Dec 24,2015 11:05 PM 30
Dec 25,2015 12:05 AM 31
Dec 25,2015 1:05 AM 32
Dec 25,2015 7:05 AM 33
Dec 25,2015 8:05 AM 34
Dec 25,2015 10:05 AM 35
Dec 25,2015 11:05 PM 36
If I run a query filtered for the date range Dec 21,2015 8:00 AM to Dec 25,2015 7:59 AM, I need the output in the format below:
SoldDateRange Qty
Dec 20,2015 8:00 AM - Dec 21,2015 7:59AM 39
Dec 21,2015 8:00 AM - Dec 22,2015 7:59AM 75
Dec 22,2015 8:00 AM - Dec 23,2015 7:59AM 111
Dec 23,2015 8:00 AM - Dec 24,2015 7:59AM 147
Dec 24,2015 8:00 AM - Dec 25,2015 7:59AM 183
Can someone help with the SQL query for this? Thanks in advance
Here is a query which should give you the full output you mentioned in your original question.
SELECT CONCAT(CONVERT(VARCHAR(12), t.adjustedTime, 107), ' 8:00 AM',
       ' - ',
       CONVERT(VARCHAR(12), DATEADD(DAY, 1, t.adjustedTime), 107), ' 7:59AM') AS SoldDateRange,
       t.Qty
FROM
(
    -- Shift each sale back 8 hours so the 8:00 AM boundary falls on midnight,
    -- then group by the resulting calendar date.
    SELECT CAST(DATEADD(HOUR, -8, SoldDateTime) AS DATE) AS adjustedTime, SUM(Qty) AS Qty
    FROM yourTable
    GROUP BY CAST(DATEADD(HOUR, -8, SoldDateTime) AS DATE)
) t
ORDER BY t.adjustedTime
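As a sanity check against the sample data: the Dec 20,2015 8:00 AM - Dec 21,2015 7:59 AM bucket picks up quantities 4 through 9, which sum to 39, matching the expected output above.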
You can do this by subtracting 8 hours, which shifts the 8:00 AM boundary onto midnight: for example, Dec 21,2015 7:05 AM minus 8 hours is Dec 20,2015 11:05 PM, so it groups with Dec 20. The basic idea is:
select cast(dateadd(hour, -8, solddatetime) as date) as SoldDate, sum(qty)
from t
group by cast(dateadd(hour, -8, solddatetime) as date)
order by SoldDate;
EDIT:
You could use the same idea. For instance:
where cast(dateadd(hour, -8, solddatetime) as date) = '2016-01-01'
Could you help me with one case, below?
After transposing the table, I get a result like this:
Lno OID ConnectedDate
100000224 34931 Feb 7 201,Feb 7 201,Feb 8 201,Feb 8 201,Feb 4 201
100001489 9156 Jun 23 201,Jun 24 201,Jun 25 201,Jun 25 201,Oct 29 201,Oct 29 201
100002153 31514 Oct 5 201
100002740 32367 Sep 14 201,Sep 14 201,Oct 21 201,Sep 15 201,Sep 15 201,Sep 16 201
100004774 31558 May 19 201,May 19 201,May 20 201,May 20 201,Jun 2 201
100004935 5857 Sep 1 201,Sep 1 201,Sep 3 201,Sep 3 201,Sep 29 201,Aug 31 201,Sep 22 201
100004935 31684 Jun 16 201,Jun 17 201,Jun 17 201,Jun 19 201
100004983 33942 Dec 30 201,Dec 30 201,Dec 27 201,Dec 29 201,Dec 28 201
100005055 32347 Sep 14 201,Sep 13 201,Sep 13 201,Oct 1 201,Oct 5 201,Oct 20 201,Nov 17 201,Sep 15 201,Sep 16 201,Dec 4 201
100006146 31481 Apr 30 201,Apr 30 201,May 3 201,May 4 201,May 4 201,Jun 3 201,Jun 4 201,Jun 5 201,Jun 7 201,Jun 12 201
But I want output like this:
LID OID ConnectedDate1 ConnectedDate2 ConnectedDate3 ConnectedDate4
100000224 34931 Feb 7 201 Feb 7 201 Feb 8 201 Feb 8 201
100001489 9156 Jun 23 201 Jun 24 201 Jun 25 201 Jun 25 201
100002153 31514 Oct 5 201
100002740 32367 Sep 14 201 Sep 14 201 Oct 21 201 Sep 15 201
100004774 31558 May 19 201 May 19 201 May 20 201
Please help me.
Thanks in advance
You need the PIVOT command. To give more details, we'd need to see your query.
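As a rough sketch of the idea (assuming SQL Server, a hypothetical table named yourTable with one row per Lno/OID/ConnectedDate before the transpose, and only four output columns; extend the IN list for more):
SELECT Lno, OID,
       [1] AS ConnectedDate1, [2] AS ConnectedDate2,
       [3] AS ConnectedDate3, [4] AS ConnectedDate4
FROM
(
    -- Number each date within its Lno/OID group; (SELECT NULL) gives an
    -- arbitrary order, so use a real ordering column if you have one.
    SELECT Lno, OID, ConnectedDate,
           ROW_NUMBER() OVER (PARTITION BY Lno, OID ORDER BY (SELECT NULL)) AS rn
    FROM yourTable
) src
PIVOT
(
    MAX(ConnectedDate) FOR rn IN ([1], [2], [3], [4])
) p;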
I am trying to use Axis2/c on OS X but when I launch axis2c_http_server, I get the following errors:
[Fri Mar 16 12:26:19 2012] [info] Starting Axis2 HTTP server....
[Fri Mar 16 12:26:19 2012] [info] Apache Axis2/C version in use : 1.6.0
[Fri Mar 16 12:26:19 2012] [info] Server port : 9090
[Fri Mar 16 12:26:19 2012] [info] Repo location : ../
[Fri Mar 16 12:26:19 2012] [info] Read Timeout : 60000 ms
[Fri Mar 16 12:26:19 2012] [error] dep_engine.c(324) Axis2 Configuration file name not found
[Fri Mar 16 12:26:19 2012] [error] conf_init.c(56) Creating deployment engine failed for repository ../
[Fri Mar 16 12:26:19 2012] [error] http_receiver.c(126) unable to create private configuration contextfor repo path ../
[Fri Mar 16 12:26:19 2012] [error] http_server_main.c(215) Server creation failed: Error code: 18 :: Configuration file cannot be found
It seems the server cannot locate the file axis2.xml.
I have put axis2.xml in the repo's root, and I have correctly set the environment variable $AXIS2C_HOME, because the server writes its logs to the right folder.
Here is my repo's structure:
antoine@Antoines-MacBook-Air:repo $ pwd
/Users/antoine/Documents/axis2c_test/repo
antoine@Antoines-MacBook-Air:repo $ ll -R
total 16
-rw-r--r--# 1 antoine staff 5.8K Mar 16 10:06 axis2.xml
drwxr-xr-x 53 antoine staff 1.8K Mar 16 10:06 lib
drwxr-xr-x 4 antoine staff 136B Mar 16 12:26 logs
drwxr-xr-x 4 antoine staff 136B Mar 16 11:14 services
./lib:
total 25816
-rwxr-xr-x 1 antoine staff 246K Mar 16 10:06 libaxis2_axiom.0.6.0.dylib
-rwxr-xr-x 1 antoine staff 246K Mar 16 10:06 libaxis2_axiom.0.dylib
-rw-r--r-- 1 antoine staff 1.3M Mar 16 10:06 libaxis2_axiom.a
-rwxr-xr-x 1 antoine staff 246K Mar 16 10:06 libaxis2_axiom.dylib
-rwxr-xr-x 1 antoine staff 1.0K Mar 16 10:06 libaxis2_axiom.la
-rwxr-xr-x 1 antoine staff 576K Mar 16 10:06 libaxis2_engine.0.6.0.dylib
-rwxr-xr-x 1 antoine staff 576K Mar 16 10:06 libaxis2_engine.0.dylib
-rw-r--r-- 1 antoine staff 2.6M Mar 16 10:06 libaxis2_engine.a
-rwxr-xr-x 1 antoine staff 576K Mar 16 10:06 libaxis2_engine.dylib
-rwxr-xr-x 1 antoine staff 1.1K Mar 16 10:06 libaxis2_engine.la
-rwxr-xr-x 1 antoine staff 120K Mar 16 10:06 libaxis2_http_common.0.6.0.dylib
-rwxr-xr-x 1 antoine staff 120K Mar 16 10:06 libaxis2_http_common.0.dylib
-rw-r--r-- 1 antoine staff 484K Mar 16 10:06 libaxis2_http_common.a
-rwxr-xr-x 1 antoine staff 120K Mar 16 10:06 libaxis2_http_common.dylib
-rwxr-xr-x 1 antoine staff 1.1K Mar 16 10:06 libaxis2_http_common.la
-rwxr-xr-x 1 antoine staff 20K Mar 16 10:06 libaxis2_http_receiver.0.6.0.dylib
-rwxr-xr-x 1 antoine staff 20K Mar 16 10:06 libaxis2_http_receiver.0.dylib
-rw-r--r-- 1 antoine staff 57K Mar 16 10:06 libaxis2_http_receiver.a
-rwxr-xr-x 1 antoine staff 20K Mar 16 10:06 libaxis2_http_receiver.dylib
-rwxr-xr-x 1 antoine staff 1.2K Mar 16 10:06 libaxis2_http_receiver.la
-rwxr-xr-x 1 antoine staff 112K Mar 16 10:06 libaxis2_http_sender.0.6.0.dylib
-rwxr-xr-x 1 antoine staff 112K Mar 16 10:06 libaxis2_http_sender.0.dylib
-rw-r--r-- 1 antoine staff 355K Mar 16 10:06 libaxis2_http_sender.a
-rwxr-xr-x 1 antoine staff 112K Mar 16 10:06 libaxis2_http_sender.dylib
-rwxr-xr-x 1 antoine staff 1.2K Mar 16 10:06 libaxis2_http_sender.la
-rwxr-xr-x 1 antoine staff 49K Mar 16 10:06 libaxis2_parser.0.6.0.dylib
-rwxr-xr-x 1 antoine staff 49K Mar 16 10:06 libaxis2_parser.0.dylib
-rw-r--r-- 1 antoine staff 139K Mar 16 10:06 libaxis2_parser.a
-rwxr-xr-x 1 antoine staff 49K Mar 16 10:06 libaxis2_parser.dylib
-rwxr-xr-x 1 antoine staff 982B Mar 16 10:06 libaxis2_parser.la
-rwxr-xr-x 1 antoine staff 57K Mar 16 10:06 libaxis2_xpath.0.6.0.dylib
-rwxr-xr-x 1 antoine staff 57K Mar 16 10:06 libaxis2_xpath.0.dylib
-rw-r--r-- 1 antoine staff 190K Mar 16 10:06 libaxis2_xpath.a
-rwxr-xr-x 1 antoine staff 57K Mar 16 10:06 libaxis2_xpath.dylib
-rwxr-xr-x 1 antoine staff 902B Mar 16 10:06 libaxis2_xpath.la
-rwxr-xr-x 1 antoine staff 193K Mar 16 10:06 libaxutil.0.6.0.dylib
-rwxr-xr-x 1 antoine staff 193K Mar 16 10:06 libaxutil.0.dylib
-rw-r--r-- 1 antoine staff 982K Mar 16 10:06 libaxutil.a
-rwxr-xr-x 1 antoine staff 193K Mar 16 10:06 libaxutil.dylib
-rwxr-xr-x 1 antoine staff 867B Mar 16 10:06 libaxutil.la
-rwxr-xr-x 1 antoine staff 63K Mar 16 10:06 libguththila.0.6.0.dylib
-rwxr-xr-x 1 antoine staff 63K Mar 16 10:06 libguththila.0.dylib
-rw-r--r-- 1 antoine staff 191K Mar 16 10:06 libguththila.a
-rwxr-xr-x 1 antoine staff 63K Mar 16 10:06 libguththila.dylib
-rwxr-xr-x 1 antoine staff 923B Mar 16 10:06 libguththila.la
-rwxr-xr-x 1 antoine staff 229K Mar 16 10:06 libneethi.0.6.0.dylib
-rwxr-xr-x 1 antoine staff 229K Mar 16 10:06 libneethi.0.dylib
-rw-r--r-- 1 antoine staff 1.4M Mar 16 10:06 libneethi.a
-rwxr-xr-x 1 antoine staff 229K Mar 16 10:06 libneethi.dylib
-rwxr-xr-x 1 antoine staff 1.0K Mar 16 10:06 libneethi.la
drwxr-xr-x 3 antoine staff 102B Mar 16 10:06 pkgconfig
./lib/pkgconfig:
total 8
-rw-r--r-- 1 antoine staff 256B Mar 16 10:06 axis2c.pc
./logs:
total 24
-rw-r--r-- 1 antoine staff 12K Mar 16 12:26 axis2.log
./services:
total 32
-rwxr-xr-x 1 antoine staff 10K Mar 16 12:26 libhello.dylib
-rw-r--r-- 1 antoine staff 209B Mar 16 10:06 services.xml
Does anyone see what I'm doing wrong?
I am using only the client-side consumer, but maybe my solution can help you.
In order to run the server, you need to set the AXIS2C_HOME environment variable to point to the Axis2/C installation.
For the client you don't need to; you just specify the location of the axis2.xml file when creating a new instance of the axis2c_stub_t type (i.e., set the client_home parameter).
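For what it's worth, here is a minimal client-side sketch. It assumes a stub generated by WSDL2C for a hypothetical "Hello" service, so axis2_stub_create_Hello and both paths are placeholders; the point is only where client_home (the directory containing axis2.xml) gets passed:
#include <axutil_env.h>
#include <axis2_stub.h>

int main(void)
{
    /* Create the Axis2/C environment; logs go to the file named here. */
    const axutil_env_t *env =
        axutil_env_create_all("hello_client.log", AXIS2_LOG_LEVEL_TRACE);

    /* client_home must point at the repo directory containing axis2.xml. */
    const axis2_char_t *client_home = "/Users/antoine/Documents/axis2c_test/repo";
    const axis2_char_t *endpoint = "http://localhost:9090/axis2/services/hello";

    /* Hypothetical constructor generated by WSDL2C for a "Hello" service. */
    axis2_stub_t *stub = axis2_stub_create_Hello(env, client_home, endpoint);

    /* ... call the generated operation wrappers on the stub here ... */

    return 0;
}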