Seam - No Active Event Context - maven-2

I am having problems with my Seam application; I'm not sure whether they come from using Maven along with Jetty for testing or are simply a misconfiguration on my part. The error itself is straightforward: when Seam attempts to close the event context, it expects the context to still be open, but something else has already closed it. The problem is figuring out what did, and how.
Here is the stack trace, not that it helps much:
java.lang.IllegalStateException: No active event context
at org.jboss.seam.core.Manager.instance(Manager.java:368)
at org.jboss.seam.servlet.ContextualHttpServletRequest.run(ContextualHttpServletRequest.java:55)
at org.jboss.seam.web.ContextFilter.doFilter(ContextFilter.java:37)
at org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
at org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:73)
at org.jboss.seam.web.HotDeployFilter.doFilter(HotDeployFilter.java:53)
at org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
at org.jboss.seam.web.LoggingFilter.doFilter(LoggingFilter.java:60)
at org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69)
at org.jboss.seam.servlet.SeamFilter.doFilter(SeamFilter.java:158)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1139)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:378)
at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:324)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:535)
at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:865)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:539)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:520)
These are all the jars in my WEB-INF/lib folder:
-rw-r--r-- 1 walterw staff 54665 2009-03-02 19:28 activation-1.0.2.jar
-rw-r--r-- 1 walterw staff 443432 2009-03-02 19:28 antlr-2.7.6.jar
-rw-r--r-- 1 walterw staff 43033 2009-03-02 19:23 asm-3.1.jar
-rw-r--r-- 1 walterw staff 1604162 2009-03-02 19:28 axis-1.2.1.jar
-rw-r--r-- 1 walterw staff 32071 2009-03-02 19:28 axis-jaxrpc-1.2.1.jar
-rw-r--r-- 1 walterw staff 148230 2009-06-17 22:27 cglib-asm-1.0.jar
-rw-r--r-- 1 walterw staff 188671 2009-03-02 19:28 commons-beanutils-1.7.0.jar
-rw-r--r-- 1 walterw staff 571259 2009-03-02 19:28 commons-collections-3.2.jar
-rw-r--r-- 1 walterw staff 146108 2009-06-14 18:20 commons-digester-1.8.1.jar
-rw-r--r-- 1 walterw staff 71442 2009-03-02 19:28 commons-discovery-0.2.jar
-rw-r--r-- 1 walterw staff 261809 2009-03-02 19:28 commons-lang-2.4.jar
-rw-r--r-- 1 walterw staff 38015 2009-03-02 19:28 commons-logging-1.0.4.jar
-rw-r--r-- 1 walterw staff 26202 2009-03-02 19:28 commons-logging-api-1.0.4.jar
-rw-r--r-- 1 walterw staff 313898 2009-03-02 19:28 dom4j-1.6.1.jar
-rw-r--r-- 1 walterw staff 50583 2009-03-02 19:27 ejb3-persistence-1.0.2.GA.jar
-rw-r--r-- 1 walterw staff 15506 2009-03-14 11:21 FileIO-2009.3.14.jar
-rw-r--r-- 1 walterw staff 170443 2009-03-02 19:23 flickrapi-1.1.jar
-rw-r--r-- 1 walterw staff 279714 2009-03-02 19:27 hibernate-annotations-3.4.0.GA.jar
-rw-r--r-- 1 walterw staff 66993 2009-03-02 19:27 hibernate-commons-annotations-3.1.0.GA.jar
-rw-r--r-- 1 walterw staff 2266769 2009-03-02 19:27 hibernate-core-3.3.0.SP1.jar
-rw-r--r-- 1 walterw staff 119292 2009-04-03 18:41 hibernate-entitymanager-3.4.0.GA.jar
-rw-r--r-- 1 walterw staff 313785 2009-06-17 22:46 hibernate-search-3.1.1.GA.jar
-rw-r--r-- 1 walterw staff 62574 2009-03-02 19:27 hibernate-validator-3.1.0.GA.jar
-rw-r--r-- 1 walterw staff 630486 2009-06-17 22:46 hsqldb-1.8.0.2.jar
-rw-r--r-- 1 walterw staff 552514 2009-03-12 21:54 javassist-3.7.1.GA.jar
-rw-r--r-- 1 walterw staff 131456 2009-03-02 19:23 java-unrar-0.2.jar
-rw-r--r-- 1 walterw staff 134652 2009-06-13 21:46 jboss-el-1.0_02.CR4.jar
-rw-r--r-- 1 walterw staff 288761 2009-06-17 22:48 jboss-envers-1.2.1.GA-hibernate-3.3.jar
-rw-r--r-- 1 walterw staff 25589 2009-06-17 22:48 jboss-logging-log4j-2.1.0.GA.jar
-rw-r--r-- 1 walterw staff 12623 2009-06-17 22:48 jboss-logging-spi-2.1.0.GA.jar
-rw-r--r-- 1 walterw staff 16148 2009-06-13 21:46 jboss-seam-debug-2.1.2.jar
-rw-r--r-- 1 walterw staff 2507 2009-06-13 21:46 jboss-seam-jul-2.1.2.jar
-rw-r--r-- 1 walterw staff 28223 2009-06-13 21:46 jboss-seam-mail-2.1.2.jar
-rw-r--r-- 1 walterw staff 294735 2009-06-13 21:46 jboss-seam-ui-2.1.2.jar
-rw-r--r-- 1 walterw staff 312629 2009-06-14 18:20 jsf-api-1.2-b19.jar
-rw-r--r-- 1 walterw staff 302352 2009-03-02 19:23 jsf-facelets-1.1.15.B1.jar
-rw-r--r-- 1 walterw staff 1122787 2009-06-14 18:20 jsf-impl-1.2-b19.jar
-rw-r--r-- 1 walterw staff 13236 2009-03-02 19:28 jta-1.1.jar
-rw-r--r-- 1 walterw staff 367444 2009-03-02 19:28 log4j-1.2.14.jar
-rw-r--r-- 1 walterw staff 822794 2009-06-17 22:46 lucene-core-2.4.1.jar
-rw-r--r-- 1 walterw staff 1139907 2009-06-13 21:46 org.jboss.seam-jboss-seam-2.1.2.jar
-rw-r--r-- 1 walterw staff 445090 2009-03-08 20:11 quartz-1.6.1.jar
-rw-r--r-- 1 walterw staff 171921 2009-06-14 18:20 richfaces-api-3.3.1.GA.jar
-rw-r--r-- 1 walterw staff 1551810 2009-06-14 18:20 richfaces-impl-3.3.1.GA.jar
-rw-r--r-- 1 walterw staff 4160770 2009-06-14 18:21 richfaces-ui-3.3.1.GA.jar
-rw-r--r-- 1 walterw staff 102493 2009-06-17 22:48 SeamCore-2009.06.17.jar
-rw-r--r-- 1 walterw staff 16591 2009-03-02 19:24 slf4j-api-1.5.0.jar
-rw-r--r-- 1 walterw staff 8880 2009-03-02 19:24 slf4j-log4j12-1.5.0.jar
-rw-r--r-- 1 walterw staff 25814 2009-06-17 22:49 WebContent-2009.06.17.jar
-rw-r--r-- 1 walterw staff 109318 2009-03-02 19:28 xml-apis-1.0.b2.jar
I had a few version conflicts earlier that did cause real problems; I'm wondering if this could be another one of those.
Thanks,
Walter

It doesn't look like a Maven issue. I'd check the following:
Have you set the Seam servlet context listener in web.xml?
<listener>
<listener-class>org.jboss.seam.servlet.SeamListener</listener-class>
</listener>
Based on this question, have you configured an additional context filter?
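If the context filter is the missing piece, it is normally enabled in Seam's components.xml rather than web.xml. A minimal sketch, assuming Seam 2.x; the url-pattern here is a placeholder you would adapt to your non-faces servlet paths:

```xml
<!-- components.xml: enable Seam's ContextFilter for non-JSF requests -->
<!-- The url-pattern below is an example, not taken from the poster's app -->
<components xmlns="http://jboss.com/products/seam/components"
            xmlns:web="http://jboss.com/products/seam/web">
    <web:context-filter url-pattern="/servlets/*"/>
</components>
```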

Thanks for your reply.
It actually appears to be a combination of things:
I instantiate a ContextualHttpServletRequest inside every requestInitialized and requestDestroyed (ServletRequestListener) callback to monitor every HTTP request.
I have configured the ContextFilter on every URL other than faces requests (everything not *.xhtml).
My a4j requests are erroring out. I am using RichFaces and can see requests for /a4j resources in Firebug. I have Seam and RichFaces configured according to the version I'm using, 2.2.0.GA.
This issue is resolved for Jetty, but I am still getting these errors in Tomcat, and I'm trying to work out what might differ between the two. The odd thing is that my ServletRequestListener class functions fine (it logs all HTTP requests coming into the application), but the view is not actually served because a No Active Event Context exception is thrown.
EDIT:
The reason this worked on Jetty but not on Tomcat was that I was not catching an exception thrown by the ContextFilter. I now have my own ContextFilter, which creates a context for the current request only if the SeamPhaseListener didn't already do so. In some cases it still fails, which is why I need to catch the exception. This is a band-aid fix, but it works well in Tomcat now. I am guessing it is related to the jsessionid in conjunction with the resources a4j requests.
This works well enough for me.
Walter

Related

how do you extract a variable that appears multiple times in a table only once

I'm trying to extract the names of space organisations from a table, but the closest I can get is the number of times each name appears alongside the organisation; I just want the name of the organisation, not the number of times it appears in the table.
If you can help me, please leave a comment on my Google Colab:
https://colab.research.google.com/drive/1m4zI4YGguQ5aWdDVyc7Bdpr-78KHdxhR?usp=sharing
What I get:
variable number  organisation  time of launch
0                SpaceX        Fri Aug 07, 2020 05:12 UTC
1                CASC          Thu Aug 06, 2020 04:01 UTC
2                SpaceX        Tue Aug 04, 2020 23:57 UTC
3                Roscosmos     Thu Jul 30, 2020 21:25 UTC
4                ULA           Thu Jul 30, 2020 11:50 UTC
...              ...           ...
4319             US Navy       Wed Feb 05, 1958 07:33 UTC
4320             AMBA          Sat Feb 01, 1958 03:48 UTC
4321             US Navy       Fri Dec 06, 1957 16:44 UTC
4322             RVSN USSR     Sun Nov 03, 1957 02:30 UTC
4323             RVSN USSR     Fri Oct 04, 1957 19:28 UTC
What I want:
organisation
RVSN USSR
Arianespace
CASC
General Dynamics
NASA
VKS RF
US Air Force
ULA
Boeing
Martin Marietta
etc
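Since the goal is just the distinct names, pandas can do this directly with `unique()` or `drop_duplicates()`. A minimal sketch, using a small hypothetical DataFrame standing in for the launch table in the Colab notebook:

```python
import pandas as pd

# Small stand-in for the launch table from the notebook (hypothetical data).
df = pd.DataFrame({
    "organisation": ["SpaceX", "CASC", "SpaceX", "Roscosmos", "ULA"],
    "time of launch": [
        "Fri Aug 07, 2020 05:12 UTC",
        "Thu Aug 06, 2020 04:01 UTC",
        "Tue Aug 04, 2020 23:57 UTC",
        "Thu Jul 30, 2020 21:25 UTC",
        "Thu Jul 30, 2020 11:50 UTC",
    ],
})

# Each organisation once, regardless of how many launches it has.
unique_orgs = df["organisation"].unique()                # numpy array, first-seen order
unique_orgs_df = df[["organisation"]].drop_duplicates()  # same result as a DataFrame

print(list(unique_orgs))  # ['SpaceX', 'CASC', 'Roscosmos', 'ULA']
```

Both calls preserve first-occurrence order, so no sorting step is needed.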

T-SQL - Partition a running total

I've written a query that returns the size of my individual records in MB. These records contain blob data.
I would like to partition the records into 50 MB batches.
SELECT SourceId, Title, Description,
SUM(DATALENGTH(VersionData) * 0.000001) OVER (PARTITION BY DATALENGTH(SourceId) ORDER BY SourceId) AS RunningTotal,
RANK() OVER(ORDER BY SourceId) AS RowNo
FROM TargetContentVersion WITH(NOLOCK)
The data returned from this query currently looks like this, where RunningTotal is the running total in MB of the records:
SourceId Title RunningTotal RowNo
00Pf4000006gna3EAA 001f400000ZP5yUAAT_3 Oct 2018 (14_37_32).pdf 5.242880 1
00Pf4000006gna8EAA 001f400000ZP5yUAAT_3 Oct 2018 (14_37_38).doc 6.291456 2
00Pf4000006gnacEAA 001f400000ZP5yUAAT_3 Oct 2018 (14_38_44).pdf 7.340032 3
00Pf4000006gnaDEAQ 001f400000ZP5yUAAT_3 Oct 2018 (14_37_41).doc 12.582912 4
00Pf4000006gnahEAA 001f400000ZP5yUAAT_3 Oct 2018 (14_38_47).pdf 17.825792 5
00Pf4000006gnaIEAQ 001f400000ZP5yUAAT_3 Oct 2018 (14_37_46).doc 23.068672 6
00Pf4000006gnamEAA 001f400000ZP5yUAAT_3 Oct 2018 (14_38_54).pdf 33.554432 7
00Pf4000006gnaNEAQ 001f400000ZP5yUAAT_3 Oct 2018 (14_37_52).txt 34.603008 8
00Pf4000006gnarEAA 001f400000ZP5yUAAT_3 Oct 2018 (14_39_20).doc 35.651584 9
00Pf4000006gnaSEAQ 001f400000ZP5yUAAT_3 Oct 2018 (14_37_55).txt 40.894464 10
00Pf4000006gnawEAA 001f400000ZP5yUAAT_3 Oct 2018 (14_39_24).doc 46.137344 11
00Pf4000006gnaXEAQ 001f400000ZP5yUAAT_3 Oct 2018 (14_38_0).txt 51.380224 12
00Pf4000006gnb1EAA 001f400000ZP5yUAAT_3 Oct 2018 (14_39_30).doc 61.865984 13
00Pf4000006gnb6EAA 001f400000ZP5yUAAT_3 Oct 2018 (14_39_50).txt 62.914560 14
00Pf4000006gnbaEAA 001f400000ZP5yVAAT_3 Oct 2018 (14_40_29).doc 68.157440 15
00Pf4000006gnbBEAQ 001f400000ZP5yUAAT_3 Oct 2018 (14_39_58).txt 78.643200 16
00Pf4000006gnbfEAA 001f400000ZP5yVAAT_3 Oct 2018 (14_40_34).doc 89.128960 17
00Pf4000006gnbGEAQ 001f400000ZP5yVAAT_3 Oct 2018 (14_40_7).pdf 90.177536 18
00Pf4000006gnbkEAA 001f400000ZP5yVAAT_3 Oct 2018 (14_40_43).txt 91.226112 19
00Pf4000006gnbLEAQ 001f400000ZP5yVAAT_3 Oct 2018 (14_40_12).pdf 96.468992 20
00Pf4000006gnbpEAA 001f400000ZP5yVAAT_3 Oct 2018 (14_40_46).txt 101.711872 21
00Pf4000006gnbQEAQ 001f400000ZP5yVAAT_3 Oct 2018 (14_40_17).pdf 112.197632 22
00Pf4000006gnbuEAA 001f400000ZP5yVAAT_3 Oct 2018 (14_40_52).txt 122.683392 23
00Pf4000006gnbVEAQ 001f400000ZP5yVAAT_3 Oct 2018 (14_40_26).doc 123.731968 24
00Pf4000006gnbzEAA 001f400000ZP5yWAAT_3 Oct 2018 (14_41_0).pdf 124.780544 25
00Pf4000006gnc4EAA 001f400000ZP5yWAAT_3 Oct 2018 (14_41_5).pdf 130.023424 26
00Pf4000006gnc9EAA 001f400000ZP5yWAAT_3 Oct 2018 (14_41_11).pdf 140.509184 27
00Pf4000006gncdEAA 001f400000ZP5yWAAT_3 Oct 2018 (14_41_56).txt 145.752064 28
00Pf4000006gncEEAQ 001f400000ZP5yWAAT_3 Oct 2018 (14_41_30).doc 146.800640 29
00Pf4000006gnciEAA 001f400000ZP5yWAAT_3 Oct 2018 (14_42_3).txt 157.286400 30
00Pf4000006gncJEAQ 001f400000ZP5yWAAT_3 Oct 2018 (14_41_33).doc 162.529280 31
00Pf4000006gncKEAQ 001f400000ZP5ycAAD_3 Oct 2018 (14_48_11).txt 173.015040 32
00Pf4000006gncnEAA 001f400000ZP5yXAAT_3 Oct 2018 (14_42_12).pdf 174.063616 33
00Pf4000006gncsEAA 001f400000ZP5yXAAT_3 Oct 2018 (14_42_15).pdf 179.306496 34
00Pf4000006gncTEAQ 001f400000ZP5yWAAT_3 Oct 2018 (14_41_44).doc 189.792256 35
00Pf4000006gncxEAA 001f400000ZP5yXAAT_3 Oct 2018 (14_42_30).pdf 200.278016 36
00Pf4000006gncYEAQ 001f400000ZP5yWAAT_3 Oct 2018 (14_41_53).txt 201.326592 37
00Pf4000006gnd2EAA 001f400000ZP5yXAAT_3 Oct 2018 (14_42_46).doc 202.375168 38
00Pf4000006gnd7EAA 001f400000ZP5yXAAT_3 Oct 2018 (14_42_49).doc 207.618048 39
00Pf4000006gndbEAA 001f400000ZP5yYAAT_3 Oct 2018 (14_43_23).pdf 212.860928 40
00Pf4000006gndCEAQ 001f400000ZP5yXAAT_3 Oct 2018 (14_42_54).doc 223.346688 41
00Pf4000006gndgEAA 001f400000ZP5yYAAT_3 Oct 2018 (14_43_30).pdf 233.832448 42
00Pf4000006gnDhEAI Snake_River_(5mb).jpg 239.077777 43
00Pf4000006gndHEAQ 001f400000ZP5yXAAT_3 Oct 2018 (14_43_3).txt 240.126353 44
00Pf4000006gndlEAA 001f400000ZP5yYAAT_3 Oct 2018 (14_43_39).doc 241.174929 45
00Pf4000006gndMEAQ 001f400000ZP5yXAAT_3 Oct 2018 (14_43_6).txt 246.417809 46
00Pf4000006gndqEAA 001f400000ZP5yYAAT_3 Oct 2018 (14_43_41).doc 251.660689 47
00Pf4000006gnDrEAI Pizigani_1367_Chart_10MB.jpg 261.835395 48
00Pf4000006gndREAQ 001f400000ZP5yXAAT_3 Oct 2018 (14_43_11).txt 272.321155 49
00Pf4000006gndvEAA 001f400000ZP5yYAAT_3 Oct 2018 (14_43_47).doc 282.806915 50
00Pf4000006gnDwEAI Spinner_Dolphin_Indian_Ocean_07-2017.jpg 284.109019 51
00Pf4000006gndWEAQ 001f400000ZP5yYAAT_3 Oct 2018 (14_43_20).pdf 285.157595 52
00Pf4000006gnDXEAY 440 Kb.jpg 285.609143 53
00Pf4000006gne0EAA 001f400000ZP5yYAAT_3 Oct 2018 (14_43_59).txt 286.657719 54
00Pf4000006gne5EAA 001f400000ZP5yYAAT_3 Oct 2018 (14_44_2).txt 291.900599 55
00Pf4000006gneaEAA 001f400000ZP5yZAAT_3 Oct 2018 (14_44_59).txt 302.386359 56
00Pf4000006gneAEAQ 001f400000ZP5yYAAT_3 Oct 2018 (14_44_7).txt 312.872119 57
00Pf4000006gneeEAA 001f400000ZP5yZAAT_3 Oct 2018 (14_44_40).doc 323.357879 58
I would like the results to look like this, where they are partitioned into 50 MB batches:
SourceId Title RunningTotal RowNo Batch
00Pf4000006gna3EAA 001f400000ZP5yUAAT_3 Oct 2018 (14_37_32).pdf 5.242880 1 1
00Pf4000006gna8EAA 001f400000ZP5yUAAT_3 Oct 2018 (14_37_38).doc 6.291456 2 1
00Pf4000006gnacEAA 001f400000ZP5yUAAT_3 Oct 2018 (14_38_44).pdf 7.340032 3 1
00Pf4000006gnaDEAQ 001f400000ZP5yUAAT_3 Oct 2018 (14_37_41).doc 12.582912 4 1
00Pf4000006gnahEAA 001f400000ZP5yUAAT_3 Oct 2018 (14_38_47).pdf 17.825792 5 1
00Pf4000006gnaIEAQ 001f400000ZP5yUAAT_3 Oct 2018 (14_37_46).doc 23.068672 6 1
00Pf4000006gnamEAA 001f400000ZP5yUAAT_3 Oct 2018 (14_38_54).pdf 33.554432 7 1
00Pf4000006gnaNEAQ 001f400000ZP5yUAAT_3 Oct 2018 (14_37_52).txt 34.603008 8 1
00Pf4000006gnarEAA 001f400000ZP5yUAAT_3 Oct 2018 (14_39_20).doc 35.651584 9 1
00Pf4000006gnaSEAQ 001f400000ZP5yUAAT_3 Oct 2018 (14_37_55).txt 40.894464 10 1
00Pf4000006gnawEAA 001f400000ZP5yUAAT_3 Oct 2018 (14_39_24).doc 46.137344 11 1
00Pf4000006gnaXEAQ 001f400000ZP5yUAAT_3 Oct 2018 (14_38_0).txt 51.380224 12 1
00Pf4000006gnb1EAA 001f400000ZP5yUAAT_3 Oct 2018 (14_39_30).doc 61.865984 13 2
00Pf4000006gnb6EAA 001f400000ZP5yUAAT_3 Oct 2018 (14_39_50).txt 62.914560 14 2
00Pf4000006gnbaEAA 001f400000ZP5yVAAT_3 Oct 2018 (14_40_29).doc 68.157440 15 2
00Pf4000006gnbBEAQ 001f400000ZP5yUAAT_3 Oct 2018 (14_39_58).txt 78.643200 16 2
00Pf4000006gnbfEAA 001f400000ZP5yVAAT_3 Oct 2018 (14_40_34).doc 89.128960 17 2
00Pf4000006gnbGEAQ 001f400000ZP5yVAAT_3 Oct 2018 (14_40_7).pdf 90.177536 18 2
00Pf4000006gnbkEAA 001f400000ZP5yVAAT_3 Oct 2018 (14_40_43).txt 91.226112 19 2
00Pf4000006gnbLEAQ 001f400000ZP5yVAAT_3 Oct 2018 (14_40_12).pdf 96.468992 20 2
00Pf4000006gnbpEAA 001f400000ZP5yVAAT_3 Oct 2018 (14_40_46).txt 101.711872 21 3
00Pf4000006gnbQEAQ 001f400000ZP5yVAAT_3 Oct 2018 (14_40_17).pdf 112.197632 22 3
00Pf4000006gnbuEAA 001f400000ZP5yVAAT_3 Oct 2018 (14_40_52).txt 122.683392 23 3
00Pf4000006gnbVEAQ 001f400000ZP5yVAAT_3 Oct 2018 (14_40_26).doc 123.731968 24 3
00Pf4000006gnbzEAA 001f400000ZP5yWAAT_3 Oct 2018 (14_41_0).pdf 124.780544 25 3
00Pf4000006gnc4EAA 001f400000ZP5yWAAT_3 Oct 2018 (14_41_5).pdf 130.023424 26 3
00Pf4000006gnc9EAA 001f400000ZP5yWAAT_3 Oct 2018 (14_41_11).pdf 140.509184 27 3
00Pf4000006gncdEAA 001f400000ZP5yWAAT_3 Oct 2018 (14_41_56).txt 145.752064 28 3
00Pf4000006gncEEAQ 001f400000ZP5yWAAT_3 Oct 2018 (14_41_30).doc 146.800640 29 3
00Pf4000006gnciEAA 001f400000ZP5yWAAT_3 Oct 2018 (14_42_3).txt 157.286400 30 4
00Pf4000006gncJEAQ 001f400000ZP5yWAAT_3 Oct 2018 (14_41_33).doc 162.529280 31 4
00Pf4000006gncKEAQ 001f400000ZP5ycAAD_3 Oct 2018 (14_48_11).txt 173.015040 32 4
00Pf4000006gncnEAA 001f400000ZP5yXAAT_3 Oct 2018 (14_42_12).pdf 174.063616 33 4
00Pf4000006gncsEAA 001f400000ZP5yXAAT_3 Oct 2018 (14_42_15).pdf 179.306496 34 4
00Pf4000006gncTEAQ 001f400000ZP5yWAAT_3 Oct 2018 (14_41_44).doc 189.792256 35 4
00Pf4000006gncxEAA 001f400000ZP5yXAAT_3 Oct 2018 (14_42_30).pdf 200.278016 36 5
00Pf4000006gncYEAQ 001f400000ZP5yWAAT_3 Oct 2018 (14_41_53).txt 201.326592 37 5
00Pf4000006gnd2EAA 001f400000ZP5yXAAT_3 Oct 2018 (14_42_46).doc 202.375168 38 5
00Pf4000006gnd7EAA 001f400000ZP5yXAAT_3 Oct 2018 (14_42_49).doc 207.618048 39 5
00Pf4000006gndbEAA 001f400000ZP5yYAAT_3 Oct 2018 (14_43_23).pdf 212.860928 40 5
00Pf4000006gndCEAQ 001f400000ZP5yXAAT_3 Oct 2018 (14_42_54).doc 223.346688 41 5
00Pf4000006gndgEAA 001f400000ZP5yYAAT_3 Oct 2018 (14_43_30).pdf 233.832448 42 5
00Pf4000006gnDhEAI Snake_River_(5mb).jpg 239.077777 43 5
00Pf4000006gndHEAQ 001f400000ZP5yXAAT_3 Oct 2018 (14_43_3).txt 240.126353 44 5
00Pf4000006gndlEAA 001f400000ZP5yYAAT_3 Oct 2018 (14_43_39).doc 241.174929 45 5
00Pf4000006gndMEAQ 001f400000ZP5yXAAT_3 Oct 2018 (14_43_6).txt 246.417809 46 5
00Pf4000006gndqEAA 001f400000ZP5yYAAT_3 Oct 2018 (14_43_41).doc 251.660689 47 6
00Pf4000006gnDrEAI Pizigani_1367_Chart_10MB.jpg 261.835395 48 6
00Pf4000006gndREAQ 001f400000ZP5yXAAT_3 Oct 2018 (14_43_11).txt 272.321155 49 6
00Pf4000006gndvEAA 001f400000ZP5yYAAT_3 Oct 2018 (14_43_47).doc 282.806915 50 6
00Pf4000006gnDwEAI Spinner_Dolphin_Indian_Ocean_07-2017.jpg 284.109019 51 6
00Pf4000006gndWEAQ 001f400000ZP5yYAAT_3 Oct 2018 (14_43_20).pdf 285.157595 52 6
00Pf4000006gnDXEAY 440 Kb.jpg 285.609143 53 6
00Pf4000006gne0EAA 001f400000ZP5yYAAT_3 Oct 2018 (14_43_59).txt 286.657719 54 6
00Pf4000006gne5EAA 001f400000ZP5yYAAT_3 Oct 2018 (14_44_2).txt 291.900599 55 6
00Pf4000006gneaEAA 001f400000ZP5yZAAT_3 Oct 2018 (14_44_59).txt 302.386359 56 7
00Pf4000006gneAEAQ 001f400000ZP5yYAAT_3 Oct 2018 (14_44_7).txt 312.872119 57 7
00Pf4000006gneeEAA 001f400000ZP5yZAAT_3 Oct 2018 (14_44_40).doc 323.357879 58 7
Help would be much appreciated, thank you.
You can use integer division:
SELECT (CAST(SUM(DATALENGTH(VersionData) * 0.000001)
             OVER (PARTITION BY DATALENGTH(SourceId)
                   ORDER BY SourceId) AS INT) / 50) + 1 AS Batch
FROM TargetContentVersion
Here's a quick sample that demonstrates how it works:
CREATE TABLE #t (id INT IDENTITY(1,1), size NUMERIC(8,6))
GO
INSERT INTO #t
SELECT RAND() * 20
GO 20 -- Create 20 sample rows with random sizes between 0 and 20
SELECT id, SUM(size) OVER (ORDER BY id) AS RunningTotal,
(CAST(SUM(size) OVER (ORDER BY id) AS INT) / 50) + 1 AS Batch
FROM #t
id RunningTotal Batch
1 2.303367 1
2 4.049776 1
3 19.177784 1
4 28.637981 1
5 29.675840 1
6 32.781603 1
7 33.859586 1
8 36.633733 1
9 39.413363 1
10 58.004502 2
11 70.363837 2
12 82.897268 2
13 83.946657 2
14 85.623044 2
15 87.432670 2
16 103.304830 3
17 103.709745 3
18 122.165664 3
19 126.554616 3
20 128.019929 3
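The bucketing rule is easy to sanity-check outside SQL. A small Python sketch of the same logic (batch = integer part of the running total, integer-divided by 50, plus 1), using made-up record sizes in MB:

```python
# The batching rule from the answer above: Batch = int(RunningTotal) // 50 + 1,
# demonstrated on hypothetical record sizes in MB.
sizes = [5.24, 6.29, 40.0, 12.5, 30.0, 8.0]

batches = []
running = 0.0
for size in sizes:
    running += size
    batches.append(int(running) // 50 + 1)

print(batches)  # [1, 1, 2, 2, 2, 3]
```

As in the SQL version, a record that crosses a 50 MB boundary lands in the next batch, so batches can slightly exceed 50 MB depending on record sizes.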
I've worked it out.
Script below for those interested.
WITH cte1 AS (
    SELECT SourceId, Title, DATALENGTH(VersionData) * 0.000001 AS RecordSize,
           CAST(SUM(DATALENGTH(VersionData) * 0.000001)
                OVER (PARTITION BY DATALENGTH(SourceId) ORDER BY SourceId) AS INT) AS RunningTotal,
           RANK() OVER (ORDER BY SourceId) AS RowNo
    FROM TargetContentVersion WITH (NOLOCK)
)
SELECT SourceId, Title, RecordSize, RunningTotal, RowNo,
       SUM(RunningTotal) OVER (PARTITION BY SourceId ORDER BY SourceId) / 50 AS Batch
FROM cte1
Another option would be to use dense_rank:
WITH CTE AS
(
    SELECT SourceId, Title, Description,
           SUM(DATALENGTH(VersionData) * 0.000001) OVER (PARTITION BY DATALENGTH(SourceId) ORDER BY SourceId) AS RunningTotal,
           RANK() OVER (ORDER BY SourceId) AS RowNo
    FROM TargetContentVersion WITH (NOLOCK)
)
SELECT SourceId, Title, Description, RunningTotal, RowNo,
       DENSE_RANK() OVER (ORDER BY CAST(RunningTotal AS INT) / 50) AS Batch
FROM CTE
Note the cast of RunningTotal to int, so the division is integer division.

print lines between matching strings using awk

I have a file containing the following entries:
Changing to: /Out/AP/SD
sftp> ls -lrt
-rw-rw-rw- 1 user group 0 Oct 25 10:24 HOLD_201810261247_M.csv
-rw-rw-rw- 1 user group 0 Oct 25 10:24 HOLD_201810261247_S.csv
-rw-rw-rw- 1 user group 1724355981 Oct 25 10:24 HOLD_201810261310.csv
-rw-rw-rw- 1 user group 2319514056 Oct 25 10:26 FINAL_201810261347.csv
-rw-rw-rw- 1 user group 0 Oct 25 10:26 SUMMARY_201810261343.csv
-rw-rw-rw- 1 user group 0 Oct 26 10:16 HOLD_201810271245_S.csv
-rw-rw-rw- 1 user group 0 Oct 26 10:16 HOLD_201810271246_M.csv
-rw-rw-rw- 1 user group 1725252957 Oct 26 10:17 HOLD_201810271302.csv
-rw-rw-rw- 1 user group 2244889790 Oct 26 10:21 FINAL_201810271346.csv
-rw-rw-rw- 1 user group 0 Oct 26 10:21 SUMMARY_201810271342.csv
sftp> bye
Changing to: /Out/AS/SD
sftp> ls -lrt
-rw-rw-rw- 1 user group 174172077 Oct 25 13:01 HOLD_201810261753.csv
-rw-rw-rw- 1 user group 191231356 Oct 25 13:01 HOLD_201810261753_M.csv
-rw-rw-rw- 1 user group 177010167 Oct 25 13:01 HOLD_201810261753_S.csv
-rw-rw-rw- 1 user group 171490539 Oct 25 13:02 FINAL_201810261808.csv
-rw-rw-rw- 1 user group 0 Oct 25 13:02 SUMMARY_201810261808.csv
-rw-rw-rw- 1 user group 97238298 Oct 25 13:02 VAS_HOLD_201810261751.csv
sftp> bye
Changing to: /Out/BR/SD
sftp> ls -lrt
-rw-rw-rw- 1 user group 0 Oct 25 11:24 HOLD_201810261529_S.csv
-rw-rw-rw- 1 user group 1721060436 Oct 25 11:25 HOLD_201810261544_M.csv
-rw-rw-rw- 1 user group 1537619643 Oct 25 11:26 HOLD_201810261546.csv
-rw-rw-rw- 1 user group 1545973081 Oct 25 11:28 FINAL_201810261601.csv
-rw-rw-rw- 1 user group 0 Oct 25 11:28 SUMMARY_201810261559.csv
sftp> bye
I want to print all lines between sftp> ls -lrt and sftp> bye, which I can get using:
awk '/sftp> ls -lrt/{flag=1;next}/sftp> bye/{flag=0}flag' filename
but I want to print the data as:
/Out/AP/SD -rw-rw-rw- 1 user group 0 Oct 25 10:24 HOLD_201810261247_M.csv
/Out/AP/SD -rw-rw-rw- 1 user group 0 Oct 25 10:24 HOLD_201810261247_S.csv
/Out/AP/SD -rw-rw-rw- 1 user group 1724355981 Oct 25 10:24 HOLD_201810261310.csv
.........
/Out/AS/SD -rw-rw-rw- 1 user group 174172077 Oct 25 13:01 HOLD_201810261753.csv
/Out/AS/SD -rw-rw-rw- 1 user group 191231356 Oct 25 13:01 HOLD_201810261753_M.csv
and so on...
Thanks for your support.
Here is a version with multi-line output, excluding empty lines:
$ awk '/^Changing to: /  { match($0, /^Changing to: /); dir = substr($0, RLENGTH+1) }
       /^sftp> ls -lrt$/ { doprint = 1; next }
       /^sftp> bye$/     { doprint = 0 }
       /.+/              { if (doprint == 1) print(dir " " $0) }' sftp.out
/Out/AP/SD -rw-rw-rw- 1 user group 0 Oct 25 10:24 HOLD_201810261247_M.csv
/Out/AP/SD -rw-rw-rw- 1 user group 0 Oct 25 10:24 HOLD_201810261247_S.csv
/Out/AP/SD -rw-rw-rw- 1 user group 1724355981 Oct 25 10:24 HOLD_201810261310.csv
/Out/AP/SD -rw-rw-rw- 1 user group 2319514056 Oct 25 10:26 FINAL_201810261347.csv
/Out/AP/SD -rw-rw-rw- 1 user group 0 Oct 25 10:26 SUMMARY_201810261343.csv
/Out/AP/SD -rw-rw-rw- 1 user group 0 Oct 26 10:16 HOLD_201810271245_S.csv
/Out/AP/SD -rw-rw-rw- 1 user group 0 Oct 26 10:16 HOLD_201810271246_M.csv
/Out/AP/SD -rw-rw-rw- 1 user group 1725252957 Oct 26 10:17 HOLD_201810271302.csv
/Out/AP/SD -rw-rw-rw- 1 user group 2244889790 Oct 26 10:21 FINAL_201810271346.csv
/Out/AP/SD -rw-rw-rw- 1 user group 0 Oct 26 10:21 SUMMARY_201810271342.csv
/Out/AS/SD -rw-rw-rw- 1 user group 174172077 Oct 25 13:01 HOLD_201810261753.csv
/Out/AS/SD -rw-rw-rw- 1 user group 191231356 Oct 25 13:01 HOLD_201810261753_M.csv
/Out/AS/SD -rw-rw-rw- 1 user group 177010167 Oct 25 13:01 HOLD_201810261753_S.csv
/Out/AS/SD -rw-rw-rw- 1 user group 171490539 Oct 25 13:02 FINAL_201810261808.csv
/Out/AS/SD -rw-rw-rw- 1 user group 0 Oct 25 13:02 SUMMARY_201810261808.csv
/Out/AS/SD -rw-rw-rw- 1 user group 97238298 Oct 25 13:02 VAS_HOLD_201810261751.csv
/Out/BR/SD -rw-rw-rw- 1 user group 0 Oct 25 11:24 HOLD_201810261529_S.csv
/Out/BR/SD -rw-rw-rw- 1 user group 1721060436 Oct 25 11:25 HOLD_201810261544_M.csv
/Out/BR/SD -rw-rw-rw- 1 user group 1537619643 Oct 25 11:26 HOLD_201810261546.csv
/Out/BR/SD -rw-rw-rw- 1 user group 1545973081 Oct 25 11:28 FINAL_201810261601.csv
/Out/BR/SD -rw-rw-rw- 1 user group 0 Oct 25 11:28 SUMMARY_201810261559.csv
If you need everything as a single line of output:
$ awk '/^Changing to: / { match($0,/^Changing to: /); printf("%s ",substr($0,RLENGTH+1)); } /^sftp> ls -lrt$/ { doprint=1; next; } /^sftp> bye$/ { doprint=0; } /.+/ { if (doprint == 1) printf("%s ",$0); } END { print(""); }' sftp.out
/Out/AP/SD -rw-rw-rw- 1 user group 0 Oct 25 10:24 HOLD_201810261247_M.csv -rw-rw-rw- 1 user group 0 Oct 25 10:24 HOLD_201810261247_S.csv -rw-rw-rw- 1 user group 1724355981 Oct 25 10:24 HOLD_201810261310.csv -rw-rw-rw- 1 user group 2319514056 Oct 25 10:26 FINAL_201810261347.csv -rw-rw-rw- 1 user group 0 Oct 25 10:26 SUMMARY_201810261343.csv -rw-rw-rw- 1 user group 0 Oct 26 10:16 HOLD_201810271245_S.csv -rw-rw-rw- 1 user group 0 Oct 26 10:16 HOLD_201810271246_M.csv -rw-rw-rw- 1 user group 1725252957 Oct 26 10:17 HOLD_201810271302.csv -rw-rw-rw- 1 user group 2244889790 Oct 26 10:21 FINAL_201810271346.csv -rw-rw-rw- 1 user group 0 Oct 26 10:21 SUMMARY_201810271342.csv /Out/AS/SD -rw-rw-rw- 1 user group 174172077 Oct 25 13:01 HOLD_201810261753.csv -rw-rw-rw- 1 user group 191231356 Oct 25 13:01 HOLD_201810261753_M.csv -rw-rw-rw- 1 user group 177010167 Oct 25 13:01 HOLD_201810261753_S.csv -rw-rw-rw- 1 user group 171490539 Oct 25 13:02 FINAL_201810261808.csv -rw-rw-rw- 1 user group 0 Oct 25 13:02 SUMMARY_201810261808.csv -rw-rw-rw- 1 user group 97238298 Oct 25 13:02 VAS_HOLD_201810261751.csv /Out/BR/SD -rw-rw-rw- 1 user group 0 Oct 25 11:24 HOLD_201810261529_S.csv -rw-rw-rw- 1 user group 1721060436 Oct 25 11:25 HOLD_201810261544_M.csv -rw-rw-rw- 1 user group 1537619643 Oct 25 11:26 HOLD_201810261546.csv -rw-rw-rw- 1 user group 1545973081 Oct 25 11:28 FINAL_201810261601.csv -rw-rw-rw- 1 user group 0 Oct 25 11:28 SUMMARY_201810261559.csv
Note that a space is appended after each entry and a newline is printed at the end.
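If the awk one-liners ever become hard to maintain, the same state machine is straightforward in a scripting language. A Python sketch under the same assumptions about the log format (literal "Changing to: ", "sftp> ls -lrt", and "sftp> bye" markers):

```python
# Prefix each listing line with the directory from the preceding
# "Changing to:" line, keeping only lines between "sftp> ls -lrt"
# and "sftp> bye", and skipping empty lines.
def tag_listing(lines):
    out = []
    current_dir = ""
    printing = False
    for line in lines:
        if line.startswith("Changing to: "):
            current_dir = line[len("Changing to: "):]
        elif line == "sftp> ls -lrt":
            printing = True
        elif line == "sftp> bye":
            printing = False
        elif printing and line.strip():
            out.append(f"{current_dir} {line}")
    return out

sample = [
    "Changing to: /Out/AP/SD",
    "sftp> ls -lrt",
    "-rw-rw-rw- 1 user group 0 Oct 25 10:24 HOLD_201810261247_M.csv",
    "sftp> bye",
]
print(tag_listing(sample))
```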

Saving Dataframe to text on hdfs generated more than the total count of files

I have a DataFrame r with a count of 165 rows; I save it in text format onto HDFS with the command below:
scala> r.rdd.saveAsTextFile("top3_text")
Here is the list of HDFS files (to save space, I kept only a portion of the list):
[paslechoix#gw03 ~]$ hdfs dfs -ls top3_text/*
-rw-r--r-- 3 paslechoix hdfs 0 2018-03-21 22:46 top3_text/_SUCCESS
-rw-r--r-- 3 paslechoix hdfs 55 2018-03-21 22:46 top3_text/part-00025
-rw-r--r-- 3 paslechoix hdfs 57 2018-03-21 22:46 top3_text/part-00026
-rw-r--r-- 3 paslechoix hdfs 54 2018-03-21 22:46 top3_text/part-00027
-rw-r--r-- 3 paslechoix hdfs 54 2018-03-21 22:46 top3_text/part-00028
-rw-r--r-- 3 paslechoix hdfs 0 2018-03-21 22:46 top3_text/part-00029
-rw-r--r-- 3 paslechoix hdfs 0 2018-03-21 22:46 top3_text/part-00030
-rw-r--r-- 3 paslechoix hdfs 0 2018-03-21 22:46 top3_text/part-00031
-rw-r--r-- 3 paslechoix hdfs 0 2018-03-21 22:46 top3_text/part-00032
-rw-r--r-- 3 paslechoix hdfs 0 2018-03-21 22:46 top3_text/part-00033
-rw-r--r-- 3 paslechoix hdfs 0 2018-03-21 22:46 top3_text/part-00034
-rw-r--r-- 3 paslechoix hdfs 0 2018-03-21 22:46 top3_text/part-00035
-rw-r--r-- 3 paslechoix hdfs 0 2018-03-21 22:46 top3_text/part-00036
-rw-r--r-- 3 paslechoix hdfs 0 2018-03-21 22:46 top3_text/part-00197
-rw-r--r-- 3 paslechoix hdfs 54 2018-03-21 22:46 top3_text/part-00198
-rw-r--r-- 3 paslechoix hdfs 54 2018-03-21 22:46 top3_text/part-00199
[paslechoix#gw03 ~]$ hdfs dfs -cat top3_text/part-00163
[paslechoix#gw03 ~]$ hdfs dfs -cat top3_text/part-00162
[paslechoix#gw03 ~]$ hdfs dfs -cat top3_text/part-00199
[663,30,139.99,1]
[664,30,139.99,2]
[665,30,139.99,3]
This is interesting:
1. What makes saveAsTextFile generate 200 files?
2. Why are some files empty while others contain multiple records?
Thank you.
I found the reason!
It is actually decided by the spark.sql.shuffle.partitions setting, whose default value is 200: https://spark.apache.org/docs/1.6.1/sql-programming-guide.html
Reducer number
In Shark, default reducer number is 1 and is controlled
by the property mapred.reduce.tasks. Spark SQL deprecates this
property in favor of spark.sql.shuffle.partitions, whose default value
is 200. Users may customize this property via SET:
SET spark.sql.shuffle.partitions=10;
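The empty files fall out of hash partitioning: 165 rows spread over 200 shuffle partitions must, by pigeonhole, leave at least 35 partitions with no rows, and each partition becomes one part-* file. A small Python sketch of the effect (md5 is used here only as a deterministic stand-in for Spark's actual partitioner):

```python
import hashlib

# 165 rows hashed into 200 buckets: some buckets stay empty,
# which is why some part-* files on HDFS have size 0.
NUM_PARTITIONS = 200
NUM_ROWS = 165

buckets = [0] * NUM_PARTITIONS
for i in range(NUM_ROWS):
    h = int(hashlib.md5(str(i).encode()).hexdigest(), 16)
    buckets[h % NUM_PARTITIONS] += 1

empty = sum(1 for n in buckets if n == 0)
print(f"{empty} of {NUM_PARTITIONS} partitions are empty")
```

In Spark itself, a common way to avoid the many tiny files is to reduce partitions before writing, e.g. r.coalesce(1).rdd.saveAsTextFile(...), or to lower spark.sql.shuffle.partitions as quoted above.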

OSX: Obj-C: how to get user last login and logout time of the user?

I need to fetch the last login and logout times of the user using Objective-C. Is it possible?
I am able to see the whole record manually by viewing the following file:
/private/var/log/accountpolicy.log
Even if I read and parse this file from code, there is a chance that the user, and therefore the app, does not have permission to access it.
P.S.: I cannot show the user a rights-elevation prompt, as I am doing this in a background process.
Check the last command.
https://www.freebsd.org/cgi/man.cgi?query=last&sektion=1
LAST(1) FreeBSD General Commands Manual LAST(1)
NAME
last -- indicate last logins of users and ttys
SYNOPSIS
last [-swy] [-d [[CC]YY][MMDD]hhmm[.SS]] [-f file] [-h host] [-n maxrec] [-t tty] [user ...]
ex:
$ last
gbuzogany ttys001 Fri Mar 18 11:21 - 11:27 (00:06)
gbuzogany ttys003 Fri Mar 18 10:24 - 11:18 (00:54)
gbuzogany ttys003 Fri Mar 18 10:07 - 10:07 (00:00)
gbuzogany ttys002 Fri Mar 18 10:03 - 11:18 (01:15)
gbuzogany ttys001 Fri Mar 18 10:01 - 10:30 (00:29)
gbuzogany ttys001 Fri Mar 18 09:31 - 09:33 (00:01)
gbuzogany ttys004 Thu Mar 17 15:34 - 15:52 (00:18)
...
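If reading accountpolicy.log is blocked by permissions, one workaround is to run last as a subprocess (NSTask from Objective-C) and parse its output. A Python sketch of just the parsing step; the field layout below matches the sample output above but is an assumption about your platform's last version:

```python
# Parse a line like
#   "gbuzogany ttys001 Fri Mar 18 11:21 - 11:27 (00:06)"
# into (user, tty, login_time, logout_time).
def parse_last_line(line):
    parts = line.split()
    user, tty = parts[0], parts[1]
    login = " ".join(parts[2:6])  # e.g. "Fri Mar 18 11:21"
    logout = parts[7]             # the field after the "-", e.g. "11:27"
    return user, tty, login, logout

line = "gbuzogany ttys001 Fri Mar 18 11:21 - 11:27 (00:06)"
print(parse_last_line(line))
```

Note that lines for still-active sessions or reboots have a different shape ("still logged in", "shutdown"), so real parsing code would need to handle those variants as well.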