Separate text to the columns [closed] - vb.net

I have this crazy text in a text file which I want to split into columns for a datagrid. Any advice on how I can do it?
Application: ws_cp_ixnsrv_mm DBID: 845 Status: APP_STATUS_STOPPED Runmode: EXITED
Application: mng_dbserver_p DBID: 469 Status: APP_STATUS_RUNNING Runmode: PRIMARY
Application: ird_dap DBID: 470 Status: APP_STATUS_UNKNOWN Runmode: EXITED
Application: mng_dap DBID: 471 Status: APP_STATUS_UNKNOWN Runmode: EXITED
Application: mng_messagesrv_p DBID: 472 Status: APP_STATUS_RUNNING Runmode: PRIMARY
Application: mng_scs_p DBID: 473 Status: APP_STATUS_STOPPED Runmode: EXITED Error
Application: pulse_collector_02 DBID: 827 Status: APP_STATUS_RUNNING Runmode: BACKUP
Application: was_tomcat_1 DBID: 829 Status: APP_STATUS_RUNNING Runmode: PRIMARY
Application: svc_nss_p DBID: 850 Status: APP_STATUS_RUNNING Runmode: PRIMARY
Application: svc_nss_b DBID: 851 Status: APP_STATUS_STOPPED Runmode: EXITED Error
My idea for the columns:
Application, DBID, Status, Runmode
ws_cp_ixnsrv_mm, 845, APP_STATUS_STOPPED, EXITED
pulse_collector_02, 827, APP_STATUS_RUNNING, BACKUP

When you have strings with a regularity to how they are expressed, you can use..
..a regular expression
Quick regex intro:
+ means "One or more of the thing to the left". The thing to the left can be a single thing, or a group of things
[..] means a group of characters, which are defined between the brackets. Hyphen means a range, so [a-z] is "lowercase a to z", [a-zA-Z] is "lowercase a to z or uppercase A to Z", [abc] is "a or b or c", [abce-t] is "a or b or c or (e to t)"
(?<x>...) means "whatever is matched by ... is captured into a group (a variable) named x"
\d means "any digit", in other words, equal to [0-9]
The code:
Dim r As New Regex("Application: (?<a>[a-z0-9_]+) DBID: (?<d>\d+) Status: (?<s>[A-Z_]+) Runmode: (?<r>[A-Z]+)") 'requires Imports System.Text.RegularExpressions
This means:
Application: (?<a>[a-z0-9_]+) DBID: (?<d>\d+) Status: (?<s>[A-Z_]+) Runmode: (?<r>[A-Z]+)
^-----------^^--------------^^-----^^-------^^-------^^-----------^^--------^^----------^
1            2               3      4        5        6            7         8
The literal string Application: followed by
One or more of a to z, 0 to 9, or underscore (names like pulse_collector_02 contain digits), captured into a followed by
literal DBID: followed by
One or more of any digit, captured into d followed by
Literal string Status: followed by
One or more of A to Z or underscore, captured into s followed by
Literal string Runmode: followed by
One or more of A to Z, captured into r
And how it's used:
Dim s = "Application: ws_cp_ixnsrv_mm DBID: 845 Status: APP_STATUS_STOPPED Runmode: EXITED"
Dim m = r.Match(s)
m.Groups("a").Value 'it's the application column e.g. "ws_cp_ixnsrv_mm"
m.Groups("d").Value 'it's the dbid column e.g. "845"
m.Groups("s").Value 'it's the status column e.g. "APP_STATUS_STOPPED"
m.Groups("r").Value 'it's the runmode column e.g. "EXITED"
You'd run the Match for every line: read the file in and process each line in turn.
If you want "EXITED Error" captured in r, modify the character class from [A-Z] to [a-zA-Z ] so it includes the space and the lowercase characters in Error.
Alternatively, if you already know how to split strings, maybe life would be easier by doing:
Dim s = "Application: ws_cp_ixnsrv_mm DBID: 845 Status: APP_STATUS_STOPPED Runmode: EXITED"
s = s.Replace("Application: ","").Replace(" DBID: ","|").Replace(" Status: ","|").Replace(" Runmode: ","|")
Dim ss = s.Split("|"c)
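Either way you end up with four values per line; with the split approach they land in ss(0) through ss(3), so (again assuming a DataGridView named dgv):
dgv.Rows.Add(ss(0), ss(1), ss(2), ss(3)) 'Application, DBID, Status, Runmode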

Related

Azure Data Factory: ErrorCode=TypeConversionFailure,Exception occurred when converting value : ErrorCode: 2200

Can someone let me know why Azure Data Factory is trying to convert a value from String to type Double?
I am getting the error:
{
  "errorCode": "2200",
  "message": "ErrorCode=TypeConversionFailure,Exception occurred when converting value '+44 07878 44444' for column name 'telephone2' from type 'String' (precision:255, scale:255) to type 'Double' (precision:15, scale:255). Additional info: Input string was not in a correct format.",
  "failureType": "UserError",
  "target": "Copy Table to EnrDB",
  "details": [
    {
      "errorCode": 0,
      "message": "'Type=System.FormatException,Message=Input string was not in a correct format.,Source=mscorlib,'",
      "details": []
    }
  ]
}
My Sink looks like the following: [screenshot omitted]
I don't have any mapping set.
The column setting for the field 'telephone2' is as follows: [screenshot omitted]
I changed the 'table option' to none; however, I got the following error:
{
  "errorCode": "2200",
  "message": "Failure happened on 'Source' side. ErrorCode=SqlOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=A database operation failed with the following error: 'Internal system error occurred.\r\nStatement ID: {C2C38377-5A14-4BB7-9298-28C3C351A40E} | Query hash: 0x2C885D2041993FFA | Distributed request ID: {6556701C-BA76-4D0F-8976-52695BBFE6A7}. Total size of data scanned is 134 megabytes, total size of data moved is 102 megabytes, total size of data written is 0 megabytes.',Source=,''Type=System.Data.SqlClient.SqlException,Message=Internal system error occurred.\r\nStatement ID: {C2C38377-5A14-4BB7-9298-28C3C351A40E} | Query hash: 0x2C885D2041993FFA | Distributed request ID: {6556701C-BA76-4D0F-8976-52695BBFE6A7}. Total size of data scanned is 134 megabytes, total size of data moved is 102 megabytes, total size of data written is 0 megabytes.,Source=.Net SqlClient Data Provider,SqlErrorNumber=75000,Class=17,ErrorCode=-2146232060,State=1,Errors=[{Class=17,Number=75000,State=1,Message=Internal system error occurred.,},{Class=0,Number=15885,State=1,Message=Statement ID: {C2C38377-5A14-4BB7-9298-28C3C351A40E} | Query hash: 0x2C885D2041993FFA | Distributed request ID: {6556701C-BA76-4D0F-8976-52695BBFE6A7}. Total size of data scanned is 134 megabytes, total size of data moved is 102 megabytes, total size of data written is 0 megabytes.,},],'",
  "failureType": "UserError",
  "target": "Copy Table to EnrDB",
  "details": []
}
Any more thoughts?
The issue was resolved by changing the column DataType on the database to match the DataType recorded in Azure Data Factory, i.e. StringType.

How to deal with the error when using Gurobi with cvxpy: Unable to retrieve attribute 'BarIterCount'

How to deal with the error when using Gurobi with cvxpy: AttributeError: Unable to retrieve attribute 'BarIterCount'.
I have an integer programming problem, using cvxpy with Gurobi set as the solver.
When the number of variables is small, the result is OK. Once the number of variables reaches a scale of about 43*13*6, the error occurs. I suppose it may be caused by the scale of the problem, in which the Gurobi solver cannot estimate BarIterCount, which is the maximum number of iterations needed.
Thus I wonder: is there any way to manually set the BarIterCount attribute of Gurobi through the cvxpy interface? Or is there another way to solve this problem?
Thanks for any suggestions.
The trace log is as follows:
If my model is small, e.g. I set the number which indicates the scale of the model to 3, then the program runs fine. The trace is:
Using license file D:\software\lib\site-packages\gurobipy\gurobi.lic
Restricted license - for non-production use only - expires 2022-01-13
Parameter OutputFlag unchanged
Value: 1 Min: 0 Max: 1 Default: 1
D:\software\lib\site-packages\cvxpy\reductions\solvers\solving_chain.py:326: DeprecationWarning: Deprecated, use Model.addMConstr() instead
solver_opts, problem._solver_cache)
Changed value of parameter QCPDual to 1
Prev: 0 Min: 0 Max: 1 Default: 0
Gurobi Optimizer version 9.1.0 build v9.1.0rc0 (win64)
Thread count: 16 physical cores, 32 logical processors, using up to 32 threads
Optimize a model with 126 rows, 370 columns and 2689 nonzeros
Model fingerprint: 0x70d49530
Variable types: 0 continuous, 370 integer (369 binary)
Coefficient statistics:
Matrix range [1e+00, 7e+00]
Objective range [1e+00, 1e+00]
Bounds range [1e+00, 1e+00]
RHS range [1e+00, 6e+00]
Found heuristic solution: objective 7.0000000
Presolve removed 4 rows and 90 columns
Presolve time: 0.01s
Presolved: 122 rows, 280 columns, 1882 nonzeros
Variable types: 0 continuous, 280 integer (279 binary)
Root relaxation: objective 4.307692e+00, 216 iterations, 0.00 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 4.30769 0 49 7.00000 4.30769 38.5% - 0s
H 0 0 6.0000000 4.30769 28.2% - 0s
0 0 5.00000 0 35 6.00000 5.00000 16.7% - 0s
0 0 5.00000 0 37 6.00000 5.00000 16.7% - 0s
0 0 5.00000 0 7 6.00000 5.00000 16.7% - 0s
Cutting planes:
Gomory: 4
Cover: 9
MIR: 4
StrongCG: 1
GUB cover: 9
Zero half: 1
RLT: 1
Explored 1 nodes (849 simplex iterations) in 0.12 seconds
Thread count was 32 (of 32 available processors)
Solution count 2: 6 7
Optimal solution found (tolerance 1.00e-04)
Best objective 6.000000000000e+00, best bound 6.000000000000e+00, gap 0.0000%
If the number is 6, the error occurs:
-------------------------------------------------------
Using license file D:\software\lib\site-packages\gurobipy\gurobi.lic
Restricted license - for non-production use only - expires 2022-01-13
Parameter OutputFlag unchanged
Value: 1 Min: 0 Max: 1 Default: 1
D:\software\lib\site-packages\cvxpy\reductions\solvers\solving_chain.py:326: DeprecationWarning: Deprecated, use Model.addMConstr() instead
solver_opts, problem._solver_cache)
Changed value of parameter QCPDual to 1
Prev: 0 Min: 0 Max: 1 Default: 0
Gurobi Optimizer version 9.1.0 build v9.1.0rc0 (win64)
Thread count: 16 physical cores, 32 logical processors, using up to 32 threads
Traceback (most recent call last):
File "model.py", line 274, in <module>
problem.solve(solver=cp.GUROBI,verbose=True)
File "D:\software\lib\site-packages\cvxpy\problems\problem.py", line 396, in solve
return solve_func(self, *args, **kwargs)
File "D:\software\lib\site-packages\cvxpy\problems\problem.py", line 754, in _solve
self.unpack_results(solution, solving_chain, inverse_data)
File "D:\software\lib\site-packages\cvxpy\problems\problem.py", line 1058, in unpack_results
solution = chain.invert(solution, inverse_data)
File "D:\software\lib\site-packages\cvxpy\reductions\chain.py", line 79, in invert
solution = r.invert(solution, inv)
File "D:\software\lib\site-packages\cvxpy\reductions\solvers\qp_solvers\gurobi_qpif.py", line 59, in invert
s.NUM_ITERS: model.BarIterCount,
File "src\gurobipy\model.pxi", line 343, in gurobipy.gurobipy.Model.__getattr__
File "src\gurobipy\model.pxi", line 1842, in gurobipy.gurobipy.Model.getAttr
File "src\gurobipy\attrutil.pxi", line 100, in gurobipy.gurobipy.__getattr
AttributeError: Unable to retrieve attribute 'BarIterCount'
Hopefully this provides more hints toward a solution.
BarIterCount is the number of barrier iterations performed to solve an LP. This is not a limit on the number of iterations and it should only be queried when the current optimization process has been finished. You cannot set this attribute either, of course.
To actually limit the number of iterations the barrier algorithm is allowed to take, you can use the parameter BarIterLimit.
Please inspect your log file for further information about the solver's behavior.
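For example, cvxpy forwards extra keyword arguments to Gurobi as solver parameters, so a sketch along these lines should cap the barrier iterations (the limit of 1000 is an arbitrary illustration, and the pass-through behavior may vary by cvxpy version):
problem.solve(solver=cp.GUROBI, verbose=True, BarIterLimit=1000)  # BarIterLimit is a settable parameter; BarIterCount is a read-only result attribute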

Error while executing ForEach - Apache PIG

I have 3 logs: a Squid log, a login log, and a logoff log. I need to cross-reference these logs to find out which sites each user accessed.
I'm using Apache Pig and created the following script to do it:
copyFromLocal /home/marcelo/Documentos/hadoop/squid.txt /tmp/squid.txt;
copyFromLocal /home/marcelo/Documentos/hadoop/samba.log_in /tmp/login.txt;
copyFromLocal /home/marcelo/Documentos/hadoop/samba.log_out /tmp/logout.txt;
squid = LOAD '/tmp/squid.txt' USING PigStorage AS (linha: chararray);
nsquid = FOREACH squid GENERATE FLATTEN (STRSPLIT(linha,'[ ]+'));
nsquid = FOREACH nsquid GENERATE $0 AS timeStamp:chararray, $2 AS ipCliente:chararray, $5 AS request:chararray, $6 AS url:chararray;
nsquid = FOREACH nsquid GENERATE FLATTEN (STRSPLIT(timeStamp,'[.]'))AS (timeStamp:int,resto:chararray),ipCliente,request,url;
nsquid = FOREACH nsquid GENERATE (int)$0 AS timeStamp:int, $2 AS ipCliente:chararray,$3 AS request:chararray, $4 AS url:chararray;
connect = FILTER nsquid BY (request=='CONNECT');
login = LOAD '/tmp/login.txt' USING PigStorage(' ') AS (serverAL: chararray, data: chararray, hora: chararray, netlogon: chararray, on: chararray, ip: chararray);
nlogin = FOREACH login GENERATE FLATTEN(STRSPLIT(serverAL,'[\\\\]')),data, hora,FLATTEN(STRSPLIT(ip,'[\\\\]'));
nlogin = FOREACH nlogin GENERATE $1 AS al:chararray, $2 AS data:chararray, $3 AS hora:chararray, $4 AS ipCliente:chararray;
logout = LOAD '/tmp/logout.txt' USING PigStorage(' ') AS (data: chararray, hora: chararray, logout: chararray, ipAl: chararray, disconec: chararray);
nlogout = FOREACH logout GENERATE data, hora, FLATTEN(STRSPLIT(ipAl,'[\\\\]'));
nlogout = FOREACH nlogout GENERATE $0 AS data:chararray,$1 AS hora:chararray,$2 AS ipCliente:chararray, $3 AS al:chararray;
data = JOIN nlogin BY (al,ipCliente,data), nlogout BY (al,ipCliente,data);
ndata = FOREACH data GENERATE nlogin::al,ToUnixTime(ToDate(CONCAT(nlogin::data, nlogin::hora),'dd/MM/yyyyHH:mm:ss', 'GMT')) AS tslogin:int,ToUnixTime(ToDate(CONCAT(nlogout::data, nlogout::hora),'dd/MM/yyyyHH:mm:ss', 'GMT')) AS tslogout:int,nlogout::ipCliente;
BB = FOREACH ndata GENERATE $0 AS al:chararray, (int)$1 AS tslogin:int, (int)$2 AS tslogout:int, $3 AS ipCliente:chararray;
CC = JOIN BB BY ipCliente, connect BY ipCliente;
DD = FOREACH CC GENERATE BB::al AS al:chararray, (int)BB::tslogin AS tslogin:int, (int)BB::tslogout AS tslogout:int,(int)connect::timeStamp AS timeStamp:int, connect::ipCliente AS ipCliente:chararray, connect::url AS url:chararray;
EE = FILTER DD BY (tslogin<=timeStamp) AND (timeStamp<=tslogout);
STORE EE INTO 'EEs';
But it returns the following error:
2015-10-16 21:58:10,600 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2015-10-16 21:58:10,600 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_201510162141_0008 has failed! Stop running all dependent jobs
2015-10-16 21:58:10,600 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2015-10-16 21:58:10,667 [main] ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR 0: Error while executing ForEach at [DD[93,5]]
2015-10-16 21:58:10,667 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
2015-10-16 21:58:10,667 [main] INFO org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
1.2.1 0.12.1 root 2015-10-16 21:56:48 2015-10-16 21:58:10 HASH_JOIN,FILTER
Some jobs have failed! Stop running all dependent jobs
Job Stats (time in seconds):
JobId Maps Reduces MaxMapTime MinMapTIme AvgMapTime MedianMapTime MaxReduceTime MinReduceTime AvgReduceTime MedianReducetime Alias Feature Outputs
job_201510162141_0007 2 1 4 3 4 4 9 9 9 9 BB,data,login,logout,ndata,nlogin,nlogout HASH_JOIN
Failed Jobs:
JobId Alias Feature Message Outputs
job_201510162141_0008 CC,DD,EE,connect,nsquid,squid HASH_JOIN Message: Job failed! Error - # of failed Reduce Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201510162141_0008_r_000000 hdfs://localhost:9000/user/root/EEb,
Input(s):
Successfully read 7367 records from: "/tmp/login.txt"
Successfully read 7374 records from: "/tmp/logout.txt"
Failed to read data from "/tmp/squid.txt"
Output(s):
Failed to produce result in "hdfs://localhost:9000/user/root/EEb"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_201510162141_0007 -> job_201510162141_0008,
job_201510162141_0008
2015-10-16 21:58:10,674 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Encountered Warning ACCESSING_NON_EXISTENT_FIELD 11 time(s).
2015-10-16 21:58:10,674 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Some jobs have failed! Stop running all dependent jobs
I created an alternative that worked, replacing the penultimate line with:
STORE DD INTO 'DD';
newDD = LOAD 'hdfs://localhost:9000/user/root/DD' USING PigStorage AS (al:chararray, tslogin:int, tslogout:int, timeStamp:int, ipCliente:chararray, url:chararray);
EE = FILTER newDD BY (tslogin<=timeStamp) AND (timeStamp<=tslogout);
Does anyone have any idea how to fix it without the “store” ?

ERROR 2017 : Internal error creating job configuration

The following code pops up the error "ERROR 2017: Internal error creating job configuration." when executed in Pig.
data = LOAD 'info.txt' USING PigStorage();
name_col_one = FOREACH data GENERATE $0 AS timeStamp, $1 AS one, $2 AS two, $3 AS info, $4 AS four, $5 AS five, $6 AS six, $7 AS seven, $8 AS eight, $9 AS nine, $10 AS ten, $11 AS eleven;
process_col_one = FOREACH name_col_one GENERATE FLATTEN(STRSPLIT(timeStamp,'\\s+',2)) AS (time:chararray, date:chararray), one, two;
new_timestamp = FOREACH process_col_one GENERATE CONCAT(date,CONCAT(' ',time)), one, two;
sys_info = FOREACH name_col_one GENERATE info;
split_ = FOREACH sys_info GENERATE REPLACE(info, '\\[', '') AS new_split;
split_again = FOREACH split_ GENERATE REPLACE(new_split, ']', '\t') AS final_split;
others = FOREACH name_col_one GENERATE four, five, six, seven, eight, nine, ten, eleven;
r1 = RANK new_timestamp;
r2 = RANK split_again;
r3 = RANK others;
final = JOIN r1 BY rank_new_timestamp, r2 BY rank_split_again;
DUMP final;
SAMPLE DATA in info.txt
23:58:19 02/23/2015 good 1042559519 [Linux][Baseline][lrtp2nosqlprod1][FileSystem][/tmp] FileSystems/tmp\Use%=1% 9:5603 0 1
23:58:15 02/23/2015 good 1042559519 [Linux][Baseline][lrtp2nosqlprod1][FileSystem][/boot] FileSystems/boot\Use%=37% 3:5603 0 37
23:58:15 02/23/2015 good 1042559537 [Linux][Baseline][lrtp2nosqlprod1][Process][srmclient][SiSExclude] running 3:5599 running true no data 1 0 0
23:58:15 02/23/2015 good 1042559537 [Linux][Baseline][lrtp2nosqlprod1][Process][OSWatcher][SiSExclude] running, 2 processes 4:5599 running true no data 2 0 0
Relations
new_timestamp reverses the timestamp from the input data;
split_again removes the square brackets in $3 and delimits the parts with '\t'.
Pig Stack Trace
---------------
ERROR 2017: Internal error creating job configuration.
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias final
at org.apache.pig.PigServer.openIterator(PigServer.java:880)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:774)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:372)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:198)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:69)
at org.apache.pig.Main.run(Main.java:541)
at org.apache.pig.Main.main(Main.java:156)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: org.apache.pig.PigException: ERROR 1002: Unable to store alias final
at org.apache.pig.PigServer.storeEx(PigServer.java:982)
at org.apache.pig.PigServer.store(PigServer.java:942)
at org.apache.pig.PigServer.openIterator(PigServer.java:855)
... 12 more
Caused by: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException: ERROR 2017: Internal error creating job configuration.
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:873)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:298)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:190)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1322)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
at org.apache.pig.PigServer.storeEx(PigServer.java:978)
... 14 more
Caused by: java.lang.NullPointerException
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:817)
... 19 more
================================================================================
Any help is welcome.
Thanks in advance.
This problem has been reported before (https://issues.apache.org/jira/browse/PIG-3469) and has been fixed; try using the latest version of Pig.
This problem can sometimes also be fixed by specifying the full path to the input data file, e.g. '/home/user/doc/info.txt'.

Pulling from a very very specific location embedded in a text file

I have finished every piece of code in my program save for one tidbit: how to pull two numbers from a text file. I know how to pull lines, and I know how to pull search strings, but I can't figure this one out to save my life.
Anyway, here is a sample of the automatically generated text that I need to pull from...
.......................................................................
Applications Memory Usage (kB):
Uptime: 6089044 Realtime: 6089040
** MEMINFO in pid 764 [com.lookout] **
native dalvik other total
size: 27908 8775 N/A 36683
allocated: 3240 4216 N/A 7456
free: 24115 4559 N/A 28674
(Pss): 1454 1142 6524 *9120*
(priv dirty): 1436 628 5588 *7652*
Objects
Views: 0 ViewRoots: 0
AppContexts: 0 Activities: 0
Assets: 3 AssetManagers: 3
Local Binders: 15 Proxy Binders: 41
Death Recipients: 3
OpenSSL Sockets: 0
SQL
heap: 98 MEMORY_USED: 98
PAGECACHE_OVERFLOW: 16 MALLOC_SIZE: 50
DATABASES
pgsz dbsz Lookaside(b) Dbname
1 14 120 google_analytics.db
Asset Allocations
zip:/system/app/com.lookout_6.0.1_r8234_Release.apk:/resources.arsc: 161K
.............................................................................
The two numbers that I need out of this are the two that I put between asterisks (the asterisks are not normally there). These numbers will be different every time this sheet is generated, their placement might differ, and some of the numbers could have 4, 5, or 6 digits.
If anyone could shed any light on the subject it would be greatly appreciated
Thanks,
Zach
You just need to read the last word of the line and convert it to a number. Use String.LastIndexOf to find the last space " " in the line and read the data from that point forwards.
Dim line As String = " (Pss): 1454 1142 6524 9120"
Dim value As Integer
'look for the "(Pss)" line, then take everything after the last space
If line.IndexOf("(Pss)") >= 0 Then
    value = CInt(line.Substring(line.LastIndexOf(" ") + 1))
End If
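If you are scanning the whole dump, a small extension of the same idea pulls both totals; the file name meminfo.txt is an assumption, and the sketch relies on each total being the last number on its line:
Dim pssTotal As Integer
Dim privDirtyTotal As Integer
For Each dumpLine As String In IO.File.ReadAllLines("meminfo.txt")
    Dim t As String = dumpLine.Trim()
    If t.StartsWith("(Pss):") Then
        pssTotal = CInt(t.Substring(t.LastIndexOf(" ") + 1)) 'e.g. 9120
    ElseIf t.StartsWith("(priv dirty):") Then
        privDirtyTotal = CInt(t.Substring(t.LastIndexOf(" ") + 1)) 'e.g. 7652
    End If
Next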