I can't recompile any APK with apktool; here is the error log:
W: Could not find sources
invalid resource directory name: C:\Users\pradeepkumar\Downloads\Compressed\Advanced+ApkTool+v4.1.0+By+BDFreak\AdvancedApkTool\3-Out\framework-res.apk\res values-?#-rES
Exception in thread "main" brut.androlib.AndrolibException: brut.androlib.AndrolibException: brut.common.BrutException: could not exec command: [C:\Users\PRADEE~1\AppData\Local\Temp\brut_util_Jar_5055861152156375473.tmp, p, --forced-package-id, 1, --min-sdk-version, 21, --target-sdk-version, 21, --version-code, 21, --version-name, 5.0.2-d3aebb81ea, -F, C:\Users\PRADEE~1\AppData\Local\Temp\APKTOOL1542257552202253955.tmp, -x, -0, arsc, -S, C:\Users\pradeepkumar\Downloads\Compressed\Advanced+ApkTool+v4.1.0+By+BDFreak\AdvancedApkTool\3-Out\framework-res.apk\res, -M, C:\Users\pradeepkumar\Downloads\Compressed\Advanced+ApkTool+v4.1.0+By+BDFreak\AdvancedApkTool\3-Out\framework-res.apk\AndroidManifest.xml]
at brut.androlib.Androlib.buildResourcesFull(Androlib.java:437)
at brut.androlib.Androlib.buildResources(Androlib.java:374)
at brut.androlib.Androlib.build(Androlib.java:277)
at brut.androlib.Androlib.build(Androlib.java:250)
at brut.apktool.Main.cmdBuild(Main.java:225)
at brut.apktool.Main.main(Main.java:84)
Caused by: brut.androlib.AndrolibException: brut.common.BrutException: could not exec command: [C:\Users\PRADEE~1\AppData\Local\Temp\brut_util_Jar_5055861152156375473.tmp, p, --forced-package-id, 1, --min-sdk-version, 21, --target-sdk-version, 21, --version-code, 21, --version-name, 5.0.2-d3aebb81ea, -F, C:\Users\PRADEE~1\AppData\Local\Temp\APKTOOL1542257552202253955.tmp, -x, -0, arsc, -S, C:\Users\pradeepkumar\Downloads\Compressed\Advanced+ApkTool+v4.1.0+By+BDFreak\AdvancedApkTool\3-Out\framework-res.apk\res, -M, C:\Users\pradeepkumar\Downloads\Compressed\Advanced+ApkTool+v4.1.0+By+BDFreak\AdvancedApkTool\3-Out\framework-res.apk\AndroidManifest.xml]
at brut.androlib.res.AndrolibResources.aaptPackage(AndrolibResources.java:488)
at brut.androlib.Androlib.buildResourcesFull(Androlib.java:423)
... 5 more
Caused by: brut.common.BrutException: could not exec command: [C:\Users\PRADEE~1\AppData\Local\Temp\brut_util_Jar_5055861152156375473.tmp, p, --forced-package-id, 1, --min-sdk-version, 21, --target-sdk-version, 21, --version-code, 21, --version-name, 5.0.2-d3aebb81ea, -F, C:\Users\PRADEE~1\AppData\Local\Temp\APKTOOL1542257552202253955.tmp, -x, -0, arsc, -S, C:\Users\pradeepkumar\Downloads\Compressed\Advanced+ApkTool+v4.1.0+By+BDFreak\AdvancedApkTool\3-Out\framework-res.apk\res, -M, C:\Users\pradeepkumar\Downloads\Compressed\Advanced+ApkTool+v4.1.0+By+BDFreak\AdvancedApkTool\3-Out\framework-res.apk\AndroidManifest.xml]
at brut.util.OS.exec(OS.java:89)
at brut.androlib.res.AndrolibResources.aaptPackage(AndrolibResources.java:482)
... 6 more
This has been fixed in the latest apktool code base; RC4 does not have this fix. I blogged about fixing this bug a few days ago, so build from source or wait for the final release of Apktool.
Basically, BCP-47 qualifiers with 3-character codes packed into 2 bytes are not supported by the version you have, so ast-rES is decoded improperly, which produces the invalid-directory error.
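If you want to build from source in the meantime, the procedure is roughly this (a sketch; it assumes Git and a JDK are installed and follows apktool's standard Gradle build):
git clone https://github.com/iBotPeaches/Apktool.git
cd Apktool
gradlew.bat build shadowJar   # on Linux/macOS: ./gradlew build shadowJar
The runnable jar should then land under brut.apktool/apktool-cli/build/libs/.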
I am trying to migrate my CUDA application using dpct. When I call dpct, I see it process the CUDA files and generate some benign warnings, but at the end it exits without writing out any DPC++ equivalent files. I can clearly see CUDA functions called in these applications, and removing the CUDA path would break the compile process. This is the command I used:
$ dpct --report-type=all --cuda-include-path=/usr/local/cuda-10.2/include -p compile_commands.json
I have eliminated the actual physical paths to files to avoid confusion:
Processing: ....../ComputeThermoGPU.cu
Processing: ....../CommunicatorGPU.cu
Processing: ....../ParticleData.cu
Processing: ....../Integrator.cu
------------------APIS report--------------------
API name Frequency
-------------------------------------------------
----------Stats report---------------
File name, LOC migrated to DPC++, LOC migrated to helper functions, LOC not needed to migrate,
LOC not able to migrate
....../Integrator.cu, 1, 0, 168, 0
....../ParticleData.cu, 1, 0, 402, 0
....../ComputeThermoGPU.cu, 1, 0, 686, 0
....../ParticleGroup.cu, 6, 0, 111, 0
Total migration time: 17207.371000 ms
-------------------------------------
dpct exited with code: 1 (Migration not necessary)
I found a solution to my own question. Use:
dpct -p compile_commands.json --in-root=src --out-root=dpct_out --process-all
I think the reason might be that, in the absence of the driver code (containing a main function), DPCT concludes that these helper .cu files are not used anywhere and therefore do not need to be migrated. That's why you see the "Migration not necessary" message.
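For completeness, the full invocation that worked for me looks like this (paths are from my setup; --process-all is the key flag, since it migrates every file under --in-root even when nothing references it from a main entry point):
dpct --cuda-include-path=/usr/local/cuda-10.2/include \
     -p compile_commands.json \
     --in-root=src --out-root=dpct_out \
     --process-all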
I have a strange issue with logrotate on a Raspbian 9 system.
Logrotate appears to be configured to rotate /var/log/syslog every seven days. When I run logrotate -f -d /etc/logrotate.conf the output tells me:
rotating pattern: /var/log/syslog
forced from command line (7 rotations)
empty log files are not rotated, old logs are removed
considering log /var/log/syslog
Now: 2021-03-16 09:56
Last rotated at 2020-11-02 12:26
log needs rotating
rotating log /var/log/syslog, log->rotateCount is 7
dateext suffix '-20210316'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
compressing log with: /bin/gzip
renaming /var/log/syslog.7.gz to /var/log/syslog.8.gz (rotatecount 7, logstart 1, i 7),
renaming /var/log/syslog.6.gz to /var/log/syslog.7.gz (rotatecount 7, logstart 1, i 6),
renaming /var/log/syslog.5.gz to /var/log/syslog.6.gz (rotatecount 7, logstart 1, i 5),
renaming /var/log/syslog.4.gz to /var/log/syslog.5.gz (rotatecount 7, logstart 1, i 4),
renaming /var/log/syslog.3.gz to /var/log/syslog.4.gz (rotatecount 7, logstart 1, i 3),
renaming /var/log/syslog.2.gz to /var/log/syslog.3.gz (rotatecount 7, logstart 1, i 2),
renaming /var/log/syslog.1.gz to /var/log/syslog.2.gz (rotatecount 7, logstart 1, i 1),
renaming /var/log/syslog.0.gz to /var/log/syslog.1.gz (rotatecount 7, logstart 1, i 0),
log /var/log/syslog.8.gz doesn't exist -- won't try to dispose of it
renaming /var/log/syslog to /var/log/syslog.1
creating new /var/log/syslog mode = 0640 uid = 0 gid = 4
running postrotate script
running script with arg /var/log/syslog: "
invoke-rc.d rsyslog rotate > /dev/null
"
So it says it is renaming /var/log/syslog to /var/log/syslog.1 and creating a new syslog, so everything appears to be fine so far.
Except it does nothing: there is no syslog.1 afterwards, and the syslog file is the same as before. Nothing happened.
One thing to mention: /var/log is tmpfs. Is this related?
Mounted as: tmpfs on /var/log type tmpfs (rw,nosuid,nodev,relatime)
Thanks for any ideas!
/KNEBB
And to update the issue here: I simply hadn't read carefully enough. When run in debug mode (as done above), logrotate does nothing; it only tells you what it would do if run without "-d".
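In other words (both flags are documented in the logrotate man page):
logrotate -d /etc/logrotate.conf      # dry run: prints the actions but changes nothing
logrotate -f -v /etc/logrotate.conf   # actually forces a rotation, with verbose output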
When trying to execute the following graph I get the error below. Any ideas what might be causing this? I'm on Ubuntu 20.10.
<<< Welcome to GNU Radio Companion 3.9.0.0-git >>>
Block paths:
/usr/share/gnuradio/grc/blocks
Loading: "/home/gnuradiouser/Dev/SDR/Lesson 1.grc"
>>> Done
Generating: '/home/gnuradiouser/Dev/SDR/Lesson_1.py'
Executing: /usr/bin/python3 -u /home/gnuradiouser/Dev/SDR/Lesson_1.py
Warning: failed to XInitThreads()
gr-osmosdr 0.2.0.0 (0.2.0) gnuradio 3.8.1.0
built-in source types: file fcd rtl rtl_tcp uhd hackrf bladerf rfspace airspy airspyhf soapy redpitaya freesrp
[INFO] [UHD] linux; GNU C++ version 10.2.0; Boost_107100; UHD_3.15.0.0-3build2
Using HackRF One with firmware 2018.01.1
Traceback (most recent call last):
File "/home/gnuradiouser/Dev/SDR/Lesson_1.py", line 193, in <module>
main()
File "/home/gnuradiouser/Dev/SDR/Lesson_1.py", line 169, in main
tb = top_block_cls()
File "/home/gnuradiouser/Dev/SDR/Lesson_1.py", line 142, in __init__
self.connect((self.osmosdr_source_0, 0), (self.qtgui_freq_sink_x_0, 0))
File "/usr/lib/python3/dist-packages/gnuradio/gr/hier_block2.py", line 37, in wrapped
func(self, src, src_port, dst, dst_port)
File "/usr/lib/python3/dist-packages/gnuradio/gr/hier_block2.py", line 100, in connect
self.primitive_connect(*args)
TypeError: primitive_connect(): incompatible function arguments. The following argument types are supported:
1. (self: gnuradio.gr.gr_python.hier_block2_pb, block: gnuradio.gr.gr_python.basic_block) -> None
2. (self: gnuradio.gr.gr_python.hier_block2_pb, src: gnuradio.gr.gr_python.basic_block, src_port: int, dst: gnuradio.gr.gr_python.basic_block, dst_port: int) -> None
Invoked with: <gnuradio.gr.gr_python.top_block_pb object at 0x7fac59391970>, <Swig Object of type 'gr::basic_block_sptr *' at 0x7fac575a4990>, 0, <gnuradio.gr.gr_python.basic_block object at 0x7fac5953a1f0>, 0
swig/python detected a memory leak of type 'gr::basic_block_sptr *', no destructor found.
>>> Done (return code 1)
I'm new to GNU Radio, so I'm not sure where to start troubleshooting this issue. I've tried plugging in a signal generator instead of the osmocom source and got the graph to execute, so I'm guessing there's a bad parameter in the osmocom object or a bad version of a library?
I've also tried installing an earlier version of GNU Radio (3.7 and 3.8), but couldn't get that going on Ubuntu 20.10.
Managed to resolve this by uninstalling GNU Radio and removing the 'master' PPA (added via sudo add-apt-repository ppa:gnuradio/gnuradio-master), which was in one of the installation instructions I came across.
After that I simply installed GNU Radio using the standard 20.10 repositories and all works fine now.
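For anyone else hitting this, the cleanup looked roughly like this (a sketch; the package names are the standard Ubuntu ones, which I'm assuming here):
sudo apt remove gnuradio
sudo add-apt-repository --remove ppa:gnuradio/gnuradio-master
sudo apt update
sudo apt install gnuradio gr-osmosdr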
Distribution: CDH-4.6.0-1.cdh4.6.0.p0.26
Hive Version: 0.10.0
Parquet Version: 1.2.5
I have two big date partitioned external Hive tables full of log files that I recently converted to Parquet to take advantage of the compression and columnar storage. So far I've been very happy with the performance.
Our dev team recently added a field to the logs, so I was charged with adding a column to both log tables. It worked perfectly for one, but the other appears to have become corrupted. I reverted the change, but I still can't query the table.
I'm convinced the data is fine (because it didn't change), but something is wrong in the metastore. An msck repair table (exact command after the list below) repopulates the partitions after I drop/create, but it does not take care of the error shown further down. There are two things that can fix it, but neither makes me happy:
Re-insert the data.
Copy data back into the table from the production cluster.
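For reference, the repair step mentioned above is what I run after a drop/create (a sketch; hive -e simply runs the statement from the shell):
hive -e "MSCK REPAIR TABLE upload_metrics_hist;"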
I'm really hoping there's a command I don't know about that will fix the table without resorting to the two options above. Like I said, the data is fine. I've googled the heck out of the error; I get some results, but they all pertain to Impala, which is NOT what we're using.
select * from upload_metrics_hist where dt = '2014-07-01' limit 5;
The problem is this:
Caused by: java.lang.RuntimeException: hdfs://hdfs-dev/data/prod/upload-metrics/upload_metrics_hist/dt=2014-07-01/000005_0 is not a Parquet file. expected magic number at tail [80, 65, 82, 49] but found [1, 92, 78, 10]
Full error
2014-07-17 02:00:48,835 WARN org.apache.hadoop.mapred.Child: Error running child
java.io.IOException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:372)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.<init>(HadoopShimsSecure.java:319)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:433)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:540)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:394)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:358)
... 10 more
Caused by: java.lang.RuntimeException: hdfs://hdfs-dev/data/prod/upload-metrics/upload_metrics_hist/dt=2014-07-01/000005_0 is not a Parquet file. expected magic number at tail [80, 65, 82, 49] but found [1, 92, 78, 10]
at parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:263)
at parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:229)
at parquet.hive.DeprecatedParquetInputFormat$RecordReaderWrapper.getSplit(DeprecatedParquetInputFormat.java:327)
at parquet.hive.DeprecatedParquetInputFormat$RecordReaderWrapper.<init>(DeprecatedParquetInputFormat.java:204)
at parquet.hive.DeprecatedParquetInputFormat.getRecordReader(DeprecatedParquetInputFormat.java:108)
at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
... 15 more
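For what it's worth, a valid Parquet file ends with the 4-byte magic "PAR1" (bytes 80 65 82 49), so one way to spot bad files is to check the last four bytes (a diagnostic sketch, using the path from the error above):
hadoop fs -cat hdfs://hdfs-dev/data/prod/upload-metrics/upload_metrics_hist/dt=2014-07-01/000005_0 | tail -c 4
This prints PAR1 for a real Parquet file; anything else means the file was written in some other format.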
I'm just getting started with elasticsearch. We need to index thousands of PDF files, and I'm having a hard time getting just ONE of them to index successfully.
I installed the Attachment Type plugin and got the response: Installed mapper-attachments.
I followed the Attachment Type in Action tutorial, but the process hangs and I don't know how to interpret the error message. I also tried the gist, which hangs in the same place.
$ curl -X POST "localhost:9200/test/attachment/" -d json.file
{"error":"ElasticSearchParseException[Failed to derive xcontent from (offset=0, length=9): [106, 115, 111, 110, 46, 102, 105, 108, 101]]","status":400}
More details:
The json.file contains an embedded Base64 PDF file (as per the instructions). The first line of the file appears correct (to me, anyway): {"file":"JVBERi0xLjQNJeLjz9MNCjE1OCAwIG9iaiA8...
I'm not sure whether json.file is invalid or whether elasticsearch just isn't set up to parse PDFs properly.
Encoding - Here's how we're encoding the PDF into json.file (as per the tutorial):
coded=`cat fn6742.pdf | perl -MMIME::Base64 -ne 'print encode_base64($_)'`
json="{\"file\":\"${coded}\"}"
echo "$json" > json.file
I also tried:
coded=`openssl base64 -in fn6742.pdf`
log:
[2012-06-07 12:32:16,742][DEBUG][action.index ] [Bailey, Paul] [test][0], node[AHLHFKBWSsuPnTIRVhNcuw], [P], s[STARTED]: Failed to execute [index {[test][attachment][DauMB-vtTIaYGyKD4P8Y_w], source[json.file]}]
org.elasticsearch.ElasticSearchParseException: Failed to derive xcontent from (offset=0, length=9): [106, 115, 111, 110, 46, 102, 105, 108, 101]
at org.elasticsearch.common.xcontent.XContentFactory.xContent(XContentFactory.java:147)
at org.elasticsearch.common.xcontent.XContentHelper.createParser(XContentHelper.java:50)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:451)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:437)
at org.elasticsearch.index.shard.service.InternalIndexShard.prepareCreate(InternalIndexShard.java:290)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:210)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:532)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:430)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:680)
Hoping someone can help me see what I'm missing or doing wrong.
The following error points to the source of the problem.
Failed to derive xcontent from (offset=0, length=9): [106, 115, 111, 110, 46, 102, 105, 108, 101]
The UTF-8 codes [106, 115, 111, ...] show that you are trying to index the string "json.file" itself instead of the content of the file.
To index the content of the file, simply add the character "@" in front of the file name:
curl -X POST "localhost:9200/test/attachment/" -d @json.file
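Putting it together, an end-to-end sketch (assuming GNU base64, whose -w 0 flag keeps the output on a single line so no stray newlines end up inside the JSON string):
coded=$(base64 -w 0 fn6742.pdf)
echo "{\"file\":\"$coded\"}" > json.file
curl -X POST "localhost:9200/test/attachment/" -d @json.file   # the @ makes curl read the body from the file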
Turns out it's necessary to export ES_JAVA_OPTS=-Djava.awt.headless=true before running a Java app on a 'headless' server... who would've thought!?
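That is, before starting the app (the launch script here is a guess; adjust to however you start elasticsearch):
export ES_JAVA_OPTS=-Djava.awt.headless=true
./bin/elasticsearch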