After successfully installing Redmine, I am trying to migrate the data from Trac to Redmine and I am getting the following error. Is there any workaround to fix this?
user#user:~/redmine-2.3$ rake redmine:migrate_from_trac RAILS_ENV="production"
WARNING: a new project will be added to Redmine during this process.
Are you sure you want to continue ? [y/N] y
Trac directory []: /home/user/implementation
Trac database adapter (sqlite3, mysql2, postgresql) [sqlite3]:
Trac database encoding [UTF-8]:
Target project identifier []: implementation
Migrating components.......................................................................................................................................................................................
Migrating milestones.......................................
Migrating custom fields...
Migrating tickets..................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
Migrating wiki.....
Components: 178/183
Milestones: 39/39
Tickets: 2082/2082
Ticket files: 0/421
Custom values: 2812/2812
Wiki edits: 5/5
Wiki files: 0/0
rake aborted!
stack level too deep
Tasks: TOP => redmine:migrate_from_trac
(See full trace by running task with --trace)
This is a typical stack overflow error: it means a function is being called recursively in an infinite loop. It is caused by a bug in the migration script, most likely triggered by data that is corrupted in a way the script cannot handle.
Try calling the script with the --verbose flag, or check the log files for error messages. Try to narrow down the faulty data by running the script as a test with reduced input (e.g. without tickets).
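For example, the task can be re-run with Rake's --trace flag, as the output itself suggests, to get the full backtrace of the recursion (same environment as the session above):

  # Re-run the migration with a full backtrace to see which method recurses
  rake redmine:migrate_from_trac RAILS_ENV="production" --trace

  # Optionally capture the output to a file for inspection
  rake redmine:migrate_from_trac RAILS_ENV="production" --trace 2>&1 | tee migrate_trace.log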
I am trying to create a deployment pipeline for GitLab CI on a React project. The build is working fine, and I use artifacts to store the dist folder from my yarn build command. This is working fine as well.
The issue is with my deployment command: aws s3 sync dist/'bucket-name'.
Expected: "Done in x seconds"
Actual:
error Command failed with exit code 2. info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command. Running after_script 00:01 Uploading artifacts for failed job 00:01 ERROR: Job failed: exit code 1
The files seem to have been uploaded correctly to the S3 bucket, however I do not know why I get an error on the deployment job.
When I run aws s3 sync dist/'bucket-name' locally, everything works correctly.
Check out AWS CLI Return Codes
2 -- The meaning of this return code depends on the command being run.
The primary meaning is that the command entered on the command line failed to be parsed. Parsing failures can be caused by, but are not limited to, missing any required subcommands or arguments or using any unknown commands or arguments. Note that this return code meaning is applicable to all CLI commands.
The other meaning is only applicable to s3 commands. It can mean at least one or more files marked for transfer were skipped during the transfer process. However, all other files marked for transfer were successfully transferred. Files that are skipped during the transfer process include: files that do not exist, files that are character special devices, block special device, FIFO's, or sockets, and files that the user cannot read from.
The second paragraph might explain what's happening.
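To confirm which case applies, the sync can be re-run locally with the exit code captured and the CLI's debug output saved; a sketch (the bucket name is a placeholder):

  # Run the sync and save the CLI's debug output (written to stderr)
  aws s3 sync dist/ s3://my-bucket --debug 2> s3-sync-debug.log
  echo "aws s3 sync exited with code $?"

  # Exit code 2 while the files still appear in the bucket usually means some files were skipped;
  # search the debug log for the skipped paths and the reason
  grep -i "skip" s3-sync-debug.log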
There is no yarn build command. See https://classic.yarnpkg.com/en/docs/cli/run
As Anton mentioned, the second paragraph of his answer was the problem. The solution was removing special characters from a couple of SVGs. I suspect uploading the dist folder as an artifact (zip) might have changed some of the file names altogether, which confused S3. Removing ® and + from the filenames resolved the issue.
I am using Hadoop 3.2.0 and trying to run a simple application in a Docker container. I have made the required configuration changes in both yarn-site.xml and container-executor.cfg to choose the LinuxContainerExecutor and the Docker runtime.
I am following the distributed shell example from one of the Hortonworks blog posts: https://hortonworks.com/blog/trying-containerized-applications-apache-hadoop-yarn-3-1/
The problem is that when the application is submitted to YARN, it fails with a directory creation issue and the error below:
2019-02-14 20:51:16,450 INFO distributedshell.Client: Got application
report from ASM for, appId=2, clientToAMToken=null,
appDiagnostics=Application application_1550156488785_0002 failed 2
times due to AM Container for appattempt_1550156488785_0002_000002
exited with exitCode: -1000 Failing this attempt.Diagnostics:
[2019-02-14 20:51:16.282]Application application_1550156488785_0002
initialization failed (exitCode=20) with output: main : command
provided 0 main : user is myuser main : requested yarn user is
myuser Failed to create directory
/data/yarn/local/nmPrivate/container_1550156488785_0002_02_000001.tokens/usercache/myuser
- Not a directory
I have configured yarn.nodemanager.local-dirs in yarn-site.xml and I can see the same reflected in the YARN web UI at localhost:8088/conf:
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/data/yarn/local</value>
<final>false</final>
<source>yarn-site.xml</source>
</property>
I do not understand why it is trying to create the usercache dir inside the nmPrivate directory.
Note: I have verified myuser's permissions on the directories and have also tried clearing the directories manually, as suggested in a related post, but to no avail. I do not see any additional information about the container launch failure in any other logs.
How do I debug why the usercache dir is not resolved properly?
I'd really appreciate any help on this.
I realized that this was all because of the users the services were started as and their permissions on the directories the services work with.
After making sure the required changes were in place, I am able to run the examples and other applications seamlessly.
Thanks to the Hadoop user community for the direction. Adding the link here for more details:
http://mail-archives.apache.org/mod_mbox/hadoop-user/201902.mbox/browser
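As a starting point, checking who owns the NodeManager local directories and whether the YARN service user can write to them is usually enough; a sketch (the yarn:hadoop owner and the path are assumptions based on the yarn.nodemanager.local-dirs value above, adjust to your deployment):

  # Check current ownership and permissions of the NodeManager local dirs
  ls -ld /data/yarn/local /data/yarn/local/nmPrivate /data/yarn/local/usercache

  # If the directories are owned by the wrong user, hand them to the YARN service user
  # (yarn:hadoop is an assumption; use whatever user/group the NodeManager runs as)
  sudo chown -R yarn:hadoop /data/yarn/local
  sudo chmod 755 /data/yarn/local

  # Restart the NodeManager so it recreates its private directories with the right ownership
  yarn --daemon stop nodemanager
  yarn --daemon start nodemanager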
I deployed an Aerospike container using the official Docker Hub image. When I try to execute test_list = client.llist(key, 'test_list'), my Python client script returns the following error:
exception.UDFError: (100L, 'UDF: Execution Error 1', 'src/main/llist/llist_operations.c', 93)
I looked at the Aerospike logs and found that each time this code is executed, the error below gets printed:
: WARNING (udf): (src/main/mod_lua.c:599) Lua Create Error: module 'llist' not found:
no field package.preload['llist']
no file './llist.lua'
no file '/usr/local/share/luajit-2.0.3/llist.lua'
no file '/usr/local/share/lua/5.1/llist.lua'
no file '/usr/local/share/lua/5.1/llist/init.lua'
no file '/opt/aerospike/sys/udf/lua/llist.lua'
no file '/opt/aerospike/sys/udf/lua/external/llist.lua'
no file '/opt/aerospike/usr/udf/lua/llist.lua'
no file './llist.so'
no file '/usr/local/lib/lua/5.1/llist.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
no file '/opt/aerospike/sys/udf/lua/llist.so'
no file '/opt/aerospike/sys/udf/lua/external/llist.so'
no file '/opt/aerospike/usr/udf/lua/llist.so'
: INFO (udf): (udf.c:954) lua error, ret:1
I could not find the relevant Lua files or a Lua installation in the container. My code works fine when I run it directly on the host. Is there some extra configuration that needs to be done to the container?
LDTs were dropped in 3.15.
https://www.aerospike.com/docs/guide/ldt_guide.html
Excerpt:
Aerospike has removed the Large Data Type feature as of server version 3.15 after deprecating this functionality 12 months earlier. Please see the removal notice and deprecation notice. The features listed below are no longer in Aerospike servers.
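On current servers, data that used to live in an llist LDT can be kept in an ordinary list bin and manipulated with the client's list operations; a minimal sketch with the Python client (host, namespace, set, and bin names are placeholders):

  import aerospike

  # Connect to the containerized server (host/port are placeholders)
  config = {'hosts': [('127.0.0.1', 3000)]}
  client = aerospike.client(config).connect()

  # Namespace/set/user key are placeholders
  key = ('test', 'demo', 'user1')

  # Append values to a regular list bin instead of an llist LDT
  client.list_append(key, 'test_list', 'first-item')
  client.list_append(key, 'test_list', 'second-item')

  # Read the record back; the bin holds a plain Python list
  (_, _, bins) = client.get(key)
  print(bins['test_list'])

  client.close()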
When using the web page served by Drill at localhost:8047/query (by default), running the following commands fails:
use dfs.mydfs;
and then:
show files;
Then I receive this error:
org.apache.drill.common.exceptions.UserRemoteException: VALIDATION ERROR: SHOW FILES is supported in workspace type schema only. Schema [] is not a workspace schema. [Error Id: 872e6708-0aaa-480e-af32-9aaf6f84de2b on 172.28.128.1:31010]
However, if I enter the same commands in the terminal, they work correctly.
I've also found that this affects 1.6 and above, and that this behaviour is not seen on 1.5 and below.
This command works in both the web and the command-line/terminal versions:
show files in df.workspace;
I have configured multiple types of dfs, tried both OS X and Windows 10, and found the issue to be the same.
I tried looking through the Drill JIRA to see if this was registered as a bug, and I looked briefly through the release notes as well.
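A workaround consistent with the behaviour above is to skip USE in the web UI and qualify the workspace directly in the SHOW FILES statement; a sketch using the dfs.mydfs schema from the question (whether the web UI carries a USE across submissions is an assumption here):

  -- Fails from the web UI when the USE has not taken effect for this submission
  USE dfs.mydfs;
  SHOW FILES;

  -- Works from both the web UI and the terminal: name the workspace schema explicitly
  SHOW FILES IN dfs.mydfs;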
I am working my way through the RhoMobile tutorial http://docs.rhomobile.com/rhoconnect/command-line#generate-an-application and I am at the point of entering
rake redis:install
I get the following error.
WARNING: using the built-in Timeout class which is known to have issues when use
d for opening connections. Install the SystemTimer gem if you want to make sure
the Redis client will not hang.
See http://redis.io/ for information about redis.
Installing redis to C:\RhoStudio\redis-2.4.0;C:\dropbox\code\InstantRhodes\redis
-1.2.6-windows.
rake aborted!
Zip end of central directory signature not found
Tasks: TOP => redis:install => redis:download
(See full trace by running task with --trace)
D:\Dropbox\code\rhodes-apps\storeserver>
I am working on a Windows machine, primarily using RhoStudio.
It ended up being an environment variables issue. Also, it seems the main support forum for Rhodes is the Google Group. The question is answered here:
https://groups.google.com/d/topic/rhomobile/b-Adx2FDMT8/discussion
If you are using RhoStudio on Windows, then Redis is automatically installed with RhoStudio.
So there is no need to install it again.