I wanted to make my own wiki for personal use and carry it around on an external drive. I am using Windows 10.
I installed XAMPP portable on my external drive and set up MediaWiki yesterday, and everything was fine. I turned off my computer without shutting down Apache and MySQL, and woke up this morning to continue working on it, but MySQL cannot start. I have no backup yet since I only created the wiki last night, so I won't be too upset about losing the very few articles I've written.
InnoDB: using atomic writes.
2020-04-10 15:05:25 0 [Note] InnoDB: Mutexes and rw_locks use Windows interlocked functions
2020-04-10 15:05:25 0 [Note] InnoDB: Uses event mutexes
2020-04-10 15:05:25 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2020-04-10 15:05:25 0 [Note] InnoDB: Number of pools: 1
2020-04-10 15:05:25 0 [Note] InnoDB: Using SSE2 crc32 instructions
2020-04-10 15:05:25 0 [Note] InnoDB: Initializing buffer pool, total size = 16M, instances = 1, chunk size = 16M
2020-04-10 15:05:25 0 [Note] InnoDB: Completed initialization of buffer pool
2020-04-10 15:05:25 0 [Note] InnoDB: Starting crash recovery from checkpoint LSN=300288
2020-04-10 15:05:25 0 [Note] InnoDB: 128 out of 128 rollback segments are active.
2020-04-10 15:05:25 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1"
2020-04-10 15:05:25 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2020-04-10 15:05:25 0 [Note] InnoDB: Setting file 'D:\xampp\mysql\data\ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2020-04-10 15:05:25 0 [Note] InnoDB: File 'D:\xampp\mysql\data\ibtmp1' size is now 12 MB.
2020-04-10 15:05:25 0 [Note] InnoDB: Waiting for purge to start
2020-04-10 15:05:25 0 [Note] InnoDB: 10.4.11 started; log sequence number 300297; transaction id 171
2020-04-10 15:05:25 0 [Note] InnoDB: Loading buffer pool(s) from D:\xampp\mysql\data\ib_buffer_pool
2020-04-10 15:05:25 0 [Note] Plugin 'FEEDBACK' is disabled.
2020-04-10 15:05:25 0 [Note] Server socket created on IP: '::'.
2020-04-10 15:12:20 0 [Note] mysqld: Aria engine: starting recovery
recovered pages: 0% 26% 38% 54% 64% 74% 84% 94% 100% (0.0 seconds); tables to flush: 2 1 0
(0.1 seconds);
2020-04-10 15:12:20 0 [Note] mysqld: Aria engine: recovery done
InnoDB: using atomic writes.
2020-04-10 15:12:20 0 [ERROR] InnoDB: The innodb_system data file 'ibdata1' must be writable
2020-04-10 15:12:20 0 [ERROR] InnoDB: The innodb_system data file 'ibdata1' must be writable
2020-04-10 15:12:20 0 [ERROR] Plugin 'InnoDB' init function returned error.
2020-04-10 15:12:20 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2020-04-10 15:12:20 0 [Note] Plugin 'FEEDBACK' is disabled.
2020-04-10 15:12:20 0 [ERROR] Unknown/unsupported storage engine: InnoDB
2020-04-10 15:12:20 0 [ERROR] Aborting
I tried looking at ibdata1; as administrator I should have full control over it, so I'm not sure why I'm getting this error.
What are my options for fixing this?
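The error points at permissions or a lock on the file, so the first thing to verify is the file's attributes and ACL (a sketch; the D:\xampp path comes from the log above):
attrib D:\xampp\mysql\data\ibdata1
icacls D:\xampp\mysql\data\ibdata1
A read-only attribute (R) or a missing write entry in the ACL would explain the message; so would another process still holding the file open.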
Maybe you have an existing mysql process still running; go to Task Manager and end it.
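From a command prompt the equivalent is (assuming the process is named mysqld.exe, as in XAMPP):
tasklist /FI "IMAGENAME eq mysqld.exe"
taskkill /F /IM mysqld.exe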
I had a similar problem when I was trying to reset my MySQL password. I am using a Windows computer. I went into Administrative Tools -> Services and stopped the MySQL service. After that, I no longer received the "'ibdata1' must be writable" error message.
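If MySQL was installed as a Windows service, the same thing can be done from an elevated command prompt (the service name varies by install; "MySQL" is common for standalone installs):
sc query MySQL
net stop MySQL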
Related
MySQL (MariaDB) crashes frequently and I cannot find out why. I have reinstalled MariaDB a couple of times, but in vain.
I increased innodb_buffer_pool_size to 60% of memory. Because I was getting "Host name could not be resolved" errors, I added skip-name-resolve to my.cnf, but again in vain.
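For reference, those two changes in my.cnf would look like this (a sketch; the /etc/mysql/my.cnf path is the Ubuntu default, and 3G matches the buffer pool size in the log below):
sudo tee -a /etc/mysql/my.cnf <<'EOF'
[mysqld]
innodb_buffer_pool_size = 3G
skip-name-resolve
EOF
sudo systemctl restart mariadb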
2019-09-30 21:46:14 140285622676608 [Note] InnoDB: Using mutexes to ref count buffer pool pages
2019-09-30 21:46:14 140285622676608 [Note] InnoDB: The InnoDB memory heap is disabled
2019-09-30 21:46:14 140285622676608 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2019-09-30 21:46:14 140285622676608 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2019-09-30 21:46:14 140285622676608 [Note] InnoDB: Compressed tables use zlib 1.2.11
2019-09-30 21:46:14 140285622676608 [Note] InnoDB: Using Linux native AIO
2019-09-30 21:46:14 140285622676608 [Note] InnoDB: Using SSE crc32 instructions
2019-09-30 21:46:14 140285622676608 [Note] InnoDB: Initializing buffer pool, size = 3.0G
2019-09-30 21:46:14 140285622676608 [Note] InnoDB: Completed initialization of buffer pool
2019-09-30 21:46:14 140285622676608 [Note] InnoDB: Highest supported file format is Barracuda.
2019-09-30 21:46:14 140285622676608 [Note] InnoDB: 128 rollback segment(s) are active.
2019-09-30 21:46:14 140285622676608 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.44-86.0 started; log sequence number 901877131
2019-09-30 21:46:14 140281617807104 [Note] InnoDB: Dumping buffer pool(s) not yet started
2019-09-30 21:46:14 140285622676608 [Note] Plugin 'FEEDBACK' is disabled.
2019-09-30 21:46:14 140285622676608 [Note] Server socket created on IP: '0.0.0.0'.
2019-09-30 21:46:14 140285622676608 [Note] /usr/sbin/mysqld: ready for connections.
Version: '10.1.41-MariaDB-0ubuntu0.18.04.1' socket: '/var/run/mysqld/mysqld.sock' port: 3306 Ubuntu 18.04
2019-10-01 0:22:33 140285573568256 [Note] /usr/sbin/mysqld: Normal shutdown
2019-10-01 0:22:33 140285573568256 [Note] Event Scheduler: Purging the queue. 0 events
2019-10-01 0:22:33 140281651377920 [Note] InnoDB: FTS optimize thread exiting.
2019-10-01 0:22:33 140285573568256 [Note] InnoDB: Starting shutdown...
2019-10-01 0:22:34 140285573568256 [Note] InnoDB: Waiting for page_cleaner to finish flushing of buffer pool
2019-10-01 0:22:35 140285573568256 [Note] InnoDB: Shutdown completed; log sequence number 1007730801
2019-10-01 0:22:35 140285573568256 [Note] /usr/sbin/mysqld: Shutdown complete
Any suggestions on how I can stop MariaDB from crashing?
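Since this excerpt ends with a normal shutdown rather than the crash itself, the first step is to find the actual failure: check MariaDB's error log around the crash time, and check the kernel log for the OOM killer, a common culprit when the buffer pool is sized at 60% of RAM. A diagnostic sketch (paths assume Ubuntu defaults):
sudo journalctl -u mariadb --since yesterday
sudo tail -n 200 /var/log/mysql/error.log
sudo grep -i -E 'oom|killed process' /var/log/kern.log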
Background to Plan C
Code42 decided to terminate their "CrashPlan for Home" service. This means that after the shutdown date of October 22, 2018, CrashPlan will delete your backup on their servers, which is to be expected, but much more annoyingly, you will no longer be able to restore CrashPlan backups that you stored locally. Effectively, Code42 is reaching into your computer to break your backups for you.
Plan C is an open-source project that enables restoring from existing CrashPlan Home backups.
My Problem
However, when attempting to restore I received an error:
MacBook-Pro:CrashPlanHomeRecovery daniel$ ./plan-c-osx/plan-c --key 07B... --archive ./sg2015/642033544161964565/ --dest ./recovered/ --filename "J:/..." restore
Caching block indexes in memory...
libc++abi.dylib: terminating with uncaught exception of type std::runtime_error: Failed to open block manifest for reading: ./sg2015/642033544161964565/cpbf0000000000017581637/cpbmf
Abort trap: 6
The file referenced in the error appeared to be readable, but the reported error provided no more information.
I reported this as GitHub Issue #9.
I then made a minor change to the error reporting (GitHub Pull Request #10) and worked out that the underlying failure was a "Too many open files" error:
MacBook-Pro:CrashPlanHomeRecovery daniel$ ./plan-c-osx/plan-c --key 07B... --archive ./sg2015/642033544161964565/ --dest ./recovered/ --filename "J:/..." restore
Caching block indexes in memory...
libc++abi.dylib: terminating with uncaught exception of type std::runtime_error: Failed to open block manifest (../../sg2015/642033544161964565/cpbf0000000000017581637/cpbmf) for reading: Too many open files
Abort trap: 6
(Note that if my pull request, only just submitted, is not merged and a new binary released, you will need to build from my fork.)
I then fixed the error with a ulimit change:
MacBook-Pro:PlanC daniel$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1418
virtual memory (kbytes, -v) unlimited
by increasing the number of open files for the shell to 1024:
MacBook-Pro:PlanC daniel$ ulimit -S -n 1024
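Note that this soft-limit change applies only to the current shell session (and cannot exceed the hard limit, ulimit -H -n), so it has to be rerun in any new terminal before invoking plan-c. A quick check that it took effect:
MacBook-Pro:PlanC daniel$ ulimit -S -n
1024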
Recording this answer in case others have problems - backups are important after all :)
I am trying to run an Oozie Sqoop job to import from Teradata to Hive.
Sqoop runs fine from the CLI, but I am facing issues scheduling it with Oozie.
Note: shell actions in Oozie work fine for me.
Find the error logs and workflow below.
Error logs:
Log Type: stderr
Log Upload Time: Wed Feb 01 04:19:00 -0500 2017
Log Length: 513
log4j:ERROR Could not find value for key log4j.appender.CLA
log4j:ERROR Could not instantiate appender named "CLA".
log4j:WARN No appenders could be found for logger (org.apache.hadoop.yarn.client.RMProxy).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
No such sqoop tool: sqoop. See 'sqoop help'.
Intercepting System.exit(1)
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SqoopMain], exit code [1]
Log Type: stdout
Log Upload Time: Wed Feb 01 04:19:00 -0500 2017
Log Length: 158473
Showing 4096 bytes of 158473 total.
curity.ShellBasedUnixGroupsMapping
dfs.client.domain.socket.data.traffic=false
dfs.client.read.shortcircuit.streams.cache.size=256
fs.s3a.connection.timeout=200000
dfs.datanode.block-pinning.enabled=false
mapreduce.job.end-notification.max.retry.interval=5000
yarn.acl.enable=true
yarn.nm.liveness-monitor.expiry-interval-ms=600000
mapreduce.application.classpath=$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,$MR2_CLASSPATH
mapreduce.input.fileinputformat.list-status.num-threads=1
dfs.client.mmap.cache.size=256
mapreduce.tasktracker.map.tasks.maximum=2
yarn.scheduler.fair.user-as-default-queue=true
yarn.timeline-service.ttl-enable=true
yarn.nodemanager.linux-container-executor.resources-handler.class=org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler
dfs.namenode.max.objects=0
dfs.namenode.service.handler.count=10
dfs.namenode.kerberos.principal.pattern=*
yarn.resourcemanager.state-store.max-completed-applications=${yarn.resourcemanager.max-completed-applications}
dfs.namenode.delegation.token.max-lifetime=604800000
mapreduce.job.classloader=false
yarn.timeline-service.leveldb-timeline-store.start-time-write-cache-size=10000
mapreduce.job.hdfs-servers=${fs.defaultFS}
yarn.application.classpath=$HADOOP_CLIENT_CONF_DIR,$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,$HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*
dfs.datanode.hdfs-blocks-metadata.enabled=true
mapreduce.tasktracker.dns.nameserver=default
dfs.datanode.readahead.bytes=4193404
mapreduce.job.ubertask.maxreduces=1
dfs.image.compress=false
mapreduce.shuffle.ssl.enabled=false
yarn.log-aggregation-enable=false
mapreduce.tasktracker.report.address=127.0.0.1:0
mapreduce.tasktracker.http.threads=40
dfs.stream-buffer-size=4096
tfile.fs.output.buffer.size=262144
fs.permissions.umask-mode=022
dfs.client.datanode-restart.timeout=30
dfs.namenode.resource.du.reserved=104857600
yarn.resourcemanager.am.max-attempts=2
yarn.nodemanager.resource.percentage-physical-cpu-limit=100
ha.failover-controller.graceful-fence.connection.retries=1
mapreduce.job.speculative.speculative-cap-running-tasks=0.1
hadoop.proxyuser.hdfs.groups=*
dfs.datanode.drop.cache.behind.writes=false
hadoop.proxyuser.HTTP.hosts=*
hadoop.common.configuration.version=0.23.0
mapreduce.job.ubertask.enable=false
yarn.app.mapreduce.am.resource.cpu-vcores=1
dfs.namenode.replication.work.multiplier.per.iteration=2
mapreduce.job.acl-modify-job=
io.seqfile.local.dir=${hadoop.tmp.dir}/io/local
yarn.resourcemanager.system-metrics-publisher.enabled=false
fs.s3.sleepTimeSeconds=10
mapreduce.client.output.filter=FAILED
------------------------
Sqoop command arguments :
sqoop
import
--connect
"jdbc:teradata://xx.xxx.xx:xxxx/DATABASE=Database_name"
--verbose
--username
xxx
-password
'xxx'
--table
BILL_DETL_EXTRC
--split-by
EXTRC_RUN_ID
--m
1
--fields-terminated-by
,
--hive-import
--hive-table
OPS_TEST.bill_detl_extr213
--target-dir
/hadoop/dev/TD_archive/bill_detl_extrc
Fetching child yarn jobs
tag id : oozie-56ea2084fcb1d55591f8919b405f0be0
Child yarn jobs are found -
=================================================================
Invoking Sqoop command line now >>>
3324 [uber-SubtaskRunner] WARN org.apache.sqoop.tool.SqoopTool - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
Intercepting System.exit(1)
<<< Invocation of Main class completed <<<
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SqoopMain], exit code [1]
Oozie Launcher failed, finishing Hadoop job gracefully
Oozie Launcher, uploading action data to HDFS sequence file: hdfs://namenode:8020/user/hadoopadm/oozie-oozi/0000039-170123205203054-oozie-oozi-W/sqoop-action--sqoop/action-data.seq
Oozie Launcher ends
Log Type: syslog
Log Upload Time: Wed Feb 01 04:19:00 -0500 2017
Log Length: 16065
Showing 4096 bytes of 16065 total.
adoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Job jar is not present. Not adding any jar to the list of resources.
2017-02-01 04:18:51,990 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf file on the remote FS is /user/hadoopadm/.staging/job_1485220715968_0219/job.xml
2017-02-01 04:18:52,074 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #5 tokens and #1 secret keys for NM use for launching container
2017-02-01 04:18:52,074 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of containertokens_dob is 6
2017-02-01 04:18:52,074 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting shuffle token in serviceData
2017-02-01 04:18:52,174 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://svacld001.bcbsnc.com:8020]
2017-02-01 04:18:52,240 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapred.JobConf: Task java-opts do not specify heap size. Setting task attempt jvm max heap size to -Xmx820m
2017-02-01 04:18:52,243 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1485220715968_0219_m_000000_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2017-02-01 04:18:52,243 INFO [uber-EventHandler] org.apache.hadoop.mapred.LocalContainerLauncher: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1485220715968_0219_01_000001 taskAttempt attempt_1485220715968_0219_m_000000_0
2017-02-01 04:18:52,245 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1485220715968_0219_m_000000_0] using containerId: [container_1485220715968_0219_01_000001 on NM: [svacld005.bcbsnc.com:8041]
2017-02-01 04:18:52,246 INFO [uber-SubtaskRunner] org.apache.hadoop.mapred.LocalContainerLauncher: mapreduce.cluster.local.dir for uber task: /disk1/yarn/nm/usercache/hadoopadm/appcache/application_1485220715968_0219,/disk10/yarn/nm/usercache/hadoopadm/appcache/application_1485220715968_0219,/disk11/yarn/nm/usercache/hadoopadm/appcache/application_1485220715968_0219,/disk12/yarn/nm/usercache/hadoopadm/appcache/application_1485220715968_0219,/disk2/yarn/nm/usercache/hadoopadm/appcache/application_1485220715968_0219,/disk3/yarn/nm/usercache/hadoopadm/appcache/application_1485220715968_0219,/disk4/yarn/nm/usercache/hadoopadm/appcache/application_1485220715968_0219,/disk5/yarn/nm/usercache/hadoopadm/appcache/application_1485220715968_0219,/disk6/yarn/nm/usercache/hadoopadm/appcache/application_1485220715968_0219,/disk7/yarn/nm/usercache/hadoopadm/appcache/application_1485220715968_0219,/disk8/yarn/nm/usercache/hadoopadm/appcache/application_1485220715968_0219,/disk9/yarn/nm/usercache/hadoopadm/appcache/application_1485220715968_0219
2017-02-01 04:18:52,247 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1485220715968_0219_m_000000_0 TaskAttempt Transitioned from ASSIGNED to RUNNING
2017-02-01 04:18:52,247 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1485220715968_0219_m_000000 Task Transitioned from SCHEDULED to RUNNING
2017-02-01 04:18:52,249 INFO [uber-SubtaskRunner] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 1
2017-02-01 04:18:52,258 INFO [uber-SubtaskRunner] org.apache.hadoop.mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2017-02-01 04:18:52,324 INFO [uber-SubtaskRunner] org.apache.hadoop.mapred.MapTask: Processing split: org.apache.oozie.action.hadoop.OozieLauncherInputFormat$EmptySplit#9c73765
2017-02-01 04:18:52,329 INFO [uber-SubtaskRunner] org.apache.hadoop.mapred.MapTask: numReduceTasks: 0
2017-02-01 04:18:52,340 INFO [uber-SubtaskRunner] org.apache.hadoop.conf.Configuration.deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
WORKFLOW
<workflow-app xmlns="uri:oozie:workflow:0.5" name="oozie-wf">
    <start to="sqoop-wf"/>
    <action name="sqoop-wf">
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
            <job-tracker>xx.xx.xx:8032</job-tracker>
            <name-node>hdfs://xx.xxx.xx:8020</name-node>
            <command>import --connect "jdbc:teradata://ip/DATABASE=EDW_EXTRC_TAB_HST" --connection-manager "com.cloudera.connector.teradata.TeradataManager" --verbose --username HADOOP -password 'xxxxx' --table BILL_DETL_EXTRC --split-by EXTRC_RUN_ID --m 1 --fields-terminated-by , --hive-import --hive-table OPS_TEST.bill_detl_extrc1 --target-dir /hadoop/dev/TD_archive/data/PDCRDATA_TEST/bill_detl_extrc</command>
        </sqoop>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Failed, Error Message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
JOB PROPERTIES
oozie.wf.application.path=hdfs:///hadoop/dev/TD_archive/workflow1.xml
oozie.use.system.libpath=true
security_enabled=True
dryrun=False
jobtracker=xxx.xxx:8032
nameNode=hdfs://xx.xx:8020
NOTE:
We are using Cloudera CDH 5.5.
All the necessary JARs (sqoop-connector-teradata-1.5c5.jar, tdgssconfig.jar, terajdbc4.jar) are placed in /var/lib/sqoop as well as in HDFS.
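One detail worth checking in the launcher output above: the "Sqoop command arguments" list begins with a literal sqoop token, which is exactly what triggers "No such sqoop tool: sqoop". Oozie's sqoop action invokes Sqoop itself, so the <command> must start with the tool name (import), not with sqoop. As a sanity check, the equivalent CLI invocation (a sketch assembled from the workflow above; -P prompts for the password instead of embedding it) is:
sqoop import \
  --connect "jdbc:teradata://ip/DATABASE=EDW_EXTRC_TAB_HST" \
  --connection-manager com.cloudera.connector.teradata.TeradataManager \
  --username HADOOP -P \
  --table BILL_DETL_EXTRC --split-by EXTRC_RUN_ID -m 1 \
  --fields-terminated-by ',' \
  --hive-import --hive-table OPS_TEST.bill_detl_extrc1 \
  --target-dir /hadoop/dev/TD_archive/data/PDCRDATA_TEST/bill_detl_extrc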
I have WAMP installed on Windows 7. It was working well, but after I restarted my system I ran into a problem.
The problem is that MySQL is not starting, while Apache is running as usual; when I test port 80 it says Apache is running.
I have tried changing ports in MySQL's my.ini, but MySQL still does not start.
Here is the latest log from when I attempted to start MySQL:
2014-06-18 08:15:23 4496 [Note] Plugin 'FEDERATED' is disabled.
2014-06-18 08:15:23 4496 [Note] InnoDB: The InnoDB memory heap is disabled
2014-06-18 08:15:23 4496 [Note] InnoDB: Mutexes and rw_locks use Windows interlocked functions
2014-06-18 08:15:23 4496 [Note] InnoDB: Compressed tables use zlib 1.2.3
2014-06-18 08:15:23 4496 [Note] InnoDB: Not using CPU crc32 instructions
2014-06-18 08:15:23 4496 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2014-06-18 08:15:23 4496 [Note] InnoDB: Completed initialization of buffer pool
2014-06-18 08:15:23 4496 [Note] InnoDB: Highest supported file format is Barracuda.
2014-06-18 08:15:24 4496 [Note] InnoDB: The log sequence numbers 1625977 and 1625977 in ibdata files do not match the log sequence number 22262437 in the ib_logfiles!
2014-06-18 08:15:24 4496 [Note] InnoDB: Database was not shutdown normally!
2014-06-18 08:15:24 4496 [Note] InnoDB: Starting crash recovery.
2014-06-18 08:15:24 4496 [Note] InnoDB: Reading tablespace information from the .ibd files...
2014-06-18 08:15:24 4496 [ERROR] InnoDB: Attempted to open a previously opened tablespace. Previous tablespace joomla_web/intern_extensions uses space ID: 104 at filepath: .\joomla_web\intern_extensions.ibd. Cannot open tablespace test/joomla_assets which uses space ID: 104 at filepath: .\test\joomla_assets.ibd
InnoDB: Error: could not open single-table tablespace file .\test\joomla_assets.ibd
InnoDB: We do not continue the crash recovery, because the table may become
InnoDB: corrupt if we cannot apply the log records in the InnoDB log to it.
InnoDB: To fix the problem and start mysqld:
InnoDB: 1) If there is a permission problem in the file and mysqld cannot
InnoDB: open the file, you should modify the permissions.
InnoDB: 2) If the table is not needed, or you can restore it from a backup,
InnoDB: then you can remove the .ibd file, and InnoDB will do a normal
InnoDB: crash recovery and ignore that table.
InnoDB: 3) If the file system or the disk is broken, and you cannot remove
InnoDB: the .ibd file, you can set innodb_force_recovery > 0 in my.cnf
InnoDB: and force InnoDB to continue crash recovery here.
Try going to Control Panel -> Administrative Tools -> Services (or run services.msc), then stop and restart the MySQL service.
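From an elevated command prompt, the equivalent is (a sketch; WAMP usually registers the service as wampmysqld, but the name can differ):
sc query wampmysqld
net stop wampmysqld
net start wampmysqld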
OK, I solved my problem by deleting the ibdata1 file in C:\WAMP\mysql\data.
For those who don't know what is stored in ibdata1, read this answer.
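One caution about this fix: ibdata1 holds the InnoDB system tablespace, and unless innodb_file_per_table was enabled it also holds the table data itself, so deleting it discards your InnoDB databases. A safer sequence is to stop MySQL and copy the data directory first (a sketch; the wampmysqld service name is WAMP's usual default):
net stop wampmysqld
xcopy /E /I C:\WAMP\mysql\data C:\WAMP\mysql\data_backup
del C:\WAMP\mysql\data\ibdata1
net start wampmysqld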
We reproducibly experience errors when attempting to checkout or update working copies.
Environment
Our environment is as follows:-
svn 1.8.9 (r1591380) client and server running on the same server (also happens with client on another server, but less often)
Server runs Windows Server 2008 (64 bit)
Apache httpd server
We are running the svn checkout from QuickBuild.
Client Error
This error is reported from the checkout:-
svn checkout http://qvsvn101/PayWay/PayWay/Branches/2014.R1/ D:\quickbuild_workspace\PayWay\Application\PointRelease\Release --non-interactive --username SrvAcc --password ****** -r 11523
Command return code: -1073741819
Command error output: This application has halted due to an unexpected error.
A crash report and minidump file were saved to disk, you can find them here:
C:\Users\SrvAcc\AppData\Local\Temp\svn-crash-log20140527164109.log
C:\Users\SrvAcc\AppData\Local\Temp\svn-crash-log20140527164109.dmp
Please send the log file to users@subversion.apache.org to help us analyze
and solve this problem.
NOTE: The crash report and minidump files can contain some sensitive information
(filenames, partial file content, usernames and passwords etc.)
Apache httpd error log
At the same time, the Apache error.log contains this:
[Tue May 27 16:41:12 2014] [error] [client 192.168.40.47] Provider encountered an error while streaming a REPORT response. [500, #0]
[Tue May 27 16:41:12 2014] [error] [client 192.168.40.47] A failure occurred while driving the update report editor [500, #106]
Subversion Crash Log File
Subversion writes out a log file as follows:
Process info:
Cmd line: svn checkout http://qvsvn101/PayWay/PayWay/Branches/2014.R1/ D:\quickbuild_workspace\PayWay\Application\PointRelease\Release --non-interactive --username SrvAcc --password ****** -r 11523
Working Dir: D:\quickbuild_workspace\PayWay\Application\PointRelease\Release
Version: 1.8.9 (r1591380), compiled May 8 2014, 04:25:41
Platform: Windows OS version 6.1 build 7601 Service Pack 1
Exception: ACCESS_VIOLATION
Registers:
eax=7259a7c0 ebx=00000000 ecx=01e5e138 edx=72746e65 esi=01e5e138 edi=61006469
eip=7259a7cf esp=003cf44c ebp=003cf458 efl=00010202
cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b
Stacktrace:
#1 0x7259a7cf in svn_wc_add_repos_file4()
#2 0x732396b7 in svn_ra_svn_init()
#3 0x7323470b in svn_ra_svn_init()
#4 0x73234abe in svn_ra_svn_init()
#5 0x731f3110 in (unknown function)
Loaded modules:
0x00020000 D:\SubversionServer\svn.exe (1.8.9.18516, 241664 bytes)
0x77a20000 C:\Windows\SysWOW64\ntdll.dll (6.1.7601.18247, 1572864 bytes)
0x75f00000 C:\Windows\SysWOW64\kernel32.dll (6.1.7601.18229, 1114112 bytes)
0x75e00000 C:\Windows\SysWOW64\KERNELBASE.dll (6.1.7601.18229, 290816 bytes)
0x73600000 D:\SubversionServer\libapr-1.dll (1.5.1.0, 163840 bytes)
0x764a0000 C:\Windows\SysWOW64\ws2_32.dll (6.1.7601.17514, 217088 bytes)
0x75800000 C:\Windows\SysWOW64\msvcrt.dll (7.0.7601.17744, 704512 bytes)
0x75350000 C:\Windows\SysWOW64\rpcrt4.dll (6.1.7601.18205, 983040 bytes)
0x75100000 C:\Windows\SysWOW64\sspicli.dll (6.1.7601.18270, 393216 bytes)
0x750f0000 C:\Windows\SysWOW64\CRYPTBASE.dll (6.1.7600.16385, 49152 bytes)
0x75ee0000 C:\Windows\SysWOW64\sechost.dll (6.1.7600.16385, 102400 bytes)
0x75950000 C:\Windows\SysWOW64\nsi.dll (6.1.7600.16385, 24576 bytes)
0x74510000 C:\Windows\System32\mswsock.dll (6.1.7601.18254, 245760 bytes)
0x76010000 C:\Windows\SysWOW64\user32.dll (6.1.7601.17514, 1048576 bytes)
0x75960000 C:\Windows\SysWOW64\gdi32.dll (6.1.7601.18275, 589824 bytes)
0x779f0000 C:\Windows\SysWOW64\lpk.dll (6.1.7601.18177, 40960 bytes)
0x758b0000 C:\Windows\SysWOW64\usp10.dll (1.626.7601.18009, 643072 bytes)
0x759f0000 C:\Windows\SysWOW64\advapi32.dll (6.1.7601.18247, 655360 bytes)
0x76510000 C:\Windows\SysWOW64\shell32.dll (6.1.7601.18222, 12886016 bytes)
0x75e70000 C:\Windows\SysWOW64\shlwapi.dll (6.1.7601.17514, 356352 bytes)
0x73260000 D:\SubversionServer\MSVCR100.DLL (10.0.30319.1, 778240 bytes)
0x73580000 D:\SubversionServer\libsvn_client-1.dll (1.8.9.18516, 319488 bytes)
0x74010000 D:\SubversionServer\libsvn_delta-1.dll (1.8.9.18516, 122880 bytes)
0x733e0000 D:\SubversionServer\libaprutil-1.dll (1.5.3.0, 204800 bytes)
0x73ee0000 D:\SubversionServer\libapriconv-1.dll (1.2.1.0, 36864 bytes)
0x72a80000 D:\SubversionServer\libsvn_subr-1.dll (1.8.9.18516, 1077248 bytes)
0x73670000 C:\Windows\System32\shfolder.dll (6.1.7600.16385, 20480 bytes)
0x755c0000 C:\Windows\SysWOW64\ole32.dll (6.1.7601.17514, 1425408 bytes)
0x75230000 C:\Windows\SysWOW64\crypt32.dll (6.1.7601.18277, 1179648 bytes)
0x75ed0000 C:\Windows\SysWOW64\msasn1.dll (6.1.7601.17514, 49152 bytes)
0x74a30000 C:\Windows\System32\version.dll (6.1.7600.16385, 36864 bytes)
0x73cc0000 D:\SubversionServer\libsvn_diff-1.dll (1.8.9.18516, 86016 bytes)
0x731f0000 D:\SubversionServer\libsvn_ra-1.dll (1.8.9.18516, 454656 bytes)
0x738f0000 D:\SubversionServer\libsasl.dll (2.1.23.0, 81920 bytes)
0x73f10000 D:\SubversionServer\libsvn_fs-1.dll (1.8.9.18516, 225280 bytes)
0x73f60000 D:\SubversionServer\libsvn_repos-1.dll (1.8.9.18516, 180224 bytes)
0x735d0000 C:\Windows\System32\secur32.dll (6.1.7601.18270, 32768 bytes)
0x731a0000 D:\SubversionServer\ssleay32.dll (1.0.1.7, 286720 bytes)
0x72940000 D:\SubversionServer\libeay32.dll (1.0.1.7, 1306624 bytes)
0x72550000 D:\SubversionServer\libsvn_wc-1.dll (1.8.9.18516, 544768 bytes)
0x76340000 C:\Windows\System32\imm32.dll (6.1.7601.17514, 393216 bytes)
0x75160000 C:\Windows\SysWOW64\msctf.dll (6.1.7600.16385, 835584 bytes)
0x74960000 C:\Windows\System32\profapi.dll (6.1.7600.16385, 45056 bytes)
0x744f0000 C:\Windows\System32\nlaapi.dll (6.1.7601.17761, 65536 bytes)
0x744e0000 C:\Windows\System32\NapiNSP.dll (6.1.7600.16385, 65536 bytes)
0x742c0000 C:\Windows\System32\dnsapi.dll (6.1.7601.17570, 278528 bytes)
0x744d0000 C:\Windows\System32\winrnr.dll (6.1.7600.16385, 32768 bytes)
0x742a0000 C:\Windows\System32\IPHLPAPI.DLL (6.1.7601.17514, 114688 bytes)
0x74290000 C:\Windows\System32\winnsi.dll (6.1.7600.16385, 28672 bytes)
0x74250000 C:\Windows\System32\FWPUCLNT.DLL (6.1.7601.18283, 229376 bytes)
0x74240000 C:\Windows\System32\rasadhlp.dll (6.1.7600.16385, 24576 bytes)
0x74500000 C:\Windows\System32\WSHTCPIP.DLL (6.1.7600.16385, 20480 bytes)
0x74ae0000 C:\Windows\System32\dbghelp.dll (6.1.7601.17514, 962560 bytes)
0x73170000 C:\Windows\System32\powrprof.dll (6.1.7600.16385, 151552 bytes)
0x76110000 C:\Windows\SysWOW64\setupapi.dll (6.1.7601.17514, 1691648 bytes)
0x75470000 C:\Windows\SysWOW64\cfgmgr32.dll (6.1.7601.17621, 159744 bytes)
0x763a0000 C:\Windows\SysWOW64\oleaut32.dll (6.1.7601.17676, 585728 bytes)
0x75e50000 C:\Windows\SysWOW64\devobj.dll (6.1.7601.17621, 73728 bytes)
Here is the related thread on the dev@ Subversion mailing list.
It looks like your server interrupts the connection, and a Subversion 1.8 client built with the serf 1.3.5 library crashes on this failure. That's why older Subversion clients report a plain error, while a client built with serf 1.3.5 crashes.
Serf 1.3.5 fails to process the error, and thus the crash occurs on the client. There is a good chance the crash is caused by a bug in the serf library (on the client side) that is fixed in version 1.3.6:
Revert r2319 from serf 1.3.5: this change was making serf call
handle_response multiple times in case of an error response, leading
to unexpected behavior.
I suggest trying a Subversion command-line client built against serf 1.3.6. Subversion 1.8.x binaries built with serf 1.3.6 should be available soon.
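To confirm which serf a given client was built against, run the following and look for the ra_serf module line in the output (e.g. "- using serf 1.3.5"):
svn --version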
I posted the same dump file to the users@subversion.apache.org mailing list but have had no reply in 2 weeks. To get around this issue, I switched QuickBuild to use jsvn.bat from the pure-Java distribution of SVNKit, and the issue seems to be resolved. jsvn has the same command-line interface as the Apache svn binary, so it's a simple drop-in replacement.
I initially had an authentication issue because we use NTLM to authenticate with our Active Directory; the error was "svn: E170001: Authentication required for repository". The solution was to add the following to svnkit\bin\jsvn.bat after the existing EXTRA_JVM_ARGUMENTS environment variable:
rem Adding this to resolve authentication issue as per http://subversion.1072662.n5.nabble.com/SVN-authentication-problem-td1560.html
set EXTRA_JVM_ARGUMENTS=%EXTRA_JVM_ARGUMENTS% "-Dsvnkit.http.methods=Basic,Digest,Negotiate,NTLM"
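With that in place, jsvn is invoked exactly like the native client; the failing checkout above becomes (a sketch with the same arguments):
jsvn checkout http://qvsvn101/PayWay/PayWay/Branches/2014.R1/ D:\quickbuild_workspace\PayWay\Application\PointRelease\Release --non-interactive --username SrvAcc --password ****** -r 11523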
The only thing that worked for me was installing a previous version of TortoiseSVN (TortoiseSVN-1.8.6.25419-x64-svn-1.8.8), which ships with the previous svn client (1.8.8), and then using that older svn.exe. That worked against the newer version of the server (1.8.9) too.
(I had the same issue. Yesterday I upgraded my CollabNet Subversion to the latest (svn version 1.8.9-3871.129), and both the command-prompt svn checkout and the latest TortoiseSVN (1.8.7) fail. I have the same ACCESS_VIOLATION error in the dump log. My computer is Windows 7 64-bit.)