How do I correct this Naked Agility Work Item Migration tool error: "Value cannot be null. Parameter name: name"?

I'm a new user trying to debug why the migration tool (now running version 11.11.26.0) gives me this error.
I'm a system admin migrating from on-prem TFS 2017 to Azure DevOps. I've created the Azure DevOps projects and have now tried 5 different projects; all of them get this error. I suspect I'm missing something in my config files. I've tried using a minimal config file, possibly too minimal.
Unfortunately, the new tool release (26) did not resolve the error.
I've also tried creating TFS projects with no code and just a couple of Work Items, using both TFVC and Git, and they all still hit the "Value cannot be null" error below.
There is only one processor and it outputs:
System.ArgumentNullException: Value cannot be null.
Parameter name: name
My most recent attempt uses a TFS test project with no code and only 2 Work Items.
Here is most of the config file from a new TFS Agile Git project with no code repo:
{
"ChangeSetMappingFile": null,
"Version":"11.11",
"LogLevel": "Information",
"WorkItemTypeDefinition": {
"sourceWorkItemTypeName": "targetWorkItemTypeName"
},
"source": {
"$type": "TfsTeamProjectConfig",
"Collection": "https://tfs.oldrepublictitle.com/ORTC-Collection/",
"Project": "WIT_Migration_Git_Test1",
"AllowCrossProjectLinking": false,
"AuthenticationMode": "AccessToken",
"LanguageMaps": {
"AreaPath": "Area",
"IterationPath": "Iteration"
}
},
"Target": {
"$type": "TfsTeamProjectConfig",
"Collection": "https://dev.azure.com/ortdevops/",
"Project": "TEST-WorkItem-Migration",
"AllowCrossProjectLinking": false,
"AuthenticationMode": "AccessToken",
"LanguageMaps": {
"AreaPath": "Area",
"IterationPath": "Iteration"
}
},
"FieldMaps": [
{
"$type": "FieldBlankMapConfig",
"WorkItemTypeName": "*",
"targetField": "TfsMigrationTool.ReflectedWorkItemId"
},
{
"$type": "FieldValueMapConfig",
"WorkItemTypeName": "*",
"sourceField": "System.State",
"targetField": "System.State",
"defaultValue": "New",
"valueMapping": {
"Approved": "New",
"New": "New",
"Committed": "Active",
"In Progress": "Active",
"To Do": "New",
"Done": "Closed",
"Removed": "Removed"
}
},
{
"$type": "FieldMergeMapConfig",
"WorkItemTypeName": "*",
"sourceField1": "System.Description",
"sourceField2": "Microsoft.VSTS.Common.AcceptanceCriteria",
"sourceField3": null,
"targetField": "System.Description",
"formatExpression": "{0} <br/><br/><h3>Acceptance Criteria</h3>{1}",
"doneMatch": "##DONE##"
},
{
"$type": "RegexFieldMapConfig",
"WorkItemTypeName": "*",
"sourceField": "COMPANY.PRODUCT.Release",
"targetField": "COMPANY.DEVISION.MinorReleaseVersion",
"pattern": "PRODUCT \\d{4}.(\\d{1})",
"replacement": "$1"
},
],
"WorkItemTypeDefinition": {
"Bug": "Bug",
"Product Backlog Item": "Issue",
"Feature": "Feature",
"Task": "Task",
"Code Review Request": "Task",
"Code Review Response": "Task"
},
"Processors": [
{
"$type": "WorkItemMigrationConfig",
"Enabled": true,
"ReplayRevisions": true,
"PrefixProjectToNodes": false,
"UpdateCreatedDate": false,
"UpdateCreatedBy": false,
"BuildFieldTable": false,
"AppendMigrationToolSignatureFooter": false,
"WIQLQueryBit": "AND [System.WorkItemType] NOT IN ('Test Suite', 'Test Plan', 'Bug')",
"WIQLOrderBit": "[System.ChangedDate] desc",
"LinkMigration": true,
"AttachmentMigration": true,
"AttachmentWorkingPath": "c:\\temp\\WorkItemAttachmentWorkingFolder\\",
"FixHtmlAttachmentLinks": false,
"SkipToFinalRevisedWorkItemType": true,
"WorkItemCreateRetryLimit": 5,
"FilterWorkItemsThatAlreadyExistInTarget": false,
"NodeStructureEnricherEnabled": true,
"PauseAfterEachWorkItem": false,
"AttachmentMaxSize": 480000000,
"AttachRevisionHistory": false,
"LinkMigrationSaveEachAsAdded": false,
"GenerateMigrationComment": true,
"WorkItemIDs": null,
"MaxRevisions": 0
}
],
"Endpoints": {
"InMemoryWorkItemEndpoints": [
{
"Name": "Source",
"EndpointEnrichers": null
},
{
"Name": "Target",
"EndpointEnrichers": null
}
]
}
}
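The run is started from PowerShell in the extracted tool folder, roughly like this (the config file name below is a stand-in for my actual file):
PS D:\AzureMigrationTool\MigrationTools-11.11.26> .\migration.exe execute --config .\configuration.json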
Here is most of a run's output from the config above:
[13:30:04 INF] Migrating all Nodes before the Processor run.
[13:30:05 INF] Processing Node: \TEST-WorkItem-Migration\Iteration\Iteration 1, start date: null, finish date: null
[13:30:05 INF] Processing Node: \TEST-WorkItem-Migration\Iteration\Iteration 2, start date: null, finish date: null
[13:30:05 INF] Processing Node: \TEST-WorkItem-Migration\Iteration\Iteration 3, start date: null, finish date: null
[13:30:07 INF] Querying items to be migrated: SELECT [System.Id], [System.Tags] FROM WorkItems WHERE [System.TeamProject] = @TeamProject AND [System.WorkItemType] NOT IN ('Test Suite', 'Test Plan', 'Bug') ORDER BY [System.ChangedDate] desc
...
[13:30:08 INF] Replay all revisions of 2 work items?
[13:30:08 INF] Found target project as TEST-WorkItem-Migration
[13:30:08 FTL] Error while running WorkItemMigration
System.ArgumentNullException: Value cannot be null.
Parameter name: name
at Microsoft.TeamFoundation.WorkItemTracking.Client.FieldDefinitionCollection.Contains(String name)
at MigrationTools.ProcessorEnrichers.TfsValidateRequiredField.ValidatingRequiredField(String fieldToFind, List`1 sourceWorkItems) in D:\a\1\s\src\MigrationTools.Clients.AzureDevops.ObjectModel\ProcessorEnrichers\TfsValidateRequiredField.cs:line 53
at VstsSyncMigrator.Engine.WorkItemMigrationContext.InternalExecute() in D:\a\1\s\src\VstsSyncMigrator.Core\Execution\MigrationContext\WorkItemMigrationContext.cs:line 125
at MigrationTools._EngineV1.Processors.MigrationProcessorBase.Execute() in D:\a\1\s\src\MigrationTools\_EngineV1\Processors\MigrationProcessorBase.cs:line 47
[13:30:08 ERR] WorkItemMigration The Processor MigrationEngine entered the failed state...stopping run
[13:30:08 INF] Application is shutting down...
[13:30:08 INF] Terminating: Application forcebly closed.
[13:30:08 INF] Application Ending
[13:30:08 INF] The application ran in 00:00:06.9585120 and finished at 02/08/2022 13:30:08
PS D:\AzureMigrationTool\MigrationTools-11.11.26>
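If more detail would help, I can re-run with a more verbose log level by changing the top of the config, e.g.:
"LogLevel": "Verbose",
(I'm assuming "Verbose" is a valid value for this setting.)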
How can I determine which name is null so that I can correct it?
Any guidance would be greatly appreciated.
Thanks
Dan

Related

Azure ADO Shared query migration - TfsSharedQueryProcessorOptions SourceName TargetName

The documentation shows a sample configuration for migrating queries (TfsSharedQueryProcessorOptions) that includes SourceName and TargetName parameters. My original problem was that when I tried to use the sample config to migrate queries, I got an error message saying: "There is no endpoint named [sourceName]".
Things I tried: I changed the endpoint names in the existing Endpoints node, added a new child Endpoints node inside TfsSharedQueryProcessorOptions, changed the name TfsEndpoints to TfSharedQueryEndpoints, etc. None of that worked.
Eventually I found a way to make it work, so I thought I'd share my config in case someone else is stuck on this. In the config below, change the Version, Organisation and Project parameters to your own values. Also change the AccessToken parameter if you set AuthenticationMode to "AccessToken". This is my entire config to migrate queries:
{
"Version": "**0.0**",
"LogLevel": "Verbose",
"Endpoints": {
"TfsEndpoints": [
{
"Name": "Source",
"AccessToken": "**Your source access token**",
"Organisation": "https://dev.azure.com/**your_source_organization_name**/",
"Project": "**Your Source Project Name**",
"ReflectedWorkItemIdField": "Custom.ReflectedWorkItemId",
"AuthenticationMode": "AccessToken",
"AllowCrossProjectLinking": false,
"LanguageMaps": {
"AreaPath": "Area",
"IterationPath": "Iteration"
}
},
{
"Name": "Target",
"AccessToken": "**Your target access token**",
"Organisation": "https://dev.azure.com/**your_target_organization_name**/",
"Project": "**Your Target Project Name**",
"ReflectedWorkItemIdField": "Custom.ReflectedWorkItemId",
"AuthenticationMode": "AccessToken",
"AllowCrossProjectLinking": false,
"LanguageMaps": {
"AreaPath": "Area",
"IterationPath": "Iteration"
}
}
]
},
"Processors": [
{
"$type": "TfsSharedQueryProcessorOptions",
"Enabled": true,
"PrefixProjectToNodes": false,
"SharedFolderName": "Shared Queries",
"SourceToTargetFieldMappings": null,
"ProcessorEnrichers": null,
"SourceName": "Source",
"TargetName": "Target"
}
]
}
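To run it, I just point the tool's console executable at this file, something like the following (assuming the migration.exe console executable shipped with the v11/v12 releases; the config file name is simply whatever you saved it as):
.\migration.exe execute --config .\queries-config.json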

Selenium Cookies

I've copied all the cookies for a website after I logged in (it has 2FA verification, if that matters) and sent them with Selenium. The problem is that after approximately 1 hour the cookies seem to stop working, and I get asked for 2FA again.
Cookies being sent look like this:
{
"expirationDate": 1651051461,
"hostOnly": false,
"httpOnly": false,
"name": "s_dslv_s",
"path": "/",
"secure": false,
"session": false,
"storeId": null,
"value": "Less%20than%201%20day"
},
{
"expirationDate": 2083049656,
"hostOnly": false,
"httpOnly": false,
"name": "s_nr",
"path": "/",
"secure": false,
"session": false,
"storeId": null,
"value": "1651049656703-Repeat"
},
{
"expirationDate": 1745657661,
"hostOnly": false,
"httpOnly": false,
"name": "s_dslv",
"path": "/",
"secure": false,
"session": false,
"storeId": null,
"value": "1651049661026"
},
{
"hostOnly": false,
"httpOnly": false,
"name": "s_ppv",
"path": "/",
"secure": false,
"session": true,
"storeId": null,
"value": "Open%2520VET%2520Page%2520-%2520Render%2C83%2C83%2C1297%2C1%2C1"
},
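For reference, here's roughly how I'm attaching them (a minimal sketch assuming the Python Selenium bindings; the cookies.json file name and the key filtering are my own choices, not something prescribed by Selenium):
import json
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://somewebsite.work/")  # must be on the site's domain before adding cookies

with open("cookies.json") as f:          # the browser export shown above
    cookies = json.load(f)

for c in cookies:
    # WebDriver only accepts a fixed set of cookie fields, so extra export
    # fields (hostOnly, session, storeId, ...) are dropped here
    cookie = {"name": c["name"], "value": c["value"], "path": c.get("path", "/")}
    if "expirationDate" in c:
        cookie["expiry"] = int(c["expirationDate"])  # Selenium calls this field "expiry"
    driver.add_cookie(cookie)

driver.refresh()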
The cookies being sent are incomplete, though: I get an error if I include these two fields:
"sameSite": null,
"domain": ".somewebsite.work",
My question is: could this expiry problem be caused by the cookies being sent without the two fields above?
In my normal browser, even when the session expires I only get asked to log in again, without completing the 2FA.

Graph API doesn't restore a mail message; instead it creates a new message with createdDateTime automatically updated to the present date

When I perform a restore of an email message via the Graph API with a POST request, instead of restoring it, the API creates a new message with the same data, because createdDateTime in the JSON gets updated even though I am passing the previous createdDateTime.
To elaborate: I want to restore the mail message below, which was created in 2018 ("createdDateTime": "2018-12-31T14:49:42Z"), but when I POST the same JSON to restore it, createdDateTime is automatically updated to the present date. That is a problem, because it isn't a restore; it's just like creating a new message.
{
"#odata.type": "#microsoft.graph.eventMessageResponse",
"#odata.etag": "W/\"DAAAABYAAABjFtMyIejaSbuRSeM/auJwAAGfpJnO\"",
"id": "AAMkAGZiNGI0MWM4LTQ0NjUtNDUyMy1hOTI2LWNopaTZiMGYxZTBkNQBGAAAAAACaBIVNrajXSj6AQcjiAFBwBjFtMyIejaSbuRSeM-auJwAAAAAAEJAABjFtMyIejaSbuRSeM-auJwAAGf4eRfAAA=",
"createdDateTime": "2018-12-31T14:49:42Z",
"lastModifiedDateTime": "2020-12-31T14:49:46Z",
"changeKey": "DopskAkslaAABjFtMyIejaSbuRSeM/auJwAAGfpJnO",
"categories": [],
"receivedDateTime": "2020-12-31T14:49:43Z",
"sentDateTime": "2020-12-31T14:49:42Z",
"hasAttachments": false,
"internetMessageId": "<MA1PR0101MB207oPF15907003958DB7A58BDD60#MA1PR0101MB2070.INDPRD01.PROD.OUTLOOK.COM>",
"subject": "Accepted: New Year Party",
"bodyPreview": "",
"importance": "normal",
"parentFolderId": "AQMkAGZiNGI0MWM4LTQ0ADY1LTQ1MjMtYTkyNi1jZGU2YjBmMWUwZDUALgAAA5oEhU2tqNdKuqPoBByOIAlkallspspspspspppAAAIBCQAAAA==",
"conversationId": "AAQkAGZiNGI0MWM4LTQ0NjUtNDUyMy1hOTI2LWNkZTZiMGYxZTBkNQAQAEJ5AU8Tk1nklXE3E0XGh2w=",
"conversationIndex": "AQHW34QsrZ0Wy3deoU2Bn2byefNABQ==",
"isDeliveryReceiptRequested": null,
"isReadReceiptRequested": false,
"isRead": true,
"isDraft": false,
"inferenceClassification": "focused",
"meetingMessageType": "meetingAccepted",
"type": "singleInstance",
"isOutOfDate": false,
"isAllDay": false,
"isDelegated": false,
"responseType": "accepted",
"recurrence": null,
"body": {
"contentType": "text",
"content": ""
},
"sender": {
"emailAddress": {
"name": "Mark Rober",
"address": "mark#securemigration.in"
}
},
"from": {
"emailAddress": {
"name": "Mark Rober",
"address": "mark#securemigration.in"
}
},
"toRecipients": [
{
"emailAddress": {
"name": "#Class Yammer",
"address": "ClassYammer#securemigration.in"
}
}
],
"ccRecipients": [],
"bccRecipients": [],
"replyTo": [],
"flag": {
"flagStatus": "notFlagged"
},
"startDateTime": {
"dateTime": "2020-12-31T15:00:00.0000000",
"timeZone": "UTC"
},
"endDateTime": {
"dateTime": "2020-12-31T15:30:00.0000000",
"timeZone": "UTC"
}
}
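For context, the restore call is being made roughly like this (a sketch; the folder id, access token and exact endpoint path are placeholders, not the literal values I use):
POST https://graph.microsoft.com/v1.0/me/mailFolders/{destination-folder-id}/messages
Authorization: Bearer {access-token}
Content-Type: application/json

{ ...the message JSON shown above, including "createdDateTime": "2018-12-31T14:49:42Z"... }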
Please help me with it.

Webdriver-manager error: Unable to create session with *my config*

I run into the following problem after updating webdriver-manager:
E/launcher - SessionNotCreatedError: Unable to create session from
my config is printed here
webdriver-manager Version: 12.1.5
Node Version: 10.15.3
Protractor Version: 5.4.2
Browser(s): Chrome
Operating System and Version: Win 7 / Ubuntu
This is my config file which worked for the last 1.5 years:
exports.config = {
"seleniumAddress": "http://localhost:4444/wd/hub",
"seleniumPort": "4444",
"capabilities": {
"browserName": "chrome",
"unexpectedAlertBehaviour": "accept",
"perform": "ANY",
"version": "ANY",
"chromeOptions": {
"perfLoggingPrefs": {
"traceCategories": "blink.console,devtools.timeline,disabled-by-default-devtools.timeline,toplevel,disabled-by-default-devtools.timeline.frame,benchmark"
},
"prefs": {
"credentials_enable_service": false
},
"args": ["--headless", "--window-size=800,1080", "--disable-blink-features=BlockCredentialedSubresources", "--no-sandbox", "--test-type=browser", "--disable-dev-shm-usage", "--enable-gpu-benchmarking", "--enable-thread-composting" , "--start-maximized"]
},
"loggingPrefs": { "performance": "ALL" }
},
"jasmineNodeOpts": {
"showColors": true,
"defaultTimeoutInterval": 9999999
},
"allScriptsTimeout": 200000,
"params": {
"perf": {
"selenium": { "protocol": "http:", "slashes": true, "auth": null, "host": "localhost:4444", "port": 4444, "hostname": "localhost", "hash": null, "search": null, "query": null, "pathname": "/wd/hub", "path": "/wd/hub", "href": "http://localhost:4444/wd/hub" },
"browsers": [{
"browserName": "chrome",
"chromeOptions": {
"perfLoggingPrefs": {
"traceCategories": "blink.console,devtools.timeline,disabled-by-default-devtools.timeline,toplevel,disabled-by-default-devtools.timeline.frame,benchmark"
},
"args": ["--headless", "--disable-gpu", "--disable-blink-features=BlockCredentialedSubresources", "--no-sandbox", "--test-type=browser", "--disable-dev-shm-usage"]
},
"loggingPrefs": { "performance": "ALL" }
}],
"debugBrowser": false,
"actions": ["scroll"],
"metrics": ["TimelineMetrics", "ChromeTracingMetrics", "RafRenderingStats", "NetworkTimings", "NetworkResources"],
"metricOptions": {}
},
"warmup": false,
"agilar" : false
}
}
I know the file is a mess and it's more or less googled together, but it worked. Can you point me to what is causing this problem?
Is the Selenium server up and running at the default address
"http://localhost:4444/wd/hub"? If it's not, start it by running webdriver-manager start (assuming you already have webdriver-manager installed).
Also, I don't think you need to define seleniumPort when the seleniumAddress property is already given in the config, so remove the "seleniumPort": "4444" property from the config.
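For example, before kicking off the Protractor run (a sketch assuming a global webdriver-manager install):
webdriver-manager update
webdriver-manager start
and then run Protractor against http://localhost:4444/wd/hub as before.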

Why can't Docker find my sql file to import?

I have a SQL file I'm trying to import into a local docker instance. I'm running the following command:
docker exec -i 868b7935cc37 ../my.file.sql -u {user} --password={password} {dbName}
I'm getting the following error back when I run it:
OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"../my.file.sql\": stat ../my.file.sql: no such file or directory": unknown
I'm only one directory away from the file, hence the ../ in the command. I spoke with the person who gave me the file and the username, password and name are all correct. None of the names, passwords, etc. contain any special characters.
I feel like I'm right there. I don't know why I'm getting the no such file or directory error.
Any and all help is appreciated!
docker inspect gives me
[
{
"Id": "868b7935cc371a0eef47e84a7ffbddb99b03cfc93e735af31e5b5754680c1f98",
"Created": "2018-11-15T20:11:44.9362404Z",
"Path": "docker-entrypoint.sh",
"Args": [
"mysqld"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 4819,
"ExitCode": 0,
"Error": "",
"StartedAt": "2018-11-23T23:15:52.5735445Z",
"FinishedAt": "2018-11-21T01:52:31.3103165Z"
},
"Image": "sha256:583a6e3a3c98793a6c8a3b09d291b574da66f7e1fba6ebfebe3e93c88c3b443a",
"ResolvConfPath": "/var/lib/docker/containers/868b7935cc371a0eef47e84a7ffbddb99b03cfc93e735af31e5b5754680c1f98/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/868b7935cc371a0eef47e84a7ffbddb99b03cfc93e735af31e5b5754680c1f98/hostname",
"HostsPath": "/var/lib/docker/containers/868b7935cc371a0eef47e84a7ffbddb99b03cfc93e735af31e5b5754680c1f98/hosts",
"LogPath": "/var/lib/docker/containers/868b7935cc371a0eef47e84a7ffbddb99b03cfc93e735af31e5b5754680c1f98/868b7935cc371a0eef47e84a7ffbddb99b03cfc93e735af31e5b5754680c1f98-json.log",
"Name": "/dmr_mysql_1",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"dmr_local_mysql_data:/var/lib/mysql:rw",
"dmr_local_mysql_data_backups:/backups:rw"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "dmr_default",
"PortBindings": {
"3306/tcp": [
{
"HostIp": "",
"HostPort": "3306"
}
]
},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": [],
"CapAdd": null,
"CapDrop": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "shareable",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DeviceCgroupRules": null,
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/asound",
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/830398a5558d1451a520a7219971cfb6f869cfc7aa149373eab77287c2924ee4-init/diff:/var/lib/docker/overlay2/d6fbcced29e35b61a9bb5a8db9cec8c561fdcba5b52a61c62af886de180aa93a/diff:/var/lib/docker/overlay2/289d826020070599fe59d4171f40bfcfc41de1bbefa29bcc4cfd0bc0ab5ebb3c/diff:/var/lib/docker/overlay2/05572289cc7498d3d29d09d0b9745c0387c56ef06919ef27517c9131a585a895/diff:/var/lib/docker/overlay2/eed1357572b7a67729f776846e8109fa9493e0083d88bb3edeb2c95410bfa2b4/diff:/var/lib/docker/overlay2/8163d89b53f562d5476ffc8ccabdcb6a935ee932b2544f0d42ada9650b67eb46/diff:/var/lib/docker/overlay2/8ea425a1f09814f6e6f3f9d8f887c0829b2151e359425ea985792a75e65acd90/diff:/var/lib/docker/overlay2/ae06aa0cbb069d340970beb76ad8b278ac4b4f97eaceb1f3b36cb4ba15a2128c/diff:/var/lib/docker/overlay2/16350f1b36b1eb496286e5ad4cdea02f9931d33a6869a6105da766e40793d81a/diff:/var/lib/docker/overlay2/305da8336df57edf64806244981141bd6a05b168653a48f97223e7da0a3ac477/diff:/var/lib/docker/overlay2/2265f0da439e923b98007d292dda922f3a90298bb879c07f2f41afa66c971c7b/diff:/var/lib/docker/overlay2/b5e59e46468f95a1d243b6c99b7421b41715f7ad11bda4095901244a6552bbb9/diff:/var/lib/docker/overlay2/76fdb756320d579aed7713e27b4760a5266fcfde5358903d9e4351d9c77a4b9d/diff:/var/lib/docker/overlay2/58952f226dee428fecc6cf23f45e39b4084f10c6214f3ded03ebd87a250318bd/diff:/var/lib/docker/overlay2/7f03ca1e222e9ee48d8332e6ec830cb0a2a7a27167d2698847d41d3f18c47bd3/diff",
"MergedDir": "/var/lib/docker/overlay2/830398a5558d1451a520a7219971cfb6f869cfc7aa149373eab77287c2924ee4/merged",
"UpperDir": "/var/lib/docker/overlay2/830398a5558d1451a520a7219971cfb6f869cfc7aa149373eab77287c2924ee4/diff",
"WorkDir": "/var/lib/docker/overlay2/830398a5558d1451a520a7219971cfb6f869cfc7aa149373eab77287c2924ee4/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "dmr_local_mysql_data_backups",
"Source": "/var/lib/docker/volumes/dmr_local_mysql_data_backups/_data",
"Destination": "/backups",
"Driver": "local",
"Mode": "rw",
"RW": true,
"Propagation": ""
},
{
"Type": "volume",
"Name": "dmr_local_mysql_data",
"Source": "/var/lib/docker/volumes/dmr_local_mysql_data/_data",
"Destination": "/var/lib/mysql",
"Driver": "local",
"Mode": "rw",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "868b7935cc37",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"3306/tcp": {},
"33060/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"MYSQL_DATABASE=dmr",
"MYSQL_USER=dmr",
"MYSQL_PASSWORD=dmr",
"MYSQL_ROOT_PASSWORD=dmr",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"GOSU_VERSION=1.7",
"MYSQL_MAJOR=5.7",
"MYSQL_VERSION=5.7.24-1debian9"
],
"Cmd": [
"mysqld"
],
"ArgsEscaped": true,
"Image": "dmr_mysql",
"Volumes": {
"/backups": {},
"/var/lib/mysql": {}
},
"WorkingDir": "",
"Entrypoint": [
"docker-entrypoint.sh"
],
"OnBuild": null,
"Labels": {
"com.docker.compose.config-hash": "ffc27388c47a8468694fe5412bb06e3dda7a7b083d378fba1ab57eace2b3628e",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "dmr",
"com.docker.compose.service": "mysql",
"com.docker.compose.version": "1.22.0"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "cbccae535b05d954c1592710bb808814a87bcfbee6617fd1fb0a8f44561faec7",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"3306/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "3306"
}
],
"33060/tcp": null
},
"SandboxKey": "/var/run/docker/netns/cbccae535b05",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"dmr_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"868b7935cc37",
"mysql"
],
"NetworkID": "eaf16cd4854d6bcb607ca7598c5337d42b917164404de82f873b9567ab480df7",
"EndpointID": "c98e525c7c4f22e5bfb7b6041a2f94fbf81561f518d2a6b550768ef6c32e57d5",
"Gateway": "172.18.0.1",
"IPAddress": "172.18.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:12:00:02",
"DriverOpts": null
}
}
}
}
]
I believe the problem is a misunderstanding of how docker exec works. It may become clearer if you think of your container as a remote machine, and of docker exec as a command you run on your local machine to make that remote machine execute a command that is installed on it.
Right now you have a file on your local machine (outside the container) and you're passing it as the command you want the remote machine (inside the container) to run. But the file is on your local machine, not the remote one, so even if it could be processed (the docker help page says it must be an executable file; is an SQL file executable in this context?), the file isn't on the machine that would process it.
Calling docker exec with a reference to a file outside the container doesn't cause that local file to be sent into the container and executed there.
As such, I think you'll have to do something more like
docker cp ../myfile.sql DOCKERCONTAINERNAME:/root/myfile.sql
to copy the file into the container, and then something like:
docker exec DOCKERCONTAINERNAME sh -c 'mysql -u mysqluser -ppass dbName < /root/myfile.sql'
to have docker launch the in-container mysql client and pass it the arguments you specified. The client starts up inside the container and processes the file you copied in during the first step.
You don't need docker exec to interact with servers running in containers. Just use the ordinary client programs you'd normally use to talk to them.
For instance, if you launched the database container, publishing the normal MySQL port, as
docker run -p3306:3306 ... mysql
then you could run your script by installing the mysql command-line client, and then running
mysql -h 127.0.0.1 -u {user} --password={password} {dbName} < ../my.file.sql
If you've configured your Docker to require root permission to access it (a very reasonable setup) then this has the additional advantage of not requiring sudo just to run it.
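Another common pattern, which avoids copying the file into the container at all, is to pipe it over standard input through docker exec -i (a sketch, assuming the mysql client is present in the container, as it is in the official MySQL image):
docker exec -i 868b7935cc37 mysql -u {user} --password={password} {dbName} < ../my.file.sql
Here the < redirection is handled by your host shell, and docker exec -i forwards that stream to the mysql client running inside the container.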