ERR - Work Item is not ready to save - azure-devops-migration-tools

I am trying to migrate work items from one Azure DevOps Services organization to another.
When I execute:
migration.exe execute --config .\configuration.json
I get errors:
Microsoft.TeamFoundation.WorkItemTracking.Client.ValidationException: TF237124: Work Item is not ready to save
at Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItem.Save(SaveFlags saveFlags)
at MigrationTools.TfsExtensions.SaveToAzureDevOps(WorkItemData context) in D:\a\1\s\src\MigrationTools.Clients.AzureDevops.ObjectModel\TfsExtensions.cs:line 76
at VstsSyncMigrator.Engine.WorkItemMigrationContext.<ProcessWorkItemAsync>d__33.MoveNext() in D:\a\1\s\src\VstsSyncMigrator.Core\Execution\MigrationContext\WorkItemMigrationContext.cs:line 406
This problem does not occur with all work items.
I did some testing, and the problem seems to be related to work item history migration: if I disable revision migration in configuration.json, the task migrates successfully.
It seems to be something in "rev": 1.
I can't find which field or value it doesn't like.
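For reference, this is roughly the processor block I was toggling (a minimal sketch; the ReplayRevisions option name is taken from recent versions of the tool and may not match your version exactly):
{
  "$type": "WorkItemMigrationConfig",
  "Enabled": true,
  "ReplayRevisions": false
}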
Debug mode log:
[10:17:16 DBG] Running Field Map: FieldValueMap System.State System.State
[10:17:16 DBG] FieldValueMap: [UPDATE] field value mapped 92552:System.State to 0:System.State
[10:17:16 DBG] Running Field Map: FieldToFieldMap System.Tags System.Tags
[10:17:16 DBG] FieldToFieldMap: [UPDATE] field mapped 92552:System.Tags to 0:System.Tags
[10:17:16 DBG] Running Field Map: FieldToFieldMap System.IterationPath System.IterationPath
[10:17:16 DBG] FieldToFieldMap: [UPDATE] field mapped 92552:System.IterationPath to 0:System.IterationPath
[10:17:16 DBG] Running Field Map: FieldToFieldMap System.Description System.Description
[10:17:16 DBG] FieldToFieldMap: [UPDATE] field mapped 92552:System.Description to 0:System.Description
[10:17:16 DBG] Running Field Map: FieldToFieldMap Custom.Requestor Custom.Requestorname
[10:17:16 DBG] Running Field Map: FieldToFieldMap Custom.Projectcode Custom.ProjectCode
[10:17:16 DBG] Running Field Map: FieldToFieldMap Custom.Team Custom.Team
[10:17:16 DBG] Running Field Map: FieldToFieldMap Custom.OperationsArea Custom.OperationsArea
[10:17:16 DBG] Running Field Map: FieldToFieldMap Custom.Developmenttime Custom.Estimation
[10:17:16 DBG] Running Field Map: FieldToFieldMap Custom.Tech Custom.TechnologyUsed
[10:17:16 DBG] Running Field Map: FieldToFieldMap Custom.DevelopmentStartDate Microsoft.VSTS.Scheduling.StartDate
[10:17:16 DBG] Running Field Map: FieldToFieldMap Custom.DevelopmentEnddate Microsoft.VSTS.Scheduling.TargetDate
[10:17:16 DBG] Running Field Map: FieldToFieldMap Custom.HoursSaved Custom.HoursSaved
[10:17:16 DBG] Running Field Map: FieldToFieldMap Custom.QualityImprovement Custom.ImpactDescription
[10:17:16 DBG] Running Field Map: FieldToFieldMap Custom.Customer Custom.ProjectCode
[10:17:16 DBG] Running Field Map: FieldValueMap Microsoft.VSTS.Common.Priority Custom.Prioritization
[10:17:16 DBG] FieldValueMap: [UPDATE] field value mapped 92552:Microsoft.VSTS.Common.Priority to 0:Custom.Prioritization
[10:17:16 DBG] TfsExtensions::SaveToAzureDevOps
[10:17:16 DBG] TfsExtensions::SaveToAzureDevOps: ChangedBy: user1, AuthorisedBy: user2
[10:17:16 WRN] Work Item is not ready to save as it has some invalid fields. This may not result in an error. Enable LogLevel as 'Debug' in the config to see more.
[10:17:16 DBG] --------------------------------------------------------------------------------------------------------------------
[10:17:16 DBG] --------------------------------------------------------------------------------------------------------------------
[10:17:16 DBG] TfsExtensions::ToJson
[10:17:16 DBG] Invalid Field Object:
{
  "WorkItemId": 0,
  "CurrentRevisionWorkItemRev": 0,
  "CurrentRevisionWorkItemTypeName": "User Story",
  "Name": "Iteration Path",
  "ReferenceName": "System.IterationPath",
  "Value": "C-SAM",
  "OriginalValue": "",
  "ValueWithServerDefault": "C-SAM",
  "Status": 16,
  "IsRequired": true,
  "IsEditable": true,
  "IsDirty": true,
  "IsComputed": true,
  "IsChangedByUser": true,
  "IsChangedInRevision": true,
  "HasPatternMatch": false,
  "IsLimitedToAllowedValues": false,
  "HasAllowedValuesList": false,
  "AllowedValues": [],
  "IdentityFieldAllowedValues": [],
  "ProhibitedValues": []
}
[10:17:16 ERR] Microsoft.TeamFoundation.WorkItemTracking.Client.ValidationException: TF237124: Work Item is not ready to save
at Microsoft.TeamFoundation.WorkItemTracking.Client.WorkItem.Save(SaveFlags saveFlags)
at MigrationTools.TfsExtensions.SaveToAzureDevOps(WorkItemData context) in D:\a\1\s\src\MigrationTools.Clients.AzureDevops.ObjectModel\TfsExtensions.cs:line 76
at VstsSyncMigrator.Engine.WorkItemMigrationContext.ReplayRevisions(List`1 revisionsToMigrate, WorkItemData sourceWorkItem, WorkItemData targetWorkItem) in D:\a\1\s\src\VstsSyncMigrator.Core\Execution\MigrationContext\WorkItemMigrationContext.cs:line 563

Work items are not migrated
You may see a TF237124: Work Item is not ready to save error when you attempt to do a migration.
A number of processors have a setting "PrefixProjectToNodes": false. If set to true, this inserts the name of the source Team Project into the created structure, e.g. Area Path, Iteration Path, or Work Item queries. It is also used by the migration processor.
This setting must be consistent across all processors in a configuration file. If it is not, it can often cause migrations to fail because the expected paths are not present (see the sketch below).
Source: https://nkdagility.github.io/azure-devops-migration-tools/faq.html
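For example, a consistent setup keeps the flag identical in every processor (a minimal sketch; the processor names are illustrative and should match the ones in your own configuration.json):
"Processors": [
  {
    "$type": "WorkItemMigrationConfig",
    "Enabled": true,
    "PrefixProjectToNodes": false
  },
  {
    "$type": "TestPlansAndSuitesMigrationConfig",
    "Enabled": true,
    "PrefixProjectToNodes": false
  }
]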
A one-to-one mapping of Area Paths between source and target fixed my error. I was trying to migrate multiple areas from the source without providing a mapping in the target.

Related

How to know in which task an Ansible play failed when launched using the AWX API?

I intend to launch Ansible jobs on AWX using the AWX API and get a callback from the Ansible playbook to be informed about the result of the play.
To do so I'm using the /api/v2/job_templates/<job-template-id>/launch/ endpoint with some extra_vars in the body to pass parameters to my play (a sketch of the call is shown after the body below).
{
  "extra_vars": {
    "target": "w.x.y.z", (put here a real IP)
    "directory_name_1": "dir1",
    "directory_name_2": "dir2",
    "file_name": "file1" (or "subdir/file1" to make it fail)
  }
}
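A minimal sketch of how I invoke the endpoint (hostname and token are placeholders, and extra_vars.json is the body above saved to a file without the parenthetical notes):
curl -s -X POST "https://your.awx.site/api/v2/job_templates/<job-template-id>/launch/" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d @extra_vars.json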
I've also configured a webhook notification in the job-template with the default customization: {{ job_metadata }}
Here is the play I'm using, which is super simple: it creates two directories and one file in the first directory.
- hosts: "{{ target }}"
  name: fbplay
  tasks:
    - name: Create dummy directory 1
      file:
        path: "{{directory_name_1}}"
        state: directory
    - name: Create dummy directory 2
      file:
        path: "{{directory_name_2}}"
        state: directory
    - name: Create dummy file in directory
      file:
        path: "{{directory_name_1}}/{{file_name}}"
        state: touch
        mode: u=rw,g=r,o=r
All of this works great, and in case of error these 4 tasks will be executed on the target machine:
TASK [Gathering Facts] *********************************************************
TASK [Create dummy directory 1] ************************************************
TASK [Create dummy directory 2] ************************************************
TASK [Create dummy file in directory] ******************************************
...but here is my question with regard to error handling: how can I indicate in the callback which task failed in case of error?
In fact, I can tell whether the play failed or not from the callback: in case of success I get:
"hosts": {
"w.x.y.z": {
"failed": false,
"changed": 1,
"dark": 0,
"failures": 0,
"ok": 4,
"processed": 1,
"skipped": 0,
"rescued": 0,
"ignored": 0
}
}
and in case of failure:
"hosts": {
"w.x.y.z": {
"failed": true,
"changed": 0,
"dark": 0,
"failures": 1,
"ok": 3,
"processed": 1,
"skipped": 0,
"rescued": 0,
"ignored": 0
}
}
But I cannot get the exact task that failed (in this case the last one, triggered for example by passing a filename containing a sub-directory that does not exist).
I'm a newbie with AWX & Ansible and I'm fighting with what I thought would be a relatively simple point... so any hints or ideas are welcome.
Thanks in advance.
In case it helps someone, I can confirm that ARA does what I was looking for above, which is to display via its web client (or provide through an API) exactly which task failed in each playbook you ran.
ARA does an excellent job of analyzing AWX runs, but it has to be set up on the AWX server (the host running the AWX instance), if I understand correctly.
A workaround would be to use the awx command-line interface (see https://docs.ansible.com/ansible-tower/latest/html/towercli/index.html and *):
watch --color 'awx --conf.host https://awx.site --conf.username my.user --conf.password "my_super_secret" -k -f human job_events list --job 3232445 --event "runner_on_failed" --filter "stdout"'
One could also scroll through the API filtering for "?event=runner_on_failed", like:
https://your.awx.site/api/v2/jobs/3232445/job_events/?event=runner_on_failed
Or other "filters" like:
awx --conf.host https://awx.site --conf.username my.user --conf.password "my_supersecret" -k -f human job_events list --job 3232445 --event "runner_on_unreachable" --filter "*" >/tmp/tmp
https://docs.ansible.com/ansible-tower/latest/html/towercli/output.html#human-readable-tabular-formatting
https://docs.ansible.com/ansible-tower/latest/html/towercli/reference.html#awx-job-events-list
* (not to be confused with awx-cli/tower-cli - I think they are different)
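To pull the failed task names out of that endpoint programmatically, a minimal sketch (assuming token authentication and that each job event exposes a task field, which may vary by AWX version):
curl -s -H "Authorization: Bearer <token>" \
  "https://your.awx.site/api/v2/jobs/3232445/job_events/?event=runner_on_failed" \
  | python -c 'import json, sys; print([e.get("task") for e in json.load(sys.stdin)["results"]])'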

Presto configuration through EMR launch config

I am trying to deploy Presto on EMR through our EMR launch config JSON. I decided on the config properties as advised in this GitHub issue of Presto. I have added the following Presto properties to the launch config:
{
  "Classification": "presto-connector-hive",
  "Properties": {
    "hive.metastore.glue.datacatalog.enabled": "true",
    "hive.table-statistics-enabled": "true"
  },
  "Configurations": []
},
{
  "Classification": "presto-config",
  "Properties": {
    "query.max-memory": "150G",
    "query.max-memory-per-node": "20G",
    "query.max-total-memory-per-node": "30G",
    "memory.heap-headroom-per-node": "10G",
    "query.initial-hash-partitions": "15"
  },
  "Configurations": []
}
The EMR cluster has been created, but Presto is failing due to the following errors:
1) Explicit bindings are required and com.facebook.presto.memory.LowMemoryKiller is not explicitly bound.
while locating com.facebook.presto.memory.LowMemoryKiller
for parameter 7 at com.facebook.presto.memory.ClusterMemoryManager.<init>(ClusterMemoryManager.java:123)
at com.facebook.presto.server.CoordinatorModule.setup(CoordinatorModule.java:189) (via modules: com.facebook.presto.server.ServerMainModule -> com.facebook.presto.server.CoordinatorModule)
2) Error: Could not coerce value '150G' to io.airlift.units.DataSize (property 'query.max-memory') in order to call [public com.facebook.presto.memory.MemoryManagerConfig com.facebook.presto.memory.MemoryManagerConfig.setMaxQueryMemory(io.airlift.units.DataSize)]
3) Error: Could not coerce value '20G' to io.airlift.units.DataSize (property 'query.max-memory-per-node') in order to call [public com.facebook.presto.memory.NodeMemoryConfig com.facebook.presto.memory.NodeMemoryConfig.setMaxQueryMemoryPerNode(io.airlift.units.DataSize)]
4) Configuration property 'memory.heap-headroom-per-node' was not used
at io.airlift.bootstrap.Bootstrap.lambda$initialize$2(Bootstrap.java:234)
5) Configuration property 'query.max-memory' was not used
at io.airlift.bootstrap.Bootstrap.lambda$initialize$2(Bootstrap.java:234)
6) Configuration property 'query.max-memory-per-node' was not used
at io.airlift.bootstrap.Bootstrap.lambda$initialize$2(Bootstrap.java:234)
7) Configuration property 'query.max-total-memory-per-node' was not used
at io.airlift.bootstrap.Bootstrap.lambda$initialize$2(Bootstrap.java:234)
7 errors
at com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:466)
at com.google.inject.internal.InternalInjectorCreator.initializeStatically(InternalInjectorCreator.java:155)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:107)
at com.google.inject.Guice.createInjector(Guice.java:96)
at io.airlift.bootstrap.Bootstrap.initialize(Bootstrap.java:241)
at com.facebook.presto.server.PrestoServer.run(PrestoServer.java:114)
at com.facebook.presto.server.PrestoServer.main(PrestoServer.java:66)
My config.properties file
coordinator=true
node-scheduler.include-coordinator=false
discovery.uri=X.X.X.X:YYYY
http-server.threads.max=500
discovery-server.enabled=true
sink.max-buffer-size=1GB
query.max-memory=150G
query.max-memory-per-node=20G
query.max-history=40
query.min-expire-age=30m
http-server.http.port=8889
http-server.log.path=/var/log/presto/http-request.log
http-server.log.max-size=67108864B
http-server.log.max-history=5
log.max-size=268435456B
log.max-history=5
query.initial-hash-partitions = 15
memory.heap-headroom-per-node = 10G
query.max-total-memory-per-node = 30G
Setup fails because:
You need to use "GB" (not "G") as the unit when setting data size config properties.
Your version (0.194) doesn't support some of the properties you're setting (memory.heap-headroom-per-node and query.max-total-memory-per-node).
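For example, a corrected presto-config classification along these lines could look like the sketch below (values are illustrative, and the unsupported properties are simply dropped for 0.194):
{
  "Classification": "presto-config",
  "Properties": {
    "query.max-memory": "150GB",
    "query.max-memory-per-node": "20GB",
    "query.initial-hash-partitions": "15"
  },
  "Configurations": []
}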

Apache Beam on Cloud Dataflow - Failed to query cAdvisor

I have a Cloud Dataflow job that reads from Pub/Sub and pushes data out to BigQuery. Recently the job has been reporting the error below and not writing any data to BigQuery.
{
  insertId: "3878608796276796502:822931:0:1075"
  jsonPayload: {
    line: "work_service_client.cc:490"
    message: "gcpnoelevationcall-01211413-b90e-harness-n1wd Failed to query CAdvisor at URL=<IPAddress>:<PORT>/api/v2.0/stats?count=1, error: INTERNAL: Couldn't connect to server"
    thread: "231"
  }
  labels: {
    compute.googleapis.com/resource_id: "3878608796276796502"
    compute.googleapis.com/resource_name: "gcpnoelevationcall-01211413-b90e-harness-n1wd"
    compute.googleapis.com/resource_type: "instance"
    dataflow.googleapis.com/job_id: "2018-01-21_14_13_45"
    dataflow.googleapis.com/job_name: "gcpnoelevationcall"
    dataflow.googleapis.com/region: "global"
  }
  logName: "projects/poc/logs/dataflow.googleapis.com%2Fshuffler"
  receiveTimestamp: "2018-01-21T22:41:40.053806623Z"
  resource: {
    labels: {
      job_id: "2018-01-21_14_13_45"
      job_name: "gcpnoelevationcall"
      project_id: "poc"
      region: "global"
    }
    type: "dataflow_step"
  }
  severity: "ERROR"
  timestamp: "2018-01-21T22:41:39.524005Z"
}
Any ideas on how I could fix this? Has anyone faced a similar issue before?
If this happened just once, it could be attributed to a transient issue: the process running on the worker node can't reach cAdvisor. Either the cAdvisor container is not running, or there is a temporary problem on the worker that prevents it from contacting cAdvisor, and the job gets stuck.

Meld error with DataStax Enterprise

Provisioning a DSE cluster with Lifecycle Manager fails consistently. The master node (also the one OpsCenter is running on) installed correctly. Each of the other nodes fails the install (and also the config) task. I have double-checked the SSH credentials and ports. Any ideas on how to investigate further and fix the issue would be great.
Please excuse the length - I'm trying to provide all of the relevant info.
Ubuntu 14.04.4,
JRE: 1.8.0.91,
DSE 5.0.0
job events:
...
"results": [
{
"event-subtype": "start",
"event-type": "milestone",
"message": "job started...",
...
},
{
"event-subtype": "invocation",
"event-type": "shell-command",
"message": "Invoked command: if [ -x $(which yum) ] && [ -f /etc/redhat-release -o -f /etc/SuSE-release ]; then echo -n yum; elif [ -x $(which apt-get) ]; then echo -n apt; fi"
...
},
{
"event-subtype": "uploaded-facts",
"event-type": "milestone",
"message": "Uploaded facts to OpsCenter server",
...
},
{
"event-subtype": "meld-error",
"event-type": "error",
"message": "Unexpected error executing meld",
...
},
{
"event-subtype": "MeldError",
"event-type": "error",
"message": "Meld failed on: name=\"NODE-2\" ssh-management-address=\"<IP>\" node-id=\"<node-id>\" job-id=\"<job-id>\" stdout=\"\r\n\" stderr=\"\"",
...
}
]
opscenterd.log
/var/log/opscenter/opscenterd.log-2016-07-02 16:34:16,848 [opscenterd] INFO: Install job started for node name="NODE-2" ssh-management-address="<IP>" node-id="<node-id>" (async-thread-macro-53)
/var/log/opscenter/opscenterd.log-2016-07-02 16:34:16,850 [opscenterd] INFO: using ssh-private-key (async-thread-macro-53)
/var/log/opscenter/opscenterd.log-2016-07-02 16:34:18,478 [opscenterd] INFO: Received milestone from node name="NODE-2" ssh-management-address="<IP>" node-id="<node-id>" message="Uploaded facts to OpsCenter server" job-id="a630c081-6ac1-4b00-ac08-18fef320e0d5" (MainThread)
/var/log/opscenter/opscenterd.log:2016-07-02 16:34:18,675 [opscenterd] ERROR: Received error from node event-subtype="meld-error" job-id="a630c081-6ac1-4b00-ac08-18fef320e0d5" name="NODE-2" traceback="Traceback (most recent call last):
/var/log/opscenter/opscenterd.log: File \"meld.py\", line 3313, in run
/var/log/opscenter/opscenterd.log- rc = engine.go()
/var/log/opscenter/opscenterd.log: File \"meld.py\", line 2991, in go
/var/log/opscenter/opscenterd.log- self.file_manager.get_config_files()
/var/log/opscenter/opscenterd.log: File \"meld.py\", line 1280, in get_config_files
/var/log/opscenter/opscenterd.log- {\"accept\": \"application/json\"})
/var/log/opscenter/opscenterd.log: File \"meld.py\", line 598, in get
/var/log/opscenter/opscenterd.log- return json.loads(response.read())
/var/log/opscenter/opscenterd.log- File \"/usr/lib/python2.7/socket.py\", line 351, in read
/var/log/opscenter/opscenterd.log- data = self._sock.recv(rbufsize)
/var/log/opscenter/opscenterd.log- File \"/usr/lib/python2.7/httplib.py\", line 549, in read
/var/log/opscenter/opscenterd.log- return self._read_chunked(amt)
/var/log/opscenter/opscenterd.log- File \"/usr/lib/python2.7/httplib.py\", line 609, in _read_chunked
/var/log/opscenter/opscenterd.log- value.append(self._safe_read(amt))
/var/log/opscenter/opscenterd.log- File \"/usr/lib/python2.7/httplib.py\", line 666, in _safe_read
/var/log/opscenter/opscenterd.log- raise IncompleteRead(''.join(s), amt)
/var/log/opscenter/opscenterd.log:IncompleteRead: IncompleteRead(4153 bytes read, 4039 more expected)" ssh-management-address="<IP>" node-id="<node-id>" event-type="error" message="Unexpected error executing meld" (MainThread)
/var/log/opscenter/opscenterd.log-2016-07-02 16:34:18,892 [opscenterd] ERROR: Install job a630c081-6ac1-4b00-ac08-18fef320e0d5 failed! (async-thread-macro-54)
/var/log/opscenter/opscenterd.log:2016-07-02 16:34:19,105 [opscenterd] ERROR: Meld failed on: name="NODE-2" ssh-management-address="<IP>" node-id="<node-id>" job-id="a630c081-6ac1-4b00-ac08-18fef320e0d5" stdout="
/var/log/opscenter/opscenterd.log-" stderr="" (async-thread-macro-53)
Thank you
EDIT: I captured the HTTP traffic between NODE-2 and the master. The error occurs while transferring config files: one of them is not transferred completely for some reason. The JSON looks reasonable until some gibberish appears:
{"filename": "dse.yaml", "contents": {"internode_messaging_options": {"client_worker_threads": 16, "port": 8609, "server_worker_threads": 16, "server_acceptor_thread
Yvatv+~UK{.kMI4^QOrqQTDX_3"DPm,v!"H&M$!1M7
LRYCs{l>-df;cj
W6C9dq
The config files are valid and do work on the master node. Only the replication fails.
OpsCenter LCM developer here. Your issue is caused by OPSC-8851 in the LCM known issues list: http://docs.datastax.com/en/opscenter/6.0/opsc/release_notes/opscReleaseNotes600.html
This is only triggered under certain network conditions and was discovered too close to release to get fixed in 6.0.0. It's a high priority, though, and will be fixed in a subsequent release soon. Unfortunately, I don't think there's anything you can do to work around this in the field. If you're a DataStax customer, you could contact support and potentially get a patch now to work around the issue... otherwise the only thing I can suggest is to watch the upcoming release notes.
Edit: I should also note that in our tests the issue is intermittent. LCM is designed so that you can rerun failed jobs safely (i.e. it's idempotent), so in all but the most extreme cases you can also work around this just by rerunning your job.
You can specify the private IP for Listen Address and 0.0.0.0 for broadcast address and LCM should be able to provision appropriately.

oVirt: create ISO/NFS storage domain error

I ran into a problem while creating a "New Domain" ISO/NFS storage: it prints "Error while executing action New NFS Storage Domain: Storage domain remote path not mounted" and the error code is 477.
I followed http://wiki.ovirt.org/wiki/Troubleshooting_NFS_Storage_Issues and found that the vdsm user can't use mount:
"mount: only root can do that"
the version I use:
oVirt Engine Version: 3.1.0-2.fc17
oVirt Node Hypervisor 2.5.4-0.1.fc17
the error log:
2012-11-08 09:15:00,004 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-77) Checking autorecoverable storage domains done
2012-11-08 09:17:28,920 WARN [org.ovirt.engine.core.bll.GetConfigurationValueQuery] (ajp--0.0.0.0-8009-2) calling GetConfigurationValueQuery (StorageDomainNameSizeLimit) with null version, using default general for version
2012-11-08 09:17:29,333 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand] (ajp--0.0.0.0-8009-3) [7720b88f] START, ValidateStorageServerConnectionVDSCommand(vdsId = 12bcf124-29a4-11e2-bcba-00505680002a, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: 6556c55d-42a4-4dcc-832c-4d8987ebe6bd, connection: 200.200.101.219:/usr/lwq/iso };]), log id: 52777a80
2012-11-08 09:17:29,388 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand] (ajp--0.0.0.0-8009-3) [7720b88f] FINISH, ValidateStorageServerConnectionVDSCommand, return: {6556c55d-42a4-4dcc-832c-4d8987ebe6bd=0}, log id: 52777a80
2012-11-08 09:17:29,392 INFO [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (ajp--0.0.0.0-8009-3) [7720b88f] Running command: AddStorageServerConnectionCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: System
2012-11-08 09:17:29,404 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (ajp--0.0.0.0-8009-3) [7720b88f] START, ConnectStorageServerVDSCommand(vdsId = 12bcf124-29a4-11e2-bcba-00505680002a, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: 6556c55d-42a4-4dcc-832c-4d8987ebe6bd, connection: 200.200.101.219:/usr/lwq/iso };]), log id: 36cb94f
2012-11-08 09:17:29,656 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (ajp--0.0.0.0-8009-3) [7720b88f] FINISH, ConnectStorageServerVDSCommand, return: {6556c55d-42a4-4dcc-832c-4d8987ebe6bd=477}, log id: 36cb94f
2012-11-08 09:17:29,658 ERROR [org.ovirt.engine.core.bll.storage.NFSStorageHelper] (ajp--0.0.0.0-8009-3) [7720b88f] The connection with details 200.200.101.219:/usr/lwq/iso failed because of error code 477 and error message is: 477
2012-11-08 09:17:29,717 INFO [org.ovirt.engine.core.bll.storage.AddNFSStorageDomainCommand] (ajp--0.0.0.0-8009-11) [1661aa36] Running command: AddNFSStorageDomainCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: System
2012-11-08 09:17:29,740 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--0.0.0.0-8009-11) [1661aa36] START, CreateStorageDomainVDSCommand(vdsId = 12bcf124-29a4-11e2-bcba-00505680002a, storageDomain=org.ovirt.engine.core.common.businessentities.storage_domain_static#4a900545, args=200.200.101.219:/usr/lwq/iso), log id: 50b803a0
2012-11-08 09:17:35,233 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--0.0.0.0-8009-11) [1661aa36] Failed in CreateStorageDomainVDS method
2012-11-08 09:17:35,234 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--0.0.0.0-8009-11) [1661aa36] Error code StorageDomainFSNotMounted and error message VDSGenericException: VDSErrorException: Failed to CreateStorageDomainVDS, error = Storage domain remote path not mounted: ('/rhev/data-center/mnt/200.200.101.219:_usr_lwq_iso',)
2012-11-08 09:17:35,260 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--0.0.0.0-8009-11) [1661aa36] Command org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand return value
As far as I know, VDSM uses sudo to run privileged commands, but that mount error is still strange. Can you share the vdsm log?
Also, you may get more attention on the engine-users or vdsm mailing lists.
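In the meantime, the usual checklist from that troubleshooting guide is to make sure the export is owned by vdsm:kvm (UID/GID 36) and exported with options the vdsm user can work with. A minimal sketch to run on the NFS server (the path is taken from your log; the export options are the commonly recommended ones and may need adjusting for your setup):
chown -R 36:36 /usr/lwq/iso
chmod 0755 /usr/lwq/iso
# in /etc/exports (then re-export):
#   /usr/lwq/iso  *(rw,sync,no_subtree_check,anonuid=36,anongid=36)
exportfs -ra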