Error: error creating Storage Gateway SMB File Share: InvalidGatewayRequestException - amazon-s3

I tried to run the following Terraform code to create a test SMB share, but got an error.
See the code:
provider "aws" {
region = "us-east-1"
}
resource "aws_storagegateway_smb_file_share" "test_smb_share" {
authentication = "ActiveDirectory"
gateway_arn = "arn:aws:storagegateway:us-east-1:145429107744:gateway/sgw-4xxxxxxx"
default_storage_class = "S3_STANDARD"
location_arn = "arn:aws:s3:::xxxxxxxx"
role_arn = "arn:aws:iam::145429107744:role/service-role/StorageGatewayBucketAccessRolee896cdf0-cb46-4471-a0de-119f69f87e"
valid_user_list = ["#Domain Admins","#Admins"]
kms_encrypted = "true"
kms_key_arn = "arn:aws:kms:us-east-1:145429107744:key/8c4b962b-c00a-4a32-8fbd-76b174efb609"
tags = {
atomdev = "prod"
atomdomain = "xxxxxx"
atomos = "file system"
atompid = "32"
atomrole = "storage"
}
}
aws_storagegateway_smb_file_share.test_smb_share: Creating...
╷
│ Error: error creating Storage Gateway SMB File Share: InvalidGatewayRequestException: OverlappingLocations
│ {
│ RespMetadata: {
│ StatusCode: 400,
│ RequestID: "e8f7466d-23af-4a4c-a457-d39a0f99406d"
│ },
│ Error_: {
│ ErrorCode: "OverlappingLocations"
│ },
│ Message_: "OverlappingLocations"
│ }
│
│ with aws_storagegateway_smb_file_share.test_smb_share,
│ on main.tf line 5, in resource "aws_storagegateway_smb_file_share" "test_smb_share":
│ 5: resource "aws_storagegateway_smb_file_share" "test_smb_share" {
│
Any idea?

If you try to run two shares on the same SGW that have overlapping S3 locations, you'll get this error. For example:
\\my-s3-bucket\folder1\data
\\my-s3-bucket\folder1
^ those would overlap, since one would contain a subset of the other
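For illustration, here is a sketch of two shares on the same gateway pointing at non-overlapping S3 prefixes (the bucket name, prefixes, and referenced gateway/role resources are hypothetical); the location ARN can include a prefix after the bucket name:

# Hypothetical sketch: each share uses a distinct, non-nested prefix of the same bucket,
# so neither location contains the other.
resource "aws_storagegateway_smb_file_share" "share_one" {
  authentication = "ActiveDirectory"
  gateway_arn    = aws_storagegateway_gateway.example.arn
  role_arn       = aws_iam_role.example.arn
  location_arn   = "arn:aws:s3:::my-s3-bucket/folder1"
}

resource "aws_storagegateway_smb_file_share" "share_two" {
  authentication = "ActiveDirectory"
  gateway_arn    = aws_storagegateway_gateway.example.arn
  role_arn       = aws_iam_role.example.arn
  location_arn   = "arn:aws:s3:::my-s3-bucket/folder2"   # not nested under folder1
}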

Vitest visualViewport is not defined

I am writing one of my first Vitest tests, and my code checks the visualViewport when the component is mounted. When I run the test, it fails because visualViewport is not defined. How would I go about having it defined when the test runs?
onMounted(() => {
  if (visualViewport.width > 1024) {
    options.isDesktop = true;
  }
  visualViewport.addEventListener('resize', ({ target }) => {
    if (target.width > 1024) {
      options.isDesktop = true;
    } else {
      options.isDesktop = false;
    }
  });
});
ReferenceError: visualViewport is not defined
│ ❯ src/scripts/search-and-filter/App.vue:22:3
│ 20|
│ 21| onMounted(() => {
│ 22| if (visualViewport.width > 1024) {
│ | ^
│ 23| options.isDesktop = true;
│ 24| }
Let me know if more information is needed and I will add it. Thank you for your help.
jsdom is being used so I don't think that is the problem.
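jsdom does not implement the VisualViewport API, so one option is to stub it before the component mounts. A minimal sketch, assuming a Vitest setup file (the file name, dimensions, and registration via setupFiles in the Vitest config are illustrative):

// vitest.setup.js (hypothetical) - stub the visualViewport global that jsdom lacks
import { vi } from 'vitest';

vi.stubGlobal('visualViewport', {
  width: 1280,   // illustrative desktop-sized viewport
  height: 800,
  addEventListener: vi.fn(),
  removeEventListener: vi.fn(),
});

With the stub in place, the bare visualViewport reference in onMounted resolves to this object, and individual tests can override width or inspect the recorded addEventListener calls to exercise both branches.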

Write a dynamic Terraform block for a load balancer listener rule

I'm new to dynamic blocks and am having some trouble writing rules for the listeners on a load balancer that was created using for_each.
Below are the resources I created:
resource "aws_lb_listener" "app_listener_forward" {
for_each = toset(var.app_listener_ports)
load_balancer_arn = aws_lb.app_alb.arn
port = each.value
protocol = "HTTPS"
ssl_policy = "ELBSecurityPolicy-TLS-1-2-Ext-2018-06"
certificate_arn = var.ssl_cert
default_action {
type = "forward"
forward {
dynamic "target_group" {
for_each = aws_lb_target_group.app_tg
content {
arn = target_group.value["arn"]
}
}
stickiness {
enabled = true
duration = 86400
}
}
}
}
resource "aws_lb_listener_rule" "app_https_listener_rule" {
for_each = toset(var.app_listener_ports)
listener_arn = aws_lb_listener.app_listener_forward[each.value].arn
action {
type = "forward"
forward {
dynamic "target_group" {
for_each = aws_lb_target_group.app_tg
content {
arn = target_group.value["arn"]
}
}
}
}
dynamic "condition" {
for_each = var.images
path_pattern {
content {
values = condition.value["paths"]
}
}
}
}
resource "aws_lb_target_group" "app_tg" {
for_each = var.images
name = each.key
port = each.value.port
protocol = "HTTP"
target_type = "ip"
vpc_id = aws_vpc.app_vpc.id
health_check {
interval = 130
timeout = 120
healthy_threshold = 10
unhealthy_threshold = 10
}
stickiness {
type = "lb_cookie"
cookie_duration = 86400
}
}
Below is how the variables are defined:
variable "images" {
type = map(object({
app_port = number
paths = set(string)
}))
{
"app-one" = {
app_port = 3000
paths = [
"/appOne",
"/appOne/*"
]
}
"app-two" = {
app_port = 4000
paths = [
"/appTwo",
"/appTwo/*"
]
}
}
variable "app_listener_ports" {
type = list(string)
default = [
80, 443, 22, 7999, 8999
]
}
Upon executing, I get an error about the path_pattern block being unexpected:
Error: Unsupported block type
│
│ on alb.tf line 78, in resource "aws_lb_listener_rule" "app_https_listener_rule":
│ 78: path_pattern {
│
│ Blocks of type "path_pattern" are not expected here.
I've tried a few ways to get this dynamic block but am having some difficulty. Any advice would be appreciated.
Thank you!
Try it like this:
dynamic "condition" {
for_each = var.images
content {
path_pattern {
values = condition.value.paths
}
}
}
And change the type of paths from set(string) to list(string).
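For reference, a sketch of the adjusted variable type (only the paths type changes; the default values stay as they were):

variable "images" {
  type = map(object({
    app_port = number
    paths    = list(string)   # changed from set(string)
  }))
}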
This is also completely acceptable:
dynamic "condition" {
for_each = var.images
content {
path_pattern {
values = condition.value["paths"]
}
}
}
However, in my opinion it's better here not to use a dynamic block for the condition, for the sake of readability and maintainability.
condition {
  path_pattern {
    values = [
      "/appOne",
      "/appOne/*" ## can also use variables if you prefer !!
    ]
  }
}
I have already answered your original post about the problem you had after fixing the dynamic block syntax.
Post URL: Error when creating dynamic terraform rule for alb listener rule

azurerm_mssql_virtual_machine - already exists

Trying to do an Azure Terraform deployment and failing horribly - looking for some ideas on what I am missing. Basically I am trying to deploy 2 (maybe more later) VMs with a variable set of disks, join them to the domain, and add SQL Server to them. (Be gentle with me, I come from a VMware/Terraform background; this is my first SQL deployment on Azure!)
My module:
## main.tf:
# ----------- NIC --------------------------------
resource "azurerm_network_interface" "nic" {
name = "${var.vm_name}-nic"
resource_group_name = var.rg.name
location = var.location
ip_configuration {
name = "${var.vm_name}-internal"
subnet_id = var.subnet_id
private_ip_address_allocation = "Static"
private_ip_address = var.private_ip
}
dns_servers = var.dns_servers
}
# ----------- VM --------------------------------
resource "azurerm_windows_virtual_machine" "vm" {
/* count = length(var.instances) */
name = var.vm_name
location = var.location
resource_group_name = var.rg.name
network_interface_ids = [azurerm_network_interface.nic.id]
size = var.size
zone = var.zone
admin_username = var.win_admin_user
admin_password = var.win_admin_pw # data.azurerm_key_vault_secret.vmadminpwd.value
enable_automatic_updates = "false"
patch_mode = "Manual"
provision_vm_agent = "true"
tags = var.vm_tags
source_image_reference {
publisher = "MicrosoftSQLServer"
offer = "sql2019-ws2019"
sku = "enterprise"
version = "latest"
}
os_disk {
name = "${var.vm_name}-osdisk"
caching = "ReadWrite"
storage_account_type = "StandardSSD_LRS"
disk_size_gb = 250
}
}
# ----------- DOMAIN JOIN --------------------------------
// Waits for up to 1 hour for the Domain to become available. Will return an error 1 if unsuccessful preventing the member attempting to join.
resource "azurerm_virtual_machine_extension" "wait-for-domain-to-provision" {
name = "TestConnectionDomain"
publisher = "Microsoft.Compute"
type = "CustomScriptExtension"
type_handler_version = "1.9"
virtual_machine_id = azurerm_windows_virtual_machine.vm.id
settings = <<SETTINGS
{
"commandToExecute": "powershell.exe -Command \"while (!(Test-Connection -ComputerName ${var.active_directory_domain_name} -Count 1 -Quiet) -and ($retryCount++ -le 360)) { Start-Sleep 10 } \""
}
SETTINGS
}
resource "azurerm_virtual_machine_extension" "join-domain" {
name = azurerm_windows_virtual_machine.vm.name
publisher = "Microsoft.Compute"
type = "JsonADDomainExtension"
type_handler_version = "1.3"
virtual_machine_id = azurerm_windows_virtual_machine.vm.id
settings = <<SETTINGS
{
"Name": "${var.active_directory_domain_name}",
"OUPath": "",
"User": "${var.active_directory_username}#${var.active_directory_domain_name}",
"Restart": "true",
"Options": "3"
}
SETTINGS
protected_settings = <<SETTINGS
{
"Password": "${var.active_directory_password}"
}
SETTINGS
depends_on = [azurerm_virtual_machine_extension.wait-for-domain-to-provision]
}
# ----------- DISKS --------------------------------
resource "azurerm_managed_disk" "data" {
for_each = var.disks
name = "${var.vm_name}-${each.value.name}"
location = var.location
resource_group_name = var.rg.name
storage_account_type = each.value.sa
create_option = each.value.create
disk_size_gb = each.value.size
zone = var.zone
}
resource "azurerm_virtual_machine_data_disk_attachment" "disk-attachment" {
for_each = var.disks
managed_disk_id = azurerm_managed_disk.data[each.key].id
virtual_machine_id = azurerm_windows_virtual_machine.vm.id
lun = each.value.lun
caching = "ReadWrite"
depends_on = [azurerm_windows_virtual_machine.vm]
}
# ----------- SQL --------------------------------
# configure the SQL side of the deployment
resource "azurerm_mssql_virtual_machine" "sqlvm" {
/* count = length(var.instances) */
virtual_machine_id = azurerm_windows_virtual_machine.vm.id
sql_license_type = "PAYG"
r_services_enabled = true
sql_connectivity_port = 1433
sql_connectivity_type = "PRIVATE"
/* sql_connectivity_update_username = var.sqladmin
sql_connectivity_update_password = data.azurerm_key_vault_secret.sqladminpwd.value */
#The storage_configuration block supports the following:
storage_configuration {
disk_type = "NEW" # (Required) The type of disk configuration to apply to the SQL Server. Valid values include NEW, EXTEND, or ADD.
storage_workload_type = "OLTP" # (Required) The type of storage workload. Valid values include GENERAL, OLTP, or DW.
data_settings {
default_file_path = "F:\\Data"
luns = [1]
}
log_settings {
default_file_path = "G:\\Log"
luns = [2]
}
temp_db_settings {
default_file_path = "D:\\TempDb"
luns = [0]
}
}
}
## provider.tf
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = ">=3.0.1"
#configuration_aliases = [azurerm.corp]
}
}
}
## variables.tf
# ----------- COMMON --------------------------------
variable "vm_name" {
type = string
}
variable "rg" {
/* type = string */
description = "STACK - resource group"
}
variable "location" {
type = string
description = "STACK - location"
}
# ----------- NIC --------------------------------
variable "subnet_id" {
type = string
description = "STACK - subnet"
}
variable "private_ip" {
}
variable "dns_servers" {
}
# ----------- VM --------------------------------
variable "size" {
description = "VM - size"
type = string
}
variable "win_admin_user" {
sensitive = true
type = string
}
variable "win_admin_pw" {
sensitive = true
type = string
}
variable "os_storage_type" {
type = string
}
variable "vm_tags" {
type = map(any)
}
variable "zone" {
#type = list
description = "VM AZ"
}
# ----------- DOMAIN JOIN --------------------------------
variable "active_directory_domain_name" {
type = string
}
variable "active_directory_username" {
sensitive = true
}
variable "active_directory_password" {
sensitive = true
}
# ----------- SQL --------------------------------
variable "sql_maint_day" {
type = string
description = "SQL - maintenance day"
}
variable "sql_maint_length_min" {
type = number
description = "SQL - maintenance duration (min)"
}
variable "sql_maint_start_hour" {
type = number
description = "SQL- maintenance start (hour of the day)"
}
# ----------- DISKS --------------------------------
/* variable "disk_storage_account" {
type = string
default = "Standard_LRS"
description = "DATA DISKS - storage account type"
}
variable "disk_create_method" {
type = string
default = "Empty"
description = "DATA DISKS - creation method"
}
variable "disk_size0" {
type = number
}
variable "disk_size1" {
type = number
}
variable "disk_size2" {
type = number
}
variable "lun0" {
type = number
default = 0
}
variable "lun1" {
type = number
default = 1
}
variable "lun2" {
default = 2
type = number
} */
/* variable "disks" {
description = "List of disks to create"
type = map(any)
default = {
disk0 = {
name = "data0"
size = 200
create = "Empty"
sa = "Standard_LRS"
lun = 0
}
disk1 = {
name = "data1"
size = 500
create = "Empty"
sa = "Standard_LRS"
lun = 1
}
}
} */
variable "disks" {
type = map(object({
name = string
size = number
create = string
sa = string
lun = number
}))
}
The actual deployment:
## main.tf
/*
PS /home/fabrice> Get-AzVMSize -Location northeurope | where-object {$_.Name -like "*ds13*"}
*/
module "uat_set" {
source = "../modules/vm"
providers = {
azurerm = azurerm.cbank-test
}
for_each = var.uat_set
active_directory_domain_name = local.uat_ad_domain
active_directory_password = var.domain_admin_password
active_directory_username = var.domain_admin_username
disks = var.disk_allocation
dns_servers = local.dns_servers
location = local.uat_location
os_storage_type = local.uat_storage_type
private_ip = each.value.private_ip
rg = data.azurerm_resource_group.main
size = each.value.vm_size
sql_maint_day = local.uat_sql_maintenance_day
sql_maint_length_min = local.uat_sql_maintenance_min
sql_maint_start_hour = local.uat_sql_maintenance_start_hour
subnet_id = data.azurerm_subnet.main.id
vm_name = each.key
vm_tags = var.default_tags
win_admin_pw = var.admin_password
win_admin_user = var.admin_username
zone = each.value.zone[0]
}
variable "uat_set" {
description = "List of VM-s to create"
type = map(any)
default = {
UAT-SQLDB-NE-01 = {
private_ip = "192.168.32.8"
vm_size = "Standard_DS13-4_v2"
zone = ["1"]
}
UAT-SQLDB-NE-02 = {
private_ip = "192.168.32.10"
vm_size = "Standard_DS13-4_v2"
zone = ["2"]
}
}
}
variable "disk_allocation" {
type = map(object({
name = string
size = number
create = string
sa = string
lun = number
}))
default = {
"temp" = {
name = "temp"
size = 200
create = "Empty"
sa = "Standard_LRS"
lun = 0
},
"disk1" = {
name = "data1"
size = 500
create = "Empty"
sa = "Standard_LRS"
lun = 1
},
"disk2" = {
name = "data2"
size = 500
create = "Empty"
sa = "Standard_LRS"
lun = 2
}
}
}
locals {
dns_servers = ["192.168.34.5", "192.168.34.10"]
uat_storage_type = "Standard_LRS"
uat_sql_maintenance_day = "Saturday"
uat_sql_maintenance_min = 180
uat_sql_maintenance_start_hour = 23
uat_ad_domain = "civbdev.local"
uat_location = "North Europe"
}
## variables.tf
# new build variables
variable "Environment" {
default = "DEV"
description = "this is the environment variable used to intperpolate with others vars"
}
variable "default_tags" {
type = map(any)
default = {
Environment = "DEV"
Product = "dev-XXXtemplateXXX"
Terraformed = "https://AllicaBankLtd#dev.azure.com/XXXtemplateXXX/Terraform/DEV"
}
}
variable "admin_username" {
sensitive = true
}
variable "admin_password" {
sensitive = true
}
variable "domain_admin_username" {
sensitive = true
}
variable "domain_admin_password" {
sensitive = true
}
Resources create OK, except the SQL part:
│ Error: A resource with the ID "/subscriptions/<..redacted...>/providers/Microsoft.SqlVirtualMachine/sqlVirtualMachines/UAT-SQLDB-NE-02" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_mssql_virtual_machine" for more information.
│
│ with module.uat_set["UAT-SQLDB-NE-02"].azurerm_mssql_virtual_machine.sqlvm,
│ on ../modules/vm/main.tf line 115, in resource "azurerm_mssql_virtual_machine" "sqlvm":
│ 115: resource "azurerm_mssql_virtual_machine" "sqlvm" {
│
╵
╷
│ Error: A resource with the ID "/subscriptions/<..redacted...>/providers/Microsoft.SqlVirtualMachine/sqlVirtualMachines/UAT-SQLDB-NE-01" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_mssql_virtual_machine" for more information.
│
│ with module.uat_set["UAT-SQLDB-NE-01"].azurerm_mssql_virtual_machine.sqlvm,
│ on ../modules/vm/main.tf line 115, in resource "azurerm_mssql_virtual_machine" "sqlvm":
│ 115: resource "azurerm_mssql_virtual_machine" "sqlvm" {
│
╵
Any notions, please, on what I might be missing?
Ta,
Fabrice
UPDATE:
Thanks to those who replied. Just to confirm: it is not an already existing resource; I get this error straight at the time these VMs are created.
For example, these are my VMs after the Terraform run (none of them has the SQL extension).
Plan even states it will create these:
Terraform will perform the following actions:
# module.uat_set["UAT-SQLDB-NE-01"].azurerm_mssql_virtual_machine.sqlvm will be created
+ resource "azurerm_mssql_virtual_machine" "sqlvm" {
+ id = (known after apply)
+ r_services_enabled = true
+ sql_connectivity_port = 1433
+ sql_connectivity_type = "PRIVATE"
+ sql_license_type = "PAYG"
+ virtual_machine_id = "/subscriptions/..../providers/Microsoft.Compute/virtualMachines/UAT-SQLDB-NE-01"
+ storage_configuration {
+ disk_type = "NEW"
+ storage_workload_type = "OLTP"
+ data_settings {
+ default_file_path = "F:\\Data"
+ luns = [
+ 1,
]
}
+ log_settings {
+ default_file_path = "G:\\Log"
+ luns = [
+ 2,
]
}
+ temp_db_settings {
+ default_file_path = "Z:\\TempDb"
+ luns = [
+ 0,
]
}
}
}
# module.uat_set["UAT-SQLDB-NE-02"].azurerm_mssql_virtual_machine.sqlvm will be created
+ resource "azurerm_mssql_virtual_machine" "sqlvm" {
+ id = (known after apply)
+ r_services_enabled = true
+ sql_connectivity_port = 1433
+ sql_connectivity_type = "PRIVATE"
+ sql_license_type = "PAYG"
+ virtual_machine_id = "/subscriptions/..../providers/Microsoft.Compute/virtualMachines/UAT-SQLDB-NE-02"
+ storage_configuration {
+ disk_type = "NEW"
+ storage_workload_type = "OLTP"
+ data_settings {
+ default_file_path = "F:\\Data"
+ luns = [
+ 1,
]
}
+ log_settings {
+ default_file_path = "G:\\Log"
+ luns = [
+ 2,
]
}
+ temp_db_settings {
+ default_file_path = "Z:\\TempDb"
+ luns = [
+ 0,
]
}
}
}
Plan: 2 to add, 0 to change, 0 to destroy.
Presumably, if these resources somehow already existed - which would be odd, as Terraform just created the VMs - then the plan would not state that it is going to create them now, would it?
So the error is quite the source of my confusion: if the VM was just created and the creation of the extension failed, how could it possibly already exist?
In this case you should probably just import the resources into your Terraform state, as the error suggests.
For example:
terraform import module.uat_set[\"UAT-SQLDB-NE-02\"].azurerm_mssql_virtual_machine.sqlvm "/subscriptions/<..redacted...>/providers/Microsoft.SqlVirtualMachine/sqlVirtualMachines/UAT-SQLDB-NE-02"
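If the other instance reports the same error, the equivalent import applies, and afterwards the state and plan can be checked (a sketch; the resource addresses and IDs mirror the error messages above):

terraform import 'module.uat_set["UAT-SQLDB-NE-01"].azurerm_mssql_virtual_machine.sqlvm' "/subscriptions/<..redacted...>/providers/Microsoft.SqlVirtualMachine/sqlVirtualMachines/UAT-SQLDB-NE-01"
# confirm both SQL VM resources are now tracked, then re-plan
terraform state list | grep azurerm_mssql_virtual_machine
terraform plan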

Unable to use terraform.tfvars inside terraform module

Facing some issues using terraform.tfvars inside a module.
My folder structure is:
module/
  main.tf
  variable.tf
  terraform.tfvars
demo.tf
provider.tf
The code of demo.tf is
module "module" {
source = "./module"
}
Inside the module folder I have declared the variables inside variable.tf and put their values inside terraform.tfvars.
When I run terraform plan, it shows:
Error: Missing required argument
on main.tf line 1, in module "module":
1: module "module" {
The argument "<variable_name>" is required, but no definition was found.
Please let me know the solution. Thanks in advance.
(When I put the values as defaults inside variables.tf, it works fine.)
For more details, I am adding all the files below.
main.tf
resource "aws_glue_catalog_database" "glue_database_demo" {
name = var.database_name # var
location_uri = "s3://${var.bucket_location}" # var
}
resource "aws_glue_catalog_table" "aws_glue_catalog_table" {
name = var.table_name # var
database_name = aws_glue_catalog_database.glue_database_demo.name
table_type = "EXTERNAL_TABLE"
parameters = {
EXTERNAL = "TRUE"
"parquet.compression" = "SNAPPY"
}
storage_descriptor {
location = "s3://${var.bucket_location}" # var
input_format = "org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat"
output_format = "org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat"
ser_de_info {
name = "my-stream"
serialization_library = "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe"
}
columns {
name = "filekey"
type = "string"
}
columns {
name = "reead_dt"
type = "string"
}
}
partition_keys {
name = "load_dt"
type = "string"
}
}
variables.tf
variable "database_name" {
}
variable "bucket_location" {
}
variable "table_name" {
}
terraform.tfvars
database_name = "mydatabase"
bucket_location = "kgvjgfkjhglbg"
table_name = "mytable"
This is not how modules work. If you define a variable without a default value, the module expects you to provide a value when you call it, as you have already noted. In order for this to work, you would have to provide values when calling the module:
module "modules" {
source = "./module"
database_name = "mydatabase"
bucket_location = "kgvjgfkjhglbg"
table_name = "mytable"
}
The other option would be to define a variables.tf file in the same directory where you are calling the module from, e.g.,:
# provide input for the module
variable "database_name" {
type = string
description = "Glue DB name."
}
variable "bucket_location" {
type = string
description = "Bucket region."
}
variable "table_name" {
type = string
description = "Glue catalog table name."
}
Then, copy the terraform.tfvars to the same directory where you are calling the module from and in the demo.tf do the following:
module "glue" {
source = "./module"
database_name = var.database_name
bucket_location = var.bucket_location
table_name = var.table_name
}
Note that I have changed the logical name of the module from modules to glue as it is more descriptive, but it's not necessary.
The final layout of the directories should be:
module/
  main.tf
  variables.tf
demo.tf
provider.tf
terraform.tfvars
variables.tf
Within your demo.tf file, in the "modules" module block, you need to provide values for the input variables.
For example:
module "modules" {
source = "./module"
database_name = var.database_name
bucket_location = var.bucket_location
table_name = var.table_name
}

React Native: How to retrieve image file content from fetch?

In my React Native 0.62.2 app, fetch is used to retrieve an image, IMG_1885.jpg, from cloud object storage for testing purposes:
let img = await fetch(bgimage.uri, {
  method: "GET",
  'Content-Type': 'image/jpeg'
});
Here is the HTTP response, which is assigned to the variable img:
'img in sp ', { type: 'default', //<<<==output of img
│ status: 200,
│ ok: true,
│ statusText: undefined,
│ headers:
│ { map:
│ { 'x-oss-storage-class': 'Standard',
│ 'accept-ranges': 'bytes',
│ 'content-md5': 'Au4QpWK3O8+l4qCYxDjzw==',
│ 'content-length': '475266', //<<<==size of file
│ connection: 'keep-alive',
│ 'content-type': 'image/jpeg', //<<<<==jpeg
│ 'x-oss-server-time': '20',
│ 'x-oss-object-type': 'Normal',
│ date: 'Sun, 05 Jul 2020 21:20:16 GMT',
│ 'x-oss-force-download': 'true', //<<<==server force frontend to download
│ 'content-disposition': 'attachment',
│ 'x-oss-request-id': '5F024410980C637340B8FBD',
│ 'last-modified': 'Sun, 05 Jul 2020 17:26:04 GMT',
│ etag: '"02EE10A562B73BC23E978A826310E3CF"',
│ 'x-oss-server-side-encryption': 'AES256',
│ 'x-oss-hash-crc64ecma': '65781719895705534',
│ server: 'AliyunOSS' } },
│ url: 'https://oss-hz-1.oss-cn-hangzhou.aliyuncs.com/IMG_1885.jpg?Expires=1593984273&OSSAccessKeyId=myID&Signature=mySigna',
│ bodyUsed: false,
│ _bodyInit:
│ { _data:
│ { size: 475266,
│ offset: 0,
│ blobId: '7f5f3eb5-8e02-470a-a06c-81bcd1161df8',
│ __collector: {} } },
│ _bodyBlob:
│ { _data:
│ { size: 475266,
│ offset: 0,
│ blobId: '7f5f3eb5-8e02-470a-a06c-81bcd1161df8',
└ __collector: {} } } }
The HTTP status is 200 and the content length matches the test file.
Here is the output of img.blob(), which has no image data:
[14:20:16] I | ReactNativeJS ▶︎ 'img blob ', { _data: //<<<== output of img.blob()
│ { size: 475266,
│ offset: 0,
│ blobId: '7f5f3eb5-8e02-470a-a06c-81bcd1161df8',
└ __collector: {} } }
img.json() returns an error: [SyntaxError: JSON Parse error: Unrecognized token '�']
How do I retrieve the image content from the returned img?
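A minimal sketch of one way to get at the bytes, assuming the goal is a base64 data URI usable by an <Image> component or further processing (variable names follow the question; error handling omitted):

// Read the body as a Blob, then convert it to a base64 data URI with FileReader.
// React Native backs the Blob with native data (the blobId seen in the log above).
const response = await fetch(bgimage.uri, { method: 'GET' });
const blob = await response.blob();

const dataUri = await new Promise((resolve, reject) => {
  const reader = new FileReader();
  reader.onloadend = () => resolve(reader.result); // "data:image/jpeg;base64,...."
  reader.onerror = reject;
  reader.readAsDataURL(blob);
});

// e.g. <Image source={{ uri: dataUri }} style={{ width: 200, height: 200 }} />

img.json() fails because the body is binary JPEG data, not JSON, which is why the parser reports an unrecognized token.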