Can anyone answer a question for me? I want to know if I'm using Pinata correctly. I created a small collection (50 images), and of course there are corresponding .json files for the metadata of each image. I uploaded the 50 images to Pinata, then wrote a script that updated the .json files so the metadata points to the IPFS location of each image. Finally, I uploaded the 50 .json files to Pinata as well. Therefore, the images and the corresponding .json files have different CIDs. Is this the correct way to do this? I'm asking because my images are not showing on testnets.opensea.io. My NFT contract sets the base URI to the CID of the metadata files (the .json files).
What I usually do is upload a folder containing the JSON metadata of the NFTs, so that every file shares the same base URI (the folder's CID). Then set your contract to point your NFTs to that base URI and just append the NFT id to the end. If your JSON has the necessary properties, it should show up correctly on OpenSea. Be sure that each metadata file points to its corresponding image.
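A minimal sketch of that metadata step, assuming the 50 images were pinned as one folder whose CID goes in $imagesCid and are named 1.png ... 50.png (adjust names and attributes to your own collection):

<?php
// Generate one JSON file per token, each pointing into the pinned image folder.
// $imagesCid is a placeholder for your real image-folder CID.
$imagesCid = 'QmYourImageFolderCid';

@mkdir(__DIR__ . '/metadata');
for ($id = 1; $id <= 50; $id++) {
    $metadata = array(
        'name'        => "Token #$id",
        'description' => 'Example description',
        'image'       => "ipfs://$imagesCid/$id.png",
    );
    // Name the files so that baseURI + tokenId resolves directly
    // (use "$id.json" instead if your contract appends ".json").
    file_put_contents(__DIR__ . "/metadata/$id", json_encode($metadata, JSON_PRETTY_PRINT));
}
// Then pin the ./metadata folder to Pinata as a single folder and set
// baseURI = "ipfs://<metadataFolderCid>/" in the contract.

With that layout, token 7's URI resolves to ipfs://<metadataFolderCid>/7 and that JSON in turn points at ipfs://<imagesCid>/7.png.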
There are 3 files uploaded to ShareFile using the ShareFile API, 2 of which are PDFs with slightly different names. Since 11/2/2020, both PDFs have been showing up as a single upload (the names concatenated with a comma); they used to be uploaded as 2 different files, since their filenames end in _App or _Qte. Example: Jane_Doe_100-1_HO_10-20-2020_App-8625.pdf,Jane_Doe_100-1_HO_10-20-2020_Qte-8625_4112020082002.pdf
Is anybody else having this problem? ShareFile support has been contacted and they said they do not support API calls.
We haven't made any changes to the code. Thanks in advance for the help.
What I had to do was split the upload procedure into 2 and upload each file separately. Although the ShareFile API allows/uses a multipart form data stream to send files of the same type/extension as part of one stream, it seems they no longer store those files separately, so I had to upload the two PDF files in separate requests.
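Roughly, the workaround looks like this (a sketch, not the full ShareFile flow: $appUploadUrl and $qteUploadUrl stand in for the upload URIs you request from the API, and the 'File1' field name and file paths are assumptions):

<?php
// One upload request per PDF instead of a single multipart stream carrying both.
function uploadSingleFile($uploadUrl, $path)
{
    $ch = curl_init($uploadUrl);
    curl_setopt_array($ch, array(
        CURLOPT_POST           => true,
        CURLOPT_RETURNTRANSFER => true,
        // Only one file part per request, so the two PDFs can never be merged.
        CURLOPT_POSTFIELDS     => array('File1' => new CURLFile($path, 'application/pdf', basename($path))),
    ));
    curl_exec($ch);
    curl_close($ch);
}

uploadSingleFile($appUploadUrl, '/path/to/Policy_App.pdf');
uploadSingleFile($qteUploadUrl, '/path/to/Policy_Qte.pdf');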
For a file uploaded to Telegram, I have a file_id and I can download it. But when the file is sent originally, there is an audio object available that has more metadata than the file (for example, title, performer, etc.). Is there a way to get this information again by having just the file_id?
You can use the sendAudio method to send the audio to a chat, and the response will contain all the attached details.
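For example (a sketch against the Bot API; $botToken, $chatId and $fileId are placeholders):

<?php
// Re-send the audio by file_id; nothing is re-uploaded, and the returned
// Message carries the full Audio object (title, performer, duration, ...).
$response = file_get_contents(
    "https://api.telegram.org/bot$botToken/sendAudio?" . http_build_query(array(
        'chat_id' => $chatId,
        'audio'   => $fileId,
    ))
);
$audio = json_decode($response, true)['result']['audio'];
echo ($audio['performer'] ?? 'unknown') . ' - ' . ($audio['title'] ?? 'untitled') . PHP_EOL;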
Unfortunately, it is not possible to get such information from the file_id alone. No one knew what was contained in a file_id until recently, when a couple of individuals managed to 'crack' it. You can check what is in a file_id using this. A file_id is just a representation of the location of a file on the Telegram servers; it contains a little information such as the data center, the location of the file, and a hashed checksum which in turn encodes the original uploader, etc.
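For contrast, here is roughly what getFile returns for the same file_id - only location-style data, none of the audio tags ($botToken and $fileId are again placeholders):

<?php
// getFile only resolves the file's location and size on Telegram's servers.
$file = json_decode(
    file_get_contents("https://api.telegram.org/bot$botToken/getFile?file_id=" . urlencode($fileId)),
    true
)['result'];
echo $file['file_path'] . ' (' . $file['file_size'] . " bytes)\n";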
Why can't I upload an image this way?
Bigcommerce::createProductImage($product_id, array('image_file'=>'/home/user/bigcommerce/api/picture.jpg'));
The following code works:
Bigcommerce::createProductImage($product_id, array('image_file'=>'https://cdn6.bigcommerce.com/s-0cvdh/products/32/images/299/apitec71q__46081.1484240387.1280.1280__30161.1484331218.1280.1280.jpg'));
According to the documentation, it is not possible to upload an image locally. The docs say:
When specifying a product image, the image_file should be specified as either: a path to an image already uploaded via FTP to the import directory (with the path relative to the import directory); or a URL to an image accessible on the internet.
It doesn't work because the BigCommerce servers have no idea where "/home/user/bigcommerce/api/picture.jpg" lives. That's a file that exists on your local machine, and you're passing a string to BigCommerce telling it where to get the file. It works in the second case because BigCommerce can access that URI.
In more general terms, you need to upload the image to a location that BigCommerce can access, and then pass a URI that BigCommerce can use to retrieve the image. It may seem a little cumbersome, but it relieves BigCommerce from having to stream large amounts of data through their API.
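So, per the quoted docs, either of these shapes should work (the path and URL here are placeholders):

// Option 1: the image already sits in the store's import directory (uploaded via FTP),
// so pass a path relative to that directory:
Bigcommerce::createProductImage($product_id, array('image_file' => 'picture.jpg'));

// Option 2: the image is hosted somewhere publicly reachable, so pass the full URL:
Bigcommerce::createProductImage($product_id, array('image_file' => 'https://example.com/uploads/picture.jpg'));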
Just wondering if there is a recommended strategy for storing different types of assets/files in separate S3 buckets or just put them all in one bucket? The different types of assets that I have include: static site images, user's profile images, user-generated content like documents, files, and videos.
As far as how to group files into buckets goes, that is really not that critical an issue unless you want to have different domain names or CNAMEs for different types of content, in which case you would need a separate bucket for each domain name you want to use.
I would tend to group them by functionality. For example, static files used in your application that you have full control over might go into a separate bucket from content that is going to be user generated. Or you might want to keep video in a different bucket than images, etc.
To add to my earlier comments about S3 metadata: it is going to be a critical part of optimizing how you serve up content from S3/CloudFront.
Basically, S3 metadata consists of key-value pairs. For example, you could have Content-Type as a key with a value of image/jpeg if the file is a .jpg. This will automatically send the appropriate Content-Type header, corresponding to your value, for requests made directly to the S3 URL or via CloudFront. The same is true of Cache-Control metatags. You can also use your own custom metatags. For example, I use a custom metatag named x-amz-meta-md5 to store an md5 hash of the file. It is used for simple bucket comparisons against content stored in a revision control system, so we don't have to compute checksums of each file in the bucket on the fly. We use this for pushing differential content updates to the buckets (i.e. only push those files that have changed).
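Here is roughly what that looks like at upload time with the AWS SDK for PHP (v3 assumed; the bucket name, key and region are placeholders):

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(array('region' => 'us-east-1', 'version' => 'latest'));

$file = '/path/to/bigimage1.jpg';
$s3->putObject(array(
    'Bucket'       => 'my-static-assets',
    'Key'          => 'images/bigimage1.jpg',
    'SourceFile'   => $file,
    'ContentType'  => 'image/jpeg',                     // served back as the Content-Type header
    'CacheControl' => 'max-age=31536000',               // long TTL for versioned, immutable assets
    'Metadata'     => array('md5' => md5_file($file)),  // stored as x-amz-meta-md5
));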
As far as revision control goes, I would HIGHLY recommend using versioned file names. In other words, say you have bigimage.jpg and you want to make an update: call the new file bigimage1.jpg and change your code to reflect this. Why? Because optimally you would like to set long expiration time frames in your Cache-Control headers. Unfortunately, if you then deploy a file with the same name and you are using CloudFront, it becomes problematic to invalidate the edge caching locations. Whereas with a new file name, CloudFront just begins to populate the edge nodes and you don't have to worry about invalidating the cache at all.
Similarly for user-produced content, you might want to include an md5 or some other (mostly) unique identifier scheme, so that each video/image can have its own unique filename and place in the cache.
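For the user-generated side, that can be as simple as deriving the object key from the file's hash (a sketch; the key prefix and extension handling are up to you):

<?php
// Every distinct upload gets its own key, so CloudFront never needs invalidating.
$path = '/tmp/upload.jpg';
$key  = 'user-content/' . md5_file($path) . '.jpg';   // e.g. user-content/9e107d9d....jpg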
For your reference, here is a link to the AWS documentation on setting up streaming in CloudFront:
http://docs.amazonwebservices.com/AmazonCloudFront/latest/DeveloperGuide/CreatingStreamingDistributions.html
Let's say my RESTful API deals with files and folders. With it, I can create and edit both files and folders.
A file can belong to a folder. So let's say I want to move a file to a different folder. Which would be most appropriate, according to spec and/or what is most common?
1. POST to /file/:id, sending the new folder's id, changing the value for just folder_id and keeping all other attributes untouched. The API method only updates folder_id.
2. POST to /file/:id/location, sending the new folder's id.
This isn't really a straightforward answer, but I guess the question I would ask myself is: is the move-file action more appropriate for a file resource or a folder resource to handle? I wouldn't worry too much about the URI structure until I had an answer to that question.
The move action touches three resources: the file, the original folder and the destination folder. A client would need to know at least the file URI and the destination folder URI, since the original folder can be inferred. I can see a case being made for both approaches. The file resource representation (the contents of the POST) could indicate a new destination folder as a value, which is assumed to be empty if no move is required. A folder resource could assume that a file representation contained in a POST implies moving that file to the folder. Whichever approach makes the most sense for your business process is the one I would go with.
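To make the two framings concrete, the requests might look something like this (paths and field names are illustrative, not prescriptive):

Move handled by the file resource, updating just its folder:

POST /file/42
Content-Type: application/json

{ "folder_id": 7 }

Move handled by the destination folder, which accepts a file representation and treats it as "this file now lives here":

POST /folder/7/files
Content-Type: application/json

{ "id": 42 }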