How can I execute a three.js example in my project folder? - npm

How can I execute a three.js example in my project folder? The example I want to execute is this one: https://threejs.org/examples/webgl_animation_keyframes.html
After installing three.js through npm, I copied the example's source code from https://github.com/mrdoob/three.js/blob/master/examples/webgl_animation_keyframes.html into my empty examples folder located at '/node_modules/three/examples'.
I think there's a path problem when importing some of the included library files, such as:
"import { RoomEnvironment } from './jsm/environments/RoomEnvironment.js';"
"loader.load( 'models/gltf/LittlestTokyo.glb', function ( gltf )"
etc.
Do I have to copy those library files into the right paths by hand? I'm afraid that isn't the correct solution. Is there a way, as I'm hoping, to download all the necessary library files into the right places with some npm command?

The situation is certainly not ideal, but here are some tips:
You can clone the whole repository so you have all the resources used by the examples: git clone --depth=1 https://github.com/mrdoob/three.js.git
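A minimal sketch of that approach (assuming you have Node.js or Python available to run a simple static server; the examples need to be served over HTTP rather than opened straight from the filesystem):

```bash
# Clone the repository, which contains the examples plus the models and jsm
# helper modules they reference with relative paths.
git clone --depth=1 https://github.com/mrdoob/three.js.git
cd three.js

# Serve the repository root so './jsm/...' and 'models/gltf/...' resolve.
npx serve .          # or: python3 -m http.server 8080

# Then browse to /examples/webgl_animation_keyframes.html on the port the
# server reports (e.g. http://localhost:8080/examples/webgl_animation_keyframes.html).
```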
You can use a web browser's page-saving feature (Ctrl+S), but be sure to replace the saved HTML with the example's source HTML, because it will be much cleaner. You'll need to fix up file path references, and it may still miss some resources. Also make sure you save the demo itself, not the page that embeds it in an iframe.
If Save Webpage As misses some resources, you can use the Network tab in Chrome's dev tools. Refresh the page to populate it, then right click inside the table > Copy > Copy All as cURL. This gives you a command you can paste in your terminal (ideally in an empty directory) to download all the resources used by the page. This can still miss resources that are loaded dynamically, such as 1. the model selector in the LDraw Loader example, in which case you can switch to each model to purposely populate the network requests table, or 2. a fallback for older browsers, in which case you may not be able to get it through this method.
It's much easier to remove features than to add and combine them, so try to find an example that uses as many of the things you want as possible (without being overwhelmingly complex). It's worth looking outside of the official examples to real projects and third party experiments. Just note that they may not be up to date with the latest APIs.
I hope someone writes a script to automate setting up a Three.js project from an example... and posts a better answer than this!


How to add GET parameter to CSS filename in plugin JCH Optimize (Joomla)

I haven't been able to find an answer to this for several weeks. Perhaps my experience in development is not so great :) One site uses the JCH Optimize plugin, and I noticed that after clearing the old caches, the CSS and JS file links do not change, i.e. the names of these files stay the same. The problem is that the browser checks the file name, and if it has not changed, site visitors are served the old version of the stylesheet. The question itself: where in the plugin (in which code file) can I add a GET parameter like ?vers=1.1 so that the browser treats it as a new file and refreshes it for users? I will be glad to hear any solutions. Thanks.
The names of the combined CSS and JavaScript files are keyed from the names of the individual files on the page. Clearing the cache will not cause the plugin to generate a different file name.
As of version 7.0.0, there is an option to generate a different cache key if you want to change the name of the combined files. Update your current version to get that capability.

How to execute the "AL:GO!" task as part of a script

Recently, my company started to focus on Extension_v2 development for Dynamics NAV BC. We store our code on an internal Git server. So far, so good.
But starting a new project is still a very fiddly task. You have to create a repository, clone it, execute the AL: Go! task, move the files to the right location, push the repository to the correct upstream, etc. And all of this does not even include the first initial steps (README, CHANGELOG and all the other fundamental files...).
So I wanted to write a small PowerShell script to do all these initial steps before starting to work on the project.
The problem: I could not find a way to execute the "AL: Go!" task via a script.
I have already searched the internet and some forums for an answer... but it seems like Microsoft did not consider the possibility of executing tasks from the AL Language extension via a script.
I also played around with the New-NAVAppManifest and New-NAVAppManifestFile commands from the old Extension_V1 development, but that did not do the trick.
I am looking for a fair and easy way to combine the creation of the app.json and launch.json files with other commands to easily initialize a new project without having to write all the commands manually. Maybe I did not recognize the easy solution. Or maybe this is just the way we have to do it in Extension_v2.
Anyway, thanks for all your help nevertheless.
Greetings.
Stay away from Ext V1. It's highly deprecated at this point.
First of all, why do you need to execute "AL: Go!" via a script? The "AL: Go!" command already includes all the necessary steps to create an empty project, including launch.json and app.json (minimal adjustments required depending on your BC environment).
There is an extension/plugin for Git in Visual Studio Code which will handle all the repository work for you. You don't need to change file locations if everything is set up for Git. I have rarely used it so far, but I saw a demo of it at Directions EMEA last year and I'm pretty sure it works in its current state (someone correct me if I'm wrong).
One way to run the "AL: Go!" command from a script, or to add further steps to your project setup, might be to write your own Visual Studio Code extension/plugin, which requires some additional know-how.
OR
You could just change the settings/files of the default project; I bet there is at least a template file used for creating the initial AL project. Just adapt that to your requirements.
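For illustration, here is a rough, hypothetical shell sketch of that "scaffold it yourself" idea (the same steps map one-to-one to PowerShell cmdlets). The repository URL, the project name and the al-templates folder are placeholders; the point is simply to keep your own pre-made app.json, launch.json, README and CHANGELOG templates and copy them in instead of running "AL: Go!" interactively:

```bash
# Placeholders: adjust to your internal Git server and naming conventions.
REPO_URL="https://git.example.local/bc/MyNewExtension.git"
PROJECT="MyNewExtension"

# Clone the new (empty) repository and copy in the template files.
git clone "$REPO_URL" "$PROJECT"
cd "$PROJECT"

mkdir -p .vscode
cp ../al-templates/app.json .                 # your own app.json template
cp ../al-templates/launch.json .vscode/       # your own launch.json template
cp ../al-templates/README.md ../al-templates/CHANGELOG.md .

# Push the initial scaffold to the correct upstream.
git add .
git commit -m "Initial project scaffold"
git push -u origin HEAD
```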

Can't migrate custom Plone file types to Blobs

We have custom content types that were created as extensions of the ATTypes: two of them extend the ATFile type and one extends the ATImage type. We recently upgraded from Plone 4.2 to Plone 4.3.2 and just discovered we are not using blob storage at all. No wonder our Data.fs is HUGE. So I have been trying to migrate these custom types.
I have followed all of the steps explained in this example and the product's notes from pypi, these Plone instructions, and used the example from the pypi page for archetypes.schemaextender (Sorry, since I'm still a noob my reputation won't let me post more than 2 links).
In the end, I created an extender script that just extends the ATFile type changing the FileField to BlobField. It seems to be working for new items. I can add a new CustomFileType and it appears to be uploading the file to blob, and my new upload field is showing (I changed the description as a quick way to verify which one it was using).
However, I am having a problem migrating all existing content items to move the binary files over to blob storage. I tried the generic migrate() script, then I created my own migrator and walker as suggested in the above resources. It doesn't seem like it is doing anything, though. When printing the results for each item it tries to migrate, I do see this returned:
DEBUG ATCT.migration Migrating /site/path/to/custom/file/filename.ext (CustomFile -> Blob)
When I navigate to the custom file type in the site, where it usually shows the link to the file, it is just empty. Then going to edit, it treats it as if there is no file there. As a check, I disabled the extender, restarted, and reloaded the custom file. The file was there now. So it looks like the script I am running just isn't moving that file over to where it should be now.
I feel like I am missing something simple, and it is right there, but I can't seem to find it. All of this is learn as I go and a bit over my head, so hopefully someone can easily set me straight.
If I need to provide any additional information leave a comment and I will try to provide what you need.
UPDATE
I used the Red Turtle objects as examples to migrate my custom types, as suggested by keul. I still was not able to get the file to migrate to blob within the type itself. So I tried a different approach: I created a new custom type, "CustomBlob", that mimics the setup of my CustomFile type, and made only this new type blob-aware. Then I migrated the CustomFiles to CustomBlob, did a complete clear and rebuild, and packed the ZEO. The migration seemed to work for the most part: the blobstorage grew by an expected amount and the new types worked. However, the Data.fs didn't go down in size. I would have thought that the binary files stored in Data.fs would be removed during the migration. Am I understanding this incorrectly? How can I remove these files so the Data.fs size goes down appropriately?
Not sure if this is the best solution, but here is how I was able to get this to work.
I created temporary content types parallel to each original type (for CustomImage I made CustomImageBlob, and so on). I made the new types blob-aware only and migrated all types to their parallels. Then I enabled the extender for the original types to make them blob-aware, and migrated back. It is a little redundant and time consuming, but I just could not get the files to migrate to blob when migrating a type to itself.
Providing this as the best answer so far in case it helps someone else, or in case it encourages someone to find a better solution. Thanks for the tip, keul, it definitely helped me get to this solution.

Creating a search app like EasyFind

On OS X there is a popular app called EasyFind that searches for strings inside a file's contents, or you can just do a name search. More importantly, it searches hidden files and inside package contents.
So my research into the Spotlight API leads me to believe that it is not possible to do this with it. Should I assume EasyFind is doing this all manually, without using any Cocoa search API?
If that is true, does anyone know of some code to get me started, even just pseudo?
Basically I want to build an app that will find every single image on the drive no matter where it is or what permissions it has. This also includes icon files.
One other thing I can't seem to find an answer to is whether or not you can do a search like this on the command line in OS X.
Thanks!
On the command line you can use the find tool. That gives you access to all the files in the filesystem if you run it with root permissions (sudo). You can pipe its results to grep to search for strings inside the files. You can also use the strings tool to look for printable strings inside binary files.
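For example, rough sketches of those three commands (adjust the paths and patterns to whatever you're looking for):

```bash
# Find every image on the drive, including hidden folders and icon files inside
# .app bundles; sudo lets find descend into locations your user cannot read.
sudo find / -type f \( -iname '*.png' -o -iname '*.jpg' -o -iname '*.icns' \) 2>/dev/null

# Recursively search file contents for a string and list the matching files.
sudo grep -rl "some string" /path/to/search 2>/dev/null

# Dump printable strings from a binary and filter them.
strings /path/to/some/binary | grep "some string"
```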
This is not very complicated to implement within a Cocoa app. Just Google how to iterate through all of the hard drive's contents; NSFileManager could be a good place to start digging.
Also check out FindAnyFile. It is a nice app that does something similar to EasyFind, but only on file properties (name, dates, etc.). It doesn't read file contents.

Substitute reference to a remote script with local one

This is regarding my development stage and the practice of testing all the JS before releasing it.
Unfortunately, we have some hardcoded references in our code, and this is the reason why there is no way for me to test a new version of test.js on the stage server; you only see the effects when it goes live.
Now, I know I should use relative paths etc., but I was wondering if there is a Firefox plugin that could substitute http://remote.site/test.js with /dev_path/to/test.js during page load?
I have also tried using the hosts file for this purpose, but it doesn't work in my scenario, as I only need to map this one reference and not the whole domain.
Is there anything stopping you from changing the hard-coded references? That's really the easiest answer to your problem.
Run a find-and-replace on your files to replace the absolute links with relative ones. As long as the site hierarchy is the same for development and production, there shouldn't be any problems.
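As a rough sketch of that find-and-replace (GNU sed syntax; on macOS/BSD use `sed -i ''`), assuming the files live under the current directory and using the two paths from the question:

```bash
# List the files that contain the absolute reference, then rewrite it in place.
grep -rl "http://remote.site/test.js" . \
  | xargs sed -i 's|http://remote\.site/test\.js|/dev_path/to/test.js|g'
```

Review the diff before committing, since this rewrites every matching file in place.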