I've been using AWS SAM without much issue for the last 6 months, with about 40 functions split across 4 projects.
Just today I was cleaning up some hard disk space and noticed that my codebase folder for one project was 4.5 GB, and I realized it was the .aws-sam folder. Looking further, I could see a build folder inside, with each function storing its own copy of every single dependency. I am wondering if I am doing something wrong, or if not, why this is necessary, considering I don't use this folder at all when I'm building locally.
We have a C codebase which is around 9 GB in size! It takes a lot of time (hours) to create the IPCH database/files, and hence IntelliSense doesn't work well for the first few hours on the project.
Is it possible to index the whole project separately and use the resulting IPCH database for all our workspaces?
One more problem is that indexing is done only for the files that are opened (clicked on). We want the IPCH to be built for all the files, whether they are opened or not.
We are using the "Remote SSH" extension, and the systems we work on are essentially virtual machines, but we have NFS mounts available, so we could mount the pre-built IPCH files on all our virtual machines.
Please suggest the right workflow.
When running srb init on a large Rails application, the process uses a lot of memory (10 GB+) and takes a long time (upwards of 10 or 15 minutes) to complete. Is it possible to update hidden definitions for a single file or sub-directory in order to speed up this process?
I am especially thinking of the case where a new gem or a file change requires an update to the hidden definitions, but I don't want to re-initialize the entire project.
Computing hidden-definitions.rbi is necessarily whole-program wide. The algorithm is:
1) Load all code in your project, including gems.
2) Run Sorbet over all code in your project, including RBIs that were already created for gems.
3) Output an RBI containing the diff of the previous two steps.
So fundamentally, hidden-definitions.rbi must be computed for an entire project.
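To make the "diff" step concrete, here is a minimal sketch of the idea (plain Python, not Sorbet's actual implementation, with made-up names): anything discovered by loading the project that the static analysis didn't already know about ends up in the hidden-definitions output.

    # Illustration only: the "hidden definitions" are whatever shows up at
    # runtime but is missing from the statically known definitions.
    def hidden_definitions(runtime_defs, static_defs):
        return sorted(set(runtime_defs) - set(static_defs))

    runtime_defs = ["Foo", "Foo#bar", "Gem::DynamicallyDefined"]  # from loading everything
    static_defs = ["Foo", "Foo#bar"]                              # from source + gem RBIs

    print(hidden_definitions(runtime_defs, static_defs))
    # ['Gem::DynamicallyDefined']

Because both inputs are whole-project sets, there is no meaningful per-file version of this diff, which is why it cannot be recomputed for just one file or sub-directory.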
Under the section "Cascading Style Sheets" in M. Hartl's Rails 3 tutorial he mentions copying the CSS blueprint directory into the 'public/stylesheets' folder. My stylesheets folder resides within the assets directory. Is it reasonable to copy the blueprint directory into 'assets/stylesheets' instead of 'public/stylesheets'? If not, what might be your suggestion? If so, are there any particular pitfalls of which you might suggest I be mindful?
He clearly suggested using Rails 3.0.1, though I am running 3.2.6. I should have followed his directions to the letter, but I had an almost impossible time getting the environment up and running on my Windows machine (MySQL conflicts, etc.), and it just so happened that this version ended up working for me, so I went with it.
Don't assume I know what I'm talking about, because I'm new to RoR, but I just spent the last few hours reading up on the asset pipeline after running into problems with it. I'll share a few things I've learned that might help you conceptualize:
Anything in public/ is left just the way it is, and served as static files directly by the web server. There are two points worth considering regarding public/ assets, though:
1) They don't get the benefits of precompiling, which include:
1a) fingerprinting - Appending an MD5 hash based on file contents to the filename, so that the filename changes when the file changes, forcing caches to reload. This is useful if the file might change some day (a new version of blueprint, in your case). There's a small illustrative sketch of this after the list.
1b) concatenation - The precompiler can/will combine multiple CSS or JS files into one, which makes the download faster. (Exactly what files get compiled and into how many is configurable.)
1c) minification - The precompiler removes whitespace (and applies other clever optimizations) to shrink down the size of your CSS/JS files.
2) I'm still trying to figure this part out, but whether something is in /app/assets and goes through precompile affects whether and how helper methods work (things like asset_tag, image_tag, and javascript_include_tag, which you use in your views).
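To illustrate the fingerprinting idea from 1a, here is a toy sketch in Python (not what Sprockets actually does): the digest comes from the file's contents, so the name changes whenever the content does, and stale cached copies get bypassed.

    # Toy fingerprinting: append a digest of the file contents to its name,
    # so a changed file gets a new name and old cached copies are skipped.
    import hashlib
    from pathlib import Path

    def fingerprinted_name(path):
        p = Path(path)
        digest = hashlib.md5(p.read_bytes()).hexdigest()
        return f"{p.stem}-{digest}{p.suffix}"

    # e.g. fingerprinted_name("screen.css") -> "screen-0f6b2c9d....css"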
Even though I'm totally unqualified, I'm seriously considering starting my own Rails Assets Best Practices page on a wiki somewhere to start organizing my thoughts. I think such a resource is sorely lacking - I've had to dredge bits of knowledge from many places, and some of what people are suggesting I find objectionable (like modifying config files to add unmanifested assets to the precompile list).
I copied my stylesheet files to the app/assets folder and it worked normally.
Note: I've tried searching, but Stack Overflow's search was near useless; I am not sure what kind of tool I need.
At my organization we need to keep track of the software configuration for many types of computers, including the binary installers and automation scripts. Changes are infrequent, but the latest version of the configuration is several gigabytes in size.
We are trying to use Mercurial to store changes, but it is just too slow, even with very few revisions. I ran hg status but killed it after it ran for 10 minutes without finishing.
We are looking for a way to store the current configuration as well as keep the old configurations around just in case. I have never done anything like this before and do not know what tools are available or even suitable for such tasks. Can someone point me in the right direction or tell me how they are solving this problem? Thanks
Since hard disk space is cheap and being able to view binary differences isn't very helpful, perhaps the best option you have is to store each configuration in a new directory that is indexed somehow. Example below:
/software/configs/2009-03-15
/software/configs/2009-09-28
/software/configs/2009-09-30
Given the size of your files and how infrequently they change, this would allow you to pick a configuration from a given 'tag' without the overhead of revision control.
If you pack your files into a single tar file and generate a SHA-512 hash, then you can be reasonably sure that no one has tampered with your files since they were archived.
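As a rough sketch of that archive-and-hash idea in Python (the paths and filenames below are made up):

    # Sketch: archive one configuration directory and record a SHA-512
    # checksum so later tampering or corruption can be detected.
    import hashlib
    import tarfile

    config_dir = "/software/configs/2009-09-30"   # hypothetical path from above
    archive = "config-2009-09-30.tar"

    with tarfile.open(archive, "w") as tar:
        tar.add(config_dir, arcname="2009-09-30")

    sha512 = hashlib.sha512()
    with open(archive, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha512.update(chunk)

    with open(archive + ".sha512", "w") as f:
        f.write(sha512.hexdigest() + "\n")

Storing the .sha512 file somewhere separate from the archive itself makes the tamper check more meaningful.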
While I don't know the specific details of how to implement this strategy in Mercurial, I have been working with git and git-fat. It sets up a general procedure that is likely to be feasible in Mercurial as well. Basically, the idea is that whenever you add a binary file to the repository, under the hood the repo stores a symlink-like pointer to the file, which is actually kept in another location as a checksummed object.
This allows large files to be tracked by the repo, without storing the actual data inside. It requires the data to be stored in some other location (perhaps in a binary management system).
It might take some configuration to do it in Mercurial, but I think it's an elegantly simple solution.
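Here is a rough, tool-agnostic sketch of that pointer-file idea in Python (not how git-fat itself is implemented, and the store location is hypothetical): the repository ends up tracking a tiny stub, while the real bytes live in a content-addressed store keyed by their checksum.

    # Sketch of a content-addressed "pointer file" scheme: the repo tracks a
    # small stub containing a hash, while the real binary lives in a separate
    # object store keyed by that hash (e.g. an NFS share).
    import hashlib
    import shutil
    from pathlib import Path

    OBJECT_STORE = Path("/mnt/binary-store")          # hypothetical shared location

    def stash_binary(path):
        p = Path(path)
        digest = hashlib.sha256(p.read_bytes()).hexdigest()
        obj = OBJECT_STORE / digest
        if not obj.exists():
            shutil.copy2(p, obj)                      # store the real data once
        p.write_text(f"pointer: sha256 {digest}\n")   # repo now tracks only this stub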
Compiling my current project, one EXE with ~90,000 lines of code plus ~100 DLLs, takes about half an hour or more depending on the speed of the workstation.
The build process consists of running devenv from PowerShell scripts. This works very well, with no problems.
The problem is that it is slow. I want to speed up this build process.
MSBuild (using VS 2005) is one option, but there's a bug when specifying icons to the VB compiler/linker on the command line such that it won't link successfully.
What other options are there to "make" VB.NET programs?
(Faster workstation is not an option.)
Do you absolutely have to compile the whole solution every time? With that many assemblies it seems unlikely that they all need to be built unless they actually change. If your solution is made up of multiple projects, you might consider creating multiple solutions in your build environment: one master solution containing all the projects, and another that includes only the ones that change most often. You can then configure your build process to focus on the projects that have changed. Depending on the source control system you use, you may be able to query it to determine which projects have changed since the last build, and only build those projects.
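As a rough sketch of that last suggestion in Python (the project layout, the way changed files are obtained, and the build invocation are all assumptions, not a drop-in script):

    # Sketch: only build projects whose directories contain changed files.
    # How `changed_files` is obtained depends on your source control system
    # (e.g. a "files changed since last build" query); it is assumed here.
    import subprocess
    from pathlib import PurePosixPath

    projects = {
        "Reports":  "src/Reports/Reports.vbproj",     # hypothetical layout
        "Printing": "src/Printing/Printing.vbproj",
        "Core":     "src/Core/Core.vbproj",
    }

    changed_files = ["src/Reports/Invoice.vb"]        # assumed input

    for name, proj in projects.items():
        proj_dir = str(PurePosixPath(proj).parent)
        if any(f.startswith(proj_dir) for f in changed_files):
            subprocess.run(["msbuild", proj], check=True)  # build only this project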
There's NAnt, and CruiseControl.NET for continuous builds.
You mentioned that getting a faster PC is not an option, but how much memory do you have? 2GB should be the minimum for a developer machine. Also, using a fast 10K RPM hard disk makes a big difference.
Have you tried disabling any virus scanner during your build?
If you can, upgrade to the 3.5 version of MSBuild. It can build solution files and has multiprocessor support, enabling it to build projects in parallel.
The caveat is that you need to be using project references so it knows what to build.
Also, how long is it taking now? Have you looked at the CPU/memory usage (using something like PerfMon) to see whether either is the bottleneck?
There's not much you can do to make the build process any faster short of adding more cores, CPU power, and memory to your machine, but that isn't an option in your case.
Most large projects are not self-contained in a single EXE. More often, logical units are moved into separate assemblies, which can be either a DLL or an EXE. The end result is a whole bunch of little assemblies instead of one enormous one.
To cite one example, one project that I worked on was enormous, consisting of 700+ forms and tens of thousands of classes. Functionally related forms, such as those related to printing, report generation, user interrogation, etc., were self-contained in their own EXEs. If I was working on the reports, I'd exclude all projects not related to reports from the build process, and this helped bring the compilation time down from half an hour to a few seconds.
This programming style can be tricky, but when it's done right, it simply works, and works flawlessly.
If you have a large number of projects, you should try to reduce it. You can always split them up into DLLs later. The fewer projects there are, the faster the build, especially if it has to build them in a certain order.
Breaking them into smaller solutions is also an option.