Inconsistent list formatting in TexMaker - formatting

For any item in the list whose name (the part in the brackets) has two or fewer characters, the description is put on the same line as the name. I'm looking for a way to make all of the descriptions appear indented on the next line. Thanks.
\documentclass[11pt]{article}
%% Layout Alteration
%% --------------------------
\usepackage{
enumitem, % indented items for glossary
framed, % nice boxes; used in Supervisor's Approval
geometry, % change the margins for specific PAGES
}
\setlist[description]{style=nextline}
\geometry{ % specify page size options for (geometry)
a4paper, % paper size
margin=1in, % specified independently with hmargin vmargin
}
% Hides the formatting for the glossary
\newenvironment{Glossary}
{
\begin{description}
}
{
\end{description}
}
\begin{document}
\begin{Glossary}
\item[Uvic] University of Victoria
\item[2-FSK] 2 tone Frequency Shift Keying
\item[Core blocks] The pre-installed modules and blocks in GNU Radio
\item[CSA] Canadian Space Agency
\item[CubeSat] A miniaturized space satellite for research
\item[DSP] Digital Signal Processor
\item[FM] Frequency Modulation
\item[FPGA] Field-Programmable Gate Array
\item[FSK] Frequency Shift Keying
\item[GFSK] Gaussian Frequency Shift Keying
\item[GNU Radio] A DSP and SDR creation software
\item[GPIO] General Purpose Input/Output
\item[GRC] GNU Radio Companion, GUI interface for GNU Radio
\item[gr\_modtool] A script that simplifies the process of creating OOT Modules
\item[GUI] Graphical User Interface
\item[IO] Input/Output
\item[OS] Operating System
\item[OOT Module] Out-of-tree Module
\item[PDU] Protocol Data Unit, a PMT that is a pair of a dictionary and a uniform vector type
\item[PMT] Polymorphic Type, an opaque data type designed as a generic container for data that can be safely passed between blocks in GNU Radio
\end{Glossary}
\end{document}
"FM", "IO, and "OS" names all have their descriptions on the same line

There are a couple of options. One is to format the item yourself with a new command; then you can decide the spacing yourself:
\newcommand{\myitem}[2]{% the trailing % avoids a spurious space
  \item[] \textbf{#1}\newline\hspace*{.5cm}#2}
\begin{document}
\begin{Glossary}
\myitem{Uvic} University of Victoria
\myitem{2-FSK} 2 tone Frequency Shift Keying
\myitem{Core blocks} The pre installed modules and blocks in Gnuradio
\myitem{CSA} Canadian Space Agency
\myitem{CubeSat} A miniaturized space satellite for research
\myitem{DSP} Digital Signal Processor
\myitem{FM} Frequency Modulation
\myitem{FPGA} Field-Programmable Gate Array
\myitem{FSK} Frequency Shift Keying
\myitem{GFSK} Gaussian Frequency Shift Keying
\myitem{GNU Radio} A DSP and SDR creation software
\myitem{GPIO} General Purpose Input/Output
\myitem{GRC} GNU Radio Companion, GUI interface for GNU Radio
\myitem{gr\_modtool} A script that simplifies the process of creating OOT Modules
\myitem{GUI} Graphical User Interface
\myitem{IO} Input/Output
\myitem{OS} Operating System
\myitem{OOT Module} Out-of-tree Module
\myitem{PDU} Protocol Data Unit, A PMT that is a pair of a dictionary and a uniform vector type
\myitem{PMT} Polymorphic Type, An opaque data type designed as generic containers of data that can be safely passed between blocks in Gnuradio
\end{Glossary}
This gives you output where every description starts on its own indented line.
An even simpler (but hacky) solution is to just insert invisible letters in the shorter items to match the longer ones and thereby force the same formatting:
\item[GUI] Graphical User Interface
\item[IO\phantom{XYZ}] Input/Output
\item[OS\phantom{XYZ}] Operating System
And better yet, organize your acronyms using a package that does the formatting and bookkeeping for you, e.g. acro.
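If you go that route, a minimal sketch with acro could look like this (acro's \DeclareAcronym syntax; only two of the entries are shown):

\documentclass[11pt]{article}
\usepackage{acro} % does the formatting and bookkeeping for you
\DeclareAcronym{fsk}{short = FSK, long = Frequency Shift Keying}
\DeclareAcronym{gui}{short = GUI, long = Graphical User Interface}
\begin{document}
First use: \ac{fsk} and \ac{gui}. % long form on first use, short thereafter
\printacronyms % typesets the acronym list with consistent formatting
\end{document}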

Related

Redeclaring two Medium packages in one system component

I am new to Modelica, and I don't have that much experience with it, but I've got the basics, of course. I am trying to model a microfluidic network. The network consists of two sources, of water and oil, controlled by two valves. The flows of the two mediums interact at a T-junction and then go into a tank or chamber. I don't care about the fluid properties of the mixture because that's not my purpose. My question is: how do I redeclare two medium packages (water and oil) in one system component, such as the T-junction or a tank, in order to simulate the system? In my real model, the two mediums don't meet, because each medium passes through the channels at a different time.
I attached the model with this message. Here's the link:
https://www.dropbox.com/s/yq6lg9la8z211uc/twomediumsv2.zip?dl=0
Thanks for the help.
I don't think you can redeclare a medium during simulation. In your case (where you don't need the mixing of the two fluids) you could create a new medium, for instance called OilWaterMixture, extending from Modelica.Media.Interfaces.PartialMedium.
If you look into the code of PartialMedium you'll see that it contains a lot of partial ("empty") functions that you need to fill in in your new medium model. For example, in OilWaterMixture you should extend the function specificEnthalpy_pTX to return the specific enthalpy of a given water/oil mixture (the mixture given by the mass fraction vector X). This could be done by adding the following function to the OilWaterMixture package:
redeclare function extends specificEnthalpy_pTX "Return specific enthalpy"
  import Oil = Modelica.Media.Incompressible.Examples.Essotherm650;
  import Water = Modelica.Media.Water.StandardWater;
protected
  SpecificEnthalpy h_oil "Enthalpy of the oil fraction";
  SpecificEnthalpy h_water "Enthalpy of the water fraction";
algorithm
  h_oil := Oil.h_pT(p, T);
  h_water := Water.specificEnthalpy_pT(p, T);
  // Modelica arrays are 1-based: X[1] and X[2], not X[0] and X[1]
  h := X[1]*h_oil + X[2]*h_water;
end specificEnthalpy_pTX;
The mass fraction vector X is defined in PartialMedium; in OilWaterMixture you must specify that it has two elements.
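A sketch of how that declaration could look (the constants mediumName, substanceNames and reducedX come from PartialMedium; the substance names here are just illustrative):

package OilWaterMixture
  extends Modelica.Media.Interfaces.PartialMedium(
    mediumName = "OilWaterMixture",
    substanceNames = {"oil", "water"}, // two substances, so nS = 2
    reducedX = false);                 // X then has nX = nS = 2 elements
  // redeclared functions such as specificEnthalpy_pTX go here
end OilWaterMixture;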
Again, since you are not going to actually use the mixing properties but only mass fraction vectors {0,1} or {1,0} the simple linear mixing equation should be adequate.
When you use OilWaterMixture in the various components, the error log will tell you which medium functions they need. So you probably don't need to extend all the partial functions in PartialMedium.

Method to get non-base units?

Is there a method of using the exponent properties of LabVIEW units to carry custom units? For example, I would find it convenient to use milli-Amperes instead of Amperes in my data wires.
My first attempt at doing so looks like this, but trying to get the value out at the end gives me nothing.
I would find it convenient to use milli-Amperes instead of Amperes in my data wires
For a wire, it's not possible, and it's not a problem; here's why:
I'm afraid what you want makes little sense: your "milli-Amperes instead of Amperes" refers to how your data is represented, while a wire is just raw data. Adding the milli- prefix to a floating-point number changes the exponent, not the mantissa, so there's no loss or gain of precision in the value that your number carries.
Now if we talk about an indicator, which is technically a display of the wire's value, you can change the unit from "A" to "mA" to get the display you want.
Finally, in your attempt with "set numeric info", the -3 factor added next to Amperes means the unit is A^-3, not mA.
You can use data that doesn't carry units, but then you will lose the automatic checking of the units.
For display properties you can tweak the display format to show different outputs:
This format string (%^#_6e) is constructed as follows:
% numeric
^ engineering notation, exponents in multiples of three
# no trailing zeros
_6 six significant digits
e scientific notation (1e1 for instance)
The prefix is the best way to affect the presentation of the value on a specific front panel.
When passing data from VI to VI, the prefix is not passed, and the data uses the base units (Amps, Volts, etc.).
In my example below, the unitless value 3 is assigned units of Amp in mA.vi. The front panel indicator is set to show units of mA.
In Watts.vi I multiply the Amps OUT of mA.vi by a constant of 9V and the result is wired to the indicator x*y.
x*y has units of W and I changed the prefix to k for presentation.
The NI forums have several threads that report certain functions (square and square root specifically) can cause unit errors or broken wires. Most folks don't even know the units capability exists, and most that do have tried and abandoned them. :)

Adobe Illustrator Scripting - No Management for Units

So, I searched through the documentation in AI CS6's Scripting Reference (JavaScript, though actually ECMAScript), and I cannot find anything for managing units or unit conversion. Is it missing?
Photoshop uses ECMAScript's UnitValue object as a standard way for managing unit conversion, but I can't find anything for Illustrator. Illustrator functions do not accept UnitValue instances - they only accept doubles, in the "points" unit.
How am I supposed to manage units in AI?
Or, if nothing else, how do I run conversions from a given unit to points?
P.S. Why is it points anyway? Is that the script default? My current default unit is inches.
Answers to all your questions can be found in the Adobe Illustrator Scripting Guide (CS6, on the Adobe home page) in the "Measurement units" section. This section seems to have remained unchanged for years.
In particular,
"Illustrator uses points when communicating with your scripts
regardless of the current ruler units"
"If your script depends on [...] units other than points, it must perform any unit conversions needed to represent your measurements as points."
Precise conversion rules: 1 inch = 72 points, 1 inch = 2.54 cm, 1 pica = 12 points
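If your script has to do those conversions itself, a small helper along these lines is enough (toPoints is a hypothetical name; the factors are the ones quoted above). ExtendScript's core UnitValue object should also be available in Illustrator for the arithmetic itself, e.g. UnitValue(1, "in").as("pt") gives 72, even though Illustrator's functions only accept plain numbers of points.

// Hypothetical helper: convert a value in common ruler units to points.
// 1 in = 72 pt, 1 pica = 12 pt, 1 in = 2.54 cm (so 1 cm = 72/2.54 pt).
function toPoints(value, unit) {
    switch (unit) {
        case "pt": return value;
        case "pc": return value * 12;
        case "in": return value * 72;
        case "cm": return value * 72 / 2.54;
        case "mm": return value * 72 / 25.4;
        default: throw new Error("Unknown unit: " + unit);
    }
}
// e.g. to make a rectangle 3.5 inches wide:
// rect.width = toPoints(3.5, "in"); // 252 pt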
Try setting the required units without conversion (replace PIXELS with, probably, POINTS):
// at start of script:
var originalUnit = preferences.rulerUnits; // alert(originalUnit);
preferences.rulerUnits = Units.PIXELS; // alert(preferences.rulerUnits);
// your main code here
// at the end, restore units:
preferences.rulerUnits = originalUnit;

Abaqus - stress-displacement elements are not allowed in a heat transfer analysis

I'm trying to simulate the cooling of a cylinder-shaped sample, but when I submit a job I get an error: "stress-displacement elements are not allowed in a heat transfer analysis". I defined the part, material (density, specific heat, conductivity), section, section assignments, mesh, instance, a predefined field (temperature) in the initial step, and Step-1 (heat transfer) with an interaction (surface film condition). Where's the problem?
Update:
I solved the problem: I had an incorrect element type. For a heat transfer simulation, set Mesh -> Element Type -> Family -> Heat Transfer. I guess the Convection/Diffusion option in the Hex tab should also be selected.

What are the most hardcore optimisations you've seen?

I'm not talking about algorithmic stuff (e.g. use quicksort instead of bubble sort), and I'm not talking about simple things like loop unrolling.
I'm talking about the hardcore stuff. Like Tiny Teensy ELF, The Story of Mel, practically everything in the demoscene, and so on.
I once wrote a brute force RC5 key search that processed two keys at a time, the first key used the integer pipeline, the second key used the SSE pipelines and the two were interleaved at the instruction level. This was then coupled with a supervisor program that ran an instance of the code on each core in the system. In total, the code ran about 25 times faster than a naive C version.
In one (here unnamed) video game engine I worked with, they had rewritten the model-export tool (the thing that turns a Maya mesh into something the game loads) so that instead of just emitting data, it would actually emit the exact stream of microinstructions that would be necessary to render that particular model. It used a genetic algorithm to find the one that would run in the minimum number of cycles. That is to say, the data format for a given model was actually a perfectly-optimized subroutine for rendering just that model. So, drawing a mesh to the screen meant loading it into memory and branching into it.
(This wasn't for a PC, but for a console that had a vector unit separate and parallel to the CPU.)
In the early days of DOS, when we used floppy discs for all data transport, there were viruses as well. One common way for viruses to spread between computers was to copy a virus bootloader into the boot sector of an inserted floppy disc. When the user inserted the floppy disc into another computer and rebooted without remembering to remove the floppy, the virus ran and infected the hard drive's boot sector, permanently infecting the host PC. A particularly annoying virus I was infected by was called "Form". To battle it I wrote a custom floppy boot sector with the following features:
Validate the boot sector of the host hard drive and make sure it was not infected.
Validate the floppy boot sector and make sure that it was not infected.
Code to remove the virus from the hard drive if it was infected.
Code to duplicate the antivirus boot sector to another floppy if a special key was pressed.
Code to boot the hard drive if all was well and no infections were found.
This was done in the program space of a boot sector, about 440 bytes :)
The biggest problem for my mates was the very cryptic messages displayed, because I needed all the space for code. It was like "FFVD RM?", which meant "FindForm Virus Detected, Remove?"
I was quite happy with that piece of code. The optimization was program size, not speed. Two quite different optimizations in assembly.
My favorite is the floating-point inverse square root via integer operations. This is a cool little hack that exploits how floating-point values are stored; it can execute faster (even doing 1/result to recover the square root is faster than the stock-standard square root function) or produce more accurate results than the standard methods.
In C/C++ the code is (sourced from Wikipedia):
float InvSqrt (float x)
{
float xhalf = 0.5f*x;
int i = *(int*)&x;
i = 0x5f3759df - (i>>1); // Now this is what you call a real magic number
x = *(float*)&i;
x = x*(1.5f - xhalf*x*x);
return x;
}
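To illustrate the "1/result" remark above: once you have the fast reciprocal square root, the square root itself costs only one extra multiply, since x * (1/sqrt(x)) = sqrt(x). A sketch (FastSqrt is just an illustrative name):

// sqrt(x) = x * (1/sqrt(x)): one multiply on top of InvSqrt,
// still avoiding the slower library sqrtf() call.
float FastSqrt(float x)
{
    return x * InvSqrt(x);
}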
A Very Biological Optimisation
Quick background: Triplets of DNA nucleotides (A, C, G and T) encode amino acids, which are joined into proteins, which are what make up most of most living things.
Ordinarily, each different protein requires a separate sequence of DNA triplets (its "gene") to encode its amino acids -- so e.g. 3 proteins of lengths 30, 40, and 50 would require 90 + 120 + 150 = 360 nucleotides in total. However, in viruses, space is at a premium -- so some viruses overlap the DNA sequences for different genes, using the fact that there are 6 possible "reading frames" for DNA-to-protein translation (namely starting from a position divisible by 3, from a position that leaves remainder 1 when divided by 3, or from a position that leaves remainder 2; and the same three again, but reading the sequence in reverse).
For comparison: Try writing an x86 assembly language program where the 300-byte function doFoo() begins at offset 0x1000... and another 200-byte function doBar() starts at offset 0x1001! (I propose a name for this competition: Are you smarter than Hepatitis B?)
That's hardcore space optimisation!
UPDATE: Links to further info:
Reading Frames on Wikipedia suggests Hepatitis B and "Barley Yellow Dwarf" virus (a plant virus) both overlap reading frames.
Hepatitis B genome info on Wikipedia. Seems that different reading-frame subunits produce different variations of a surface protein.
Or you could google for "overlapping reading frames"
Seems this can even happen in mammals! "Extensively overlapping reading frames in a second mammalian gene" is a 2001 scientific paper by Marilyn Kozak about a "second" gene in rat with "extensive overlapping reading frames". (This is quite surprising, as mammals have a genome structure that provides ample room for separate genes for separate proteins.) I haven't read beyond the abstract myself.
I wrote a tile-based game engine for the Apple IIgs in 65816 assembly language a few years ago. This was a fairly slow machine and programming "on the metal" is a virtual requirement for coaxing out acceptable performance.
In order to quickly update the graphics screen one has to map the stack to the screen in order to use some special instructions that allow one to update 4 screen pixels in only 5 machine cycles. This is nothing particularly fantastic and is described in detail in IIgs Tech Note #70. The hard-core bit was how I had to organize the code to make it flexible enough to be a general-purpose library while still maintaining maximum speed.
I decomposed the graphics screen into scan lines and created a 246 byte code buffer to insert the specialized 65816 opcodes. The 246 bytes are needed because each scan line of the graphics screen is 80 words wide and 1 additional word is required on each end for smooth scrolling. The Push Effective Address (PEA) instruction takes up 3 bytes, so 3 * (80 + 1 + 1) = 246 bytes.
The graphics screen is rendered by jumping to an address within the 246 byte code buffer that corresponds to the right edge of the screen and patching in a BRanch Always (BRA) instruction into the code at the word immediately following the left-most word. The BRA instruction takes a signed 8-bit offset as its argument, so it just barely has the range to jump out of the code buffer.
Even this isn't too terribly difficult, but the real hard-core optimization comes in here. My graphics engine actually supported two independent background layers and animated tiles by using different 3-byte code sequences depending on the mode:
Background 1 uses a Push Effective Address (PEA) instruction
Background 2 uses a Load Indirect Indexed (LDA ($00),y) instruction followed by a push (PHA)
Animated tiles use a Load Direct Page Indexed (LDA $00,x) instruction followed by a push (PHA)
The critical restriction is that both of the 65816 registers (X and Y) are used to reference data and cannot be modified. Further the direct page register (D) is set based on the origin of the second background and cannot be changed; the data bank register is set to the data bank that holds pixel data for the second background and cannot be changed; the stack pointer (S) is mapped to graphics screen, so there is no possibility of jumping to a subroutine and returning.
Given these restrictions, I had the need to quickly handle cases where a word that is about to be pushed onto the stack is mixed, i.e. half comes from Background 1 and half from Background 2. My solution was to trade memory for speed. Because all of the normal registers were in use, I only had the Program Counter (PC) register to work with. My solution was the following:
Define a code fragment to do the blend in the same 64K program bank as the code buffer
Create a copy of this code for each of the 82 words
There is a 1-1 correspondence, so the return from the code fragment can be a hard-coded address
Done! We have a hard-coded subroutine that does not affect the CPU registers.
Here are the actual code fragments:
code_buff: PEA $0000 ; rightmost word (16-bits = 4 pixels)
PEA $0000 ; background 1
PEA $0000 ; background 1
PEA $0000 ; background 1
LDA (72),y ; background 2
PHA
LDA (70),y ; background 2
PHA
JMP word_68 ; mix the data
word_68_rtn: PEA $0000 ; more background 1
...
PEA $0000
BRA *+40 ; patched exit code
...
word_68: LDA (68),y ; load data for background 2
AND #$00FF ; mask
ORA #$AB00 ; blend with data from background 1
PHA
JMP word_68_rtn ; jump back
word_66: LDA (66),y
...
The end result was a near-optimal blitter that has minimal overhead and cranks out more than 15 frames per second at 320x200 on a 2.5 MHz CPU with a 1 MB/s memory bus.
Michael Abrash's "Zen of Assembly Language" had some nifty stuff, though I admit I don't recall specifics off the top of my head.
Actually it seems like everything Abrash wrote had some nifty optimization stuff in it.
The Stalin Scheme compiler is pretty crazy in that aspect.
I once saw a switch statement with a lot of empty cases. A comment at the head of the switch said something along the lines of:
Added case statements that are never hit because the compiler only turns the switch into a jump-table if there are more than N cases
I forget what N was. This was in the source code for Windows that was leaked in 2004.
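The idea, sketched in C (the threshold N and the case values are made up, since the real N is forgotten): pad the switch with cases that are never hit, so the compiler sees enough of them to emit a jump table instead of a compare-and-branch chain.

int handle_a(void), handle_b(void), handle_c(void); /* hypothetical handlers */

int dispatch(int msg)
{
    switch (msg) {
    case 0: return handle_a();
    case 1: return handle_b();
    case 2: return handle_c();
    /* never hit: padding cases to push the count past the
       compiler's jump-table threshold */
    case 3: case 4: case 5: case 6: case 7:
    default: return 0;
    }
}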
I've gone through the Intel (and AMD) architecture references to see what instructions there are. movsx - move with sign extension - is awesome for moving small signed values into big spaces in a single instruction, for example.
Likewise, if you know you only use 16-bit values but you can access all of EAX, EBX, ECX, EDX, etc., then you have 8 very fast locations for values - just rotate the registers by 16 bits to access the other values (see the sketch below).
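A quick sketch of both tricks in x86 assembly (NASM syntax; val is a made-up label):

movsx eax, byte [val]  ; sign-extend: byte 0xFB (-5) becomes 0xFFFFFFFB
rol   ebx, 16          ; swap the two 16-bit halves of EBX to reach
                       ; the second value stored in the same register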
The EFF DES cracker, which used custom-built hardware to generate candidate keys (the hardware could prove a key wasn't the solution, but could not prove that a key was the solution); the candidates were then tested with more conventional code.
The FSG 2.0 packer, made by a Polish team, specifically built for packing executables written in assembly. If packing assembly isn't impressive enough (it's supposed to be almost as small as possible already), the loader it comes with is 158 bytes and fully functional. If you try packing an assembly-made .exe with something like UPX, it will throw a NotCompressableException at you ;)