Can anyone explain the difference between using a NAND flash chip with an x8 I/O width versus an x16 I/O width?
For NAND flash:
x8 mode: a column address selects a byte location within the page register.
x16 mode: a column address selects a word (16-bit) location within the page register.
At the interface:
an x16 device has 16 I/O pins in total (IO0~IO15) (note: IO8~IO15 are only used in the SDR data interface);
an x8 device has 8 I/O pins in total (IO0~IO7).
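A toy sketch of what that means for column addressing (the 2 KB page size here is an assumed example, not something from the question):

#include <cstdio>

int main() {
    const int page_bytes = 2048;       // assumed page size, for illustration only
    int x8_columns  = page_bytes;      // x8: one column address per byte
    int x16_columns = page_bytes / 2;  // x16: one column address per 16-bit word
    printf("x8 : %d column addresses, 8 bits per data cycle\n",  x8_columns);
    printf("x16: %d column addresses, 16 bits per data cycle\n", x16_columns);
    return 0;
}

So for the same page, the x16 part needs half as many column addresses and moves twice as many bits per data cycle.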
I have a USRP N320 SDR, and I have an issue with the center frequency in the 3 MHz to 450 MHz band. When I have a signal between 450 MHz and 6 GHz, I see it at its actual frequency even if I slide the center frequency, but below 450 MHz the signal is shifted negatively when I slide the center frequency. Is there any reason for and solution to this issue? Any help?
As you can see in Figure 1, the FM radio signals are correctly seen when I set the Rx Tune Frequency to 100 MHz.
[Figure 1]
But when I slide the Rx Tune Frequency to 110, 120, 130, and 140 MHz, the FM radio signals' frequency values are also shifted, as you can see in Figures 2, 3, 4, and 5.
[Figures 2-5]
Addition:
The overall picture of the flowgraph blocks and the parameters of the USRP Source block are in the figures below.
[Figure: Blocks]
[Figure: USRP Source 1]
[Figure: USRP Source 2]
[Figure: USRP Source 3]
I also figured out that when I apply a signal below 450 MHz, for example a 100 MHz signal, and shift the center frequency by some amount, the signal is shifted by double that amount in the opposite direction. I might not be explaining it well, but the figures below do.
[Figure: 100 MHz signal at 100 MHz center frequency]
[Figure: 100 MHz signal at 95 MHz center frequency, but it appears at 90 MHz]
[Figure: 100 MHz signal at 110 MHz center frequency, but it appears at 120 MHz]
But when I apply a signal above 450 MHz, for example 2 GHz, it works properly, as you can see in the figure below.
[Figure: a 2 GHz signal is seen correctly at any center frequency]
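In other words, assuming I am reading my own figures right, the apparent frequency follows f_apparent = 2*f_center - f_signal: with the 100 MHz signal, 2*95 - 100 = 90 MHz and 2*110 - 100 = 120 MHz, as if the spectrum below 450 MHz were mirrored about the center frequency.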
My source video file (a 1 h 30 min movie) is playable in both PotPlayer and VLC: H.264, 8-bit color, 7755 kb/s bitrate.
The NVEnc command I'm using is this:
.\nvencc\NVEncC64.exe --avhw -i "input.mkv" --codec hevc --preset quality --bframes 4 --ref 7 --cu-max 32 --cu-min 8 --output-depth 10 --audio-copy --sub-copy -o "output.mkv"
Encoding works fine (I believe):
NVEncC (x64) 5.26 (r1786) by rigaya, Jan 31 2021 09:23:04 (VC 1928/Win/avx2)
OS Version Windows 10 x64 (19042)
CPU AMD Ryzen 5 1600 Six-Core Processor [3.79GHz] (6C/12T)
GPU #0: GeForce GTX 1660 (1408 cores, 1830 MHz)[PCIe3x16][457.51]
NVENC / CUDA NVENC API 11.0, CUDA 11.1, schedule mode: auto
Input Buffers CUDA, 21 frames
Input Info avcuvid: H.264/AVC, 1920x800, 24000/1001 fps
AVSync vfr
Vpp Filters cspconv(nv12 -> p010)
Output Info H.265/HEVC main10 @ Level auto
1920x800p 1:1 23.976fps (24000/1001fps)
avwriter: hevc, eac3, subtitle#1 => matroska
Encoder Preset quality
Rate Control CQP I:20 P:23 B:25
Lookahead off
GOP length 240 frames
B frames 4 frames [ref mode: disabled]
Ref frames 7 frames, MultiRef L0:auto L1:auto
AQ off
CU max / min 32 / 8
Others mv:auto
encoded 142592 frames, 219.97 fps, 1549.90 kbps, 1098.83 MB
encode time 0:10:48, CPU: 8.7%, GPU: 5.2%, VE: 98.3%, VD: 21.5%, GPUClock: 1966MHz, VEClock: 1816MHz
frame type IDR 595
frame type I 595, avgQP 20.00, total size 39.44 MB
frame type P 28519, avgQP 23.00, total size 471.93 MB
frame type B 113478, avgQP 25.00, total size 587.45 MB
But when I try to play it in either PotPlayer or VLC, it says there is no video track, or it just doesn't play at all.
MediaInfo doesn't show any video, audio, or subtitle tracks either; just the name of the file and the file size. Am I missing something?
Switching --avhw to --avsw solved the problem.
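For reference, the working command is the same as above with only that one flag changed:
.\nvencc\NVEncC64.exe --avsw -i "input.mkv" --codec hevc --preset quality --bframes 4 --ref 7 --cu-max 32 --cu-min 8 --output-depth 10 --audio-copy --sub-copy -o "output.mkv"
(--avhw decodes the input on the GPU, while --avsw decodes it in software before handing frames to NVENC.)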
I'm learning OpenCL/CUDA for GPU computing.
While studying the GDDR5 architecture, I was told that
memory bus width = number of memory channels * memory channel width
I saw an AMD GPU with 16 memory channels, each 32 bits wide, so I get a memory bus width of 16 * 32 = 512 bits.
But I found that mainstream graphics cards have only a 256- or 384-bit memory bus.
What am I getting wrong?
For GPUs, the number of memory channels usually is not explicitly stated, but rather the total bus width (in bits) for all channels combined. The bus width varies greatly depending on how many memory modules are on the PCB and the bus width per memory module. GPUs with a 256-bit total bus width typically have 8 memory modules of 1 GB capacity each, and GPUs with 384 bits have 12.
For CPUs or integrated GPUs which share main memory:
memory bus width per channel = 64 bit
number of memory channels = 2 (mainstream platforms) / 4 or 8 (high-end desktop / workstation)
memory clock = 1600 MHz (DDR3) - 3200+ MHz (DDR4)
memory bandwidth = 0.125 * memory bus width per channel * number of memory channels * memory clock
For dedicated GPUs:
total memory bus width = 64 bit (GDDR3) - 256 bit (GDDR5) - 5120 bit (HBM2)
effective memory clock = <5 GHz (GDDR5) - 19.5 GHz (GDDR6X)
memory bandwidth = 0.125 * total memory bus width * effective memory clock
(the factor 0.125 converts bits to bytes)
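A quick sanity check of both formulas, using the example figures above (a sketch, not measured numbers):

#include <cstdio>

int main() {
    // CPU / iGPU: dual-channel DDR4-3200, 64-bit bus per channel
    double cpu_bw = 0.125 * 64 * 2 * 3200e6;   // bytes per second
    // dedicated GPU: 256-bit GDDR5 at 5 GHz effective clock
    double gpu_bw = 0.125 * 256 * 5e9;         // bytes per second
    printf("CPU: %.1f GB/s\n", cpu_bw / 1e9);  // 51.2 GB/s
    printf("GPU: %.1f GB/s\n", gpu_bw / 1e9);  // 160.0 GB/s
    return 0;
}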
For an iCE40 1k device, below is a snippet from the output of the command "iceunpack -vv example.bin".
I could not understand why there are 332 x 144 bits.
My understanding from [1] is that CRAM BLOCK[0] starts at logic tile (1,1), and it should contain:
48 logic tiles, each 54x16,
14 IO tiles, each 18x16.
How is the "332 x 144" calculated?
Where are the IO tile and logic tile bits mapped within the CRAM BLOCK[0] bits?
E.g., which bits of CRAM BLOCK[0] are the bits for logic tile (1,1), and which for IO tile (0,1)?
Set bank to 0.
Next command at offset 26: 0x01 0x01
CRAM Data [0]: 332 x 144 bits = 47808 bits = 5976 bytes
Next command at offset 6006: 0x11 0x01
[1] http://www.clifford.at/icestorm/format.html
Thanks.
Height = 9 x 16 = 144 (1 I/O tile and 8 logic tiles, each 16 bits tall).
Width = 18 + 42 + 5 x 54 = 330 (1 I/O tile, 1 RAM tile and 5 logic tiles), plus "two zero bytes" = 332.
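A quick check that these dimensions match the iceunpack output (just arithmetic on the numbers above):

#include <cstdio>

int main() {
    int width  = 18 + 42 + 5 * 54 + 2;  // I/O tile + RAM tile + 5 logic tiles + 2 zero columns
    int height = 9 * 16;                // 1 I/O tile and 8 logic tiles, 16 rows each
    printf("%d x %d = %d bits = %d bytes\n", width, height,
           width * height, width * height / 8);
    // prints: 332 x 144 = 47808 bits = 5976 bytes, matching the iceunpack output
    return 0;
}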
The coin3d offscreen rendering class SoOffscreenRenderer is capable of rendering big images (e.g. 4000 x 2000 pixels), that don't fit on the screen or in a rendering buffer. This is done by partitioning the image into tiles that are rendered one after the other, where the default size of these tiles is 1024 x 1024.
I looked at the code of SoOffscreenRenderer and CoinOffscreenGLCanvas and found the environment variables COIN_OFFSCREENRENDERER_TILEWIDTH and COIN_OFFSCREENRENDERER_TILEHEIGHT. I could change the tile size using these variables, but only to sizes smaller than 1024. I could create tiles of 512 x 512 pixels, and also 768 x 768. When I used values bigger than 1024, the resulting tiles were always of size 1024 x 1024.
Is it possible to use bigger tile sizes like 2048 x 2048 or 4096 x 4096, and how would I do that?
It is possible to use larger tiles, and Coin does it automatically. It finds out which tile sizes work by querying the graphics card driver.
From CoinOffscreenGLCanvas.cpp:
// getMaxTileSize() returns the theoretical maximum gathered from
// various GL driver information. We're not guaranteed that we'll be
// able to allocate a buffer of this size -- e.g. due to memory
// constraints on the gfx card.
The reason it did not work was that the environment variable COIN_OFFSCREENRENDERER_MAX_TILESIZE was being set somewhere in our application using coin_setenv("COIN_OFFSCREENRENDERER_MAX_TILESIZE", "1024", 1);. Removing this call allowed bigger tile sizes to be used.
In CoinOffscreenGLCanvas::getMaxTileSize(void), the variable COIN_OFFSCREENRENDERER_MAX_TILESIZE is read and the tile size is clamped accordingly.
On my older computer it generated tiles of size 1024, but on a newer machine the tiles were of size 4096.
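For reference, a minimal sketch of the fix, assuming (as in our case) the cap is set from application code and the variable is set before the first offscreen render, since CoinOffscreenGLCanvas::getMaxTileSize() reads it then:

#include <cstdlib>

int main() {
    // Instead of coin_setenv("COIN_OFFSCREENRENDERER_MAX_TILESIZE", "1024", 1),
    // either drop the call entirely so Coin queries the driver, or raise the cap:
    setenv("COIN_OFFSCREENRENDERER_MAX_TILESIZE", "4096", 1);  // POSIX; use _putenv_s on Windows
    // ... create the SoOffscreenRenderer and render as usual ...
    return 0;
}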