In step 3 of the video, I explain one clever way to speed up the puzzle solver: change how the matching patterns are found. Previously, every possible variant of a row was generated and then checked to see whether its row descriptor matched the one needed for that row. This took about 10 minutes per row, because 2^25 patterns had to be generated and checked. Instead, why not generate the block groups themselves and simply move them around in every possible combination?
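The original code is a LabVIEW VI, so I can't reproduce it as text, but the brute-force approach it replaced can be sketched in Python. This is a minimal illustration, not the actual code from the post: it enumerates every 2^n fill pattern of a row and keeps the ones whose run-length descriptor matches. The function names are my own.

```python
from itertools import product

def descriptor(row):
    """Run lengths of consecutive filled cells, e.g. (1,1,0,1) -> [2, 1]."""
    runs, count = [], 0
    for cell in row:
        if cell:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    return runs

def brute_force_matches(length, target):
    """Generate all 2^length patterns and keep those matching the descriptor."""
    return [row for row in product([0, 1], repeat=length)
            if descriptor(row) == target]

# Tiny example: 5-cell row with descriptor [2, 1]
print(brute_force_matches(5, [2, 1]))
```

Even at this toy size you can see the problem: the work grows as 2^n regardless of how few patterns actually match, which is why a 25-cell row took so long.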
In theory, it's as easy as wiggling the blocks around! In practice, I found that a recursive VI was the fastest route to take. Start with a simple example: block groups of 1, 2, and 4 in a 12-cell row. You can model each group as some number of empty spaces in front of it, followed by the blocks themselves.
As (spaces, blocks) pairs, the minimal packing is (0, 1), (1, 2), (1, 4). Starting with the first section, vary the number of spaces in front of it: you can keep adding spaces until you start pushing blocks off the end of the row.
Each of these variants can then be broken into a new, smaller "row" starting after the block group. Take the first one, for instance: if the new row already fills all the remaining spaces, we're done varying it. If not, we vary and break again until that condition is met. Once it is, we return all the variants and join them back onto the sections before them to compile the full list of generated matching patterns. Here's what the code looks like: In terms of performance, this takes a 10-hour process down to about 5 seconds, a tremendous improvement!
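The recursive vary-and-break idea above can be sketched in Python (the real thing is a recursive LabVIEW VI; this translation and its names are my own). Each call places the first block group after every legal number of leading spaces, then recurses on the remaining groups in the shorter row to the right:

```python
def placements(blocks, length):
    """Recursively slide block groups around, returning every valid row."""
    if not blocks:
        return [[0] * length]          # no groups left: fill the rest with spaces
    first, rest = blocks[0], blocks[1:]
    # Cells the later groups still need: their sizes plus one gap before each
    reserved = sum(rest) + len(rest)
    results = []
    for lead in range(length - first - reserved + 1):
        head = [0] * lead + [1] * first
        # A mandatory single gap separates this group from the next one
        gap = [0] if rest else []
        for tail in placements(rest, length - len(head) - len(gap)):
            results.append(head + gap + tail)
    return results

# The 1-2-4 example from the post, in a 12-cell row
rows = placements([1, 2, 4], 12)
print(len(rows))  # 20 valid arrangements
```

Only valid arrangements are ever built, so the work scales with the number of matches rather than with 2^n, which is where the minutes-to-seconds speedup comes from.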
Architecture - Reusing States for the Clever Hacks. It was important to me from the onset of this project to maintain a level of storytelling in the UI of the code. After all, what kind of story could be told if all the magic happened in hidden code? This choice had some unforeseen implications, both good and bad: Pro - by visualizing everything, from the number of possible patterns to how patterns were being eliminated, I learned a lot more along the way (and got more inspired ideas) than I anticipated. This show-as-you-go approach is one reason I love coding with LabVIEW: it makes troubleshooting code far more approachable and visual than other languages I've used.
Beyond troubleshooting, it simply inspires more clever approaches along the way. Con - visualizing everything along the way means the block diagram gets more complicated. One of the beauties of subVIs is that you can abstract away lower-level processes, but the other side of that double-edged sword is that, due to the nature of dataflow, any updates to the front panel need to live on the top-level diagram. This doesn't mean subVIs couldn't be used (and many were), but it did mean there were times I had to put a lot of code on the top-level block diagram because the visualization sat in the heart of the code. This is probably the messiest part of the code I have. One could work around this with references, but I'm not the biggest fan of those, because they break one of the nicest parts of LabVIEW: dataflow. Pro - the visualization of the matching algorithm looks SWEET!
Sure, it doesn't show all of the millions of patterns being tested; in fact, only a fraction of them ever make it to the front panel. But it looks so cool to see the code run and test different patterns against their cross-references! It makes me feel like a movie robot is figuring out my problems, and I can't stop myself from making noises like "beep, bop, boop." Con - performance takes a big hit from visualizing the matching algorithm. LabVIEW also has to arbitrate which steps it can show: the code might be testing a thousand patterns a second, but your monitor still only displays 30-60 frames per second, so LabVIEW can only show 30-60 of those tests. Looking at the chart comparisons, turning off the visualizations yields roughly a 22x average improvement in run times.
Hello, I am working on a robot and I want to know the logic or algorithm to find the angle toward a reference point. The robot knows the X and Y coordinates and the angle of the reference point, and it also has information about its own position. Could anyone please guide me on how to find the correct angle for the robot to move in the direction of the reference point? Initially we assume an obstacle-free environment, meaning there is nothing in the way; I only want to turn the robot to face the reference point. Can anyone suggest a good algorithm for developing this logic? Thank you very much in advance.
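The standard approach for the question above is a two-argument arctangent: compute the bearing from the robot to the target with atan2, subtract the robot's current heading, and normalize the difference to the shortest turn. A minimal sketch in Python (function and parameter names are mine; angles are assumed to be in radians, counter-clockwise positive):

```python
import math

def heading_to_target(robot_x, robot_y, robot_angle, target_x, target_y):
    """Angle the robot must turn (radians, CCW positive) to face the target."""
    desired = math.atan2(target_y - robot_y, target_x - robot_x)
    turn = desired - robot_angle
    # Normalize into (-pi, pi] so the robot takes the shortest rotation
    return math.atan2(math.sin(turn), math.cos(turn))

# Robot at the origin facing +x; the target at (0, 5) is straight "up"
print(heading_to_target(0, 0, 0, 0, 5))  # pi/2, about 1.5708
```

In LabVIEW the same computation is available as the Atan2 primitive on the Numeric palette; wire in the coordinate differences and subtract the current heading.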