Q: How verbose should validation output be? I have an application that reads a database and outputs alerts to any dependencies that are not being met. My thinking on this issue is "Give the minimum information that points the user to the issue." I have been told by a co-worker that I should be as verbose as possible, printing out the values of the database fields for each field I mention, versus giving the minimum message that "field one needs to be less than field two". I know that there must be some convention or standard for this issue as it reminds me of compiler errors and warnings. Does anyone know how compiler messages are chosen? What suggestion does the community have for this issue? A: I think the key is to be concise. Put as much detail as is required for the reason for the warning to be communicated and nothing more. A: When writing, know your audience. If you're logging warning/error messages for your own consumption, then it's fairly easy: what do you need to know when something goes wrong? If you're logging warning/error messages for someone else, then things get tricky. What do they know? What does their mental model of the system look like? What sorts of problems can they solve, and what information do they need to solve them? Pushing every last scrap of data into a message is punting - at best, the reader will have to wade through irrelevant information in order to find what they need; at worst, they'll become confused and end up making decisions based on the wrong data. The compiler analogy is apt: think how annoying it would be if the entire symbol table was dumped along with every warning... A: For normal, day-to-day operation, I give a data validation message that gives enough information that the user can fix the problem, so that the data validates.
For example, if I have two fields (fieldA and fieldB) and one of them has to be greater than the other, then I would state that on the validation output, specifying which field is the offending field. For example, if A has to be greater than B, and they supply a value for A that is less than B, then the message would be "fieldA needs to be higher than fieldB" That said, I also program a debug mode into my applications (especially the web-applications) which has a verbose mode, telling exactly what's happening with everything. If that's turned on you would see two messages, the user-friendly error, and then "FieldA=XX and FieldB=YY: XX is not greater than YY". That's simplified, but it's the general idea. A: I would suggest that you should implement both modes. During normal operation you need a useful but short message. But sometimes things could go wrong and in this case a 'dump' mode which gives the user all possible information is a life saver. A: I think there are 3 levels of detail for an error message, matching the 3 typical user groups: * *The end user. This is a visitor on a web site or a user of a desktop application. He should receive an error message if the problem cannot be compensated for. It should include the minimum of information. The end user should not receive any information about the system, like the current configuration or file paths. The end user should contact the administrator. A sequential error id can be helpful so that the administrator can find more information. *The administrator needs more helpful information to solve the problem himself. It can include information like "table xy not found" or "login to database failed". *The developer: If the administrator cannot solve the problem, then he will contact the software vendor. In this case the administrator should be able to send a log file so that the developer can solve the problem even if he cannot reproduce it.
A: The specifics of the content of a log can be discussed, but it is my experience that the right level of verbosity will quickly be determined during stress testing. If the system cannot function properly, it is because you either: * *got too verbose with your logs, or *logged too often (actually, I believe Jeff himself had a similar problem) Atwood: We were logging in such a way that the log.... during the log call was triggering another log call. Which is normally okay, but with the load that we have, eventually they would happen so close together that there's also a lock. So, there's two locks going on there. Spolsky: [...] you have a tendency to wanna log everything. But then you just get logs that are, you know, a hundred megabyte per user and you get thirty of them a minute and it can't possibly be analyzed or stored in any reasonable way. So the next thing you have to do is to start culling your logs or just have different levels of debugging, where it's like in high debug mode everything is logged and in low debug mode nothing is logged. And... it's kind of hard to figure out what you really want in a log. Atwood: I mean that, ironically, to troubleshoot this hang, which turned out to be because of logging, we were adding more logging. Spolsky: [laughs] Atwood: The joke just writes itself! The joke just writes itself, right... So my point is, when you run your system in a production-like environment, you should quickly be able to determine whether the level of verbosity you chose is sustainable. A: Dealing with errors vs. warnings first: An error should be for something which violates the standard. A warning should be for something which is allowed, but quite likely isn't what the author intended. For example, the W3C Markup Validator will warn about the use of the syntax <br /> in an HTML document.
In XHTML this means "A line break", but in an HTML document, while being allowed, actually means "A line break followed by a greater than sign" (even if most browsers don't respect this). As for verbosity, what is best does depend on who is using the system. Some users would be better with brief messages that they can skim through, while other users (perhaps those less advanced) would find the additional information useful. Without knowing more about who they are, I'd tend towards using a flag (-v is traditional) to let the user select which version they prefer.
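To make the two-mode idea from the answers above concrete, here is a minimal sketch in C. It is an illustration only: the check_fields function, the verbose flag (which would be set from a -v command-line option), and the message wording all follow the hypothetical fieldA/fieldB example from an earlier answer.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical validation rule: fieldA must be greater than fieldB.
   'verbose' would be set from a -v command-line flag in a real program. */
static int verbose = 0;

/* Writes a validation message into buf; returns 1 if the rule is violated,
   0 (and an empty buf) if the data validates. */
static int check_fields(int fieldA, int fieldB, char *buf, size_t buflen)
{
    if (fieldA > fieldB) {
        buf[0] = '\0';
        return 0;                       /* data validates, no message */
    }
    if (verbose)                        /* debug mode: include the values */
        snprintf(buf, buflen,
                 "fieldA needs to be higher than fieldB "
                 "(fieldA=%d, fieldB=%d: %d is not greater than %d)",
                 fieldA, fieldB, fieldA, fieldB);
    else                                /* normal mode: terse message only */
        snprintf(buf, buflen, "fieldA needs to be higher than fieldB");
    return 1;
}
```

The point is that both audiences are served by the same check: the terse string for day-to-day use, and the value dump only when someone is actively debugging.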
{ "language": "en", "url": "https://stackoverflow.com/questions/108947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Using Side-by-Side assemblies to load the x64 or x32 version of a DLL We have two versions of a managed C++ assembly, one for x86 and one for x64. This assembly is called by a .net application compiled for AnyCPU. We are deploying our code via a file copy install, and would like to continue to do so. Is it possible to use a Side-by-Side assembly manifest to load an x86 or x64 assembly respectively when an application is dynamically selecting its processor architecture? Or is there another way to get this done in a file copy deployment (e.g. not using the GAC)? A: I created a simple solution that is able to load a platform-specific assembly from an executable compiled as AnyCPU. The technique used can be summarized as follows: * *Make sure the default .NET assembly loading mechanism ("Fusion" engine) can't find either the x86 or x64 version of the platform-specific assembly *Before the main application attempts loading the platform-specific assembly, install a custom assembly resolver in the current AppDomain *Now when the main application needs the platform-specific assembly, the Fusion engine will give up (because of step 1) and call our custom resolver (because of step 2); in the custom resolver we determine the current platform and use directory-based lookup to load the appropriate DLL. To demonstrate this technique, I am attaching a short, command-line based tutorial. I tested the resulting binaries on Windows XP x86 and then Vista SP1 x64 (by copying the binaries over, just like your deployment). Note 1: "csc.exe" is the C# compiler. This tutorial assumes it is in your path (my tests were using "C:\WINDOWS\Microsoft.NET\Framework\v3.5\csc.exe") Note 2: I recommend you create a temporary folder for the tests and run command line (or powershell) whose current working directory is set to this location, e.g.
(cmd.exe) C: mkdir \TEMP\CrossPlatformTest cd \TEMP\CrossPlatformTest Step 1: The platform-specific assembly is represented by a simple C# class library: // file 'library.cs' in C:\TEMP\CrossPlatformTest namespace Cross.Platform.Library { public static class Worker { public static void Run() { System.Console.WriteLine("Worker is running"); System.Console.WriteLine("(Enter to continue)"); System.Console.ReadLine(); } } } Step 2: We compile platform-specific assemblies using simple command-line commands: (cmd.exe from Note 2) mkdir platform\x86 csc /out:platform\x86\library.dll /target:library /platform:x86 library.cs mkdir platform\amd64 csc /out:platform\amd64\library.dll /target:library /platform:x64 library.cs Step 3: Main program is split into two parts. "Bootstrapper" contains main entry point for the executable and it registers a custom assembly resolver in current appdomain: // file 'bootstrapper.cs' in C:\TEMP\CrossPlatformTest namespace Cross.Platform.Program { public static class Bootstrapper { public static void Main() { System.AppDomain.CurrentDomain.AssemblyResolve += CustomResolve; App.Run(); } private static System.Reflection.Assembly CustomResolve( object sender, System.ResolveEventArgs args) { if (args.Name.StartsWith("library")) { string fileName = System.IO.Path.GetFullPath( "platform\\" + System.Environment.GetEnvironmentVariable("PROCESSOR_ARCHITECTURE") + "\\library.dll"); System.Console.WriteLine(fileName); if (System.IO.File.Exists(fileName)) { return System.Reflection.Assembly.LoadFile(fileName); } } return null; } } } "Program" is the "real" implementation of the application (note that App.Run was invoked at the end of Bootstrapper.Main): // file 'program.cs' in C:\TEMP\CrossPlatformTest namespace Cross.Platform.Program { public static class App { public static void Run() { Cross.Platform.Library.Worker.Run(); } } } Step 4: Compile the main application on command line: (cmd.exe from Note 2) csc /reference:platform\x86\library.dll 
/out:program.exe program.cs bootstrapper.cs Step 5: We're now finished. The structure of the directory we created should be as follows: (C:\TEMP\CrossPlatformTest, root dir) platform (dir) amd64 (dir) library.dll x86 (dir) library.dll program.exe *.cs (source files) If you now run program.exe on a 32bit platform, platform\x86\library.dll will be loaded; if you run program.exe on a 64bit platform, platform\amd64\library.dll will be loaded. Note that I added Console.ReadLine() at the end of the Worker.Run method so that you can use task manager/process explorer to investigate loaded DLLs, or you can use Visual Studio/Windows Debugger to attach to the process to see the call stack etc. When program.exe is run, our custom assembly resolver is attached to the current appdomain. As soon as .NET starts loading the Program class, it sees a dependency on the 'library' assembly, so it tries loading it. However, no such assembly is found (because we've hidden it in the platform/* subdirectories). Luckily, our custom resolver knows about our trickery, and based on the current platform it tries loading the assembly from the appropriate platform/* subdirectory. A: Have a look at SetDllDirectory. I used it around the dynamic loading of an IBM SPSS assembly for both x64 and x86. It also resolved paths for non-assembly support DLLs loaded by the assemblies, as was the case with the SPSS DLLs. http://msdn.microsoft.com/en-us/library/ms686203%28VS.85%29.aspx A: My version, similar to @Milan's, but with several important changes: * *Works for ALL DLLs that were not found *Can be turned on and off *AppDomain.CurrentDomain.SetupInformation.ApplicationBase is used instead of Path.GetFullPath() because the current directory might be different, e.g. in hosting scenarios, Excel might load your plugin but the current directory will not be set to your DLL.
*Environment.Is64BitProcess is used instead of PROCESSOR_ARCHITECTURE, as we should not depend on what the OS is, but rather on how this process was started - it could be an x86 process on an x64 OS. Before .NET 4, use IntPtr.Size == 8 instead. Call this code in a static constructor of some main class that is loaded before all else. public static class MultiplatformDllLoader { private static bool _isEnabled; public static bool Enable { get { return _isEnabled; } set { lock (typeof (MultiplatformDllLoader)) { if (_isEnabled != value) { if (value) AppDomain.CurrentDomain.AssemblyResolve += Resolver; else AppDomain.CurrentDomain.AssemblyResolve -= Resolver; _isEnabled = value; } } } } /// Will attempt to load a missing assembly from either the x86 or x64 subdir private static Assembly Resolver(object sender, ResolveEventArgs args) { string assemblyName = args.Name.Split(new[] {','}, 2)[0] + ".dll"; string archSpecificPath = Path.Combine(AppDomain.CurrentDomain.SetupInformation.ApplicationBase, Environment.Is64BitProcess ? "x64" : "x86", assemblyName); return File.Exists(archSpecificPath) ? Assembly.LoadFile(archSpecificPath) : null; } } A: You can use the corflags utility to force an AnyCPU exe to load as an x86 or x64 executable, but that doesn't totally meet the file copy deployment requirement unless you choose which exe to copy based on the target. A: This solution can work for non-managed assemblies as well. I have created a simple example similar to Milan Gardian's great example. The example I created dynamically loads a managed C++ dll into a C# dll compiled for the Any CPU platform. The solution makes use of the InjectModuleInitializer nuget package to subscribe to the AssemblyResolve event before the dependencies of the assembly are loaded. https://github.com/kevin-marshall/Managed.AnyCPU.git
{ "language": "en", "url": "https://stackoverflow.com/questions/108971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "61" }
Q: How to simulate memory allocation errors My C application uses 3rd-party libraries, which do their own memory management. In order to be robust, my application has code to deal with failures of library functions due to lack of free memory. I would like to test this code, and for this, I need to simulate failures due to lack of memory. What tool/s are recommended for this? My environment is Linux/gcc. A: On operating systems that overcommit memory (for example, Linux in its default configuration), it is simply not possible to handle out-of-memory errors. malloc may return a valid pointer and later, when you try to dereference it, your operating system may determine that you are out of memory and kill the process. http://www.reddit.com/comments/60vys/how_not_to_write_a_shared_library/ is a good write-up on this. A: You can write your own mock library with the same interface as your 3rd party library and use it instead. You can also use LD_PRELOAD to override selected functions of the 3rd party library. A: I can give a Linux (maybe POSIX) specific version: __malloc_hook, __realloc_hook, __free_hook. These are declared in malloc.h. EDIT: A little elaboration: these are function pointers (see malloc.h and their man-page for the exact declaration), but beware: these are not exactly standard, just GNU extensions. So if portability is an issue, don't use this. A little less platform-dependent solution might be to declare a malloc macro. If you're testing, this calls a hook as well as the real malloc. memhook.h: #define malloc(s) (my_malloc(s)) memhook.c: #include "memhook.h" #undef malloc #include <stdlib.h> etc. You can use this to detect leaks, randomly fail the allocation, etc. A: You can use ulimit to limit the amount of resources a user can use, including memory. So you create a test user, limit their memory use to something just enough to launch your program, and watch it die :) Example: ulimit -m 64 Sets a memory limit of 64kb. A: Have a look at the way sqlite3 does this.
They perform extensive unit testing, including out-of-memory testing. You may also want to look at their page on malloc, particularly Section 4.0. A: Create your own malloc wrapper which will randomly return null instead of a valid pointer. Or one which fails consistently, if you want to unit test. A: In addition, you should use Valgrind to test it all and get really useful reports about the memory behavior of your program A: You can set up a define in the header file to return NULL whenever malloc is used: Usually malloc will be protected the following way: int *x; if ((x = malloc(sizeof(int))) == NULL) { return NULL; } So you use a define to force a NULL return; pseudocode example: # define malloc(X) NULL And check if you get a segfault A: You want the ulimit command in bash. Try help ulimit at a bash shell prompt. A: (As a complement to some of the previous answers) Checkout "Electric Fence" for an example of a malloc-intercepting library that you can use with your executable (using the LD_PRELOAD trick, for instance). Once you've intercepted malloc, you can use whatever you want to trigger failures. A randomly triggered failure would be a good stress test for the various parts of the system. You could also modify the failure probability based on the amount of memory requested. Yours is an interesting idea, by the way, clearly something I'd like to do on some of my code... A: You might want to look at some of the recovery oriented computing sites, such as the Berkeley/Stanford ROC group. I've heard some of these folks talk before, and they use code to randomly inject errors in the C runtime. There's a link to their FIT tool at the bottom of their page.
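Combining the wrapper and deterministic-failure suggestions from the answers above, here is a minimal sketch of a failure-injecting allocator. The names xmalloc and fail_alloc_at are made up for this example; in a test build you would route the code under test through it, e.g. with a #define malloc(s) xmalloc(s) as described earlier.

```c
#include <stdlib.h>

/* Hypothetical test hook: fail the Nth allocation (1-based); 0 disables
   injection. This lets a unit test walk the "Nth malloc fails" space one
   call site at a time instead of waiting for a real OOM condition. */
static unsigned long alloc_count = 0;
static unsigned long fail_alloc_at = 0;

void *xmalloc(size_t size)
{
    ++alloc_count;
    if (fail_alloc_at != 0 && alloc_count == fail_alloc_at)
        return NULL;            /* simulated allocation failure */
    return malloc(size);        /* otherwise defer to the real allocator */
}
```

Running the same test repeatedly with fail_alloc_at = 1, 2, 3, ... exercises every allocation-failure path deterministically, which purely random injection cannot guarantee.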
{ "language": "en", "url": "https://stackoverflow.com/questions/109000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Count the number of set bits in a 32-bit integer 8 bits representing the number 7 look like this: 00000111 Three bits are set. What are the algorithms to determine the number of set bits in a 32-bit integer? A: This is known as the 'Hamming Weight', 'popcount' or 'sideways addition'. Some CPUs have a single built-in instruction to do it and others have parallel instructions which act on bit vectors. Instructions like x86's popcnt (on CPUs where it's supported) will almost certainly be fastest for a single integer. Some other architectures may have a slow instruction implemented with a microcoded loop that tests a bit per cycle (citation needed - hardware popcount is normally fast if it exists at all.). The 'best' algorithm really depends on which CPU you are on and what your usage pattern is. Your compiler may know how to do something that's good for the specific CPU you're compiling for, e.g. C++20 std::popcount(), or C++ std::bitset<32>::count(), as a portable way to access builtin / intrinsic functions (see another answer on this question). But your compiler's choice of fallback for target CPUs that don't have hardware popcnt might not be optimal for your use-case. Or your language (e.g. C) might not expose any portable function that could use a CPU-specific popcount when there is one. Portable algorithms that don't need (or benefit from) any HW support A pre-populated table lookup method can be very fast if your CPU has a large cache and you are doing lots of these operations in a tight loop. However it can suffer because of the expense of a 'cache miss', where the CPU has to fetch some of the table from main memory. (Look up each byte separately to keep the table small.) If you want popcount for a contiguous range of numbers, only the low byte is changing for groups of 256 numbers, making this very good. If you know that your bytes will be mostly 0's or mostly 1's then there are efficient algorithms for these scenarios, e.g. 
clearing the lowest set with a bithack in a loop until it becomes zero. I believe a very good general purpose algorithm is the following, known as 'parallel' or 'variable-precision SWAR algorithm'. I have expressed this in a C-like pseudo language, you may need to adjust it to work for a particular language (e.g. using uint32_t for C++ and >>> in Java): GCC10 and clang 10.0 can recognize this pattern / idiom and compile it to a hardware popcnt or equivalent instruction when available, giving you the best of both worlds. (https://godbolt.org/z/qGdh1dvKK) int numberOfSetBits(uint32_t i) { // Java: use int, and use >>> instead of >>. Or use Integer.bitCount() // C or C++: use uint32_t i = i - ((i >> 1) & 0x55555555); // add pairs of bits i = (i & 0x33333333) + ((i >> 2) & 0x33333333); // quads i = (i + (i >> 4)) & 0x0F0F0F0F; // groups of 8 return (i * 0x01010101) >> 24; // horizontal sum of bytes } For JavaScript: coerce to integer with |0 for performance: change the first line to i = (i|0) - ((i >> 1) & 0x55555555); This has the best worst-case behaviour of any of the algorithms discussed, so will efficiently deal with any usage pattern or values you throw at it. (Its performance is not data-dependent on normal CPUs where all integer operations including multiply are constant-time. It doesn't get any faster with "simple" inputs, but it's still pretty decent.) References: * *https://graphics.stanford.edu/~seander/bithacks.html *https://catonmat.net/low-level-bit-hacks for bithack basics, like how subtracting 1 flips contiguous zeros. *https://en.wikipedia.org/wiki/Hamming_weight *http://gurmeet.net/puzzles/fast-bit-counting-routines/ *http://aggregate.ee.engr.uky.edu/MAGIC/#Population%20Count%20(Ones%20Count) How this SWAR bithack works: i = i - ((i >> 1) & 0x55555555); The first step is an optimized version of masking to isolate the odd / even bits, shifting to line them up, and adding. 
This effectively does 16 separate additions in 2-bit accumulators (SWAR = SIMD Within A Register). Like (i & 0x55555555) + ((i>>1) & 0x55555555). The next step takes the odd/even eight of those 16x 2-bit accumulators and adds again, producing 8x 4-bit sums. The i - ... optimization isn't possible this time so it does just mask before / after shifting. Using the same 0x33... constant both times instead of 0xccc... before shifting is a good thing when compiling for ISAs that need to construct 32-bit constants in registers separately. The final shift-and-add step of (i + (i >> 4)) & 0x0F0F0F0F widens to 4x 8-bit accumulators. It masks after adding instead of before, because the maximum value in any 4-bit accumulator is 4, if all 4 bits of the corresponding input bits were set. 4+4 = 8 which still fits in 4 bits, so carry between nibble elements is impossible in i + (i >> 4). So far this is just fairly normal SIMD using SWAR techniques with a few clever optimizations. Continuing on with the same pattern for 2 more steps can widen to 2x 16-bit then 1x 32-bit counts. But there is a more efficient way on machines with fast hardware multiply: Once we have few enough "elements", a multiply with a magic constant can sum all the elements into the top element. In this case byte elements. Multiply is done by left-shifting and adding, so a multiply of x * 0x01010101 results in x + (x<<8) + (x<<16) + (x<<24). Our 8-bit elements are wide enough (and holding small enough counts) that this doesn't produce carry into that top 8 bits. A 64-bit version of this can do 8x 8-bit elements in a 64-bit integer with a 0x0101010101010101 multiplier, and extract the high byte with >>56. So it doesn't take any extra steps, just wider constants. This is what GCC uses for __builtin_popcountll on x86 systems when the hardware popcnt instruction isn't enabled. If you can use builtins or intrinsics for this, do so to give the compiler a chance to do target-specific optimizations. 
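As a concrete illustration of the 64-bit variant just described (the same steps with widened constants, a multiply by 0x0101010101010101, and >> 56 to pull the sum out of the top byte), a sketch in C:

```c
#include <stdint.h>

/* 64-bit SWAR popcount: the same three mask/shift/add steps as the 32-bit
   version, with widened constants, then a multiply that sums all eight
   byte counts into the top byte. */
int popcount64(uint64_t x)
{
    x = x - ((x >> 1) & 0x5555555555555555ULL);        /* pairs of bits */
    x = (x & 0x3333333333333333ULL)
      + ((x >> 2) & 0x3333333333333333ULL);            /* 4-bit groups  */
    x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0FULL;        /* 8-bit groups  */
    return (int)((x * 0x0101010101010101ULL) >> 56);   /* sum of bytes  */
}
```

In practice, with GCC/Clang you would just call __builtin_popcountll and let the compiler choose between this kind of fallback and a hardware popcnt.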
With full SIMD for wider vectors (e.g. counting a whole array) This bitwise-SWAR algorithm could parallelize to be done in multiple vector elements at once, instead of in a single integer register, for a speedup on CPUs with SIMD but no usable popcount instruction. (e.g. x86-64 code that has to run on any CPU, not just Nehalem or later.) However, the best way to use vector instructions for popcount is usually by using a variable-shuffle to do a table-lookup for 4 bits at a time of each byte in parallel. (The 4 bits index a 16 entry table held in a vector register). On Intel CPUs, the hardware 64bit popcnt instruction can outperform an SSSE3 PSHUFB bit-parallel implementation by about a factor of 2, but only if your compiler gets it just right. Otherwise SSE can come out significantly ahead. Newer compiler versions are aware of the popcnt false dependency problem on Intel. * *https://github.com/WojciechMula/sse-popcount state-of-the-art x86 SIMD popcount for SSSE3, AVX2, AVX512BW, AVX512VBMI, or AVX512 VPOPCNT. Using Harley-Seal across vectors to defer popcount within an element. (Also ARM NEON) *Counting 1 bits (population count) on large data using AVX-512 or AVX-2 *related: https://github.com/mklarqvist/positional-popcount - separate counts for each bit-position of multiple 8, 16, 32, or 64-bit integers. (Again, x86 SIMD including AVX-512 which is really good at this, with vpternlogd making Harley-Seal very good.) A: A few open questions: * *What if the number is negative? *If the number is 1024, then the "iteratively divide by 2" method will iterate 10 times.
We can modify the algo to support negative numbers as follows: count = 0 while n != 0 if ((n % 2) == 1 || (n % 2) == -1) count += 1 n /= 2 return count Now to overcome the second problem we can write the algo like: int bit_count(int num) { int count=0; while(num) { num=(num)&(num-1); count++; } return count; } For a complete reference see: http://goursaha.freeoda.com/Miscellaneous/IntegerBitCount.html A: I think the fastest way—without using lookup tables and popcount—is the following. It counts the set bits with just 12 operations. int popcount(int v) { v = v - ((v >> 1) & 0x55555555); // put count of each 2 bits into those 2 bits v = (v & 0x33333333) + ((v >> 2) & 0x33333333); // put count of each 4 bits into those 4 bits return ((v + (v >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; } It works because you can count the total number of set bits by dividing in two halves, counting the number of set bits in both halves and then adding them up. Also known as the divide and conquer paradigm. Let's get into detail.. v = v - ((v >> 1) & 0x55555555); The number of bits in two bits can be 0b00, 0b01 or 0b10. Let's try to work this out on 2 bits.. --------------------------------------------- | v | x = (v >> 1) & 0b0101 | v - x | --------------------------------------------- 0b00 0b00 0b00 0b01 0b00 0b01 0b10 0b01 0b01 0b11 0b01 0b10 This is what was required: the last column shows the count of set bits in every two bit pair. If the two bit number is >= 2 (0b10) then the AND produces 0b01, else it produces 0b00. v = (v & 0x33333333) + ((v >> 2) & 0x33333333); This statement should be easy to understand. After the first operation we have the count of set bits in every two bits, now we sum up that count in every 4 bits. v & 0b00110011 //masks out even two bits (v >> 2) & 0b00110011 // masks out odd two bits We then sum up the above result, giving us the total count of set bits in 4 bits. The last statement is the most tricky.
c = ((v + (v >> 4) & 0xF0F0F0F) * 0x1010101) >> 24; Let's break it down further... v + (v >> 4) It's similar to the second statement; this time we are summing the nibble counts into bytes. We know—because of our previous operations—that every nibble holds the count of set bits of its group. Let's look at an example. Suppose we have the byte 0b01000010. It means the first nibble holds the count 4 (4 bits were set in its group) and the second holds the count 2 (2 bits were set in its group). Now we add those nibbles together. v = 0b01000010 (v >> 4) = 0b00000100 v + (v >> 4) = 0b01000010 + 0b00000100 = 0b01000110 This gives us the count of set bits in a byte, in the lower nibble of 0b01000110, and therefore we mask off the upper four bits of every byte in the number (discarding them). 0b01000110 & 0x0F = 0b00000110 Now every byte has the count of set bits in it. We need to add them all up together. The trick is to multiply the result by 0x01010101 which has an interesting property. If our number has four bytes, A B C D, it will result in a new number with these bytes A+B+C+D B+C+D C+D D. A 4 byte number can have a maximum of 32 bits set, so the count fits in a byte (32 = 0b00100000). All we need now is the top byte, which has the sum of the set bits of all the bytes, and we get it by >> 24. This algorithm was designed for 32 bit words but can be easily modified for 64 bit words. A: I use the below code which is more intuitive. int countSetBits(int n) { return !n ? 0 : 1 + countSetBits(n & (n-1)); } Logic : n & (n-1) resets the last set bit of n. P.S : I know this is not an O(1) solution, albeit an interesting one. A: What do you mean by "best algorithm"? The shortest code or the fastest code? Your code looks very elegant and it has a constant execution time. The code is also very short. But if speed is the major factor and not the code size then I think the following can be faster: static final int[] BIT_COUNT = { 0, 1, 1, ... 256 values with a bitsize of a byte ...
}; static int bitCountOfByte( int value ){ return BIT_COUNT[ value & 0xFF ]; } static int bitCountOfInt( int value ){ return bitCountOfByte( value ) + bitCountOfByte( value >> 8 ) + bitCountOfByte( value >> 16 ) + bitCountOfByte( value >> 24 ); } I think this will not be faster for a 64 bit value, but a 32 bit value can be faster. A: I wrote a fast bitcount macro for RISC machines in about 1990. It does not use advanced arithmetic (multiplication, division, %), memory fetches (way too slow), branches (way too slow), but it does assume the CPU has a 32-bit barrel shifter (in other words, >> 1 and >> 32 take the same number of cycles.) It assumes that small constants (such as 6, 12, 24) cost nothing to load into the registers, or are stored in temporaries and reused over and over again. With these assumptions, it counts 32 bits in about 16 cycles/instructions on most RISC machines. Note that 15 instructions/cycles is close to a lower bound on the number of cycles or instructions, because it seems to take at least 3 instructions (mask, shift, operator) to cut the number of addends in half, so log_2(32) = 5, 5 x 3 = 15 instructions is a quasi-lowerbound. #define BitCount(X,Y) \ Y = X - ((X >> 1) & 033333333333) - ((X >> 2) & 011111111111); \ Y = ((Y + (Y >> 3)) & 030707070707); \ Y = (Y + (Y >> 6)); \ Y = (Y + (Y >> 12) + (Y >> 24)) & 077; Here is a secret to the first and most complex step: input output AB CD Note 00 00 = AB 01 01 = AB 10 01 = AB - (A >> 1) & 0x1 11 10 = AB - (A >> 1) & 0x1 so if I take the 1st column (A) above, shift it right 1 bit, and subtract it from AB, I get the output (CD). The extension to 3 bits is similar; you can check it with an 8-row boolean table like mine above if you wish. * *Don Gillies A: if you're using C++ another option is to use template metaprogramming: // recursive template to sum bits in an int template <int BITS> int countBits(int val) { // return the least significant bit plus the result of calling ourselves with // ..
the shifted value return (val & 0x1) + countBits<BITS-1>(val >> 1); } // template specialisation to terminate the recursion when there's only one bit left template<> int countBits<1>(int val) { return val & 0x1; } usage would be: // to count bits in a byte/char (this returns 8) countBits<8>( 255 ) // another byte (this returns 7) countBits<8>( 254 ) // counting bits in a word/short (this returns 1) countBits<16>( 256 ) you could of course further expand this template to use different types (even auto-detecting bit size) but I've kept it simple for clarity. edit: forgot to mention this is good because it should work in any C++ compiler and it basically just unrolls your loop for you if a constant value is used for the bit count (in other words, I'm pretty sure it's the fastest general method you'll find) A: You can do: while(n){ n = n & (n-1); count++; } The logic behind this is that the bits of n-1 are inverted starting from the rightmost set bit of n. If n=6, i.e. 110, then 5 is 101; the bits are inverted starting from the rightmost set bit of n. So if we & these two, we make the rightmost set bit 0 in every iteration and always move on to the next rightmost set bit. Hence, counting the set bits. The worst time complexity will be O(log n) when every bit is set. A: C++20 std::popcount The following proposal has been merged http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p0553r4.html and should add it to the <bit> header. I expect the usage to be like: #include <bit> #include <iostream> int main() { std::cout << std::popcount(0x55) << std::endl; } I'll give it a try when support arrives in GCC; GCC 9.1.0 with g++-9 -std=c++2a still doesn't support it. The proposal says: Header: <bit> namespace std { // 25.5.6, counting template<class T> constexpr int popcount(T x) noexcept; and: template<class T> constexpr int popcount(T x) noexcept; Constraints: T is an unsigned integer type (3.9.1 [basic.fundamental]). Returns: The number of 1 bits in the value of x.
std::rotl and std::rotr were also added to do circular bit rotations: Best practices for circular shift (rotate) operations in C++ A: If you happen to be using Java, the built-in method Integer.bitCount will do that. A: I'm particularly fond of this example from the fortune file: #define BITCOUNT(x) (((BX_(x)+(BX_(x)>>4)) & 0x0F0F0F0F) % 255) #define BX_(x) ((x) - (((x)>>1)&0x77777777) - (((x)>>2)&0x33333333) - (((x)>>3)&0x11111111)) I like it best because it's so pretty! A: Java JDK1.5 Integer.bitCount(n); where n is the number whose 1's are to be counted. check also, Integer.highestOneBit(n); Integer.lowestOneBit(n); Integer.numberOfLeadingZeros(n); Integer.numberOfTrailingZeros(n); //Beginning with the value 1, rotate left 16 times n = 1; for (int i = 0; i < 16; i++) { n = Integer.rotateLeft(n, 1); System.out.println(n); } A: A fast C# solution using a pre-calculated table of Byte bit counts with branching on the input size. public static class BitCount { public static uint GetSetBitsCount(uint n) { var counts = BYTE_BIT_COUNTS; return n <= 0xff ? counts[n] : n <= 0xffff ? counts[n & 0xff] + counts[n >> 8] : n <= 0xffffff ? 
counts[n & 0xff] + counts[(n >> 8) & 0xff] + counts[(n >> 16) & 0xff] : counts[n & 0xff] + counts[(n >> 8) & 0xff] + counts[(n >> 16) & 0xff] + counts[(n >> 24) & 0xff]; } public static readonly uint[] BYTE_BIT_COUNTS = { 0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 4, 5, 5, 6, 5, 6, 6, 7, 5, 6, 6, 7, 6, 7, 7, 8 }; } A: I found an implementation of bit counting in an array using SIMD instructions (SSSE3 and AVX2). It performs 2-2.5 times better than using the __popcnt64 intrinsic function.
SSSE3 version: #include <smmintrin.h> #include <stdint.h> const __m128i Z = _mm_set1_epi8(0x0); const __m128i F = _mm_set1_epi8(0xF); //Vector with pre-calculated bit count: const __m128i T = _mm_setr_epi8(0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4); uint64_t BitCount(const uint8_t * src, size_t size) { __m128i _sum = _mm_setzero_si128(); for (size_t i = 0; i < size; i += 16) { //load 16-byte vector __m128i _src = _mm_loadu_si128((__m128i*)(src + i)); //get low 4 bit for every byte in vector __m128i lo = _mm_and_si128(_src, F); //sum precalculated value from T _sum = _mm_add_epi64(_sum, _mm_sad_epu8(Z, _mm_shuffle_epi8(T, lo))); //get high 4 bit for every byte in vector __m128i hi = _mm_and_si128(_mm_srli_epi16(_src, 4), F); //sum precalculated value from T _sum = _mm_add_epi64(_sum, _mm_sad_epu8(Z, _mm_shuffle_epi8(T, hi))); } uint64_t sum[2]; _mm_storeu_si128((__m128i*)sum, _sum); return sum[0] + sum[1]; } AVX2 version: #include <immintrin.h> #include <stdint.h> const __m256i Z = _mm256_set1_epi8(0x0); const __m256i F = _mm256_set1_epi8(0xF); //Vector with pre-calculated bit count: const __m256i T = _mm256_setr_epi8(0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4, 0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4); uint64_t BitCount(const uint8_t * src, size_t size) { __m256i _sum = _mm256_setzero_si256(); for (size_t i = 0; i < size; i += 32) { //load 32-byte vector __m256i _src = _mm256_loadu_si256((__m256i*)(src + i)); //get low 4 bit for every byte in vector __m256i lo = _mm256_and_si256(_src, F); //sum precalculated value from T _sum = _mm256_add_epi64(_sum, _mm256_sad_epu8(Z, _mm256_shuffle_epi8(T, lo))); //get high 4 bit for every byte in vector __m256i hi = _mm256_and_si256(_mm256_srli_epi16(_src, 4), F); //sum precalculated value from T _sum = _mm256_add_epi64(_sum, _mm256_sad_epu8(Z, _mm256_shuffle_epi8(T, hi))); } uint64_t sum[4]; _mm256_storeu_si256((__m256i*)sum, _sum); return sum[0] + sum[1] + sum[2] + sum[3]; } A: I always use this in
competitive programming, and it's easy to write and is efficient: #include <bits/stdc++.h> using namespace std; int countOnes(int n) { bitset<32> b(n); return b.count(); } A: I got bored, and timed a billion iterations of three approaches. Compiler is gcc -O3. CPU is whatever they put in the 1st gen Macbook Pro. Fastest is the following, at 3.7 seconds: static unsigned char wordbits[65536] = { bitcounts of ints between 0 and 65535 }; static int popcount( unsigned int i ) { return( wordbits[i&0xFFFF] + wordbits[i>>16] ); } Second place goes to the same code but looking up 4 bytes instead of 2 halfwords. That took around 5.5 seconds. Third place goes to the bit-twiddling 'sideways addition' approach, which took 8.6 seconds. Fourth place goes to GCC's __builtin_popcount(), at a shameful 11 seconds. The counting one-bit-at-a-time approach was waaaay slower, and I got bored of waiting for it to complete. So if you care about performance above all else then use the first approach. If you care, but not enough to spend 64Kb of RAM on it, use the second approach. Otherwise use the readable (but slow) one-bit-at-a-time approach. It's hard to think of a situation where you'd want to use the bit-twiddling approach. Edit: Similar results here. A: Here is a portable module ( ANSI-C ) which can benchmark each of your algorithms on any architecture. Your CPU has 9 bit bytes? No problem :-) At the moment it implements 2 algorithms, the K&R algorithm and a byte wise lookup table. The lookup table is on average 3 times faster than the K&R algorithm. If someone can figure a way to make the "Hacker's Delight" algorithm portable feel free to add it in. #ifndef _BITCOUNT_H_ #define _BITCOUNT_H_ /* Return the Hamming Weight of val, i.e. the number of 'on' bits. */ int bitcount( unsigned int ); /* List of available bitcount algorithms. * onTheFly: Calculate the bitcount on demand. * * lookupTable: Uses a small lookup table to determine the bitcount.
This * method is on average 3 times as fast as onTheFly, but incurs a small * upfront cost to initialize the lookup table on the first call. * * strategyCount is just a placeholder. */ enum strategy { onTheFly, lookupTable, strategyCount }; /* String representations of the algorithm names */ extern const char *strategyNames[]; /* Choose which bitcount algorithm to use. */ void setStrategy( enum strategy ); #endif . #include <limits.h> #include "bitcount.h" /* The number of entries needed in the table is equal to the number of unique * values a char can represent which is always UCHAR_MAX + 1*/ static unsigned char _bitCountTable[UCHAR_MAX + 1]; static unsigned int _lookupTableInitialized = 0; static int _defaultBitCount( unsigned int val ) { int count; /* Starting with: * 1100 - 1 == 1011, 1100 & 1011 == 1000 * 1000 - 1 == 0111, 1000 & 0111 == 0000 */ for ( count = 0; val; ++count ) val &= val - 1; return count; } /* Looks up each byte of the integer in a lookup table. * * The first time the function is called it initializes the lookup table.
*/ static int _tableBitCount( unsigned int val ) { int bCount = 0; if ( !_lookupTableInitialized ) { unsigned int i; for ( i = 0; i != UCHAR_MAX + 1; ++i ) _bitCountTable[i] = ( unsigned char )_defaultBitCount( i ); _lookupTableInitialized = 1; } for ( ; val; val >>= CHAR_BIT ) bCount += _bitCountTable[val & UCHAR_MAX]; return bCount; } static int ( *_bitcount ) ( unsigned int ) = _defaultBitCount; const char *strategyNames[] = { "onTheFly", "lookupTable" }; void setStrategy( enum strategy s ) { switch ( s ) { case onTheFly: _bitcount = _defaultBitCount; break; case lookupTable: _bitcount = _tableBitCount; break; case strategyCount: break; } } /* Just a forwarding function which will call whichever version of the * algorithm has been selected by the client */ int bitcount( unsigned int val ) { return _bitcount( val ); } #ifdef _BITCOUNT_EXE_ #include <stdio.h> #include <stdlib.h> #include <time.h> /* Use the same sequence of pseudo-random numbers to benchmark each Hamming * Weight algorithm.
*/ void benchmark( int reps ) { clock_t start, stop; int i, j; static const int iterations = 1000000; for ( j = 0; j != strategyCount; ++j ) { setStrategy( j ); srand( 257 ); start = clock( ); for ( i = 0; i != reps * iterations; ++i ) bitcount( rand( ) ); stop = clock( ); printf ( "\n\t%d pseudo-random integers using %s: %f seconds\n\n", reps * iterations, strategyNames[j], ( double )( stop - start ) / CLOCKS_PER_SEC ); } } int main( void ) { int option; while ( 1 ) { printf( "Menu Options\n" "\t1.\tPrint the Hamming Weight of an Integer\n" "\t2.\tBenchmark Hamming Weight implementations\n" "\t3.\tExit ( or cntl-d )\n\n\t" ); if ( scanf( "%d", &option ) == EOF ) break; switch ( option ) { case 1: printf( "Please enter the integer: " ); if ( scanf( "%d", &option ) != EOF ) printf ( "The Hamming Weight of %d ( 0x%X ) is %d\n\n", option, option, bitcount( option ) ); break; case 2: printf ( "Please select number of reps ( in millions ): " ); if ( scanf( "%d", &option ) != EOF ) benchmark( option ); break; case 3: goto EXIT; break; default: printf( "Invalid option\n" ); } } EXIT: printf( "\n" ); return 0; } #endif A: There are many algorithms to count the set bits, but I think the best one is the fastest one!
You can see the details on this page: Bit Twiddling Hacks I suggest this one: Counting bits set in 14, 24, or 32-bit words using 64-bit instructions unsigned int v; // count the number of bits set in v unsigned int c; // c accumulates the total bits set in v // option 1, for at most 14-bit values in v: c = (v * 0x200040008001ULL & 0x111111111111111ULL) % 0xf; // option 2, for at most 24-bit values in v: c = ((v & 0xfff) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f; c += (((v & 0xfff000) >> 12) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f; // option 3, for at most 32-bit values in v: c = ((v & 0xfff) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f; c += (((v & 0xfff000) >> 12) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f; c += ((v >> 24) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f; This method requires a 64-bit CPU with fast modulus division to be efficient. The first option takes only 3 operations; the second option takes 10; and the third option takes 15. A: 32-bit or not? I just came up with this method in Java after reading "cracking the coding interview" 4th edition exercise 5.5 (chap 5: Bit Manipulation). If the least significant bit is 1 increment count, then right-shift the integer. public static int bitCount( int n){ int count = 0; for (int i=n; i!=0; i = i >> 1){ count += i & 1; } return count; } I think this one is more intuitive than the solutions with constant 0x33333333 no matter how fast they are. It depends on your definition of "best algorithm". A: Naive Solution Time Complexity is O(no.
of bits in n) int countSet(unsigned int n) { int res=0; while(n!=0){ res += (n&1); n >>= 1; // logical right shift, like C unsigned or Java >>> } return res; } Brian Kernighan's algorithm Time Complexity is O(no. of set bits in n) int countSet(unsigned int n) { int res=0; while(n != 0) { n = (n & (n-1)); res++; } return res; } Lookup table method for 32-bit number- In this method we break the 32-bit number into four 8-bit chunks Time Complexity is O(1) static unsigned char table[256]; /* the table size is 256, the number of values i&0xFF (8 bits) can have */ void initialize() //holds the number of set bits from 0 to 255 { table[0]=0; for(unsigned int i=1;i<256;i++) table[i]=(i&1)+table[i>>1]; } int countSet(unsigned int n) { // 0xff is hexadecimal representation of 8 set bits. int res=table[n & 0xff]; n=n>>8; res=res+ table[n & 0xff]; n=n>>8; res=res+ table[n & 0xff]; n=n>>8; res=res+ table[n & 0xff]; return res; } A: unsigned int count_bit(unsigned int x) { x = (x & 0x55555555) + ((x >> 1) & 0x55555555); x = (x & 0x33333333) + ((x >> 2) & 0x33333333); x = (x & 0x0F0F0F0F) + ((x >> 4) & 0x0F0F0F0F); x = (x & 0x00FF00FF) + ((x >> 8) & 0x00FF00FF); x = (x & 0x0000FFFF) + ((x >> 16)& 0x0000FFFF); return x; } Let me explain this algorithm. This algorithm is based on Divide and Conquer Algorithm. Suppose there is an 8-bit integer 213 (11010101 in binary), the algorithm works like this (each time it merges two neighboring blocks): +-------------------------------+ | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | <- x | 1 0 | 0 1 | 0 1 | 0 1 | <- first time merge | 0 0 1 1 | 0 0 1 0 | <- second time merge | 0 0 0 0 0 1 0 1 | <- third time ( answer = 00000101 = 5) +-------------------------------+ A: Why not iteratively divide by 2? count = 0 while n > 0 if (n % 2) == 1 count += 1 n /= 2 I agree that this isn't the fastest, but "best" is somewhat ambiguous.
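The divide-by-2 pseudocode above translates almost verbatim into a runnable sketch (Python used here purely for illustration; the original answer is language-agnostic):

```python
def count_bits_div2(n: int) -> int:
    """Count set bits by repeatedly testing the low bit and halving."""
    count = 0
    while n > 0:
        if n % 2 == 1:   # low bit is set
            count += 1
        n //= 2          # same as n >>= 1 for non-negative n
    return count

# Cross-check against a naive reference over a small range.
for x in range(1024):
    assert count_bits_div2(x) == bin(x).count("1")
```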
I'd argue though that "best" should have an element of clarity A: This is one of those questions where it helps to know your micro-architecture. I just timed two variants under gcc 4.3.3 compiled with -O3 using C++ inlines to eliminate function call overhead, one billion iterations, keeping the running sum of all counts to ensure the compiler doesn't remove anything important, using rdtsc for timing (clock cycle precise). inline int pop2(unsigned x, unsigned y) { x = x - ((x >> 1) & 0x55555555); y = y - ((y >> 1) & 0x55555555); x = (x & 0x33333333) + ((x >> 2) & 0x33333333); y = (y & 0x33333333) + ((y >> 2) & 0x33333333); x = (x + (x >> 4)) & 0x0F0F0F0F; y = (y + (y >> 4)) & 0x0F0F0F0F; x = x + (x >> 8); y = y + (y >> 8); x = x + (x >> 16); y = y + (y >> 16); return (x+y) & 0x000000FF; } The unmodified Hacker's Delight took 12.2 gigacycles. My parallel version (counting twice as many bits) runs in 13.0 gigacycles. 10.5s total elapsed for both together on a 2.4GHz Core Duo. 25 gigacycles = just over 10 seconds at this clock frequency, so I'm confident my timings are right. This has to do with instruction dependency chains, which are very bad for this algorithm. I could nearly double the speed again by using a pair of 64-bit registers. In fact, if I was clever and added x+y a little sooner I could shave off some shifts. The 64-bit version with some small tweaks would come out about even, but count twice as many bits again. With 128 bit SIMD registers, yet another factor of two, and the SSE instruction sets often have clever short-cuts, too. There's no reason for the code to be especially transparent. The interface is simple, the algorithm can be referenced on-line in many places, and it's amenable to comprehensive unit test. The programmer who stumbles upon it might even learn something. These bit operations are extremely natural at the machine level. OK, I decided to bench the tweaked 64-bit version. 
For this one sizeof(unsigned long) == 8 inline int pop2(unsigned long x, unsigned long y) { x = x - ((x >> 1) & 0x5555555555555555); y = y - ((y >> 1) & 0x5555555555555555); x = (x & 0x3333333333333333) + ((x >> 2) & 0x3333333333333333); y = (y & 0x3333333333333333) + ((y >> 2) & 0x3333333333333333); x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0F; y = (y + (y >> 4)) & 0x0F0F0F0F0F0F0F0F; x = x + y; x = x + (x >> 8); x = x + (x >> 16); x = x + (x >> 32); return x & 0xFF; } That looks about right (I'm not testing carefully, though). Now the timings come out at 10.70 gigacycles / 14.1 gigacycles. That latter number summed 128 billion bits and corresponds to 5.9s elapsed on this machine. The non-parallel version speeds up a tiny bit because I'm running in 64-bit mode and it likes 64-bit registers slightly better than 32-bit registers. Let's see if there's a bit more OOO pipelining to be had here. This was a bit more involved, so I actually tested a bit. Each term alone sums to 64, all combined sum to 256. inline int pop4(unsigned long x, unsigned long y, unsigned long u, unsigned long v) { enum { m1 = 0x5555555555555555, m2 = 0x3333333333333333, m3 = 0x0F0F0F0F0F0F0F0F, m4 = 0x000000FF000000FF }; x = x - ((x >> 1) & m1); y = y - ((y >> 1) & m1); u = u - ((u >> 1) & m1); v = v - ((v >> 1) & m1); x = (x & m2) + ((x >> 2) & m2); y = (y & m2) + ((y >> 2) & m2); u = (u & m2) + ((u >> 2) & m2); v = (v & m2) + ((v >> 2) & m2); x = x + y; u = u + v; x = (x & m3) + ((x >> 4) & m3); u = (u & m3) + ((u >> 4) & m3); x = x + u; x = x + (x >> 8); x = x + (x >> 16); x = x & m4; x = x + (x >> 32); return x & 0x000001FF; } I was excited for a moment, but it turns out gcc is playing inline tricks with -O3 even though I'm not using the inline keyword in some tests. When I let gcc play tricks, a billion calls to pop4() take 12.56 gigacycles, but I determined it was folding arguments as constant expressions. A more realistic number appears to be 19.6gc for another 30% speed-up.
My test loop now looks like this, making sure each argument is different enough to stop gcc from playing tricks. hitime b4 = rdtsc(); for (unsigned long i = 10L * 1000*1000*1000; i < 11L * 1000*1000*1000; ++i) sum += pop4 (i, i^1, ~i, i|1); hitime e4 = rdtsc(); 256 billion bits summed in 8.17s elapsed. Works out to 1.02s for 32 million bits as benchmarked in the 16-bit table lookup. Can't compare directly, because the other bench doesn't give a clock speed, but looks like I've slapped the snot out of the 64KB table edition, which is a tragic use of L1 cache in the first place. Update: decided to do the obvious and create pop6() by adding four more duplicated lines. Came out to 22.8gc, 384 billion bits summed in 9.5s elapsed. So there's another 20%. Now at 800ms for 32 billion bits. A: Python solution: def hammingWeight(n: int) -> int: sums = 0 while (n!=0): sums+=1 n = n &(n-1) return sums In the binary representation, the least significant 1-bit in n always corresponds to a 0-bit in n - 1. Therefore, ANDing the two numbers n and n - 1 always flips the least significant 1-bit in n to 0, and keeps all other bits the same. A: The Hacker's Delight bit-twiddling becomes so much clearer when you write out the bit patterns. unsigned int bitCount(unsigned int x) { x = ((x >> 1) & 0b01010101010101010101010101010101) + (x & 0b01010101010101010101010101010101); x = ((x >> 2) & 0b00110011001100110011001100110011) + (x & 0b00110011001100110011001100110011); x = ((x >> 4) & 0b00001111000011110000111100001111) + (x & 0b00001111000011110000111100001111); x = ((x >> 8) & 0b00000000111111110000000011111111) + (x & 0b00000000111111110000000011111111); x = ((x >> 16)& 0b00000000000000001111111111111111) + (x & 0b00000000000000001111111111111111); return x; } The first step adds the even bits to the odd bits, producing a sum of bits in each two.
The other steps add high-order chunks to low-order chunks, doubling the chunk size all the way up, until we have the final count taking up the entire int. A: Some languages portably expose the operation in a way that can use efficient hardware support if available, otherwise some library fallback that's hopefully decent. For example (from a table by language): * *C++ has std::bitset<>::count(), or C++20 std::popcount(T x) *Java has java.lang.Integer.bitCount() (also for Long or BigInteger) *C# has System.Numerics.BitOperations.PopCount() *Python has int.bit_count() (since 3.10) Not all compilers / libraries actually manage to use HW support when it's available, though. (Notably MSVC, even with options that make std::popcount inline as x86 popcnt, its std::bitset::count still always uses a lookup table. This will hopefully change in future versions.) Also consider the built-in functions of your compiler when the portable language doesn't have this basic bit operation. In GNU C for example: int __builtin_popcount (unsigned int x); int __builtin_popcountll (unsigned long long x); In the worst case (no single-instruction HW support) the compiler will generate a call to a function (which in current GCC uses a shift/and bit-hack like this answer, at least for x86). In the best case the compiler will emit a cpu instruction to do the job. (Just like a * or / operator - GCC will use a hardware multiply or divide instruction if available, otherwise will call a libgcc helper function.) Or even better, if the operand is a compile-time constant after inlining, it can do constant-propagation to get a compile-time-constant popcount result. The GCC builtins even work across multiple platforms. Popcount has almost become mainstream in the x86 architecture, so it makes sense to start using the builtin now so you can recompile to let it inline a hardware instruction when you compile with -mpopcnt or something that includes that (e.g. https://godbolt.org/z/Ma5e5a). 
Other architectures have had popcount for years, but in the x86 world there are still some ancient Core 2 and similar vintage AMD CPUs in use. On x86, you can tell the compiler that it can assume support for the popcnt instruction with -mpopcnt (also implied by -msse4.2). See GCC x86 options. -march=nehalem -mtune=skylake (or -march= whatever CPU you want your code to assume and to tune for) could be a good choice. Running the resulting binary on an older CPU will result in an illegal-instruction fault. To make binaries optimized for the machine you build them on, use -march=native (with gcc, clang, or ICC). MSVC provides an intrinsic for the x86 popcnt instruction, but unlike gcc it's really an intrinsic for the hardware instruction and requires hardware support. Using std::bitset<>::count() instead of a built-in In theory, any compiler that knows how to popcount efficiently for the target CPU should expose that functionality through ISO C++ std::bitset<>. In practice, you might be better off with the bit-hack AND/shift/ADD in some cases for some target CPUs. For target architectures where hardware popcount is an optional extension (like x86), not all compilers have a std::bitset that takes advantage of it when available. For example, MSVC has no way to enable popcnt support at compile time, and its std::bitset<>::count always uses a table lookup, even with /Ox /arch:AVX (which implies SSE4.2, which in turn implies the popcnt feature.) (Update: see below; that does get MSVC's C++20 std::popcount to use x86 popcnt, but still not its bitset<>::count. MSVC could fix that by updating their standard library headers to use std::popcount when available.) But at least you get something portable that works everywhere, and with gcc/clang with the right target options, you get hardware popcount for architectures that support it.
#include <bitset> #include <limits> #include <type_traits> template<typename T> //static inline // static if you want to compile with -mpopcnt in one compilation unit but not others typename std::enable_if<std::is_integral<T>::value, unsigned >::type popcount(T x) { static_assert(std::numeric_limits<T>::radix == 2, "non-binary type"); // sizeof(x)*CHAR_BIT constexpr int bitwidth = std::numeric_limits<T>::digits + std::numeric_limits<T>::is_signed; // std::bitset constructor was only unsigned long before C++11. Beware if porting to C++03 static_assert(bitwidth <= std::numeric_limits<unsigned long long>::digits, "arg too wide for std::bitset() constructor"); typedef typename std::make_unsigned<T>::type UT; // probably not needed, bitset width chops after sign-extension std::bitset<bitwidth> bs( static_cast<UT>(x) ); return bs.count(); } See asm from gcc, clang, icc, and MSVC on the Godbolt compiler explorer. x86-64 gcc -O3 -std=gnu++11 -mpopcnt emits this: unsigned test_short(short a) { return popcount(a); } movzx eax, di # note zero-extension, not sign-extension popcnt rax, rax ret unsigned test_int(int a) { return popcount(a); } mov eax, edi popcnt rax, rax # unnecessary 64-bit operand size ret unsigned test_u64(unsigned long long a) { return popcount(a); } xor eax, eax # gcc avoids false dependencies for Intel CPUs popcnt rax, rdi ret PowerPC64 gcc -O3 -std=gnu++11 emits (for the int arg version): rldicl 3,3,0,32 # zero-extend from 32 to 64-bit popcntd 3,3 # popcount blr This source isn't x86-specific or GNU-specific at all, but only compiles well with gcc/clang/icc, at least when targeting x86 (including x86-64). Also note that gcc's fallback for architectures without single-instruction popcount is a byte-at-a-time table lookup. This isn't wonderful for ARM, for example. 
C++20 has std::popcount(T) Current libstdc++ headers unfortunately define it with a special case if(x==0) return 0; at the start, which clang doesn't optimize away when compiling for x86: #include <bit> int bar(unsigned x) { return std::popcount(x); } clang 11.0.1 -O3 -std=gnu++20 -march=nehalem (https://godbolt.org/z/arMe5a) # clang 11 bar(unsigned int): # @bar(unsigned int) popcnt eax, edi cmove eax, edi # redundant: if popcnt result is 0, return the original 0 instead of the popcnt-generated 0... ret But GCC compiles nicely: # gcc 10 xor eax, eax # break false dependency on Intel SnB-family before Ice Lake. popcnt eax, edi ret Even MSVC does well with it, as long as you use -arch:AVX or later (and enable C++20 with -std:c++latest). https://godbolt.org/z/7K4Gef int bar(unsigned int) PROC ; bar, COMDAT popcnt eax, ecx ret 0 int bar(unsigned int) ENDP ; bar A: For a happy medium between a 2^32 lookup table and iterating through each bit individually: int bitcount(unsigned int num){ int count = 0; static int nibblebits[] = {0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4}; for(; num != 0; num >>= 4) count += nibblebits[num & 0x0f]; return count; } From http://ctips.pbwiki.com/CountBits A: In my opinion, the "best" solution is the one that can be read by another programmer (or the original programmer two years later) without copious comments. You may well want the fastest or cleverest solution which some have already provided but I prefer readability over cleverness any time. unsigned int bitCount (unsigned int value) { unsigned int count = 0; while (value > 0) { // until all bits are zero if ((value & 1) == 1) // check lower bit count++; value >>= 1; // shift bits, removing lower bit } return count; } If you want more speed (and assuming you document it well to help out your successors), you could use a table lookup: // Lookup table for fast calculation of bits set in 8-bit unsigned char.
static unsigned char oneBitsInUChar[] = { // 0 1 2 3 4 5 6 7 8 9 A B C D E F (<- n) // ===================================================== 0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4, // 0n 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, // 1n : : : 4, 5, 5, 6, 5, 6, 6, 7, 5, 6, 6, 7, 6, 7, 7, 8, // Fn }; // Function for fast calculation of bits set in 16-bit unsigned short. unsigned char oneBitsInUShort (unsigned short x) { return oneBitsInUChar [x >> 8] + oneBitsInUChar [x & 0xff]; } // Function for fast calculation of bits set in 32-bit unsigned int. unsigned char oneBitsInUInt (unsigned int x) { return oneBitsInUShort (x >> 16) + oneBitsInUShort (x & 0xffff); } These rely on specific data type sizes so they're not that portable. But, since many performance optimisations aren't portable anyway, that may not be an issue. If you want portability, I'd stick to the readable solution. A: Personally I use this: public static int myBitCount(long L){ int count = 0; while (L != 0) { count++; L ^= L & -L; } return count; } A: int countBits(int x) { int n = 0; if (x) do n++; while(x=x&(x-1)); return n; } Or also: int countBits(int x) { return (x)? 1+countBits(x&(x-1)): 0; } 7 1/2 years after my original answer, @PeterMortensen questioned if this was even valid C syntax. I posted a link to an online compiler showing that it is in fact perfectly valid syntax (code below). #include <stdio.h> int countBits(int x) { int n = 0; if (x) do n++; /* Totally Normal Valid code. */ while(x=x&(x-1)); /* Nothing to see here. */ return n; } int main(void) { printf("%d\n", countBits(25)); return 0; } Output: 3 If you want to re-write it for clarity, it would look like: if (x) { do { n++; } while(x=x&(x-1)); } But that seems excessive to my eye. However, I've also realized the function can be made shorter, but perhaps more cryptic, written as: int countBits(int x) { int n = 0; while (x) x=(n++,x&(x-1)); return n; } A: This can be done in O(k), where k is the number of bits set.
int NumberOfSetBits(int n) { int count = 0; while (n){ ++ count; n = (n - 1) & n; } return count; } A: It's not the fastest or best solution, but I ran into the same question along the way, and I started to think and think. Finally I realized that it can be done like this if you get the problem from the mathematical side, and draw a graph, then you find that it's a function which has some periodic part, and then you realize the difference between the periods... so here you go: unsigned int f(unsigned int x) { switch (x) { case 0: return 0; case 1: return 1; case 2: return 1; case 3: return 2; default: return f(x/4) + f(x%4); } } A: I think Brian Kernighan's method will be useful too... It goes through as many iterations as there are set bits. So if we have a 32-bit word with only the high bit set, then it will only go once through the loop. int countSetBits(unsigned int n) { // count the number of bits set in n unsigned int c; // c accumulates the total bits set in n for (c=0;n>0;n=n&(n-1)) c++; return c; } Published in 1988, the C Programming Language 2nd Ed. (by Brian W. Kernighan and Dennis M. Ritchie) mentions this in exercise 2-9. On April 19, 2006 Don Knuth pointed out to me that this method "was first published by Peter Wegner in CACM 3 (1960), 322. (Also discovered independently by Derrick Lehmer and published in 1964 in a book edited by Beckenbach.)" A: From Hacker's Delight, p. 66, Figure 5-2 int pop(unsigned x) { x = x - ((x >> 1) & 0x55555555); x = (x & 0x33333333) + ((x >> 2) & 0x33333333); x = (x + (x >> 4)) & 0x0F0F0F0F; x = x + (x >> 8); x = x + (x >> 16); return x & 0x0000003F; } Executes in ~20-ish instructions (arch dependent), no branching. Hacker's Delight is delightful! Highly recommended. A: The function you are looking for is often called the "sideways sum" or "population count" of a binary number. Knuth discusses it in pre-Fascicle 1A, pp11-12 (although there was a brief reference in Volume 2, 4.6.3-(7).)
The locus classicus is Peter Wegner's article "A Technique for Counting Ones in a Binary Computer", from the Communications of the ACM, Volume 3 (1960) Number 5, page 322. He gives two different algorithms there, one optimized for numbers expected to be "sparse" (i.e., have a small number of ones) and one for the opposite case. A: private int get_bits_set(int v) { int c; // 'c' accumulates the total bits set in 'v' for (c = 0; v>0; c++) { v &= v - 1; // Clear the least significant bit set } return c; } A: Here is a solution that has not been mentioned so far, using bitfields. The following program counts the set bits in an array of 100000000 16-bit integers using 4 different methods. Timing results are given in parentheses (on MacOSX, with gcc -O3): #include <stdio.h> #include <stdlib.h> #define LENGTH 100000000 typedef struct { unsigned char bit0 : 1; unsigned char bit1 : 1; unsigned char bit2 : 1; unsigned char bit3 : 1; unsigned char bit4 : 1; unsigned char bit5 : 1; unsigned char bit6 : 1; unsigned char bit7 : 1; } bits; unsigned char sum_bits(const unsigned char x) { const bits *b = (const bits*) &x; return b->bit0 + b->bit1 + b->bit2 + b->bit3 \ + b->bit4 + b->bit5 + b->bit6 + b->bit7; } int NumberOfSetBits(int i) { i = i - ((i >> 1) & 0x55555555); i = (i & 0x33333333) + ((i >> 2) & 0x33333333); return (((i + (i >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24; } #define out(s) \ printf("bits set: %lu\nbits counted: %lu\n", 8*LENGTH*sizeof(short)*3/4, s); int main(int argc, char **argv) { unsigned long i, s; unsigned short *x = malloc(LENGTH*sizeof(short)); unsigned char lut[65536], *p; unsigned short *ps; int *pi; /* set 3/4 of the bits */ for (i=0; i<LENGTH; ++i) x[i] = 0xFFF0; /* sum_bits (1.772s) */ for (i=LENGTH*sizeof(short), p=(unsigned char*) x, s=0; i--; s+=sum_bits(*p++)); out(s); /* NumberOfSetBits (0.404s) */ for (i=LENGTH*sizeof(short)/sizeof(int), pi=(int*)x, s=0; i--; s+=NumberOfSetBits(*pi++)); out(s); /* populate lookup table */ for (i=0, p=(unsigned 
char*) &i; i<sizeof(lut); ++i) lut[i] = sum_bits(p[0]) + sum_bits(p[1]); /* 256-bytes lookup table (0.317s) */ for (i=LENGTH*sizeof(short), p=(unsigned char*) x, s=0; i--; s+=lut[*p++]); out(s); /* 65536-bytes lookup table (0.250s) */ for (i=LENGTH, ps=x, s=0; i--; s+=lut[*ps++]); out(s); free(x); return 0; } While the bitfield version is very readable, the timing results show that it is over 4x slower than NumberOfSetBits(). The lookup-table based implementations are still quite a bit faster, in particular with a 65 kB table. A: int bitcount(unsigned int n) { int count=0; while(n) { count += n & 0x1u; n >>= 1; } return count; } Iterated 'count' runs in time proportional to the total number of bits. It simply loops through all the bits, terminating slightly earlier because of the while condition. Useful, if 1'S or the set bits are sparse and among the least significant bits. A: Another Hamming weight algorithm if you're on a BMI2 capable CPU: the_weight = __tzcnt_u64(~_pext_u64(data[i], data[i])); A: In Java 8 or 9 just invoke Integer.bitCount . A: You can use built in function named __builtin_popcount(). There is no__builtin_popcount in C++ but it is a built in function of GCC compiler. This function return the number of set bit in an integer. int __builtin_popcount (unsigned int x); Reference : Bit Twiddling Hacks A: From Python 3.10 onwards, you will be able to use the int.bit_count() function, but for the time being, you can define this function yourself. def bit_count(integer): return bin(integer).count("1") A: I am providing one more unmentioned algorithm, called Parallel, taken from here. The nice point about it that it is generic, meaning that the code is the same for bit sizes 8, 16, 32, 64, and 128. I checked the correctness of its values and timings on an amount of 2^26 numbers for bits sizes 8, 16, 32, and 64. See the timings below. This algorithm is a first code snippet. 
The other two are mentioned here just for reference, because I tested and compared to them. Algorithms are coded in C++, to be generic, but it can be easily adopted to old C. #include <type_traits> #include <cstdint> template <typename IntT> inline size_t PopCntParallel(IntT n) { // https://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel using T = std::make_unsigned_t<IntT>; T v = T(n); v = v - ((v >> 1) & (T)~(T)0/3); // temp v = (v & (T)~(T)0/15*3) + ((v >> 2) & (T)~(T)0/15*3); // temp v = (v + (v >> 4)) & (T)~(T)0/255*15; // temp return size_t((T)(v * ((T)~(T)0/255)) >> (sizeof(T) - 1) * 8); // count } Below are two algorithms that I compared with. One is the Kernighan simple method with a loop, taken from here. template <typename IntT> inline size_t PopCntKernighan(IntT n) { // http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetKernighan using T = std::make_unsigned_t<IntT>; T v = T(n); size_t c; for (c = 0; v; ++c) v &= v - 1; // Clear the least significant bit set return c; } Another one is using built-in __popcnt16()/__popcnt()/__popcnt64() MSVC's intrinsic (doc here). Or __builtin_popcount of CLang/GCC (doc here). 
This intrinsic should provide a very optimized version, possibly hardware: #ifdef _MSC_VER // https://learn.microsoft.com/en-us/cpp/intrinsics/popcnt16-popcnt-popcnt64?view=msvc-160 #include <intrin.h> #define popcnt16 __popcnt16 #define popcnt32 __popcnt #define popcnt64 __popcnt64 #else // https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html #define popcnt16 __builtin_popcount #define popcnt32 __builtin_popcount #define popcnt64 __builtin_popcountll #endif template <typename IntT> inline size_t PopCntBuiltin(IntT n) { using T = std::make_unsigned_t<IntT>; T v = T(n); if constexpr(sizeof(IntT) <= 2) return popcnt16(uint16_t(v)); else if constexpr(sizeof(IntT) <= 4) return popcnt32(uint32_t(v)); else if constexpr(sizeof(IntT) <= 8) return popcnt64(uint64_t(v)); else static_assert([]{ return false; }()); } Below are the timings, in nanoseconds per one number. All timings are done for 2^26 random numbers. Timings are compared for all three algorithms and all bit sizes among 8, 16, 32, and 64. In sum, all tests took 16 seconds on my machine. The high-resolution clock was used. 08 bit Builtin 8.2 ns 08 bit Parallel 8.2 ns 08 bit Kernighan 26.7 ns 16 bit Builtin 7.7 ns 16 bit Parallel 7.7 ns 16 bit Kernighan 39.7 ns 32 bit Builtin 7.0 ns 32 bit Parallel 7.0 ns 32 bit Kernighan 47.9 ns 64 bit Builtin 7.5 ns 64 bit Parallel 7.5 ns 64 bit Kernighan 59.4 ns 128 bit Builtin 7.8 ns 128 bit Parallel 13.8 ns 128 bit Kernighan 127.6 ns As one can see, the provided Parallel algorithm (first among three) is as good as MSVC's/CLang's intrinsic. For reference, below is full code that I used to test speed/time/correctness of all functions. As a bonus this code (unlike short code snippets above) also tests 128 bit size, but only under CLang/GCC (not MSVC), as they have unsigned __int128. Try it online! 
#include <type_traits> #include <cstdint> using std::size_t; #if defined(_MSC_VER) && !defined(__clang__) #define IS_MSVC 1 #else #define IS_MSVC 0 #endif #if IS_MSVC #define HAS128 false #else using int128_t = __int128; using uint128_t = unsigned __int128; #define HAS128 true #endif template <typename T> struct UnSignedT { using type = std::make_unsigned_t<T>; }; #if HAS128 template <> struct UnSignedT<int128_t> { using type = uint128_t; }; template <> struct UnSignedT<uint128_t> { using type = uint128_t; }; #endif template <typename T> using UnSigned = typename UnSignedT<T>::type; template <typename IntT> inline size_t PopCntParallel(IntT n) { // https://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel using T = UnSigned<IntT>; T v = T(n); v = v - ((v >> 1) & (T)~(T)0/3); // temp v = (v & (T)~(T)0/15*3) + ((v >> 2) & (T)~(T)0/15*3); // temp v = (v + (v >> 4)) & (T)~(T)0/255*15; // temp return size_t((T)(v * ((T)~(T)0/255)) >> (sizeof(T) - 1) * 8); // count } template <typename IntT> inline size_t PopCntKernighan(IntT n) { // http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetKernighan using T = UnSigned<IntT>; T v = T(n); size_t c; for (c = 0; v; ++c) v &= v - 1; // Clear the least significant bit set return c; } #if IS_MSVC // https://learn.microsoft.com/en-us/cpp/intrinsics/popcnt16-popcnt-popcnt64?view=msvc-160 #include <intrin.h> #define popcnt16 __popcnt16 #define popcnt32 __popcnt #define popcnt64 __popcnt64 #else // https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html #define popcnt16 __builtin_popcount #define popcnt32 __builtin_popcount #define popcnt64 __builtin_popcountll #endif #define popcnt128(x) (popcnt64(uint64_t(x)) + popcnt64(uint64_t(x >> 64))) template <typename IntT> inline size_t PopCntBuiltin(IntT n) { using T = UnSigned<IntT>; T v = T(n); if constexpr(sizeof(IntT) <= 2) return popcnt16(uint16_t(v)); else if constexpr(sizeof(IntT) <= 4) return popcnt32(uint32_t(v)); else if constexpr(sizeof(IntT) <= 8) return 
popcnt64(uint64_t(v)); else if constexpr(sizeof(IntT) <= 16) return popcnt128(uint128_t(v)); else static_assert([]{ return false; }()); } #include <random> #include <vector> #include <chrono> #include <string> #include <iostream> #include <iomanip> #include <map> inline double Time() { static auto const gtb = std::chrono::high_resolution_clock::now(); return std::chrono::duration_cast<std::chrono::duration<double>>( std::chrono::high_resolution_clock::now() - gtb).count(); } template <typename T, typename F> void Test(std::string const & name, F f) { std::mt19937_64 rng{123}; size_t constexpr bit_size = sizeof(T) * 8, ntests = 1 << 6, nnums = 1 << 14; std::vector<T> nums(nnums); for (size_t i = 0; i < nnums; ++i) nums[i] = T(rng() % ~T(0)); static std::map<size_t, size_t> times; double min_time = 1000; for (size_t i = 0; i < ntests; ++i) { double timer = Time(); size_t sum = 0; for (size_t j = 0; j < nnums; j += 4) sum += f(nums[j + 0]) + f(nums[j + 1]) + f(nums[j + 2]) + f(nums[j + 3]); auto volatile vsum = sum; min_time = std::min(min_time, (Time() - timer) / nnums); if (times.count(bit_size) && times.at(bit_size) != sum) std::cout << "Wrong bit cnt checksum!" << std::endl; times[bit_size] = sum; } std::cout << std::setw(2) << std::setfill('0') << bit_size << " bit " << name << " " << std::fixed << std::setprecision(1) << min_time * 1000000000 << " ns" << std::endl; } int main() { #define TEST(T) \ Test<T>("Builtin", PopCntBuiltin<T>); \ Test<T>("Parallel", PopCntParallel<T>); \ Test<T>("Kernighan", PopCntKernighan<T>); \ std::cout << std::endl; TEST(uint8_t); TEST(uint16_t); TEST(uint32_t); TEST(uint64_t); #if HAS128 TEST(uint128_t); #endif #undef TEST } A: A simple way which should work nicely for a small amount of bits it something like this (For 4 bits in this example): (i & 1) + (i & 2)/2 + (i & 4)/4 + (i & 8)/8 Would others recommend this for a small number of bits as a simple solution? A: Here is the sample code, which might be useful. 
private static final int[] bitCountArr = new int[]{0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7, 4, 5, 5, 6, 5, 6, 6, 7, 5, 6, 6, 7, 6, 7, 7, 8}; private static final int firstByteFF = 255; public static final int getCountOfSetBits(int value){ int count = 0; for(int i=0;i<4;i++){ if(value == 0) break; count += bitCountArr[value & firstByteFF]; value >>>= 8; } return count; } A: Here's something that works in PHP (all PHP intergers are 32 bit signed, thus 31 bit): function bits_population($nInteger) { $nPop=0; while($nInteger) { $nInteger^=(1<<(floor(1+log($nInteger)/log(2))-1)); $nPop++; } return $nPop; } A: #!/user/local/bin/perl $c=0x11BBBBAB; $count=0; $m=0x00000001; for($i=0;$i<32;$i++) { $f=$c & $m; if($f == 1) { $count++; } $c=$c >> 1; } printf("%d",$count); ive done it through a perl script. the number taken is $c=0x11BBBBAB B=3 1s A=2 1s so in total 1+1+3+3+3+2+3+3=19 A: Convert the integer to a binary string and count the ones. PHP solution: substr_count(decbin($integer), '1'); A: I have not seen this approach anywhere: int nbits(unsigned char v) { return ((((v - ((v >> 1) & 0x55)) * 0x1010101) & 0x30c00c03) * 0x10040041) >> 0x1c; } It works per byte, so it would have to be called four times for a 32-bit integer. 
It is derived from the sideways addition, but it uses two 32-bit multiplications to reduce the number of instructions to only seven. Most current C compilers will optimize this function using SIMD (SSE2) instructions when it is clear that the number of requests is a multiple of 4, and it becomes quite competitive. It is portable, can be defined as a macro or inline function and does not need data tables. This approach can be extended to work on 16 bits at a time, using 64-bit multiplications. However, it fails when all 16 bits are set, returning zero, so it can be used only when the 0xFFFF input value is not present. It is also slower due to the 64-bit operations and does not optimize well. A: A simple algorithm to count the number of set bits: int countbits(int n) { int count = 0; while(n != 0) { n = n & (n-1); count++; } return count; } Take the example of 11 (1011) and try manually running through the algorithm. It should help you a lot! A: def hammingWeight(n): count = 0 while n: if n&1: count += 1 n >>= 1 return count A: Here is the functional master race recursive solution, and it is by far the purest one (and can be used with any bit length!): template<typename T> int popcnt(T n) { if (n > 0) return (n & 1) + popcnt(n >> 1); return 0; } A: For Java, there is a java.util.BitSet. https://docs.oracle.com/javase/8/docs/api/java/util/BitSet.html cardinality(): Returns the number of bits set to true in this BitSet. The BitSet is memory efficient since it's stored as an array of longs. A: Kotlin pre 1.4 fun NumberOfSetBits(i: Int): Int { var i = i i -= (i ushr 1 and 0x55555555) i = (i and 0x33333333) + (i ushr 2 and 0x33333333) return (i + (i ushr 4) and 0x0F0F0F0F) * 0x01010101 ushr 24 } This is more or less a copy of the answer seen in the top answer, with the Java fixes applied, then converted using the converter in IntelliJ IDEA Community Edition. Kotlin 1.4 and beyond (as of 2021-05-05 - it could change in the future).
fun NumberOfSetBits(i: Int): Int { return i.countOneBits() } Under the hood it uses Integer.bitCount as seen here: @SinceKotlin("1.4") @WasExperimental(ExperimentalStdlibApi::class) @kotlin.internal.InlineOnly public actual inline fun Int.countOneBits(): Int = Integer.bitCount(this) A: For those who want it in C++11 for any unsigned integer type as a consexpr function (tacklelib/include/tacklelib/utility/math.hpp): #include <stdint.h> #include <limits> #include <type_traits> const constexpr uint32_t uint32_max = (std::numeric_limits<uint32_t>::max)(); namespace detail { template <typename T> inline constexpr T _count_bits_0(const T & v) { return v - ((v >> 1) & 0x55555555); } template <typename T> inline constexpr T _count_bits_1(const T & v) { return (v & 0x33333333) + ((v >> 2) & 0x33333333); } template <typename T> inline constexpr T _count_bits_2(const T & v) { return (v + (v >> 4)) & 0x0F0F0F0F; } template <typename T> inline constexpr T _count_bits_3(const T & v) { return v + (v >> 8); } template <typename T> inline constexpr T _count_bits_4(const T & v) { return v + (v >> 16); } template <typename T> inline constexpr T _count_bits_5(const T & v) { return v & 0x0000003F; } template <typename T, bool greater_than_uint32> struct _impl { static inline constexpr T _count_bits_with_shift(const T & v) { return detail::_count_bits_5( detail::_count_bits_4( detail::_count_bits_3( detail::_count_bits_2( detail::_count_bits_1( detail::_count_bits_0(v)))))) + count_bits(v >> 32); } }; template <typename T> struct _impl<T, false> { static inline constexpr T _count_bits_with_shift(const T & v) { return 0; } }; } template <typename T> inline constexpr T count_bits(const T & v) { static_assert(std::is_integral<T>::value, "type T must be an integer"); static_assert(!std::is_signed<T>::value, "type T must be not signed"); return uint32_max >= v ? 
detail::_count_bits_5( detail::_count_bits_4( detail::_count_bits_3( detail::_count_bits_2( detail::_count_bits_1( detail::_count_bits_0(v)))))) : detail::_impl<T, sizeof(uint32_t) < sizeof(v)>::_count_bits_with_shift(v); } Plus tests in google test library: #include <stdlib.h> #include <time.h> namespace { template <typename T> inline uint32_t _test_count_bits(const T & v) { uint32_t count = 0; T n = v; while (n > 0) { if (n % 2) { count += 1; } n /= 2; } return count; } } TEST(FunctionsTest, random_count_bits_uint32_100K) { srand(uint_t(time(NULL))); for (uint32_t i = 0; i < 100000; i++) { const uint32_t r = uint32_t(rand()) + (uint32_t(rand()) << 16); ASSERT_EQ(_test_count_bits(r), count_bits(r)); } } TEST(FunctionsTest, random_count_bits_uint64_100K) { srand(uint_t(time(NULL))); for (uint32_t i = 0; i < 100000; i++) { const uint64_t r = uint64_t(rand()) + (uint64_t(rand()) << 16) + (uint64_t(rand()) << 32) + (uint64_t(rand()) << 48); ASSERT_EQ(_test_count_bits(r), count_bits(r)); } } A: Counting set bits in binary representation (N): Pseudocode - * *set counter = 0. *repeat counting till N is not zero. *check last bit. if last bit = 1 , increment counter *Discard last digit of N. Now let's code this in C++ int countSetBits(unsigned int n){ int count = 0; while(n!=0){ count += n&1; n = n >>1; } return count; } Let's use this function. int main(){ int x = 5; cout<<countSetBits(x); return 0; } Output: 2 Because 5 has 2 bits set in binary representation (101). You can run the code here. A: I'll contribute to @Arty's answer __popcnt16()/__popcnt()/__popcnt64() MSVC's intrinsic (doc here) popcnt instruction, as noted in "Remarks" section, is available as part of SSE4 instruction set and there is a relatively high chance of it not being available. If you run code that uses these intrinsics on hardware that doesn't support the popcnt instruction, the results are unpredictable. 
So, you need to implement a check as per "Remarks" section: To determine hardware support for the popcnt instruction, call the __cpuid intrinsic with InfoType=0x00000001 and check bit 23 of CPUInfo[2] (ECX). This bit is 1 if the instruction is supported, and 0 otherwise. Here's how you do it: unsigned popcnt(const unsigned input) { struct cpuinfo_t { union { int regs[4]; struct { long eax, ebx, ecx, edx; }; }; cpuinfo_t() noexcept : regs() {} } cpuinfo; // EAX=1: Processor Info and Feature Bits __cpuid(cpuinfo.regs, 1); // ECX bit 23: popcnt if (_bittest(&cpuinfo.ecx, 23)) { return __popcnt(input); } // Choose any fallback implementation you like, there's already a ton of them unsigned num = input; num = (num & 0x55555555) + (num >> 1 & 0x55555555); num = (num & 0x33333333) + (num >> 2 & 0x33333333); num = (num & 0x0F0F0F0F) + (num >> 4 & 0x0F0F0F0F); num = (num & 0x00FF00FF) + (num >> 8 & 0x00FF00FF); num = (num & 0x0000FFFF) + (num >> 16 & 0x0000FFFF); return num; } A: public class BinaryCounter { private int N; public BinaryCounter(int N) { this.N = N; } public static void main(String[] args) { BinaryCounter counter=new BinaryCounter(7); System.out.println("Number of ones is "+ counter.count()); } public int count(){ if(N<=0) return 0; int counter=0; int K = 0; do{ K = biggestPowerOfTwoSmallerThan(N); N = N-K; counter++; }while (N != 0); return counter; } private int biggestPowerOfTwoSmallerThan(int N) { if(N==1) return 1; for(int i=0;i<N;i++){ if(Math.pow(2, i) > N){ int power = i-1; return (int) Math.pow(2, power); } } return 0; } } A: This will also work fine: int ans = 0; while(num) { ans += (num & 1); num = num >> 1; } return ans; A: I use the following function. I haven't checked benchmarks, but it works. 
int msb(int num) { int m = 0; for (int i = 16; i > 0; i = i>>1) { // debug(i, num, m); if(num>>i) { m += i; num>>=i; } } return m; } A: This is the implementation in Go: func CountBitSet(n int) int { count := 0 for n > 0 { count += n & 1 n >>= 1 } return count } A: // How about the following: public int CountBits(int value) { int count = 0; while (value > 0) { if ((value & 1) != 0) count++; value >>= 1; } return count; } A: You can do something like: int countSetBits(int n) { n=((n&0xAAAAAAAA)>>1) + (n&0x55555555); n=((n&0xCCCCCCCC)>>2) + (n&0x33333333); n=((n&0xF0F0F0F0)>>4) + (n&0x0F0F0F0F); n=((n&0xFF00FF00)>>8) + (n&0x00FF00FF); n=((n&0xFFFF0000)>>16) + (n&0x0000FFFF); return n; } int main() { int n=10; printf("Number of set bits: %d",countSetBits(n)); return 0; } See here: http://ideone.com/JhwcX The working can be explained as follows: First, all the even bits are shifted towards the right and added to the odd bits to count the number of bits in groups of two. Then we work in groups of two, then four, and so on. A: I am giving two algorithms to answer the question, package countSetBitsInAnInteger; import java.util.Scanner; public class UsingLoop { public static void main(String[] args) { Scanner in = new Scanner(System.in); try { System.out.println("Enter an integer number to check for set bits in it"); int n = in.nextInt(); System.out.println("Using a while loop, we get the number of set bits as: " + usingLoop(n)); System.out.println("Using Brian Kernighan's algorithm, we get the number of set bits as: " + usingBrianKernighan(n)); } finally { in.close(); } } private static int usingBrianKernighan(int n) { int count = 0; while(n > 0) { n &= (n-1); count++; } return count; } /* Analysis: Time complexity = O(lgn) Space complexity = O(1) */ private static int usingLoop(int n) { int count = 0; for(int i=0; i<32; i++) { if((n&(1 << i)) != 0) count++; } return count; } /* Analysis: Time Complexity = O(32) Space Complexity = O(1) */ } A: For JavaScript, you
can count the number of set bits on a 32-bit value using a lookup table (and this code can be easily translated to C). In addition, added 8-bit and 16-bit versions for completeness for people who find this through web search. const COUNT_BITS_TABLE = makeLookupTable() function makeLookupTable() { const table = new Uint8Array(256) for (let i = 0; i < 256; i++) { table[i] = (i & 1) + table[(i / 2) | 0]; } return table } function countOneBits32(n) { return COUNT_BITS_TABLE[n & 0xff] + COUNT_BITS_TABLE[(n >> 8) & 0xff] + COUNT_BITS_TABLE[(n >> 16) & 0xff] + COUNT_BITS_TABLE[(n >> 24) & 0xff]; } function countOneBits16(n) { return COUNT_BITS_TABLE[n & 0xff] + COUNT_BITS_TABLE[(n >> 8) & 0xff] } function countOneBits8(n) { return COUNT_BITS_TABLE[n & 0xff] } console.log('countOneBits32', countOneBits32(0b10101010000000001010101000000000)) console.log('countOneBits32', countOneBits32(0b10101011110000001010101000000000)) console.log('countOneBits16', countOneBits16(0b1010101000000000)) console.log('countOneBits8', countOneBits8(0b10000010))
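To tie the thread together, here is a small sketch (Python used purely for illustration; function names are my own, not from any specific answer) comparing two of the approaches discussed above — Kernighan's clear-the-lowest-bit loop and an 8-bit lookup table — which should agree with each other for any non-negative 32-bit value:

```python
# Kernighan: each iteration clears the lowest set bit, so the loop runs
# once per set bit -- fast when the input is sparse.
def popcount_kernighan(n):
    count = 0
    while n:
        n &= n - 1
        count += 1
    return count

# 256-entry lookup table: table[i] = table[i >> 1] + (i & 1) builds the
# per-byte counts incrementally, then a 32-bit value is summed byte by byte.
TABLE = [0] * 256
for i in range(1, 256):
    TABLE[i] = TABLE[i >> 1] + (i & 1)

def popcount_table32(n):
    return (TABLE[n & 0xFF] + TABLE[(n >> 8) & 0xFF]
            + TABLE[(n >> 16) & 0xFF] + TABLE[(n >> 24) & 0xFF])

# Cross-check both against a straightforward string count.
for x in (0, 1, 0b1011, 0xFFF0, 0xFFFFFFFF):
    assert popcount_kernighan(x) == popcount_table32(x) == bin(x).count("1")
```

The same table-building recurrence appears in the JavaScript answer above; only the surrounding syntax differs.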
{ "language": "en", "url": "https://stackoverflow.com/questions/109023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "994" }
Q: What is the best way to paginate results in PHP I need to display many pages of news in a site. Should I do the pagination in the database query using LIMIT, or with the PHP script after getting all the results? A: If you only ever want to work with a DBMS that supports LIMIT, then do the paging on the DBMS. If you want to support other DBMSs in the future, add a layer in between that handles the paging depending on the current DBMS. A: You can use some existing libraries to help you: Pear::Pager can help with the output, and to limit the database traffic to only what you need, you can use a wrapper provided in the examples that come with it. Here's a tutorial I just googled that has it all... A: Use LIMIT - you don't want to transfer masses of data from the database to the scripting engine if you can avoid it. A: Use LIMIT in SQL! Every time! Otherwise you're throwing around considerably more data than you need to, which makes your scripts unnecessarily slow, and will lead to scalability problems as the amount of data in your tables increases. LIMIT is your friend! A: In addition to using LIMIT, I'd suggest using an explicit WHERE clause to set the offset, and order the results on that column. For example: --- First page (showing first 50 records) SELECT * FROM people ORDER BY id LIMIT 50 --- Second page SELECT * FROM people WHERE id > 50 ORDER BY id LIMIT 50 This further limits the number of rows returned to those within the desired range. Using the WHERE approach (as opposed to a LIMIT clause with a separate offset, e.g. LIMIT 50,50) allows you to deal effectively with paging through records with other natural keys, e.g. alphabetically by name, or by date order. A: Personally, I would use the query to do it. Obviously, that can change if you're dealing with AJAX and such, but just doing a basic LIMIT in the query and outputting the results is simple and efficient.
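Whichever database is behind it, the page-to-offset arithmetic in the answers above is the same; a short illustrative sketch (Python for brevity; the table name "news" is a made-up placeholder) of how a 1-based page number maps to LIMIT/OFFSET:

```python
# Map a 1-based page number to the LIMIT/OFFSET pair and build the SQL text.
# "news" is a hypothetical table name used only for illustration.
def page_query(page, page_size):
    if page < 1 or page_size < 1:
        raise ValueError("page and page_size must be >= 1")
    offset = (page - 1) * page_size
    return f"SELECT * FROM news ORDER BY id LIMIT {page_size} OFFSET {offset}"

assert page_query(1, 50).endswith("LIMIT 50 OFFSET 0")
assert page_query(3, 25).endswith("LIMIT 25 OFFSET 50")
```

In real code the two numbers should be bound as query parameters rather than interpolated into the string; they are inlined here only to show the arithmetic.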
{ "language": "en", "url": "https://stackoverflow.com/questions/109027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: VB.NET - RichTextBox - Apply formatting to selected text I have a RichTextBox control on my form. I also have a button labeled Bold. I want the selected text to turn bold when someone selects text in the RichTextBox and then presses the button. Any way to do that? Simple, everyday task for end users. Thanks. A: You'll want to use the .SelectionFont property of the RichTextBox and assign it a Font object with the desired styles. Example - this code would be in the event handler for the button: Dim bfont As New Font(RichTextBoxFoo.Font, FontStyle.Bold) RichTextBoxFoo.SelectionFont = bfont A: A variation on the above that takes into consideration switching bold on/off depending on the currently selected text's font info: With Me.rtbDoc If .SelectionFont IsNot Nothing Then Dim currentFont As System.Drawing.Font = .SelectionFont Dim newFontStyle As System.Drawing.FontStyle If .SelectionFont.Bold = True Then newFontStyle = currentFont.Style - Drawing.FontStyle.Bold Else newFontStyle = currentFont.Style + Drawing.FontStyle.Bold End If .SelectionFont = New Drawing.Font(currentFont.FontFamily, currentFont.Size, newFontStyle) End If End With It may need to be cleaned up a bit; I pulled this from an older project.
{ "language": "en", "url": "https://stackoverflow.com/questions/109032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: ASP.NET MVC: Making routes/URLs IIS6 and IIS7-friendly I have an ASP.NET MVC application which I want to be deployable on both IIS6 and IIS7 and as we all know, IIS6 needs the ".mvc"-naming in the URL. Will this code work to make sure it works on all IIS versions? Without having to make special adjustments in code, global.asax or config files for the different IIS versions. bool usingIntegratedPipeline = HttpRuntime.UsingIntegratedPipeline; routes.MapRoute( "Default", usingIntegratedPipeline ? "{controller}/{action}/{id}" : "{controller}.mvc/{action}/{id}", new { controller = "Home", action = "Index", id = "" } ); Update: Forgot to mention. No ISAPI. Hosted website, no control over the IIS server. A: That should fix the .mvc problem since the integrated pipeline is strictly an IIS7 feature. But remember to change settings on the IIS7 website to use "2.0 Integrated Pipeline", otherwise it will return false as well. Also, of course, set up the mapping of .mvc to the ASP.NET ISAPI DLL, but I'm guessing that you already know this. Some small suggestions on other things you might need to remember when deploying MVC applications on IIS6 that I found useful: http://msmvps.com/blogs/omar/archive/2008/06/30/deploy-asp-net-mvc-on-iis-6-solve-404-compression-and-performance-problems.aspx A: You can use an ISAPI filter to rewrite URLs which will allow you to have the nice URLs while still on IIS 6. Look, for example, here
{ "language": "en", "url": "https://stackoverflow.com/questions/109044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Countdown timer on ASP.NET page Could you recommend a way to place a countdown timer on an ASP.NET page? Now I use this code: Default.aspx <asp:ScriptManager ID="ScriptManager1" runat="server"> </asp:ScriptManager> <asp:UpdatePanel ID="UpdatePanel1" runat="server"> <ContentTemplate> <asp:Label ID="Label1" runat="server">60</asp:Label> <asp:Timer ID="Timer1" runat="server" Interval="1000" ontick="Timer1_Tick"> </asp:Timer> </ContentTemplate> </asp:UpdatePanel> Default.aspx.cs protected void Timer1_Tick(object sender, EventArgs e) { int seconds = int.Parse(Label1.Text); if (seconds > 0) Label1.Text = (seconds - 1).ToString(); else Timer1.Enabled = false; } But it is expensive in terms of traffic. I would prefer a pure client-side method. Is it possible in ASP.NET? A: OK, finally I ended up with <span id="timerLabel" runat="server"></span> <script type="text/javascript"> function countdown() { seconds = document.getElementById("timerLabel").innerHTML; if (seconds > 0) { document.getElementById("timerLabel").innerHTML = seconds - 1; setTimeout("countdown()", 1000); } } setTimeout("countdown()", 1000); </script> Really simple. Like good old plain HTML with JavaScript.
A: time1 = (DateTime)ViewState["time"] - DateTime.Now; if (time1.TotalSeconds <= 0) { Label1.Text = Label2.Text = "TimeOut!"; } else { if (time1.TotalMinutes > 59) { Label1.Text = Label2.Text = string.Format("{0}:{1:D2}:{2:D2}", time1.Hours, time1.Minutes, time1.Seconds); } else { Label1.Text = Label2.Text = string.Format("{0:D2}:{1:D2}", time1.Minutes, time1.Seconds); } } A: You might add something like this in your .aspx page <form name="counter"><input type="text" size="8" name="d2"></form> <script> <!-- // var milisec=0 var seconds=30 document.counter.d2.value='30' function display(){ if (milisec<=0){ milisec=9 seconds-=1 } if (seconds<=-1){ milisec=0 seconds+=1 } else milisec-=1 document.counter.d2.value=seconds+"."+milisec setTimeout("display()",100) } display() --> </script> Found here A: <script type="text/javascript"> var sec = 10; var min = 0 var hour = 0; var t; function display() { sec -= 1 if ((sec == 0) && (min == 0) && (hour == 0)) { //if a popup window is used: setTimeout("self.close()", 1000); return; } if (sec < 0) { sec = 59; min -= 1; } if (min < 0) { min = 59; hour -= 1; } else document.getElementById("<%=TextBox1.ClientID%>").value = hour + ":" + min + ":" + sec; t = setTimeout("display()", 1000); } window.onload = display; </script> A: use this javascript code---- var sec=0 ; var min=0; var hour=0; var t; function display(){ if (sec<=0){ sec+=1; } if(sec==60) { sec=0; min+=1; } if(min==60){ hour+=1; min=0; } if (min<=-1){ sec=0; min+=1; } else sec+=1 ; document.getElementById("<%=TextBox1.ClientID%>").value=hour+":"+min+":"+sec; t=setTimeout("display()",1000); } window.onload=display;
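The formatting logic the snippets above share — split a remaining-seconds total into hours, minutes and seconds, and only show the hours part when needed — is easy to get wrong with manual borrow handling, as the JavaScript answers show. A sketch of just that arithmetic (Python for illustration only; the behavior mirrors the C# TimeSpan snippet above, and the "TimeOut!" text is taken from it):

```python
def format_remaining(total_seconds):
    # Mirrors the C# snippet: H:MM:SS once the total exceeds 59 minutes,
    # otherwise MM:SS; non-positive remainders mean the countdown is over.
    if total_seconds <= 0:
        return "TimeOut!"
    hours, rem = divmod(total_seconds, 3600)
    minutes, seconds = divmod(rem, 60)
    if hours > 0:
        return f"{hours}:{minutes:02d}:{seconds:02d}"
    return f"{minutes:02d}:{seconds:02d}"

assert format_remaining(0) == "TimeOut!"
assert format_remaining(65) == "01:05"
assert format_remaining(3723) == "1:02:03"
```

Using divmod (or integer division and modulo in JavaScript) avoids the decrement-and-borrow bookkeeping in the hand-rolled counters above.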
{ "language": "en", "url": "https://stackoverflow.com/questions/109064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I use FlashVars with ActionScript 3.0? I found this guide for using the flash parameters, thought it might be useful to post here, since Flash CS3 lacks a usage example for reading these parameters. See answers for the link A: Not sure why his example calls LoaderInfo. The DisplayObject class has its own (readonly) loaderinfo property. As long as your main class extends a DisplayObject, you can call the property directly package { import flash.display.Sprite; public class Main extends Sprite { public function Main() { var test1:String = ''; if (this.loaderInfo.parameters.test1 !== undefined) { test1 = this.loaderInfo.parameters.test1; } } } } From the doc: Returns a LoaderInfo object containing information about loading the file to which this display object belongs. The loaderInfo property is defined only for the root display object of a SWF file or for a loaded Bitmap (not for a Bitmap that is drawn with ActionScript). To find the loaderInfo object associated with the SWF file that contains a display object named myDisplayObject, use myDisplayObject.root.loaderInfo. A: var paramObj:Object = LoaderInfo(this.root.loaderInfo).parameters; The entire article is at: http://blogs.adobe.com/pdehaan/2006/07/using_flashvars_with_actionscr.html Important note! This will only work in the main class. If you'll try to load the parameters in a subclass you'll get nothing.
{ "language": "en", "url": "https://stackoverflow.com/questions/109066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is there a .NET OS abstraction layer to make OS calls work cross-platform? I really want to write .NET apps that run on any platform (PC, Linux and Mac). I am not really concerned about UI capabilities because these are mostly background services. I have heard of MONO and that it allows you to write .NET apps that run on Mac and Linux, but I want to be able to write a single app that when compiled for Windows will run as a Service, and when compiled for Linux will run as whatever the UNIX equivalent is. I also would like to be able to store things in the registry and have that work. Is there any way to write truly OS agnostic code like this? ...and DON'T say I should make it run on the web! :) A: Yes, Mono has the windows service stuff ported to Linux, but you are going to have to think of a better way to store configuration settings than Registry... Using XML files for instance would be cross platform. You should also check out mono's wiki on how to develop portable applications here. A: Short answer: no. You could create an application which will run on both Windows and Linux. But there are platform-specific features and right now Mono could not automatically 'translate' those for you. A Windows Service is a good example of that.
{ "language": "en", "url": "https://stackoverflow.com/questions/109070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Testing all classes which implement an interface in Java Is there anything out there (for Java specifically) that allow you to automatically test the behavior of an interface? As an example, let's say I have a bunch of tests for the Comparable interface, that should apply to anything that implements Comparable. What I'd like is to be able to include "ComparableTests" automatically in the test fixtures for any of my classes which implement Comparable. Bonus points if this would work with generic interfaces. I know the .NET framework mbUnit has something similar, and when you're using something like TestNG's generator functions you could set up a test fixture for Comparable and have the generator create an instance of each of your classes that implement Comparable. But I'd rather have it be automatic, and located at the test fixture for each of my classes (since I'll already have them around for testing other parts of that class). Clarification: I could definitely build something like this. I was asking if there was anything out there that already enabled this. A: Based on your last paragraph, what you're trying to do is inject some 'extra methods' into unit testing since you're already testing a specific class. I do not know of a testing harness that allows you to attach tests based on the hierarchy of a class. However, with your own suggestion of using TestNG for building something similar, I think you might be very close. You could very well incorporate some base code that adds your class to a list of 'default test classes', which are in turn tested if they implement a specific interface. Still, regarding the general case, I think you're out of luck, since the Java type system is one-way, you can only (easily) find out what interfaces a class implements, not the other way around. 
Furthermore, the problem is 'where to stop looking': if you have a test that checks all your comparable implementers, do you want it to check the validity of String's implementation too, since that is in your Java environment? A: Try this http://www.xmlizer.biz/java/classloader/ClassList.java A: In .NET it would be pretty simple to set up a method that looks through an assembly and identifies each class's inheritance/implementation hierarchy. I'm sure you could do it in Java, too, if you research the Java reflection API. You could then create an array of ITargetInterfaces and call a test method on each one. A: One way would be to search through the jar file for all the .class files (or search through the classes directory), use the Class.forName() method to load the class file and check MyInterface.class.isAssignableFrom(myClass). This wouldn't deal easily with public inner static classes (you could parse the class file name), but would never work with private inner classes or anonymous inner classes.
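The question asks about Java, but the "shared tests applied to every implementer" pattern the TestNG suggestion hints at is easy to show in miniature. Here is a hedged sketch using Python's unittest: a mixin holds the Comparable-style tests, and each implementer's fixture pulls them in. The factory() method is a name invented for this sketch, not part of any framework.

```python
import unittest

class ComparableTests:
    """Shared tests for any type with a Comparable-style ordering.

    Each concrete fixture supplies factory(), returning two
    instances (a, b) with a < b.
    """
    def test_less_than(self):
        a, b = self.factory()
        self.assertTrue(a < b)
        self.assertFalse(b < a)

    def test_not_less_than_itself(self):
        a, _ = self.factory()
        self.assertFalse(a < a)

# Each implementer's test fixture just mixes the shared tests in,
# alongside whatever class-specific tests it already has.
class IntComparableTests(ComparableTests, unittest.TestCase):
    def factory(self):
        return 1, 2

class StrComparableTests(ComparableTests, unittest.TestCase):
    def factory(self):
        return "apple", "banana"
```

This mirrors the mbUnit-style approach mentioned in the question; it is explicit inclusion, not the automatic discovery the asker hoped for, but it keeps the shared tests in one place.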
{ "language": "en", "url": "https://stackoverflow.com/questions/109072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Does anyone know of any problems with using WCF to expose a SOAP interface for non .NET clients? Does anyone know of any problems with using WCF to expose a SOAP interface for non .NET clients? For example, incompatibilities with other SOAP libraries? This is so that the SOAP interface can be exposed for third parties to integrate with our software. A: Some of the problem areas I've encountered with WCF: * *It generates WSDL that is split across multiple URLs. That is, one part of the schema is at one URL, another is at a different URL, etc. The "main" WSDL URL (the one with just "?WSDL" after the service name) references the others via xsd:import elements. Many SOAP clients (eg pre-.NET Delphi) have enormous difficulty with this idiom. So you really have to "flatten" your WSDL in order to achieve interoperability in practice. One solution is given here. *WCF doesn't generate XML namespaces the same way as, say, ASMX web services. WCF has a tendency to place any service or data contract into a namespace of its own choosing. Again, some SOAP clients have difficulty with this. You can increase your interoperability level by adding an explicit namespace to your ServiceContract and DataContract attributes. *Many SOAP clients won't handle faults as nicely as WCF clients. For example, the proxy generation code won't create client-side objects for the faults declared in the WSDL. The faults will still be transmitted to the client, of course, but the client then has to do more work to figure out what kind of fault it was. A: Versions of the WS-* standards stack can also be an interoperability issue - for example, the version of WS-Addressing (2003) supported by some Java implementations (e.g. Oracle BPEL) is not supported by WCF, which supports the later draft and 1.0 versions but not the earlier 2003 one. A: Generally everything works fine. It will obviously depend on the client you're using - not everyone implements SOAP properly. P.S.
Could you please rephrase your question if you hope for more specific answer?
{ "language": "en", "url": "https://stackoverflow.com/questions/109083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Stop setInterval call in JavaScript I am using setInterval(fname, 10000); to call a function every 10 seconds in JavaScript. Is it possible to stop calling it on some event? I want the user to be able to stop the repeated refresh of data. A: You can set a new variable and have it incremented by ++ (count up one) every time it runs, then use a conditional statement to end it: var intervalId = null; var varCounter = 0; var varName = function(){ if(varCounter <= 10) { varCounter++; /* your code goes here */ } else { clearInterval(intervalId); } }; $(document).ready(function(){ intervalId = setInterval(varName, 10000); }); I hope that it helps and it is right. A: setInterval() returns an interval ID, which you can pass to clearInterval(): var refreshIntervalId = setInterval(fname, 10000); /* later */ clearInterval(refreshIntervalId); See the docs for setInterval() and clearInterval(). A: Already answered... But if you need a featured, re-usable timer that also supports multiple tasks on different intervals, you can use my TaskTimer (for Node and browser). // Timer with 1000ms (1 second) base interval resolution. const timer = new TaskTimer(1000); // Add task(s) based on tick intervals. timer.add({ id: 'job1', // unique id of the task tickInterval: 5, // run every 5 ticks (5 x interval = 5000 ms) totalRuns: 10, // run 10 times only. (omit for unlimited times) callback(task) { // code to be executed on each run console.log(task.name + ' task has run ' + task.currentRuns + ' times.'); // stop the timer anytime you like if (someCondition()) timer.stop(); // or simply remove this task if you have others if (someCondition()) timer.remove(task.id); } }); // Start the timer timer.start(); In your case, when the user clicks to stop the repeated data refresh, you can call timer.pause(), then timer.resume() if they need to re-enable it. See more here. A: If you set the return value of setInterval to a variable, you can use clearInterval to stop it.
var myTimer = setInterval(...); clearInterval(myTimer); A: In Node.js you can use the "this" special keyword within the setInterval function. You can use this keyword to clearInterval, and here is an example: setInterval( function clear() { clearInterval(this); return clear; }() , 1000) When you print the value of this special keyword within the function you get a Timeout object Timeout {...} A: The Trick setInterval returns a number: Solution Take this number. Pass it to the function clearInterval and you're safe: Code: Always store the returned number of setInterval in a variable, so that you can stop the interval later on: const intervalID = setInterval(f, 1000); // Some code clearInterval(intervalID); (Think of this number as the ID of a setInterval. Even if you have called many setInterval, you can still stop any one of them by using the proper ID.) A: Why not use a simpler approach? Add a class! Simply add a class that tells the interval not to do anything. For example: on hover. var i = 0; this.setInterval(function() { if(!$('#counter').hasClass('pauseInterval')) { //only run if it hasn't got this class 'pauseInterval' console.log('Counting...'); $('#counter').html(i++); //just for explaining and showing } else { console.log('Stopped counting'); } }, 500); /* In this example, I'm adding a class on mouseover and removing it again on mouseleave.
You can of course do pretty much whatever you like */ $('#counter').hover(function() { //mouse enter $(this).addClass('pauseInterval'); },function() { //mouse leave $(this).removeClass('pauseInterval'); } ); /* Other example */ $('#pauseInterval').click(function() { $('#counter').toggleClass('pauseInterval'); }); body { background-color: #eee; font-family: Calibri, Arial, sans-serif; } #counter { width: 50%; background: #ddd; border: 2px solid #009afd; border-radius: 5px; padding: 5px; text-align: center; transition: .3s; margin: 0 auto; } #counter.pauseInterval { border-color: red; } <!-- you'll need jQuery for this. If you really want a vanilla version, ask --> <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <p id="counter">&nbsp;</p> <button id="pauseInterval">Pause</button></p> I've been looking for this fast and easy approach for ages, so I'm posting several versions to introduce as many people to it as possible.
{ "language": "en", "url": "https://stackoverflow.com/questions/109086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1710" }
Q: How to get instance variables in Python? Is there a built-in method in Python to get an array of all a class' instance variables? For example, if I have this code: class hi: def __init__(self): self.ii = "foo" self.kk = "bar" Is there a way for me to do this: >>> mystery_method(hi) ["ii", "kk"] Edit: I originally had asked for class variables erroneously. A: Your example shows "instance variables", not really class variables. Look in hi_obj.__class__.__dict__.items() for the class variables, along with other class members like member functions and the containing module. class Hi( object ): class_var = ( 23, 'skidoo' ) # class variable def __init__( self ): self.ii = "foo" # instance variable self.jj = "bar" Class variables are shared by all instances of the class. A: Suggest >>> print vars.__doc__ vars([object]) -> dictionary Without arguments, equivalent to locals(). With an argument, equivalent to object.__dict__. In other words, it essentially just wraps __dict__ A: Although not directly an answer to the OP's question, there is a pretty sweet way of finding out what variables are in scope in a function. Take a look at this code: >>> def f(x, y): z = x**2 + y**2 sqrt_z = z**.5 return sqrt_z >>> f.func_code.co_varnames ('x', 'y', 'z', 'sqrt_z') >>> The func_code attribute has all kinds of interesting things in it. It allows you to do some cool stuff.
Here is an example of how I have used this: def exec_command(self, cmd, msg, sig): def message(msg): a = self.link.process(self.link.recieved_message(msg)) self.exec_command(*a) def error(msg): self.printer.printInfo(msg) def set_usrlist(msg): self.client.connected_users = msg def chatmessage(msg): self.printer.printInfo(msg) if not locals().has_key(cmd): return cmd = locals()[cmd] try: if 'sig' in cmd.func_code.co_varnames and \ 'msg' in cmd.func_code.co_varnames: cmd(msg, sig) elif 'msg' in cmd.func_code.co_varnames: cmd(msg) else: cmd() except Exception, e: print '\n-----------ERROR-----------' print 'error: ', e print 'Error proccessing: ', cmd.__name__ print 'Message: ', msg print 'Sig: ', sig print '-----------ERROR-----------\n' A: Sometimes you want to filter the list based on public/private vars. E.g. def pub_vars(self): """Gives the variable names of our instance we want to expose """ return [k for k in vars(self) if not k.startswith('_')] A: Built on dmark's answer to get the following, which is useful if you want the equivalent of sprintf and hopefully will help someone... def sprint(object): result = '' for i in [v for v in dir(object) if not callable(getattr(object, v)) and v[0] != '_']: result += '\n%s:' % i + str(getattr(object, i, '')) return result A: You normally can't get instance attributes given just a class, at least not without instantiating the class. You can get instance attributes given an instance, though, or class attributes given a class. See the 'inspect' module. You can't get a list of instance attributes because instances really can have anything as an attribute, and -- as in your example -- the normal way to create them is to just assign to them in the __init__ method. An exception is if your class uses slots, which is a fixed list of attributes that the class allows instances to have.
Slots are explained in http://www.python.org/2.2.3/descrintro.html, but there are various pitfalls with slots; they affect memory layout, so multiple inheritance may be problematic, and inheritance in general has to take slots into account, too. A: Every object has a __dict__ variable containing all the variables and their values in it. Try this >>> hi_obj = hi() >>> hi_obj.__dict__.keys() Output dict_keys(['ii', 'kk']) A: You can also test if an object has a specific variable with: >>> hi_obj = hi() >>> hasattr(hi_obj, "some attribute") False >>> hasattr(hi_obj, "ii") True >>> hasattr(hi_obj, "kk") True A: Both the vars() and __dict__ methods will work for the example the OP posted, but they won't work for "loosely" defined objects like: class foo: a = 'foo' b = 'bar' To print all non-callable attributes, you can use the following function: def printVars(object): for i in [v for v in dir(object) if not callable(getattr(object,v))]: print '\n%s:' % i exec('print object.%s\n\n' % i) A: Use vars() class Foo(object): def __init__(self): self.a = 1 self.b = 2 vars(Foo()) #==> {'a': 1, 'b': 2} vars(Foo()).keys() #==> ['a', 'b'] A: You will need to first examine the class, next examine the bytecode for its functions, then copy the bytecode, and finally use __code__.co_varnames. This is tricky because some classes create their methods using constructors like those in the types module. I will provide code for it on GitHub. A: Based on Ethan Joffe's answer: def print_inspect(obj): print(f"{type(obj)}\n") var_names = [attr for attr in dir(obj) if not callable(getattr(obj, attr)) and not attr.startswith("__")] for v in var_names: print(f"\tself.{v} = {getattr(obj, v)}\n")
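Pulling the main answers together, here is a small runnable summary (Python 3 syntax, whereas several answers above use Python 2) of the vars(), __dict__ and dir()-filtering approaches:

```python
class Hi(object):
    class_var = (23, 'skidoo')   # class variable, shared by all instances

    def __init__(self):
        self.ii = "foo"          # instance variables, one set per object
        self.kk = "bar"

obj = Hi()

# vars(obj) is equivalent to obj.__dict__: instance variables only.
print(sorted(vars(obj)))        # ['ii', 'kk']
print(obj.__dict__)             # {'ii': 'foo', 'kk': 'bar'}

# dir() also lists class attributes and methods, so filter out
# callables and private names to get a "public data" view.
public = [a for a in dir(obj)
          if not a.startswith('_') and not callable(getattr(obj, a))]
print(sorted(public))           # ['class_var', 'ii', 'kk']
```

Note the difference in the last line: the dir()-based filter picks up class_var too, which is exactly the "loosely defined objects" caveat one answer raises about vars().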
{ "language": "en", "url": "https://stackoverflow.com/questions/109087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "148" }
Q: How do you capture stderr, stdout, and the exit code all at once, in Perl? Is it possible to run an external process from Perl, capture its stderr, stdout AND the process exit code? I seem to be able to do combinations of these, e.g. use backticks to get stdout, IPC::Open3 to capture outputs, and system() to get exit codes. How do you capture stderr, stdout, and the exit code all at once? A: There are three basic ways of running external commands: system $cmd; # using system() $output = `$cmd`; # using backticks (``) open (PIPE, "cmd |"); # using open() With system(), both STDOUT and STDERR will go to the same place as the script's STDOUT and STDERR, unless the system() command redirects them. Backticks and open() read only the STDOUT of your command. You could also call something like the following with open to redirect both STDOUT and STDERR: open(PIPE, "cmd 2>&1 |"); The return code is always stored in $?, as noted by @Michael Carman. A: (Update: I updated the API for IO::CaptureOutput to make this even easier.) There are several ways to do this. Here's one option, using the IO::CaptureOutput module: use IO::CaptureOutput qw/capture_exec/; my ($stdout, $stderr, $success, $exit_code) = capture_exec( @cmd ); This is the capture_exec() function, but IO::CaptureOutput also has a more general capture() function that can be used to capture either Perl output or output from external programs. So if some Perl module happens to use some external program, you still get the output. It also means you only need to remember one single approach to capturing STDOUT and STDERR (or merging them) instead of using IPC::Open3 for external programs and other modules for capturing Perl output. A: If you reread the documentation for IPC::Open3, you'll see a note that you should call waitpid to reap the child process. Once you do this, the status should be available in $?. The exit value is $? >> 8. See $? in perldoc perlvar.
A: If you don't want the contents of STDERR, then the capture() command from IPC::System::Simple module is almost exactly what you're after: use IPC::System::Simple qw(capture system $EXITVAL); my $output = capture($cmd, @args); my $exit_value = $EXITVAL; You can use capture() with a single argument to invoke the shell, or multiple arguments to reliably avoid the shell. There's also capturex() which never calls the shell, even with a single argument. Unlike Perl's built-in system and backticks commands, IPC::System::Simple returns the full 32-bit exit value under Windows. It also throws a detailed exception if the command can't be started, dies to a signal, or returns an unexpected exit value. This means for many programs, rather than checking the exit values yourself, you can rely upon IPC::System::Simple to do the hard work for you: use IPC::System::Simple qw(system capture $EXIT_ANY); system( [0,1], "frobincate", @files); # Must return exitval 0 or 1 my @lines = capture($EXIT_ANY, "baznicate", @files); # Any exitval is OK. foreach my $record (@lines) { system( [0, 32], "barnicate", $record); # Must return exitval 0 or 32 } IPC::System::Simple is pure Perl, has no dependencies, and works on both Unix and Windows systems. Unfortunately, it doesn't provide a way of capturing STDERR, so it may not be suitable for all your needs. IPC::Run3 provides a clean and easy interface into re-plumbing all three common filehandles, but unfortunately it doesn't check to see if the command was successful, so you'll need to inspect $? manually, which is not at all fun. Providing a public interface for inspecting $? is something which is on my to-do list for IPC::System::Simple, since inspecting $? in a cross-platform fashion is not a task I'd wish on anyone. There are other modules in the IPC:: namespace that may also provide you with assistance. YMMV. All the best, Paul A: If you're getting really complicated, you might want to try Expect.pm. 
But that's probably overkill if you don't need to also manage sending input to the process as well. A: I found IPC::Run3 to be very helpful. You can forward all child pipes to a glob or a variable; very easily! And the exit code will be stored in $?. Below is how I grabbed stderr, which I knew would be a number. The cmd output informatic transformations to stdout (which I piped to a file in the args using >) and reported how many transformations to STDERR. use IPC::Run3; my $number; my $run = run3("cmd arg1 arg2 >output_file",\undef, \undef, \$number); die "Command failed: $!" unless ($run && $? == 0);
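As a cross-language aside (an analogy of mine, not part of the original answers): the "all three at once" combination the question asks for is a single call in Python's subprocess module, which may help clarify what the Perl modules above are each providing. The capture_output flag assumes Python 3.7+.

```python
import subprocess
import sys

# Run a child process, capturing stdout, stderr, and the exit code at once.
# The child here is just a throwaway script that writes to both streams.
result = subprocess.run(
    [sys.executable, '-c',
     'import sys; print("out"); sys.stderr.write("err"); sys.exit(3)'],
    capture_output=True, text=True,
)

print(repr(result.stdout))    # 'out\n'
print(repr(result.stderr))    # 'err'
print(result.returncode)      # 3
```

This is roughly what IPC::Run3 and IO::CaptureOutput's capture_exec() give you in Perl: all three results from one invocation, with no shell-redirection plumbing.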
{ "language": "en", "url": "https://stackoverflow.com/questions/109124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58" }
Q: Dynamically created operators I created a program using Dev-C++ and wxWidgets which solves a puzzle. The user must fill the operation blocks and the result blocks, and the program will solve it. I'm solving it using brute force: I generate all non-repeating 9-digit number combinations using a recursive algorithm. It does that pretty fast. Up to here all is great! But the problem is when my program operates depending on the character in the blocks. It's extremely slow (it never gets the answer) because of the character comparisons against +, -, *, etc. I'm doing a CASE. Is there some way, or some programming language, which allows dynamic creation of operators? So I can define the operator ROW1COL2 to be a +, and the same way for all other operations. I leave a screenshot of the app, so it's easier to understand how the puzzle works. http://www.imageshare.web.id/images/9gg5cev8vyokp8rhlot9.png PD: The algorithm works; I tried it with a trivial puzzle, and it solved it in a second. A: Not sure that this is really what you're looking for, but... any object-oriented language such as C++ or C# will allow you to create an "Operator" base class and then derive from it a "PlusOperator" or "MinusOperator", etc. This is the standard way to avoid such case statements. However, I am not sure this will solve your performance problem. Using plain brute force for such a problem will give you an exponential solution. This will seem to work fast for small input - say, completing all the numbers. But if you want to complete the operations, it's a much larger problem with a lot more possibilities. So it's likely that even without the CASE your program is not going to be able to solve it. The right way to solve this kind of problem is using some advanced search method with a heuristic function. See the A* (A-star) algorithm, for example. Good luck!
A: You can represent the numbers and operators as objects, so the parsing is done only once, at the beginning of solving.
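In a language with first-class functions, the parse-once idea needs no class hierarchy at all: map each operator character to a function up front, then call the function inside the brute-force loop instead of branching on the character. A sketch in Python — the row layout and left-to-right evaluation are my assumptions for illustration, not the actual puzzle's rules:

```python
import operator

# Parse each operator character into a function once, up front...
OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

def compile_ops(chars):
    """Turn a list of operator characters into callables (KeyError on junk)."""
    return [OPS[c] for c in chars]

def eval_row(digits, ops):
    """Apply the row's operators left to right; no CASE in the hot loop."""
    total = digits[0]
    for op, d in zip(ops, digits[1:]):
        total = op(total, d)
    return total

row_ops = compile_ops(['+', '*', '-'])   # e.g. the characters the user typed
print(eval_row([1, 2, 3, 4], row_ops))   # ((1 + 2) * 3) - 4 = 5
```

The same technique works in C++ with an array of function pointers (or std::function), which is the natural translation for the asker's Dev-C++ program: index by operator once, call through the pointer millions of times.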
{ "language": "en", "url": "https://stackoverflow.com/questions/109129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you change the style of a div programmatically How do I change the style (color) of a div such as the following? <div id="foo" class="ed" style="display: <%= ((foo.isTrue) ? string.Empty : "none") %>"> <%= ((foo.isTrue) ? foo.Name : "false foo") %> A: If you want to alter the color of the div with client side code (javascript) running in the browser, you do something like the following: <script> var fooElement = document.getElementById("foo"); fooElement.style.color = "red"; //to change the font color </script> A: If you wanted to change the class instead of the style directly: i.e. create another class with the styling you want... myDiv.Attributes["class"] = "otherClassName" A: It looks like you are writing ASP, or maybe JSP. I'm not too familiar with either language, but the principles are the same no matter what language you are working in. If you are working with a limited number of colours, then the usual option is to create a number of classes and write rule-sets for them in your stylesheet: .important { background: red; } .todo { background: blue; } And so on. Then have your server side script generate the HTML to make the CSS match: <div class="important"> You should, of course, ensure that the information is available through means other than colour as well. If the colours are determined at run time, then you can generate style attributes: <div style="background-color: red;"> A: You should set your colors in CSS, and then change the CSS class programmatically. For example: (CSS) div.Error { color:red; } (ASP.NET/VB) <div class='<%=Iif(HasError, "Error", "")%>'> .... </div> A: Generally, you can do it directly document.getElementById("myDiv").style.color = "red"; There's a reference here. A: Try this: in the .aspx file put these lines <div id="myDiv" runat="server"> Some text </div> then you can use for example myDiv.Style["color"] = "red"; A: That code fragment doesn't say much - if the code is server-side why don't you change e.g.
the class of the HTML element there? A: IMO this is the better way to do it. I found some of this in other posts but this one comes up first in a google search. This part works for standard JavaScript. I am pretty sure you can use it to remove all styles as well as add/overwrite them. var div = document.createElement('div'); div.style.cssText = "border-radius: 6px 6px 6px 6px; height: 250px; width: 600px"; OR var div = document.getElementById('foo'); div.style.cssText = "background-color: red;"; This works for jQuery only $("#" + TDDeviceTicketID).attr("style", "padding: 10px;"); $("#" + TDDeviceTicketID).attr("class", "roundbox1"); This works for removing it JQUERY $("#" + TDDeviceTicketID).removeAttr("style"); $("#" + TDDeviceTicketID).removeAttr("class");
{ "language": "en", "url": "https://stackoverflow.com/questions/109134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How do you manage a large product backlog? We have a large backlog of things we should do in our software, in a lot of different categories, for example: * *New problem areas for our products to solve *New functionality supporting existing problem areas *New functionality requested by our existing users *Usability and "look" enhancements *Architectural upgrades to the back-end *Bug fixes Managing all of these in a sensible fashion is a job that falls to Product Management, but it is tricky for a lot of reasons. Firstly, we have a number of different systems that hold the different things (market requirements document in files, bugs in a bug database, customer requirements in our help desk system, engineering's wish-list on our intranet, etc). And secondly, many of the items are of wildly different size, scope, complexity and of course value, which means that choosing isn't as simple as just ordering a list by priority. Because we are now fairly large, have a complex product and lots of customers, the basic solutions (a spreadsheet, a google doc, a basecamp to-do list) just aren't sufficient to deal with this. We need a way to group things together in various ways, prioritise them on an ongoing basis, make it clear what we're doing and what is coming - without it requiring all of someone's time to just manage some tool. How do you manage this in a way that allows the business to always do what is most valuable to existing customers, helps get new ones, and keeps the software innards sane? Note that this is different from the development-side, which I think we have down pretty well. We develop everything in an iterative, agile fashion, and once something has been chosen for design and implementation, we can do that. It's the part where we need to figure out what to do next that's hardest! Have you found a method or a tool that works? If so, please share!
(And if you would like to know the answer too, rate up the question so it stays visible :) Addendum: Of course it's nice to fix all the bugs first, but in a real system that actually is installed on customers' machines, that is not always practical. For example, we may have a bug that only occurs very rarely and that it would take a huge amount of time and architectural upheaval to fix - we might leave that for a while. Or we might have a bug where someone thinks something is hard to use, and we think fixing it should wait for a bigger revamp of that area. So, there are lots of reasons why we don't just fix them all straight away, but keep them open so we don't forget. Besides, it is the prioritization of the non-bugs that is the hardest; just imagine we don't have any :) A: The key is aggressive categorization and prioritization. Fix the problems which are keeping customers away quickly and add more features to keep the customers coming. Push back issues which only affect a small number of people unless they are very easy to fix. A: A simple technique is to use a prioritization matrix. Examples: * *http://erc.msh.org/quality/pstools/psprior2.cfm *http://it.toolbox.com/blogs/enterprise-solutions/sample-project-prioritization-matrix-23381 Also useful is the prioritization quadrants (two dimensions: Importance, Urgency) that Covey proposes: http://www.dkeener.com/keenstuff/priority.html. Focus on the Important and Urgent, then the Important and Not urgent. The non-Important stuff...well.. if someone wants to do that in their off hours :-). A variant of the Covey quadrants that I've used is with the dimensions of Importance and Ease. Ease is a good way to prioritize the tasks within a Covey quadrant. A: Managing a large backlog in an aggressive manner is almost always wasteful. By the time you get to the middle of a prioritized pile things have more often than not changed. 
I'd recommend adopting something like what Corey Ladas calls a priority filter: http://leansoftwareengineering.com/2008/08/19/priority-filter/ Essentially, you have a few buckets of increasing size and decreasing priority. You allow stakeholders to fill them, but force them to ignore the rest of the stories until there are openings in the buckets. Very simple but very effective. Edit: Allan asked what to do if tasks are of different sizes. Basically, a big part of making this work is right-sizing your tasks. We only apply this prioritization to user stories. User stories are typically significantly smaller than "create a community site". I would consider the community site bit an epic or even a project. It would need to be broken down into significantly smaller bits in order to be prioritized. That said, it can still be challenging to make stories similarly sized. Sometimes you just can't, so you communicate that during your planning decisions. With regards to moving wibbles two pixels, many of these things that are easy can be done for "free". You just have to be careful to balance these and only do them if they're really close to free and they're actually somewhat important. We treat bugs similarly. Bugs get one of three categories, Now, Soon or Eventually. We fix Now and Soon bugs as quickly as we can with the only difference being when we publish the fixes. Eventually bugs don't get fixed unless devs get bored and have nothing to do or they somehow become higher priority. A: I think you have to get them all into one place so that they can be prioritised. Having to collate several different sources makes this virtually impossible. Once you have that, then someone or a group has to rank each bug, requested feature and desired development. Things you could prioritise by are: * *Value added to the product *Importance to customers, both existing and potential *Scale of the task A: You should fix all the bugs first and only then think about adding new functions to it.
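The priority-filter idea described in the first answer above can be sketched in a few lines. This is a hedged illustration of the mechanism — fixed-capacity buckets of decreasing priority, refilled from below — with bucket sizes and method names invented for the sketch, not taken from Ladas's article:

```python
class PriorityFilter:
    """Buckets of increasing size and decreasing priority.

    Stakeholders may only add to the last (largest, lowest-priority)
    bucket; an item moves up only when a slot opens above it.
    """
    def __init__(self, capacities=(1, 2, 3)):   # illustrative sizes
        self.capacities = capacities
        self.buckets = [[] for _ in capacities]

    def add(self, item):
        if len(self.buckets[-1]) >= self.capacities[-1]:
            return False   # backlog full: stakeholders must wait for a slot
        self.buckets[-1].append(item)
        return True

    def promote(self):
        """Refill upper buckets from the ones below, oldest items first."""
        moved = True
        while moved:
            moved = False
            for i in range(len(self.buckets) - 1):
                if len(self.buckets[i]) < self.capacities[i] and self.buckets[i + 1]:
                    self.buckets[i].append(self.buckets[i + 1].pop(0))
                    moved = True

    def next_item(self):
        """Take the highest-priority item, then let everything shuffle up."""
        item = self.buckets[0].pop(0) if self.buckets[0] else None
        self.promote()
        return item
```

The forcing function is the add() refusal: once the intake bucket is full, stakeholders must wait, which is what keeps the backlog from growing into the wasteful "aggressive" pile the answer warns about.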
A: All of this stuff could be tracked by a good bug tracking system that has the following features: * *Ability to mark work items as bugs or enhancement requests *Category field for the region of responsibility that the work item falls under (UI, back-end, etc) *Version # field for when the fix or feature is scheduled to be done *Status field (in progress, completed, verified, etc) *Priority field A: Since you are already doing things in an agile fashion, you could borrow some ideas from XP: * *put all your stories in a big pile of index cards (or some such tool) *now developers should estimate how big or small those stories are (here developers have final word) *and let the client (or their proxy -- like a product manager) order those stories by their business value (here the client has final word) *and if developers think that there is something technical which is more important (like fixing those pesky bugs), they have to communicate that to the client (business person) and convince the client to raise that priority (the client still has final word) *select as many stories for the next iteration as your team's velocity allows This way: * *there is a single queue of tasks, ordered by business needs *clients get the best return for their investment *business value drives development, not technology or geeks *developers get to say how hard things are to implement *if there is no ROI, the task stays near the bottom of that pile For more information, see Planning Extreme Programming by Kent Beck and Martin Fowler. They say it much better than I can ever do. A: I'm not sure if the tool is as critical as the process. I've seen teams be very successful using something as simple as index cards and white boards to manage fairly large projects. One thing that I would recommend in prioritization is to make sure you have a comprehensive list of these items together. This way you can weigh the priority of fixing an issue vs. a new feature, etc. A: Beyond any tool and process, there should be...
some people ;) In our shop, he is called a Release Manager and he determines the next functional perimeter to ship into production. Then there is a Freeze Manager who actually knows about code and files and bugs (he is usually one of the programmers), and will enforce the choices of the release manager, and monitor the necessary merges in order to have something to test and then release. Between the two of them, a prioritization can be established, both at a high level (functional requests) and a low level (bugs and technical issues).
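The bucket mechanics from the priority-filter suggestion earlier in this thread can be modeled in a few lines. Here is a rough Python sketch; the bucket sizes and the promotion rule are illustrative assumptions, not details from Ladas's article:

```python
# Rough sketch of a priority filter: a chain of buckets with fixed capacities
# and decreasing priority. Stakeholders feed the backlog; a story only moves
# up when a slot opens in the bucket above it.
class PriorityFilter:
    def __init__(self, sizes=(2, 4, 8)):   # capacities, highest priority first
        self.sizes = sizes
        self.buckets = [[] for _ in sizes]
        self.backlog = []                  # everything that didn't fit yet

    def add(self, story):
        self.backlog.append(story)
        self._pull()

    def complete(self):
        """Finish the oldest top-priority story and let the rest move up."""
        done = self.buckets[0].pop(0)
        self._pull()
        return done

    def _pull(self):
        # Repeatedly promote stories from the next bucket (or the backlog)
        # into any bucket with free capacity, until nothing moves.
        moved = True
        while moved:
            moved = False
            for i in range(len(self.buckets)):
                src = self.buckets[i + 1] if i + 1 < len(self.buckets) else self.backlog
                if len(self.buckets[i]) < self.sizes[i] and src:
                    self.buckets[i].append(src.pop(0))
                    moved = True
```

With sizes=(1, 2), adding five stories fills the top bucket with one story, the next with two, and leaves the rest in the backlog; completing the top story cascades one promotion down the chain, which is exactly the "ignore the rest until there are openings" behaviour described above.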
{ "language": "en", "url": "https://stackoverflow.com/questions/109141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Best javascript framework for drawing/showing images? What's the best javascript framework for drawing (lines, curves, whatnot) on images? A: jQuery has several plugins available for doing graphics. Raphael is a plugin that uses SVG (for Firefox and other browsers that support SVG), and VML for the IE products. In addition, jQuery provides a great architecture for javascript projects with plenty of support and plug-ins. Raphael is available here: http://raphaeljs.com/index.html jQuery is available here: http://jquery.com/ A: Processing var p = Processing(CanvasElement); p.size(100, 100); p.background(0); p.fill(255); p.ellipse(50, 50, 50, 50); A: Take a look at this library that is a jQuery plugin: http://www.openstudio.fr/Library-for-simple-drawing-with.html A: Refer to this question. A: You can create "images" using javascript's flot library. It's on Google Code: flot, and it requires jQuery. Here's an example of how a graph might look.
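If a full framework is overkill, plain SVG markup can already overlay lines and curves on an image. A minimal sketch that builds such an overlay as a string (the image URL and coordinates are placeholders; in a page you would assign the result to a container's innerHTML):

```javascript
// Minimal sketch: build an inline <svg> string that layers a straight line
// and a quadratic curve over an image. All coordinates are placeholders.
function svgOverlay(imgUrl, width, height) {
  return [
    '<svg xmlns="http://www.w3.org/2000/svg" width="' + width + '" height="' + height + '">',
    '  <image href="' + imgUrl + '" width="' + width + '" height="' + height + '"/>',
    '  <line x1="10" y1="10" x2="' + (width - 10) + '" y2="' + (height - 10) + '" stroke="red" stroke-width="2"/>',
    '  <path d="M 10 80 Q 50 10 90 80" stroke="blue" fill="none"/>',
    '</svg>'
  ].join('\n');
}
```

This works in browsers with native SVG support; for old IE you would still need a VML fallback, which is exactly the gap Raphael fills.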
{ "language": "en", "url": "https://stackoverflow.com/questions/109149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: .NET Reporting Tutorial Does anyone know of a tutorial for the reporting in C# .NET? I mean the reports in the "Microsoft.Reporting" namespace (not Crystal Reports). A: I know you probably aren't looking for links that you can find on Google yourself, but these cover Reporting Services in great detail and should answer most of your questions. * *Reporting Services Tutorials *Intro to reporting services *Reporting Services in Action *Webcasts on Reporting Services *Useful reporting services links But I'm pretty sure Reporting Services is tied pretty closely to MS SQL Server, so if you aren't using it you might have to look for a different solution.
{ "language": "en", "url": "https://stackoverflow.com/questions/109154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Strict vs NonStrict NHibernate cache concurrency strategies This question is about the difference between ReadWrite and NonStrictReadWrite cache concurrency strategies for NHibernate's second level cache. As I understand it, the difference between these two strategies is relevant when you have a distributed replicated cache - nonstrict won't guarantee that one cache has the exact same value as another cache, while strict read/write should - assuming the cache provider does the appropriate distributed locking. The part I don't understand is how the strict vs nonstrict distinction is relevant when you have a single cache, or a distributed partitioned (non replicated) cache. Can it be relevant? It seems to me that in non replicated scenarios, the timestamps cache will ensure that stale results are not served. If it can be relevant, I would like to see an example. A: I have created a post here explaining the differences. Please have a look and feel free to comment. A: What you assume is right: in a single-target/thread environment there's little difference. However, if you look at the cache providers there is a bit going on even in a multi-threaded scenario. How an object is re-cached from its modified state is different in the non-strict. For example, if your object is much heftier to reload but you'd like it re-cached after an update instead of footing the next user with the bill, then you'll see different performance with strict vs non-strict. For example: non-strict simply dumps an object from cache after an update is performed... the price is paid for the fetch on the next access instead of in a post-update event handler. In the strict model, the re-cache is taken care of automatically. A similar thing happens with inserts: non-strict will do nothing, whereas strict will go back and load the newly inserted object into cache. 
In non-strict you also have the possibility of a dirty read: since the cache isn't locked at the time of the read, you would not see the result of another thread's change to the item. In strict, the cache key for that item would lock and you would be held up, but you would see the absolute latest result. So, even in a single target environment, if there is a large amount of concurrent reads/edits on objects then you have a chance of seeing data that isn't really accurate. This of course becomes a problem when an edit screen loads stale data and a save is then performed: the person who thinks they're editing the latest version of the object really isn't, and they're in for a nasty surprise when they try to save their edits over the stale data they loaded.
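The update behaviour described above can be illustrated with a toy cache model. This is a simplified sketch, not NHibernate code: non-strict evicts an updated item so the next reader pays for the reload, while strict re-caches the new value immediately.

```python
# Toy model (not NHibernate code) of strict vs non-strict update handling.
class SecondLevelCache:
    def __init__(self, strict):
        self.strict = strict
        self.store = {}
        self.db_reads = 0          # counts how often we had to hit the "database"

    def get(self, key, db):
        if key not in self.store:
            self.db_reads += 1     # cache miss: load from the database
            self.store[key] = db[key]
        return self.store[key]

    def update(self, key, value, db):
        db[key] = value
        if self.strict:
            self.store[key] = value        # strict: re-cache right away
        else:
            self.store.pop(key, None)      # non-strict: evict; next get reloads
```

Running both variants through a get/update/get sequence shows the difference: the non-strict cache hits the database twice (the second reader foots the reload bill), while the strict cache hits it only once.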
{ "language": "en", "url": "https://stackoverflow.com/questions/109179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Bypass invalid SSL certificate errors when calling web services in .Net We are setting up a new SharePoint for which we don't have a valid SSL certificate yet. I would like to call the Lists web service on it to retrieve some meta data about the setup. However, when I try to do this, I get the exception: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. The nested exception contains the error message: The remote certificate is invalid according to the validation procedure. This is correct since we are using a temporary certificate. My question is: how can I tell the .Net web service client (SoapHttpClientProtocol) to ignore these errors? A: Like Jason S's answer: ServicePointManager.ServerCertificateValidationCallback = delegate { return true; }; I put this in my Main, check my app.config, and test if (ConfigurationManager.AppSettings["IgnoreSSLCertificates"] == "True") before calling that line of code. A: ServicePointManager.ServerCertificateValidationCallback += (sender, certificate, chain, sslPolicyErrors) => true; will bypass invalid SSL. Put it in your web service constructor. A: I solved it this way: Call the following just before calling your SSL web service that causes that error: using System.Net; using System.Net.Security; using System.Security.Cryptography.X509Certificates; /// <summary> /// solution for exception /// System.Net.WebException: /// The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. ---> System.Security.Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure. 
/// </summary> public static void BypassCertificateError() { ServicePointManager.ServerCertificateValidationCallback += delegate( Object sender1, X509Certificate certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors) { return true; }; } A: The approach I used when faced with this problem was to add the signer of the temporary certificate to the trusted authorities list on the computer in question. I normally do testing with certificates created with CACERT, and adding them to my trusted authorities list worked swimmingly. Doing it this way means you don't have to add any custom code to your application and it properly simulates what will happen when your application is deployed. As such, I think this is a superior solution to turning off the check programmatically. A: I was having the same error using DownloadString, and was able to make it work as below with the suggestions on this page: System.Net.WebClient client = new System.Net.WebClient(); ServicePointManager.ServerCertificateValidationCallback = delegate { return true; }; string sHttpResponse = client.DownloadString(sUrl); A: Alternatively you can register a callback delegate which ignores the certification error: ... ServicePointManager.ServerCertificateValidationCallback = MyCertHandler; ... static bool MyCertHandler(object sender, X509Certificate certificate, X509Chain chain, SslPolicyErrors error) { // Ignore errors return true; } A: For newbies, you can extend your partial service class in a separate .cs file and add the code provided by "imanabidi" to get it integrated A: To further expand on Simon Johnson's post - Ideally you want a solution that will simulate the conditions you will see in production, and modifying your code won't do that. It could also be dangerous if you forget to take the code out before you deploy it. You will need a self-signed certificate of some sort. If you're using IIS Express you will have one of these already, you'll just have to find it. 
Open Firefox or whatever browser you like and go to your dev website. You should be able to view the certificate information from the URL bar and depending on your browser you should be able to export the certificate to a file. Next, open MMC.exe, and add the Certificate snap-in. Import your certificate file into the Trusted Root Certificate Authorities store and that's all you should need. It's important to make sure it goes into that store and not some other store like 'Personal'. If you're unfamiliar with MMC or certificates, there are numerous websites with information how to do this. Now, your computer as a whole will implicitly trust any certificates that it has generated itself and you won't need to add code to handle this specially. When you move to production it will continue to work provided you have a proper valid certificate installed there. Don't do this on a production server - that would be bad and it won't work for any other clients other than those on the server itself.
{ "language": "en", "url": "https://stackoverflow.com/questions/109186", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "91" }
Q: How do I check if a directory is writeable in PHP? Does anyone know how I can check to see if a directory is writeable in PHP? The function is_writable doesn't work for folders. Edit: It does work. See the accepted answer. A: to be more specific for owner/group/world $dir_writable = substr(sprintf('%o', fileperms($folder)), -4) == "0774" ? "true" : "false"; peace... A: You may be sending a complete file path to the is_writable() function. is_writable() will return false if the file doesn't already exist in the directory. You need to check the directory itself with the filename removed, if this is the case. If you do that, is_writable will correctly tell you whether the directory is writable or not. If $file contains your file path do this: $file_directory = dirname($file); Then use is_writable($file_directory) to determine if the folder is writable. I hope this helps someone. A: According to the documentation for is_writable, it should just work - but you said "folder", so this could be a Windows issue. The comments suggest a workaround. (A rushed reading earlier made me think that trailing slashes were important, but that turned out to be specific to this work around). A: I've written a little script (I call it isWritable.php) that detects all directories in the same directory the script is in and writes to the page whether each directory is writable or not. Hope this helps. <?php // isWritable.php detects all directories in the same directory the script is in // and writes to the page whether each directory is writable or not. $dirs = array_filter(glob('*'), 'is_dir'); foreach ($dirs as $dir) { if (is_writable($dir)) { echo $dir.' is writable.<br>'; } else { echo $dir.' is not writable. Permissions may have to be adjusted.<br>'; } } ?> A: this is the code :) <?php $newFileName = '/var/www/your/file.txt'; if ( ! is_writable(dirname($newFileName))) { echo dirname($newFileName) . 
' must be writable!!!'; } else { // blah blah blah } A: stat() Much like a system stat, but in PHP. What you want to check is the mode value, much like you would out of any other call to stat in other languages (i.e. C/C++). http://us2.php.net/stat A: Yes, it does work for folders.... Returns TRUE if the filename exists and is writable. The filename argument may be a directory name allowing you to check if a directory is writable. A: According to the PHP manual is_writable should work fine on directories. A: In my case, is_writable returned true, but when I tried to write the file, an error was generated. This code helps to check if the $dir exists and is writable: <?php $dir = '/path/to/the/dir'; // try to create this directory if it doesn't exist $booExists = is_dir($dir) || (mkdir($dir, 0774, true) && is_dir($dir)); $booIsWritable = false; if ($booExists && is_writable($dir)) { $tempFile = tempnam($dir, 'tmp'); if ($tempFile !== false) { $res = file_put_contents($tempFile, 'test'); $booIsWritable = $res !== false; @unlink($tempFile); } } A: This is how I do it: create a file with file_put_contents() and check the return value; if it is positive (the number of bytes written) then you can go ahead and do what you have to do; if it is FALSE then it is not writable $is_writable = file_put_contents('directory/dummy.txt', "hello"); if ($is_writable > 0) echo "yes directory it is writable"; else echo "NO directory it is not writable"; Then you can delete the dummy file by using unlink(): unlink('directory/dummy.txt');
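The "just try writing a temp file" approach from the last two answers generalizes beyond PHP, since permission bits can lie (ACLs, read-only mounts). A rough Python equivalent of the same idea, assuming a POSIX-like filesystem:

```python
import os
import tempfile

# Check writability by actually creating and removing a temp file, rather
# than inspecting permission bits -- mirrors the tempnam()/file_put_contents()
# answers above.
def is_dir_writable(path):
    if not os.path.isdir(path):
        return False
    try:
        fd, name = tempfile.mkstemp(dir=path)
        os.close(fd)
        os.unlink(name)
        return True
    except OSError:
        return False
```

The trade-off is the same as in PHP: the check has a side effect (a file briefly exists), but it reflects what the filesystem will actually allow.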
{ "language": "en", "url": "https://stackoverflow.com/questions/109188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63" }
Q: Deploying a site from VSS In answer to this question Joel Coehoorn said Finally, only after the site's gone through a suitable QA process, the production server is updated from source control, not from within Visual Studio. Does VSS Explorer have tools for deploying sites (via FTP, I would assume)? I noticed for the first time a Web/Deploy menu option, but it's grayed out. How does this work? A: VSS has a pretty comprehensive set of command line arguments. The best way I know is to write a batch file to: 1 - Get Latest to the local system (presumably a clean build machine) 2 - Push the newly-updated local files to your FTP site. A: We use NAnt for our project. NAnt is a .NET port of Java's Ant. It has tasks to check out from VSS, compile and deploy. * *NAnt *VSS checkout task *CodeProject article on deployment with NAnt A: Managing Web Content Using MS Visual SourceSafe Yes, you can deploy via FTP or a network share using the "web deploy" feature in the VSS Explorer.
{ "language": "en", "url": "https://stackoverflow.com/questions/109199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: PHP GD, imagecreatefromstring( ); how to get the image dimensions? Normally I use imagecreatefromjpeg() and then getimagesize(), but with Firefox 3 I need to go round this differently. So now I'm using imagecreatefromstring(), but how do I retrieve the image dimensions now? A: Ah yes! I just found the answer on the internet a second ago :) For those who are still interested: $image = imagecreatefromstring($img_str); $w = imagesx($image); $h = imagesy($image); A: The imagesx() and imagesy() functions seem to work with images made with imagecreatefromstring(), too.
{ "language": "en", "url": "https://stackoverflow.com/questions/109210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: What is the best online book service for software development references? I'm a member of ACM, and I have limited access to both Books 24x7 and Safari Books Online; however, if I were interested in moving up to a full account for greater access to other books, what online book service would you recommend? A: Safari Books Online has a corporate subscription service which companies appreciate. A: Safari Books Online is my choice; here's a related post: Which Online eBook Reference Library Do You Use? A: I have Books 24x7; it has a great range of books and the search functionality works well. It also gives you recommendations from other users, and if other people from your organization join then it tells you about books they have recommended. A: Audible - Self Development, Business and Education sections! A: Safari Books Online. By far. http://techbus.safaribooksonline.com:80/ A: I think it depends on how much you want O'Reilly and Pearson Technology books (which includes many smaller publishers like Peachpit). Safari has exclusive rights to those titles. Safari also has a monthly subscription level so you don't have to commit to an entire year.
{ "language": "en", "url": "https://stackoverflow.com/questions/109230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What is the best way to paginate results in SQL Server What is the best way (performance-wise) to paginate results in SQL Server 2000, 2005, 2008, 2012 if you also want to get the total number of results (before paginating)? A: Finally, Microsoft SQL Server 2012 was released. I really like its simplicity for pagination: you don't have to use complex queries like those answered here. For getting the next 10 rows just run this query: SELECT * FROM TableName ORDER BY id OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY; https://learn.microsoft.com/en-us/sql/t-sql/queries/select-order-by-clause-transact-sql#using-offset-and-fetch-to-limit-the-rows-returned Key points to consider when using it: * *ORDER BY is mandatory to use the OFFSET ... FETCH clause. *The OFFSET clause is mandatory with FETCH; you cannot use ORDER BY ... FETCH without it. *TOP cannot be combined with OFFSET and FETCH in the same query expression. A: For SQL Server 2000 you can simulate ROW_NUMBER() using a table variable with an IDENTITY column: DECLARE @pageNo int -- 1 based DECLARE @pageSize int SET @pageNo = 51 SET @pageSize = 20 DECLARE @firstRecord int DECLARE @lastRecord int SET @firstRecord = (@pageNo - 1) * @pageSize + 1 -- 1001 SET @lastRecord = @firstRecord + @pageSize - 1 -- 1020 DECLARE @orderedKeys TABLE ( rownum int IDENTITY NOT NULL PRIMARY KEY CLUSTERED, TableKey int NOT NULL ) SET ROWCOUNT @lastRecord INSERT INTO @orderedKeys (TableKey) SELECT ID FROM Orders WHERE OrderDate >= '1980-01-01' ORDER BY OrderDate SET ROWCOUNT 0 SELECT t.* FROM Orders t INNER JOIN @orderedKeys o ON o.TableKey = t.ID WHERE o.rownum >= @firstRecord ORDER BY o.rownum This approach can be extended to tables with multi-column keys, and it doesn't incur the performance overhead of using OR (which skips index usage). The downside is the amount of temporary space used up if the data set is very large and one is near the last page. I did not test cursor performance in that case, but it might be better. 
Note that this approach could be optimized for the first page of data. Also, ROWCOUNT was used since TOP does not accept a variable in SQL Server 2000. A: Getting the total number of results and paginating are two different operations. For the sake of this example, let's assume that the query you're dealing with is SELECT * FROM Orders WHERE OrderDate >= '1980-01-01' ORDER BY OrderDate In this case, you would determine the total number of results using: SELECT COUNT(*) FROM Orders WHERE OrderDate >= '1980-01-01' ...which may seem inefficient, but is actually pretty performant, assuming all indexes etc. are properly set up. Next, to get actual results back in a paged fashion, the following query would be most efficient: SELECT * FROM ( SELECT ROW_NUMBER() OVER ( ORDER BY OrderDate ) AS RowNum, * FROM Orders WHERE OrderDate >= '1980-01-01' ) AS RowConstrainedResult WHERE RowNum >= 1 AND RowNum < 20 ORDER BY RowNum This will return rows 1-19 of the original query. The cool thing here, especially for web apps, is that you don't have to keep any state, except the row numbers to be returned. A: From SQL Server 2012, we can use the OFFSET and FETCH NEXT clauses to achieve pagination. Try this, for SQL Server: In SQL Server 2012 a new feature was added to the ORDER BY clause to optimize querying a page of data, making data paging easier for anyone who writes in T-SQL, and improving the overall execution plan in SQL Server. Below is the T-SQL script with the same logic used in the previous example. --CREATING A PAGING WITH OFFSET and FETCH clauses IN "SQL SERVER 2012" DECLARE @PageNumber AS INT, @RowspPage AS INT SET @PageNumber = 2 SET @RowspPage = 10 SELECT ID_EXAMPLE, NM_EXAMPLE, DT_CREATE FROM TB_EXAMPLE ORDER BY ID_EXAMPLE OFFSET ((@PageNumber - 1) * @RowspPage) ROWS FETCH NEXT @RowspPage ROWS ONLY; TechNet: Paging a Query with SQL Server A: In terms of usage, the following seems easy to use and fast. Just set the page number. 
use AdventureWorks DECLARE @RowsPerPage INT = 10, @PageNumber INT = 6; with result as( SELECT SalesOrderDetailID, SalesOrderID, ProductID, ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS RowNum FROM Sales.SalesOrderDetail where 1=1 ) select SalesOrderDetailID, SalesOrderID, ProductID from result WHERE result.RowNum BETWEEN ((@PageNumber-1)*@RowsPerPage)+1 AND @RowsPerPage*(@PageNumber) also without CTE use AdventureWorks DECLARE @RowsPerPage INT = 10, @PageNumber INT = 6 SELECT SalesOrderDetailID, SalesOrderID, ProductID FROM ( SELECT SalesOrderDetailID, SalesOrderID, ProductID, ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS RowNum FROM Sales.SalesOrderDetail where 1=1 ) AS SOD WHERE SOD.RowNum BETWEEN ((@PageNumber-1)*@RowsPerPage)+1 AND @RowsPerPage*(@PageNumber) A: The best way for paging in SQL Server 2012 is by using OFFSET and FETCH NEXT in a stored procedure. OFFSET Keyword - If we use OFFSET with the ORDER BY clause then the query will skip the number of records we specified in OFFSET n ROWS. FETCH NEXT Keywords - FETCH NEXT with an ORDER BY clause returns the number of rows you want to display per page; using it without OFFSET will make SQL generate an error. Here is the example given below. create procedure sp_paging ( @pageno as int, @records as int ) as begin declare @offsetcount as int set @offsetcount=(@pageno-1)*@records select id,bs,variable from salary order by id offset @offsetcount rows fetch Next @records rows only end You can execute it as follows: exec sp_paging 2,3 A: Try this approach: SELECT TOP (@offset) a.* FROM (select top (@limit) b.*, COUNT(*) OVER() totalrows from TABLENAME b order by id asc) a ORDER BY id desc; A: These are my solutions for paging the result of a query on the SQL Server side. The approaches are different between SQL Server 2008 and 2012. Also, I have added the concept of filtering and ordering by one column. It is very efficient when you are paging and filtering and ordering in your GridView. 
Before testing, you have to create one sample table and insert some rows into this table: (In the real world you have to change the WHERE clause to suit your table fields, and you may have joins and subqueries in the main part of the SELECT.) Create Table VLT ( ID int IDentity(1,1), Name nvarchar(50), Tel Varchar(20) ) GO Insert INTO VLT VALUES ('NAME' + Convert(varchar(10),@@identity),'FAMIL' + Convert(varchar(10),@@identity)) GO 500000 In all of these samples, I want to query 200 rows per page and I am fetching the rows for page number 1200. In SQL Server 2008, you can use the CTE concept. Because of that, I have written two types of queries for SQL Server 2008+ -- SQL Server 2008+ DECLARE @PageNumber Int = 1200 DECLARE @PageSize INT = 200 DECLARE @SortByField int = 1 --The field used for sort by DECLARE @SortOrder nvarchar(255) = 'ASC' --ASC or DESC DECLARE @FilterType nvarchar(255) = 'None' --The filter type, as defined on the client side (None/Contain/NotContain/Match/NotMatch/True/False/) DECLARE @FilterValue nvarchar(255) = '' --The value the user gave for the filter DECLARE @FilterColumn int = 1 --The column to which the filter is applied, represents the column number like when we send the information. SELECT Data.ID, Data.Name, Data.Tel FROM ( SELECT ROW_NUMBER() OVER( ORDER BY CASE WHEN @SortByField = 1 AND @SortOrder = 'ASC' THEN VLT.ID END ASC, CASE WHEN @SortByField = 1 AND @SortOrder = 'DESC' THEN VLT.ID END DESC, CASE WHEN @SortByField = 2 AND @SortOrder = 'ASC' THEN VLT.Name END ASC, CASE WHEN @SortByField = 2 AND @SortOrder = 'DESC' THEN VLT.Name END DESC, CASE WHEN @SortByField = 3 AND @SortOrder = 'ASC' THEN VLT.Tel END ASC, CASE WHEN @SortByField = 3 AND @SortOrder = 'DESC' THEN VLT.Tel END DESC ) AS RowNum ,* FROM VLT WHERE ( -- We apply the filter logic here CASE WHEN @FilterType = 'None' THEN 1 -- ID column filter WHEN @FilterType = 'Contain' AND @FilterColumn = 1 AND ( -- In this case, when the filter value is empty, we want to show everything. 
VLT.ID LIKE '%' + @FilterValue + '%' OR @FilterValue = '' ) THEN 1 WHEN @FilterType = 'NotContain' AND @FilterColumn = 1 AND ( -- In this case, when the filter value is empty, we want to show everything. VLT.ID NOT LIKE '%' + @FilterValue + '%' OR @FilterValue = '' ) THEN 1 WHEN @FilterType = 'Match' AND @FilterColumn = 1 AND VLT.ID = @FilterValue THEN 1 WHEN @FilterType = 'NotMatch' AND @FilterColumn = 1 AND VLT.ID <> @FilterValue THEN 1 -- Name column filter WHEN @FilterType = 'Contain' AND @FilterColumn = 2 AND ( -- In this case, when the filter value is empty, we want to show everything. VLT.Name LIKE '%' + @FilterValue + '%' OR @FilterValue = '' ) THEN 1 WHEN @FilterType = 'NotContain' AND @FilterColumn = 2 AND ( -- In this case, when the filter value is empty, we want to show everything. VLT.Name NOT LIKE '%' + @FilterValue + '%' OR @FilterValue = '' ) THEN 1 WHEN @FilterType = 'Match' AND @FilterColumn = 2 AND VLT.Name = @FilterValue THEN 1 WHEN @FilterType = 'NotMatch' AND @FilterColumn = 2 AND VLT.Name <> @FilterValue THEN 1 -- Tel column filter WHEN @FilterType = 'Contain' AND @FilterColumn = 3 AND ( -- In this case, when the filter value is empty, we want to show everything. VLT.Tel LIKE '%' + @FilterValue + '%' OR @FilterValue = '' ) THEN 1 WHEN @FilterType = 'NotContain' AND @FilterColumn = 3 AND ( -- In this case, when the filter value is empty, we want to show everything. 
VLT.Tel NOT LIKE '%' + @FilterValue + '%' OR @FilterValue = '' ) THEN 1 WHEN @FilterType = 'Match' AND @FilterColumn = 3 AND VLT.Tel = @FilterValue THEN 1 WHEN @FilterType = 'NotMatch' AND @FilterColumn = 3 AND VLT.Tel <> @FilterValue THEN 1 END ) = 1 ) AS Data WHERE Data.RowNum > @PageSize * (@PageNumber - 1) AND Data.RowNum <= @PageSize * @PageNumber ORDER BY Data.RowNum GO And second solution with CTE in SQL server 2008+ DECLARE @PageNumber Int = 1200 DECLARE @PageSize INT = 200 DECLARE @SortByField int = 1 --The field used for sort by DECLARE @SortOrder nvarchar(255) = 'ASC' --ASC or DESC DECLARE @FilterType nvarchar(255) = 'None' --The filter type, as defined on the client side (None/Contain/NotContain/Match/NotMatch/True/False/) DECLARE @FilterValue nvarchar(255) = '' --The value the user gave for the filter DECLARE @FilterColumn int = 1 --The column to wich the filter is applied, represents the column number like when we send the information. ;WITH Data_CTE AS ( SELECT ROW_NUMBER() OVER( ORDER BY CASE WHEN @SortByField = 1 AND @SortOrder = 'ASC' THEN VLT.ID END ASC, CASE WHEN @SortByField = 1 AND @SortOrder = 'DESC' THEN VLT.ID END DESC, CASE WHEN @SortByField = 2 AND @SortOrder = 'ASC' THEN VLT.Name END ASC, CASE WHEN @SortByField = 2 AND @SortOrder = 'DESC' THEN VLT.Name END ASC, CASE WHEN @SortByField = 3 AND @SortOrder = 'ASC' THEN VLT.Tel END ASC, CASE WHEN @SortByField = 3 AND @SortOrder = 'DESC' THEN VLT.Tel END ASC ) AS RowNum ,* FROM VLT WHERE ( -- We apply the filter logic here CASE WHEN @FilterType = 'None' THEN 1 -- Name column filter WHEN @FilterType = 'Contain' AND @FilterColumn = 1 AND ( -- In this case, when the filter value is empty, we want to show everything. VLT.ID LIKE '%' + @FilterValue + '%' OR @FilterValue = '' ) THEN 1 WHEN @FilterType = 'NotContain' AND @FilterColumn = 1 AND ( -- In this case, when the filter value is empty, we want to show everything. 
VLT.ID NOT LIKE '%' + @FilterValue + '%' OR @FilterValue = '' ) THEN 1 WHEN @FilterType = 'Match' AND @FilterColumn = 1 AND VLT.ID = @FilterValue THEN 1 WHEN @FilterType = 'NotMatch' AND @FilterColumn = 1 AND VLT.ID <> @FilterValue THEN 1 -- Name column filter WHEN @FilterType = 'Contain' AND @FilterColumn = 2 AND ( -- In this case, when the filter value is empty, we want to show everything. VLT.Name LIKE '%' + @FilterValue + '%' OR @FilterValue = '' ) THEN 1 WHEN @FilterType = 'NotContain' AND @FilterColumn = 2 AND ( -- In this case, when the filter value is empty, we want to show everything. VLT.Name NOT LIKE '%' + @FilterValue + '%' OR @FilterValue = '' ) THEN 1 WHEN @FilterType = 'Match' AND @FilterColumn = 2 AND VLT.Name = @FilterValue THEN 1 WHEN @FilterType = 'NotMatch' AND @FilterColumn = 2 AND VLT.Name <> @FilterValue THEN 1 -- Tel column filter WHEN @FilterType = 'Contain' AND @FilterColumn = 3 AND ( -- In this case, when the filter value is empty, we want to show everything. VLT.Tel LIKE '%' + @FilterValue + '%' OR @FilterValue = '' ) THEN 1 WHEN @FilterType = 'NotContain' AND @FilterColumn = 3 AND ( -- In this case, when the filter value is empty, we want to show everything. 
VLT.Tel NOT LIKE '%' + @FilterValue + '%' OR @FilterValue = '' ) THEN 1 WHEN @FilterType = 'Match' AND @FilterColumn = 3 AND VLT.Tel = @FilterValue THEN 1 WHEN @FilterType = 'NotMatch' AND @FilterColumn = 3 AND VLT.Tel <> @FilterValue THEN 1 END ) = 1 ) SELECT Data.ID, Data.Name, Data.Tel FROM Data_CTE AS Data WHERE Data.RowNum > @PageSize * (@PageNumber - 1) AND Data.RowNum <= @PageSize * @PageNumber ORDER BY Data.RowNum -- SQL Server 2012+ DECLARE @PageNumber Int = 1200 DECLARE @PageSize INT = 200 DECLARE @SortByField int = 1 --The field used for sort by DECLARE @SortOrder nvarchar(255) = 'ASC' --ASC or DESC DECLARE @FilterType nvarchar(255) = 'None' --The filter type, as defined on the client side (None/Contain/NotContain/Match/NotMatch/True/False/) DECLARE @FilterValue nvarchar(255) = '' --The value the user gave for the filter DECLARE @FilterColumn int = 1 --The column to wich the filter is applied, represents the column number like when we send the information. ;WITH Data_CTE AS ( SELECT * FROM VLT WHERE ( -- We apply the filter logic here CASE WHEN @FilterType = 'None' THEN 1 -- Name column filter WHEN @FilterType = 'Contain' AND @FilterColumn = 1 AND ( -- In this case, when the filter value is empty, we want to show everything. VLT.ID LIKE '%' + @FilterValue + '%' OR @FilterValue = '' ) THEN 1 WHEN @FilterType = 'NotContain' AND @FilterColumn = 1 AND ( -- In this case, when the filter value is empty, we want to show everything. VLT.ID NOT LIKE '%' + @FilterValue + '%' OR @FilterValue = '' ) THEN 1 WHEN @FilterType = 'Match' AND @FilterColumn = 1 AND VLT.ID = @FilterValue THEN 1 WHEN @FilterType = 'NotMatch' AND @FilterColumn = 1 AND VLT.ID <> @FilterValue THEN 1 -- Name column filter WHEN @FilterType = 'Contain' AND @FilterColumn = 2 AND ( -- In this case, when the filter value is empty, we want to show everything. 
VLT.Name LIKE '%' + @FilterValue + '%' OR @FilterValue = '' ) THEN 1 WHEN @FilterType = 'NotContain' AND @FilterColumn = 2 AND ( -- In this case, when the filter value is empty, we want to show everything. VLT.Name NOT LIKE '%' + @FilterValue + '%' OR @FilterValue = '' ) THEN 1 WHEN @FilterType = 'Match' AND @FilterColumn = 2 AND VLT.Name = @FilterValue THEN 1 WHEN @FilterType = 'NotMatch' AND @FilterColumn = 2 AND VLT.Name <> @FilterValue THEN 1 -- Tel column filter WHEN @FilterType = 'Contain' AND @FilterColumn = 3 AND ( -- In this case, when the filter value is empty, we want to show everything. VLT.Tel LIKE '%' + @FilterValue + '%' OR @FilterValue = '' ) THEN 1 WHEN @FilterType = 'NotContain' AND @FilterColumn = 3 AND ( -- In this case, when the filter value is empty, we want to show everything. VLT.Tel NOT LIKE '%' + @FilterValue + '%' OR @FilterValue = '' ) THEN 1 WHEN @FilterType = 'Match' AND @FilterColumn = 3 AND VLT.Tel = @FilterValue THEN 1 WHEN @FilterType = 'NotMatch' AND @FilterColumn = 3 AND VLT.Tel <> @FilterValue THEN 1 END ) = 1 ) SELECT Data.ID, Data.Name, Data.Tel FROM Data_CTE AS Data ORDER BY CASE WHEN @SortByField = 1 AND @SortOrder = 'ASC' THEN Data.ID END ASC, CASE WHEN @SortByField = 1 AND @SortOrder = 'DESC' THEN Data.ID END DESC, CASE WHEN @SortByField = 2 AND @SortOrder = 'ASC' THEN Data.Name END ASC, CASE WHEN @SortByField = 2 AND @SortOrder = 'DESC' THEN Data.Name END ASC, CASE WHEN @SortByField = 3 AND @SortOrder = 'ASC' THEN Data.Tel END ASC, CASE WHEN @SortByField = 3 AND @SortOrder = 'DESC' THEN Data.Tel END ASC OFFSET @PageSize * (@PageNumber - 1) ROWS FETCH NEXT @PageSize ROWS ONLY; A: From 2012 onward we can use OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY A: MSDN: ROW_NUMBER (Transact-SQL) Returns the sequential number of a row within a partition of a result set, starting at 1 for the first row in each partition. The following example returns rows with numbers 50 to 60 inclusive in the order of the OrderDate. 
WITH OrderedOrders AS
(
  SELECT ROW_NUMBER() OVER(ORDER BY FirstName DESC) AS RowNumber,
         FirstName, LastName, ROUND(SalesYTD, 2, 1) AS "Sales YTD"
  FROM [dbo].[vSalesPerson]
)
SELECT RowNumber, FirstName, LastName, "Sales YTD"
FROM OrderedOrders
WHERE RowNumber > 50 AND RowNumber < 60;

RowNumber  FirstName  LastName                  SalesYTD
---------  ---------  ------------------------  -----------------
1          Linda      Mitchell                  4251368.54
2          Jae        Pak                       4116871.22
3          Michael    Blythe                    3763178.17
4          Jillian    Carson                    3189418.36
5          Ranjit     Varkey Chudukatil         3121616.32
6          José       Saraiva                   2604540.71
7          Shu        Ito                       2458535.61
8          Tsvi       Reiter                    2315185.61
9          Rachel     Valdez                    1827066.71
10         Tete       Mensa-Annan               1576562.19
11         David      Campbell                  1573012.93
12         Garrett    Vargas                    1453719.46
13         Lynn       Tsoflias                  1421810.92
14         Pamela     Ansman-Wolfe              1352577.13

A: There is a good overview of different paging techniques at http://www.codeproject.com/KB/aspnet/PagingLarge.aspx I've used the ROWCOUNT method quite often, mostly with SQL Server 2000 (it will work with 2005 & 2008 too, just measure performance compared to ROW_NUMBER). It's lightning fast, but you need to make sure that the sorted column(s) have (mostly) unique values.

A: Incredibly, no other answer has mentioned the fastest way to do pagination in all SQL Server versions. Offsets can be terribly slow for large page numbers, as is benchmarked here. There is an entirely different, much faster way to perform pagination in SQL. This is often called the "seek method" or "keyset pagination", as described in this blog post here.

SELECT TOP 10 first_name, last_name, score, COUNT(*) OVER()
FROM players
WHERE (score < @previousScore)
   OR (score = @previousScore AND player_id < @previousPlayerId)
ORDER BY score DESC, player_id DESC

The "seek predicate": the @previousScore and @previousPlayerId values are the respective values of the last record from the previous page. This allows you to fetch the "next" page. If the ORDER BY direction is ASC, simply use > instead.
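The seek query above can be exercised end-to-end. A minimal sketch, using Python's sqlite3 module as a stand-in for SQL Server (the players table and its contents are made up for illustration); it shows that the seek predicate returns exactly the same rows as OFFSET paging:

```python
import sqlite3

# Hypothetical "players" table matching the answer's query; SQLite stands in
# for SQL Server, but the WHERE clause has the same shape as the T-SQL above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE players (player_id INTEGER PRIMARY KEY, name TEXT, score INTEGER)")
conn.executemany("INSERT INTO players VALUES (?, ?, ?)",
                 [(i, "player%d" % i, i % 7) for i in range(1, 101)])

PAGE = 10
ORDER = "ORDER BY score DESC, player_id DESC"

page1 = conn.execute(
    "SELECT player_id, score FROM players " + ORDER + " LIMIT ?", (PAGE,)).fetchall()

# Seek predicate: everything strictly "after" the last row of page 1
# in (score DESC, player_id DESC) order.
prev_id, prev_score = page1[-1]
page2 = conn.execute(
    "SELECT player_id, score FROM players "
    "WHERE score < ? OR (score = ? AND player_id < ?) " + ORDER + " LIMIT ?",
    (prev_score, prev_score, prev_id, PAGE)).fetchall()

# Same rows as OFFSET paging, but an index can seek straight to the start
# of the page instead of scanning and discarding the skipped rows.
offset_page2 = conn.execute(
    "SELECT player_id, score FROM players " + ORDER + " LIMIT ? OFFSET ?",
    (PAGE, PAGE)).fetchall()
assert page2 == offset_page2
```

The trade-off is exactly the one described next: you can only move to the page adjacent to the keys you already hold.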
With the above method, you cannot immediately jump to page 4 without having first fetched the previous 40 records. But often you do not want to jump that far anyway. Instead, you get a much faster query that may be able to fetch data in constant time, depending on your indexing. Plus, your pages remain "stable", no matter how the underlying data changes (e.g. on page 1, while you're on page 4). This is the best way to implement pagination when lazily loading more data in web applications, for instance. Note that the "seek method" is also called keyset pagination. Total records before pagination: the COUNT(*) OVER() window function will help you count the number of total records "before pagination". If you're using SQL Server 2000, you will have to resort to two queries for the COUNT(*).

A: This is a duplicate of the old 2012 SO question: efficient way to implement paging

SELECT * FROM [TableX] ORDER BY [FieldX] OFFSET 500 ROWS FETCH NEXT 100 ROWS ONLY

There the topic is discussed in greater detail, and with alternate approaches.

A: Well, I have used the following sample query in my SQL 2000 database; it works well for SQL 2005 too. The power it gives you is a dynamic ORDER BY using multiple columns. I tell you ... this is powerful :)

ALTER PROCEDURE [dbo].[RE_ListingReports_SelectSummary]
  @CompanyID int,
  @pageNumber int,
  @pageSize int,
  @sort varchar(200)
AS
  DECLARE @sql nvarchar(4000)
  DECLARE @strPageSize nvarchar(20)
  DECLARE @strSkippedRows nvarchar(20)
  DECLARE @strFields nvarchar(4000)
  DECLARE @strFilter nvarchar(4000)
  DECLARE @sortBy nvarchar(4000)
  DECLARE @strFrom nvarchar(4000)
  DECLARE @strID nvarchar(100)

  IF (@pageNumber < 0)
    SET @pageNumber = 1

  SET @strPageSize = CAST(@pageSize AS varchar(20))
  SET @strSkippedRows = CAST(((@pageNumber - 1) * @pageSize) AS varchar(20))
  -- For example, if pageNumber is 5 and pageSize is 10, then SkippedRows = 40.
  SET @strID = 'ListingDbID'
  SET @strFields = 'ListingDbID, ListingID, [ExtraRoom] '
  SET @strFrom = ' vwListingSummary '
  SET @strFilter = ' WHERE CompanyID = ' + CAST(@CompanyID AS varchar(20))

  SET @sortBy = ''
  IF (LEN(LTRIM(RTRIM(@sort))) > 0)
    SET @sortBy = ' ORDER BY ' + @sort

  -- Total rows count
  SET @sql = 'SELECT COUNT(' + @strID + ') FROM ' + @strFrom + @strFilter
  EXEC sp_executesql @sql

  -- This technique is used in a single-table pagination
  SET @sql = 'SELECT ' + @strFields + ' FROM ' + @strFrom +
             ' WHERE ' + @strID + ' IN ' +
             ' (SELECT TOP ' + @strPageSize + ' ' + @strID +
             ' FROM ' + @strFrom + @strFilter +
             ' AND ' + @strID + ' NOT IN ' +
             ' (SELECT TOP ' + @strSkippedRows + ' ' + @strID +
             ' FROM ' + @strFrom + @strFilter + @sortBy + ') ' +
             @sortBy + ') ' + @sortBy

  PRINT @sql
  EXEC sp_executesql @sql

The best part is that sp_executesql caches later calls, provided you pass the same parameters, i.e. generate the same sql text.

A:

create view vw_sppb_part_listsource as
select row_number() over (partition by sppb_part.init_id order by sppb_part.sppb_part_id asc) as idx, *
from (
  select part.SPPB_PART_ID, 0 as is_rev, part.part_number, part.init_id
  from t_sppb_init_part part
  left join t_sppb_init_partrev prev on (part.SPPB_PART_ID = prev.SPPB_PART_ID)
  where prev.SPPB_PART_ID is null
  union
  select part.SPPB_PART_ID, 1 as is_rev, prev.part_number, part.init_id
  from t_sppb_init_part part
  inner join t_sppb_init_partrev prev on (part.SPPB_PART_ID = prev.SPPB_PART_ID)
) sppb_part

It will restart idx when it comes to a different init_id.

A: For the ROW_NUMBER technique, if you do not have a sorting column to use, you can use CURRENT_TIMESTAMP as follows:

SELECT TOP 20 col1, col2, col3, col4
FROM (
  SELECT tbl.col1 AS col1,
         tbl.col2 AS col2,
         tbl.col3 AS col3,
         tbl.col4 AS col4,
         ROW_NUMBER() OVER (ORDER BY CURRENT_TIMESTAMP) AS sort_row
  FROM dbo.MyTable tbl
) AS query
WHERE query.sort_row > 10
ORDER BY query.sort_row

This has worked well for me for searches over table sizes of
even up to 700,000. This fetches records 11 to 30.

A:

CREATE PROCEDURE SP_Company_List
(
  @pagesize int = -1,
  @pageindex int = 0
)
AS
BEGIN
  SET NOCOUNT ON;
  SELECT Id, NameEn
  FROM Company
  ORDER BY Id ASC
  OFFSET (@pageindex - 1) * @pagesize ROWS
  FETCH NEXT @pagesize ROWS ONLY
END
GO

DECLARE @return_value int
EXEC @return_value = [dbo].[SP_Company_List] @pagesize = 1, @pageindex = 2
SELECT 'Return Value' = @return_value
GO

A: This bit gives you the ability to paginate using SQL Server, and newer versions of MySQL, and carries the total number of rows in every row. It uses your primary key to count the number of unique rows.

WITH T AS
(
  SELECT TABLE_ID,
         ROW_NUMBER() OVER (ORDER BY TABLE_ID) AS RN,
         (SELECT COUNT(TABLE_ID) FROM TABLE) AS TOTAL
  FROM TABLE (NOLOCK)
)
SELECT T2.FIELD1, T2.FIELD2, T2.FIELD3, T.TOTAL
FROM TABLE T2 (NOLOCK)
INNER JOIN T ON T2.TABLE_ID = T.TABLE_ID
WHERE T.RN >= 100 AND T.RN < 200

A: You didn't specify the language nor which driver you are using, therefore I'm describing it abstractly.
*Create a scrollable resultset / dataset. This requires a primary key on the table(s)
*Jump to the end
*Request the row count
*Jump to the start of the page
*Scroll through the rows until the end of the page
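Both the COUNT(*) OVER() answer and the scrollable-resultset recipe above boil down to "fetch one page plus the total row count". A minimal sketch, using Python's sqlite3 and an invented table, with a scalar subquery standing in for the window function (so it also works on engines without one):

```python
import sqlite3

# Illustrative table; the scalar subquery plays the role of COUNT(*) OVER()
# and also works on engines without window functions (e.g. SQL Server 2000).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO t (name) VALUES (?)",
                 [("row%d" % i,) for i in range(57)])

page_size, page_number = 10, 3          # page_number is 1-based
rows = conn.execute(
    "SELECT id, name, (SELECT COUNT(*) FROM t) AS total "
    "FROM t ORDER BY id LIMIT ? OFFSET ?",
    (page_size, (page_number - 1) * page_size)).fetchall()

total = rows[0][2]                      # same value repeated on every row
page_count = -(-total // page_size)     # ceiling division -> number of pages
```

Carrying the total on every row costs a little redundancy per row but saves the second round trip to the database.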
{ "language": "en", "url": "https://stackoverflow.com/questions/109232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "574" }
Q: Why isn't there a standard memswap function Why doesn't the C standard have a memswap function, which would probably look like:

int memswap(void *ptr1, void *ptr2, size_t nbytes)?

I know it'd be easy to write, but I think the libc could do some awesome tricks to speed it up, as some implementations do for memcpy.

A: This isn't something that is routinely required. The idea may have been considered and discarded because it is quite difficult to come up with an algorithm that is general purpose. Don't forget that C is an old language and extensions need to be generally useful. Possible error conditions:
*behaviour when the ranges being swapped overlap
*length of zero
*running out of memory (an optimal implementation might allocate memory to do this)
*null pointer
The best algorithm might also depend upon what you are doing, and so could be better coded directly by you:
*swapping structures is likely to be quicker using a temp structure and assignment
*small lengths - may be better allocating temporary memory
*long lengths - 'section' by section swap (where section is some optimal length)
*use of hardware copy functions

A: Probably because it's not needed very often - I use memset and memcpy reasonably often, but I don't know that I'd ever have used memswap if it were available.

A: It probably isn't required very often in C programming; in C++, where swap is a regular thing to do on class members, there's the std::swap algorithm, which is highly optimized for different types.

A: I think because it's not needed very often. However, there is an easy way to do this in C++:

#include <algorithm>
swap_ranges(ptr1, ptr1 + nbytes, ptr2)

It may not be quite as optimized as a compiler built-in, but it has the potential of being faster than a loop you write yourself, since it may have platform-specific optimizations that you would not implement. You do need to be careful with the above, because it assumes that ptr1 and ptr2 are char pointers.
The more canonical way to do this is:

#include <algorithm>
swap_ranges(ptr1, ptr1 + num_items, ptr2)
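The "'section' by section swap" idea from the first answer can be sketched language-neutrally. Here is a Python model on bytearrays (a real memswap would of course be written in C; the function name and chunk size are purely illustrative):

```python
def memswap(a, b, nbytes, section=4096):
    """Swap nbytes between two non-overlapping buffers one fixed-size
    section at a time, so the temporary copy never exceeds `section`
    bytes -- the "section by section" strategy mentioned above."""
    for off in range(0, nbytes, section):
        end = min(off + section, nbytes)
        tmp = bytes(a[off:end])      # bounded temporary copy
        a[off:end] = b[off:end]
        b[off:end] = tmp

x = bytearray(b"A" * 10)
y = bytearray(b"B" * 10)
memswap(x, y, 10, section=4)
```

The point of the bounded section is that, unlike swapping via one full-size temporary, memory use stays constant regardless of nbytes.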
{ "language": "en", "url": "https://stackoverflow.com/questions/109249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Binding TRANSFORM query in Access to a report What's the best way to bind variable column names to a report field in Access when using a crosstab query?

A: This page has an exhaustive example of setting up a dynamic column ("crosstab"-type) report in Access. http://www.blueclaw-db.com/report_dynamic_crosstab_field.htm (From Google search: access transform query report)

A: The best article I found for binding columns from a crosstab query to a report is from ewbi.develops's notes. Specifically,

PARAMETERS foryear Short;
TRANSFORM Sum(mytable.amount) AS total
SELECT mytable.project
FROM mytable
WHERE mytable.year In ([foryear],[foryear]+1)
GROUP BY mytable.project
PIVOT IIf(mytable.year=[foryear],"thisyear","nextyear") IN ("thisyear", "nextyear");

This only displays two columns that can be bound as needed.
{ "language": "en", "url": "https://stackoverflow.com/questions/109251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: WinForms context menu - not open in certain parts / detect underlying control I have a .NET 2.0 Windows Forms application. On this app there is a Form control with a Menu bar and a status bar. Also there's a ListView on this form. If I add a context menu to this form, the context menu will open when the user right-clicks any part of the form, including the menu bar and the status bar.
*How can I prevent the context menu from opening when the click happened on the menu bar / status bar? I want it to open only when clicking the "gray area" of the form.
*If the click happened above a control on this form (for example, on the ListView), how can I identify this? I'd like to know if the user right-clicked above the gray area or above the ListView, so I can enable/disable some menu items based on this.

A: After you've placed your StatusBar at the bottom and MenuStrip at the top:
*Set ContextMenuStrip on your form to None
*Place a standard Panel in the middle (between MenuStrip and StatusStrip) with the Dock property set to Fill.
*Set the ContextMenuStrip property on your Panel (instead of on the form), and place the ListView and all other controls that should go into the form in the Panel.
E.g.
~~~~~~~~~~~~~
MenuStrip
~~~~~~~~~~~~~
Panel. Dock=Fill. ContextMenuStrip=yourContextMenu.
~~~~~~~~~~~~~
StatusStrip
~~~~~~~~~~~~~

A: I found the answer:

Point clientPos = this.PointToClient(Form.MousePosition);
Control control = this.GetChildAtPoint(clientPos);

This should give the underlying control that was clicked on the Form, or null if the click was on the gray area. So we just need to test for the type of the control on the Opening event of the context menu. If it's MenuStrip, ToolStrip or StatusStrip, do e.Cancel = true;.
{ "language": "en", "url": "https://stackoverflow.com/questions/109262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: databinding a property to either of two dependency properties I have two custom controls that are analogous to a node and the control that draws links between nodes. I would like to have both controls written as much in xaml as possible. The link stores both nodes as dependency properties, and I use databinding to move the line between the nodes whenever the nodes move. It would be great to be able to change some other value of the line, for instance the stroke width, depending on the distance between the two nodes. So the property needs to update when either node moves, and I can't quite get my head around how that would work. Anyone got any ideas?

A: You can try doing something like this:
*as in the previous post, define a width, stroke (whatever you need) property on your link class
*define a MultiBinding applied to that property, passing your two nodes to the binding; it should look like:

<MultiBinding Converter="{StaticResource converter}">
  <Binding Path="Node1" RelativeSource|Source.../>
  <Binding Path="Node2" ... />
</MultiBinding>

*implement the interface IMultiValueConverter, which will basically calculate how the stroke should look based on the distance between nodes.
*in XAML, create an instance of your converter, and add it to your MultiBinding's Converter property.
The advantage of this solution is that you have a pretty clear class model and each class does simple tasks. Moreover, later on, you can configure your converter class to support extra cases without touching the node class, which stays simple and is designed simply for displaying nodes. In general, whenever you have to map multiple property values to one other property, you'll have to use a MultiBinding and a converter.

A: You could define a property StrokeWidth in your link class that gets calculated every time the nodes move and then bind the appropriate style property to it.
I suppose you could also try to do something with DataTriggers, but they need specific values to work with - you can't use any kind of expressions. This would make it difficult to have the solution scale well to a wide array of distances between the nodes.
{ "language": "en", "url": "https://stackoverflow.com/questions/109275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: In ASP.NET how do you get the physical file path when HttpContext.Current is NULL? I'm working with DotNetNuke's scheduler to schedule tasks and I'm looking to get the physical file path of an email template that I created. The problem is that HttpContext is NULL because the scheduled task is on a different thread and there is no HTTP request. How would you go about getting the file's physical path?

A: System.Web.Hosting.HostingEnvironment.MapPath is what you're looking for. Whenever you're using the Server or HttpContext.Current objects, check first to see if HostingEnvironment has what you need.

A: There are many ways of doing this. I personally get around it by storing path information as a config option for my modules; it isn't elegant, but it works and works every time. I believe Joe Brinkman has a blog posting somewhere on how to construct a new HttpContext for use inside the scheduler.

A: Since this process is really out-of-band in relation to the web site, maybe you can just put the path in a config file. May not be the best idea, but it is an alternative.

A: What does this.GetType().Assembly.Location say?

A: Can you look at the Assembly & the CodeBase paths like this:

Imports System.Reflection
Imports System.IO
...
Path.GetDirectoryName( Assembly.GetExecutingAssembly().CodeBase )

That kind of stuff doesn't always work, so what I would recommend doing is writing a log with a bunch of data about the assembly, to see what works in this location. It is what I had to do to get something similar when I was creating a COM component to be hosted in AppCenter. I used this to "get" what "APP_BASE" should be, and set that, so the app.config file would load properly.

Log.Write ( Assembly.GetExecutingAssembly().CodeBase )
Log.Write ( Assembly.GetExecutingAssembly().Location )
Log.Write ( Path.GetFullPath(".") )
Log.Write ( Application.StartupPath )
... and so on, whatever you can think of ...
{ "language": "en", "url": "https://stackoverflow.com/questions/109280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: BSCMAKE: error BK1506 : cannot open file StdAfx.sbr No such file or directory I have converted one of my VS2006 projects into VS2008 and when trying to build the project in VS2008 I get the above error. What is an .sbr file, and how can I fix the compile error? Any help is hugely appreciated.

A: You can go to: Configuration Properties -> C/C++ -> Browse Information and remove the Enable Browse Information setting (set it to No).

A: An .sbr file is used to keep the "browse information" for symbol browsing within the projects. It's created at the same time as its source .cpp file gets compiled. If VS cannot find an .sbr file, it means that the source .cpp was not compiled properly. Try to "rebuild" the project (rather than just "build" it); it may fix the error.

A: Check (manually) your .vcproj file for a <BrowseFileInformation></BrowseFileInformation> property tag in the configuration section for the configuration you are compiling. If your intermediate directory is the normal $(IntDir), then the empty property is telling the compilation to put the SBR files in the same directory as the source files, but the BSCMAKE command is looking for them in the $(IntDir) directory (and they aren't there). Remove the <BrowseFileInformation></BrowseFileInformation> lines in the .vcproj file (you will have to do this by manually editing the file; setting properties in VS2010 or VS2008 won't do it).

A: I got this problem by adding a new class into my project through the VS wizard. I had to change the location of my "class.cpp" and "class.h", so I copy-pasted them into the right directory. Then I added them into my project through the VS wizard with the new path, and I finally got the BSCMAKE error after generating (and regenerating) my project. I had this error just after another one, saying that my "class.cpp" couldn't be found. I got the solution to my problem thanks to SVN.
By comparing the current and the original version of my "project.vcproj" file, I realized that the class I added was set with the old path, so it couldn't find the right one. Hence, if you think that your error may have the same origin, what you have to do is:
-Open your "project.vcproj" file in an editor
-Search in the code for where the path of your "class.cpp" is set
-Change it to the right one
-Rebuild your project
It should work then.

A: I'm new to C++ and I'm using Visual Studio 2008. I was trying to add a new class to a large program and got the same error (BK1506). The problem for me was that I had not implemented my class correctly using:

namespace ns {
  class Name {
  };
}

Although this most likely wasn't the reason for your error, I would advise people to check this first, as the previous answers got me thinking my problem was more advanced than it really was.
{ "language": "en", "url": "https://stackoverflow.com/questions/109281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Testing rendering of a given layout with RSpec & Rails Is it possible to test the use of a given layout using RSpec with Rails? For example, I'd like a matcher that does the following: response.should use_layout('my_layout_name') I found a use_layout matcher when Googling, but it doesn't work, as neither the response nor the controller seems to have a layout property that the matcher was looking for.

A: I found an example of how to write a use_layout matcher that will do just that. Here's the code in case that link goes away:

# in spec_helper.rb
class UseLayout
  def initialize(expected)
    @expected = 'layouts/' + expected
  end

  def matches?(controller)
    @actual = controller.layout
    # @actual.equal?(@expected)
    @actual == @expected
  end

  def failure_message
    return "use_layout expected #{@expected.inspect}, got #{@actual.inspect}", @expected, @actual
  end

  def negative_failure_message
    return "use_layout expected #{@expected.inspect} not to equal #{@actual.inspect}", @expected, @actual
  end
end

def use_layout(expected)
  UseLayout.new(expected)
end

# in controller spec
response.should use_layout("application")

A: David Chelimsky posted a good answer over on the Ruby Forum: response.should render_template("layouts/some_layout")

A: I had to write the following to make this work: response.should render_template("layouts/some_folder/some_layout", "template-name")

A: Here is an updated version of the matcher. I've updated it to conform to the latest version of RSpec. I've added the relevant read-only attributes and removed the old return format.
# in spec_helper.rb
class UseLayout
  attr_reader :expected
  attr_reader :actual

  def initialize(expected)
    @expected = 'layouts/' + expected
  end

  def matches?(controller)
    if controller.is_a?(ActionController::Base)
      @actual = 'layouts/' + controller.class.read_inheritable_attribute(:layout)
    else
      @actual = controller.layout
    end
    @actual ||= "layouts/application"
    @actual == @expected
  end

  def description
    "Determines if a controller uses a layout"
  end

  def failure_message
    return "use_layout expected #{@expected.inspect}, got #{@actual.inspect}"
  end

  def negative_failure_message
    return "use_layout expected #{@expected.inspect} not to equal #{@actual.inspect}"
  end
end

def use_layout(expected)
  UseLayout.new(expected)
end

Additionally, the matcher now also works with layouts specified at the controller class level and can be used as follows:

class PostsController < ApplicationController
  layout "posts"
end

And in the controller spec you can simply use:

it { should use_layout("posts") }

A: Here's the solution I ended up going with. It's for RSpec 2 and Rails 3. I just added this file in the spec/support directory. The link is: https://gist.github.com/971342

# spec/support/matchers/render_layout.rb
ActionView::Base.class_eval do
  unless instance_methods.include?('_render_layout_with_tracking')
    def _render_layout_with_tracking(layout, locals, &block)
      controller.instance_variable_set(:@_rendered_layout, layout)
      _render_layout_without_tracking(layout, locals, &block)
    end
    alias_method_chain :_render_layout, :tracking
  end
end

# You can use this matcher anywhere that you have access to the controller instance,
# like in controller or integration specs.
#
# == Example Usage
#
# Expects no layout to be rendered:
#   controller.should_not render_layout
# Expects any layout to be rendered:
#   controller.should render_layout
# Expects app/views/layouts/application.html.erb to be rendered:
#   controller.should render_layout('application')
# Expects app/views/layouts/application.html.erb not to be rendered:
#   controller.should_not render_layout('application')
# Expects app/views/layouts/mobile/application.html.erb to be rendered:
#   controller.should render_layout('mobile/application')
RSpec::Matchers.define :render_layout do |*args|
  expected = args.first

  match do |c|
    actual = get_layout(c)
    if expected.nil?
      !actual.nil? # actual must be nil for the test to pass. Usage: should_not render_layout
    elsif actual
      actual == expected.to_s
    else
      false
    end
  end

  failure_message_for_should do |c|
    actual = get_layout(c)
    if actual.nil? && expected.nil?
      "expected a layout to be rendered but none was"
    elsif actual.nil?
      "expected layout #{expected.inspect} but no layout was rendered"
    else
      "expected layout #{expected.inspect} but #{actual.inspect} was rendered"
    end
  end

  failure_message_for_should_not do |c|
    actual = get_layout(c)
    if expected.nil?
      "expected no layout but #{actual.inspect} was rendered"
    else
      "expected #{expected.inspect} not to be rendered but it was"
    end
  end

  def get_layout(controller)
    if template = controller.instance_variable_get(:@_rendered_layout)
      template.virtual_path.sub(/layouts\//, '')
    end
  end
end

A: Shoulda Matchers provides a matcher for this scenario.
(Documentation) This seems to work:

expect(response).to render_with_layout('my_layout')

It produces appropriate failure messages like:

Expected to render with the "calendar_layout" layout, but rendered with "application", "application"

Tested with Rails 4.2, RSpec 3.3 and shoulda-matchers 2.8.0. Edit: shoulda-matchers provides this method in Shoulda::Matchers::ActionController::RenderWithLayoutMatcher.

A: Here's a version of dmcnally's code that allows no arguments to be passed, making "should use_layout" and "should_not use_layout" work (to assert that the controller is using any layout, or no layout, respectively - of which I would expect only the second to be useful, as you should be more specific if it is using a layout):

class UseLayout
  def initialize(expected = nil)
    if expected.nil?
      @expected = nil
    else
      @expected = 'layouts/' + expected
    end
  end

  def matches?(controller)
    @actual = controller.layout
    # @actual.equal?(@expected)
    if @expected.nil?
      @actual
    else
      @actual == @expected
    end
  end

  def failure_message
    if @expected.nil?
      return 'use_layout expected a layout to be used, but none was', 'any', @actual
    else
      return "use_layout expected #{@expected.inspect}, got #{@actual.inspect}", @expected, @actual
    end
  end

  def negative_failure_message
    if @expected.nil?
      return "use_layout expected no layout to be used, but #{@actual.inspect} found", 'any', @actual
    else
      return "use_layout expected #{@expected.inspect} not to equal #{@actual.inspect}", @expected, @actual
    end
  end
end

def use_layout(expected = nil)
  UseLayout.new(expected)
end
{ "language": "en", "url": "https://stackoverflow.com/questions/109284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: What would you like to see in a beginner's ASP.NET security book This is a shameless information-gathering exercise for my own book. One of the talks I give in the community is an introduction to web site vulnerabilities. Usually during the talk I can see at least two members of the audience go very pale; and this is basic stuff: Cross Site Scripting, SQL Injection, Information Leakage, Cross Site Form Requests and so on. So, if you can think back to being one, as a beginning web developer (be it ASP.NET or not), what do you feel would be useful information about web security and how to develop securely? I will already be covering the OWASP Top Ten. (And yes, this means stackoverflow will be in the acknowledgements list if someone comes up with something I haven't thought of yet!) It's all done now, and published. Thank you all for your responses.

A: First, I would point out the insecurities of the web in a way that makes them accessible to people for whom developing with security in mind may (unfortunately) be a new concept. For example, show them how to intercept an HTTP header and implement an XSS attack. The reason you want to show them the attacks is so they themselves have a better idea of what they're defending against. Talking about security beyond that is great, but without understanding the type of attack they're meant to thwart, it will be hard for them to accurately "test" their systems for security. Once they can test for security by trying to intercept messages, spoof headers, etc., then they at least know if whatever security they're trying to implement is working or not. You can teach them whatever methods you want for implementing that security with confidence, knowing that if they get it wrong, they will actually know about it, because it will fail the security tests you showed them to try.

A: Defensive programming as an archetypal topic which covers all the particular attacks, as most, if not all, of them are caused by not thinking defensively enough.
Make that subject the central column of the book. What would've served me well back then was knowing about techniques to never trust anything, not just one-stop tips like "do not allow SQL comments or special chars in your input". Another interesting thing I'd love to have learned earlier is how to actually test for them.

A: I think all vulnerabilities are based on programmers not thinking, either momentary lapses of judgement or something they haven't thought of. One big vulnerability in an application that I was tasked to "fix up" was the fact that the authentication method returned 0 (zero) when the user that was logging in was an administrator. Because the variable was originally initialized to 0, if any issue happened, such as the database being down causing an exception to be thrown, the variable would never be set to the proper "security code" and the user would then have admin access to the site. Absolutely horrible thought went into that process. So, that brings me to a major security concept: never set the initial value of a variable representing a "security level" or anything of that sort to something that represents total god control of the site. Better yet, use existing libraries that have gone through the fire of being used in massive numbers of production environments for a long period of time.

A: I would like to see how ASP.NET security is different from ASP Classic security.

A: Foxes

A: Good to hear that you will have the OWASP Top Ten. Why not also include coverage of the SANS/CWE Top 25 Programming mistakes?

A: I always try to show the worst-case scenario on things that might go wrong.
For instance, I show how a cross-site script injection can work as a black-box attack that even works on pages in the application that a hacker can't access himself, or how even an SQL injection can work as a black box and let a hacker steal your sensitive business data, even when your website connects to your database with a normal non-privileged login account.

A: How to make sure your security method is scalable with SQL Server. Especially how to avoid having SQL Server serialize requests from multiple users because they all connect with the same ID...
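The black-box SQL injection described above fits in a few lines. A sketch with Python's sqlite3 standing in for any database, against an invented users table:

```python
import sqlite3

# Hypothetical users table; sqlite3 stands in for any database here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "token-a"), ("bob", "token-b")])

attack = "nobody' OR '1'='1"

# Vulnerable: string concatenation lets the attacker's quote close the
# literal, and the injected OR then matches every row in the table.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '%s'" % attack).fetchall()

# Safe: a parameterized query passes the value out of band, so the
# attack string can only ever be compared as a literal user name.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attack,)).fetchall()
```

Note that the leak happens with an ordinary SELECT privilege; no admin-level database login is needed, which is exactly the point of the answer above.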
{ "language": "en", "url": "https://stackoverflow.com/questions/109293", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Are writes from within a Tomcat 6 CometProcessor non-blocking I have a CometProcessor implementation that is effectively doing a multicast to a potentially large number of clients. When an event occurs that needs to be propagated to all the clients, the CometProcessor will need to loop through the list of clients writing out the response. If writing responses blocks, then there is the possibility that potentially slow clients could have an adverse effect on the distribution of the event. Example:

public class MyCometProcessor implements CometProcessor {
    private List<Event> connections = new ArrayList<Event>();

    public void onEvent(byte[] someInfo) {
        synchronized (connections) {
            for (Event e : connections) {
                HttpServletResponse r = e.getHttpResponse();
                // -- Does this line block while waiting for I/O --
                r.getOutputStream().write(someInfo);
            }
        }
    }

    public void event(CometEvent event) {
        switch (event.getEventType()) {
            case READ:
                synchronized (connections) {
                    connections.add(event);
                }
                break;
            // ...
        }
    }
}

Update: Answering my own question. Writes from a CometProcessor are blocking: http://tomcat.apache.org/tomcat-6.0-doc/config/http.html See the table at the bottom of the page.

A: Tomcat 6's implementation of HttpServletResponse is the Response class. Internally it uses a CoyoteOutputStream wrapped around an OutputBuffer. As the name suggests, this class is a buffer, default size 8k. So I would say, at the very least, if you are writing less than 8k then you aren't going to block. You may need to flush, though, for your clients to see the data, which means that ultimately it depends on which connector variant you are using. In your Connector config, if you want non-blocking writes, then specify protocol=org.apache.coyote.http11.Http11NioProtocol. This Connector/Protocol is massively configurable: http://tomcat.apache.org/tomcat-6.0-doc/config/http.html
{ "language": "en", "url": "https://stackoverflow.com/questions/109294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: ASP.Net GridView Size Formatting I have an ASP.Net GridView control that I need to remain a fixed size whether there are 0 records or n records in the grid. The header and the footer should remain in the same position regardless of the amount of data in the grid. Obviously, I need to implement paging for larger datasets, but how would I achieve this fixed-size GridView? Ideally I would like this to be a reusable control. A: You may have to drop the headers and footers from the GridView altogether and add them to the page as separate table elements. You will need to make sure each table cell in the header and footer tables has a fixed width that corresponds to the width of the matching cell in your GridView. The GridView itself would probably be nested in a DIV tag of a fixed height. Something like the following: <table><tr><td style="width:100px">Header 1</td><td style="width:200px">Header 2</td></tr></table> <div style="width:300px;height:400px"> <asp:GridView>.....</asp:GridView> </div> <table><tr><td style="width:100px">Footer 1</td><td style="width:200px">Footer 2</td></tr></table> You will probably have to tweak the margin and padding values to get it all to line up exactly, though. A: Put the grid inside a div and set the div style as follows: <div style="width:100px; height:100px; overflow:scroll;"> <asp:GridView ID="GridView1" runat="server"> </asp:GridView> </div>
{ "language": "en", "url": "https://stackoverflow.com/questions/109305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Why use c strings in c++? Is there any good reason to use C-strings in C++ nowadays? My textbook uses them in examples at some points, and I really feel like it would be easier just to use a std::string. A: Because that's how they come from numerous API/libraries? A: Let's say you have some string constants in your code, which is a pretty common need. It's better to define these as C strings than as C++ objects -- more lightweight, portable, etc. Now, if you're going to be passing these strings to various functions, it's nice if these functions accept a C string instead of requiring a C++ string object. Of course, if the strings are mutable, then it's much more convenient to use C++ string objects. A: If a function needs a constant string I still prefer to use 'const char*' (or const wchar_t*) even if the program uses std::string, CString, EString or whatever elsewhere. There are just too many sources of strings in a large code base to be sure the caller will have the string as a std::string and 'const char*' is the lowest common denominator. A: Textbooks feature old-school C strings because many basic functions still expect them as arguments, or return them. Additionally, it gives some insight into the underlying structure of the string in memory. A: Memory control. I recently had to handle strings (actually blobs from a database) about 200-300 MB in size, in a massively multithreaded application. It was a situation where just-one-more copy of the string might have burst the 32bit address space. I had to know exactly how many copies of the string existed. Although I'm an STL evangelist, I used char * then because it gave me the guarantee that no extra memory or even extra copy was allocated. I knew exactly how much space it would need. Apart from that, standard STL string processing misses out on some great C functions for string processing/parsing. Thankfully, std::string has the c_str() method for const access to the internal buffer. 
To use printf() you still have to use char * though (what a crazy idea of the C++ team not to include (s)printf-like functionality, one of the most useful functions EVER in C). I hope boost::format will soon be included in the STL. A: The only reason I've had to use them is when interfacing with 3rd party libraries that use C style strings. There might also be esoteric situations where you would use C style strings for performance reasons, but more often than not, using methods on C++ strings is probably faster due to inlining and specialization, etc. You can use the c_str() method in many cases when working with those sorts of APIs, but you should be aware that the char * returned is const, and you should not modify the string via that pointer. In those sorts of situations, you can still use a vector<char> instead, and at least get the benefit of easier memory management. A: If the C++ code is "deep" (close to the kernel, heavily dependent on C libraries, etc.) you may want to use C strings explicitly to avoid lots of conversions into and out of std::string. Or, if you're interfacing with other language domains (Python, Ruby, etc.) you might do so for the same reason. Otherwise, use std::string. A: Some posts mention memory concerns. That might be a good reason to shun std::string, but char* probably is not the best replacement. It's still an OO language. Your own string class is probably better than a char*. It may even be more efficient - you can apply the Small String Optimization, for instance. In my case, I was trying to get about 1GB worth of strings out of a 2GB file, stuff them in records with about 60 fields and then sort them 7 times on different fields. My predecessor's code took 25 hours with char*, my code ran in 1 hour. A: 1) A "string constant" is a C string (const char *); converting it to const std::string& is a run-time process, not necessarily simple or optimized. 2) The fstream library uses C-style strings to pass file names.
My rule of thumb is to pass const std::string& if I am about to use the data as std::string anyway (say, when I store them in a vector), and const char * in other cases. A: After spending far, far, too much time debugging initialization rules and every conceivable string implementation on several platforms we require static strings to be const char*. After spending far, far, too much time debugging bad char* code and memory leaks I suggest that all non-static strings be some type of string object ... until profiling shows that you can and should do something better ;-) A: Legacy code that doesn't know of std::string. Also, before C++11 opening files with std::ifstream or std::ofstream was only possible with const char* as an input to the file name. A: A couple more memory control notes: C strings are POD types, so they can be allocated in your application's read-only data segment. If you declare and define std::string constants at namespace scope, the compiler will generate additional code that runs before main() that calls the std::string constructor for each constant. If your application has many constant strings (e.g. if you have generated C++ code that uses constant strings), C strings may be preferable in this situation. Some implementations of std::string support a feature called SSO ("short string optimization" or "small string optimization") where the std::string class contains storage for strings up to a certain length. This increases the size of std::string but often significantly reduces the frequency of free-store allocations/deallocations, improving performance. If your implementation of std::string does not support SSO, then constructing an empty std::string on the stack will still perform a free-store allocation. If that is the case, using temporary stack-allocated C strings may be helpful for performance-critical code that uses strings. Of course, you have to be careful not to shoot yourself in the foot when you do this. 
A: Given the choice, there is generally no reason to choose primitive C strings (char*) over C++ strings (std::string). However, often you don't have the luxury of choice. For instance, std::fstream's constructors take C strings, for historical reasons. Also, C libraries (you guessed it!) use C strings. In your own C++ code it is best to use std::string and extract the object's C string as needed by using the c_str() function of std::string. A: It depends on the libraries you're using. For example, when working with the MFC, it's often easier to use CString when working with various parts of the Windows API. It also seems to perform better than std::string in Win32 applications. However, std::string is part of the C++ standard, so if you want better portability, go with std::string. A: For applications such as most embedded platforms where you do not have the luxury of a heap to store the strings being manipulated, and where deterministic preallocation of string buffers is required. A: C strings don't carry the overhead of being a class. C strings generally can result in faster code, as they are closer to the machine level. This is not to say you can't write bad code with them. They can be misused, like every other construct. There is a wealth of library calls that demand them for historical reasons. Learn to use C strings and STL strings, and use each when it makes sense to do so. A: STL strings are certainly far easier to use, and I don't see any reason to not use them. If you need to interact with a library that only takes C-style strings as arguments, you can always call the c_str() method of the string class. A: The usual reason to do it is that you enjoy writing buffer overflows in your string handling. Counted strings are so superior to terminated strings it's hard to see why the C designers ever used terminated strings. It was a bad decision then; it's a bad decision now.
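To make the c_str() interop several answers recommend concrete, here is a minimal sketch; the legacy_length function is hypothetical, standing in for any third-party C API that only understands char pointers:

```cpp
#include <cassert>
#include <cstring>
#include <string>

// Hypothetical legacy C-style API that only understands char pointers.
std::size_t legacy_length(const char* s) { return std::strlen(s); }

// std::string -> C string: c_str() hands a read-only, NUL-terminated
// view to C code with no copy and no conversion.
std::size_t measured(const std::string& name) {
    return legacy_length(name.c_str());
}

// C string -> std::string: the converting constructor copies the bytes,
// after which normal value semantics apply.
std::string wrap(const char* raw) {
    return std::string(raw);
}
```

The pointer returned by c_str() is only valid while the string object is alive and unmodified, which is the usual foot-gun in this style of interop.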
{ "language": "en", "url": "https://stackoverflow.com/questions/109317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Using .Net what limitations (if any) are there in using the XmlSerializer? Using .Net what limitations (if any) are there in using the XmlSerializer? For example, can you serialize Images to XML? A: Another problem is that calling the constructor of XmlSerializer will compile code at runtime and will generate a temp DLL (in the %temp% folder) with the code to do the de/serialization. You can watch the code if you add the following lines to app.config: <system.diagnostics> <switches> <add name="XmlSerialization.Compilation" value="4"/> </switches> </system.diagnostics> This takes a lot of time the first time you serialize a class, and the code needs permission to compile and write to disk. A way to get around that is to precompile these DLLs using the sGen.exe tool that comes with VS 2005+. Look here for more information. A: I generally find the XmlSerializer to be a poor choice for any POCO that's more than just a DTO. If you require specific XML, you can go the Xml*Attribute and/or IXmlSerializable route - but you're left with an object pretty mangled. For some purposes, it is still an obvious choice - even with its limitations. But, for simply storing and reloading data, I've found BinaryFormatter to be a much easier choice with fewer pitfalls. Here's a list of some annoyances with XmlSerializer - most I've been bitten by at one point or another, others I found over at MSDN: * *Requires a public, no-args constructor *Only serializes public read/write properties and fields *Requires all types to be known *Actually calls into get_* and set_*, so validation, etc. will be run. This may be good or bad (think about the order of the calls as well) *Will only serialize IEnumerable or ICollection collections conforming to specific rules The XmlSerializer gives special treatment to classes that implement IEnumerable or ICollection. A class that implements IEnumerable must implement a public Add method that takes a single parameter.
The Add method's parameter must be of the same type as is returned from the Current property on the value returned from GetEnumerator, or one of that type's bases. A class that implements ICollection (such as CollectionBase) in addition to IEnumerable must have a public Item indexed property (indexer in C#) that takes an integer, and it must have a public Count property of type integer. The parameter to the Add method must be the same type as is returned from the Item property, or one of that type's bases. For classes that implement ICollection, values to be serialized are retrieved from the indexed Item property, not by calling GetEnumerator. * *Does not serialize IDictionary *Uses dynamically generated assemblies, which may not get unloaded from the app domain. To increase performance, the XML serialization infrastructure dynamically generates assemblies to serialize and deserialize specified types. The infrastructure finds and reuses those assemblies. This behavior occurs only when using the following constructors: XmlSerializer.XmlSerializer(Type) XmlSerializer.XmlSerializer(Type, String) If you use any of the other constructors, multiple versions of the same assembly are generated and never unloaded, which results in a memory leak and poor performance. * *Cannot serialize ArrayList[] or List<T>[] *Has other weird edge cases The XmlSerializer cannot be instantiated to serialize an enumeration if the following conditions are true: The enumeration is of type unsigned long (ulong in C#) and the enumeration contains any member with a value larger than 9,223,372,036,854,775,807. The XmlSerializer class no longer serializes objects that are marked as [Obsolete]. You must have permission to write to the temporary directory (as defined by the TEMP environment variable) to deserialize an object. * *Requires reading .InnerException to get any useful info on errors A: Not sure if there's any limitation.. 
But there was a memory leak bug in XmlSerialization in .NET 1.1; you sort of had to create a cached serializer object to get around this issue... In fact, I'm not sure if this issue has been fixed in .NET 2.0 or newer... A: The XmlSerializer has a few drawbacks. * *It must know all the types being serialized. You cannot pass it something by interface that represents a type that the serializer does not know. *It cannot do circular references. *It will serialize the same object multiple times if it is referenced multiple times in the object graph. *Cannot handle private field serialization. I (stupidly) wrote my own serializer to get around some of these problems. Don't do that; it is a lot of work and you will find subtle bugs in it months down the road. The only thing I gained in writing my own serializer and formatter was a greater appreciation of the minutiae involved in object graph serialization. I found the NetDataContractSerializer when WCF came out. It does all the stuff from above that XmlSerializer doesn't do. It drives the serialization in a similar fashion to the XmlSerializer. One decorates various properties or fields with attributes to inform the serializer what to serialize. I replaced the custom serializer I had written with the NetDataContractSerializer and was very happy with the results. I would highly recommend it. A: The one limitation that I can think of is that XmlSerialization is opt-out, meaning any properties of a class that you don't want serialized MUST be decorated with [XmlIgnore]. Contrast that to DataContractSerializer, where all properties are opt-in: you must explicitly declare inclusion attributes. Here's a good write-up. Images or their binary arrays are serialized as base64 encoded text by XmlSerializer. A: Any class you write can theoretically be fed through XmlSerializer. However, it only has access to the public fields, and the classes need to be marked with the correct attributes (e.g. XmlAttribute).
Even in the basic framework, not everything supports XmlSerializer. System.Collections.Generic.Dictionary<> for instance. A: For example, you can't serialize classes implementing IDictionary interface. A: For collections they need to have an Add method taking a single argument. If you just need a text format and not specifically xml you might try JSON. I've developed one for .NET, JsonExSerializer, and there are others available as well at http://www.json.org.
{ "language": "en", "url": "https://stackoverflow.com/questions/109318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: error PRJ0003 : Error spawning 'cl.exe' I converted a VS2006 VC++ project to VS2008. When compiling I get the above error. How do I fix it? Am I missing this exe? A: cl.exe is the VS2008 (and any other VS) C/C++ compiler, so check the more detailed error message for why it cannot be spawned. Be sure you've installed C++ language support when installing VS2008. A: There is a bug in the Visual Studio 2008 Standard Edition installer. It does not install cl.exe if you only install Visual C++ but not Visual C#. To work around this you have to install Visual C# even if you do not need it. A: It could be that your "path" environment variable does not contain the path to the folder where cl.exe is located. Another possible reason could be that when installing VS2008, you did not select the option to install the Win32 tools (which include the command line compiler). In any case, you may want to try to repair the installation of VS2008 (by running its setup via Control Panel - Add/Remove Programs), or use its "Add/Remove components" option and add the "Win32 tools" option (under Visual C++ - Visual C++ Tools). A: I had this problem under Windows 10 and solved it by adding the following paths to the PATH environment variable: C:\ProgramFilesC\VS2008\Common7\IDE C:\ProgramFilesC\VS2008\VC\bin\x86_amd64 where C:\ProgramFilesC\VS2008 is the path where I installed Visual Studio. A: Actually this error occurs because the path is not correctly set. Go to Tools > Options > Directories > Show directories for > select Executable files. Here, copy in the path of the folder where you installed, e.g. G:\Program files\vb (visual basic) 6.0\Visual Basic 6.0\VC98\BIN, then click OK. This may work for you.
{ "language": "en", "url": "https://stackoverflow.com/questions/109319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: PostgreSQL "DESCRIBE TABLE" How do you perform the equivalent of Oracle's DESCRIBE TABLE in PostgreSQL with the psql command? A: In addition to the PostgreSQL way (\d 'something' or \dt 'table' or \ds 'sequence' and so on), there is the SQL standard way, as shown here: select column_name, data_type, character_maximum_length, column_default, is_nullable from INFORMATION_SCHEMA.COLUMNS where table_name = '<name of table>'; It's supported by many db engines. A: This variation of the query (as explained in other answers) worked for me. SELECT COLUMN_NAME FROM information_schema.COLUMNS WHERE TABLE_NAME = 'city'; It's described here in detail: http://www.postgresqltutorial.com/postgresql-describe-table/ A: To improve on the other answer's SQL query (which is great!), here is a revised query. It also includes constraint names, inheritance information, and data types broken into their constituent parts (type, length, precision, scale). It also filters out columns that have been dropped (which still exist in the database).
SELECT n.nspname as schema, c.relname as table, f.attname as column, f.attnum as column_id, f.attnotnull as not_null, f.attislocal not_inherited, f.attinhcount inheritance_count, pg_catalog.format_type(f.atttypid,f.atttypmod) AS data_type_full, t.typname AS data_type_name, CASE WHEN f.atttypmod >= 0 AND t.typname <> 'numeric' THEN (f.atttypmod - 4) /* first 4 bytes are for storing actual length of data */ END AS data_type_length, CASE WHEN t.typname = 'numeric' THEN (((f.atttypmod - 4) >> 16) & 65535) END AS numeric_precision, CASE WHEN t.typname = 'numeric' THEN ((f.atttypmod - 4) & 65535) END AS numeric_scale, CASE WHEN p.contype = 'p' THEN 't' ELSE 'f' END AS is_primary_key, CASE WHEN p.contype = 'p' THEN p.conname END AS primary_key_name, CASE WHEN p.contype = 'u' THEN 't' ELSE 'f' END AS is_unique_key, CASE WHEN p.contype = 'u' THEN p.conname END AS unique_key_name, CASE WHEN p.contype = 'f' THEN 't' ELSE 'f' END AS is_foreign_key, CASE WHEN p.contype = 'f' THEN p.conname END AS foreignkey_name, CASE WHEN p.contype = 'f' THEN p.confkey END AS foreign_key_columnid, CASE WHEN p.contype = 'f' THEN g.relname END AS foreign_key_table, CASE WHEN p.contype = 'f' THEN p.conkey END AS foreign_key_local_column_id, CASE WHEN f.atthasdef = 't' THEN d.adsrc END AS default_value FROM pg_attribute f JOIN pg_class c ON c.oid = f.attrelid JOIN pg_type t ON t.oid = f.atttypid LEFT JOIN pg_attrdef d ON d.adrelid = c.oid AND d.adnum = f.attnum LEFT JOIN pg_namespace n ON n.oid = c.relnamespace LEFT JOIN pg_constraint p ON p.conrelid = c.oid AND f.attnum = ANY (p.conkey) LEFT JOIN pg_class AS g ON p.confrelid = g.oid WHERE c.relkind = 'r'::char AND f.attisdropped = false AND n.nspname = '%s' /* replace with schema name */ AND c.relname = '%s' /* replace with table name */ AND f.attnum > 0 ORDER BY f.attnum ; A: If you want to obtain it from a query instead of psql, you can query the catalog schema.
Here's a complex query that does that: SELECT f.attnum AS number, f.attname AS name, f.attnum, f.attnotnull AS notnull, pg_catalog.format_type(f.atttypid,f.atttypmod) AS type, CASE WHEN p.contype = 'p' THEN 't' ELSE 'f' END AS primarykey, CASE WHEN p.contype = 'u' THEN 't' ELSE 'f' END AS uniquekey, CASE WHEN p.contype = 'f' THEN g.relname END AS foreignkey, CASE WHEN p.contype = 'f' THEN p.confkey END AS foreignkey_fieldnum, CASE WHEN p.contype = 'f' THEN g.relname END AS foreignkey, CASE WHEN p.contype = 'f' THEN p.conkey END AS foreignkey_connnum, CASE WHEN f.atthasdef = 't' THEN d.adsrc END AS default FROM pg_attribute f JOIN pg_class c ON c.oid = f.attrelid JOIN pg_type t ON t.oid = f.atttypid LEFT JOIN pg_attrdef d ON d.adrelid = c.oid AND d.adnum = f.attnum LEFT JOIN pg_namespace n ON n.oid = c.relnamespace LEFT JOIN pg_constraint p ON p.conrelid = c.oid AND f.attnum = ANY (p.conkey) LEFT JOIN pg_class AS g ON p.confrelid = g.oid WHERE c.relkind = 'r'::char AND n.nspname = '%s' /* replace with schema name */ AND c.relname = '%s' /* replace with table name */ AND f.attnum > 0 ORDER BY number ; It's pretty complex but it does show you the power and flexibility of the PostgreSQL system catalog and should get you on your way to pg_catalog mastery ;-). Be sure to change out the %s's in the query. The first is the schema and the second is the table name. A: You can do that with a psql slash command: \d myTable describe table It also works for other objects: \d myView describe view \d myIndex describe index \d mySequence describe sequence Source: faqs.org A: You can also check using the query below: Select * from schema_name.table_name limit 0; Example: My table has 2 columns, name and pwd. (Screenshot omitted; it was taken in pgAdmin III.) A: In postgres, \d is used to describe the table structure, e.g. \d schema_name.table_name This command will give you the basic info about the table, such as columns, types and modifiers.
If you want more info about the table, use \d+ schema_name.table_name This will give you extra info, such as storage, stats target and description. A: The psql equivalent of DESCRIBE TABLE is \d table. See the psql portion of the PostgreSQL manual for more details. A: This should be the solution: SELECT * FROM information_schema.columns WHERE table_schema = 'your_schema' AND table_name = 'your_table' A: When your table name starts with a capital letter you should put your table name in quotation marks. Example: \d "Users" A: Try this (in the psql command-line tool): \d+ tablename See the manual for more info. A: The best way to describe a table (its columns, types, modifiers, etc.): \d+ tablename or \d tablename A: When your table is not part of the default schema, you should write: \d+ schema_name.table_name Otherwise, you would get an error saying that "the relation does not exist." A: You may do a \d *search pattern* with asterisks to find tables that match the search pattern you're interested in. A: Use the command \d tablename, e.g. \d queuerecords Table "public.queuerecords" Column | Type | Modifiers -----------+-----------------------------+----------- id | uuid | not null endtime | timestamp without time zone | payload | text | queueid | text | starttime | timestamp without time zone | status | text | A: In addition to the command line \d+ <table_name> you already found, you could also use the information schema to look up the column data, using information_schema.columns: SELECT * FROM information_schema.columns WHERE table_schema = 'your_schema' AND table_name = 'your_table' A: You can use this: SELECT attname FROM pg_attribute,pg_class WHERE attrelid=pg_class.oid AND relname='TableName' AND attstattarget <>0; A: Use the following SQL statement SELECT DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS WHERE table_name = 'tbl_name' AND COLUMN_NAME = 'col_name' If you replace tbl_name and col_name, it displays the data type of the particular column you are looking for.
A: In MySQL: DESCRIBE table_name. In PostgreSQL: \d table_name. Or, you can use this long command: SELECT a.attname AS Field, t.typname || '(' || a.atttypmod || ')' AS Type, CASE WHEN a.attnotnull = 't' THEN 'YES' ELSE 'NO' END AS Null, CASE WHEN r.contype = 'p' THEN 'PRI' ELSE '' END AS Key, (SELECT substring(pg_catalog.pg_get_expr(d.adbin, d.adrelid), '\'(.*)\'') FROM pg_catalog.pg_attrdef d WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND a.atthasdef) AS Default, '' as Extras FROM pg_class c JOIN pg_attribute a ON a.attrelid = c.oid JOIN pg_type t ON a.atttypid = t.oid LEFT JOIN pg_catalog.pg_constraint r ON c.oid = r.conrelid AND r.conname = a.attname WHERE c.relname = 'tablename' AND a.attnum > 0 ORDER BY a.attnum A: 1) PostgreSQL DESCRIBE TABLE using psql: in the psql command line tool, use \d table_name or \d+ table_name to find information on the columns of a table. 2) PostgreSQL DESCRIBE TABLE using information_schema: use a SELECT statement to query column_name, data_type and character_maximum_length from the columns table of the information_schema database: SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH from INFORMATION_SCHEMA.COLUMNS where table_name = 'tablename'; For more information: https://www.postgresqltutorial.com/postgresql-describe-table/ A: I'll add the pg_dump command even though the psql command was requested, because it generates an output more familiar to previous MySQL users. # sudo -u postgres pg_dump --table=my_table_name --schema-only mydb A: \dt is the command which lists all the tables present in a database. Using the \d and \d+ commands we can get the details of a table. The syntax is: \d table_name or \d+ table_name A: The command below can describe multiple tables simply: \dt <table> <table> The command below can describe multiple tables in detail: \d <table> <table> The command below can describe multiple tables in more detail: \d+ <table> <table> A: I worked out the following script to get the table schema.
select 'CREATE TABLE ' || 'yourschema.yourtable' || E'\n(\n' || array_to_string( array_agg( ' ' || column_expr ) , E',\n' ) || E'\n);\n' from ( SELECT ' ' || column_name || ' ' || data_type || coalesce('(' || character_maximum_length || ')', '') || case when is_nullable = 'YES' then ' NULL' else ' NOT NULL' end as column_expr FROM information_schema.columns WHERE table_schema || '.' || table_name = 'yourschema.yourtable' ORDER BY ordinal_position ) column_list;
{ "language": "en", "url": "https://stackoverflow.com/questions/109325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2103" }
Q: Batching in REST With web services it is considered a good practice to batch several service calls into one message to reduce the number of remote calls. Is there any way to do this with RESTful services? A: If you really need to batch, HTTP 1.1 supports a concept called HTTP pipelining that allows you to send multiple requests before receiving a response. Check it out here A: I don't see how batching requests makes any sense in REST. Since the URL in a REST-based service represents the operation to perform and the data on which to perform it, making batch requests would seriously break the conceptual model. An exception would be if you were performing the same operation on the same data multiple times. In this case you can either pass in multiple values for a request parameter or encode this repetition in the body (however this would only really work for PUT or POST). The Gliffy REST API supports adding multiple users to the same folder via POST /folders/ROOT/the/folder/name/users?userId=56&userId=87&userId=45 which is essentially: PUT /folders/ROOT/the/folder/name/users/56 PUT /folders/ROOT/the/folder/name/users/87 PUT /folders/ROOT/the/folder/name/users/45 As the other commenter pointed out, paging results from a GET can be done via request parameters: GET /some/list/of/resources?startIndex=10&pageSize=50 if the REST service supports it. A: I agree with Darrel Miller. HTTP already supports HTTP pipelining, and it also supports keep-alive, letting you stream multiple HTTP operations concurrently down the same socket without waiting for responses before streaming new requests to the server. So with HTTP pipelining and keep-alive you get the effect of batching while using the same underlying REST API - so there's usually no need for another REST API for your service A: The team with Astoria made good use of multi-part MIME to send a batch of calls. Unlike pipelining, the multi-part message can convey the intent of an atomic operation.
Seems rather elegant. * *Original blog post explaining the rationale *MSDN Documentation A: Of course there is a way, but it would require server-side support. There is no magical one-size-fits-all methodology that I know of.
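The repeated-parameter pattern from the Gliffy example above can be generated with the standard library; the path is the one from that example, and the userId values are the ones shown there:

```python
from urllib.parse import urlencode

# Build one batched request URL carrying the same operation for several
# resources, in the repeated-parameter style described above.
base = "/folders/ROOT/the/folder/name/users"
query = urlencode([("userId", uid) for uid in (56, 87, 45)])
url = f"{base}?{query}"
print(url)  # /folders/ROOT/the/folder/name/users?userId=56&userId=87&userId=45
```

Passing urlencode a list of (key, value) pairs, rather than a dict, is what allows the same key to repeat.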
{ "language": "en", "url": "https://stackoverflow.com/questions/109343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: All possible uses for the Application.SysCmd Method Is there a place to find all possible uses of the syscmd method in MS Access? I know Microsoft has a developer reference, but I have found there are many other uses for this method that are not listed here. A: Access itself provides an interface to the full object model of all libraries in use. In the VBE, hit F2 on the keyboard (or, from the VIEW menu, choose OBJECT BROWSER). Type "syscmd" in the search box and you'll get the full details on it. The variable names are verbose enough to explain just about everything you need to know. EDIT: The object browser doesn't give you anything but the SysCmd functions that have been documented by assigning named constants. But the recommendation to familiarize yourself with the object browser is a good one, especially if you right click on the CLASSES list and choose SHOW HIDDEN MEMBERS -- you can learn a lot from that. A: Here are a few of the "undocumented" functions, I know from experience that you can basically run anything that windows can do using syscmd once you understand how to structure the commands from examples like these. http://www.everythingaccess.com/tutorials.asp?ID=Undocumented-SysCmd-Functions From google search: syscmd access A: Here's a comprehensive list, including which Access versions each command applies to, translated into English. http://www.excite-webtl.jp/world/english/web/?wb_url=http%3A%2F%2Fwww.f3.dion.ne.jp%2F%7Eelement%2Fmsaccess%2FAcTipsUnDocumentedSysCmd.html&wb_lp=JAEN&wb_dis=2
{ "language": "en", "url": "https://stackoverflow.com/questions/109345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Bezier clipping I'm trying to find/make an algorithm to compute the intersection (a new filled object) of two arbitrary filled 2D objects. The objects are defined using either lines or cubic beziers and may have holes or self-intersect. I'm aware of several existing algorithms doing the same with polygons, listed here. However, I'd like to support beziers without subdividing them into polygons, and the output should have roughly the same control points as the input in areas where there are no intersections. This is for an interactive program to do some CSG but the clipping doesn't need to be real-time. I've searched for a while but haven't found good starting points. A: I know I'm at risk of being redundant, but I was investigating the same issue and found an approach that I'd read about in academic papers but hadn't found a working implementation of. You can rewrite the bezier curves as a set of two bi-variate cubic equations like this: * *∆x = ax₁*t₁^3 + bx₁*t₁^2 + cx₁*t₁ + dx₁ - ax₂*t₂^3 - bx₂*t₂^2 - cx₂*t₂ - dx₂ *∆y = ay₁*t₁^3 + by₁*t₁^2 + cy₁*t₁ + dy₁ - ay₂*t₂^3 - by₂*t₂^2 - cy₂*t₂ - dy₂ Obviously, the curves intersect for values of (t₁,t₂) where ∆x = ∆y = 0. Unfortunately, it's complicated by the fact that it is difficult to find roots in two dimensions, and approximate approaches tend to (as another writer put it) blow up. But if you're using integers or rational numbers for your control points, then you can use Groebner bases to rewrite your bi-variate order-3 polynomials into a (possibly-up-to-order-9-thus-your-nine-possible-intersections) monovariate polynomial. After that you just need to find your roots (for, say, t₂) in one dimension, and plug your results back into one of your original equations to find the other dimension. Buchberger has a layman-friendly introduction to Gröbner bases called "Gröbner Bases: A Short Introduction for Systems Theorists" that was very helpful for me. Google it.
The other paper that was helpful was one called "Fast, precise flattening of cubic Bézier path and offset curves" by TF Hain, which has lots of utility equations for bezier curves, including how to find the polynomial coefficients for the x and y equations. As for whether the Bezier clipping will help with this particular method, I doubt it, but bezier clipping is a method for narrowing down where intersections might be, not for finding a final (though possibly approximate) answer of where it is. A lot of time with this method will be spent in finding the mono-variate equation, and that task doesn't get any easier with clipping. Finding the roots is by comparison trivial. However, one of the advantages of this method is that it doesn't depend on recursively subdividing the curve, and the problem becomes a simple one-dimensional root-finding problem, which is not easy, but well documented. The major disadvantage is that computing Groebner bases is costly and becomes very unwieldy if you're dealing with floating point polynomials or using higher order Bezier curves. If you want some finished code in Haskell to find the intersections, let me know. A: I wrote code to do this a long, long time ago. The project I was working on defined 2D objects using piecewise Bezier boundaries that were generated as PostScript paths. The approach I used was: Let curves p, q, be defined by Bezier control points. Do they intersect? Compute the bounding boxes of the control points. If they don't overlap, then the two curves don't intersect. Otherwise: p.x(t), p.y(t), q.x(u), q.y(u) are cubic polynomials on 0 <= t,u <= 1.0. The distance squared (p.x - q.x) ** 2 + (p.y - q.y) ** 2 is a polynomial on (t,u). Use Newton-Raphson to try and solve that for zero. Discard any solutions outside 0 <= t,u <= 1.0 N-R may or may not converge. The curves might not intersect, or N-R can just blow up when the two curves are nearly parallel. 
So cut off N-R if it's not converging after some arbitrary number of iterations. This can be a fairly small number. If N-R doesn't converge on a solution, split one curve (say, p) into two curves pa, pb at t = 0.5. This is easy, it's just computing midpoints, as the linked article shows. Then recursively test (q, pa) and (q, pb) for intersections. (Note that in the next layer of recursion q has become p, so that p and q are alternately split on each ply of the recursion.) Most of the recursive calls return quickly because the bounding boxes are non-overlapping. You will have to cut off the recursion at some arbitrary depth, to handle weird cases where the two curves are parallel and don't quite touch, but the distance between them is arbitrarily small -- perhaps only 1 ULP of difference. When you do find an intersection, you're not done, because cubic curves can have multiple crossings. So you have to split each curve at the intersecting point, and recursively check for more intersections between (pa, qa), (pa, qb), (pb, qa), (pb, qb). A: I found the following publication to be the best source of information regarding Bezier Clipping: T. W. Sederberg, BYU, Computer Aided Geometric Design Course Notes Chapter 7 that talks about Curve Intersection is available online. It outlines 4 different approaches to find intersections and describes Bezier Clipping in detail: https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=1000&context=facpub A: There are a number of academic research papers on doing bezier clipping: http://www.andrew.cmu.edu/user/sowen/abstracts/Se306.html http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.61.6669 http://www.dm.unibo.it/~casciola/html/research_rr.html I recommend the interval methods because as you describe, you don't have to divide down to polygons, and you can get guaranteed results as well as define your own arbitrary precision for the resultset. 
For more information on interval rendering, you may also refer to http://www.sunfishstudio.com
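The midpoint split that the recursive-subdivision answer above relies on ("split one curve into two at t = 0.5") is just repeated averaging of adjacent control points (de Casteljau). A minimal sketch for one coordinate, to be applied to x and y separately; names are illustrative:

```java
// Illustrative sketch of the t = 0.5 de Casteljau split; one coordinate only.
public class BezierSplit {
    /** Splits a cubic with control values p0..p3 at t = 0.5 and returns the
        control values of the two halves, {p0, m01, m012, m} and
        {m, m123, m23, p3}, concatenated into one array of length 8. */
    public static double[] splitAtHalf(double p0, double p1, double p2, double p3) {
        double m01  = (p0 + p1) / 2;
        double m12  = (p1 + p2) / 2;
        double m23  = (p2 + p3) / 2;
        double m012 = (m01 + m12) / 2;
        double m123 = (m12 + m23) / 2;
        double m    = (m012 + m123) / 2; // the point on the curve at t = 0.5
        return new double[] { p0, m01, m012, m, m, m123, m23, p3 };
    }
}
```

Both halves share the curve point m, and the original endpoints are preserved, which is what makes the recursive bounding-box test above converge.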
Q: Sort a Map by values I am relatively new to Java, and often find that I need to sort a Map<Key, Value> on the values. Since the values are not unique, I find myself converting the keySet into an array, and sorting that array through array sort with a custom comparator that sorts on the value associated with the key. Is there an easier way? A: With Java 8, you can use the Streams API to do it in a significantly less verbose way: Map<K, V> sortedMap = map.entrySet().stream() .sorted(Entry.comparingByValue()) .collect(Collectors.toMap(Entry::getKey, Entry::getValue, (e1, e2) -> e1, LinkedHashMap::new)); A: Instead of using Collections.sort as some do, I'd suggest using Arrays.sort. Actually what Collections.sort does is something like this: public static <T extends Comparable<? super T>> void sort(List<T> list) { Object[] a = list.toArray(); Arrays.sort(a); ListIterator<T> i = list.listIterator(); for (int j=0; j<a.length; j++) { i.next(); i.set((T)a[j]); } } It just calls toArray on the list and then uses Arrays.sort. This way all the map entries will be copied three times: once from the map to the temporary list (be it a LinkedList or ArrayList), then to the temporary array and finally to the new map. My solution omits this one step as it does not create an unnecessary LinkedList. Here is the code, generic-friendly and performance-optimal: public static <K, V extends Comparable<?
super V>> Map<K, V> sortByValue(Map<K, V> map) { @SuppressWarnings("unchecked") Map.Entry<K,V>[] array = map.entrySet().toArray(new Map.Entry[map.size()]); Arrays.sort(array, new Comparator<Map.Entry<K, V>>() { public int compare(Map.Entry<K, V> e1, Map.Entry<K, V> e2) { return e1.getValue().compareTo(e2.getValue()); } }); Map<K, V> result = new LinkedHashMap<K, V>(); for (Map.Entry<K, V> entry : array) result.put(entry.getKey(), entry.getValue()); return result; } A: This is a variation of Anthony's answer, which doesn't work if there are duplicate values: public static <K, V extends Comparable<V>> Map<K, V> sortMapByValues(final Map<K, V> map) { Comparator<K> valueComparator = new Comparator<K>() { public int compare(K k1, K k2) { final V v1 = map.get(k1); final V v2 = map.get(k2); /* Not sure how to handle nulls ... */ if (v1 == null) { return (v2 == null) ? 0 : 1; } int compare = v2.compareTo(v1); if (compare != 0) { return compare; } else { Integer h1 = k1.hashCode(); Integer h2 = k2.hashCode(); return h2.compareTo(h1); } } }; Map<K, V> sortedByValues = new TreeMap<K, V>(valueComparator); sortedByValues.putAll(map); return sortedByValues; } Note that it's rather up in the air how to handle nulls. One important advantage of this approach is that it actually returns a Map, unlike some of the other solutions offered here. 
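For reference, the stream-based approach from the first answer can be packaged as a small generic helper; a sketch (the class and method names are mine), returning a LinkedHashMap so iteration follows the sorted order:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class MapSortDemo {
    /** Sorts a map by ascending value into a LinkedHashMap, whose iteration
        order is insertion order, i.e. the sorted order. */
    public static <K, V extends Comparable<? super V>> Map<K, V> sortByValue(Map<K, V> map) {
        return map.entrySet().stream()
                .sorted(Map.Entry.comparingByValue())
                .collect(Collectors.toMap(
                        Map.Entry::getKey,
                        Map.Entry::getValue,
                        (a, b) -> a,             // merge function; keys are unique anyway
                        LinkedHashMap::new));    // preserves the sorted order
    }
}
```

Unlike the TreeMap-based answers below, this keeps duplicate values without any comparator tricks, and lookups on the result still work.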
A: Best Approach import java.util.ArrayList; import java.util.Collections; import java.util.Comparator; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.Set; import java.util.Map.Entry; public class OrderByValue { public static void main(String a[]){ Map<String, Integer> map = new HashMap<String, Integer>(); map.put("java", 20); map.put("C++", 45); map.put("Unix", 67); map.put("MAC", 26); map.put("Why this kolavari", 93); Set<Entry<String, Integer>> set = map.entrySet(); List<Entry<String, Integer>> list = new ArrayList<Entry<String, Integer>>(set); Collections.sort( list, new Comparator<Map.Entry<String, Integer>>() { public int compare( Map.Entry<String, Integer> o1, Map.Entry<String, Integer> o2 ) { return (o1.getValue()).compareTo( o2.getValue() );//Ascending order //return (o2.getValue()).compareTo( o1.getValue() );//Descending order } } ); for(Map.Entry<String, Integer> entry:list){ System.out.println(entry.getKey()+" ==== "+entry.getValue()); } }} Output java ==== 20 MAC ==== 26 C++ ==== 45 Unix ==== 67 Why this kolavari ==== 93 A: Late Entry. With the advent of Java-8, we can use streams for data manipulation in a very easy/succinct way. You can use streams to sort the map entries by value and create a LinkedHashMap which preserves insertion-order iteration. Eg: LinkedHashMap sortedByValueMap = map.entrySet().stream() .sorted(comparing(Entry<Key,Value>::getValue).thenComparing(Entry::getKey)) //first sorting by Value, then sorting by Key(entries with same value) .collect(LinkedHashMap::new,(map,entry) -> map.put(entry.getKey(),entry.getValue()),LinkedHashMap::putAll); For reverse ordering, replace: comparing(Entry<Key,Value>::getValue).thenComparing(Entry::getKey) with comparing(Entry<Key,Value>::getValue).thenComparing(Entry::getKey).reversed() A: Major problem. 
If you use the first answer (Google takes you here), change the comparator to add an equal clause, otherwise you cannot get values from the sorted_map by keys: public int compare(String a, String b) { if (base.get(a) > base.get(b)) { return 1; } else if (base.get(a) < base.get(b)){ return -1; } return 0; // returning 0 would merge keys } A: There are a lot of answers for this question already, but none provided what I was looking for: a map implementation that returns keys and entries sorted by the associated value, and maintains this property as keys and values are modified in the map. Two other questions ask for this specifically. I cooked up a generic-friendly example that solves this use case. This implementation does not honor all of the contracts of the Map interface, such as reflecting value changes and removals in the sets returned from keySet() and entrySet() in the original object. I felt such a solution would be too large to include in a Stack Overflow answer. If I manage to create a more complete implementation, perhaps I will post it to GitHub and then link to it in an updated version of this answer. import java.util.*; /** * A map where {@link #keySet()} and {@link #entrySet()} return sets ordered * by associated values based on the comparator provided at construction * time. The order of two or more keys with identical values is not defined. * <p> * Several contracts of the Map interface are not satisfied by this minimal * implementation. */ public class ValueSortedMap<K, V> extends HashMap<K, V> { protected Map<V, Collection<K>> valueToKeysMap; // uses natural order of value object, if any public ValueSortedMap() { this((Comparator<? super V>) null); } public ValueSortedMap(Comparator<?
super V> valueComparator) { this.valueToKeysMap = new TreeMap<V, Collection<K>>(valueComparator); } public boolean containsValue(Object o) { return valueToKeysMap.containsKey(o); } public V put(K k, V v) { V oldV = null; if (containsKey(k)) { oldV = get(k); valueToKeysMap.get(oldV).remove(k); } super.put(k, v); if (!valueToKeysMap.containsKey(v)) { Collection<K> keys = new ArrayList<K>(); keys.add(k); valueToKeysMap.put(v, keys); } else { valueToKeysMap.get(v).add(k); } return oldV; } public void putAll(Map<? extends K, ? extends V> m) { for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) put(e.getKey(), e.getValue()); } public V remove(Object k) { V oldV = null; if (containsKey(k)) { oldV = get(k); super.remove(k); valueToKeysMap.get(oldV).remove(k); } return oldV; } public void clear() { super.clear(); valueToKeysMap.clear(); } public Set<K> keySet() { LinkedHashSet<K> ret = new LinkedHashSet<K>(size()); for (V v : valueToKeysMap.keySet()) { Collection<K> keys = valueToKeysMap.get(v); ret.addAll(keys); } return ret; } public Set<Map.Entry<K, V>> entrySet() { LinkedHashSet<Map.Entry<K, V>> ret = new LinkedHashSet<Map.Entry<K, V>>(size()); for (Collection<K> keys : valueToKeysMap.values()) { for (final K k : keys) { final V v = get(k); ret.add(new Map.Entry<K,V>() { public K getKey() { return k; } public V getValue() { return v; } public V setValue(V v) { throw new UnsupportedOperationException(); } }); } } return ret; } } A: Simple way to sort any map in Java 8 and above Map<String, Object> mapToSort = new HashMap<>(); List<Map.Entry<String, Object>> list = new LinkedList<>(mapToSort.entrySet()); Collections.sort(list, Comparator.comparing(o -> o.getValue().getAttribute())); HashMap<String, Object> sortedMap = new LinkedHashMap<>(); for (Map.Entry<String, Object> map : list) { sortedMap.put(map.getKey(), map.getValue()); } if you are using Java 7 and below Map<String, Object> mapToSort = new HashMap<>(); List<Map.Entry<String, Object>> list = new 
LinkedList<>(mapToSort.entrySet()); Collections.sort(list, new Comparator<Map.Entry<String, Object>>() { @Override public int compare(Map.Entry<String, Object> o1, Map.Entry<String, Object> o2) { return o1.getValue().getAttribute().compareTo(o2.getValue().getAttribute()); } }); HashMap<String, Object> sortedMap = new LinkedHashMap<>(); for (Map.Entry<String, Object> map : list) { sortedMap.put(map.getKey(), map.getValue()); } A: Depending on the context, you can use java.util.LinkedHashMap<T>, which remembers the order in which items are placed into the map. Otherwise, if you need to sort values based on their natural ordering, I would recommend maintaining a separate List which can be sorted via Collections.sort(). A: This is just too complicated. Maps were not meant to be sorted by value. The easiest way is to create your own class so it fits your requirement. In the example below you would pass the TreeMap a comparator at the place marked *, but the Java API gives the comparator only keys, not values. All of the examples stated here are based on two maps: one hash map and one new tree map. Which is odd. 
The example: Map<Driver driver, Float time> map = new TreeMap<Driver driver, Float time>(*); So change the map into a set this way: ResultsComparator rc = new ResultsComparator(); Set<Results> set = new TreeSet<Results>(rc); You will create class Results, public class Results { private Driver driver; private Float time; public Results(Driver driver, Float time) { this.driver = driver; this.time = time; } public Float getTime() { return time; } public void setTime(Float time) { this.time = time; } public Driver getDriver() { return driver; } public void setDriver (Driver driver) { this.driver = driver; } } and the Comparator class: public class ResultsComparator implements Comparator<Results> { public int compare(Results t, Results t1) { if (t.getTime() < t1.getTime()) { return 1; } else if (t.getTime().equals(t1.getTime())) { return 0; } else { return -1; } } } This way you can easily add more dependencies. And as a last point I'll add a simple iterator: Iterator it = set.iterator(); while (it.hasNext()) { Results r = (Results)it.next(); System.out.println( r.getDriver().toString() + " " + r.getTime() ); // or getName()/getSurname(), whatever the Driver class provides } A: As far as I know, the cleanest way is to use collections to sort a map by value: Map<String, Long> map = new HashMap<String, Long>(); // populate with data to sort on Value // use datastructure designed for sorting Queue queue = new PriorityQueue( map.size(), new MapComparable() ); queue.addAll( map.entrySet() ); // get a sorted map LinkedHashMap<String, Long> linkedMap = new LinkedHashMap<String, Long>(); for (Map.Entry<String, Long> entry; (entry = queue.poll())!=null;) { linkedMap.put(entry.getKey(), entry.getValue()); } public static class MapComparable implements Comparator<Map.Entry<String, Long>>{ public int compare(Entry<String, Long> e1, Entry<String, Long> e2) { return e1.getValue().compareTo(e2.getValue()); } } A: Since TreeMap<> does not work for values that can be equal, I used this: private 
<K, V extends Comparable<? super V>> List<Entry<K, V>> sort(Map<K, V> map) { List<Map.Entry<K, V>> list = new LinkedList<Map.Entry<K, V>>(map.entrySet()); Collections.sort(list, new Comparator<Map.Entry<K, V>>() { public int compare(Map.Entry<K, V> o1, Map.Entry<K, V> o2) { return o1.getValue().compareTo(o2.getValue()); } }); return list; } You might want to put list in a LinkedHashMap, but if you're only going to iterate over it right away, that's superfluous... A: This could be achieved very easily with java 8 public static LinkedHashMap<Integer, String> sortByValue(HashMap<Integer, String> map) { List<Map.Entry<Integer, String>> list = new ArrayList<>(map.entrySet()); list.sort(Map.Entry.comparingByValue()); LinkedHashMap<Integer, String> sortedMap = new LinkedHashMap<>(); list.forEach(e -> sortedMap.put(e.getKey(), e.getValue())); return sortedMap; } A: Java 8 offers a new answer: convert the entries into a stream, and use the comparator combinators from Map.Entry: Stream<Map.Entry<K,V>> sorted = map.entrySet().stream() .sorted(Map.Entry.comparingByValue()); This will let you consume the entries sorted in ascending order of value. If you want descending value, simply reverse the comparator: Stream<Map.Entry<K,V>> sorted = map.entrySet().stream() .sorted(Collections.reverseOrder(Map.Entry.comparingByValue())); If the values are not comparable, you can pass an explicit comparator: Stream<Map.Entry<K,V>> sorted = map.entrySet().stream() .sorted(Map.Entry.comparingByValue(comparator)); You can then proceed to use other stream operations to consume the data. For example, if you want the top 10 in a new map: Map<K,V> topTen = map.entrySet().stream() .sorted(Map.Entry.comparingByValue(Comparator.reverseOrder())) .limit(10) .collect(Collectors.toMap( Map.Entry::getKey, Map.Entry::getValue, (e1, e2) -> e1, LinkedHashMap::new)); The LinkedHashMap seen above iterates entries in the order in which they were inserted. 
Or print to System.out: map.entrySet().stream() .sorted(Map.Entry.comparingByValue()) .forEach(System.out::println); A: Important note: This code can break in multiple ways. If you intend to use the code provided, be sure to read the comments as well to be aware of the implications. For example, values can no longer be retrieved by their key. (get always returns null.) It seems much easier than all of the foregoing. Use a TreeMap as follows: public class Testing { public static void main(String[] args) { HashMap<String, Double> map = new HashMap<String, Double>(); ValueComparator bvc = new ValueComparator(map); TreeMap<String, Double> sorted_map = new TreeMap<String, Double>(bvc); map.put("A", 99.5); map.put("B", 67.4); map.put("C", 67.4); map.put("D", 67.3); System.out.println("unsorted map: " + map); sorted_map.putAll(map); System.out.println("results: " + sorted_map); } } class ValueComparator implements Comparator<String> { Map<String, Double> base; public ValueComparator(Map<String, Double> base) { this.base = base; } // Note: this comparator imposes orderings that are inconsistent with // equals. public int compare(String a, String b) { if (base.get(a) >= base.get(b)) { return -1; } else { return 1; } // returning 0 would merge keys } } Output: unsorted map: {D=67.3, A=99.5, B=67.4, C=67.4} results: {D=67.3, B=67.4, C=67.4, A=99.5} A: Based on @devinmoore code, map sorting methods using generics and supporting both ascending and descending ordering. /** * Sort a map by its keys in ascending order. * * @return new instance of {@link LinkedHashMap} containing the sorted entries of the supplied map. * @author Maxim Veksler */ public static <K, V> LinkedHashMap<K, V> sortMapByKey(final Map<K, V> map) { return sortMapByKey(map, SortingOrder.ASCENDING); } /** * Sort a map by its values in ascending order. * * @return new instance of {@link LinkedHashMap} containing the sorted entries of the supplied map. 
* @author Maxim Veksler */ public static <K, V> LinkedHashMap<K, V> sortMapByValue(final Map<K, V> map) { return sortMapByValue(map, SortingOrder.ASCENDING); } /** * Sort a map by its keys. * * @param sortingOrder {@link SortingOrder} enum specifying requested sorting order. * @return new instance of {@link LinkedHashMap} containing the sorted entries of the supplied map. * @author Maxim Veksler */ public static <K, V> LinkedHashMap<K, V> sortMapByKey(final Map<K, V> map, final SortingOrder sortingOrder) { Comparator<Map.Entry<K, V>> comparator = new Comparator<Entry<K,V>>() { public int compare(Entry<K, V> o1, Entry<K, V> o2) { return comparableCompare(o1.getKey(), o2.getKey(), sortingOrder); } }; return sortMap(map, comparator); } /** * Sort a map by its values. * * @param sortingOrder {@link SortingOrder} enum specifying requested sorting order. * @return new instance of {@link LinkedHashMap} containing the sorted entries of the supplied map. * @author Maxim Veksler */ public static <K, V> LinkedHashMap<K, V> sortMapByValue(final Map<K, V> map, final SortingOrder sortingOrder) { Comparator<Map.Entry<K, V>> comparator = new Comparator<Entry<K,V>>() { public int compare(Entry<K, V> o1, Entry<K, V> o2) { return comparableCompare(o1.getValue(), o2.getValue(), sortingOrder); } }; return sortMap(map, comparator); } @SuppressWarnings("unchecked") private static <T> int comparableCompare(T o1, T o2, SortingOrder sortingOrder) { int compare = ((Comparable<T>)o1).compareTo(o2); switch (sortingOrder) { case ASCENDING: return compare; case DESCENDING: return (-1) * compare; } return 0; } /** * Sort a map by supplied comparator logic. * * @return new instance of {@link LinkedHashMap} containing the sorted entries of the supplied map. * @author Maxim Veksler */ public static <K, V> LinkedHashMap<K, V> sortMap(final Map<K, V> map, final Comparator<Map.Entry<K, V>> comparator) { // Convert the map into a list of key,value pairs. 
List<Map.Entry<K, V>> mapEntries = new LinkedList<Map.Entry<K, V>>(map.entrySet()); // Sort the converted list according to supplied comparator. Collections.sort(mapEntries, comparator); // Build a new ordered map, containing the same entries as the old map. LinkedHashMap<K, V> result = new LinkedHashMap<K, V>(map.size() + (map.size() / 20)); for(Map.Entry<K, V> entry : mapEntries) { // We iterate on the mapEntries list which is sorted by the comparator putting new entries into // the targeted result which is a sorted map. result.put(entry.getKey(), entry.getValue()); } return result; } /** * Sorting order enum, specifying request result sort behavior. * @author Maxim Veksler * */ public static enum SortingOrder { /** * Resulting sort will be from smaller to biggest. */ ASCENDING, /** * Resulting sort will be from biggest to smallest. */ DESCENDING } A: Here is an OO solution (i.e., doesn't use static methods): import java.util.Collections; import java.util.Comparator; import java.util.HashMap; import java.util.Iterator; import java.util.LinkedList; import java.util.LinkedHashMap; import java.util.List; import java.util.Map; public class SortableValueMap<K, V extends Comparable<V>> extends LinkedHashMap<K, V> { public SortableValueMap() { } public SortableValueMap( Map<K, V> map ) { super( map ); } public void sortByValue() { List<Map.Entry<K, V>> list = new LinkedList<Map.Entry<K, V>>( entrySet() ); Collections.sort( list, new Comparator<Map.Entry<K, V>>() { public int compare( Map.Entry<K, V> entry1, Map.Entry<K, V> entry2 ) { return entry1.getValue().compareTo( entry2.getValue() ); } }); clear(); for( Map.Entry<K, V> entry : list ) { put( entry.getKey(), entry.getValue() ); } } private static void print( String text, Map<String, Double> map ) { System.out.println( text ); for( String key : map.keySet() ) { System.out.println( "key/value: " + key + "/" + map.get( key ) ); } } public static void main( String[] args ) { SortableValueMap<String, Double> map = new 
SortableValueMap<String, Double>(); map.put( "A", 67.5 ); map.put( "B", 99.5 ); map.put( "C", 82.4 ); map.put( "D", 42.0 ); print( "Unsorted map", map ); map.sortByValue(); print( "Sorted map", map ); } } Hereby donated to the public domain. A: Some simple changes in order to have a sorted map with pairs that have duplicate values. In the compare method (class ValueComparator), when values are equal do not return 0 but return the result of comparing the 2 keys. Keys are distinct in a map so you manage to keep duplicate values (which are sorted by keys by the way). So the above example could be modified like this: public int compare(Object a, Object b) { if((Double)base.get(a) < (Double)base.get(b)) { return 1; } else if(((Double)base.get(a)).equals(base.get(b))) { return ((String)a).compareTo((String)b); } else { return -1; } } } A: For sure the solution of Stephen is really great, but for those who can't use Guava: Here's my solution for sorting a map by value. This solution handles the case where there are twice the same value etc... 
// If you want to sort a map by value, and if there can be twice the same value: // here is your original map Map<String,Integer> mapToSortByValue = new HashMap<String, Integer>(); mapToSortByValue.put("A", 3); mapToSortByValue.put("B", 1); mapToSortByValue.put("C", 3); mapToSortByValue.put("D", 5); mapToSortByValue.put("E", -1); mapToSortByValue.put("F", 1000); mapToSortByValue.put("G", 79); mapToSortByValue.put("H", 15); // Sort all the map entries by value Set<Map.Entry<String,Integer>> set = new TreeSet<Map.Entry<String,Integer>>( new Comparator<Map.Entry<String,Integer>>(){ @Override public int compare(Map.Entry<String,Integer> obj1, Map.Entry<String,Integer> obj2) { Integer val1 = obj1.getValue(); Integer val2 = obj2.getValue(); // DUPLICATE VALUE CASE // If the values are equals, we can't return 0 because the 2 entries would be considered // as equals and one of them would be deleted (because we use a set, no duplicate, remember!) int compareValues = val1.compareTo(val2); if ( compareValues == 0 ) { String key1 = obj1.getKey(); String key2 = obj2.getKey(); int compareKeys = key1.compareTo(key2); if ( compareKeys == 0 ) { // what you return here will tell us if you keep REAL KEY-VALUE duplicates in your set // if you want to, do whatever you want but do not return 0 (but don't break the comparator contract!) return 0; } return compareKeys; } return compareValues; } } ); set.addAll(mapToSortByValue.entrySet()); // OK NOW OUR SET IS SORTED COOL!!!! // And there's nothing more to do: the entries are sorted by value! for ( Map.Entry<String,Integer> entry : set ) { System.out.println("Set entries: " + entry.getKey() + " -> " + entry.getValue()); } // But if you add them to an hashmap Map<String,Integer> myMap = new HashMap<String,Integer>(); // When iterating over the set the order is still good in the println... 
for ( Map.Entry<String,Integer> entry : set ) { System.out.println("Added to result map entries: " + entry.getKey() + " " + entry.getValue()); myMap.put(entry.getKey(), entry.getValue()); } // But once they are in the hashmap, the order is not kept! for ( Integer value : myMap.values() ) { System.out.println("Result map values: " + value); } // Also this way doesn't work: // Logical, because the entryset is a hashset for hashmaps and not a treeset // (and even if it was a treeset, it would be on the keys only) for ( Map.Entry<String,Integer> entry : myMap.entrySet() ) { System.out.println("Result map entries: " + entry.getKey() + " -> " + entry.getValue()); } // CONCLUSION: // If you want to iterate on a map ordered by value, you need to remember: // 1) Maps are only sorted by keys, so you can't sort them directly by value // 2) So you simply CAN'T return a map from a sortMapByValue function // 3) You can't reverse the keys and the values because you have duplicate values // This also means you can't use Guava/Commons bidirectional treemaps or the like either // SOLUTIONS // So you can: // 1) only sort the values, which is easy, but you lose the key/value link (since you have duplicate values) // 2) sort the map entries, but don't forget to handle the duplicate value case (like I did) // 3) if you really need to return a map, use a LinkedHashMap which keeps the insertion order The exec: http://www.ideone.com/dq3Lu The output: Set entries: E -> -1 Set entries: B -> 1 Set entries: A -> 3 Set entries: C -> 3 Set entries: D -> 5 Set entries: H -> 15 Set entries: G -> 79 Set entries: F -> 1000 Added to result map entries: E -1 Added to result map entries: B 1 Added to result map entries: A 3 Added to result map entries: C 3 Added to result map entries: D 5 Added to result map entries: H 15 Added to result map entries: G 79 Added to result map entries: F 1000 Result map values: 5 Result map values: -1 Result map values: 1000 Result map values: 79 Result map values: 3 
Result map values: 1 Result map values: 3 Result map values: 15 Result map entries: D -> 5 Result map entries: E -> -1 Result map entries: F -> 1000 Result map entries: G -> 79 Result map entries: A -> 3 Result map entries: B -> 1 Result map entries: C -> 3 Result map entries: H -> 15 Hope it will help some folks. A: Sorting the keys requires the Comparator to look up each value for each comparison. A more scalable solution would use the entrySet directly, since then the value would be immediately available for each comparison (although I haven't backed this up by numbers). Here's a generic version of such a thing: public static <K, V extends Comparable<? super V>> List<K> getKeysSortedByValue(Map<K, V> map) { final int size = map.size(); final List<Map.Entry<K, V>> list = new ArrayList<Map.Entry<K, V>>(size); list.addAll(map.entrySet()); final ValueComparator<V> cmp = new ValueComparator<V>(); Collections.sort(list, cmp); final List<K> keys = new ArrayList<K>(size); for (int i = 0; i < size; i++) { keys.add(list.get(i).getKey()); // add, not set: the list starts out empty } return keys; } private static final class ValueComparator<V extends Comparable<? super V>> implements Comparator<Map.Entry<?, V>> { public int compare(Map.Entry<?, V> o1, Map.Entry<?, V> o2) { return o1.getValue().compareTo(o2.getValue()); } } There are ways to reduce memory churn for the above solution. The first ArrayList created could for instance be re-used as a return value; this would require suppression of some generics warnings, but it might be worth it for re-usable library code. Also, the Comparator does not have to be re-allocated at every invocation. Here's a more efficient albeit less appealing version: public static <K, V extends Comparable<? 
super V>> List<K> getKeysSortedByValue2(Map<K, V> map) { final int size = map.size(); final List reusedList = new ArrayList(size); final List<Map.Entry<K, V>> meView = reusedList; meView.addAll(map.entrySet()); Collections.sort(meView, SINGLE); final List<K> keyView = reusedList; for (int i = 0; i < size; i++) { keyView.set(i, meView.get(i).getKey()); } return keyView; } private static final Comparator SINGLE = new ValueComparator(); Finally, if you need to continuously access the sorted information (rather than just sorting it once in a while), you can use an additional multi map. Let me know if you need more details... A: If you have duplicate values and only a small set of data (<1000) and your code is not performance-critical, you can just do the following: Map<String,Integer> tempMap=new HashMap<String,Integer>(inputUnsortedMap); LinkedHashMap<String,Integer> sortedOutputMap=new LinkedHashMap<String,Integer>(); for(int i=0;i<inputUnsortedMap.size();i++){ Map.Entry<String,Integer> maxEntry=null; Integer maxValue=-1; for(Map.Entry<String,Integer> entry:tempMap.entrySet()){ if(entry.getValue()>maxValue){ maxValue=entry.getValue(); maxEntry=entry; } } tempMap.remove(maxEntry.getKey()); sortedOutputMap.put(maxEntry.getKey(),maxEntry.getValue()); } inputUnsortedMap is the input to the code. The variable sortedOutputMap will contain the data in descending order when iterated over. To change order just change > to a < in the if statement. It's not the fastest sort, but it does the job without any additional dependencies. A: You can try Guava's multimaps: TreeMap<Integer, Collection<String>> sortedMap = new TreeMap<>( Multimaps.invertFrom(Multimaps.forMap(originalMap), ArrayListMultimap.<Integer, String>create()).asMap()); As a result you get a map from original values to collections of keys that correspond to them. This approach can be used even if there are multiple keys for the same value. 
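The TreeMap-with-value-comparator answers above (and their "returning 0 would merge keys" comments) can be condensed with a lambda; a sketch with my own names. It also demonstrates the trade-off the "Important note" answer warns about: because the comparator never returns 0, get() on the sorted map always returns null.

```java
import java.util.Comparator;
import java.util.Map;
import java.util.TreeMap;

public class ValueOrderedTreeMapDemo {
    /** Orders keys by their mapped value, ascending; ties return 1 instead of
        0 so keys with equal values are all kept. Because compare() never
        returns 0, TreeMap.get() can never match a key and returns null. */
    public static <K, V extends Comparable<? super V>> TreeMap<K, V> sortByValue(Map<K, V> map) {
        Comparator<K> byValue = (a, b) -> {
            int c = map.get(a).compareTo(map.get(b));
            return c != 0 ? c : 1; // never 0: equal values don't merge keys
        };
        TreeMap<K, V> sorted = new TreeMap<>(byValue);
        sorted.putAll(map);
        return sorted;
    }
}
```

Iteration order is correct and duplicates survive, but treat the result as iterate-only; for a map that still supports lookups, use the LinkedHashMap-based approaches instead.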
A: I've merged the solutions of user157196 and Carter Page: class MapUtil { public static <K, V extends Comparable<? super V>> Map<K, V> sortByValue( Map<K, V> map ){ ValueComparator<K,V> bvc = new ValueComparator<K,V>(map); TreeMap<K,V> sorted_map = new TreeMap<K,V>(bvc); sorted_map.putAll(map); return sorted_map; } } class ValueComparator<K, V extends Comparable<? super V>> implements Comparator<K> { Map<K, V> base; public ValueComparator(Map<K, V> base) { this.base = base; } public int compare(K a, K b) { int result = (base.get(a).compareTo(base.get(b))); if (result == 0) result=1; // returning 0 would merge keys return result; } } A: Here is the code using Java 8 with abacus-common: Map<String, Integer> map = N.asMap("a", 2, "b", 3, "c", 1, "d", 2); Map<String, Integer> sortedMap = Stream.of(map.entrySet()).sorted(Map.Entry.comparingByValue()).toMap(e -> e.getKey(), e -> e.getValue(), LinkedHashMap::new); N.println(sortedMap); // output: {c=1, a=2, d=2, b=3} Declaration: I'm the developer of abacus-common. A: Sort any HashMap the easiest way in Java. We need not store it in tree maps, lists, etc. Here, I would be using Java Streams: Let's sort this map by its value (ascending order) Map<String, Integer> mp= new HashMap<>(); mp.put("zebra", 1); mp.put("blossom", 2); mp.put("gemini", 3); mp.put("opera", 7); mp.put("adelaide", 10); Map<String, Integer> resultMap= mp.entrySet().stream().sorted(Map.Entry.<String, Integer>comparingByValue()).collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,(e1, e2) -> e1, LinkedHashMap::new)); You can now print the sorted resultMap in multiple ways, e.g. using enhanced for loops or iterators. 
The above map can also be sorted in descending order of the value Map<String, Integer> resultMap= mp.entrySet().stream().sorted(Map.Entry.<String, Integer>comparingByValue().reversed()).collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,(e1, e2) -> e1, LinkedHashMap::new)); Lets now take another scenario where we store "User" in the map and sort it based on "name" of the "User" in ascending order (lexicographically): User u1= new User("hi", 135); User u2= new User("bismuth", 900); User u3= new User("alloy", 675); User u4= new User("jupiter", 342); User u5= new User("lily", 941); Map<String, User> map2= new HashMap<>(); map2.put("zebra", u3); map2.put("blossom", u5); map2.put("gemini", u1); map2.put("opera", u2); map2.put("adelaide", u4); Map<String, User> resultMap= map2.entrySet().stream().sorted(Map.Entry.<String, User>comparingByValue( (User o1, User o2)-> o1.getName().compareTo(o2.getName()))).collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,(e1, e2) -> e2, LinkedHashMap::new)); class User { String name; int id; public User(String name, int id) { super(); this.name = name; this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } public int getId() { return id; } public void setId(int id) { this.id = id; } @Override public String toString() { return "User [name=" + name + ", id=" + id + "]"; } @Override public int hashCode() { final int prime = 31; int result = 1; result = prime * result + id; result = prime * result + ((name == null) ? 
0 : name.hashCode()); return result; } @Override public boolean equals(Object obj) { if (this == obj) return true; if (obj == null) return false; if (getClass() != obj.getClass()) return false; User other = (User) obj; if (id != other.id) return false; if (name == null) { if (other.name != null) return false; } else if (!name.equals(other.name)) return false; return true; } } A: The commons-collections library contains a solution called TreeBidiMap. Or, you could have a look at the Google Collections API. It has TreeMultimap which you could use. And if you don't want to use these framework... they come with source code. A: I've looked at the given answers, but a lot of them are more complicated than needed or remove map elements when several keys have same value. Here is a solution that I think fits better: public static <K, V extends Comparable<V>> Map<K, V> sortByValues(final Map<K, V> map) { Comparator<K> valueComparator = new Comparator<K>() { public int compare(K k1, K k2) { int compare = map.get(k2).compareTo(map.get(k1)); if (compare == 0) return 1; else return compare; } }; Map<K, V> sortedByValues = new TreeMap<K, V>(valueComparator); sortedByValues.putAll(map); return sortedByValues; } Note that the map is sorted from the highest value to the lowest. A: Three 1-line answers... I would use Google Collections Guava to do this - if your values are Comparable then you can use valueComparator = Ordering.natural().onResultOf(Functions.forMap(map)) Which will create a function (object) for the map [that takes any of the keys as input, returning the respective value], and then apply natural (comparable) ordering to them [the values]. 
If they're not comparable, then you'll need to do something along the lines of valueComparator = Ordering.from(comparator).onResultOf(Functions.forMap(map)) These may be applied to a TreeMap (as Ordering extends Comparator), or a LinkedHashMap after some sorting. NB: If you are going to use a TreeMap, remember that if a comparison == 0, then the item is already in the list (which will happen if you have multiple values that compare the same). To alleviate this, you could add your key to the comparator like so (presuming that your keys and values are Comparable): valueComparator = Ordering.natural().onResultOf(Functions.forMap(map)).compound(Ordering.natural()) = Apply natural ordering to the value mapped by the key, and compound that with the natural ordering of the key Note that this will still not work if your keys compare to 0, but this should be sufficient for most comparable items (as hashCode, equals and compareTo are often in sync...) See Ordering.onResultOf() and Functions.forMap(). Implementation So now that we've got a comparator that does what we want, we need to get a result from it. map = ImmutableSortedMap.copyOf(myOriginalMap, valueComparator); Now this will most likely work, but: * *needs to be done given a complete finished map *Don't try the comparators above on a TreeMap; there's no point trying to compare an inserted key when it doesn't have a value until after the put, i.e., it will break really fast Point 1 is a bit of a deal-breaker for me; google collections is incredibly lazy (which is good: you can do pretty much every operation in an instant; the real work is done when you start using the result), and this requires copying a whole map! "Full" answer/Live sorted map by values Don't worry though; if you were obsessed enough with having a "live" map sorted in this manner, you could solve not one but both(!)
of the above issues with something crazy like the following: Note: This has changed significantly in June 2012 - the previous code could never work: an internal HashMap is required to lookup the values without creating an infinite loop between the TreeMap.get() -> compare() and compare() -> get() import static org.junit.Assert.assertEquals; import java.util.HashMap; import java.util.Map; import java.util.TreeMap; import com.google.common.base.Functions; import com.google.common.collect.Ordering; class ValueComparableMap<K extends Comparable<K>,V> extends TreeMap<K,V> { //A map for doing lookups on the keys for comparison so we don't get infinite loops private final Map<K, V> valueMap; ValueComparableMap(final Ordering<? super V> partialValueOrdering) { this(partialValueOrdering, new HashMap<K,V>()); } private ValueComparableMap(Ordering<? super V> partialValueOrdering, HashMap<K, V> valueMap) { super(partialValueOrdering //Apply the value ordering .onResultOf(Functions.forMap(valueMap)) //On the result of getting the value for the key from the map .compound(Ordering.natural())); //as well as ensuring that the keys don't get clobbered this.valueMap = valueMap; } public V put(K k, V v) { if (valueMap.containsKey(k)){ //remove the key in the sorted set before adding the key again remove(k); } valueMap.put(k,v); //To get "real" unsorted values for the comparator return super.put(k, v); //Put it in value order } public static void main(String[] args){ TreeMap<String, Integer> map = new ValueComparableMap<String, Integer>(Ordering.natural()); map.put("a", 5); map.put("b", 1); map.put("c", 3); assertEquals("b",map.firstKey()); assertEquals("a",map.lastKey()); map.put("d",0); assertEquals("d",map.firstKey()); //ensure it's still a map (by overwriting a key, but with a new value) map.put("d", 2); assertEquals("b", map.firstKey()); //Ensure multiple values do not clobber keys map.put("e", 2); assertEquals(5, map.size()); assertEquals(2, (int) map.get("e")); assertEquals(2, 
(int) map.get("d")); } } When we put, we ensure that the hash map has the value for the comparator, and then put to the TreeSet for sorting. But before that we check the hash map to see that the key is not actually a duplicate. Also, the comparator that we create will also include the key so that duplicate values don't delete the non-duplicate keys (due to == comparison). These 2 items are vital for ensuring the map contract is kept; if you think you don't want that, then you're almost at the point of reversing the map entirely (to Map<V,K>). The constructor would need to be called as new ValueComparableMap(Ordering.natural()); //or new ValueComparableMap(Ordering.from(comparator)); A: Given Map Map<String, Integer> wordCounts = new HashMap<>(); wordCounts.put("USA", 100); wordCounts.put("jobs", 200); wordCounts.put("software", 50); wordCounts.put("technology", 70); wordCounts.put("opportunity", 200); Sort the map based on the value in ascending order Map<String,Integer> sortedMap = wordCounts.entrySet(). stream(). sorted(Map.Entry.comparingByValue()). collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue, (e1, e2) -> e1, LinkedHashMap::new)); System.out.println(sortedMap); Sort the map based on value in descending order Map<String,Integer> sortedMapReverseOrder = wordCounts.entrySet(). stream(). sorted(Map.Entry.comparingByValue(Comparator.reverseOrder())). collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue, (e1, e2) -> e1, LinkedHashMap::new)); System.out.println(sortedMapReverseOrder); Output: {software=50, technology=70, USA=100, jobs=200, opportunity=200} {jobs=200, opportunity=200, USA=100, technology=70, software=50} A: When I'm faced with this, I just create a list on the side. If you put them together in a custom Map implementation, it'll have a nice feel to it... You can use something like the following, performing the sort only when needed. (Note: I haven't really tested this, but it compiles... 
might be a silly little bug in there somewhere) (If you want it sorted by both keys and values, have the class extend TreeMap, don't define the accessor methods, and have the mutators call super.xxxxx instead of map_.xxxx) package com.javadude.sample; import java.util.ArrayList; import java.util.Collection; import java.util.Collections; import java.util.Comparator; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.Set; public class SortedValueHashMap<K, V> implements Map<K, V> { private Map<K, V> map_ = new HashMap<K, V>(); private List<V> valueList_ = new ArrayList<V>(); private boolean needsSort_ = false; private Comparator<V> comparator_; public SortedValueHashMap() { } public SortedValueHashMap(List<V> valueList) { valueList_ = valueList; } public List<V> sortedValues() { if (needsSort_) { needsSort_ = false; Collections.sort(valueList_, comparator_); } return valueList_; } // mutators public void clear() { map_.clear(); valueList_.clear(); needsSort_ = false; } public V put(K key, V value) { valueList_.add(value); needsSort_ = true; return map_.put(key, value); } public void putAll(Map<? extends K, ? extends V> m) { map_.putAll(m); valueList_.addAll(m.values()); needsSort_ = true; } public V remove(Object key) { V value = map_.remove(key); valueList_.remove(value); return value; } // accessors public boolean containsKey(Object key) { return map_.containsKey(key); } public boolean containsValue(Object value) { return map_.containsValue(value); } public Set<java.util.Map.Entry<K, V>> entrySet() { return map_.entrySet(); } public boolean equals(Object o) { return map_.equals(o); } public V get(Object key) { return map_.get(key); } public int hashCode() { return map_.hashCode(); } public boolean isEmpty() { return map_.isEmpty(); } public Set<K> keySet() { return map_.keySet(); } public int size() { return map_.size(); } public Collection<V> values() { return map_.values(); } } A: This method will just serve the purpose. 
(the 'setback' is that the Values must implement the java.util.Comparable interface) /** * Sort a map according to values. * @param <K> the key of the map. * @param <V> the value to sort according to. * @param mapToSort the map to sort. * @return a map sorted on the values. */ public static <K, V extends Comparable< ? super V>> Map<K, V> sortMapByValues(final Map <K, V> mapToSort) { List<Map.Entry<K, V>> entries = new ArrayList<Map.Entry<K, V>>(mapToSort.size()); entries.addAll(mapToSort.entrySet()); Collections.sort(entries, new Comparator<Map.Entry<K, V>>() { @Override public int compare( final Map.Entry<K, V> entry1, final Map.Entry<K, V> entry2) { return entry1.getValue().compareTo(entry2.getValue()); } }); Map<K, V> sortedMap = new LinkedHashMap<K, V>(); for (Map.Entry<K, V> entry : entries) { sortedMap.put(entry.getKey(), entry.getValue()); } return sortedMap; } http://javawithswaranga.blogspot.com/2011/06/generic-method-to-sort-hashmap.html A: The simplest brute-force sortHashMap method for HashMap<String, Long>: you can just copy-paste it and use it like this: public class Test { public static void main(String[] args) { HashMap<String, Long> hashMap = new HashMap<>(); hashMap.put("Cat", (long) 4); hashMap.put("Human", (long) 2); hashMap.put("Dog", (long) 4); hashMap.put("Fish", (long) 0); hashMap.put("Tree", (long) 1); hashMap.put("Three-legged-human", (long) 3); hashMap.put("Monkey", (long) 2); System.out.println(hashMap); //{Human=2, Cat=4, Three-legged-human=3, Monkey=2, Fish=0, Tree=1, Dog=4} System.out.println(sortHashMap(hashMap)); //{Cat=4, Dog=4, Three-legged-human=3, Human=2, Monkey=2, Tree=1, Fish=0} } public static LinkedHashMap<String, Long> sortHashMap(HashMap<String, Long> unsortedMap) { LinkedHashMap<String, Long> result = new LinkedHashMap<>(); //add String keys to an array: the array would get sorted, based on those keys' values ArrayList<String> sortedKeys = new ArrayList<>(); for (String key: unsortedMap.keySet()) { sortedKeys.add(key); } //sort the
ArrayList<String> of keys for (int i=0; i<unsortedMap.size(); i++) { for (int j=1; j<sortedKeys.size(); j++) { if (unsortedMap.get(sortedKeys.get(j)) > unsortedMap.get(sortedKeys.get(j-1))) { String temp = sortedKeys.get(j); sortedKeys.set(j, sortedKeys.get(j-1)); sortedKeys.set(j-1, temp); } } } // construct the result Map for (String key: sortedKeys) { result.put(key, unsortedMap.get(key)); } return result; } } A: posting my version of answer List<Map.Entry<String, Integer>> list = new ArrayList<>(map.entrySet()); Collections.sort(list, (obj1, obj2) -> obj2.getValue().compareTo(obj1.getValue())); Map<String, Integer> resultMap = new LinkedHashMap<>(); list.forEach(arg0 -> { resultMap.put(arg0.getKey(), arg0.getValue()); }); System.out.println(resultMap); A: Using LinkedList //Create a list by HashMap List<Map.Entry<String, Double>> list = new LinkedList<>(hashMap.entrySet()); //Sorting the list Collections.sort(list, new Comparator<Map.Entry<String, Double>>() { public int compare(Map.Entry<String, Double> o1, Map.Entry<String, Double> o2) { return (o1.getValue()).compareTo(o2.getValue()); } }); //put data from sorted list to hashmap HashMap<String, Double> sortedData = new LinkedHashMap<>(); for (Map.Entry<String, Double> data : list) { sortedData.put(data.getKey(), data.getValue()); } System.out.print(sortedData); A: This has the added benefit of being able to sort ascending or descending, using Java 8 import static java.util.Comparator.comparingInt; import static java.util.stream.Collectors.toMap; import java.util.LinkedHashMap; import java.util.Map; import java.util.Map.Entry; import java.util.stream.Collectors; import java.util.stream.Stream; class Utils { public static Map<String, Integer> sortMapBasedOnValues(Map<String, Integer> map, boolean descending) { int multiplyBy = (descending) ? 
-1: 1; Map<String, Integer> sorted = map.entrySet().stream() .sorted(comparingInt(e -> multiplyBy * e.getValue() )) .collect(toMap( Map.Entry::getKey, Map.Entry::getValue, (a, b) -> { throw new AssertionError();}, LinkedHashMap::new )); return sorted; } } A: map = your hashmap; List<Map.Entry<String, Integer>> list = new LinkedList<Map.Entry<String, Integer>>(map.entrySet()); Collections.sort(list, new cm());//IMP HashMap<String, Integer> sorted = new LinkedHashMap<String, Integer>(); for(Map.Entry<String, Integer> en: list){ sorted.put(en.getKey(),en.getValue()); } System.out.println(sorted);//sorted hashmap create new class class cm implements Comparator<Map.Entry<String, Integer>>{ @Override public int compare(Map.Entry<String, Integer> a, Map.Entry<String, Integer> b) { return (a.getValue()).compareTo(b.getValue()); } } A: Map<String, Integer> map = new HashMap<>(); map.put("b", 2); map.put("a", 1); map.put("d", 4); map.put("c", 3); // ----- Using Java 7 ------------------- List<Map.Entry<String, Integer>> entries = new ArrayList<>(map.entrySet()); Collections.sort(entries, (o1, o2) -> o1.getValue().compareTo(o2.getValue())); System.out.println(entries); // [a=1, b=2, c=3, d=4] // ----- Using Java 8 Stream API -------- map.entrySet().stream().sorted(Map.Entry.comparingByValue()).forEach(System.out::println); // {a=1, b=2, c=3, d=4} A: From http://www.programmersheaven.com/download/49349/download.aspx private static <K, V> Map<K, V> sortByValue(Map<K, V> map) { List<Entry<K, V>> list = new LinkedList<>(map.entrySet()); Collections.sort(list, new Comparator<Object>() { @SuppressWarnings("unchecked") public int compare(Object o1, Object o2) { return ((Comparable<V>) ((Map.Entry<K, V>) (o1)).getValue()).compareTo(((Map.Entry<K, V>) (o2)).getValue()); } }); Map<K, V> result = new LinkedHashMap<>(); for (Iterator<Entry<K, V>> it = list.iterator(); it.hasNext();) { Map.Entry<K, V> entry = (Map.Entry<K, V>) it.next(); result.put(entry.getKey(), entry.getValue()); } 
return result; } A: To accomplish this with the new features in Java 8: import static java.util.Map.Entry.comparingByValue; import static java.util.stream.Collectors.toList; <K, V> List<Entry<K, V>> sort(Map<K, V> map, Comparator<? super V> comparator) { return map.entrySet().stream().sorted(comparingByValue(comparator)).collect(toList()); } The entries are ordered by their values using the given comparator. Alternatively, if your values are mutually comparable, no explicit comparator is needed: <K, V extends Comparable<? super V>> List<Entry<K, V>> sort(Map<K, V> map) { return map.entrySet().stream().sorted(comparingByValue()).collect(toList()); } The returned list is a snapshot of the given map at the time this method is called, so neither will reflect subsequent changes to the other. For a live iterable view of the map: <K, V extends Comparable<? super V>> Iterable<Entry<K, V>> sort(Map<K, V> map) { return () -> map.entrySet().stream().sorted(comparingByValue()).iterator(); } The returned iterable creates a fresh snapshot of the given map each time it's iterated, so barring concurrent modification, it will always reflect the current state of the map. A: Create customized comparator and use it while creating new TreeMap object. 
class MyComparator implements Comparator<Object> { Map<String, Integer> map; public MyComparator(Map<String, Integer> map) { this.map = map; } public int compare(Object o1, Object o2) { if (map.get(o2) == map.get(o1)) return 1; else return ((Integer) map.get(o2)).compareTo((Integer) map.get(o1)); } } Use the below code in your main func Map<String, Integer> lMap = new HashMap<String, Integer>(); lMap.put("A", 35); lMap.put("B", 75); lMap.put("C", 50); lMap.put("D", 50); MyComparator comparator = new MyComparator(lMap); Map<String, Integer> newMap = new TreeMap<String, Integer>(comparator); newMap.putAll(lMap); System.out.println(newMap); Output: {B=75, D=50, C=50, A=35} A: While I agree that the constant need to sort a map is probably a smell, I think the following code is the easiest way to do it without using a different data structure. public class MapUtilities { public static <K, V extends Comparable<V>> List<Entry<K, V>> sortByValue(Map<K, V> map) { List<Entry<K, V>> entries = new ArrayList<Entry<K, V>>(map.entrySet()); Collections.sort(entries, new ByValue<K, V>()); return entries; } private static class ByValue<K, V extends Comparable<V>> implements Comparator<Entry<K, V>> { public int compare(Entry<K, V> o1, Entry<K, V> o2) { return o1.getValue().compareTo(o2.getValue()); } } } And here is an embarrassingly incomplete unit test: public class MapUtilitiesTest extends TestCase { public void testSorting() { HashMap<String, Integer> map = new HashMap<String, Integer>(); map.put("One", 1); map.put("Two", 2); map.put("Three", 3); List<Map.Entry<String, Integer>> sorted = MapUtilities.sortByValue(map); assertEquals("First", "One", sorted.get(0).getKey()); assertEquals("Second", "Two", sorted.get(1).getKey()); assertEquals("Third", "Three", sorted.get(2).getKey()); } } The result is a sorted list of Map.Entry objects, from which you can obtain the keys and values. 
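If all you need from that sorted entry list is the keys, a Java 8 stream does the same work in one pass (a sketch; the class and method names here are illustrative, not from the answer above):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class KeysByValue {
    // Keys of a map ordered by their values: sort the entries by value,
    // then project each entry down to its key.
    static <K, V extends Comparable<? super V>> List<K> keysSortedByValue(Map<K, V> map) {
        return map.entrySet().stream()
                .sorted(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        m.put("One", 1);
        m.put("Three", 3);
        m.put("Two", 2);
        System.out.println(keysSortedByValue(m)); // [One, Two, Three]
    }
}
```

Like the entry-list approach, this keeps duplicate values (nothing is merged away), since a List, unlike a TreeMap keyed on values, happily holds equal elements.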
A: Use a generic comparator such as: final class MapValueComparator<K,V extends Comparable<V>> implements Comparator<K> { private final Map<K,V> map; public MapValueComparator(Map<K,V> map) { this.map = map; } public int compare(K o1, K o2) { return map.get(o1).compareTo(map.get(o2)); } } A: Here's a generic-friendly version: public class MapUtil { public static <K, V extends Comparable<? super V>> Map<K, V> sortByValue(Map<K, V> map) { List<Entry<K, V>> list = new ArrayList<>(map.entrySet()); list.sort(Entry.comparingByValue()); Map<K, V> result = new LinkedHashMap<>(); for (Entry<K, V> entry : list) { result.put(entry.getKey(), entry.getValue()); } return result; } } A: The most-upvoted answer does not work when you have 2 items with equal values: the TreeMap leaves equal values out. For example: unsorted map key/value: D/67.3 key/value: A/99.5 key/value: B/67.4 key/value: C/67.5 key/value: E/99.5 results key/value: A/99.5 key/value: C/67.5 key/value: B/67.4 key/value: D/67.3 So it leaves out E!! For me it worked fine to adjust the comparator: if the values are equal, do not return 0 but -1. In the example: class ValueComparator implements Comparator { Map base; public ValueComparator(Map base) { this.base = base; } public int compare(Object a, Object b) { if((Double)base.get(a) < (Double)base.get(b)) { return 1; } else if((Double)base.get(a) == (Double)base.get(b)) { return -1; } else { return -1; } } } now it returns: unsorted map: key/value: D/67.3 key/value: A/99.5 key/value: B/67.4 key/value: C/67.5 key/value: E/99.5 results: key/value: A/99.5 key/value: E/99.5 key/value: C/67.5 key/value: B/67.4 key/value: D/67.3 as a response to Aliens (2011 nov.
22): I Am using this solution for a map of Integer Id's and names, but the idea is the same, so might be the code above is not correct (I will write it in a test and give you the correct code), this is the code for a Map sorting, based on the solution above: package nl.iamit.util; import java.util.Comparator; import java.util.Map; public class Comparators { public static class MapIntegerStringComparator implements Comparator { Map<Integer, String> base; public MapIntegerStringComparator(Map<Integer, String> base) { this.base = base; } public int compare(Object a, Object b) { int compare = ((String) base.get(a)) .compareTo((String) base.get(b)); if (compare == 0) { return -1; } return compare; } } } and this is the test class (I just tested it, and this works for the Integer, String Map: package test.nl.iamit.util; import java.util.HashMap; import java.util.TreeMap; import nl.iamit.util.Comparators; import org.junit.Test; import static org.junit.Assert.assertArrayEquals; public class TestComparators { @Test public void testMapIntegerStringComparator(){ HashMap<Integer, String> unSoretedMap = new HashMap<Integer, String>(); Comparators.MapIntegerStringComparator bvc = new Comparators.MapIntegerStringComparator( unSoretedMap); TreeMap<Integer, String> sorted_map = new TreeMap<Integer, String>(bvc); //the testdata: unSoretedMap.put(new Integer(1), "E"); unSoretedMap.put(new Integer(2), "A"); unSoretedMap.put(new Integer(3), "E"); unSoretedMap.put(new Integer(4), "B"); unSoretedMap.put(new Integer(5), "F"); sorted_map.putAll(unSoretedMap); Object[] targetKeys={new Integer(2),new Integer(4),new Integer(3),new Integer(1),new Integer(5) }; Object[] currecntKeys=sorted_map.keySet().toArray(); assertArrayEquals(targetKeys,currecntKeys); } } here is the code for the Comparator of a Map: public static class MapStringDoubleComparator implements Comparator { Map<String, Double> base; public MapStringDoubleComparator(Map<String, Double> base) { this.base = base; } //note if you 
want descending instead of ascending, swap 1 and -1 public int compare(Object a, Object b) { if ((Double) base.get(a) == (Double) base.get(b)) { return 0; } else if((Double) base.get(a) < (Double) base.get(b)) { return -1; }else{ return 1; } } } and this is the test case for this: @Test public void testMapStringDoubleComparator(){ HashMap<String, Double> unSoretedMap = new HashMap<String, Double>(); Comparators.MapStringDoubleComparator bvc = new Comparators.MapStringDoubleComparator( unSoretedMap); TreeMap<String, Double> sorted_map = new TreeMap<String, Double>(bvc); //the testdata: unSoretedMap.put("D",new Double(67.3)); unSoretedMap.put("A",new Double(99.5)); unSoretedMap.put("B",new Double(67.4)); unSoretedMap.put("C",new Double(67.5)); unSoretedMap.put("E",new Double(99.5)); sorted_map.putAll(unSoretedMap); Object[] targetKeys={"D","B","C","E","A"}; Object[] currecntKeys=sorted_map.keySet().toArray(); assertArrayEquals(targetKeys,currecntKeys); } of course you can make this a lot more generic, but I just needed it for 1 case (the Map) A: My solution is a quite simple approach in the way of using mostly given APIs. We use the feature of Map to export its content as a Set via the entrySet() method. We now have a Set containing Map.Entry objects. Okay, a Set does not carry an order, but we can take the content and put it into an ArrayList. It now has a random order, but we will sort it anyway. As ArrayList is a Collection, we now use the Collections.sort() method to bring order to chaos. Because our Map.Entry objects do not provide the kind of comparison we need, we provide a custom Comparator.
public static void main(String[] args) { HashMap<String, String> map = new HashMap<>(); map.put("Z", "E"); map.put("G", "A"); map.put("D", "C"); map.put("E", null); map.put("O", "C"); map.put("L", "D"); map.put("Q", "B"); map.put("A", "F"); map.put(null, "X"); MapEntryComparator mapEntryComparator = new MapEntryComparator(); List<Entry<String,String>> entryList = new ArrayList<>(map.entrySet()); Collections.sort(entryList, mapEntryComparator); for (Entry<String, String> entry : entryList) { System.out.println(entry.getKey() + " : " + entry.getValue()); } } A: If there is a preference of having a Map data structure that inherently sorts by values without having to trigger any sort methods or explicitly pass to a utility, then the following solutions may be applicable: (1) org.drools.chance.core.util.ValueSortedMap (JBoss project) maintains two maps internally one for lookup and one for maintaining the sorted values. Quite similar to previously added answers, but probably it is the abstraction and encapsulation part (including copying mechanism) that makes it safer to use from the outside. (2) http://techblog.molindo.at/2008/11/java-map-sorted-by-value.html avoids maintaining two maps and instead relies/extends from Apache Common's LinkedMap. (Blog author's note: as all the code here is in the public domain): // required to access LinkEntry.before and LinkEntry.after package org.apache.commons.collections.map; // SNIP: imports /** * map implementation based on LinkedMap that maintains a sorted list of * values for iteration */ public class ValueSortedHashMap extends LinkedMap { private final boolean _asc; // don't use super()! 
public ValueSortedHashMap(final boolean asc) { super(DEFAULT_CAPACITY); _asc = asc; } // SNIP: some more constructors with initial capacity and the like protected void addEntry(final HashEntry entry, final int hashIndex) { final LinkEntry link = (LinkEntry) entry; insertSorted(link); data[hashIndex] = entry; } protected void updateEntry(final HashEntry entry, final Object newValue) { entry.setValue(newValue); final LinkEntry link = (LinkEntry) entry; link.before.after = link.after; link.after.before = link.before; link.after = link.before = null; insertSorted(link); } private void insertSorted(final LinkEntry link) { LinkEntry cur = header; // iterate whole list, could (should?) be replaced with quicksearch // start at end to optimize speed for in-order insertions while ((cur = cur.before) != header && !insertAfter(cur, link)) {} link.after = cur.after; link.before = cur; cur.after.before = link; cur.after = link; } protected boolean insertAfter(final LinkEntry cur, final LinkEntry link) { if (_asc) { return ((Comparable) cur.getValue()) .compareTo((V) link.getValue()) <= 0; } else { return ((Comparable) cur.getValue()) .compareTo((V) link.getValue()) >= 0; } } public boolean isAscending() { return _asc; } } (3) Write a custom Map, or extend from LinkedHashMap, that will only sort during enumeration (e.g., values(), keySet(), entrySet()) as needed. The inner implementation/behavior is abstracted from the one using this class, but it appears to the client of this class that values are always sorted when requested for enumeration. This class hopes that sorting will happen mostly once if all put operations have been completed before enumerations. The sorting method adopts some of the previous answers to this question.
public class SortByValueMap<K, V extends Comparable<? super V>> implements Map<K, V> { private boolean isSortingNeeded = false; private final Map<K, V> map = new LinkedHashMap<>(); @Override public V put(K key, V value) { isSortingNeeded = true; return map.put(key, value); } @Override public void putAll(Map<? extends K, ? extends V> m) { isSortingNeeded = true; map.putAll(m); } @Override public Set<K> keySet() { sort(); return map.keySet(); } @Override public Set<Entry<K, V>> entrySet() { sort(); return map.entrySet(); } @Override public Collection<V> values() { sort(); return map.values(); } private void sort() { if (!isSortingNeeded) { return; } List<Entry<K, V>> list = new ArrayList<>(size()); for (Iterator<Map.Entry<K, V>> it = map.entrySet().iterator(); it.hasNext();) { Map.Entry<K, V> entry = it.next(); list.add(entry); it.remove(); } list.sort(Entry.comparingByValue()); for (Entry<K, V> entry : list) { map.put(entry.getKey(), entry.getValue()); } isSortingNeeded = false; } @Override public String toString() { sort(); return map.toString(); } } (4) Guava offers ImmutableMap.Builder.orderEntriesByValue(Comparator valueComparator) although the resulting map will be immutable: Configures this Builder to order entries by value according to the specified comparator. The sort order is stable, that is, if two entries have values that compare as equivalent, the entry that was inserted first will be first in the built map's iteration order.
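The stability Guava documents there can be reproduced with plain JDK streams, since Stream.sorted() is guaranteed to be a stable sort (a sketch, not Guava's actual implementation; the class and method names are my own):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class StableValueOrder {
    // Entries with equal values keep their encounter order, because
    // Stream.sorted() is a stable sort and LinkedHashMap preserves
    // the order in which the collector inserts entries.
    static <K, V extends Comparable<? super V>> Map<K, V> orderByValue(Map<K, V> map) {
        return map.entrySet().stream()
                .sorted(Map.Entry.comparingByValue())
                .collect(Collectors.toMap(
                        Map.Entry::getKey, Map.Entry::getValue,
                        (a, b) -> a, LinkedHashMap::new));
    }

    public static void main(String[] args) {
        // LinkedHashMap input gives a deterministic encounter order.
        Map<String, Integer> m = new LinkedHashMap<>();
        m.put("first", 2);
        m.put("low", 1);
        m.put("second", 2);
        System.out.println(orderByValue(m)); // {low=1, first=2, second=2}
    }
}
```

Note that "first" and "second" share the value 2 and come out in their original relative order, which is exactly the stability guarantee the Guava quote describes.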
A: I rewrote devinmoore's method that sorts a map by its value without using an Iterator: public static <K, V extends Comparable<? super V>> Map<K, V> sortMapByValue(Map<K, V> inputMap) { Set<Entry<K, V>> set = inputMap.entrySet(); List<Entry<K, V>> list = new ArrayList<Entry<K, V>>(set); Collections.sort(list, new Comparator<Map.Entry<K, V>>() { @Override public int compare(Entry<K, V> o1, Entry<K, V> o2) { return (o1.getValue()).compareTo( o2.getValue() ); //Ascending order } } ); Map<K, V> sortedMap = new LinkedHashMap<>(); for(Map.Entry<K, V> entry : list){ sortedMap.put(entry.getKey(), entry.getValue()); } return sortedMap; } Note that we used LinkedHashMap as the output map, because our list has been sorted by value and the output map must preserve that insertion order. If you use, for example, a TreeMap as your output map, your map will be sorted by keys again! This is the main method: public static void main(String[] args) { Map<String, String> map = new HashMap<>(); map.put("3", "three"); map.put("1", "one"); map.put("5", "five"); System.out.println("Input Map:" + map); System.out.println("Sorted Map:" + sortMapByValue(map)); } Finally, this is the output: Input Map:{1=one, 3=three, 5=five} Sorted Map:{5=five, 1=one, 3=three} A: Using Guava library: public static <K,V extends Comparable<V>> SortedMap<K,V> sortByValue(Map<K,V> original){ var comparator = Ordering.natural() .reverse() // highest first .nullsLast() .onResultOf(Functions.forMap(original, null)) .compound(Ordering.usingToString()); return ImmutableSortedMap.copyOf(original, comparator); } A: creates a list of entries for each value, where the values are sorted requires Java 8 or above Map<Double,List<Entry<String,Double>>> sorted = map.entrySet().stream().collect( Collectors.groupingBy( Entry::getValue, TreeMap::new, Collectors.mapping( Function.identity(), Collectors.toList() ) ) ); using the map {[A=99.5], [B=67.4], [C=67.4], [D=67.3]} gets {67.3=[D=67.3], 67.4=[B=67.4, C=67.4],
99.5=[A=99.5]} …and how to access each entry one after the other: sorted.entrySet().forEach( e -> e.getValue().forEach( l -> System.out.println( l ) ) ); D=67.3 B=67.4 C=67.4 A=99.5 A: I can give you an example, but I'm not sure this is what you need. map = {10 = 3, 11 = 1, 12 = 2} Let's say you want the top 2 most frequent keys, which are (10, 12). The easiest way is to use a PriorityQueue to sort based on the value of the map. PriorityQueue<Integer> pq = new PriorityQueue<>((a, b) -> map.get(a) - map.get(b)); for(int key: map.keySet()) { pq.add(key); if(pq.size() > 2) { pq.poll(); } } // Now pq has the top 2 most frequent keys based on value. It sorts by the value. A: In TreeMap, keys are sorted in natural order. For example, if you are sorting numbers stored as strings (notice the ordering of 4): {0=0, 10=10, 20=20, 30=30, 4=4, 50=50, 60=60, 70=70} To fix this in Java 8, first compare string length and then compare the strings themselves: Map<String, String> sortedMap = new TreeMap<>(Comparator.comparingInt(String::length) .thenComparing(Function.identity())); {0=0, 4=4, 10=10, 20=20, 30=30, 50=50, 60=60, 70=70} A: public class Test { public static void main(String[] args) { TreeMap<Integer, String> hm=new TreeMap(); hm.put(3, "arun singh"); hm.put(5, "vinay singh"); hm.put(1, "bandagi singh"); hm.put(6, "vikram singh"); hm.put(2, "panipat singh"); hm.put(28, "jakarta singh"); ArrayList<String> al=new ArrayList(hm.values()); Collections.sort(al, new myComparator()); System.out.println("//sort by values \n"); for(String obj: al){ for(Map.Entry<Integer, String> map2:hm.entrySet()){ if(map2.getValue().equals(obj)){ System.out.println(map2.getKey()+" "+map2.getValue()); } } } } } class myComparator implements Comparator{ @Override public int compare(Object o1, Object o2) { String o3=(String) o1; String o4 =(String) o2; return o3.compareTo(o4); } } OUTPUT= //sort by values 3 arun singh 1 bandagi singh 28 jakarta singh 2 panipat singh 6 vikram singh 5 vinay singh A: Geeks For Geeks on sorting the HashMap by Value Input : Key =
Math, Value = 98 Key = Data Structure, Value = 85 Key = Database, Value = 91 Key = Java, Value = 95 Key = Operating System, Value = 79 Key = Networking, Value = 80 Output : Key = Operating System, Value = 79 Key = Networking, Value = 80 Key = Data Structure, Value = 85 Key = Database, Value = 91 Key = Java, Value = 95 Key = Math, Value = 98 Solution: The idea is to store the entry set in a list and sort the list on the basis of values. Then fetch values and keys from the list and put them in a new hashmap. Thus, a new hashmap is sorted according to values. Below is the implementation of the above idea: // Java program to sort hashmap by values import java.util.*; import java.lang.*; public class GFG { // function to sort hashmap by values public static HashMap<String, Integer> sortByValue(HashMap<String, Integer> hm) { // Create a list from elements of HashMap List<Map.Entry<String, Integer> > list = new LinkedList<Map.Entry<String, Integer> >(hm.entrySet()); // Sort the list Collections.sort(list, new Comparator<Map.Entry<String, Integer> >() { public int compare(Map.Entry<String, Integer> o1, Map.Entry<String, Integer> o2) { return (o1.getValue()).compareTo(o2.getValue()); } }); // put data from sorted list to hashmap HashMap<String, Integer> temp = new LinkedHashMap<String, Integer>(); for (Map.Entry<String, Integer> aa : list) { temp.put(aa.getKey(), aa.getValue()); } return temp; } // Driver Code public static void main(String[] args) { HashMap<String, Integer> hm = new HashMap<String, Integer>(); // enter data into hashmap hm.put("Math", 98); hm.put("Data Structure", 85); hm.put("Database", 91); hm.put("Java", 95); hm.put("Operating System", 79); hm.put("Networking", 80); Map<String, Integer> hm1 = sortByValue(hm); // print the sorted hashmap for (Map.Entry<String, Integer> en : hm1.entrySet()) { System.out.println("Key = " + en.getKey() + ", Value = " + en.getValue()); } } } Output Key = Operating System, Value = 79 Key = Networking, Value = 80 Key = Data 
Structure, Value = 85 Key = Database, Value = 91 Key = Java, Value = 95 Key = Math, Value = 98 A: I think the best way is to use a special data structure. You may think of TreeMap, but values may not be unique in the general case. So, your choice is a PriorityQueue (note that a PriorityQueue's iterator does not traverse the elements in priority order, so the entries are polled off into a list): public static <K, V> List<Map.Entry<K, V>> sortByValue( Map<K, V> map, Comparator<V> valueComparator) { Queue<Map.Entry<K, V>> queue = new PriorityQueue<>((one, two) -> valueComparator.compare(one.getValue(), two.getValue())); queue.addAll(map.entrySet()); List<Map.Entry<K, V>> sorted = new ArrayList<>(map.size()); while (!queue.isEmpty()) { sorted.add(queue.poll()); } return sorted; } A: If your Map values implement Comparable (e.g. String), this should work: Map<Object, String> map = new HashMap<Object, String>(); // Populate the Map List<String> mapValues = new ArrayList<String>(map.values()); Collections.sort(mapValues); If the map values themselves don't implement Comparable, but you have a Comparator that can sort them, replace the last line with this: Collections.sort(mapValues, comparator); A: The best thing is to convert the HashMap to a TreeMap. A TreeMap sorts keys on its own. If you want to sort on values, then a quick fix is to swap values with keys, provided your values are not duplicates.
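The duplicate-value caveat above can also be worked around with a TreeMap whose comparator falls back to the key when two values compare as equal, so equal values no longer collapse into one entry. A sketch with made-up names, not code from any answer:

```java
import java.util.*;

public class ValueSortedTreeMap {
    // Returns a TreeMap copy ordered by value, breaking ties on the key so
    // that entries with duplicate values are kept (unlike a key/value swap).
    // Caveat: the comparator reads from the source map, so the result should
    // be treated as a read-only snapshot; mutating it can confuse lookups.
    static TreeMap<String, Integer> valueSortedCopy(Map<String, Integer> src) {
        Comparator<String> byValueThenKey = (a, b) -> {
            int c = src.get(a).compareTo(src.get(b));
            return (c != 0) ? c : a.compareTo(b);
        };
        TreeMap<String, Integer> sorted = new TreeMap<>(byValueThenKey);
        sorted.putAll(src);
        return sorted;
    }

    public static void main(String[] args) {
        Map<String, Integer> scores = new HashMap<>();
        scores.put("alice", 2);
        scores.put("bob", 1);
        scores.put("carol", 2); // duplicate value survives the copy
        System.out.println(valueSortedCopy(scores)); // {bob=1, alice=2, carol=2}
    }
}
```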
A: We simply sort a map just like this: Map<String, String> unsortedMap = new HashMap<String, String>(); unsortedMap.put("E", "E Val"); unsortedMap.put("F", "F Val"); unsortedMap.put("H", "H Val"); unsortedMap.put("B", "B Val"); unsortedMap.put("C", "C Val"); unsortedMap.put("A", "A Val"); unsortedMap.put("G", "G Val"); unsortedMap.put("D", "D Val"); Map<String, String> sortedMap = new TreeMap<String, String>(unsortedMap); System.out.println("\nAfter sorting.."); for (Map.Entry<String, String> mapEntry : sortedMap.entrySet()) { System.out.println(mapEntry.getKey() + " \t" + mapEntry.getValue()); } A: For sorting upon the keys I found a better solution with a TreeMap (I will try to get a solution for value-based sorting ready too): public static void main(String[] args) { Map<String, String> unsorted = new HashMap<String, String>(); unsorted.put("Cde", "Cde_Value"); unsorted.put("Abc", "Abc_Value"); unsorted.put("Bcd", "Bcd_Value"); Comparator<String> comparer = new Comparator<String>() { @Override public int compare(String o1, String o2) { return o1.compareTo(o2); }}; Map<String, String> sorted = new TreeMap<String, String>(comparer); sorted.putAll(unsorted); System.out.println(sorted); } Output would be: {Abc=Abc_Value, Bcd=Bcd_Value, Cde=Cde_Value} A: Okay, this version works with two new Map objects and two iterations and sorts on values. Hopefully it performs well, although the map entries must be looped over twice: public static void main(String[] args) { Map<String, String> unsorted = new HashMap<String, String>(); unsorted.put("Cde", "Cde_Value"); unsorted.put("Abc", "Abc_Value"); unsorted.put("Bcd", "Bcd_Value"); Comparator<String> comparer = new Comparator<String>() { @Override public int compare(String o1, String o2) { return o1.compareTo(o2); }}; System.out.println(sortByValue(unsorted, comparer)); } public static <K, V> Map<K,V> sortByValue(Map<K, V> in, Comparator<?
super V> compare) { Map<V, K> swapped = new TreeMap<V, K>(compare); for(Entry<K,V> entry: in.entrySet()) { if (entry.getValue() != null) { swapped.put(entry.getValue(), entry.getKey()); } } LinkedHashMap<K, V> result = new LinkedHashMap<K, V>(); for(Entry<V,K> entry: swapped.entrySet()) { if (entry.getValue() != null) { result.put(entry.getValue(), entry.getKey()); } } return result; } The solution uses a TreeMap with a Comparator and sorts out all null keys and values. First, the ordering functionality of the TreeMap is used to sort upon the values; next the sorted Map is used to create the result as a LinkedHashMap that retains the same order of values. Note that entries with duplicate values collapse into one, because the values become keys of the intermediate map. Greetz, GHad A: If every value is a unique index in the range 0 to map size - 1, you can use arrays; this should be the fastest approach: public List<String> getList(Map<String, Integer> myMap) { String[] copyArray = new String[myMap.size()]; for (Entry<String, Integer> entry : myMap.entrySet()) { copyArray[entry.getValue()] = entry.getKey(); } return Arrays.asList(copyArray); } A: static <K extends Comparable<? super K>, V extends Comparable<? super V>> Map sortByValueInDescendingOrder(final Map<K, V> map) { Map re = new TreeMap(new Comparator<K>() { @Override public int compare(K o1, K o2) { if (map.get(o1) == null || map.get(o2) == null) { return -o1.compareTo(o2); } int result = -map.get(o1).compareTo(map.get(o2)); if (result != 0) { return result; } return -o1.compareTo(o2); } }); re.putAll(map); return re; } @Test(timeout = 3000l, expected = Test.None.class) public void testSortByValueInDescendingOrder() { char[] arr = "googler".toCharArray(); Map<Character, Integer> charToTimes = new HashMap(); for (int i = 0; i < arr.length; i++) { Integer times = charToTimes.get(arr[i]); charToTimes.put(arr[i], times == null ?
1 : times + 1); } Map sortedByTimes = sortByValueInDescendingOrder(charToTimes); Assert.assertEquals(charToTimes.toString(), "{g=2, e=1, r=1, o=2, l=1}"); Assert.assertEquals(sortedByTimes.toString(), "{o=2, g=2, r=1, l=1, e=1}"); Assert.assertEquals(sortedByTimes.containsKey('a'), false); Assert.assertEquals(sortedByTimes.get('a'), null); Assert.assertEquals(sortedByTimes.get('g'), 2); Assert.assertEquals(sortedByTimes.equals(charToTimes), true); } A: Use java.util.TreeMap. "The map is sorted according to the natural ordering of its keys, or by a Comparator provided at map creation time, depending on which constructor is used." A: A plain map is unordered; to sort it we can do the following: Map<String, String> map = new TreeMap<String, String>(unsortMap); You should note that, unlike a hash map, a tree map guarantees that its elements will be sorted in ascending key order. A: public class SortedMapExample { public static void main(String[] args) { Map<String, String> map = new HashMap<String, String>(); map.put("Cde", "C"); map.put("Abc", "A"); map.put("Cbc", "Z"); map.put("Dbc", "D"); map.put("Bcd", "B"); map.put("sfd", "Bqw"); map.put("DDD", "Bas"); map.put("BGG", "Basd"); System.out.println(sort(map, new Comparator<String>() { @Override public int compare(String o1, String o2) { return o1.compareTo(o2); }})); } @SuppressWarnings("unchecked") public static <K, V> Map<K,V> sort(Map<K, V> in, Comparator<? super V> compare) { Map<K, V> result = new LinkedHashMap<K, V>(); V[] array = (V[])in.values().toArray(); Arrays.sort(array, compare); for (V item : array) { K key = (K) getKey(in, item); result.put(key, item); } return result; } public static <K, V> Object getKey(Map<K, V> in, V value) { Set<K> keys = in.keySet(); Iterator<K> keyIterator = keys.iterator(); while (keyIterator.hasNext()) { K keyObject = keyIterator.next(); if (in.get(keyObject).equals(value)) { return keyObject; } } return null; } } // Please try here.
I am modifying the code for value sort.
{ "language": "en", "url": "https://stackoverflow.com/questions/109383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1848" }
Q: Can you do Desktop Development using JavaScript? I know there's JScript.NET, but it isn't the same as the JavaScript we know from the web. Does anyone know if there are any JavaScript-based platforms/compilers for desktop development? Most specifically Windows desktop development. A: "node-webkit is an app runtime based on Chromium and node.js. You can write native apps in HTML and Javascript with node-webkit. It also lets you call Node.js modules directly from the DOM..." A: There's Titanium Developer, which is similar to Adobe AIR (html+css+javascript) but does not require a framework to be pre-installed. A: You can make a desktop application using XML and javascript (and/or VBS) using the Windows Script Host. The trick is to save your XML file with a .hta extension. See this reference. A: There's SpiderMonkey, a JavaScript engine written in C, and Rhino, an implementation of JavaScript in Java. A: Try AppJS. It is an SDK on top of NodeJS and Chromium Embedded Framework. You can build desktop apps easily with the web technologies. * *Webpage: http://appjs.com *Github: https://github.com/appjs A: Google Gears. There's also Mozilla's XUL, but it's a bit too complicated, IMHO (albeit extremely powerful). A: Google has an interesting new technology going on. It's at quite an early stage but already works well. It's called Packaged Apps and uses Chrome as a runtime; it works on both PC and Mac. Have a look at http://developer.chrome.com/apps/about_apps.html A: There is XULRunner, which lets you build GUI apps like Firefox using JavaScript and XUL. It has a lot of extensions to JavaScript though, using XPCOM. They also offer Prism, which lets you build web apps that work offline, sort of like AIR. Yahoo uses it for their Zimbra email desktop client. A: Yes, with Adobe AIR. Adobe AIR lets you make desktop applications with Javascript, Flex, or Flash.
A: Looks like there are 3 categories of HTML5 desktop-app frameworks. SDKs * *https://qt-project.org/ *http://awesomium.com *http://berkelium.org *http://www.appcelerator.com/platform Browser runtime * *http://developer.mozilla.org/en-US/docs/XULRunner *http://developer.chrome.com/apps/about_apps.html Node.js based * *http://appjs.com/ *https://github.com/maccman/bowline A: Another option I didn't see mentioned: for Cocoa (Mac OS X, iPhone OS) applications you can use a web view (embedded WebKit) as the application UI. A: You can try JavaLikeScript; it does not provide the same native/root objects as a web browser, but it has network and user interface features. A: Electron, originally Atom Shell, allows applications to be written in web technologies (HTML, JS, CSS) and run on any of the major operating systems, including Windows. A: Windows 8 allows for Windows Store Apps to be written in HTML5/JavaScript. A: There's Yahoo's Konfabulator for the Windows desktop. A: Script# has extensions for Vista Gadgets. http://projects.nikhilk.net/ScriptSharp/ A: Here are some JSOS (Javascript Operating Systems), which sort of still need a browser. http://fractalbrain.net/ /* The Best. */ http://cometdesktop.com/ /* Alright. */ http://skylightproject.com/ /* Worst */ A: I answered with node-webkit above, but I recently saw a presentation on Tint2. It seems to address security concerns with node-webkit and looks promising.
{ "language": "en", "url": "https://stackoverflow.com/questions/109399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "73" }
Q: What are the key strengths of ASP.NET Webforms? What are the key strengths of ASP.NET Webforms (2.0-3.5)? I'm not looking for comparisons to other frameworks, I'm just looking for feedback on ASP.NET as it stands today. A: One key strength (to some) is a drag-and-drop development environment integrated into Visual Studio. This allows you to get simple things up and running quickly, but can also be a liability when the time comes that you actually need to understand the underlying code. A: I think the component model is the key thing, and the ease of using parts of other web pages as components (via User Controls) is a key advantage. A: The key strengths of ASP.Net are: * *Compiled code - performance *Multiple language development *XCopy Deployment *Visual Studio Design-time Integration and Expression Web *Many 3rd party controls, both open-sourced and commercial *Easy to learn for beginners A: * *State Management *Low learning curve to get started on something simple (However, it becomes complicated quickly as soon as you have a page with dynamically added elements). *Huge library of mature controls *Huge amount of documentation and resources *Great performance Despite what other people may have said, it's possible to keep things from turning into a complete mess. A: Benefits of Webforms * *Event Driven *Stateful *Easily develop reusable controls Misconceptions of Webforms * *It's not easy to test * *It's very easy to test if you architect your code properly *The controls' generated HTML is bad * *Not anymore, there are CSS Friendly Control Adapters Overall ASP.NET webforms is a great development model and most of the downfalls that people complain about are misconceptions or poor design/architecture. The most important feature is that, with the lifecycle/statefulness of webforms, you have the flexibility to develop very easy-to-use, reusable controls. A: Relatively fast construction of web applications, but relatively hard to maintain. Relatively easy to learn.
You don't need to know HTML, CSS and JavaScript. However, if you already know HTML, CSS and JavaScript, other web development technologies might be easier to learn. It's relatively easy to adopt ASP.NET web forms if you come from different technologies, because it doesn't require a strict methodology to be effective and because it supports multiple languages. It's a relatively mature technology. It's relatively easy to add new functionality to a web application, but it's relatively hard to change existing functionality while maintaining good quality of code. Easy integration with Windows applications, because it's event driven. Many third party libraries available. Relatively easy to deal with application state. Relatively easy to deploy. Platform independence (I do not have any experience with this, so I don't know if this is true) HOWEVER: In theory it has good performance, but in practice it doesn't. Compiled code does not lead to better performance, because this is not the bottleneck in a web application. The bottlenecks are the amount of HTML that is sent to the browser, how fast it can do string operations and the speed at which the database can be queried. It is one of the worst products on the market when it comes to these points. Fast construction of web applications can be an advantage in simple applications or innovative projects where rival companies are developing a similar product, but the majority of the time application maintainability is much more important than construction time. Development in multiple languages is a disadvantage, because some third party components will only be available in one language, and if you happen to need this it will require you to learn two languages instead of just one. C# and VB.NET are 99% the same, they just use different words, but the syntax is pretty much identical.
When developing in a team there will always be people preferring one over the other, and developing one product in two languages only makes things confusing, so one language has to be chosen, and the team members who disagree with this choice will start the project with bad morale, which will eventually have a bad effect on your project. The only one who really benefits from the multiple language support is Microsoft. The stateful nature of web forms is a disadvantage compared to stateless systems, because it will lead to many problems. External websites or bookmarks will not be able to link to all content directly as some content will only be available after performing a couple of user actions, so instead of linking directly to www.example.com/a/b?c=d the website will have to link to www.example.com/a/b and give the user some instructions on how to get to the referenced content. Most search engines will not be able to find most of the content. Using the back button in a browser can lead to errors. Connectivity problems or hibernation of the client can lead to errors. It cannot take advantage of proxies, which leads to bad performance in some cases. Proxies, gateways and cache can cause errors. Surfing the website while the website is updated causes errors. I agree with sontek that it can be easy to test if architected in the right way, but I disagree that this is an advantage of web forms. Because in software architecture every advantage you create by architecture will also lead to a disadvantage. In this case it will lead to a disadvantage in construction time, which is pretty much the main strength of web forms, so if you want a testable system it doesn't make sense to use web forms, because then you would be better off using MVC or something like that. A: It lets you build web applications without having a good understanding of the underlying concepts such as HTTP. This has its own downsides. A: Ease of use to get something simple done
{ "language": "en", "url": "https://stackoverflow.com/questions/109409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Reading datagridview I populated a datagridview from a datatable. How do I read from the datagridview when the application is running? A: How did you populate it? Is the DataSource something useful like a BindingList? If it is, then something like: BindingSource bindingSource = this.dataGridView1.DataSource as BindingSource; //substitute your business object type for T T entity = bindingSource.Current as T; would get you the entity bound to the row. Otherwise there is always dataGridView1.Rows[n].Cells[m].Value, but really I'd look at using the objects in the DataSource Edit: Ah... a datatable... righto: var table = dataGridView1.DataSource as DataTable; foreach(DataRow row in table.Rows) { foreach(DataColumn column in table.Columns) { Console.WriteLine(row[column]); } } A: You can iterate through your datagridview and retrieve each cell. for (int i = 0; i < dataGridView1.Rows.Count; i++) { string text = Convert.ToString(dataGridView1.Rows[i].Cells["columnName"].Value); } There is an example here. A: namespace WindowsFormsApplication2 { public partial class Form1 : Form { public static DataTable objDataTable = new DataTable("UpdateAddress"); public Form1() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { Stream myStream = null; OpenFileDialog openFileDialog1 = new OpenFileDialog(); openFileDialog1.InitialDirectory = "c:\\"; openFileDialog1.Filter = "csv files (*.csv)|*.csv|All files (*.*)|*.*"; openFileDialog1.FilterIndex = 2; openFileDialog1.RestoreDirectory = true; if (openFileDialog1.ShowDialog() == DialogResult.OK) { try { if ((myStream = openFileDialog1.OpenFile()) != null) { string fileName = openFileDialog1.FileName; List<string> dataFile = new List<string>(); dataFile = ReadList(fileName); foreach (string item in dataFile) { string[] temp = item.Split(','); DataRow objDR = objDataTable.NewRow(); objDR["EmployeeID"] = temp[0].ToString(); objDR["Street"] = temp[1].ToString(); objDR["POBox"] = temp[2].ToString(); objDR["City"] = temp[3].ToString();
objDR["State"] = temp[4].ToString(); objDR["Zip"] = temp[5].ToString(); objDR["Country"] = temp[6].ToString(); objDataTable.Rows.Add(objDR); } } } catch (Exception ex) { MessageBox.Show("Error: Could not read file from disk. Original error: " + ex.Message); } } } public static List<string> ReadList(string filename) { List<string> fileData = new List<string>(); StreamReader sr = new StreamReader(filename); while (!sr.EndOfStream) fileData.Add(sr.ReadLine()); return fileData; } private void Form1_Load(object sender, EventArgs e) { objDataTable.Columns.Add("EmployeeID", typeof(int)); objDataTable.Columns.Add("Street", typeof(string)); objDataTable.Columns.Add("POBox", typeof(string)); objDataTable.Columns.Add("City", typeof(string)); objDataTable.Columns.Add("State", typeof(string)); objDataTable.Columns.Add("Zip", typeof(string)); objDataTable.Columns.Add("Country", typeof(string)); objDataTable.Columns.Add("Status", typeof(string)); dataGridView1.DataSource = objDataTable; dataGridView1.Refresh(); } private void button2_Click(object sender, EventArgs e) { // Displays a SaveFileDialog so the user can save the backup of AD address before the update // assigned to Button2. SaveFileDialog saveFileDialog1 = new SaveFileDialog(); saveFileDialog1.Filter = "BAK Files|*.BAK"; saveFileDialog1.Title = "Save AD Backup"; saveFileDialog1.ShowDialog(); if (saveFileDialog1.FileName != "") { TextWriter fileOut = new StreamWriter(saveFileDialog1.FileName); // This is where I want to read the EmployeeID column from the datagridview and use it in my BackupAddress method. } } A: You might want to take a look at DataTable.WriteXml, and its brother DataTable.ReadXml. No fuss, no muss saving of a DataTable.
{ "language": "en", "url": "https://stackoverflow.com/questions/109417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the easiest way for a non-programmer to learn the basics of iPhone App creation? I'm primarily a designer, with a fairly high level of understanding of CSS and HTML. I have an idea for a very simple iPhone app, largely involving a timer, an animated graphic, and some sound. If I get more advanced there could be some simple customization settings I have no understanding of Objective C, or C of any kind for that matter. (The closest I got was a Pascal course 20 years ago.) Aside from befriending a developer with motivation to help me out, what would be the simplest, most likely method of learning the minimum I need to know to create my own iPhone App? A: If you have no programming experience, then creating a native iPhone application will be a daunting task. Developing for the iPhone is much like developing for the desktop mac, it's a very complete and mature system. I'd honestly say, stick with doing a web-app for the iPhone. Mobile Safari makes available some special hooks which allow you to get "closer" to the system than a "regular" web-app would. And sometimes that's quite enough. A: If you're really serious about it and are willing to put in some time to actually learn to program in Cocoa, the way I would do it would be a combination of reading all the stuff Apple has to offer along with a couple good books both for reference and more conceptual big picture/getting into the Cocoa mindset stuff. If you just want to try to hack something together that works than you'll probably do best with a combination of Apple's sample code and lots of questions on various forums when you get stuck. The books I would recommend would be Programming in Objective-C, by Stephen Kochan and Cocoa Programming for Mac OS X, by Aaron Hillegass. The former is a good introduction to the Objective-C language itself, and the latter is pretty much the Cocoa book. 
It's not an iPhone-specific book, but pretty much everything in it (especially the concepts and design patterns) still applies. Keep in mind you won't have access to the garbage collector on the iPhone. You should also be sure to read through Apple's own Introduction to The Objective-C 2.0 Programming Language. For actual code to look over and adapt to your own needs, it's hard to find anything better than Apple's own iPhone sample code library. You might also try these two forums for any SDK questions you might have, as well as of course Stack Overflow for the more general stuff that doesn't fall under the NDA. A: Take a course at iNVASIVECODE, or Big Nerd Ranch. There are also the Stanford CS193 iOS classes, which are really good and updated every term: CS193p /* Links updated April 2015 */ A: http://developer.apple.com/iphone/ They have some pretty basic apps and some good articles. A: Join the iPhone Dev program and read through their code samples (they are simple) as well as their guides (very helpful). I know of no other way. A: Manning Publications has a book in the pipeline called iPhone in Action which will address coding web-based as well as native iPhone applications. It is slated for a January '09 release, but depending on how long Apple keeps the NDA in effect, it may take longer… A: The easiest way is to use a web-based service created for non-programmers. Find one that will give you the flexibility to create custom apps, not just look-alike templates. Check out http://www.Snappii.com
{ "language": "en", "url": "https://stackoverflow.com/questions/109429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What not to test when it comes to Unit Testing? In which parts of a project is writing unit tests nearly or really impossible? Data access? FTP? If there is an answer to this question, then 100% coverage is a myth, isn't it? A: The goal is not 100% code coverage nor is it 80% code coverage. A unit test being easy to write doesn't mean you should write it, and a unit test being hard to write doesn't mean you should avoid the effort. The goal of any test is to detect user-visible problems in the most affordable manner. Is the total cost of authoring, maintaining, and diagnosing problems flagged by the test (including false positives) worth the problems that specific test catches? If the problem the test catches is 'expensive' then you can afford to put effort into figuring out how to test it, and maintaining that test. If the problem the test catches is trivial then writing (and maintaining!) the test (even in the presence of code changes) better be trivial. The core goal of a unit test is to protect devs from implementation errors. That alone should indicate that too much effort will be a waste. After a certain point there are better strategies for getting correct implementation. Also after a certain point the user-visible problems are due to correctly implementing the wrong thing, which can only be caught by user-level or integration testing. A: What would you not test? Anything that could not possibly break. When it comes to code coverage you want to aim for 100% of the code you actually write - that is, you need not test third-party library code, or operating system code, since that code will have been delivered to you tested. Unless it's not. In which case you might want to test it. Or if there are known bugs, in which case you might want to test for the presence of the bugs, so that you get a notification of when they are fixed.
A: Here I found (via Haacked) something Michael Feathers says that can be an answer: He says, A test is not a unit test if: * *It talks to the database *It communicates across the network *It touches the file system *It can't run at the same time as any of your other unit tests *You have to do special things to your environment (such as editing config files) to run it. Again, in the same article he adds: Generally, unit tests are supposed to be small; they test a method or the interaction of a couple of methods. When you pull the database, sockets, or file system access into your unit tests, they are not really about those methods any more; they are about the integration of your code with that other software. A: Data access is possible because you can set up a test database. Generally the 'untestable' stuff is FTP, email and so forth. However, they are generally framework classes which you can rely on and therefore do not need to test if you hide them behind an abstraction. Also, 100% code coverage is not enough on its own. A: Unit testing of a GUI is also difficult, albeit not impossible, I guess. A: @GarryShutler I actually unit test email by using a fake SMTP server (Wiser). It makes sure your application code is correct: http://maas-frensch.com/peter/2007/08/29/unittesting-e-mail-sending-using-spring/ Something like that could probably be done for other servers. Otherwise you should be able to mock the API... BTW: 100% coverage is only the beginning... it just means that all code has actually been executed once... nothing about edge cases etc. A: Most tests that need huge and expensive (in terms of resources or computation time) setups are integration tests. Unit tests should (in theory) only test small units of the code. Individual functions. For example, if you are testing email functionality, it makes sense to create a mock mailer. The purpose of that mock is to make sure your code calls the mailer correctly.
To see if your application actually sends mail is an integration test. It is very useful to make a distinction between unit tests and integration tests. Unit tests should run very fast. It should be easily possible to run all your unit tests before you check in your code. However, if your test suite consists of many integration tests (that set up and tear down databases and the like), your test run can easily exceed half an hour. In that case it is very likely that a developer will not run all the unit tests before she checks in. So to answer your question: Do not unit-test things that are better implemented as an integration test (and also don't test getters/setters - it is a waste of time ;-) ). A: In unit testing, you should not test anything that does not belong to your unit; testing units in their context is a different matter. That's the simple answer. The basic rule I use is that you should unit test anything that touches the boundaries of your unit (usually a class, or whatever else your unit might be), and mock the rest. There is no need to test the results that some database query returns; it suffices to test that your unit spits out the correct query. This does not mean that you should omit stuff just because it is hard to test; even exception handling and concurrency issues can be tested pretty well using the right tools. A: "What not to test when it comes to Unit Testing?" * Beans with just getters and setters. Reasoning: Usually a waste of time that could be better spent testing something else. A: Anything that is not completely deterministic is a no-no for unit testing. You want your unit tests to ALWAYS pass or fail with the same initial conditions - if weirdness like threading, random data generation, time/dates, or external services can affect this, then you shouldn't be covering it in your unit tests. Time/dates are a particularly nasty case. 
You can usually architect code so that the date to work with is injected (by code and tests) rather than relying on the current date and time. That said though, unit tests shouldn't be the only level of testing in your application. Achieving 100% unit test coverage is often a waste of time, and quickly meets diminishing returns. Far better is to have a set of higher-level functional tests, and even integration tests, to ensure that the system works correctly "once it's all joined up" - which the unit tests by definition do not test. A: That 100% coverage is a myth, which it is, does not mean that 80% coverage is useless. The goal, of course, is 100%, and between unit tests and then integration tests, you can approach it. What is impossible in unit testing is predicting all the totally strange things your customers will do to the product. Once you begin to discover these mind-boggling perversions of your code, make sure to roll tests for them back into the test suite. A: Achieving 100% code coverage is almost always wasteful. There are many resources on this. Nothing is impossible to unit test, but there are always diminishing returns. It may not be worth it to unit test things that are painful to unit test. A: Anything that needs a very large and complicated setup. Of course you can test FTP (the client), but then you need to set up an FTP server. For a unit test you need a reproducible test setup. If you cannot provide it, you cannot test it. A: You can test them, but they won't be unit tests. A unit test is something that doesn't cross boundaries, such as going over the wire, hitting the database, running/interacting with a third party, touching an untested/legacy codebase, etc. Anything beyond this is integration testing. The obvious answer to the question in the title is that you shouldn't unit test the internals of your API, you shouldn't rely on someone else's behavior, and you shouldn't test anything that you are not responsible for. 
The rest should be just enough to let you write your code against it, no more, no less. A: Sure, 100% coverage is a good goal when working on a large project, but for most projects fixing one or two bugs before deployment isn't necessarily worth the time to create exhaustive unit tests. Exhaustively testing things like form submission, database access, FTP access, etc. at a very detailed level is often just a waste of time; unless the software being written needs a very high level of reliability (99.999% stuff), unit testing too much can be overkill and a real time sink. A: I disagree with quamrana's response regarding not testing third-party code. This is an ideal use of a unit test. What if bugs are introduced in a new release of a library? Ideally, when a new version of a third-party library is released, you run the unit tests that represent the expected behaviour of this library to verify that it still works as expected. A: Configuration is another item that is very difficult to test well in unit tests. Integration tests and other testing should be done against configuration. This reduces redundancy of testing and frees up a lot of time. Trying to unit test configuration is often frivolous. A: FTP, SMTP, and I/O in general should be tested using an interface. The interface should be implemented by an adapter (for the real code) and a mock for the unit test. No unit test should exercise the real external resource (FTP server, etc.). A: If the code to set up the state required for a unit test becomes significantly more complex than the code to be tested, I tend to draw the line and find another way to test the functionality. At that point you have to ask: how do you know the unit test is right? A: FTP, email and so forth you can test with server emulation. It is difficult but possible. Some error handling is not testable. In most code there is error handling that can never occur. 
For example, in Java you must catch many exceptions because they are declared by an interface, but the instance actually used will never throw them. Or the default case of a switch when a case block exists for every possible value. Of course, some of the unneeded error handling can be removed, but if a coding error appears in the future, then that is bad. A: The main reason to unit test code in the first place is to validate the design of your code. It's possible to gain 100% code coverage, but not without using mock objects or some form of isolation or dependency injection. Remember, unit tests aren't for users; they are for developers and build systems to use to validate a system prior to release. To that end, the unit tests should run very fast and have as little configuration and dependency friction as possible. Try to do as much as you can in memory, and avoid using network connections from the tests.
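Several answers above come back to injecting dependencies to keep tests deterministic, the clock being the classic case. A hypothetical Python sketch of the idea (the `greeting` function is illustrative, not from any answer above):

```python
import datetime

def greeting(now_fn=datetime.datetime.now):
    # The time source is injected: production code uses the real clock,
    # while tests pass a deterministic stand-in.
    hour = now_fn().hour
    return "Good morning" if hour < 12 else "Good afternoon"

# In a unit test, freeze the clock so the result is always reproducible:
fixed = lambda: datetime.datetime(2008, 9, 17, 9, 0, 0)
assert greeting(now_fn=fixed) == "Good morning"
```

The same injection trick works for random number generators and anything else that would otherwise make a test pass or fail depending on when it runs.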
Q: PHP replace double backslashes "\\" to a single backslash "\" Okay, so I'm working on this PHP image upload system, but for some reason Internet Explorer turns my base path into the same path with double backslashes instead of one; i.e.: C:\\Documents and Settings\\kasper\\Bureaublad\\24.jpg This needs to become C:\Documents and Settings\kasper\Bureaublad\24.jpg. A: Note that you may be running into PHP's Magic Quotes "feature", where incoming backslashes are turned to \\. See http://us2.php.net/magic_quotes A: Use the stripslashes function. That should make them all single slashes. A: Have you considered the stripslashes() function? http://www.php.net/stripslashes
Q: Getting a FILE* from a std::fstream Is there a (cross-platform) way to get a C FILE* handle from a C++ std::fstream ? The reason I ask is because my C++ library accepts fstreams and in one particular function I'd like to use a C library that accepts a FILE*. A: Well, you can get the file descriptor - I forget whether the method is fd() or getfd(). The implementations I've used provide such methods, but the language standard doesn't require them, I believe - the standard shouldn't care whether your platform uses fd's for files. From that, you can use fdopen(fd, mode) to get a FILE*. However, I think that the mechanisms the standard requires for synching STDIN/cin, STDOUT/cout and STDERR/cerr don't have to be visible to you. So if you're using both the fstream and FILE*, buffering may mess you up. Also, if either the fstream OR the FILE closes, they'll probably close the underlying fd, so you need to make sure you flush BOTH before closing EITHER. A: The short answer is no. The reason, is because the std::fstream is not required to use a FILE* as part of its implementation. So even if you manage to extract file descriptor from the std::fstream object and manually build a FILE object, then you will have other problems because you will now have two buffered objects writing to the same file descriptor. The real question is why do you want to convert the std::fstream object into a FILE*? Though I don't recommend it, you could try looking up funopen(). Unfortunately, this is not a POSIX API (it's a BSD extension) so its portability is in question. Which is also probably why I can't find anybody that has wrapped a std::stream with an object like this. FILE *funopen( const void *cookie, int (*readfn )(void *, char *, int), int (*writefn)(void *, const char *, int), fpos_t (*seekfn) (void *, fpos_t, int), int (*closefn)(void *) ); This allows you to build a FILE object and specify some functions that will be used to do the actual work. 
If you write appropriate functions you can get them to read from the std::fstream object that actually has the file open. A: In a single-threaded POSIX application you can easily get the fd number in a portable way: int fd = dup(0); close(fd); // POSIX requires the next opened file descriptor to be fd. std::fstream file(...); // now fd has been opened again and is owned by file This method breaks in a multi-threaded application if this code races with other threads opening file descriptors. A: yet another way to do this in Linux: #include <stdio.h> #include <cassert> template<class STREAM> struct STDIOAdapter { static FILE* yield(STREAM* stream) { assert(stream != NULL); static cookie_io_functions_t Cookies = { .read = NULL, .write = cookieWrite, .seek = NULL, .close = cookieClose }; return fopencookie(stream, "w", Cookies); } ssize_t static cookieWrite(void* cookie, const char* buf, size_t size) { if(cookie == NULL) return -1; STREAM* writer = static_cast <STREAM*>(cookie); writer->write(buf, size); return size; } int static cookieClose(void* cookie) { return EOF; } }; // STDIOAdapter Usage, for example: #include <boost/iostreams/filtering_stream.hpp> #include <boost/iostreams/filter/bzip2.hpp> #include <boost/iostreams/device/file.hpp> using namespace boost::iostreams; int main() { filtering_ostream out; out.push(boost::iostreams::bzip2_compressor()); out.push(file_sink("my_file.txt")); FILE* fp = STDIOAdapter<filtering_ostream>::yield(&out); assert(fp > 0); fputs("Was up, Man", fp); fflush (fp); fclose(fp); return 1; } A: There is a way to get file descriptor from fstream and then convert it to FILE* (via fdopen). Personally I don't see any need in FILE*, but with file descriptor you may do many interesting things such as redirecting (dup2). Solution: #define private public #define protected public #include <fstream> #undef private #undef protected std::ifstream file("some file"); auto fno = file._M_filebuf._M_file.fd(); The last string works for libstdc++. 
If you are using some other library you will need to reverse-engineer it a bit. This trick is dirty and will expose all private and public members of fstream. If you would like to use it in your production code, I suggest you create a separate .cpp and .h with a single function int getFdFromFstream(std::basic_ios<char>& fstr);. The header file must not include fstream. A: There isn't a standardized way. I assume this is because the C++ standardization group didn't want to assume that a file handle can be represented as an fd. Most platforms do seem to provide some non-standard way to do this. http://www.ginac.de/~kreckel/fileno/ provides a good writeup of the situation and provides code that hides all the platform-specific grossness, at least for GCC. Given how gross this is just on GCC, I think I'd avoid doing this altogether if possible. A: UPDATE: See @Jettatura's answer, which I think is the best one: https://stackoverflow.com/a/33612982/225186 (Linux only?). ORIGINAL: (Probably not cross-platform, but simple) Simplifying the hack in http://www.ginac.de/~kreckel/fileno/ (dvorak answer), and looking at this gcc extension http://gcc.gnu.org/onlinedocs/gcc-4.6.2/libstdc++/api/a00069.html#a59f78806603c619eafcd4537c920f859, I have this solution that works on GCC (4.8 at least) and clang (3.3 at least) before C++11: #include<fstream> #include<ext/stdio_filebuf.h> typedef std::basic_ofstream<char>::__filebuf_type buffer_t; typedef __gnu_cxx::stdio_filebuf<char> io_buffer_t; FILE* cfile_impl(buffer_t* const fb){ return (static_cast<io_buffer_t* const>(fb))->file(); //type std::__c_file } FILE* cfile(std::ofstream const& ofs){return cfile_impl(ofs.rdbuf());} FILE* cfile(std::ifstream const& ifs){return cfile_impl(ifs.rdbuf());} and it can be used like this: int main(){ std::ofstream ofs("file.txt"); fprintf(cfile(ofs), "sample1"); fflush(cfile(ofs)); // ofs << std::flush; doesn't help ofs << "sample2\n"; } Note: The stdio_filebuf is not used in newer versions of the library. 
The static_cast<>() is somewhat dangerous too. Use a dynamic_cast<>() instead; if you get a nullptr, you know it's not the right class. You can try with stdio_sync_filebuf instead. The problem with that class is that file() is not available at all anymore. Limitations: (comments are welcome) * *I find that it is important to fflush after fprintf printing to std::ofstream, otherwise the "sample2" appears before "sample1" in the example above. I don't know if there is a better workaround for that than using fflush. Notably ofs << flush doesn't help. *Cannot extract FILE* from std::stringstream, I don't even know if it is possible. (see below for an update). *I still don't know how to extract C's stderr from std::cerr etc., for example to use in fprintf(stderr, "sample"), in hypothetical code like this: fprintf(cfile(std::cerr), "sample"). Regarding the last limitation, the only workaround I found is to add these overloads: FILE* cfile(std::ostream const& os){ if(std::ofstream const* ofsP = dynamic_cast<std::ofstream const*>(&os)) return cfile(*ofsP); if(&os == &std::cerr) return stderr; if(&os == &std::cout) return stdout; if(&os == &std::clog) return stderr; if(dynamic_cast<std::ostringstream const*>(&os) != 0){ throw std::runtime_error("don't know how to extract FILE pointer from std::ostringstream"); } return 0; // stream not recognized } FILE* cfile(std::istream const& is){ if(std::ifstream const* ifsP = dynamic_cast<std::ifstream const*>(&is)) return cfile(*ifsP); if(&is == &std::cin) return stdin; if(dynamic_cast<std::ostringstream const*>(&is) != 0){ throw std::runtime_error("don't know how to extract FILE pointer from std::istringstream"); } return 0; // stream not recognized } Attempt to handle iostringstream It is possible to read with fscanf from an istream using fmemopen, but that requires a lot of bookkeeping and updating the input position of the stream after each read, if one wants to combine C-reads and C++-reads. 
I wasn't able to convert this into a cfile function like above. (Maybe a cfile class that keeps updating after each read is the way to go). // hack to access the protected member of istreambuf that knows the current position char* access_gptr(std::basic_streambuf<char, std::char_traits<char>>& bs){ struct access_class : std::basic_streambuf<char, std::char_traits<char>>{ char* access_gptr() const{return this->gptr();} }; return ((access_class*)(&bs))->access_gptr(); } int main(){ std::istringstream iss("11 22 33"); // read the C++ way int j1; iss >> j1; std::cout << j1 << std::endl; // read the C way float j2; char* buf = access_gptr(*iss.rdbuf()); // get current position size_t buf_size = iss.rdbuf()->in_avail(); // get remaining characters FILE* file = fmemopen(buf, buf_size, "r"); // open buffer memory as FILE* fscanf(file, "%f", &j2); // finally! iss.rdbuf()->pubseekoff(ftell(file), iss.cur, iss.in); // update input stream position from current FILE position. std::cout << "j2 = " << j2 << std::endl; // read again the C++ way int j3; iss >> j3; std::cout << "j3 = " << j3 << std::endl; } A: I ran into that problem when I was faced with isatty() only working on a file descriptor. In newer versions of the C++ standard library (at least since C++11), the solution proposed by alfC does not work anymore because that one class was changed to a new class. The old method will still work if you use very old versions of the compiler. In newer versions, you need to use std::basic_filebuf<>(). But that does not work with the standard I/O such as std::cout. For those, you need to use __gnu_cxx::stdio_sync_filebuf<>(). I have a functional example in my implementation of isatty() for C++ streams here. You should be able to lift that one file out and reuse it in your own project. In your case, though, you wanted the FILE* pointer, so just return that instead of the result of ::isatty(fileno(<of FILE*>)). 
Here is a copy of the template function: template<typename _CharT , typename _Traits = std::char_traits<_CharT>> bool isatty(std::basic_ios<_CharT, _Traits> const & s) { { // cin, cout, cerr, and clog typedef __gnu_cxx::stdio_sync_filebuf<_CharT, _Traits> io_sync_buffer_t; io_sync_buffer_t * buffer(dynamic_cast<io_sync_buffer_t *>(s.rdbuf())); if(buffer != nullptr) { return ::isatty(fileno(buffer->file())); } } { // modern versions typedef std::basic_filebuf<_CharT, _Traits> file_buffer_t; file_buffer_t * file_buffer(dynamic_cast<file_buffer_t *>(s.rdbuf())); if(file_buffer != nullptr) { typedef detail::our_basic_filebuf<_CharT, _Traits> hack_buffer_t; hack_buffer_t * buffer(static_cast<hack_buffer_t *>(file_buffer)); if(buffer != nullptr) { return ::isatty(fileno(buffer->file())); } } } { // older versions typedef __gnu_cxx::stdio_filebuf<_CharT, _Traits> io_buffer_t; io_buffer_t * buffer(dynamic_cast<io_buffer_t *>(s.rdbuf())); if(buffer != nullptr) { return ::isatty(fileno(buffer->file())); } } return false; } Now, you should be asking: But what is that detail class our_basic_filebuf?!? And that's a good question. The fact is that the _M_file pointer is protected and there is no file() (or fd()) in the std::basic_filebuf. For that reason, I created a shell class which has access to the protected fields, and that way I can return the FILE* pointer. template<typename _CharT , typename _Traits = std::char_traits<_CharT>> class our_basic_filebuf : public std::basic_filebuf<_CharT, _Traits> { public: std::__c_file * file() throw() { return this->_M_file.file(); } }; This is somewhat ugly, but the cleanest I could think of to gain access to the _M_file field.
Q: Reporting Services Line Graph: How to better control the smoothed curve I have a report that I built for a client where I need to plot x 0-100, y 0-100. Let's imagine I have these points: 0, 0 2, 24 50, 70 100, 100 I need to represent these as a smoothed line chart, as the application of it is a dot gain graph for printing presses. Here's the problem. The line draws fine from 100,100 (top right) down to 2,24. But then what happens is from 2,24 to 0,0 the line curves out off the left of the graph and then to down to 0,0. Imagine it putting a point at -10,10. I understand this is because of the generic Bézier curve algorithm it is using and the large separation of control points, thus heavily weighting it. I was wondering however if anyone knows a way I can control it. I have tried adding in averaged points between the existing control points, but it still curves off the graph as if it's still heavily weighted. The only other answer I can think of is custom drawing a graph or looking into Dundas Charts and using its GDI+ drawing support. But before I go that route, anyone have any thoughts? Here's the thing. I know how to draw the curve manually. The problem lies in the fact that there is such a high weighting between 2 and 50. I tried to add points in at the lows and the mids, but it was still bowing off the edge. I will have to go check out the source and modify the graph back and see if I can get a screenshot up. Right now I just have the graph stop at 2 until I can get this solved. A: alt text http://img140.imageshack.us/img140/1279/smoothlinebezierxl0.jpg (Providing a picture of the behaviour to help you get a better answer). For those with a theory, you can try this out in Excel as well (not just Reporting Services). You mentioned adding points in your question, but it seems like adding in interpolated points near the problem area has the desired effect (e.g. { (1,12), (1.5, 18) }). This is a clumsy "solution" at best though. 
A: You could try using a cosine interpolation for the points in-between.
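For reference, cosine interpolation between two points is only a few lines. This is a hypothetical Python sketch (Reporting Services itself won't run this; it just shows why such a curve cannot overshoot its endpoints the way the Bézier fit in the question does):

```python
import math

def cosine_interpolate(y1, y2, t):
    # t runs from 0 (at y1) to 1 (at y2). The cosine eases in and out,
    # and the result is a weighted average, so it always stays
    # between y1 and y2 -- no bowing past the endpoints.
    f = (1 - math.cos(t * math.pi)) / 2
    return y1 * (1 - f) + y2 * f

# Fill in points between (0, 0) and (2, 24) from the question:
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    x = t * 2
    print(f"({x:.1f}, {cosine_interpolate(0, 24, t):.2f})")
```

Feeding the chart these interpolated points (instead of letting it fit a Bézier through widely spaced control points) keeps the plotted line inside the 0-100 axes.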
Q: Caching paginated results, purging on update - how to solve? I've created a forum, and we're implementing an apc and memcache caching solution to save the database some work. I started implementing the cache layer with keys like "Categories::getAll", and if I had user-specific data, I'd append the keys with stuff like the user ID, so you'd get "User::getFavoriteThreads|1471". When a user added a new favorite thread, I'd delete the cache key, and it would recreate the entry. However, and here comes the problem: I wanted to cache the threads in a forum. Simple enough, "Forum::getThreads|$iForumId". But... With pagination, I'd have to split this into several cache entries, for example "Forum::getThreads|$iForumId|$iLimit|$iOffset". Which is alright, until someone posts a new thread in the forum. I will now have to delete all the keys under "Forum::getThreads|$iForumId", no matter what the limit and offset is. What would be a good way of solving this problem? I'd really rather not loop through every possible limit and offset until I find something that doesn't match anymore. Thanks. A: Just an update: I decided that Josh's point on data usage was a very good one. People are unlikely to keep viewing page 50 of a forum. Based on this model, I decided to cache the 90 latest threads in each forum. In the fetching function I check the limit and offset to see if the specified slice of threads is within cache or not. If it is within the cache limit, I use array_slice() to retrieve the right part and return it. This way, I can use a single cache key per forum, and it takes very little effort to clear/update the cache :-) I'd also like to point out that in other more resource heavy queries, I went with flungabunga's model, storing the relations between keys. Unfortunately Stack Overflow won't let me accept two answers. Thanks! 
A: I've managed to solve this by extending the memcache class with a custom class (say ExtendedMemcache) which has a protected property that will contain a hash table of group to key values. The ExtendedMemcache->set method accepts 3 args ($strGroup, $strKey, $strValue). When you call set, it will store the relationship between $strGroup and $strKey in the protected property, and then go on to store the $strKey to $strValue relationship in memcache. You can then add a new method to the ExtendedMemcache class called "deleteGroup", which will, when passed a string, find the keys associated with that group and purge each key in turn. It would be something like this: http://pastebin.com/f566e913b I hope all that makes sense and works out for you. PS. I suppose if you wanted to use static calls, the protected property could be saved in memcache itself under its own key. Just a thought. A: You might also want to look at the cost of storing the cache data, in terms of your effort and CPU cost, against what the cache will buy you. If you find that 80% of your forum views are looking at the first page of threads, then you could decide to cache that page only. That would mean both cache reads and writes are much simpler to implement. Likewise with the list of a user's favourite threads. If this is something that each person visits rarely, then caching might not improve performance much. A: You're essentially trying to cache a view, which is always going to get tricky. You should instead try to cache data only, because data rarely changes. Don't cache a forum, cache the thread rows. Then your db call should just return a list of ids, which you already have in your cache. The db call will be lightning fast on any MyISAM table, and then you don't have to do a big join, which eats db memory. A: One possible solution is not to paginate the cache of threads in a forum, but rather put the thread information into Forum::getThreads|$iForumId. 
Then in your PHP code only pull out the ones you want for that given page, e.g. $page = 2; $threads_per_page = 25; $start_thread = $page * $threads_per_page; // Pull threads from cache (assuming $cache class for memcache interface..) $threads = $cache->get("Forum::getThreads|$iForumId"); // Only take the ones we need for($i=$start_thread; $i<$start_thread+$threads_per_page; $i++) { // Thread display logic here... showThread($threads[$i]); } This means that you do have a bit more work to do pulling them out on each page, but now only have to worry about invalidating the cache in one place on update / addition of new thread. A: flungabunga: Your solution is very close to what I'm looking for. The only thing keeping me from doing this is having to store the relationships in memcache after each request and loading them back. I'm not sure how much of a performance hit this would mean, but it seems a little inefficient. I will do some tests and see how it pans out. Thank you for a structured suggestion (and some code to show for it, thanks!). A: Be very careful about doing this kind of optimisation without having hard facts to measure against. Most databases have several levels of caches. If these are tuned correctly, the database will probably do a much better job at caching than you can do yourself. A: In response to flungabunga: Another way to implement grouping is to put the group name plus a sequence number into the keys themselves, and increment the sequence number to "clear" the group. You store the current valid sequence number for each group in its own key. e.g. get seqno_mygroup 23 get mygroup23_mykey <mykeydata...> get mygroup23_mykey2 <mykey2data...> Then to "delete" the group simply: incr seqno_mygroup Voila: get seqno_mygroup 24 get mygroup24_mykey ...empty etc..
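That sequence-number scheme can be sketched as follows. This is a hypothetical Python sketch with a plain dict standing in for memcache; the helper function names are illustrative:

```python
# Group invalidation via a version (sequence) number.
# A plain dict stands in for memcache; the pattern is identical.
cache = {}

def group_key(group, key):
    seq = cache.setdefault(f"seqno_{group}", 0)
    return f"{group}{seq}_{key}"

def cache_set(group, key, value):
    cache[group_key(group, key)] = value

def cache_get(group, key):
    # Misses automatically once the group's sequence number is bumped.
    return cache.get(group_key(group, key))

def clear_group(group):
    # "Deleting" the group is just incrementing its sequence number;
    # old entries become unreachable (real memcache would age them out).
    cache[f"seqno_{group}"] = cache.get(f"seqno_{group}", 0) + 1

cache_set("Forum::getThreads|7", "25|0", ["first page of threads"])
assert cache_get("Forum::getThreads|7", "25|0") is not None
clear_group("Forum::getThreads|7")  # a new thread was posted
assert cache_get("Forum::getThreads|7", "25|0") is None
```

The appeal of this design is that invalidation is a single increment, no matter how many limit/offset combinations were cached under the group.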
Q: Is it bad design to use table tags when displaying forms in html? I keep hearing that div tags should be used for layout purposes and not table tags. So does that also apply to form layout? I know a form layout is still a layout, but it seems like creating form layouts with divs requires more HTML and CSS. So with that in mind, should form layouts use div tags instead? A: If you just need a simple row/column grid type layout, you shouldn't feel guilty using tables. I don't know how anyone can call that 'bad design' just because it's not CSS. I've seen many bad CSS-based designs. I love CSS and think it far superior in many ways to traditional nested table layouts, but do what works best and what is easiest to maintain, and move on to more important, more impactful decisions. A: The general principle is that you want to use whatever HTML elements best convey the semantic content that you are intending, and then rely on CSS to visually represent that semantic content. Following that principle buys a lot of intrinsic benefits, including easier site-general visual changes, search engine optimization, multi-device layouts, and accessibility. So, the short answer is: you can do whatever you want, but best practices suggest that you only use table tags to represent tabular data, and instead use whatever HTML tags best convey what it is that you are trying to represent. It might be a little harder initially, but once you get used to the idea, you'll wonder why you ever did it any other way. Depending on what you are trying to do with your form, it shouldn't take that much more markup to use semantic markup and CSS, especially if you rely on the cascading properties of CSS. Also, if you have several of the same form across many pages in your site, the semantic approach is much more efficient, both to code and to maintain. 
A: To make forms as accessible as possible and semantically correct, I use the following format: <fieldset> <ol> <li> <label for='text_field'>Text Field</label> <input type='text' name='text_field' id='text_field' /> </li> </ol> </fieldset> A: I use CSS mostly, until CSS becomes a drag. For example, it's a lot easier to create a 3+ column (3 sets of label + form field) form using a table than in CSS. I couldn't get the layout to look right in all major browsers using pure CSS, and I was spending too much time getting it to work. So I said screw it and did it easily using a table. Tables are not bad. 
This isn't a new idea, as this Smashing Magazine article from way back in 2006 shows. I tend to use variants of the following markup in my forms. I have a generic .form style in my CSS and then variants for text inputs, checkboxes, selects, textareas etc etc. .field label { float: left; width: 20%; } .field.text input { width: 75%; margin-left: 2%; padding: 3px; } <div class="field text"> <label for="fieldName">Field Title</label> <input value="input value" type="text" name="fieldName" id="fieldName" /> </div> Tables aren't evil. They are by far the best option when tabular data needs to be displayed. Forms IMHO aren't tabular data - they're forms, and CSS provides more than enough tools to layout forms however you like. A: One thing that I don't often see discussed in these form layout questions, if you've chosen a table to layout your form (with labels on the left and fields on the right as mentioned in one of the answers) then that layout is fixed. At work we recently had to do a quick 'proof of concept' of our web apps in Arabic. The apps which had used tables for form layout were basically stuck, whereas I was able to reverse the order of all the form fields and labels on all my pages by changing about ten lines in my CSS file. A: If your forms are laid out in a tabular format (for example, labels on the left and fields on the right), then yes, use table tags. A: A form is not "presentation", you ask for data, you do not usually present data. I use a lot of inline editing in tabular data. Obviousely i use the datacells - td as holders for the input elements when switching from presentation to input. A: Most of the answers I've seen here seem appropriate. The only thing I'd add, specifically to apathetic's or Mr. Matt's is to use <dl>/<dt>/<dd>. I believe these represent the list semantically. 
<dl> <dt><label for="fieldName">Name:</label></dt> <dd><input value="input value" type="text" name="fieldName" id="fieldName" /></dd> </dl> You might want to restyle these, but this says semantically what's going on: you've got a list of "terms" (<dt>) and "definitions" (<dd>), with the term being the label and the definition being the user-entered value.
{ "language": "en", "url": "https://stackoverflow.com/questions/109488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Problems with accessing FlashVars via parameters in AS3 I keep getting compiler errors when I try to access FlashVars in an AS3 class. Here's a stripped version of the code: package myPackage { import flash.display.Loader; import flash.display.LoaderInfo; import flash.display.Sprite; public class myClass { public function CTrafficHandler() { var myVar:String = LoaderInfo(this.root.loaderInfo).parameters.myFvar;}}} And I get a compilation error: 1119: Access of possibly undefined property root through a reference with static type source:myClass. When I change the class row to public class myClass extends Sprite { I don't get a compiler error, but I do get this in the output window: TypeError: Error #1009: Cannot access a property or method of a null object reference. Via the debugger (as suggested) I can see that this.root is null. How can I solve this problem? A: Your problem is that your DisplayObject has not been added to the display list at the point at which you're trying to access the FlashVars. The root display object is therefore null, according to your object. You can ensure that your DisplayObject is on the stage by using the following: package { import flash.display.LoaderInfo; import flash.display.Sprite; import flash.events.Event; public class MySprite extends Sprite { // constructor public function MySprite() { super(); addEventListener( Event.ADDED_TO_STAGE, onAddedToStage, false, 0, true ); } private function onAddedToStage( event:Event ):void { removeEventListener( Event.ADDED_TO_STAGE, onAddedToStage ); var paramList:Object = LoaderInfo( this.root.loaderInfo ).parameters; var myParam:String = paramList["myParam"]; } } } A: I found what the problem was. The class in question wasn't the main class used in the project, but rather a secondary class. I moved the code that reads the parameters into the main class and, once I had them, passed them to the secondary class's constructor.
A: The problem was indeed that you were attempting to access this information from a non-display object, or from outside of the document class. If you wish to access root or stage, the object that wishes to access them must first be added to the display list. I often use FlashVars for variables that are used often throughout the project - variables like country and language. I find that in this case it is best to catch these parameters in the document class and create public variables with said parameters as values. This will give _global-style access to these variables. That all being said, you really should use global variables sparingly, especially when working on collaborative projects. A: As an alternative, you could try using the mx.core.Application.application.parameters object. From the LiveDocs page for mx.core.Application: application : Object [static] [read-only] A reference to the top-level application. parameters : Object [read-only] The parameters property returns an Object containing name-value pairs representing the parameters provided to this Application. There are two sources of parameters: the query string of the Application's URL, and the value of the FlashVars HTML parameter (this affects only the main Application). A: I think you should extend from Sprite, but be sure to initialize it first and add it to the stage. Try to enable debugging and see what exactly is null, as the exception report says.
{ "language": "en", "url": "https://stackoverflow.com/questions/109491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Rails and Gmail SMTP, how to use a custom from address I've got my Rails (2.1) app set up to send email via Gmail; however, no matter what I set the from address to in my ActionMailer, the emails always arrive as if sent from my Gmail email address. Is this a security restriction they've put in place at Gmail to stop spammers using their SMTP? Note: I've tried both of the following methods within my ActionMailer (just in case): @from = 'me@mydomain.com' from 'me@mydomain.com' A: I believe it's just something Gmail does when mail is sent through its SMTP, as it was mentioned in a tutorial about using their SMTP to send mail. A: This is most likely to stop people trying to send email from addresses that Google can't verify the sender owns. This is fairly common amongst mail providers, and is probably a safeguard to stop people using Google's services for sending spam. A: I think I tried and failed in the past myself, but I did just come across this on the Gmail site: http://mail.google.com/support/bin/answer.py?ctx=gmail&hl=en&answer=22370 Looks like you can specify a custom "From" address within Gmail, and perhaps at that point, see if setting @from will work (now that Gmail knows about your custom from address).
{ "language": "en", "url": "https://stackoverflow.com/questions/109520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do I test that a Rails Helper defines a method? I am creating a Rails plugin and it is dynamically adding a method to a helper. I just want to ensure that the method is added. How can I see if the helper responds to the method name? A: Try this: def test_that_foo_helper_defines_bar o = Object.new assert !o.respond_to?(:bar) o.extend FooHelper assert o.respond_to?(:bar) end
{ "language": "en", "url": "https://stackoverflow.com/questions/109528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How can I programmatically manage iptables rules on the fly? I need to query existing rules, as well as being able to easily add and delete rules. I haven't found any APIs for doing this. Is there something that I'm missing? The closest I've come to a solution is using iptables-save | iptables-xml for querying and manually calling the iptables command itself to add/delete rules. Another solution I've considered is simply regenerating the entire ruleset out of my application's database and flushing the whole chain, then applying it again. But I want to avoid this as I don't want to drop any packets -- unless there's a way to atomically do this. I'm wondering if there's a better way. An API in C would be great; however, as I'm planning to build this into a stand-alone suid program, libraries that do this in ANY language are fine too. A: You may consider using rfw, which is a REST API for iptables. It serializes iptables commands from various potentially concurrent sources and remotely executes iptables on the fly. rfw is designed for distributed systems that try to update firewall rules on multiple boxes, but it can also be run on a single machine on the localhost interface. In that case it can be run over plain HTTP, avoiding the SSL and authentication overhead. Sample command: PUT /drop/input/eth0/11.22.33.44 which corresponds to: iptables -I INPUT -i eth0 -s 11.22.33.44 -j DROP You can insert and delete rules as well as query for current status to get the existing rules in JSON format: GET /list/input Disclaimer: I started that project. It's open source under the MIT license. A: As far as I understand (although no reference seems to mention it), iptables-restore is atomic. At the end, when the COMMIT line is read, iptables calls iptc_commit in libiptc (which is an internal interface you aren't supposed to use), which then calls setsockopt(SO_SET_REPLACE) with your new rulesets. That sounds about as atomic as you can get: with one kernel call.
However, more knowledgeable parties are invited to dispute this. :-) Edit: I can confirm that your description is correct. iptables-restore is done as an atomic operation in the kernel. To be even more specific, the operation is "only" atomic on a per-CPU basis, as we store the entire ruleset blob per CPU (due to cache optimizations). A: There is deliberately no API to manage these rules. You're not supposed to want to do so. Or something. If you need rules which are sufficiently dynamic that you care about the performance of executing /sbin/iptables, there are other ways to do it: * *Using something like the "recent" match or ip set matching, you can add/remove IP addresses from black/white lists without changing the rule set. *You can pass packets into userspace for filtering using NFQUEUE A: From the netfilter FAQ: The answer unfortunately is: No. Now you might think 'but what about libiptc?'. As has been pointed out numerous times on the mailinglist(s), libiptc was NEVER meant to be used as a public interface. We don't guarantee a stable interface, and it is planned to remove it in the next incarnation of linux packet filtering. libiptc is way too low-layer to be used reasonably anyway. We are well aware that there is a fundamental lack for such an API, and we are working on improving that situation. Until then, it is recommended to either use system() or open a pipe into stdin of iptables-restore. The latter will give you a way better performance. A: This morning I woke up to find that I was getting a Denial Of Service (DOS) attack from Russia. They were hitting me from dozens of IP blocks. They must have either had a large pool of IPs or some sort of proxy list/service. Every time I blocked an IP, another one popped up. Finally, I looked for a script, and found I needed to write my own solution. The following is a bit aggressive, but they were running my TOP LOAD LEVEL to over 200. Here is a quick script I wrote to block the DOS in realtime.
cat **"output of the logs"** | php ipchains.php **"something unique in the logs"** ==> PHP Script: <?php $ip_arr = array(); while(1) { $line = trim(fgets(STDIN)); // reads one line from STDIN $ip = trim( strtok( $line, " ") ); if( !array_key_exists( $ip, $ip_arr ) ) $ip_arr[$ip] = 0; $regex = sprintf( "/%s/", $argv[1] ); $cnt = preg_match_all( $regex, $line ); if( $cnt < 1 ) continue; $ip_arr[$ip] += 1; if( $ip_arr[$ip] == 1 ) { // printf( "%s\n", $argv[1] ); // printf( "%d\n", $cnt ); // printf( "%s\n", $line ); printf( "-A BLOCK1 -s %s/24 -j DROP\n", $ip ); $cmd = sprintf( "/sbin/iptables -I BLOCK1 -s %s/24 -j DROP", $ip ); system( $cmd ); } } ?> Assumptions: 1) BLOCK1 is a chain already created. 2) BLOCK1 is a chain that is run/called from the INPUT chain. 3) Periodically you will need to run "iptables -S BLOCK1" and put the output in an /etc/sysconfig file. 4) You are familiar with PHP. 5) You understand web log line items/fields and output. A: Using iptables-save and iptables-restore to query and regenerate rules is easily the most efficient way of doing it. These used to, once, be shell scripts, but now they are C programs that work very efficiently. However, I should point out that there is a tool that you can use which will make maintaining iptables much easier. Most dynamic rulesets are really the same rule repeated many times, such as: iptables -A INPUT -s 1.1.1.1 -p tcp -m tcp --dport 22 -j ACCEPT iptables -A INPUT -s 2.2.2.0/24 -p tcp -m tcp --dport 22 -j ACCEPT iptables -A INPUT -p tcp -m tcp --dport 22 -j REJECT Instead of replacing those rules every time you want to change which hosts can access port 22 (useful for, say, port knocking), you can use ipsets. Viz: ipset -N ssh_allowed nethash iptables -A INPUT -m set --set ssh_allowed src -p tcp -m tcp --dport 22 -j ACCEPT ipset -A ssh_allowed 1.1.1.1 ipset -A ssh_allowed 2.2.2.0/24 Sets can hold IP addresses, networks, ports, MAC addresses, and have timeouts on their records.
(Ever wanted to add something for just an hour?) There is even an atomic way of swapping one set with another, so a refresh means creating a new temporary set, then swapping it in as the name of the existing set. A: This is an example of using bash and iptables to dynamically block hackers abusing sshd on CentOS. In this case, I configured sshd to disallow password login (it allows keys). I look in /var/log/secure for entries of "Bye Bye", which is sshd's polite way of saying f-off... IP=$(awk '/Bye Bye/{print $9}' /var/log/secure | sed 's/://g' | sort -u | head -n 1) [[ "$IP" < "123" ]] || { echo "Found $IP - blocking it..." >> /var/log/hacker.log /sbin/iptables -A INPUT -s $IP -j DROP service iptables save sed -i "/$IP/d" /var/log/secure } I run this in a loop every second, or minute, or whatever makes me happy. I test the value of $IP to verify it found a useful value; if so, I invoke iptables to drop it, and I use sed to purge the log file of $IP so the entry doesn't get added again. I do a little pre-processing (not shown) to whitelist some important IPs that are always valid and that might have had trouble connecting (due to user error). From time to time I sort the iptables filter list and create IP ranges from the entries (using a different script - and when checked, they are usually IP ranges from India, China and Russia). Thus, my overall iptables filter rule set stays between 50 and 500 entries; ipset doesn't really improve much on a list that short. A: MarkR's right, you're not supposed to do this. The easiest way is to call iptables from the script or to write the iptables config and 'restore' it. Still, if you want to, read the source of iptables. iptables uses matches and tables as shared objects. You can use the source for them. The Linux netfilter also has some include files under /usr/include/netfilter*. These are somewhat low-level functions. It is what iptables uses. This is as near an API as one can get without iptables. But this API is 'messy'.
Bear in mind that it was designed to be used only by iptables. It's not very well documented, you can hit very specific problems, and the API can change fairly quickly without any warning, so an upgrade will probably break your code, etc. A: I know it's a short-term solution, per the netfilter discussion, but in the short term you can use iptc wrapped in Python with this: https://github.com/ldx/python-iptables I played with it some in a recent project of mine and found it quite effective.
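Tying the threads above together (the netfilter FAQ's "pipe into stdin of iptables-restore" advice and the confirmation that the restore happens in one kernel call), the practical pattern is: regenerate the whole table from your application's database, then load it in a single shot. Below is a rough Python sketch of the generation side; the BLOCKLIST chain name and the DROP rules are purely illustrative, and the actual load step needs root, so it is kept separate:

```python
import subprocess

def build_ruleset(blocked_ips):
    """Render a complete iptables-restore payload for the filter table.

    iptables-restore replaces the whole table with a single kernel call
    when it reaches COMMIT, so regenerating everything and loading it
    this way avoids the window where packets hit a half-built chain.
    """
    lines = [
        "*filter",
        ":INPUT ACCEPT [0:0]",
        ":BLOCKLIST - [0:0]",  # custom chain; the name is made up here
        "-A INPUT -j BLOCKLIST",
    ]
    for ip in blocked_ips:
        lines.append(f"-A BLOCKLIST -s {ip} -j DROP")
    lines.append("COMMIT")
    return "\n".join(lines) + "\n"

def apply_ruleset(payload):
    """Pipe the payload into iptables-restore's stdin (requires root)."""
    subprocess.run(["iptables-restore"], input=payload.encode(), check=True)

if __name__ == "__main__":
    print(build_ruleset(["1.1.1.1", "2.2.2.0/24"]))
    # apply_ruleset(...) would go here on a machine where you have root.
```

Note that this replaces the entire filter table, so the generated payload has to include every rule you want to keep, not just the blocked addresses.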
{ "language": "en", "url": "https://stackoverflow.com/questions/109553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46" }
Q: How do I access cookies within Flash? I'm looking to grab cookie values for the same domain within a Flash movie. Is this possible? Let's say I let a user set a variable foo and I store it using any web programming language. I can access it easily via that language, but I would like to access it via the Flash movie without printing it into the HTML page to pass it in. A: If you just want to store and retrieve data, you probably want to use the SharedObject class. See Adobe's SharedObject reference for more details of that. If you want to access the HTTP cookies, you'll need to use ExternalInterface to talk to JavaScript. The way we do that here is to have a helper class called HTTPCookies. HTTPCookies.as: import flash.external.ExternalInterface; public class HTTPCookies { public static function getCookie(key:String):* { return ExternalInterface.call("getCookie", key); } public static function setCookie(key:String, val:*):void { ExternalInterface.call("setCookie", key, val); } } You need to make sure you enable JavaScript using the 'allowScriptAccess' parameter in your flash object.
Then you need to create a pair of javascript functions, getCookie and setCookie, as follows (with thanks to quirksmode.org) HTTPCookies.js: function getCookie(key) { var cookieValue = null; if (key) { var cookieSearch = key + "="; if (document.cookie) { var cookieArray = document.cookie.split(";"); for (var i = 0; i < cookieArray.length; i++) { var cookieString = cookieArray[i]; // skip past leading spaces while (cookieString.charAt(0) == ' ') { cookieString = cookieString.substr(1); } // extract the actual value if (cookieString.indexOf(cookieSearch) == 0) { cookieValue = cookieString.substr(cookieSearch.length); } } } } return cookieValue; } function setCookie(key, val) { if (key) { var date = new Date(); if (val != null) { // expires in one year date.setTime(date.getTime() + (365*24*60*60*1000)); document.cookie = key + "=" + val + "; expires=" + date.toGMTString(); } else { // expires yesterday date.setTime(date.getTime() - (24*60*60*1000)); document.cookie = key + "=; expires=" + date.toGMTString(); } } } Once you have HTTPCookies.as in your flash project, and HTTPCookies.js loaded from your web page, you should be able to call getCookie and setCookie from within your flash movie to get or set HTTP cookies. This will only work for very simple values - strings or numbers - but for anything more complicated you really should be using SharedObject. A: I believe flash objects have functions accessible through javascript, so if there's no easier way, you could at least use a javascript onload handler and pass document.cookie into your flash app from the outside. More info here: http://www.permadi.com/tutorial/flashjscommand/ A: You can read and write cookies (Local Shared Object) from flash. Flash cookies are stored on your PC within a directory with the name of your domain. Those directories are located at: [Root drive]:\Documents and Settings\[username]\Application Data\Macromedia\Flash Player\#SharedObjects\ This article from Adobe is a good start. 
A: Some Googling shows that it can be done by using query strings: for web applications, you can pass values to the swf by URL parameters, and (with ActionScript inside the swf) save them to the sandbox. A: Cookies are available to JavaScript through document.cookie - try using Flash's getURL to call a JavaScript function. getURL('javascript:document.cookie = "varname=varvalue; expires=Thu, 2 Aug 2001 20:47:11 UTC; path="'); A: The getCookie method in HTTPCookies.as should use a "return" statement. import flash.external.ExternalInterface; public class HTTPCookies { public static function getCookie(key:String):* { return ExternalInterface.call("getCookie", key); } public static function setCookie(key:String, val:*):void { ExternalInterface.call("setCookie", key, val); } } A: I'm 10 years too late. If you can embed the data you need in the page, it's 10 times easier to grab. import flash.net.*; var _loader:URLLoader = new URLLoader(); var _req:URLRequest = new URLRequest('https://stackoverflow.com'); _loader.addEventListener(Event.COMPLETE, _onComplete); _loader.load(_req); function _onComplete(e:Event):void{ var wantedData:RegExp = /<div class="cool-data">(.*?)<\/div>/ig; var result:Object = wantedData.exec(String(_loader.data)); trace(result[0].split('<div class="cool-data">').join('') .split('</div>').join('')); }
{ "language": "en", "url": "https://stackoverflow.com/questions/109580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Speed up images in Tomcat 6 In Tomcat 6 I have a servlet running OpenBlueDragon; everything compiles and serves up quickly, with the exception of images, which lag significantly. Any suggestions for optimizing image serving? Here is my server.xml: <Service name="Catalina"> <Connector port="8009" protocol="AJP/1.3" /> <Connector port="8080" maxThreads="100" protocol="HTTP/1.1" connectionTimeout="20000" /> <Engine name="Standalone" defaultHost="hostname.whatever" jvmRoute="ajp13"> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <Host name="hostname.whatever" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> ...context </Host> </Engine> </Service> A: Another option is to use Apache as a frontend, connecting Tomcat with mod_jk. This way you can let Apache serve static content (e.g. images, CSS, JavaScript) and let Tomcat generate the dynamic content. It might take a bit of work to separate the static content from the dynamic, but it works great for me. On Unix, having Apache as a frontend is a nice option because, being bound to port 80, you're often forced to run as root. Apache knows how to drop root permissions after binding a port; Tomcat doesn't. You don't want a server facing the public to run as root. (This is similar to the reverse proxy answer, but it doesn't involve a proxy, just mod_jk.) A: Are you serving the same set of images over and over? In that case adding a servlet filter that adds a reasonable Expires header might save Tomcat a lot of work. It will not increase the speed of the served image but will just reduce the number of requests it has to handle. Lots of examples for this on the web. A: If you have the option, you could add a reverse proxy in front of your application. At work I have an Apache web server that receives all inbound HTTP connections.
Based on the URL, it either forwards the request to another server or serves up the content itself. I've used this approach to accelerate serving up static content for a Trac site. The ProxyPass and ProxyPassReverse directives are a good place to start looking if you want to go this route. As a simple example, if you have a virtual directory called /images, Apache could serve up any request for something in that directory and forward everything else to your Tomcat instance. The syntax is pretty comprehensive. If there is any method at all to the way your static content is identified, this is an approach that will work. Apache isn't the only choice here. I think all modern web servers include similar functionality. If I were starting today I'd probably look at lighttpd instead, just because it does less. There may even be caching reverse proxies that figure this out for you automatically. I'm not familiar with any of them though.
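For the Expires-header filter suggested above, the only fiddly part is emitting the date in the RFC 1123 format HTTP expects; the filter itself is a few lines in whatever servlet framework you use. As a quick illustration of computing the header value (a sketch in Python rather than Java, not tied to any servlet API):

```python
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

def expires_header(days=365):
    """Build an Expires value in the RFC 1123 form HTTP requires,
    e.g. 'Sun, 06 Nov 1994 08:49:37 GMT'."""
    when = datetime.now(timezone.utc) + timedelta(days=days)
    return format_datetime(when, usegmt=True)

if __name__ == "__main__":
    # A far-future Expires lets browsers reuse cached images without
    # re-requesting them, which is what takes the load off Tomcat.
    print("Expires: " + expires_header())
```

The same far-future date, set once per image response by the filter, is all the browser needs to stop re-requesting unchanged images.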
{ "language": "en", "url": "https://stackoverflow.com/questions/109592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Finding SQL queries in compiled app I have just inherited a server application; however, it seems that the only copy of the database is corrupt and the working version is gone, so is it possible to find what queries the application is running so I can try to rebuild the tables? Edit: I have some files with no extensions that are named the same as the databases. I don't know if there is anything that can be done with them, but if anyone has any ideas... The accepted answer seems the most likely to succeed; however, I was able to find another backup, so I have not tested it. A: Turn on SQL query logging and watch what the application asks for. A: If you have access to either a Unix machine, or can install the Cygwin utilities (http://www.cygwin.com/), there is a command called 'strings' which will search through any file type and print out any contiguous sequence of character data (might just be ASCII). That tool should help you identify the SQL queries embedded in the application. A: Look for SQL Profiler, which (depending on which version you have) is normally available from the Tools menu in Query Analyzer (isqlw.exe) or Management Studio (in later versions). With SQL Profiler you can run a trace on the server which can show you which queries are being requested by the application. A: You could run the UNIX command "strings" on the program to see whether it has embedded SQL strings: http://en.wikipedia.org/wiki/Strings_(Unix) A: You could RegEx the files to search for * *"SELECT *" *"UPDATE *" *"DELETE FROM *" *"INSERT INTO *"
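The 'strings' and RegEx suggestions above combine naturally: extract printable runs from the binary, then keep only the ones that look like SQL. A small Python sketch of that idea (the minimum run length of 6 and the keyword list are arbitrary choices for illustration):

```python
import re

# Mimic `strings`: pull runs of printable ASCII out of an arbitrary binary,
# then keep only the runs that contain a SQL-looking keyword.
PRINTABLE = re.compile(rb"[ -~]{6,}")  # 6+ consecutive printable bytes
SQLISH = re.compile(
    r"\b(SELECT|INSERT\s+INTO|UPDATE|DELETE\s+FROM|CREATE\s+TABLE)\b",
    re.IGNORECASE,
)

def sql_strings(blob):
    """Return candidate SQL statements embedded in a binary blob."""
    out = []
    for m in PRINTABLE.finditer(blob):
        text = m.group().decode("ascii")
        if SQLISH.search(text):
            out.append(text)
    return out

if __name__ == "__main__":
    fake_exe = b"\x00\x01garbage\x00SELECT id, name FROM users WHERE id = ?\x00\xffmore\x00"
    print(sql_strings(fake_exe))
```

Run against the real executable's bytes, this surfaces the embedded queries without needing Cygwin, and the keyword filter cuts out the non-SQL string constants that plain `strings` would print.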
{ "language": "en", "url": "https://stackoverflow.com/questions/109594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I position the content of a tabpage in Silverlight? When I place a control on a tabpage in Silverlight the control is placed ~10 pixels down and ~10 pixels right. For example, the following XAML: <System_Windows_Controls:TabControl x:Name="TabControlMain" Canvas.Left="0" Canvas.Top="75" Width="800" Height="525" Background="Red" HorizontalContentAlignment="Left" VerticalContentAlignment="Top" Padding="0" Margin="0"> <System_Windows_Controls:TabItem Header="Test" VerticalContentAlignment="Top" BorderThickness="0" Margin="0" Padding="0" HorizontalContentAlignment="Left"> <ContentControl> <Grid Width="400" Height="200" Background="White"/> </ContentControl> </System_Windows_Controls:TabItem> </System_Windows_Controls:TabControl> will produce: How do I position the content at 0,0? A: Check the control template of your TabItem; it might have a default Margin of 10. Just a guess. A: Look at the control template, it has a margin of that size. Use Blend to modify a copy of the tab control's template. A: You can also add a negative margin to the content. I found the value to be 9 pixels... <System_Windows_Controls:TabControl x:Name="TabControlMain" Canvas.Left="0" Canvas.Top="75" Width="800" Height="525" Background="Red" HorizontalContentAlignment="Left" VerticalContentAlignment="Top" Padding="0" Margin="0"> <System_Windows_Controls:TabItem Header="Test" VerticalContentAlignment="Top" BorderThickness="0" Margin="0" Padding="0" HorizontalContentAlignment="Left"> <ContentControl> <Grid Width="400" Height="200" Margin="-9,-9,-9,-9" Background="White"/> </ContentControl> </System_Windows_Controls:TabItem> </System_Windows_Controls:TabControl> A: After spending a couple of hours fooling around with this problem, I can confirm Brian is totally right. The current version of VS does not allow changing the TabControl's template, but it can be done using Blend, and there is a margin on the template.
The main drawback of doing this is that the XAML file will no longer be previewable from Visual Studio.
{ "language": "en", "url": "https://stackoverflow.com/questions/109608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How would you achieve this table based layout using CSS instead of HTML tables? I want the following layout to appear on the screen: FieldName 1 [Field input 1] FieldName 2 is longer [Field input 2] . . . . FieldName N [Field input N] Requirements: * *Field names and field inputs must align on the left edges *Both columns must dynamically size themselves to their content *Must work cross-browsers I find this layout extremely simple to do using HTML tables, but since I see a lot of CSS purists insisting that tables only be used for tabular data I figured I'd find out if there was a way to do it using CSS. A: I think most of the answers are missing the point that the original questioner wanted the column widths to depend on the width of the content. I believe the only way to do this with pure CSS is by using display: table, display: table-row and display: table-cell, but that isn't supported by IE. But I'm not sure that this behaviour is desirable; I find that creating a wide column because of a single long field name makes the layout less aesthetically pleasing and harder to use. Wrapped lines are fine in my opinion, so I think the answers I just suggested were incorrect are probably the way to go. Robertc's example is ideal, but if you really must use tables, I think you can make it a little more semantic by using <th> for the field names. I'm not sure about this, so please correct me if I'm wrong. <table> <tr><th scope="row"><label for="field1">FieldName 1</label></th> <td><input id="field1" name="field1"></td></tr> <tr><th scope="row"><label for="field2">FieldName 2 is longer</label></th> <td><input id="field2" name="field2"></td></tr> <!-- ....... --> </table> Update: I haven't been following this closely, but IE8 apparently supports CSS tables, so some are suggesting that we should start using them. There's an article on 24 ways which contains a relevant example at the end.
A: Better still use a list <fieldset class="classname"> <ul> <li> <label>Title:</label> <input type="text" name="title" value="" /> </li> </ul> </fieldset> The set the li tags width wide enough for both label and input and float the label to the left. Also to achieve that table like block with the tables you could set the label width to be as big as the largest fieldname forcing all the labels or expand that wide. [edit] this is some good reading on a list apart A: It's not clear that it is tabular data as some others have commented, though it could be. A table would imply a semantic relationship between all the items in the respective columns (other than just "they're all names of database columns"). Anyway, here's how I've done it before: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <title>Form layout</title> <style type="text/css"> fieldset {width: 60%; margin: 0 auto;} div.row {clear: both;} div.row label {float: left; width: 60%;} div.row span {float: right; width: 35%;} </style> </head> <body> <form action="#" method="post"> <fieldset> <legend>Section one</legend> <div class="row"> <label for="first-field">The first field</label> <span><input type="text" id="first-field" size="15" /></span> </div> <div class="row"> <label for="second-field">The second field with a longer label</label> <span><input type="text" id="second-field" size="10" /></span> </div> <div class="row"> <label for="third-field">The third field</label> <span><input type="text" id="third-field" size="5" /></span> </div> <div class="row"> <input type="submit" value="Go" /> </div> </fieldset> </form> </body> </html> Edit: Seems that 'by design' I can't reply to comments on my answer, obviously this is somehow less confusing. 
So, in reply to 17 of 26's comment - the 60% width is entirely optional; by default the fieldset will inherit the width of the containing element. You could also, of course, make use of min-width and max-width, or any of the table layout rules, if only IE supported them, but that's not CSS failing miserably ;) A: This markup and CSS roughly achieves your stated goals under the restrictions for this question... The Proposal <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <title>My Form</title> <style type="text/css"> #frm1 div {float: left;} #frm1 div.go {clear: both; } #frm1 label, #frm1 input { float: left; clear: left; } </style> </head> <body> <form id="frm1" action="#" method="post"> <fieldset> <legend>Section One</legend> <div> <label for="field1">Name</label> <label for="field2">Address, City, State, Zip</label> <label for="field3">Country</label> </div> <div> <input type="text" id="field1" size="15" /> <input type="text" id="field2" size="20" /> <input type="text" id="field3" size="10" /> </div> <div class="go"> <input type="submit" value="Go" /> </div> </fieldset> </form> </body> </html> The Merits ...but I would not recommend its use. The problems with this solution are * *the very annoying entire-column wrap at skinny browser widths *it separates the labels from their associated input fields in the markup The solution above should be (I haven't verified this) accessibility-friendly because screen readers, I have read, do a good job of using the for="" attribute in associating labels to input fields. So visually and accessibility-wise this works, but you might not like listing all your labels separately from your input fields.
Conclusion The question as it is crafted -- specifically the requirement to automatically size the width of an entire column of different-length labels to the largest label length -- biases the markup solution towards tables. Absent that requirement, there are several great semantic solutions to presenting forms, as has been mentioned and suggested by others in this thread. My point is this: There are several ways to present forms and collect user input in a pleasing, accessible, and intuitive way. If you can find no CSS layout that can meet your minimum requirements but tables can, then use tables. A: I wouldn't, I would use a table. This is a classic example of a tabular layout - exactly the sort of thing tables are supposed to be used for. A: FieldName objects should be contained in SPANs with style attributes of float: left and a width that is wide enough for your labels. Inputs should be contained within a span styled to float: left. Place a <div style="clear: both"/> or <br/> after each field input to break the floating. You may enclose the aforementioned objects in a div with the width style attribute set wide enough for both labels and inputs, so that the "table" stays small and contained. Example: <span style="float: left; width: 200px">FieldName1</span><span style="float: left"><input/></span><br/> <span style="float: left; width: 200px">FieldName2</span><span style="float: left"><input/></span><br/> <span style="float: left; width: 200px">FieldName3</span><span style="float: left"><input/></span><br/> A: Each row is to be taken as a 'div' that contains two 'spans': one for the field name and one for the input. Set 'float:left' on both spans. However, you need to set some width for the field name span. Also, style the div to include the attribute 'clear:both' as a precaution.
{ "language": "en", "url": "https://stackoverflow.com/questions/109618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What are the bigger hurdles to overcome migrating from Winforms to WPF? I've been developing Winforms applications in C# for a few years now and have been interested in moving future development toward WPF, mainly because of the positive things I've been hearing about it. But I'm wondering what sort of hurdles others have had to overcome as they migrated to WPF. Was there a significant hit to your productivity or any particular issues which you found challenging? A: In my admittedly limited experience with WPF, the bigger hurdles include a complete overhaul of my mental model for how UIs are built and the new terminology that needs to be learned as a result of that. It may be that others have an easier time adjusting to the model, though. I can see how someone coming from best practices in the web world would find the transition much more natural. There was definitely a significant hit to my productivity (so significant that I'm not yet comfortable with the idea of going to my employer and saying "let me do this with WPF instead of Winforms"). That's not to say I'll never get there, but I need to develop some additional comfort with the technology through practice in my personal time. I didn't run into any particular issues that I found to be more challenging than any others. I believe Adam Nathan's WPF Unleashed was mentioned elsewhere, and that's definitely a worthwhile read. I've also heard good things about Charles Petzold's book, though I can't personally vouch for it. A: I'm not sure I can give you just one hurdle, because it is a complete departure from WinForms. My suggestion is to get Adam Nathan's WPF Unleashed, forget everything you know about building UIs with any previous technology (Winforms, MFC, Java) and begin again at square one. If you try to do it any other way it will cause utter frustration. ETA: the reason I say to just start from scratch is because sometimes it's easier to learn new concepts if you go in with a clean slate.
In the past, I've discovered that I can be my own worst enemy when it comes to learning something new if I try to carry knowledge from technology to technology (e.g. thinking that doing asmx web services for years precludes me from reading the first couple of chapters of a WCF book).
A:
* You cannot turn off anti-aliasing.
* Your users need Vista or XP SP2 with the .NET 3.x framework.
* If you want to use WinForms content, be aware of the airspace problem (one solution for D3D here).
These are the major issues for me. Other than that, I'm all for it; it's way more than looks.
A: The Microsoft Learning website has a useful introduction, which I believe is available free if you have a Microsoft Passport account https://www.microsoftelearning.com/eLearning/courseDetail.aspx?courseId=85488
A: If the WinForms app has a proper object model architecture defined (more like an MVC architecture), I think it won't take much time to migrate your UI to WPF. WPF organizes its visual elements hierarchically (the VisualTree), and RoutedEvents and RoutedCommands are totally new concepts in WPF. Obviously there are more features like DataTemplate/ControlTemplate, all at the XAML level. All of these make for a very powerful and easy way to accomplish a great user experience. So my major point here is that you can expect just your object model to be reusable (with some modifications) in WPF, and everything else in the Winforms project needs to be thrown away. Of course all the other layers need not be modified (communication layer/data layer).
A: I'm in the same boat. I've been programming using winforms for so long. Now I've decided that I'm going to learn WPF and start doing everything with it. The hardest thing for me is getting used to using XAML primarily for the UI rather than C# code, and a lot of the properties are different in WPF (e.g. to change a label's text you have to change the Content property). So my biggest problem is to get my head out of winforms and get it into a whole new way of thinking.
A: Well, for me it was the fact that controls in WPF behave rather differently from those in WinForms (for example, when it comes to positioning in the form). You have to understand the difference as soon as possible to use it successfully and productively.
A: Even with 20+ years of experience I found WPF to have a steep learning curve. I attempted to do my latest project using WPF, but the lack of built-in controls (like NumericUpDown, for example) and problems getting data binding to work with a business object forced me to fall back to Winforms for this project, though I hope to do future projects with it. Almost all the code I wrote (vs. what the designer generated) was reusable when I switched between WPF and Winforms.
{ "language": "en", "url": "https://stackoverflow.com/questions/109620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Finding the right VPS The market is flooded with VPS (virtual private server) hosting options. It seems everyone and their mother has an overloaded server in their closet. Enterprise options always seem priced insanely high, which makes the ones that are cheap and claim enterprise level seem shaky. What do you look for in a quality VPS provider (language support, 24-hour tech support, etc.), and how, if at all, do you check their credibility?
A: I've had good luck with Linode. I run Utility Mill through them. They run great, have reasonable rates, and there are always people able to help you in their IRC room. Back when I was researching, Slicehost also looked very promising, but they had a waiting list at that time.
A: Most virtual hosting platforms will have a trial period in which you can test out their reliability. They will also give you a list of the high-profile sites on their systems. Most keep track of the traffic hogs, as it's a great way for them to attest to their own stability. I would recommend Slicehost, as I have been with them for over a year and love the control. They have an amazing panel from which you can console in, rebuild slices, and restart slices in an instant. They also allow a VERY fast and painless memory upgrade, bandwidth pooling (taking all of your accounts' bandwidth into one large pool), and lots of different Linux OSes. So to answer your question without sounding like a complete advertisement:
* Check their remote capabilities for managing your VPS.
* Check out their largest clients and some big sites on their systems.
* Test out their VPS for 30 days or so and give their support a test!
* Check out forums where people talk about services (like this thread mentioning Slicehost 3 times already).
* Make sure people aren't complaining of overselling or crowded servers. I know in a VPS world things are sandboxed a lot more than on shared hosts, but it's still nice to know they can handle loads.
* Check out the ability to move servers or add more memory to your VPS.
Those are the things that I look for.
A: Check credibility on forums like http://www.webhostingtalk.com/ I am about to purchase a VPS and after some research I selected http://www.servint.com/
* Many years in the market
* Seem credible on webhostingtalk
* Managed servers: http://www.servint.net/vps/faq.php#14
A: I think one way is to look for ones that reputable sites use. For example, I learnt about Slicehost through Refactor my Code using it, and I love it. :-)
A: Selecting a VPS supplier can be tricky. Some warning signs:
* Suppliers that don't list the amount of guaranteed RAM you get with a VPS plan.
* Suppliers that don't list their contact information or office address (a lot of the small reseller outfits just list their plans and how they want your money).
* Suppliers that don't mention whether they own their own hardware or resell capacity from someone else.
From a performance perspective there are a number of other things to consider:
* Being able to have your VPS located close to the majority of your customers (at least in the same country)
* Cost/amount of guaranteed RAM
To list VPS plans and suppliers that at least provide this basic information, try CompareVPS.com.
A: I've been with ServInt for only a month, but so far the experience has been great. I originally grabbed a VPS from LiquidWeb after hearing some good things on forums, but was terribly disappointed. The load times I was getting were awful. Though I will say that their customer service was pretty sharp. Anyway, I think you'd be making a good move by going with ServInt. So far the performance has been wonderful; I haven't really had to deal with their CS yet though. As far as unmanaged goes, I've heard Slicehost and Linode are winning that race. My 2 cents.
A: I also should add that this made me pretty confident that I was dealing with a good company; they stood up for their competitor when they could have easily done the opposite. http://blog.servint.net/2009/07/08/why-servint-stands-beside-rackspace-and-you-should-too/
A: I've tried quite a few of them. The only one that I can recommend wholeheartedly is Slicehost. They are incredibly good at what they do. I have many clients running on their systems.
A: I use RackForce. I haven't had a problem with mine, but then I don't have anything large scale on it. One thing about them that is good is that they sit on the borderline of two power grids and backbones (so I have heard). Just stay away from Webserve.ca. They are HORRIBLE: horrible support, they mess things up, and they're slow to resolve problems. And I heard they are a reseller. I tried them a while back when I just needed something quick and didn't do any searching. Bad bad bad.
A: The best way to find good or bad references about a hosting service is always Googling and forums. I always look for good support and a flexible service provider. One that I have as a development and staging server is "A Small Orange" http://www.asmallorange.com/services/vps/. They even have a developer package, which is what I use. But I started there by getting a simple, cheaper hosting plan (25 dollars/year), and I checked their support and their uptime, and when I felt confident in their service, I got a VPS. I recommend it.
A: Availability is nice, but even cheap solutions are usually sufficient. The thing I've wanted most since I started having sites hosted: shell access. It is a PAIN to do everything through FTP and web interfaces. Oh, and if your app has PHP/MySQL/Apache/IIS/.NET/JSP/etc. version requirements, you'll want to check those first.
A: You should try JoinVPS. It's cheap and reliable enough.
{ "language": "en", "url": "https://stackoverflow.com/questions/109631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: C pointer assignment behavior temp2, temp1 are pointers to some struct x:
struct FunkyStruct x;
struct FunkyStruct *temp1 = &x, *temp2 = &x;
Now, after execution of following lines:
temp2 = temp1;
temp1 = temp1->nxt;
...Will temp2 and temp1 still point to the same memory location? If not, please explain why they would be different.
A: This sounds like a question based on a background in Java? The answer that dysfunctor gave is good. The important thing to realise is that in C assigning a pointer is no different to assigning an integer. Consider the following modification to your original code:
int temp1 = 1;
int temp2;
temp2 = temp1;
temp1 = temp1 + 1;
At the end of this, temp1 is 2 and temp2 is 1. It's not like assigning a (non-primitive) object in Java, where the assignment actually assigns a reference to the object rather than the value.
A: temp2 will not be updated, but temp1 will point to the next item. So if temp1 is 0x89abcdef and temp1->nxt is 0x89b00000, then after you're done, temp1 will be 0x89b00000 and temp2 will be 0x89abcdef. Assuming you're making a linked list, of course.
A: Initially, temp1 and temp2 both contain the memory address of x. temp2 = temp1 means "assign the value of temp1 to temp2". Since they have the same value to start with, this command does nothing. The expression temp1->nxt means "look inside the data structure that temp1 points to, and return the value of the field nxt." So temp1 = temp1->nxt assigns the value of temp1->nxt to temp1. (Of course, the lookup happens before the assignment.) temp1 will now contain whatever value the nxt field happened to contain. It could be the same as the old value, or it could be different.
A: You're not really giving us enough information to answer your question. Are they starting out pointing to the same structure, or are they only both of type pointer to structure x? And if it's some struct x, what's the definition of the nxt field?
A: Different.
You've saved the address of what temp1 initially pointed to into temp2. You then changed what temp1 points to, not the variable at the other end of what temp1 points to. If you had done
temp2 = temp1;
*temp1 = temp1->foo;
then temp1 and temp2 would both be pointing to (the same) modified variable.
A: No, assuming there are pointers like in C. temp2 would be pointing at the location of x and temp1 would be pointing at whatever the nxt pointer points to. Usually this would be the layout for a singly linked list.
A: x (and therefore x.nxt) will be initialised to an unspecified value, depending on the combination of compiler, compiler options and the runtime environment. temp1 and temp2 will both point to x (before and after temp2=temp1). Then temp1 will be assigned whatever value x.nxt has.
Final answer: 0 < Pr(temp1 == temp2) << 1, because temp1 == temp2 iff x.nxt == &x.
A: The short answer is no. But only if nxt is different to both temp1 and temp2 to start with. The line temp1=temp1->nxt; has two parts, separated by the = operator. These are:
* The right hand side, temp1->nxt, looks up the structure pointed to by temp1 and takes the value of the nxt field. This is a pointer (a new memory location).
* The pointer from the right hand side is then used to update the value of temp1.
{ "language": "en", "url": "https://stackoverflow.com/questions/109644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: How do you do very quick (and dirty) estimations for coding tasks? So you've just been put on the spot by The Boss. You've got 15 minutes to come up with a back-of-the-envelope estimate for the addition of some new feature. Your boss (fortunately) recognizes that you can't provide an accurate estimate in that time, so he's expecting something that is in the right order of magnitude. The question is how do you go about giving an estimate in that time frame that is accurate to an order of magnitude? Note that this is meant to be a quick and dirty estimate, not something that might be expected from questions like this
A: The best way is to try a quick breakdown of all of the major sub-components, e.g.
* Update data model script (3 items in 2 tables)
* Change input screen (3 new inputs)
* Check input (3 new inputs)
* Update data
* Display results, etc.
* Build unit tests
Assign a rough guess to each of these, and if you can't think of one, put down at least 2 hours, because even the simplest item will probably take at least an hour, and the 2x will allow for uncertainty. At least you will have thought of all the items you will have to do, so it will be in the right order of magnitude, as was requested.
A: Think back to similar tasks you've done in the past and how long they took you. If you've done nothing similar at all before, try to break the task down into subtasks, then each subtask down further, until no subtask is left that sounds like it will take longer than 1-2 days to prototype in the most naive possible way; if you can't divide up a task with an estimate of longer than 3 days, this usually implies that you don't really know what is involved in doing that task; do some quick research. Once everything is broken up enough, total it up, double the result and give that as your estimate.
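That breakdown-and-double approach is mechanical enough to sketch as a throwaway script. The task names, hour figures, and the 2-hour floor below are made-up examples taken from the rules of thumb above, not a prescription:

```python
# Back-of-the-envelope estimator: each subtask gets a rough guess,
# anything unknown gets a 2-hour floor, and the total is doubled
# to allow for uncertainty (the "double the result" rule above).
MIN_HOURS = 2.0

def rough_estimate(subtasks):
    """subtasks: dict of name -> guessed hours (None if no idea yet)."""
    total = sum(max(hours or 0, MIN_HOURS) for hours in subtasks.values())
    return total * 2  # double for the unknowns

tasks = {
    "update data model script": 3,
    "change input screen": 2,
    "check input": 1,       # below the floor; counted as 2
    "update data": None,    # no idea yet; counted as 2
    "build unit tests": 4,
}
print(rough_estimate(tasks))  # prints 26.0
```

The point is not the arithmetic but the forcing function: listing the subtasks is what keeps the answer in the right order of magnitude.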
If you don't know how to approach a problem enough to do the above, and your boss is breathing down your neck so you don't feel you can research there and then, instead try to give your boss an estimate of how long it will take you to do the research required to understand the problem enough to give him a proper estimate. A: I can't imagine a situation where I truly can't make an estimate at all--more often there's the case where I can imagine multiple scenarios which would result in vastly different timeframes for the project, depending on various things that could reasonably crop up. And I don't want to lie--the worst thing you can do with your boss is to just make stuff up. So I explain each of the possibilities. Of course, this only works with an understanding boss, but if your boss is so ignorant or foolish that he refuses to listen to the full explanation, you have other problems. For example, here's how I did it for a recent case where I actually had to do exactly this. x264, the video encoder I work on, implements a very primitive form of interlaced coding chosen solely for the reason that it was very easy to implement. We wanted to implement the full form of this coding, but I had no idea how many of the assumptions made for the simplified version would fail in such a case. So I thought through the various levels of things that might have to be changed, and made the estimate a range--well, at best, it might already be nearly working, but that's doubtful. And at worst there's a whole ton of stuff that needs to be changed. So, I told my boss, it was probably better to assume the worst here, since the spec was very complicated and despite not knowing about any of that complexity, I suspected that given the major lack of related code in the program, nearly none of that complexity was actually implemented. 
In the end I was right--the changes required ended up being quite complicated, and they outsourced the project to a contractor with more expertise in the complexities of H.264's interlaced coding.
A: In addition to the necessary breakdown: a piece of advice I learned from the Pragmatic Programmers is to express estimates over 15 days in weeks, and estimates over 8 weeks in months, so that the unit reflects the accuracy of the estimation. Be very careful over 30 weeks. You can also base your estimations on similar tasks you have already done.
A: If you really need a very quick estimation, you can do a work breakdown structure with every task 1-2 days or smaller, and then estimate every task by providing min and max estimated values. The sums of the min and max values specify an interval for the whole task. This gives your boss information about the risks, which is always very useful. You will obtain some interval, e.g. 12-15 days or 5-30 days - this is much more useful than a flat 16 days instead of the mentioned intervals. A useful book here is Steve McConnell's Software Estimation: Demystifying the Black Art.
A: Think of a number, double it and then double it again (i.e. four times the first number that pops into your head). When a boss says "how long to complete" a project, he means the time when it's complete and deployed live to the users. A programmer will (naturally) only think about the time needed to complete the programming (the time to physically type out the solution to the problem), so you typically underestimate. A rule of thumb would be: The 'first number' is the number of days you think it will take you to complete the task based on the scope of the task as just described. (But of course, you've not been told everything.) The first multiple is the extra time needed to recode after the first demo/prototype is given to the boss and he says "Good, great. But can you add..."
The second multiple is the time needed to recode the recode up to the correct standard for production. The third multiple is time for testing, documentation & deployment and all the other admin stuff you need to do to actually get the thing out and live. And the fourth multiple is your contingency for the above. This should give you a safe estimate. Of course, you should insist on a more thorough planning and estimation exercise.
A: Place finger in mouth, lick, wave in air and make up a number based on past experience. Then double it. Really, it's just experience that counts. You imagine what the task entails you doing, and you know how long it'll take you to do that. Double it for unanticipated items. This is also why you never ask junior programmers for such estimates.
A: I've recently been reading Agile Estimating and Planning, and can't recommend it enough.
A: If I am forced to provide estimates without enough time to properly investigate the subject at hand, I tend to massively overestimate. The fix is almost always more difficult than I think it is going to be. If I think something will take a day then I say two days. If I think something is going to take an hour then I say a day. What I am trying to illustrate with these comments is that for all but the most mundane tasks like spelling mistakes, even a small code change can explode into a full day. For anything I think might take a day or more I double the estimate. I know it can be tough to do this. Management wants small numbers. You want to look smart and capable in front of other developers. See also the Scotty Factor. Even if you have QA team members who will test your code, you have to remember that it is your job to test the code as well. Make sure to factor that into any estimate. That is something I have seen a lot of developers leave out of their estimating process.
A: Factor #1 is the unknowns, and you're right, you can't know them all.
However, you'll usually know some major questions no one can answer for you at that time. Factor #2 is the perceived difficulty and availability of tools and resources at hand. Result = roughly double your estimate.
A:
* Break down the task into parts and assign each part a time.
* Work in units of not less than half a day. This will prevent micro-scheduling.
* The big problem with project estimation is underestimation. If you know the task well and can almost see the code, then weight the task by 1. If there is some uncertainty or the task requires an unknown technology, then multiply it by a higher factor, depending on the level of uncertainty.
* Don't worry too much about the accuracy of each part. The errors tend to cancel out, as the only thing that really matters is the total duration.
There is always the good old standby of taking the optimistic time scale and multiplying it by pi. Works more often than it should!
A: I personally refuse this type of thing. But then I work for myself, so I don't answer to a boss. Just a client, but it's easier to make them understand it's hard to do on the spot.
A: At times like these, I remember the McKenzie Brothers' rule on converting to metric: "Double it, and add thirty." I generally come up with how quickly I originally think it will take to do a thing, then double it because I'm always underestimating, and then add 30 for testing, depending on the units I'm using.
A: I usually break the task down into a few pieces, but I don't estimate for these kinds of things in blocks of time smaller than a half day.
As long as there are at least 5 or 6 pieces to the feature after breakdown, I find that the errors balance themselves out for the most part (some tasks take under an hour, etc.). Of course, the minimum time division and number of pieces required for some level of comfort need to vary depending on the problem domain - at least 5 or 6 half-day chunks seems to be about right for the stuff I've been asked about lately, but that needs to be reviewed every few months. When I'm asked to estimate on behalf of someone else, I resist a bit more and follow a similar practice with a generous padding system ("double and add x", as mentioned above, is probably a good approximation).
A: To estimate in the right order of magnitude, you need:
* no introduction of new technology or framework for the wanted feature;
* to separate your estimate into pure development time and availability of developers (and customer and testers...);
* to get feedback on your earlier estimates;
* a feature of a size within your safe estimation range (not 2 times as large with 2 times more people);
* a stable development team;
* no project startup overhead;
* to only estimate for work you do yourself.
A: I believe the answer is always "six to eight weeks."
A: "Six to eight weeks" works really well; one other thing that works is basing it on the data model. Imagine the number of database tables (or similar) needed for the application, multiply that by the number of days you need to code the models, CRUD, UI, etc. for each table, and add between 30% and 50% of time on top of that.
{ "language": "en", "url": "https://stackoverflow.com/questions/109666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: IoC, Where do you put the container? I'm using Castle Windsor for a pet project I'm working on. I'm starting to notice that I need to call the IoC container in different places in my code to create new objects. This dependency on the container makes my code harder to maintain. There are two solutions I've used to solve this problem. I tried to create abstract factories as wrappers around the container that I could inject into parts of my application that need to create objects. This works but has some drawbacks, because Castle has a hard time injecting its own container as a dependency. So I have to do that by hand, which kind of defeats the whole purpose of the IoC container. I have used the main application controller class to wrap the IoC container and work as a central factory/repository. This was quite successful, but this class is getting too big and acts like a central god object; almost every other object has a reference to it. Both solutions sort of work, but both have their drawbacks. So I'm curious if other people have had the same problem and found better solutions. edit The problem isn't for object A that depends on object B. Here I usually just use constructor injection and everything works. Sometimes I have objects of type A that need to create a variable number of other objects of type B during their lifetime. I'm not sure how to do this. @Blair Conrad: The maintenance issues are not severe until now. I had some classes depend on the container object calling container.Resolve<>. And I don't want to have my code depending on what I think is infrastructure. I'm still trying things out, so I noticed I had to change a lot of code when switching from Ninject to Castle for this project. @flowers: Hmm. I like your first solution. It combines the things that work from both solutions I've tried. I think I was still thinking too much in objects and not enough in interfaces/responsibilities.
I tried purpose-built factories, but I would like to have them use the container behind the scenes to create the objects, and I haven't found out how I can DI the container into objects in a clean way. A: The main benefit of Dependency Injection, at least in my applications, is the ability to write code that is context agnostic. From that perspective, your second solution seems like it really subverts the benefit DI could be giving you. If the 'god object' exposes different interfaces to each class that references it, it might not be too evil. But if you went that far I don't see why you don't take it all the way to the hoop. Example: Your God object has a getFoo() method and a getBar() method. Object A needs a Foo, object B needs a Bar. If A just needs one Foo, Foo should be injected directly into A and A should not be aware of God at all. But if A needs to keep creating Foos, giving A a reference to God is pretty much inevitable. But you can protect yourself from the damage done by passing God around by narrowing the type of the reference to God. If you make God implement FooFactory and give A a reference to the FooFactory implemented by God, you can still write the code in A in a context-neutral way. That improves the opportunities for code reuse, and it increases your confidence that a change to God will not cause unexpected side effects. For example, you can be certain when removing getBar() from God that class A won't break. BUT... if you're going to have all those interfaces anyway, you're probably better off writing purpose-built factory classes and wiring all your objects together, factories included, within the container, rather than wrapping the container at all. The container can still configure the factories.
A: While I appreciate the explicitness of "purpose built factories" and even use them myself, this feels like a code smell in my own designs because the public interface (little "i") keeps changing with a new factory and/or a new GetX method for each implementation. After reading Jeremy Miller's It's time for IoC Container Detente, I suspect generics and injecting the container itself is the way to go. I would wrap Ninject, StructureMap, or Windsor in some kind of IServiceLocator interface like the one proposed in Jeremy's article. Then have a container factory that simply returns an IServiceLocator anywhere in your code, even in loops as you originally suggested.
IServiceLocator container = ContainerFactory.GetContainer();
while (keepLooping)
{
    IExample example = container.GetInstance<IExample>();
    keepLooping = example.DoWork();
}
Your container factory can always return the same instance, you can swap IoC frameworks, whatever.
A: Please, do not ever ever use static classes like IoC.Container.Resolve or ContainerFactory.GetContainer! This makes the code more complicated and harder to test, to maintain, to reuse and to read. Normally any single component or service has only one single point of injection - that's the constructor (with optional properties). And generally your component or service classes should not ever know about the existence of such a thing as a container. If your components really need to have dynamic resolution inside (i.e. resolving an exception handling policy or workflow based on its name), then I recommend considering lending IoC powers via highly specific providers.
A: I'd recommend checking out Nick Blumhardt's mini-series on this. http://blogs.msdn.com/nblumhardt/archive/2008/12/27/container-managed-application-design-prelude-where-does-the-container-belong.aspx
A: As a follow-up to @flipdoubt: if you do end up using a service locator type pattern you may want to check out http://www.codeplex.com/CommonServiceLocator.
It has some bindings available for several popular IoC frameworks (Windsor, StructureMap) that might be helpful. Good luck.
A: I would recommend in this case using strongly typed factories, as you mentioned, which get injected. Those factories can wrap the container but allow passing in additional context and doing extra handling. For example, the Create method on the OrderFactory could accept contextual parameters. Having static dependencies on a generic service locator is a bad idea, as you lose the intent and context. When an IoC container builds up an instance, it can provide the correct dependencies based on a host of factors such as profile, context, etc., as it has the big picture. CommonServiceLocator is not for this purpose, though one might be tempted to use it. The main purpose of CommonServiceLocator is for apps/frameworks that want to be cross-IoC-container compliant. However, apps that use it should ideally call the locator only once, to build up a hierarchy of components and their dependencies. It should never be directly called again. If we had some way to enforce that, we would have. In Prism (http://www.microsoft.com/compositewpf) we introduced an IContainerFacade for building up modules. That is a service locator, though a low-level one. In retrospect we probably should have created a ModuleFactory or something and used IContainerFacade to get a hold of it, and then used that to resolve modules vs. going to the facade directly. Hindsight is 20/20. It's low-level enough, though, that it doesn't really affect things. On CSL, we wrestled with the naming because it might lead to confusion. In the end we decided on CSL because technically the interface does not force you to do DI.
A: That's a really common problem. Windsor's built-in Typed Factory Facility will give you the benefits of using a factory, without the mentioned drawbacks.
{ "language": "en", "url": "https://stackoverflow.com/questions/109668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Python signal woes: SIGQUIT handler delays execution if SIGQUIT received during execution of another signal handler? The following program is very simple: it outputs a single dot each half a second. If it receives a SIGQUIT, it proceeds to output ten Qs. If it receives a SIGTSTP (Ctrl-Z), it outputs ten Zs. If it receives a SIGTSTP while printing Qs, it will print ten Zs after it's done with the ten Qs. This is a good thing. However, if it receives a SIGQUIT while printing Zs, it fails to print Qs after them. Instead, it prints them out only after I manually terminate execution via a KeyboardInterrupt. I want the Qs to be printed immediately after the Zs. This happens using Python 2.3. What am I doing wrong?

#!/usr/bin/python
from signal import *
from time import sleep
from sys import stdout

def write(text):
    stdout.write(text)
    stdout.flush()

def process_quit(signum, frame):
    for i in range(10):
        write("Q")
        sleep(0.5)

def process_tstp(signum, frame):
    for i in range(10):
        write("Z")
        sleep(0.5)

signal(SIGQUIT, process_quit)
signal(SIGTSTP, process_tstp)

while 1:
    write('.')
    sleep(0.5)

A: Your larger problem is blocking in signal handlers. This is usually discouraged since it can lead to strange timing conditions. But it's not quite the cause of your problem, since the timing condition you're vulnerable to exists because of your choice of signal handlers. Anyway, here's how to at least minimize the timing condition by only setting flags in your handlers and leaving the main while loop to do the actual work. The explanation for why your code is behaving strangely is described after the code.
#!/usr/bin/python
from signal import *
from time import sleep
from sys import stdout

print_Qs = 0
print_Zs = 0

def write(text):
    stdout.write(text)
    stdout.flush()

def process_quit(signum, frame):
    global print_Qs
    print_Qs = 10

def process_tstp(signum, frame):
    global print_Zs
    print_Zs = 10

signal(SIGQUIT, process_quit)
signal(SIGTSTP, process_tstp)

while 1:
    if print_Zs:
        print_Zs -= 1
        c = 'Z'
    elif print_Qs:
        print_Qs -= 1
        c = 'Q'
    else:
        c = '.'
    write(c)
    sleep(0.5)

Anyway, here's what's going on. SIGTSTP is more special than SIGQUIT. SIGTSTP masks the other signals from being delivered while its signal handler is running. When the kernel goes to deliver SIGQUIT and sees that SIGTSTP's handler is still running, it simply saves it for later. Once another signal comes through for delivery, such as SIGINT when you CTRL+C (aka KeyboardInterrupt), the kernel remembers that it never delivered SIGQUIT and delivers it now. You will notice if you change while 1: to for i in range(60): in the main loop and do your test case again, the program will exit without running the SIGTSTP handler, since exit doesn't re-trigger the kernel's signal delivery mechanism. Good luck!

A: On Python 2.5.2 on Linux 2.6.24, your code works exactly as you describe your desired results (if a signal is received while still processing a previous signal, the new signal is processed immediately after the first one is finished). On Python 2.4.4 on Linux 2.6.16, I see the problem behavior you describe. I don't know whether this is due to a change in Python or in the Linux kernel.
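The flag-only handler style in the first answer can be demonstrated end to end without a terminal; this is a hedged sketch (SIGUSR1 and the counter names are mine, not from the thread), runnable on a POSIX system.

```python
import os
import signal

# Sketch of the "set a flag, return immediately" handler style recommended
# above, exercised by delivering a signal to this very process. SIGUSR1 is
# used instead of SIGQUIT/SIGTSTP so the demo is safe non-interactively.
pending = {"usr1": 0}

def on_usr1(signum, frame):
    # Do no real work here; just record that work is needed.
    pending["usr1"] = 3

signal.signal(signal.SIGUSR1, on_usr1)

# Deliver the signal to ourselves; Python runs the handler in the main
# thread before the following statements make progress.
os.kill(os.getpid(), signal.SIGUSR1)

output = []
while pending["usr1"]:
    pending["usr1"] -= 1
    output.append("U")

print("".join(output))  # -> UUU
```

Because the handler returns immediately, no other signal can be delayed behind it, which is exactly the property the original blocking handlers lacked.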
{ "language": "en", "url": "https://stackoverflow.com/questions/109705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How do the likely/unlikely macros in the Linux kernel work and what is their benefit? I've been digging through some parts of the Linux kernel, and found calls like this:

if (unlikely(fd < 0))
{
    /* Do something */
}

or

if (likely(!err))
{
    /* Do something */
}

I've found the definition of them:

#define likely(x)       __builtin_expect((x),1)
#define unlikely(x)     __builtin_expect((x),0)

I know that they are for optimization, but how do they work? And how much performance/size decrease can be expected from using them? And is it worth the hassle (and losing the portability, probably), at least in bottleneck code (in userspace, of course)?

A: These are macros that give hints to the compiler about which way a branch may go. The macros expand to GCC specific extensions, if they're available. GCC uses these to optimize for branch prediction. For example, if you have something like the following

if (unlikely(x)) {
    dosomething();
}
return x;

Then it can restructure this code to be something more like:

if (!x) {
    return x;
}
dosomething();
return x;

The benefit of this is that when the processor takes a branch the first time, there is significant overhead, because it may have been speculatively loading and executing code further ahead. When it determines it will take the branch, then it has to invalidate that, and start at the branch target. Most modern processors now have some sort of branch prediction, but that only assists when you've been through the branch before, and the branch is still in the branch prediction cache. There are a number of other strategies that the compiler and processor can use in these scenarios. You can find more details on how branch predictors work at Wikipedia: http://en.wikipedia.org/wiki/Branch_predictor

A: long __builtin_expect(long EXP, long C);

This construct tells the compiler that the expression EXP most likely will have the value C. The return value is EXP. __builtin_expect is meant to be used in a conditional expression.
In almost all cases it will be used in the context of boolean expressions, in which case it is much more convenient to define two helper macros:

#define unlikely(expr) __builtin_expect(!!(expr), 0)
#define likely(expr) __builtin_expect(!!(expr), 1)

These macros can then be used as in

if (likely(a > 1))

Reference: https://www.akkadia.org/drepper/cpumemory.pdf

A: They are hints to the compiler to emit instructions that will cause branch prediction to favour the "likely" side of a jump instruction. This can be a big win: if the prediction is correct it means that the jump instruction is basically free and will take zero cycles. On the other hand, if the prediction is wrong, then it means the processor pipeline needs to be flushed and it can cost several cycles. So long as the prediction is correct most of the time, this will tend to be good for performance. Like all such performance optimisations you should only do it after extensive profiling to ensure the code really is in a bottleneck, and probably, given the micro nature, that it is being run in a tight loop. Generally the Linux developers are pretty experienced, so I would imagine they would have done that. They don't really care too much about portability as they only target gcc, and they have a very close idea of the assembly they want it to generate.

A: (general comment - other answers cover the details) There's no reason that you should lose portability by using them. You always have the option of creating a simple nil-effect "inline" or macro that will allow you to compile on other platforms with other compilers. You just won't get the benefit of the optimization if you're on other platforms.

A: In many Linux releases, you can find compiler.h in /usr/linux/; you can simply include it for use. And another opinion: unlikely() is more useful than likely(), because

if ( likely( ... ) ) {
    doSomething();
}

can be optimized as well by many compilers.
And by the way, if you want to observe the detailed behavior of the code, you can simply do the following:

gcc -c test.c
objdump -d test.o > obj.s

Then open obj.s, and you can find the answer.

A: As per the comment by Cody, this has nothing to do with Linux, but is a hint to the compiler. What happens will depend on the architecture and compiler version. This particular feature in Linux is somewhat mis-used in drivers. As osgx points out in semantics of hot attribute, any hot or cold function called within a block can automatically hint that the condition is likely or not. For instance, dump_stack() is marked cold, so this is redundant:

if(unlikely(err)) {
    printk("Driver error found. %d\n", err);
    dump_stack();
}

Future versions of gcc may selectively inline a function based on these hints. There have also been suggestions that it is not boolean, but a score, as in most likely, etc. Generally, it should be preferred to use some alternate mechanism like cold. There is no reason to use it in any place but hot paths. What a compiler will do on one architecture can be completely different on another.

A: Let's decompile to see what GCC 4.8 does with it.

Without __builtin_expect

#include "stdio.h"
#include "time.h"

int main() {
    /* Use time to prevent it from being optimized away.
    */
    int i = !time(NULL);
    if (i)
        printf("%d\n", i);
    puts("a");
    return 0;
}

Compile and decompile with GCC 4.8.2 x86_64 Linux:

gcc -c -O3 -std=gnu11 main.c
objdump -dr main.o

Output:

0000000000000000 <main>:
   0:   48 83 ec 08             sub    $0x8,%rsp
   4:   31 ff                   xor    %edi,%edi
   6:   e8 00 00 00 00          callq  b <main+0xb>
            7: R_X86_64_PC32    time-0x4
   b:   48 85 c0                test   %rax,%rax
   e:   75 14                   jne    24 <main+0x24>
  10:   ba 01 00 00 00          mov    $0x1,%edx
  15:   be 00 00 00 00          mov    $0x0,%esi
            16: R_X86_64_32     .rodata.str1.1
  1a:   bf 01 00 00 00          mov    $0x1,%edi
  1f:   e8 00 00 00 00          callq  24 <main+0x24>
            20: R_X86_64_PC32   __printf_chk-0x4
  24:   bf 00 00 00 00          mov    $0x0,%edi
            25: R_X86_64_32     .rodata.str1.1+0x4
  29:   e8 00 00 00 00          callq  2e <main+0x2e>
            2a: R_X86_64_PC32   puts-0x4
  2e:   31 c0                   xor    %eax,%eax
  30:   48 83 c4 08             add    $0x8,%rsp
  34:   c3                      retq

The instruction order in memory was unchanged: first the printf, then puts, and the retq return.

With __builtin_expect

Now replace if (i) with:

if (__builtin_expect(i, 0))

and we get:

0000000000000000 <main>:
   0:   48 83 ec 08             sub    $0x8,%rsp
   4:   31 ff                   xor    %edi,%edi
   6:   e8 00 00 00 00          callq  b <main+0xb>
            7: R_X86_64_PC32    time-0x4
   b:   48 85 c0                test   %rax,%rax
   e:   74 11                   je     21 <main+0x21>
  10:   bf 00 00 00 00          mov    $0x0,%edi
            11: R_X86_64_32     .rodata.str1.1+0x4
  15:   e8 00 00 00 00          callq  1a <main+0x1a>
            16: R_X86_64_PC32   puts-0x4
  1a:   31 c0                   xor    %eax,%eax
  1c:   48 83 c4 08             add    $0x8,%rsp
  20:   c3                      retq
  21:   ba 01 00 00 00          mov    $0x1,%edx
  26:   be 00 00 00 00          mov    $0x0,%esi
            27: R_X86_64_32     .rodata.str1.1
  2b:   bf 01 00 00 00          mov    $0x1,%edi
  30:   e8 00 00 00 00          callq  35 <main+0x35>
            31: R_X86_64_PC32   __printf_chk-0x4
  35:   eb d9                   jmp    10 <main+0x10>

The printf (compiled to __printf_chk) was moved to the very end of the function, after puts and the return, to improve branch prediction as mentioned by other answers. So it is basically the same as:

int main() {
    int i = !time(NULL);
    if (i)
        goto printf;
puts:
    puts("a");
    return 0;
printf:
    printf("%d\n", i);
    goto puts;
}

This optimization was not done with -O0.
But good luck writing an example that runs faster with __builtin_expect than without; CPUs are really smart these days. My naive attempts are here.

C++20 [[likely]] and [[unlikely]]

C++20 has standardized those built-ins: How to use C++20's likely/unlikely attribute in if-else statement. They will likely (a pun!) do the same thing.

A: They cause the compiler to emit the appropriate branch hints where the hardware supports them. This usually just means twiddling a few bits in the instruction opcode, so code size will not change. The CPU will start fetching instructions from the predicted location, and flush the pipeline and start over if that turns out to be wrong when the branch is reached; in the case where the hint is correct, this will make the branch much faster - precisely how much faster will depend on the hardware; and how much this affects the performance of the code will depend on what proportion of the time the hint is correct. For instance, on a PowerPC CPU an unhinted branch might take 16 cycles, a correctly hinted one 8, and an incorrectly hinted one 24. In innermost loops good hinting can make an enormous difference. Portability isn't really an issue - presumably the definition is in a per-platform header; you can simply define "likely" and "unlikely" to nothing for platforms that do not support static branch hints.

A: They're hints to the compiler to generate the hint prefixes on branches. On x86/x64, they take up one byte, so you'll get at most a one-byte increase for each branch. As for performance, it entirely depends on the application - in most cases, the branch predictor on the processor will ignore them these days. Edit: Forgot about one place they can actually really help. It can allow the compiler to reorder the control-flow graph to reduce the number of branches taken for the 'likely' path. This can have a marked improvement in loops where you're checking multiple exit cases.
A: These are GCC functions for the programmer to give a hint to the compiler about what the most likely branch condition will be in a given expression. This allows the compiler to build the branch instructions so that the most common case takes the fewest number of instructions to execute. How the branch instructions are built is dependent upon the processor architecture.

A: I am wondering why it's not defined like this:

#define likely(x) __builtin_expect(((x) != 0),1)
#define unlikely(x) __builtin_expect((x),0)

I mean, the __builtin_expect docs say that the compiler will expect the first parameter to have the value of the second one (and the first parameter is returned), but the original way the macro is defined above makes it hard to use this for things that might return non-zero values as "true" values (instead of the value 1). This might be buggy - but from my code you get the idea what direction I mean.
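The `!!(expr)` double negation that the helper macros above rely on - and that this last answer's `((x) != 0)` variant is trying to replicate - can be illustrated outside of C. A small Python sketch of the same normalization (names are mine):

```python
# Why likely()/unlikely() wrap the expression in !!(expr): __builtin_expect
# compares the expression's value against the expected constant, so a "true"
# value of, say, 4 would not match an expected value of 1. Double negation
# collapses every truthy value to exactly 1 and every falsy value to 0.

def normalize(expr_value):
    # Python analogue of C's !!(expr): two logical negations.
    return int(not (not expr_value))

print(normalize(4))    # -> 1
print(normalize(-7))   # -> 1
print(normalize(0))    # -> 0
```

In C terms, this is why `likely(ptr)` still matches the expected value 1 even though a non-null pointer is not numerically 1.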
{ "language": "en", "url": "https://stackoverflow.com/questions/109710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "433" }
Q: Multiple Constructors with complex logic In C#, if you have multiple constructors, you can do something like this:

public MyClass(Guid inputId, string inputName)
{
    // do something
}

public MyClass(Guid inputId): this(inputId, "foo") {}

The idea is of course code reuse. However, what is the best approach when there is a bit of complex logic needed? Say I want this constructor:

public MyClass(MyOtherClass inputObject)
{
    Guid inputId = inputObject.ID;
    MyThirdClass mc = inputObject.CreateHelper();
    string inputText = mc.Text;
    mc.Dispose();
    // Need to call the main Constructor now with inputId and inputText
}

The caveat here is that I need to create an object that has to be disposed after use. (Clarification: Not immediately, but I have to call Dispose() rather than waiting for Garbage Collection) However, I did not see a way to just call the base constructor again if I add some code inside my overloaded constructor. Is there a way to call the base constructor from within an overloaded one? Or is it possible to use

public MyClass(MyOtherClass inputObject): this(inputObject.ID, inputObject.CreateHelper().Text) {}

Would this automatically Dispose the generated Object from CreateHelper()? Edit: Thanks so far. Two problems: I do not control MyOtherClass and I do not have Extension Methods (only .NET 3.0...). I do control my own class though, and since I've just started writing it, I have no problem refactoring the constructors if there is a good approach.

A: I don't see any reason to believe that creating an object in the constructor will automatically dispose the object. Yes, your object will immediately go out of scope and be available for garbage collection, but that is certainly not the same as being disposed. There really isn't a great way to do exactly what you want to do, but the whole thing feels like it could benefit from some refactoring. That is usually the case in my own code when I find myself trying to bend over backwards to create a constructor overload.
If you have control over MyOtherClass, why not simplify the access to that text property by adding a getter method that handles the dispose:

public class MyOtherClass
{
    //...
    public string GetText()
    {
        using (var h = CreateHelper())
            return h.Text;
    }
}

If you don't control MyOtherClass, you could use an extension method:

public static class MyOtherClassExtensions
{
    public static string GetText(this MyOtherClass parent)
    {
        using(var helper = parent.CreateHelper())
        {
            return helper.Text;
        }
    }
}

Then, of course, in your constructor you can safely call

public MyClass(MyOtherClass inputObject): this(inputObject.ID, inputObject.GetText()) {}

A: The most common pattern used to solve this problem is to have an Initialize() method that your constructors call, but in the example you just gave, adding a static method, called like the code below, would do the trick.

public MyClass(MyOtherClass inputObject): this(inputObject.ID, GetHelperText(inputObject)) {}

private static string GetHelperText(MyOtherClass o)
{
    using (var helper = o.CreateHelper())
        return helper.Text;
}

A: The object would only be automatically disposed when garbage collection runs. If you want the dispose to run as soon as it goes out of scope, you should use a using block:

using (MyThirdClass mc = inputObject.CreateHelper())
{
    // do something with mc
}

This is really more of an issue with style and not really central to the question you had.
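The static-helper trick above is a general pattern: funnel every construction path into one main constructor and dispose the short-lived helper deterministically. Here is a hedged Python sketch of the same shape (every class and method name here is a hypothetical stand-in for MyClass, MyOtherClass, and CreateHelper, not a real API):

```python
import contextlib
import uuid

class Helper:
    """Hypothetical stand-in for the disposable CreateHelper() result."""
    def __init__(self, text):
        self.text = text
        self.disposed = False
    def close(self):
        # Analogous to Dispose(): release the resource deterministically.
        self.disposed = True

class OtherObject:
    """Hypothetical stand-in for MyOtherClass."""
    def __init__(self, id_, text):
        self.id = id_
        self._text = text
    def create_helper(self):
        return Helper(self._text)

class MyClass:
    def __init__(self, input_id, input_text):
        # Single "main constructor" that all paths funnel through.
        self.id = input_id
        self.text = input_text

    @classmethod
    def from_other(cls, other):
        # The helper is closed as soon as we've read what we need,
        # mirroring the static GetHelperText() trick above.
        with contextlib.closing(other.create_helper()) as helper:
            text = helper.text
        return cls(other.id, text)

other = OtherObject(uuid.uuid4(), "hello")
obj = MyClass.from_other(other)
print(obj.text)  # -> hello
```

The classmethod plays the role of the C# constructor overload, while `contextlib.closing` plays the role of the `using` block.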
{ "language": "en", "url": "https://stackoverflow.com/questions/109717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Unit-testing COM written in .NET Is there a way to unit-test COM-visible .NET assemblies from .NET (not via direct .NET assembly reference)? When I add a reference in my test project to the COM component written in .NET, it complains.

A: There's always vbunit. Unit testing VB code (VB Classic / VB6) and COM objects is what it does.
{ "language": "en", "url": "https://stackoverflow.com/questions/109719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to branch a virtual server in Hyper-V? We use Hyper-V extensively in our development environment. Each developer has a virtual server that they own, and then we have a bunch of build, test, R&D, and staging virtual servers. Is there any documented or best practice way to duplicate a virtual machine in Hyper-V? What I would really like to be able to do is to split a machine from a snapshot and have multiple virtual machines that both roll up underneath a common root machine's snapshot. I don't mind having to run some tools or having to rejoin the domain, I just want the ability to spawn new machines from an existing snapshot. Is this possible?

A: I think the real problem is the duplication of servers on the network - plus that evil kerberos-keys-getting-out-of-date issue that any offline copy of a virtual server can suffer. I'd suggest creating a Sysprepped image as the base and then creating multiple machines from that. I don't think branching servers would be very wise (at least not on the same network). Otherwise I'd just copy and paste the VHD to a new path and create a new server for each branch - keeping them in their own network space (and IP range).
{ "language": "en", "url": "https://stackoverflow.com/questions/109731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: salient concerns and questions to consider in designing a website content management system I'm designing my website and was curious, before I go rip off someone else's ideas, what are the salient considerations and questions one should ask in designing a database?

A: I think the most important question is, "Why are you doing it (the CMS, not the web site)?" This is very well-trod ground. Unless you have some really innovative ideas and unique insights into how you want it to be done ... and your question suggests that you probably don't ... you would probably be better-served by choosing an existing solution.

A: In 99% of cases, writing a CMS is simply busy work for re-inventing the wheel. There are so many open-source CMSs out there that I can almost guarantee you can find one that will suit your needs. That said, if you're still determined to write your own, I would only write exactly as much functionality as you need. Writing a CMS can be a very simple task. But it's one of those things that can become a convoluted nightmare of overlapping, unused features. Only write what you need, and you can add features as the need arises.

A: This is just off the top of my head:

* Content organization should be one of your primary concerns. How are you going to organize all the disparate pieces of content?
* Security, and what levels? Do you need to only secure the ability to edit any content? Certain pieces of content? How about viewing of content - does that need to be secured in any way?
* Versioning of content?
* Multilingual?
* What kind of content? Simple text? Images? Videos? Blog postings?

That should at least get you started in thinking in the right direction.
{ "language": "en", "url": "https://stackoverflow.com/questions/109736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Update a backend database on software update with Java With which tool / library is it possible to update an existing database structure? On an update of the software, it is also necessary to change the database. Because there can be different versions of the software, it should compare the current status with the target status of the database. It should:

* add table columns, filled with default values.
* delete table columns
* change the data type of columns, for example varchar(30) --> varchar(40)
* add / remove indexes
* add / alter / delete views
* update some data in the tables
* ...

It should support these DBMSs:

* MS SQL Server 2000 - 2008
* Oracle Server 8 - 11
* MySQL

Because our software setup and application run in Java, it must also run in Java. What can we use? Ideally it scans our development database and saves the result in an XML file. Then we can add some data modification SQL commands. Then it can be run on the customer side with the setup of the update.

A: Check out Liquibase. A database migrations tool, like dbmigrate, might also be worth a look.

A: Autopatch is what we are using. It works pretty well. It allows SQL patches, data patches, and Java patches, all applied to your SQL database.
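As a sketch of the core comparison the question asks for (current status vs. target status of the schema), here is a hypothetical Python toy that emits generic ALTER statements. It is not how Liquibase or Autopatch work internally; real tools also cover indexes, views, data changes, and per-DBMS SQL dialects.

```python
# Toy schema diff: compare two {table: {column: sql_type}} maps and emit
# the (dialect-agnostic, illustrative) statements needed to migrate.

def diff_schema(current, target):
    statements = []
    for table, cols in target.items():
        for col, sql_type in cols.items():
            have = current.get(table, {}).get(col)
            if have is None:
                statements.append(
                    "ALTER TABLE %s ADD %s %s" % (table, col, sql_type))
            elif have != sql_type:
                statements.append(
                    "ALTER TABLE %s ALTER COLUMN %s %s" % (table, col, sql_type))
        # Columns present now but absent from the target get dropped.
        for col in current.get(table, {}):
            if col not in cols:
                statements.append("ALTER TABLE %s DROP COLUMN %s" % (table, col))
    return statements

current = {"users": {"id": "int", "name": "varchar(30)", "legacy": "int"}}
target = {"users": {"id": "int", "name": "varchar(40)", "email": "varchar(100)"}}
for stmt in diff_schema(current, target):
    print(stmt)
```

This mirrors the varchar(30) --> varchar(40) case from the question; a production tool would additionally version-stamp the database so the same diff is never applied twice.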
{ "language": "en", "url": "https://stackoverflow.com/questions/109746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Creating a shiny Graphic/Gloss Effect I would like to programmatically create a gloss effect on an image, kinda like on the Apple-inspired design that the Web has adopted when it was updated to 2.0 Beta. Essentially this: example icons http://nhc.hcmuns.googlepages.com/web2_icons.jpg Now, I see two approaches here. I create one image which has an alpha channel with the gloss effect, and then I just combine the input and the gloss alpha icon to create this. The second approach: create the alpha gloss image in code and then merge it with the input graphic. I would prefer the second solution, but I'm not much of a graphics person and I don't know what the algorithm is called to create such effects. Can someone give me some pointers* for what I am actually looking for here? Is there a "gloss algorithm" that has a name? Or even a .NET implementation already? *No, not those type of pointers.

A: I can explain that effect in graphic terms.

* Create an image around 3* the size of the icon.
* Within this image, create a circle where (the height of the icon) < radius < 2*(the height of the icon).
* Fill the circle with an alpha blend/transparency (to white) of say 10%.
* Crop that circle image into a new image of equal size to your icons, where the center of the circle is centered outside the viewing area but upwards by 1/2 the height of the smaller image.

If you then superimpose this image onto the original icon, the effect should look approximately like the above icons. It should be doable with ImageMagick if you're keen on that, or you could go for one of the graphics APIs, depending on what language you want to use. From the steps above it should be straightforward to do programmatically.

A: Responding to the C# code ... Overall, good job on getting the imaging going. I've had to do similar things with some of my apps in the past. One piece of advice, however: all graphics objects in .NET are based on Windows GDI+ primitives.
This means these objects require correct disposal to clean up their non-memory resources, much like file handles or database connections. You'll want to tweak the code a bit to support that correctly. All of the GDI+ objects implement the IDisposable interface, making them functional with the using statement. Consider rewriting your code similarly to the following:

// Experiment with this value
int exposurePercentage = 40;

using (Image img = Image.FromFile("rss-icon.jpg"))
{
    using (Graphics g = Graphics.FromImage(img))
    {
        // First Number = Alpha, Experiment with this value.
        using (Pen p = new Pen(Color.FromArgb(75, 255, 255, 255)))
        {
            // Looks jaggy otherwise
            g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;

            int x, y;
            // 3 * Height looks best
            int diameter = img.Height * 3;
            double imgPercent = (double)img.Height / 100;
            x = 0 - img.Width;
            // How many percent of the image to expose
            y = (0 - diameter) + (int)(imgPercent * exposurePercentage);

            g.FillEllipse(p.Brush, x, y, diameter, diameter);

            pictureBox1.Image = img;
        }
    }
}

(Bear in mind, unlike most of my samples, I haven't had a chance to compile and test this ... It's meant more as a sample of structuring the code to ensure that there are no resource leaks, not as a finished product. There are probably better ways to abstract/structure that anyway. And strongly consider doing so - toss this in a graphics library DLL that you can just reference in any project which needs these capabilities in the future!)

A: Thank you, Devin! Here is my C# code implementing your suggestion. It works quite well. Turning this into a community-owned wiki post; if someone would like to add some code, feel free to edit this.
(Example uses different values for Alpha and exposure than the code below)

Image img = Image.FromFile("rss-icon.jpg");
pictureBox1.Image = AddCircularGloss(img, 30, 25, 255, 255, 255);

public static Image AddCircularGloss(Image inputImage, int exposurePercentage, int transparency, int fillColorR, int fillColorG, int fillColorB)
{
    Bitmap outputImage = new Bitmap(inputImage);
    using (Graphics g = Graphics.FromImage(outputImage))
    {
        using (Pen p = new Pen(Color.FromArgb(transparency, fillColorR, fillColorG, fillColorB)))
        {
            // Looks jaggy otherwise
            g.SmoothingMode = SmoothingMode.HighQuality;
            g.CompositingQuality = CompositingQuality.HighQuality;

            int x, y;
            // 3 * Height looks best
            int diameter = outputImage.Height * 3;
            double imgPercent = (double)outputImage.Height / 100;
            x = 0 - outputImage.Width;
            // How many percent of the image to expose
            y = (0 - diameter) + (int)(imgPercent * exposurePercentage);

            g.FillEllipse(p.Brush, x, y, diameter, diameter);
        }
    }
    return outputImage;
}

(Changed after John's suggestion. I cannot dispose the Bitmap though; this has to be done by the caller of the function.)
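The circle-placement arithmetic used in the answers above can be isolated into a few lines. This Python sketch (a hypothetical helper; integer division stands in for the double-then-cast rounding in the C#) just computes where the gloss ellipse's bounding box lands relative to the icon:

```python
# Sketch of the gloss-circle placement: a large low-alpha circle is
# positioned so that only its bottom sliver overlaps the top portion of
# the icon. The 3x diameter and the "exposure percentage" knob are the
# tunable values carried over from the discussion above.

def gloss_circle_bounds(icon_width, icon_height, exposure_percent=40):
    """Return (x, y, diameter) of the gloss circle's bounding box,
    with the origin at the icon's top-left corner."""
    diameter = icon_height * 3
    # Shift left so the circle straddles the icon horizontally.
    x = -icon_width
    # Push the circle up so only exposure_percent of the icon height
    # is covered by the circle's lower edge.
    y = -diameter + (icon_height * exposure_percent) // 100
    return x, y, diameter

x, y, d = gloss_circle_bounds(64, 64, exposure_percent=50)
print(x, y, d)  # -> -64 -160 192
```

Sanity check: at 0% exposure the circle's bottom edge (y + diameter) sits exactly at the icon's top, so nothing is glossed; raising the percentage slides the highlight down over the icon.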
{ "language": "en", "url": "https://stackoverflow.com/questions/109753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Rails ar_mailer fails to send emails I've just switched an application to use ar_mailer, and when I run ar_sendmail (after a long pause) I get the following error:

Unhandled exception 530 5.7.0 Must issue a STARTTLS command first. h7sm16260325nfh.4

I am using Gmail SMTP to send the emails and I haven't changed any of the ActionMailer::Base.smtp_settings, just installed ar_mailer. Versions: Rails: 2.1, ar_mailer: 1.3.1

A: Did some digging in the lib, and it seems that if you want to use TLS (as you do with Gmail) then it adds a new option to the ActionMailer::Base.smtp_settings of :tls (default of which is false) which you should set to true. The only thing the installation instructions mention regarding TLS is to remove any other smtp_tls files, but the one I had didn't require the tls option to work.

A: Maybe you are using Ruby version 1.8.7. You don't need smtp_tls anymore; you just need to add the enable_starttls_auto option:

ActionMailer::Base.smtp_settings = {
  :enable_starttls_auto => true,
  ...
  ...
}

A: What version of ar_mailer are you using? A Gmail-specific bug was fixed in 1.3.1, as shown here: http://rubyforge.org/forum/forum.php?forum_id=16364
{ "language": "en", "url": "https://stackoverflow.com/questions/109759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to setup Trac to run at / with Lighttpd on a subdomain I have the following config in my lighttpd.conf:

$HTTP["host"] == "trac.domain.tld" {
  server.document-root = "/usr/home/daniels/trac/htdocs/"
  fastcgi.server = (
    "/trac" => (
      "trac" => (
        "socket" => "/tmp/trac-fastcgi.sock",
        "bin-path" => "/usr/home/daniels/trac/cgi-bin/trac.fcgi",
        "check-local" => "disable",
        "bin-environment" => (
          "TRAC_ENV" => "/usr/home/daniels/trac"
        )
      )
    )
  )
}

And it runs at trac.domain.tld/trac. How can I make it run at trac.domain.tld/ so I will have trac.domain.tld/wiki, trac.domain.tld/timeline, etc. instead of trac.domain.tld/trac/wiki, etc...

A: Just change "/trac" to "/" in fastcgi.server

A: Look for "For top level setup: ..." here.
{ "language": "en", "url": "https://stackoverflow.com/questions/109761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Defining objects when using Jaxer I've been playing with Jaxer, and while the concept is very cool, I cannot figure out how to define objects that are available on both the client and the server. None of the examples I can find define objects at all. I'd like to be able to define an object and specify which methods will be available on the server, which will be available on the client, and which will be available on the client but executed on the server (server-proxy). Can this be done without using three separate <script> tags with different runat attributes? I would like to be able to define all of my methods in the same js file if possible, and it is not practical to define my objects inline in the html with three separate tags... Basically I'd like to be able to do this in one js file:

function Person(name) {
    this.name = name || 'default';
}
Person.runat = 'both';

Person.clientStaticMethod = function () {
    log('client static method');
}
Person.clientStaticMethod.runat = 'client';

Person.serverStaticMethod = function() {
    log('server static method');
}
Person.serverStaticMethod.runat = 'server';

Person.proxyStaticMethod = function() {
    log('proxy static method');
}
Person.proxyStaticMethod.runat = 'server-proxy';

Person.prototype.clientMethod = function() {
    log('client method');
};
Person.prototype.clientMethod.runat = 'client';

Person.prototype.serverMethod = function() {
    log('server method');
};
Person.prototype.serverMethod.runat = 'server';

Person.prototype.proxyMethod = function() {
    log('proxy method');
}
Person.prototype.proxyMethod.runat = 'server-proxy';

Also, assuming I was able to do that, how would I include it into html pages correctly?

A: I found a post on the Aptana forums (that no longer exists on the web) that states that only global functions can be proxied... Bummer.
However, I've been playing around, and you can control which methods will be available on the client and the server by placing your code in an include file and using <script> tags with runat attributes. For example, I can create this file named Person.js.inc:

<script runat="both">
function Person(name) {
    this.name = name || 'default';
}
</script>

<script runat="server">
Person.prototype.serverMethod = function() {
    return 'server method (' + this.name + ')';
};

Person.serverStaticMethod = function(person) {
    return 'server static method (' + person.name + ')';
}

// This is a proxied function. It will be available on the server and
// a proxy function will be set up on the client. Note that it must be
// declared globally.
function SavePerson(person) {
    return 'proxied method (' + person.name + ')';
}
SavePerson.proxy = true;
</script>

<script runat="client">
Person.prototype.clientMethod = function() {
    return 'client method (' + this.name + ')';
};

Person.clientStaticMethod = function (person) {
    return 'client static method (' + person.name + ')';
}
</script>

And I can include it on a page using:
Q: How do you serialize javascript objects with methods using JSON? I am looking for an enhancement to JSON that will also serialize methods. I have an object that acts as a collection of objects, and would like to serialize the methods of the collection object as well. So far I've located ClassyJSON. Any thoughts? A: Try to get away without serializing javascript code. That way lies a world of pain. Debugging will be much easier if code can only come from static files, not from a database. Instead, walk your JSON responses after you receive them and pass the appropriate data to the appropriate object constructors. If you absolutely must serialize them, calling toString() on a function will return its source. A: If you use the WCF framework to develop a RESTful web service, that is very easy to achieve. Simply create your data structure classes with your desired collection with DataContract, DataMember attributes. [DataContract] public class Foo { [DataMember] public string FooName {get;set;} [DataMember] public FooItem[] FooItems {get;set;} } [DataContract] public class FooItem { [DataMember] public string Name {get;set;} } A: I don't think serializing methods is ever a good idea. If you intend to run the code server-side, you open yourself to attacks. If you want to run it client-side, you are better off just using the local methods, possibly referencing the name of the method you are going to use in the serialized objects. I do believe though that "f = " + function() {} will yield you a string version of the source that you can eval: var test = "f = " + function() { alert("Hello"); }; eval(test) And for good JSON handling, I would recommend Prototype, which has great methods for serializing objects to JSON.
Q: How to logically organize recurring tasks? What's the best way to create recurring tasks? Should I create some special syntax and parse it, similar to cron jobs on Linux, or should I rather just use a cron job that runs every hour to create more of those recurring tasks with no end? Keep in mind that you can have endless recurring tasks and tasks with an end date. A: Quartz is an open source job scheduling system that uses cron expressions to control the periodicity of the job executions. A: My approach is always "minimum effort for maximum effect" (or best bang per buck). If it can be done with cron, why not use cron? I'd consider it wasted effort to re-implement cron just for the fun of it so, unless you really need features that cron doesn't have, stick with it.
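Neither answer shows what the "pre-generate instances with a periodic sweep" option actually looks like in code. As a rough sketch (in Ruby, with made-up names, not any particular scheduling library), a recurring task can be reduced to a start time, a fixed interval, and an optional end date; an hourly cron sweep would then ask for the occurrences in its window and materialize concrete task rows:

```ruby
# Hypothetical model of a recurring task: fixed interval, optional end date.
# An hourly sweep job could call occurrences_between(now, now + 3600)
# to create the concrete task instances that fall into that window.
RecurringTask = Struct.new(:name, :start_at, :interval_sec, :end_at) do
  def occurrences_between(from, to)
    to = [to, end_at].compact.min   # an endless task has end_at == nil
    return [] if to < start_at

    # Jump straight to the first occurrence at or after `from`.
    elapsed = [from - start_at, 0].max
    steps   = (elapsed / interval_sec).ceil
    t       = start_at + steps * interval_sec

    times = []
    while t <= to
      times << t
      t += interval_sec
    end
    times
  end
end

daily  = RecurringTask.new('backup', Time.utc(2008, 9, 1), 24 * 3600, nil)
window = daily.occurrences_between(Time.utc(2008, 9, 10), Time.utc(2008, 9, 12))
# => occurrences on Sep 10, Sep 11 and Sep 12
```

This keeps the schedule itself as data (the way a crontab line does) and only turns it into stored task instances one window at a time, which avoids having to pre-create an endless series of rows.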
Q: Uniq by object attribute in Ruby What's the most elegant way to select out objects in an array that are unique with respect to one or more attributes? These objects are stored in ActiveRecord so using AR's methods would be fine too. A: I had originally suggested using the select method on Array. To wit: [1, 2, 3, 4, 5, 6, 7].select{|e| e%2 == 0} gives us [2,4,6] back. But if you want the first such object, use detect. [1, 2, 3, 4, 5, 6, 7].detect{|e| e>3} gives us 4. I'm not sure what you're going for here, though. A: I like jmah's use of a Hash to enforce uniqueness. Here's a couple more ways to skin that cat: objs.inject({}) {|h,e| h[e.attr]=e; h}.values That's a nice 1-liner, but I suspect this might be a little faster: h = {} objs.each {|e| h[e.attr]=e} h.values A: Use Array#uniq with a block: objects.uniq {|obj| obj.attribute} Or a more concise approach: objects.uniq(&:attribute) A: The most elegant way I have found is a spin-off using Array#uniq with a block enumerable_collection.uniq(&:property) …it reads better too! A: If I understand your question correctly, I've tackled this problem using the quasi-hacky approach of comparing the Marshaled objects to determine if any attributes vary. The inject at the end of the following code would be an example: class Foo attr_accessor :foo, :bar, :baz def initialize(foo,bar,baz) @foo = foo @bar = bar @baz = baz end end objs = [Foo.new(1,2,3),Foo.new(1,2,3),Foo.new(2,3,4)] # find objects that are uniq with respect to attributes objs.inject([]) do |uniqs,obj| if uniqs.all? { |e| Marshal.dump(e) != Marshal.dump(obj) } uniqs << obj end uniqs end A: Use Array#uniq with a block: @photos = @photos.uniq { |p| p.album_id } A: Add the uniq_by method to Array in your project. It works by analogy with sort_by. So uniq_by is to uniq as sort_by is to sort. 
Usage: uniq_array = my_array.uniq_by {|obj| obj.id} The implementation: class Array def uniq_by(&blk) transforms = [] self.select do |el| should_keep = !transforms.include?(t=blk[el]) transforms << t should_keep end end end Note that it returns a new array rather than modifying your current one in place. We haven't written a uniq_by! method but it should be easy enough if you wanted to. EDIT: Tribalvibes points out that that implementation is O(n^2). Better would be something like (untested)... class Array def uniq_by(&blk) transforms = {} select do |el| t = blk[el] should_keep = !transforms[t] transforms[t] = true should_keep end end end A: You can use a hash, which contains only one value for each key: Hash[*recs.map{|ar| [ar[attr],ar]}.flatten].values A: Rails also has a #uniq_by method. Reference: Parameterized Array#uniq (i.e., uniq_by) A: Do it on the database level: YourModel.find(:all, :group => "status") A: You can use this trick to select unique by several attributes elements from array: @photos = @photos.uniq { |p| [p.album_id, p.author_id] } A: I like jmah and Head's answers. But do they preserve array order? They might in later versions of ruby since there have been some hash insertion-order-preserving requirements written into the language specification, but here's a similar solution that I like to use that preserves order regardless. h = Set.new objs.select{|el| h.add?(el.attr)} A: ActiveSupport implementation: def uniq_by hash, array = {}, [] each { |i| hash[yield(i)] ||= (array << i) } array end A: Now if you can sort on the attribute values this can be done: class A attr_accessor :val def initialize(v); self.val = v; end end objs = [1,2,6,3,7,7,8,2,8].map{|i| A.new(i)} objs.sort_by{|a| a.val}.inject([]) do |uniqs, a| uniqs << a if uniqs.empty? || a.val != uniqs.last.val uniqs end That's for a 1-attribute unique, but the same thing can be done w/ lexicographical sort ... 
A: If you are not married to arrays, we can also try eliminating duplicates through sets (require 'set' first): set = Set.new set << obj1 set << obj2 set.inspect Note that in the case of custom objects, we need to override the eql? and hash methods.
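To tie the answers together, here is a small self-contained example showing both the block form of Array#uniq (available from Ruby 1.9.2) and the hash-based technique used by the uniq_by implementations above. Note a subtle difference: uniq keeps the first element seen per key, while a plain hash overwrite keeps the last.

```ruby
Photo = Struct.new(:id, :album_id)

photos = [Photo.new(1, 10), Photo.new(2, 10), Photo.new(3, 20)]

# Ruby >= 1.9.2: Array#uniq takes a block and keeps the FIRST
# element for each distinct block value.
firsts = photos.uniq { |p| p.album_id }
firsts.map(&:id)          # => [1, 3]

# Hash-based equivalent (works without the block form of uniq):
# assignment overwrites, so this keeps the LAST element per key.
by_album = {}
photos.each { |p| by_album[p.album_id] = p }
by_album.values.map(&:id) # => [2, 3]
```

Which duplicate survives rarely matters for true duplicates, but it does matter when the objects differ in other attributes, so pick the variant deliberately.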
Q: Use of the Exception class in C# Errors that occur deep down in a data access layer or even higher up (say within ADO.NET operations, for example) rarely make much sense to an end user. Simply bubbling these errors up to a UI and displaying them will usually achieve nothing except frustration for an end user. I have recently employed a basic technique for reporting errors such as this whereby I catch the error and at least add some user friendly text so that at least the end user understands what failed. To do this I am catching an exception within each specific function (say for example a fetch function in a data access layer), then raising a new error with user friendly text about the function that has failed and probable cause, but then embedding the original exception in the new exception as the "inner exception" of that new exception. This can then occur at each layer if necessary, each consumer of the lower level function adding its own context to the error message, so that what reaches the UI is an increasingly user friendly error message. Once the error reaches the UI - if necessary - it can then iterate through the nested exceptions in order to display an error message that firstly tells the user which operation failed, but also provides a bit of technical information about what actually went wrong. e.g. "The list of customer names you requested could not be displayed." "Obtaining the list of customers you requested failed due to an error with the database." "There was an error connecting to the database when retrieving a list of customers" "Login failed for user xx" My question is this: Is this horribly inefficient (all those nested exceptions)? I suspect it is not best practice so what should I be doing to achieve the same thing - or should I in fact be trying to achieve something better? A: It is just slightly horrible. If you are showing an error to the end user, the user is supposed to be able to act on it.
In the "The list of customer names you requested could not be displayed." case, your user will just think "so what?" In all of these cases, just display a "something bad happened" message. You do not even need to catch these exceptions; when something goes bad, let some global method (like application_error) handle it and display a generic message. When you or your user can do something about the error, catch it and do the thing or notify the user. But you will want to log every error that you do not handle. By the way, displaying information about the errors occurring may lead to security vulnerabilities. The less the attackers know about your system, the less likely they will find ways to hack it (remember those messages like "Syntax error in sql statement: Select * From Users Where username='a'; drp database;--'..." expected: 'drop' instead of 'drp'. They do not make sites like these anymore). A: It is technically costly to throw new exceptions, however I won't make a big debate out of that since "costly" is relative - if you're throwing 100 such exceptions a minute, you will likely not see the cost; if you're throwing 1000 such exceptions a second, you very well may see a performance hit (hence, not really worth discussing here - performance is your call). I guess I have to ask why this approach is being used. Is it really true that you can add meaningful exception information at every level where an exception might be thrown and, if so, is it also true that the information will be: * *Something you actually want to share with your user? *Something your user will be able to interpret, understand and use? *Written in such a way that it will not interfere with later reuse of low-level components, the utility of which might not be known when they were written? I ask about sharing information with your user because, in your example, your artificial stack starts by informing the user there was a problem authenticating on the database.
For a potential hacker, that's a good piece of information that exposes something about what the operation was doing. As for handing back an entire custom exception stack, I don't think it's something that will be useful to most (honest) users. If I'm having trouble getting a list of customer names, for instance, is it going to help me (as a user) to know there was a problem authenticating with the database? Unless you're using integrated authentication, and each of your users has an account, and the ability to contact a system administrator to find out why their account lacks privileges, probably not. I would begin by first deciding if there is really a semantic difference between the Framework exception thrown and the exception message you'd like to provide to the user. If there is, then go ahead and use a custom exception at the lowest level ('login failed' in your example). The steps following that, up to the actual presentation of the exception, don't really require any custom exceptions. The exception you're interested in has already been generated (the login has failed) - continuing to wrap that message at every level of the call stack serves no real purpose other than exposing your call stack to your users. For those "middle" steps, assuming any try/catch blocks are in place, a simple 'log and throw' strategy would work fine. Really, though, this strategy has another potential flaw: it forces upon the developer the responsibility for maintaining the custom exception standard that's been implemented. Since you can't possibly know every permutation of call hierarchy when writing low-level types (their "clients" might not even have been written yet), it seems unlikely that all developers - or even one developer - would remember to wrap and customize any error condition in every code block. Instead of working from the bottom up, I typically worry about the display of thrown exceptions as late in the process as possible (i.e. 
as close to the "top" of the call stack as possible). Normally, I don't try to replace any messages in exceptions thrown at low levels of my applications - particularly since the usage of those low level members tends to get more and more abstract the deeper the call gets. I tend to catch and log exceptions in the business tier and lower, then deal with displaying them in a comprehensible manner in the presentation tier. Here are a couple of decent articles on exception handling best practices: http://www.codeproject.com/KB/architecture/exceptionbestpractices.aspx http://aspalliance.com/1119 Jeez this got wordy... apologies in advance. A: Yes, exceptions are expensive so there is a cost involved in catching and rethrowing or throwing a more useful exception. But wait a minute! If the framework code or the library you're using throws an exception then things are already coming unstuck. Do you have non-functional requirements for how quickly an error message is propagated following an exception? I doubt it. Is it really a big deal? Something unforeseen and 'exceptional' has happened. The main thing is to present sensible, helpful information to the user. I think you're on the right track with what you're doing. A: Of course it's horribly inefficient. But at the point that an exception occurs that is important enough to show to the end user, you should not care about that. A: Where I work we have only a few reasons to catch exceptions. We only do it when... * *We can do something about it - e.g. we know that this can happen sometimes and we can rectify it in code as it happens (very rare). *We want to know it happens and where (then we just rethrow the exception). *We want to add a friendly message, in which case we wrap the original exception in a new exception derived from ApplicationException and add a friendly message to that and then let it bubble up unchanged from that point. In your example we'd probably just display "Logon error occurred."
and leave it at that while logging the real error and providing a way for the user to drill into the exception if they wanted to. (Perhaps a button on the error form). *We want to suppress the exception completely and keep going. Needless to say we only do this for expected exception types and only when there is no other way to detect the condition that generates the exception. A: Generally when you're dealing with exceptions, performance and efficiency are the least of your worries. You should be more worried about doing something to help the user recover from the problem. If there was a problem writing a certain record to the database, either roll the changes back or at least dump the row information so the user doesn't lose it.
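The wrap-and-rethrow scheme in the question is language-agnostic, so here is an illustrative sketch of it in Ruby (where raising inside a rescue automatically records the lower-level error as Exception#cause). The message_chain helper is the UI-side "iterate through the nested exceptions" step, and the messages are the ones from the question:

```ruby
class DataAccessError < StandardError; end
class ServiceError    < StandardError; end

def fetch_customers
  raise 'Login failed for user xx'          # lowest-level failure
rescue
  # Re-raising inside a rescue sets Exception#cause automatically.
  raise DataAccessError,
        'There was an error connecting to the database when retrieving a list of customers'
end

def customer_names
  fetch_customers
rescue DataAccessError
  raise ServiceError,
        'The list of customer names you requested could not be displayed.'
end

# UI side: walk the nested exceptions, friendliest message first.
def message_chain(error)
  chain = []
  while error
    chain << error.message
    error = error.cause
  end
  chain
end

chain =
  begin
    customer_names
  rescue ServiceError => e
    message_chain(e)
  end
# chain.first is the user-facing text; chain.last is 'Login failed for user xx'
```

The UI can then show chain.first by default and reveal the rest behind a "details" control, matching the advice in the answers above.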
Q: What is the current state of art in Linux virtualization technology? What VM technologies exist for Linux, their pros and cons, and which is recommended for which application? Since this kind of question can be asked for X other than "VM technologies for Linux", and since the answer changes with progress, I suggest to define a template for this kind of pages. Those pages will have the tag 'stateoftheart' and they will be revisited each month and each month there will be up-to-date list of technologies, up-to-date reviews and up-to-date recommendations. A: This is a job for ... Wikipedia! * *Types of Virtualization *Platform Virtualization *Comparison of Virtual Machines Now that the obvious stuff is out of the way... Linux runs fine as a guest on every VM host I've used, so I'm going to assume that you're referring to Linux as the host operating system. I'm also going to assume x86 or amd64 hardware. Platform virtualization breaks down into two major forms: Desktop virtualization and Server virtualization. Both types will allow you to load and run multiple OS instances as guests that virtualize their I/O through the host OS. Desktop virtualization concentrates on providing a highly interactive console experience for each of the guest VMs, while Server virtualization concentrates on maximizing computing performance, generally while sacrificing console services and more exotic devices (Sound cards, USB, etc.) Server virtualization implementations typically include either RDP or VNC for remote access to a virtual console. On Linux, your choices for Desktop Virtualization include: * *VMware Workstation -- it's commercial, somewhat expensive, mature, and provides the most hardware, device, and guest OS support of any solution. *VMware Player -- it's commercial (freeware) and only supports VMs that were created elsewhere. Available with Ubuntu. *Parallels Workstation -- it's commercial, somewhat expensive, and not up to par with VMware. Doesn't support 64-bit guests. 
*VirtualBox -- available in commercial (freeware) and community versions (GPL). Fedora's preferred solution. On Linux, your choices for Server Virtualization include: * *VMware Server -- it's commercial (freeware), mature, and provides the most hardware, device, and guest OS support of any solution. Available with Ubuntu. *Xen -- it's open source. A para-virtualization solution, it has only recently added hardware-virtualization, so Windows guest support depends upon specific CPU support. *Virtual Iron -- a commercialized version of Xen that adds native virtualization. *KVM -- it's open source. It depends upon QEMU for the last mile. Ubuntu's preferred solution. *Linux-VServer -- it's open source. It provides virtual jails based on the host OS kernel, so no Windows guests. For myself, I stick with VMware Workstation (7+ years) and VMware Server for my Linux-hosted virtualization needs. At work, it's VMware Workstation (on Windows), VMware Server (on Windows), and VMware ESX (on bare metal). I'll probably have another look at Xen, KVM, and VirtualBox at some point, but for right now compatibility between work and home is paramount. A: 2008 Oct To be filled in at October to reflect the market status then. 2008 Sept Products/services/technologies currently existing * *VMware *Xen *VirtualBox *VServer *??? Comparisons ??? Recommendations for particular application areas * *Home multi-boot replacement *Small business which has MS-Windows legacy applications *Datacenter of multinational corporation *??? A: W Craig Trader answer is great, but just to add there is also User-mode Linux (UML) which has been around for a while - it has been in the mainline kernel tree since 2.6.0 . Note that I haven't used it myself. Ubuntu prefers KVM, and I believe Red Hat is moving to it over Xen now as well. Both KVM and Xen can be managed by libvirt, optionally through the virtual machine manager GUI. The virtual machine manager can manage remote instances through ssh connections. 
In addition, a good comparison can be found here (pdf). Lots of performance tests done. The short version is that xen and linux-vserver were generally the best on performance grounds.
Q: Database Choice for a C# 2008 front end I was wondering what and why you would choose to be able to make a database that can support no more than 100 users with no more than 10 using it at once with a Visual Studio 2008 C# Windows Form front end to access it by. I have to access the database over a network connection, not just on the local machine. I also need to define where the database is found at run-time in the code as opposed to the "Data Source" view in Visual Studio. If my question needs reframing or is not understood, let me know and I will adjust. Part of my problem is I am not sure even how to ask the right question, much less what the answer is. A: If it is not for commercial purposes you can try SQL Server 2008 Express. It can integrate nicely with Visual Studio 2008 for development and has support for LINQ, Entity Data Model and ADO.NET Entity Framework to make it easy to create next generation data-enabled applications. http://www.microsoft.com/express/sql/default.aspx You can also store your connection strings in the application configuration file and retrieve them programmatically for setting up the database connection. http://www.codeguru.com/columns/DotNet/article.php/c7987/ A: I would probably go with SQL Server Express; it's free and works well with .NET. Assuming your schema is not changing at runtime you can probably still use the design time data source features in Visual Studio. The connection information is stored in the app.config file which you can update after the app is deployed to point to a different database. You can also develop a class that gets the connection info from somewhere else as well and just use that when you need to open a database connection.
A: I know using mssql you can pick between different connection strings for all of your db calls, just do something like Command.Connection = GetMyConnectionWithWhateverLogicINeed(); A: I'd have a look at Sql Server Workgroup Edition http://www.microsoft.com/sql/editions/workgroup/ Express edition used to have some limiting features for more than about 5 users and it is not supplied with any management tools which is a bit disheartening. A: I'm not sure I totally get what you are asking, Matt, but I can tell you that I developed a series of apps written with VS 2008 and we used a MySQL DB for it. While I'm definitely not a DB guru at this point, I've not had many issues with using MySQL. Perhaps if you rephrase your question, we can provide better answers. A: SQLite for sure. ADO 2.0 Provider
Q: Programmatically determine video file format? Ok, I get the basics of video format - there are some container formats and then you have core video/audio formats. I would like to write a web based application that determines what video/audio codec a file is using. How best can I programmatically determine a video codec? Would it be best to use a standard library via system calls and parse its output? (eg ffmpeg, transcode, etc?) A: mplayer -identify will do the trick. Just calling ffmpeg on a file will also work--it will automatically print a set of info at the start about the input file regardless of what you're telling ffmpeg to actually do. Of course, if you want to do it from your program without an exec call to an external program, you can just include the avcodec libraries and run its own identify routine directly. While you could implement your own detection, it will surely be inferior to existing routines given the absolutely enormous number of formats that libav* supports. And it would be a rather silly case of reinventing the wheel. Linux's "file" command can also do the trick, but the amount of data it prints out depends on the video format. For example, on AVI it gives all sorts of data about resolution, FOURCC, fps, etc, while for an MKV file it just says "Matroska data," telling you nothing about the internals, or even the video and audio formats used. A: I have used FFMPEG in a perl script to achieve this (ffmpeg prints its info banner to stderr, hence the redirect): $info = `ffmpeg -i $path$file 2>&1`; @fields = split(/\n/, $info); And just find out what items in @fields you need to extract. A: You need to start further down the line. You need to know the container format and how it specifies the codec. So I'd start with a program that identifies the container format (not just from the extension, go into the header and determine the real container).
Then figure out which containers your program will support, and put in the functions required to parse the meta data stored in the container, which will include the codecs. -Adam A: You really want a big database of binary identifying markers to look for near the start of the file. Luckily, your question is tagged "Linux", and such a database already exists there; file(1) will do the job for you. A: I would recommend using ffprobe and forcing the output format to JSON. It would be so much easier to parse. Simplest example: $meta = json_decode(join(' ', `ffprobe -v quiet -print_format json -show_format -show_streams /path/to/file 2>&1`)); Be warned that in the case of a corrupted file you will get null as the result and a warning, depending on your error reporting settings. Complete example with proper error handling: $file = '/path/to/file'; $cmd = 'ffprobe -v quiet -print_format json -show_format -show_streams ' . escapeshellarg($file).' 2>&1'; exec($cmd, $output, $code); if ($code != 0) { throw new ErrorException("ffprobe returned non-zero code: " . join(' ', $output), $code); } $joinedOutput = join(' ', $output); $parsedOutput = json_decode($joinedOutput); if (null === $parsedOutput) { throw new ErrorException("Unable to parse ffprobe output", $code); } // here we can use $parsedOutput as a simple stdClass A: You can use mediainfo: sudo apt-get install mediainfo If you just want to get the video/audio codec, you can do the following: $videoCodec = `mediainfo --Inform="Video;%Format%" $filename`; $audioCodec = `mediainfo --Inform="Audio;%Format%" $filename`; In case you want to capture more info, you can parse the XML output returned by mediainfo. Here is a sample function: function getCodecInfo($inputFile) { $cmdLine = 'mediainfo --Output=XML ' .
escapeshellarg($inputFile); exec($cmdLine, $output, $retcode); if($retcode != 0) return null; try { $xml = new SimpleXMLElement(join("\n",$output)); $videoCodec = $xml->xpath('//track[@type="Video"]/Format'); $audioCodec = $xml->xpath('//track[@type="Audio"]/Format'); } catch(Exception $e) { return null; } if(empty($videoCodec[0]) || empty($audioCodec[0])) return null; return array( 'videoCodec' => (string)$videoCodec[0], 'audioCodec' => (string)$audioCodec[0], ); }
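The ffprobe answer above parses the JSON in PHP; the same idea in Ruby is shown below. To stay self-contained, this sketch parses a canned string that mimics the general shape of ffprobe's -print_format json output (a "streams" array with codec_type/codec_name entries plus a "format" object); in real use you would capture the command's output instead, e.g. raw = `ffprobe -v quiet -print_format json -show_format -show_streams #{file}`.

```ruby
require 'json'

# Canned sample standing in for real ffprobe output (same field names).
raw = <<-JSON
  { "streams": [
      { "codec_type": "video", "codec_name": "h264" },
      { "codec_type": "audio", "codec_name": "aac" } ],
    "format": { "format_name": "mov,mp4,m4a,3gp,3g2,mj2" } }
JSON

probe  = JSON.parse(raw)
codecs = {}
probe['streams'].each do |s|
  # Keep the first stream of each type; files can carry several.
  codecs[s['codec_type']] ||= s['codec_name']
end
container = probe['format']['format_name']
# codecs    => {"video"=>"h264", "audio"=>"aac"}
# container => "mov,mp4,m4a,3gp,3g2,mj2"
```

Parsing structured JSON like this is far more robust than scraping ffmpeg's human-readable banner with regexes, since the field names are stable across versions.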
Q: Should I bind directly to objects returned from a webservice? Should I bind directly to objects returned from a webservice or should I have client-side objects that I bind to my grid controls? For instance if I have a service that returns object Car should I have a client side Car object that I populate with values from the webservice Car object? What is considered best-practice? In C# do I need to mark my classes as serializable or do something special to them? A: This is a good question, which follows the same lines as two questions I have asked myself: * *Large, Complex Objects as a Web Service Result. *ASP.NET Web Service Results, Proxy Classes and Type Conversion. Both of these may be a worthwhile read for you. Here's my two bits: * *Try to keep the return types of your Web Services to primitives where possible. This not only helps reduce the size of the messages, but also reduces complexity at the receiving end. *If you do need to return complex objects, return them as a raw XML string (I'll explain below). What I then do is create a separate class which represents the object and handles its XML. I ensure the class can be instantiated from and serialized to XML easily. Then both projects (the web service and the client) can reference the DLL with the concrete object in, but there is none of the annoying coupling with the proxy class. This coupling causes issues if you have shared code. For example (using your Car class): * *Web Service (CarFactory) method BuyCar(string make, string model) is a factory method that returns a car. *You also write a Mechanic class that works on Car objects to repair them; this is developed without knowledge of the Web Service. *You then write a Garage class for your application. You add a web reference to the CarFactory service to get your cars, and then add some Mechanics to your garage and then crack your knuckles and get ready to get some cars from the factory for them to work on.
*Then it all falls over when you get the result of CarFactory.BuyCar("Audi", "R8") and then tell your Mechanic.Inspect(myAudi): the compiler moans, because the Car is actually of type CarFactory.Car, not the original Car type, yes? So, using the method I suggested: * *Create your Car class in its own DLL. Add methods to instantiate it and serialize it from/to XML respectively. *Create your CarFactory web service, add a reference to the DLL, build your cars as before, but instead of returning the object, return the XML. *Create your Garage adding a reference to the Mechanic, Car DLL and the CarFactory web service. Call your BuyCar method and now it returns a string; you then pass this string to the Car class, which re-builds its object model. The Mechanics can happily work on these Cars too because everything is singing from the same hymn sheet (or DLL?) :) *One major benefit is that if the object changes in its design, all you need to do is update the DLL, and the web service and client apps are completely decoupled from the process. Note: Often it can be useful to then create a Facade layer to work with the web services and auto-generate objects from the XML results. I hope that makes sense; if not, then please shout and I will clarify.
A: One thing you can do is to create client classes corresponding to the web service data contracts with any additional functionality that you want and set the web service reference to reuse existing types. Then there is no reason to create an additional wrapper class to bind to. A: If you bind directly to the Web service types, you're introducing a coupling. Should the Web service change in future, this may have undesired side-effects that mean lots of code changes. For example, what if you're using .asmx Web services today, then shift to WCF tomorrow? That might mean quite a few changes through your code if you've used types that WCF won't serialize. It's often better in the long run to create specific client-side objects and then translate to and from Web service data contract types. It may seem a lot of work, but this is often repaid greatly when it's time to refactor, as your changes are localised in one place. A: If you are the owner of both the web service and the client. And you need the parameters of the web service calls to be complex classes which contain not only data but also behavior (actual coded logic) then you are in a bit of a pickle when developing these web services using web service frame works. As suggested in the answer by Rob Cooper you can use pure xml as web service parameters and xml serialization, but there is a cleaner solution. If you are using Visual Studio 2005 (probably applies the same for 2008), You can customize the way VS creates you proxy as described in this article: Customizing generated Web Service proxies in Visual Studio 2005 This way you can tell VS to use your own classes instead of generating a proxy class. Well when I think of it, it's pretty much same solution as proposed by Rob Cooper, with the little twist, that you wont be writing a Facade layer your self but will be using VS itself as this layer.
{ "language": "en", "url": "https://stackoverflow.com/questions/109825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Not getting emails from ExceptionNotifier I followed this tutorial on configuring the Rails plugin ExceptionNotifier. I know that I have ActionMailer configured correctly because I am getting mail from other forms. I even have local_addresses.clear set so that it should be delivering mail no matter what. I am using Apache with a mongrel_cluster running in the backend. What am I missing? A: You're using the SVN version of the plugin, which is probably unmaintained. The latest version can be found here. The second thing you can do is check the production log. Mailings get written to the log, so you'll see if Rails ever even tried to send it. If there are no entries, that means things are silently failing, which probably happens because -- for some reason -- exceptions are not caught properly. A: Check your production log; exceptions can be thrown inside the exception_notifier plugin itself, which prevents it from sending mail. A: If you added your ExceptionNotifier configuration information (your email address, etc.) into config/environment.rb, did you add it within the Rails::Initializer block or did you add it at the end of the file? The tutorial you linked to doesn't specify where in the environment file to put the configuration information. The tutorial I followed (which might have been this one) does specify to put it outside the block. Which things go inside that block and which outside is, frankly, still a little mysterious to me. But I thought this might answer your specific question.
{ "language": "en", "url": "https://stackoverflow.com/questions/109830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Object Memory Analysis in .NET Is there a tool or a way to find out how much memory is consumed by each DLL or object in .NET? The more detailed the analysis, the better. Thanks. A: You could try CLR Profiler, which is free, or maybe the trial version of ANTS Profiler. A: .NET Memory Profiler should allow you to do that: http://memprofiler.com/ A: There are some decent memory profilers; you can look at this question: What Are Some Good .NET Profilers? A: I always liked the dotTrace profiler from JetBrains (as well as ReSharper)
{ "language": "en", "url": "https://stackoverflow.com/questions/109836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Auto-format structured data (phone, date) using jQuery plugin (or failing that vanilla JavaScript) I like jQuery and I was wondering if anyone has used a good plugin or (non-jQuery) JavaScript library that allows for auto-formatting of structured fields like phone numbers or dates. I know of the jquery-ui-datepicker plugin, but that is not what I am looking for here. You may type in a phone number as 123 which then becomes (123), additional numbers will be formatted as (123) 456 7890 Ext. 123456. If you press delete the auto-formatting stuff disappears automatically, and repositioning of the cursor, say, after (123) and pressing delete will remove the 3 and make the rest (124) 567 8901 Ext. 23456. The ones that I have played with appear unreliable. A: Does the Masked Input plugin do what you need, or is that the one you have already found to be unreliable? A: Allan, I do believe your best bet would be to use regular expressions inside of two separate formatting methods in order to achieve the desired results. This will be rather straightforward for phone numbers, and I'll post a code example if one isn't posted by the time I sit back and have 10 minutes straight to write something up. Perhaps for the date field, you can use something like the jQuery UI Datepicker instead? http://marcgrabanski.com/pages/code/jquery-ui-datepicker HTH, /sf
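In the spirit of the code example promised above, the core of such a formatter is simple: strip the input down to raw digits, then re-derive the whole display string on every keystroke. Here is that logic as a Python sketch; the grouping rules are my assumption based on the format shown in the question, and a jQuery version would just run the same transform on each keyup:

```python
import re

def format_phone(raw):
    """Re-derive the display string from whatever the user has typed so far."""
    d = re.sub(r"\D", "", raw)  # keep digits only
    area, exchange, line, ext = d[:3], d[3:6], d[6:10], d[10:]
    out = ""
    if area:
        out = "(" + area + ")"
    if exchange:
        out += " " + exchange
    if line:
        out += " " + line
    if ext:
        out += " Ext. " + ext
    return out
```

Because the display is always recomputed from the raw digits, deleting one digit reflows everything after it, which is the behavior described in the question.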
{ "language": "en", "url": "https://stackoverflow.com/questions/109854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Where can I find a complete reference of the ncurses C API? A: The documentation that comes with the library actually isn't that bad. http://tldp.org/HOWTO/NCURSES-Programming-HOWTO/ A: You can buy this book; I have it and recommend it: John Strang, Programming with curses, O'Reilly, ISBN 0-937175-02-1. The best online source of information: http://invisible-island.net/ncurses/ncurses-intro.html I learned a lot about ncurses reading the minicom source code and the iptraf Linux network monitor. A: I found this question a while back, but none of the answers so far answer the original question. The complete freely available API reference is available through the . . . NCURSES MAN PAGES A: I've found the book "Programmer's Guide to nCurses" (Dan Gookin, published by Wiley) invaluable, as it includes both a tutorial and an impressive reference to the API. There's also the O'Reilly Nutshell guide "Programming with Curses", which isn't too bad.
{ "language": "en", "url": "https://stackoverflow.com/questions/109855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: PHP with AWASP framework Who here is using WASP (http://wasp.sourceforge.net/content/) in real world applications? What impressions do you have? Good? Bad? If you can provide any input, how good is it compared with Rails, for example? I'm really looking for MVC frameworks for PHP. Update: This comparison I found is good. A: I downloaded it a while ago and tried it out, but as the documentation is pretty terrible at the moment (consisting of some auto-generated 'documentation' that was useless) I gave up pretty quickly. I think one of the most important things to have in a framework is clear, thorough documentation - if you have to spend time digging through the code of the framework to find out if a class you want exists, the point of using a framework is lost. WASP does not seem to be ready for production environments just yet, as even their website admits that it's not ready for enterprise applications. If you're looking for a PHP framework I would recommend CodeIgniter, which has excellent documentation and a helpful community, or Zend, which is pretty mature. A: CakePHP is a great framework with great documentation. Symfony lost me with all the configuration; at the time I was new to both frameworks, and CakePHP stood out as being the best for me, and I was able to pick it up very quickly. A: Hey Victor, that comparison is pretty badly out of date. It was done about 1.5 years ago and, at least in the case of the Zend Framework that I use regularly, things have changed greatly since then. I'd say that comparison is so old as to be useless. A: Check out symfony, too. Free software, top-notch documentation. A: QCodo is great - amazing code generation, full MVC support. The strongest object-relational mapping I've seen; their scaffolding model is so much stronger than CakePHP's and Zend's... Plus, it's beautifully extensible with community controls. I've been using it for large projects for the last two years, and it's great! A: Have you tried CodeIgniter?
I tested CakePHP but it's too much a la Rails in style and I didn't like it. CodeIgniter gives you more freedom to do whatever you want.
{ "language": "en", "url": "https://stackoverflow.com/questions/109858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What does "DateTime?" mean in C#? I am reading a .NET book, and in one of the code examples there is a class definition with this field: private DateTime? startdate What does DateTime? mean? A: It basically gives you an extra state for primitives. It can be a value, or it can be null. It can be useful in situations where a value does not need to be assigned. So rather than using, for example, DateTime.MinValue or MaxValue, you can assign it null to represent no value. A: It's a nullable DateTime. ? after a primitive type/structure indicates that it is the nullable version. DateTime is a structure that can never be null. From MSDN: The DateTime value type represents dates and times with values ranging from 12:00:00 midnight, January 1, 0001 Anno Domini, or A.D. (also known as Common Era, or C.E.) through 11:59:59 P.M., December 31, 9999 A.D. (C.E.) DateTime? can be null however. A: Since DateTime is a struct, not a class, you get a DateTime object, not a reference, when you declare a field or variable of that type. And, in the same way as an int cannot be null, so this DateTime object can never be null, because it's not a reference. Adding the question mark turns it into a nullable type, which means that either it is a DateTime object, or it is null. DateTime? is syntactic sugar for Nullable<DateTime>, where Nullable is itself a struct. A: A ? as a suffix for a value type allows for null assignments that would be otherwise impossible. http://msdn.microsoft.com/en-us/library/b3h38hb0.aspx Represents an object whose underlying type is a value type that can also be assigned a null reference. This means that you can write something like this: DateTime? a = null; if (!a.HasValue) { a = DateTime.Now; if (a.HasValue) { Console.WriteLine(a.Value); } } DateTime? is syntactically equivalent to Nullable<DateTime>. A: It's equivalent to Nullable<DateTime>. You can append "?" to any primitive type or struct.
A: As we know, DateTime is a struct, which means DateTime is a value type. When you declare a field or variable of that type, you get a DateTime object, not a reference, because DateTime is not a class, and you cannot initialize it with null because value types don't accept null, in the same way that an int cannot be null. So a DateTime object can never be null, because it's not a reference. But sometimes we need a nullable variable or field of a value type, and in that case we append a question mark to make it a nullable type so that it allows null. For example: DateTime? date = null; int? intvalue = null; In the above code, the variable date is either an object of DateTime or it is null. The same goes for intvalue. A: public class ReportsMapper : CommonMapper { public DateTime? cb_Bill_From_Date { get; set; } public DateTime? cb_Bill_To_Date { get; set; } public DateTime? tff_Bill_From_Date { get; set; } public DateTime? tff_Bill_To_Date { get; set; } } If you try to assign null to a plain DateTime, you get an error stating that a DateTime object can never be null, so you need to add ? after DateTime, which says that the DateTime is nullable too. Hope this helps!
{ "language": "en", "url": "https://stackoverflow.com/questions/109859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "77" }
Q: Converting PostgreSQL database to MySQL I've seen questions for doing the reverse, but I have an 800MB PostgreSQL database that needs to be converted to MySQL. I'm assuming this is possible (all things are possible!), and I'd like to know the most efficient way of going about this and any common mistakes there are to look out for. I have next to no experience with Postgres. Any links to guides on this would be helpful also! Thanks. A: One piece of advice is to start with a current version of MySQL, otherwise you will not have sub-queries, stored procedures or views. The other obvious difference is auto-increment fields. Check out: pg2mysql /Allan A: You should not convert to a new database engine based solely on the fact that you do not know the old one. These databases are very different - MySQL is speed and simplicity, Postgres is robustness and concurrency. It will be easier for you to learn Postgres; it is not that hard. A: pg_dump can do the dump as insert statements and create table statements. That should get you close. The bigger question, though, is why you want to switch. You may do a lot of work and not get any real gain from it.
{ "language": "en", "url": "https://stackoverflow.com/questions/109861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is there a Perl solution for lazy lists this side of Perl 6? Has anybody found a good solution for lazily-evaluated lists in Perl? I've tried a number of ways to turn something like for my $item ( map { ... } @list ) { } into a lazy evaluation--by tie-ing @list, for example. I'm trying to avoid breaking down and writing a source filter to do it, because they mess with your ability to debug the code. Has anybody had any success? Or do you just have to break down and use a while loop? Note: I guess that I should mention that I'm kind of hooked on sometimes-long grep-map chains for functionally transforming lists. So it's not so much the foreach loop or the while loop. It's that map expressions tend to pack more functionality into the same vertical space. A: [Sidenote: Be aware that each individual step along a map/grep chain is eager. If you give it a big list all at once, your problems start much sooner than at the final foreach.] What you can do to avoid a complete rewrite is to wrap your loop with an outer loop. Instead of writing this: for my $item ( map { ... } grep { ... } map { ... } @list ) { ... } … write it like this: while ( my $input = calculate_next_element() ) { for my $item ( map { ... } grep { ... } map { ... } $input ) { ... } } This saves you from having to significantly rewrite your existing code, and as long as the list does not grow by several orders of magnitude during transformation, you get pretty nearly all the benefit that a rewrite to iterator style would offer. A: If you want to make lazy lists, you'll have to write your own iterator. Once you have that, you can use something like Object::Iterate, which has iterator-aware versions of map and grep. Take a look at the source for that module: it's pretty simple and you'll see how to write your own iterator-aware subroutines. Good luck, :) A: There is at least one special case where for and foreach have been optimized to not generate the whole list at once. And that is the range operator.
So you have the option of saying: for my $i (0..$#list) { my $item = some_function($list[$i]); ... } and this will iterate through the array, transformed however you like, without creating a long list of values up front. If you wish your map statement to return variable numbers of elements, you could do this instead: for my $i (0..$#array) { for my $item (some_function($array[$i])) { ... } } If you wish more pervasive laziness than this, then your best option is to learn how to use closures to generate lazy lists. MJD's excellent book Higher Order Perl can walk you through those techniques. However, do be warned that they will involve far larger changes to your code. A: Bringing this back from the dead to mention that I just wrote the module List::Gen on CPAN, which does exactly what the poster was looking for: use List::Gen; for my $item ( @{gen { ... } \@list} ) {...} All computation of the lists is lazy, and there are map / grep equivalents along with a few other functions. Each of the functions returns a 'generator', which is a reference to a tied array. You can use the tied array directly, or there are a bunch of accessor methods like iterators to use. A: Use an iterator or consider using Tie::LazyList from CPAN (which is a tad dated). A: I asked a similar question at perlmonks.org, and BrowserUk gave a really good framework in his answer. Basically, a convenient way to get lazy evaluation is to spawn threads for the computation, at least as long as you're sure you want the results, Just Not Now. If you want lazy evaluation not to reduce latency but to avoid calculations, my approach won't help, because it relies on a push model, not a pull model. Possibly using coroutines, you can turn this approach into a (single-threaded) pull model as well.
While pondering this problem, I also investigated tie-ing an array to the thread results to make the Perl program flow more like map, but so far, I like my API of introducing the parallel "keyword" (an object constructor in disguise) and then calling methods on the result. The more documented version of the code will be posted as a reply to that thread and possibly released onto CPAN as well. A: If I remember correctly, for/foreach do get the whole list first anyways, so a lazily evaluated list would be read completely and then it would start to iterate through the elements. Therefore, I think there's no other way than using a while loop. But I may be wrong. The advantage of a while loop is that you can fake the sensation of a lazily evaluated list with a code reference: my $list = sub { return calculate_next_element }; while(defined(my $element = &$list)) { ... } After all, I guess a tie is as close as you can get in Perl 5. A: As mentioned previously, for(each) is an eager loop, so it wants to evaluate the entire list before starting. For simplicity, I would recommend using an iterator object or closure rather than trying to have a lazily evaluated array. While you can use a tie to have a lazily evaluated infinite list, you can run into troubles if you ever ask (directly or indirectly, as in the foreach above) for the entire list (or even the size of the entire list). Without writing a full class or using any modules, you can make a simple iterator factory just by using closures: sub make_iterator { my ($value, $max, $step) = @_; return sub { return if $value > $max; # Return undef when we overflow max. my $current = $value; $value += $step; # Increment value for next call. return $current; # Return current iterator value. }; } And then to use it: # All the even numbers between 0 - 100. 
my $evens = make_iterator(0, 100, 2); while (defined( my $x = $evens->() ) ) { print "$x\n"; } There's also the Tie::Array::Lazy module on the CPAN, which provides a much richer and fuller interface to lazy arrays. I've not used the module myself, so your mileage may vary. All the best, Paul
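As a side-by-side comparison, the closure-based make_iterator above is exactly what languages with built-in generators provide for free. A rough Python analogue, shown only to make the laziness of chained map/grep-style transforms explicit:

```python
def make_iterator(value, maximum, step):
    """Lazy sequence: each value is computed only when the caller asks for it."""
    while value <= maximum:
        yield value
        value += step

evens = make_iterator(0, 100, 2)
first_three = [next(evens) for _ in range(3)]  # only three values ever produced

# map/grep-style chains stay lazy too: generator expressions compose
# without materializing any intermediate list.
chain = (x * x for x in make_iterator(0, 10**9, 2) if x % 3 == 0)
first_squares = [next(chain) for _ in range(3)]
```

Nothing beyond the requested values is ever computed, even though the second chain nominally ranges over half a billion elements.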
{ "language": "en", "url": "https://stackoverflow.com/questions/109880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How to return JSON from a HandleError filter? ASP.NET MVC has the HandleError filter that will return a view if an error occurs, but if an error occurs when calling a JsonResult action, how can I return a JSON object that represents an error? I don't want to wrap the code in each action method that returns a JsonResult in a try/catch to accomplish it; I'd rather do it by adding a 'HandleJsonError' attribute or using the existing HandleError attribute on the required action methods. A: In short, the way to go can be to extend the HandleErrorAttribute, like this: public class OncHandleErrorAttribute : HandleErrorAttribute { public override void OnException(ExceptionContext context) { // Elmah-log only handled exceptions if (context.ExceptionHandled) ErrorSignal.FromCurrentContext().Raise(context.Exception); if (context.HttpContext.Request.IsAjaxRequest()) { // if request was an Ajax request, respond with JSON with an Error field var jsonResult = new ErrorController { ControllerContext = context }.GetJsonError(context.Exception); jsonResult.ExecuteResult(context); context.ExceptionHandled = true; } else { // if not an Ajax request, continue with logic implemented by MVC -> HTML error page base.OnException(context); } } } Remove the Elmah logging line if you don't need it. I use one of my controllers to return JSON based on the error and context.
Here is the sample: public class ErrorController : Controller { public ActionResult GetJsonError(Exception ex) { var ticketId = Guid.NewGuid(); // Let's issue a ticket to show the user and have in the log Request.ServerVariables["TTicketID"] = ticketId.ToString(); // Elmah will show this in a nice table ErrorSignal.FromCurrentContext().Raise(ex); //ELMAH signaling ex.Data.Add("TTicketID", ticketId.ToString()); // Trying to see where this one gets in Elmah return Json(new { Error = String.Format("Support ticket: {0}\r\n Error: {1}", ticketId, ex.ToString()) }, JsonRequestBehavior.AllowGet); } } I add some ticket info above; you can ignore this. Due to the way the filter is implemented (it extends the default HandleErrorAttribute) we can remove the HandleErrorAttribute from the global filters: public class MvcApplication : System.Web.HttpApplication { public static void RegisterGlobalFilters(GlobalFilterCollection filters) { filters.Add(new GlobalAuthorise()); filters.Add(new OncHandleErrorAttribute()); //filters.Add(new HandleErrorAttribute()); } } This is basically it. You can read my blog entry for more detailed info, but for the idea, the above should suffice. A: Take a look at the MVC implementation of HandleErrorAttribute. It returns a ViewResult. You could write your own version (HandleJsonErrorAttribute) that returns a JsonResult. A: Maybe you could create your own attribute and have a constructor that takes an enum value of View or Json. Below is what I'm using for a custom authorization attribute, to demonstrate what I mean. This way, when authentication fails on a JSON request it responds with a JSON error, and the same with a View for a regular request.
public enum ActionResultTypes { View, Json } public sealed class AuthorizationRequiredAttribute : ActionFilterAttribute, IAuthorizationFilter { public ActionResultTypes ActionResultType { get; set; } public AuthorizationRequiredAttribute(ActionResultTypes actionResultType) { this.ActionResultType = actionResultType; } } //And used like [AuthorizationRequired(ActionResultTypes.View)] public ActionResult About() { }
{ "language": "en", "url": "https://stackoverflow.com/questions/109883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Regular expression to match only the first file in a RAR file set To see what file to invoke the unrar command on, one needs to determine which file is the first in the file set. Here are some sample file names, of which - naturally - only the first group should be matched: yes.rar yes.part1.rar yes.part01.rar yes.part001.rar no.part2.rar no.part02.rar no.part002.rar no.part011.rar One (limited) way to do it with PCRE compatible regexps is this: .*(?:(?<!part\d\d\d|part\d\d|\d)\.rar|\.part0*1\.rar) This did not work in Ruby when I tested it at Rejax however. How would you write one Ruby compatible regular expression to match only the first file in a set of RAR files? A: Don't rely on the names of the files to determine which one is first. You're going to end up finding an edge case where you get the wrong file. RAR's headers will tell you which file is the first on in the volume, assuming they were created in a somewhat-recent version of RAR. HEAD_FLAGS Bit flags: 2 bytes 0x0100 - First volume (set only by RAR 3.0 and later) So open up each file and examine the RAR headers, looking specifically for the flag that indicates which file is the first volume. This will never fail, as long as the archive isn't corrupt. I have done my own tests with spanning RAR archives and their headers are correct according to the link above. This is a much, much safer way of determining which file is first in a set like this. A: The short answer is that it's not possible to construct a single regex to satisfy your problem. Ruby 1.8 does not have lookaround assertions (the (?<! stuff in your example regex) which is why your regex doesn't work. This leaves you with two options. 1) Use more than one regex to do it. def is_first_rar(filename) if ((filename =~ /part(\d+)\.rar$/) == nil) return (filename =~ /\.rar$/) != nil else return $1.to_i == 1 end end 2) Use the regex engine for ruby 1.9, Oniguruma. It supports lookaround assertions, and you can install it as a gem for ruby 1.8. 
After that, you can do something like this: def is_first_rar(filename) reg = Oniguruma::ORegexp.new('.*(?:(?<!part\d\d\d|part\d\d|\d)\.rar|\.part0*1\.rar)') match = reg.match(filename) return match != nil end A: I am no regex expert, but here is my attempt: ^(yes|no)\.(rar|part0*1\.rar)$ Replace "yes|no" with the actual file name. I matched it against your examples to see if it would only match the first set, hence the "yes|no" in the regex. UPDATE: fixed as per the comment. Not sure why the user would not know the filename, so I did not fix that part... A: Personally I wouldn't use (extended) regular expressions in this case (or at least not just one to do it all). What's wrong with coding this in, for example, a few ifs?
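For what it's worth, the two-step approach (look for a .partNN suffix first, and only fall back to a plain .rar test when there is none) ports to any regex flavor, with or without lookbehind. Here is a Python sketch of it, checked against the sample names from the question; the caveat from the header-reading answer still applies, since filenames alone can lie:

```python
import re

def is_first_rar(filename):
    """First file of a RAR set: either 'name.partN.rar' with N == 1
    (any amount of zero padding), or a plain 'name.rar' with no .partN suffix."""
    m = re.search(r"\.part(\d+)\.rar$", filename, re.IGNORECASE)
    if m:
        return int(m.group(1)) == 1
    return filename.lower().endswith(".rar")
```

Converting the captured part number to an integer sidesteps the padding problem (part1, part01, part001) that makes the pure-regex version awkward.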
{ "language": "en", "url": "https://stackoverflow.com/questions/109916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Multithreading in Visual Basic 6.0 How do you implement multi-threading in Visual Basic 6.0? It would be great if someone could give an example. A: If the problem that you are trying to solve is a long calculation and you want to keep the UI responsive, then one possibility is to frequently call the DoEvents function within your long calculation. This way, your program can process any Windows messages, and thus the UI will respond to user commands. You can also set up a Cancel button to signal your process that it needs to end. If you do this, then you will need to be careful to disable any controls that could cause a problem, such as running the long process a second time after it has started. A: VB6 is not a really good environment for multi-threaded applications. There is no out-of-the-box support; you need to delve into standard WinAPI functions. Take a look at this article, which provides quite a comprehensive sample: http://www.freevbcode.com/ShowCode.Asp?ID=1287 A: On several projects I have implemented asynchronous processing in VB6 using multiple processes. Basically, this means having a worker thread within an ActiveX EXE project that is separate from the main process. The worker EXE can then be passed whatever data it needs and started, raising an event back to say it's finished or that there is data for the main process. It's more resource hungry (an extra process rather than a thread), but VB6 runs in a single-threaded apartment and doesn't have any built-in support for starting new threads. If you really need to have multiple threads within one process, I'd suggest looking at using .NET or VC6 rather than VB6. A: Create ActiveX controls to manage your code. Each control has its own thread. You can stack multiple controls doing the same thing, or have individual controls doing unique things. E.g., you make one to download a file from the net.
Add ten controls and you have ten individual threaded downloads running, independent of the thread on which the actual program is running. Essentially, they are all just interactive windows, controlled by an instanced mini-DLL program. Can't get any easier than that. You can throttle them, turn them on and off, and create more or remove them as needed. (Indexed just like any of the other "objects" on a form, which are all just ActiveX controls, simply managed by the VB runtime DLLs.) A: You can use the Interop Forms Toolkit 2.0 for multithreading in VB6. The Toolkit allows you to take advantage of .NET features without being forced onto an upgrade path. Thus you can also use .NET User Controls as ActiveX controls in VB6.
{ "language": "en", "url": "https://stackoverflow.com/questions/109931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: linq equivalent of 'select *' sql for generic function? I've got a generic<> function that takes a linq query ('items') and enumerates through it, adding additional properties. How can I select all the properties of the original 'item' rather than the item itself (as the code below does)? So, equivalent to the SQL: select *, 'bar' as Foo from items foreach (var item in items) { var newItem = new { item, // I'd like just the properties here, not the 'item' object! Foo = "bar" }; newItems.Add(newItem); } A: There's no easy way of doing what you're suggesting, as all types in C# are strongly typed, even the anonymous ones like you're using. However, it's not impossible to pull it off. To do it you would have to utilize reflection and emit your own assembly in memory, adding a new module and type that contains the specific properties you want. It's possible to obtain a list of properties from your anonymous item using: foreach(PropertyInfo info in item.GetType().GetProperties()) Console.WriteLine("{0} = {1}", info.Name, info.GetValue(item, null)); A: Shoot, you wrote exactly what I was going to post. I was just getting some code ready :/ It's a little convoluted, but anyway: ClientCollection coll = new ClientCollection(); var results = coll.Select(c => { Dictionary<string, object> objlist = new Dictionary<string, object>(); foreach (PropertyInfo pi in c.GetType().GetProperties()) { objlist.Add(pi.Name, pi.GetValue(c, null)); } return new { someproperty = 1, propertyValues = objlist }; }); A: from item in items where someConditionOnItem select new { item.propertyOne, item.propertyTwo }; A: Ask the item to give them to you. Reflection is one way... however, since all the properties are known at compile time, each item could have a method that helps this query get what it needs.
Here are some example method signatures: public XElement ToXElement() public IEnumerable ToPropertyEnumerable() public Dictionary<string, object> ToNameValuePairs() A: Suppose you have a collection of a Department class: public int DepartmentId { get; set; } public string DepartmentName { get; set; } Then use an anonymous type like this: List<Department> depList = new List<Department>(); depList.Add(new Department { DepartmentId = 1, DepartmentName = "Finance" }); depList.Add(new Department { DepartmentId = 2, DepartmentName = "HR" }); depList.Add(new Department { DepartmentId = 3, DepartmentName = "IT" }); depList.Add(new Department { DepartmentId = 4, DepartmentName = "Admin" }); var result = from b in depList select new { Id = b.DepartmentId, Department = b.DepartmentName, Foo = "bar" };
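The reflection-based answers above all come down to the same move: flatten the item's properties into a name/value bag, then merge in the extra fields. Here is that move in miniature as a Python sketch (purely illustrative; vars() plays the role that GetType().GetProperties() plays in the C# snippets):

```python
def with_extra(item, **extra):
    """Flatten an object's public attributes into a dict and merge in new
    fields, giving roughly the "select *, 'bar' as Foo" effect."""
    props = {k: v for k, v in vars(item).items() if not k.startswith("_")}
    props.update(extra)
    return props

class Item:
    def __init__(self, name, price):
        self.name = name
        self.price = price

# Same shape as the question's loop: each result carries all of the
# item's properties plus the added Foo field, with no nested 'item' object.
new_items = [with_extra(i, Foo="bar") for i in [Item("widget", 10)]]
```

The trade-off is the same one the C# answers point out: the result is an untyped bag rather than a compile-time-checked type, which is the price of a dynamic "select *".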
{ "language": "en", "url": "https://stackoverflow.com/questions/109934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }