Q: Where can I find free WPF controls and control templates? I am looking for some recommendations on good places to find libraries of controls/templates/styles for WPF. I know about the usual places like Infragistics, but it seems to me that there should be some kind of community effort by now to share nice, clean, well-written controls for WPF. I am not big on the design side, and it would be nice to fill out my personal libraries with some nice examples from people who are better at design. Any ideas or recommendations?
A: Syncfusion has a free community version available with over 650 controls. Syncfusion You will find an FAQ there with any licensing questions you may have; it sounds great, to be honest. Have fun! Edit: The WPF controls themselves number 100+; the figure of 650+ refers to all controls for all areas (WPF, Windows Forms etc).
A: I searched for some good themes across the internet and found nothing. So I ported selected controls of the GTK Hybrid theme. It's MIT licensed and you can find it here: https://github.com/stil/candyshop It's not an enterprise-grade style and probably has some flaws, but I use it in my personal projects.
A: They have one for free as a sample at http://www.xamltemplates.net/
A: I strongly recommend MahApps; it's simply awesome!
A: Silverlight and WPF Dashboards and gauges Simple (but great) piece of work.
A: Codeplex is definitely the right place. Recent "post": SofaWPF.codeplex.com based on AvalonDock.codeplex.com, an IDE-like framework.
A: Check out Reuxables, although it comes at a cost.
{ "language": "en", "url": "https://stackoverflow.com/questions/121724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "87" }
Q: Server virtualization: how to avoid, locate and fix bottlenecks? Server virtualization is a big thing these days, so I'm tasked at work to install some of our software on a virtualized server and see what happens. Long story short: an rsync transfer promptly brings the virtualized server to its knees. The virtualization host is a beefy machine with no other load; I don't think this should be happening. top shows high load averages and CPU iowait near 100%. There's a huge bottleneck somewhere. I'm more a programmer than a sysadmin, and I lack the knowledge on how to go about fixing this outside of random Googling. I suspect I'm not alone in this. What I'd like to see here is general advice on virtualization, and pointers to good articles and other resources, which I and others could use to educate ourselves.
* What tools (even standard unix tools) can be used to pinpoint bottlenecks?
* What metrics should be followed to ensure things run smoothly?
* What kind of things can be efficiently virtualized?
* What kind of setups are doomed to fail?
I apologize for the broadness of the question. I just don't have the knowledge to ask useful specific questions about this. Edit: More on my specific problem:
* Xen paravirtualization, 3 x CentOS guests
* All guests on local SCSI disks, behind a full hardware RAID controller
* rsyncd running on 1 guest OS, transfer initiated from a remote non-virtualized server through a 100 Mbps LAN
Like I said before, I really can't provide a ton of useful data. I'm not really expecting to get a direct solution to this problem; I'd be happy with pointers on where to start building the skillset required to better understand these kinds of problems.
A: I use rsync to keep some parts of our (very new) virtual environment in sync without any issues. I don't think it's a virtualization issue as much as it is an I/O issue, which you appear to have already identified. I've found that virtualization is very, very taxing on hard disks, and this only gets worse the more guests you have on the host box. For machines that are very I/O intensive, consider segmenting their disk access away from the other guests. Are you using any kind of SAN technology? We've found that to be very useful at my workplace (we're using two 8-core Sun Intel servers and a 1 TB 12-disk iSCSI array). Is your hardware fully supported by the virtualization software provider? If you're trying to run on unsupported hardware then there's a good chance that your disk controller is not going to be using the best drivers, which would explain your slow disk accesses. You can use iostat on Linux/Unix to get some feedback on the I/O, and there's iotop too, though it's not packaged in many distros yet.
A: I was going to put this in a comment, but I think it's more useful in the open. Could you add more detail about your setup:
* Which VM server? (VMware Server, VMware ESX, MS Virtual Server, MS Hyper-V, something else?)
* Which OS for the guest(s)? (Windows, Linux, 32-bit, 64-bit?)
* Where are the guest(s) stored? (Local disk, or on a NAS or SAN?)
* Were you rsyncing between guests on the same VM server, or between a guest and a physical server?
* If across a network, how fast is the network?
Performance tuning in any environment is 90% collecting data and 10% analysis. Virtualized environments have more variables to consider than physical environments, but more importantly, they have a different response curve than physical environments.
Some applications can perform better in a virtualized environment than a physical environment; others will not. You have to understand the requirements of the application as well as the constraints of the implementation. I don't believe that there are any software-only applications that cannot be successfully deployed on virtual servers, if you pay attention to the details. (Applications that require custom hardware that can't be successfully virtualized are a different problem.)
A: Off hand I'd say this is an I/O problem. In virtual environments, one of the biggest factors that affects performance is the state of the disk of the host machine. The things that we do to optimize performance are:
* Fixed disk allocation. This way you get a contiguous block of drive space for the VM to live in.
* Schedule a defrag of the VM slice OS and the host server drive. Fragmentation is your enemy.
* Be sure that you are ending server sessions gracefully. When bouncing the virtual OS, DO NOT just shut down the VM slice, as this causes huge disk fragmentation. The virtual OS needs to perform the shut down / restart process.
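For what it's worth, a quick sketch of the iostat suggestion above (iostat comes from the sysstat package; the flags shown are the common Linux ones):

# Extended per-device statistics, refreshed every 5 seconds
iostat -x 5
# Sustained high %util and long await times on the device holding the guest
# images point at the disk, not the CPU, as the bottleneck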
{ "language": "en", "url": "https://stackoverflow.com/questions/121743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Best practices for SQL Server development across differing versions Expanding on this question, what is the best way to develop against both SQL Server 2005 and SQL Server 2008? I'd like to see if I could just use Orcas technology on my current Vista 64 machine, and since SQL Server 2005 wants to install a stub version of Visual Studio 2005, I'd like to avoid using it. However, most places where my technology would be deployed are on SQL Server 2005 for the foreseeable future. So what would be the best course of action:
* Install SQL Server 2008 only on my development machine and just be cognizant of the 2008-specific abilities
* Install SQL Server 2008 and SQL Server 2005 on separate instances on my development machine and develop against either depending on what the production project requires
* Install SQL Server 2008 only on my development machine and install SQL Server 2005 on a different machine (like a test server)
* Install SQL Server 2005 only on my development machine and install SQL Server 2008 on a different machine (like a test server)
A: The safest practice is to code against the oldest database server you support. This version is the one that will be far more likely to give you trouble. By and large the new versions of the db will have backwards compatibility to support your TSQL and constructs. It is far too simple to introduce unsupported code into the mix when using a newer version db than your target.
A: You need to code against the oldest version of SQL Server, so that you don't start using features not available until more recent versions. Although it is not necessarily true that the newer versions will continue to support older features, the best way to make sure is to run Microsoft's own SQL Server Best Practices Analyzer, which will notify you of compatibility issues.
A: The referenced question suggests changing the database compatibility level; this is a short-term solution that would not be automated easily. It's important to have automated testing, and if you're reading Stack Overflow, you probably agree. I would say get both MS SQL Server 2005 and 2008 running. Then, assuming you run unit tests, always unit test your database code against both servers.
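For reference, a minimal T-SQL sketch of the compatibility-level approach mentioned above (the database name is illustrative; the ALTER DATABASE syntax shown is the 2008 form, while 2005 uses sp_dbcmptlevel instead):

-- Check the current compatibility level of a database
SELECT name, compatibility_level FROM sys.databases WHERE name = 'MyAppDb';

-- Pin a database hosted on SQL Server 2008 to 2005 behavior (90 = SQL Server 2005)
ALTER DATABASE MyAppDb SET COMPATIBILITY_LEVEL = 90;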
{ "language": "en", "url": "https://stackoverflow.com/questions/121748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you implement Coroutines in C++ I doubt it can be done portably, but are there any solutions out there? I think it could be done by creating an alternate stack and resetting SP, BP, and IP on function entry, and having yield save IP and restore SP+BP. Destructors and exception safety seem tricky but solvable. Has it been done? Is it impossible?
A: Yes it can be done without a problem. All you need is a little assembly code to move the call stack to a newly allocated stack on the heap. I would look at the boost::coroutine library. The one thing that you should watch out for is a stack overflow. On most operating systems overflowing the stack will cause a segfault, because the virtual memory page is not mapped. However if you allocate the stack on the heap you don't get any guarantee. Just keep that in mind.
A: You might be better off with an iterator than a coroutine if possible. That way you can keep calling next() to get the next value, but you can keep your state as member variables instead of local variables. It might make things more maintainable. Another C++ developer might not immediately understand the coroutine whereas they might be more familiar with an iterator.
A: For those who want to know how they may leverage Coroutines in a portable way in C++ y̶o̶u̶ ̶w̶i̶l̶l̶ ̶h̶a̶v̶e̶ ̶t̶o̶ ̶w̶a̶i̶t̶ ̶f̶o̶r̶ ̶C̶+̶+̶1̶7̶ the wait is over (see below)! The standards committee is working on the feature; see the N3722 paper. To summarize the current draft of the paper, instead of Async and Await, the keywords will be resumable and await. Take a look at the experimental implementation in Visual Studio 2015 to play with Microsoft's implementation. It doesn't look like clang has an implementation yet. There is a good talk from CppCon, "Coroutines: a negative overhead abstraction", outlining the benefits of using Coroutines in C++ and how it affects the simplicity and performance of the code. At present we still have to use library implementations, but in the near future we will have coroutines as a core C++ feature. Update: Looks like the coroutine implementation is slated for C++20, but was released as a technical specification with C++17 (p0057r2). Visual C++, clang and gcc allow you to opt in using a compile-time flag.
A: Does COROUTINE, a portable C++ library for coroutine sequencing, point you in the right direction? It seems like an elegant solution that has lasted the test of time... it's 9 years old! In the DOC folder is a pdf of the paper A Portable C++ Library for Coroutine Sequencing by Keld Helsgaun, which describes the library and provides short examples using it. [update] I'm actually making successful use of it myself. Curiosity got the better of me, so I looked into this solution, and found it was a good fit for a problem I've been working on for some time!
A: I don't think there are many full-blown, clean implementations in C++. One try that I like is Adam Dunkels' protothread library. See also Protothreads: simplifying event-driven programming of memory-constrained embedded systems in the ACM Digital Library and the discussion in the Wikipedia topic Protothread.
A: It is based on (cringe) macros, but the following site provides an easy-to-use generator implementation: http://www.codeproject.com/KB/cpp/cpp_generators.aspx
A: A new library, Boost.Context, was released today with portable features for implementing coroutines.
A: This is an old thread, but I would like to suggest a hack using Duff's device that is not OS-dependent (as far as I remember): C coroutines using Duff's device And as an example, here is a telnet library I modified to use coroutines instead of fork/threads: Telnet cli library using coroutines And since standard C prior to C99 is essentially a true subset of C++, this works well in C++ too.
A: I've come up with an implementation without asm code. The idea is to use the system's thread-creating function to initialize the stack and context, and use setjmp/longjmp to switch context. But it's not portable; see the tricky pthread version if you are interested.
A: On POSIX, you can use the makecontext()/swapcontext() routines to portably switch execution contexts. On Windows, you can use the fiber API. Otherwise, all you need is a bit of glue assembly code that switches the machine context. I have implemented coroutines both with ASM (for AMD64) and with swapcontext(); neither is very hard.
A: For posterity, Dmitry Vyukov's wonderful web site has a clever trick using ucontext and setjmp to simulate coroutines in C++. Also, Oliver Kowalke's context library was recently accepted into Boost, so hopefully we'll be seeing an updated version of boost.coroutine that works on x86_64 soon.
A: There's no easy way to implement coroutines. Coroutines fall outside C/C++'s stack abstraction, just as threads do, so they cannot be supported without language-level changes. Currently (C++11), all existing C++ coroutine implementations are based on assembly-level hacking, which is hard to make safe and reliable across platforms. To be reliable it needs to be standard, and handled by compilers rather than hacking. There's a standard proposal - N3708 - for this. Check it out if you're interested.
A: https://github.com/tonbit/coroutine is a C++11 single-header asymmetric coroutine implementation supporting resume/yield/await primitives and the Channel model. It's implemented via ucontext / fiber, not depending on boost, and runs on linux/windows/macOS. It's a good starting point to learn implementing coroutines in C++.
A: Check out my implementation, it illustrates the asm hacking point, it works on x86, x86-64, aarch32 and aarch64: https://github.com/user1095108/cr2/ Most hand-rolled coroutine implementations are variants of the setjmp/longjmp pattern or the ucontext pattern. Since these work on a variety of architectures, the coroutine implementations themselves are widely portable; you just need to provide some basic assembly code.
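For reference, a minimal sketch of the POSIX ucontext approach several of these answers mention (makecontext/swapcontext were removed from POSIX.1-2008 but remain widely available on Linux and the BSDs):

#include <ucontext.h>
#include <cstdio>

static ucontext_t main_ctx, coro_ctx;

// The coroutine body: yields back to main between steps.
static void coro_body() {
    for (int i = 0; i < 3; ++i) {
        std::printf("coroutine step %d\n", i);
        swapcontext(&coro_ctx, &main_ctx); // yield
    }
}

int main() {
    static char stack[64 * 1024];        // the coroutine's private stack
    getcontext(&coro_ctx);
    coro_ctx.uc_stack.ss_sp = stack;
    coro_ctx.uc_stack.ss_size = sizeof stack;
    coro_ctx.uc_link = &main_ctx;        // where control goes when the body returns
    makecontext(&coro_ctx, coro_body, 0);

    for (int i = 0; i < 4; ++i) {        // the fourth resume lets the body finish
        std::printf("main: resuming coroutine\n");
        swapcontext(&main_ctx, &coro_ctx);
    }
    return 0;
}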
A: Based on macros as well (Duff's device, fully portable, see http://www.chiark.greenend.org.uk/~sgtatham/coroutines.html ) and inspired by the link posted by Mark, the following emulates co-processes collaborating using events as a synchronization mechanism (a slightly different model than the traditional co-routines/generator style):

// Coprocess.h
#pragma once
#include <vector>

class Coprocess {
public:
    Coprocess() : line_(0) {}
    void start() { line_ = 0; run(); }
    void end() { line_ = -1; on_end(); }
    virtual void run() = 0;
    virtual void on_end() {};
protected:
    int line_;
};

class Event {
public:
    Event() : curr_(0) {}
    void wait(Coprocess* p) { waiters_[curr_].push_back(p); }
    void notify() {
        Waiters& old = waiters_[curr_];
        curr_ = 1 - curr_; // move to next ping/pong set of waiters
        waiters_[curr_].clear();
        for (Waiters::const_iterator I=old.begin(), E=old.end(); I != E; ++I)
            (*I)->run();
    }
private:
    typedef std::vector<Coprocess*> Waiters;
    int curr_;
    Waiters waiters_[2];
};

#define corun() run() { switch(line_) { case 0:
#define cowait(e) line_=__LINE__; e.wait(this); return; case __LINE__:
#define coend default:; }} void on_end()

An example of use:

// main.cpp
#include "Coprocess.h"
#include <iostream>

Event e;
long sum=0;

struct Fa : public Coprocess {
    int n, i;
    Fa(int x=1) : n(x) {}
    void corun() {
        std::cout << n << " starts\n";
        for (i=0; ; i+=n) {
            cowait(e);
            sum += i;
        }
    }
    coend {
        std::cout << n << " ended " << i << std::endl;
    }
};

int main() {
    // create 2 collaborating processes
    Fa f1(5);
    Fa f2(10);
    // start them
    f1.start();
    f2.start();
    for (int k=0; k<=100; k++) {
        e.notify();
    }
    // optional (only if you need to restart them)
    f1.end();
    f2.end();
    f1.start(); // coprocesses can be restarted
    std::cout << "sum " << sum << "\n";
    return 0;
}

A: WvCont is a part of WvStreams that implements so-called semi-coroutines. These are a little easier to handle than full-on coroutines: you call into it, and it yields back to the person who called it. It's implemented using the more flexible WvTask, which supports full-on coroutines; you can find it in the same library. Works on win32 and Linux, at least, and probably any other Unix system.
A: You should always consider using threads instead, especially on modern hardware. If you have work that can be logically separated into co-routines, using threads means the work might actually be done concurrently, by separate execution units (processor cores). But maybe you do want to use coroutines, perhaps because you have a well tested algorithm that has already been written and tested that way, or because you are porting code written that way. If you work on Windows, you should take a look at fibers. Fibers will give you a coroutine-like framework with support from the OS. I am not familiar enough with other OSes to recommend alternatives there.
A: I've tried to implement coroutines myself using C++11 and threads:

#include <condition_variable>
#include <functional>
#include <iostream>
#include <memory>
#include <mutex>
#include <thread>

class InterruptedException : public std::exception {
};

class AsyncThread {
public:
    AsyncThread() {
        std::unique_lock<std::mutex> lock(mutex);
        thread.reset(new std::thread(std::bind(&AsyncThread::run, this)));
        conditionVar.wait(lock); // wait for the thread to start
    }

    ~AsyncThread() {
        {
            std::lock_guard<std::mutex> _(mutex);
            quit = true;
        }
        conditionVar.notify_all();
        thread->join();
    }

    void run() {
        try {
            yield();
            for (int i = 0; i < 7; ++i) {
                std::cout << i << std::endl;
                yield();
            }
        } catch (InterruptedException& e) {
            return;
        }
        std::lock_guard<std::mutex> lock(mutex);
        quit = true;
        conditionVar.notify_all();
    }

    void yield() {
        std::unique_lock<std::mutex> lock(mutex);
        conditionVar.notify_all();
        conditionVar.wait(lock);
        if (quit) {
            throw InterruptedException();
        }
    }

    void step() {
        std::unique_lock<std::mutex> lock(mutex);
        if (!quit) {
            conditionVar.notify_all();
            conditionVar.wait(lock);
        }
    }

private:
    std::unique_ptr<std::thread> thread;
    std::condition_variable conditionVar;
    std::mutex mutex;
    bool quit = false;
};

int main() {
    AsyncThread asyncThread;
    for (int i = 0; i < 3; ++i) {
        std::cout << "main: " << i << std::endl;
        asyncThread.step();
    }
}
{ "language": "en", "url": "https://stackoverflow.com/questions/121757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "76" }
Q: Sleep a thread until an event is attended in another thread I have two threads in an Android application: one is the view thread, and the other is the worker thread. What I want to do is sleep the worker thread until the view thread terminates the handling of the onDraw method. How can I do this? Is there a way to wait for a signal or something?
A: Share a java.lang.Object between the two threads, whose sole purpose is to tell the worker thread when it can continue its work. Whenever the worker thread reaches a point where it should sleep, it does this (synchronizing on the object first, since a thread must own the monitor to call wait):

synchronized (stick) {
    stick.wait();
}

When the view thread finishes its onDraw work, it calls this:

stick.notify();

Note the requirement that the view thread owns the monitor on the object. In your case, this should be fairly simple to enforce with a small sync block:

void onDraw() {
    ...
    synchronized (stick) {
        stick.notify();
    }
} // end onDraw()

Consult the javadoc for java.lang.Object on these methods (and notifyAll, just in case); they're very well written.
A: If you want a higher-level concurrency API (with things like Barriers), you could try the backport of the Java 5 concurrency API, which works on Java 1.3 and above, and may work on Android. The likes of Object.wait/notify will work, but they can be a bit terse. Backport-util-concurrent
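For reference, a minimal runnable sketch of the wait/notify pattern above (class and field names are illustrative), with a condition flag so the worker also survives spurious wakeups:

public class DrawSignal {
    private final Object stick = new Object();
    private boolean drawDone = false;

    // Worker thread: sleeps until onDraw has finished.
    public void awaitDraw() throws InterruptedException {
        synchronized (stick) {
            while (!drawDone) {
                stick.wait(); // releases the monitor while waiting
            }
            drawDone = false; // reset for the next frame
        }
    }

    // View thread: call at the end of onDraw().
    public void signalDrawDone() {
        synchronized (stick) {
            drawDone = true;
            stick.notifyAll();
        }
    }
}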
{ "language": "en", "url": "https://stackoverflow.com/questions/121762", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: What is the STL implementation with the lowest memory footprint? I am working on a very large scale computing library that is using STL heavily. The library is being built using MSVC2003 and it is using its STL implementation. I am looking for an alternative STL implementation that would help the library lower its memory requirements and increase its performance. It is not possible to switch to a newer version of MSVC for the moment. I would like some feedback on real world usage, not based on benchmarks if possible. EDIT: To make it a little clearer: for example, some STL implementations (like STLSoft) are proposing specific optimizations for string concatenation; these might sound small in impact but they can lead to large improvements. STLPort is another good example where they clearly state their goal: having the fastest STL implementation around. There is also stdlib++, etc. All of these can be good candidates, but I have no time to test them all; I require some community help on that.
A: STLPort. Haven't measured memory usage differences, but it's definitely quicker (yes, real world usage).
A: I question your basic premise, that you cannot switch to a newer version of MSVC. I don't think you're going to get lower memory and increased performance "for free" by downloading a new STL. Or at least, if you did, you would probably have to do as many code fixes as if you were to just update to the latest MSVC. Long term, there's no question you want to update... Do it now, and you might get lucky and get some of that memory and performance for free. The only thing I can think to suggest to you along the lines of what you say you're looking for would be to try the Intel compiler, which I've had both good (performance!) and bad (quirky, sometimes!) experience with. Other than that, find your own memory and performance problems, and write custom containers and algorithms. STL is awesome, but it's not a panacea for fixing all problems in all cases. Domain knowledge is your best ally.
A: Have you considered writing your own memory allocator? You don't always need to switch the entire STL if you just don't like the memory allocation strategy. All containers accept a replacement allocator.
A: Have you profiled your code and considered small tweaks to those areas that are the problem? I would think it would be much less painful than what you're considering.
A: Most of it depends on which container you are talking about, and how you are using it. vector usually has the smallest footprint, except at the moment you add an element that is beyond the current vector capacity. At that moment it will allocate something like 1.5x the current vector's capacity, move the elements (or in the worst case make a new copy, which also allocates memory) and, when that is done, delete the old vector's internals. If you know how many elements it is going to hold up front, vector with a use of reserve is your best bet. The second smallest is list. It has the advantage that it's not going to make a temporary copy of itself. After that, your best bet is probably set. Some implementations have slist now, which is smaller. In these cases it is pretty easy to make an allocator that packs the memory in pages.
Stay away from memory hogs like unordered_*. On MSVC be sure to #define _SECURE_SCL=0. This takes out a lot of overhead used for secure programming checks (like buffer overruns, etc). By far the most memory-efficient containers are boost::intrusive. These have extremely small footprints since they use the memory of the thing being contained. So instead of going to the heap for a small chunk of memory for a linked list or rb-tree node, the node pointers are part of the object itself. Then the "container" is just one raw set of a few pointers to make a root node. I've used it quite a few times to get rid of footprint and allocation overhead.
A: Most STL implementations, including the one in MSVC2003, are well implemented generic libraries, so you won't see a significant performance improvement from one implementation to the other. However, sometimes you can write an algorithm (or container) that is faster than the STL version because you know something about your data that the STL writers did not know (since they were writing generic containers and algorithms). In conclusion, if you want to improve your application's performance, you are better off trying to create specialised containers that fit your data than looking for a more performant STL.
A: If performance is so critical to your application, and STL is interwoven into it, is it possible to find an open-source implementation (such as STL-Port, as mentioned) and fork it for yourself, making performance improvements as needed? On the one hand, I can see this becoming a slippery slope where you make non-standard modifications to your fork of the STL library, thus creating problems. However, the importance of performance to your application might outweigh the risk of this occurring.
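For reference, a minimal sketch of the reserve idiom recommended above, which avoids the repeated 1.5x regrowth and element moves:

#include <vector>

int main() {
    std::vector<int> v;
    v.reserve(1000000);      // one allocation up front
    for (int i = 0; i < 1000000; ++i)
        v.push_back(i);      // no reallocation, no element moves
    return 0;
}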
{ "language": "en", "url": "https://stackoverflow.com/questions/121787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Fix for fatal error C1083 We have a set of nightly builds that build our full suite of software using Embedded Visual C++ batch files. There is probably a total of 30 builds that are done. Every night at least one or two builds fail with something like the following error: c:\lc\trunk\server\can\svcangettracedrivelength.cpp(11) : fatal error C1083: Cannot open precompiled header file: 'SH4Rel/CANWce.pch': Permission denied It is never the same file or precompiled header that fails, and it is rarely the same executable. As far as I know nothing else is happening on this build machine. Does anyone have a fix to make our nightly builds run more reliably?
A: Try running it all in the visual IDE; it will be easier to catch this way. Are you sure you don't have multiple compiler instances working on several builds at once? One building a project/lib/whatever while another is trying to access it?
A: Does EVC 4.0 support macros? Maybe as a last resort you can have a macro that triggers the builds :) I don't understand your last statement. Clearly the trouble is at compile time, not at run time. Have you tried compiling without precompiled headers? What's the error then?
A: I encountered what appears to be the same issue - it was seemingly caused by Microsoft Security Essentials. I tried disabling it, and it immediately fixed the issue and it has not returned since.
A: Generally speaking we do not see this error when running inside of the IDE (EVC++ 4.0). We cannot run our nightly builds using the GUI, however. As far as we know the build machine is idle while the nightly builds are running.
{ "language": "en", "url": "https://stackoverflow.com/questions/121788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What version numbering scheme do you recommend? My question is, which version-naming scheme should be used for what type of project? Very common is major.minor.fix, but even this can lead to 4 numbers (i.e. Firefox 2.0.0.16). Some have a model where odd numbers indicate developer versions and even numbers stable releases. And all sorts of additions can enter the mix, like -dev3, -rc1, SP2 etc. Are there reasons to prefer one scheme over another, and should different types of projects (i.e. Open Source vs. Closed Source) have different version naming schemes?
A: We prefer a major.minor.milestone.revision-build scheme, where:
* major: Increments upon significant architectural changes or important advancements in capabilities.
* minor: Small changes and new features that do not require architectural changes.
* milestone: Indicates the stability and maturity of the code:
  * 0 for development/pre-alpha
  * 1 for alpha
  * 2 for beta
  * 3 for release candidate (RC)
  * 4 for final/production release
* revision: Indicates release, patch or bugfix number.
* build: Unique reference to a specific build, or version, of an application. The build number is a sequential integer, typically incremented at each build.
Examples:
* 1.4.2.0-798: First beta release of version 1.4, created by build number 798.
* 1.8.3.4-970: 1.8-RC4, created by build number 970.
* 1.9.4.0-986: First production release of version 1.9, created by build number 986.
* 1.9.4.2-990: Second bugfix release of version 1.9, created by build number 990.
Since production releases always have 4 as their 3rd component, that component may be removed for production releases.
A: I personally prefer MAJOR.MINOR.BUGFIX-SUFFIX, where SUFFIX is dev for development versions (version control checkouts), rc1 / rc2 for release candidates, and no suffix for release versions. If you have suffixes for development checkouts, maybe even with the revision number, there is no need to make them even/odd to keep them apart.
A: In the case of a library, the version number tells you about the level of compatibility between two releases, and thus how difficult an upgrade will be. A bug fix release needs to preserve binary, source, and serialization compatibility. Minor releases mean different things to different projects, but usually they don't need to preserve source compatibility. Major version numbers can break all three forms. I wrote more about the rationale here.
A: With Agile software development practices and SaaS applications, the idea of a Major vs. a Minor release has gone away - releases come out extremely frequently on a regular basis - so a release numbering scheme that relies on this distinction is no longer useful to me. My company uses a numbering scheme that takes the last 2 digits of the year the release started, followed by the release number within that year. So, the 4th release started in 2012 would be 12.4. You can include a "bug fix" version number after that if necessary, but ideally you are releasing frequently enough that these are not often necessary - so "12.4.2". This is a very simple scheme and has not given us any of the problems of other release numbering schemes that I have used before.
A: There are two good answers for this (plus a lot of personal preferences; see gizmo's comment on religious wars). For public applications, the standard Major.Minor.Revision.Build works best IMO - public users can easily tell what version of the program they have and, to some degree, how far out of date their version is.
For in-house applications, where the users never asked for the application, the deployment is handled by IT, and users will be calling the help desk, I found Year.Month.Day.Build to work better in a lot of situations. This version number can thus be decoded to provide more useful information to the help desk than the public versioning number scheme. However, at the end of the day I would make one recommendation above all else - use a system you can keep consistent. If there is a system that you can set up/script your compiler to use automatically every time, use that. The worst thing that can happen is releasing binaries with the same version number as the previous ones - I've recently been dealing with automated network error reports (someone else's application), and came to the conclusion that the Year.Month.Day.Build version numbers shown in the core dumps were not even remotely up to date with the application itself (the application itself used a splash screen with the real numbers - which of course were not drawn from the binary as one might assume). The result is I have no way of knowing if crash dumps are coming from a 2 year old binary (what the version number indicates) or a 2 month old binary, and thus no way of getting the right source code (no source control either!)
A: I'm a big fan of Semantic Versioning. As many others have commented, this uses the X.Y.Z format and gives good reasons as to why.
A: Here's what we use in our company: Major.Minor.PatchVersion.BuildNumber. The Major change involves a full release cycle, including marketing involvement etc. This number is controlled by forces outside of R&D (for example, in one of the places I worked, Marketing decided that our next version would be '11' - to match a competitor. We were at version 2 at the time :)). Minor is changed when a new feature or a major behavior change is added to the product. PatchVersion goes up by one every time a patch is officially added to the version, usually including bug fixes only. BuildNumber is used when a special version is released for a customer, usually with a bug fix specific to them. Usually that fix will be rolled up into the next patch or minor version (and Product Management usually marks the bug as "will be released for patch 3" in our tracking system).
A: Our R&D department uses 1.0.0.0.0.000: MAJOR.minor.patch.audience.critical_situation.build Please, please, don't do that.
A: This kind of question is more about religious war than objective aspects. There are always tons of pros and cons for one numbering scheme or another. All that people could (or should) give you is the scheme they used and why they chose it. On my side, I use an X.Y.Z scheme, where all are numbers and:
* X indicates a change in the public API that introduces backward incompatibility
* Y indicates an addition of some features
* Z indicates a fix (either fixing a bug, or changing internal structure without impacting functionality)
Eventually, I use a "Beta N" suffix if I want some feedback from the users before an official release is done. No "RC" suffix, as nobody is perfect and there will always be bugs ;-)
A: The difference between a closed and open-source version number policy can also come from a commercial aspect, where the major version can reflect the year of the release, for instance.
A: What we used to do here is major.minor.platform.fix. major: We increase this number when files saved by this build are no longer compatible with previous builds.
Example: Files saved in version 3.0.0.0 won't be compatible with version 2.5.0.0. minor: We increase this number when a new feature has been added. This feature should be visible to the user, not a hidden feature for developers. This number is reset to 0 when major is incremented. platform: This is the platform we use for development. Example: 1 stands for .NET Framework version 3.5. fix: We increase this number when only bug fixes are included in this new version. This number is reset to 0 when major or minor is incremented.
A: Simply Major.Minor.Revision.Build
{ "language": "en", "url": "https://stackoverflow.com/questions/121795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: Behaviour-Driven or Test-Driven Development? I recently heard of BDD and found it very similar to TDD. Which of these two do you use (if any)? And what are the pros and cons of each?
A: BDD is similar to TDD but with a different mindset. In BDD you're trying to create executable specifications instead of tests. This is mostly accomplished by using a different vocabulary but similar mechanics to TDD. BDD seems to be a reaction to a lot of cases where people claimed to be doing TDD but were writing integration tests instead of unit tests. BDD people thought talking about tests was misleading, and so tests became specifications. This seems a bit metaphysical but there are some good ideas behind it.
A: I'm very much of the BDD = TDD done properly camp. If you're doing TDD as originally described by Beck - and practised by many - then there is essentially no difference. What BDD brings to the table is some interesting variants on the language used to describe the process. By using alternate terminology in the descriptions of the process and the tools, BDD folk hope to encourage better practices - a laudable goal. I've been doing TDD for so long now it's hard for me to judge whether this actually helps. I think (hope :-) I've already learned many of the lessons that BDD tools/language encourage, so that they don't seem to provide much extra value to me. Of course YMMV - and I've not done a whole "real world" project using BDD tools - so I might be taking my personal experiments and extrapolating too far. I'd guess that BDD tools/language may be more useful to folk being introduced to this way of approaching development - since they avoid the whole confusion with "test" being used in the more traditional sense. I've not done this myself yet - and would be interested if folk here have had any such experience.
A: BDD is all about running the scenarios. Similar to TDD, we test each and every scenario as a story. The story will be explained by the customer; based on the storyline, the scenarios will be written. Tools like Cucumber make it easy to write scenarios.
A: TDD and BDD are pretty much the same. The difference is how we explain it, and therefore how successful teams end up being in making it work for them. BDD builds upon TDD by formalising the good habits of the best TDD practitioners. TDD is a developer's tool or guide to write good software, and BDD is a good tool to help with outside-in development with more involvement from the business, as it is developed using a ubiquitous language. My experience is that BDD helps with collaboration, and the use of business-readable, executable specifications helps to build a shared language when everyone in the team is involved in writing documentation that describes what the system should do. This helps the whole team to learn the language of the domain together. BDD is what it takes to make TDD succeed.
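For reference, a minimal sketch of the kind of business-readable scenario a tool like Cucumber executes (the feature and values are illustrative):

Feature: Account withdrawal
  Scenario: Withdraw less than the balance
    Given an account with a balance of 100
    When the user withdraws 40
    Then the remaining balance should be 60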
{ "language": "en", "url": "https://stackoverflow.com/questions/121806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How to tell if an Excel Workbook is protected I can use properties of an Excel Worksheet to tell if the worksheet is protected (Worksheet.Protection, Worksheet.ProtectContents etc). How can I tell using VBA if the entire workbook has been protected?
A: Found the answer myself: I need the Workbook.ProtectStructure and Workbook.ProtectWindows properties.
A: Worksheet.ProtectContents is what you would need to use, on each Worksheet. So I would set up a loop like this:

Public Function wbAllSheetsProtected(wbTarget As Workbook) As Boolean
    Dim ws As Worksheet
    wbAllSheetsProtected = True
    For Each ws In wbTarget.Worksheets
        If ws.ProtectContents = False Then
            wbAllSheetsProtected = False
            Exit Function
        End If
    Next ws
End Function

The function will return True if every worksheet is protected, and False if there are any worksheets not protected. I hope this is what you were looking for.
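For reference, a minimal sketch of checking the workbook-level properties from the self-answer above (the message box is just illustrative):

Sub CheckWorkbookProtection()
    ' Workbook.ProtectStructure: can sheets be added/moved/deleted?
    ' Workbook.ProtectWindows: are the workbook's windows locked?
    With ActiveWorkbook
        MsgBox "Structure protected: " & .ProtectStructure & vbCrLf & _
               "Windows protected: " & .ProtectWindows
    End With
End Sub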
{ "language": "en", "url": "https://stackoverflow.com/questions/121808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Remove Right Click Print Context Menu from Outlook 2007 Is there any way that I can remove the Print item from the context menu when you right-click on an email with VBA? I am forever right-clicking to reply to an email, only to accidentally click Print and have Outlook send it directly to the printer quicker than I can stop it. NB: I am using Outlook 2007.
A: Based on the link TcKs provided, that was pretty simple. In the example below I check the type of the item so that it only affects e-mails and not calendar items. To enter the code in Outlook, type Alt + F11, then expand the Microsoft Office Outlook Objects in the Project pane. Then double-click ThisOutlookSession and paste this code into the code window. I don't like to check captions like this, as you can run into issues with internationalization, but I didn't see an ActionID or anything on the Command. There was a FaceID, but that is just the id of the printer icon.

Private Sub Application_ItemContextMenuDisplay(ByVal CommandBar As Office.CommandBar, ByVal Selection As Selection)
    Dim cmdTemp As Office.CommandBarControl
    If Selection.Count > 0 Then
        Select Case TypeName(Selection.Item(1))
            Case "MailItem"
                For Each cmdTemp In CommandBar.Controls
                    If cmdTemp.Caption = "&Print" Then
                        cmdTemp.Delete
                        Exit For
                    End If
                Next cmdTemp
            Case Else
                'Debug.Print TypeName(Selection.Item(1))
        End Select
    End If
End Sub

A: There is a sample of how to programmatically work with Outlook: How to: Customize an Item Context Menu
{ "language": "en", "url": "https://stackoverflow.com/questions/121810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do I use Inno Setup to optionally install a plugin/file in a folder based on a registry entry? Inno Setup is a nice, easy to use installer. It is rated highly in this stackoverflow question. I have a need to install a plugin to a folder relative to the installation folder of a 3rd party application. It isn't obvious from the docs how to do this.
A: You can find the answer to how to optionally install a file using a registry entry in the documentation and in sample code, but it may not be obvious, so here are some example script snippets using an Adobe Premiere plugin as an example. The key steps are: 1) use the Check: parameter; 2) write a function that calls RegQueryStringValue and parses the path to construct the relative plugin folder destination; 3) use {code:} to call a function that returns the destination folder.

//
// Copy my plugin file to the Premiere Plugin folder, but only if Premiere is installed.
//
[Files]
Source: "C:\sourceFiles\myplugin.prm"; Check: GetPremierePluginDestination; DestDir: "{code:PluginDestination}"; Flags: ignoreversion overwritereadonly

[Code]
var
  sPluginDest : String;

//
// Search for the path where Premiere Pro was installed. Return true if the path is found.
// Set the variable to the plugin folder.
//
function GetPremierePluginDestination(): Boolean;
var
  i: Integer;
  len: Integer;
begin
  sPluginDest := '';
  RegQueryStringValue( HKLM, 'SOFTWARE\Adobe\Premiere Pro\CurrentVersion', 'Plug-InsDir', sPluginDest );
  len := Length(sPluginDest);
  if len > 0 then
  begin
    i := len;
    while sPluginDest[i] <> '\' do
    begin
      i := i-1;
    end;
    i := i+1;
    Delete(sPluginDest, i, len-i+1);
    Insert('Common', sPluginDest, i);
  end;
  Result := len > 0;
end;

//
// Use this function to return the path to install the plugin.
//
function PluginDestination(Param: String) : String;
begin
  Result := sPluginDest;
end;

I'm not a Pascal programmer, so any suggestions on making GetPremierePluginDestination more efficient are welcome.
{ "language": "en", "url": "https://stackoverflow.com/questions/121812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I redirect and modify extension-less URLs via ASP.NET? We have redesigned the structure of a website which has several business units. Now I want to redirect (301) to the new page. i.e.: was www.example.com/abc, now www.example.com/default.aspx?article=abc. I have tried to use Global.asax to do this, and it works properly when I debug through it.

if (Request.RawUrl.Contains("abc"))
{
    Response.RedirectLocation = "/default.aspx?article=abc";
    Response.StatusCode = 301;
    Response.StatusDescription = "Moved";
    Response.End();
}

So http://localhost:1234/example/abc redirects properly (where 1234 is the port for the debugging server), but http://localhost/example/abc does not redirect; it gives me a 404. Any ideas? Additional info: If I go to http://localhost/example/abc/default.aspx then it redirects properly.
A: Well, if the port indicates you are using the built-in web server (the one that comes with VS), this probably works because that always routes requests through the ASP.NET framework. Requests ending with /abc will not automatically route through the ASP.NET framework because IIS may not "know" you want them to. You need to check your IIS settings to make sure such requests are routed to the aspnet_isapi.dll EDIT: To accomplish this, you need to add a wildcard mapping:
* In IIS Manager, expand the local computer, expand the Web Sites folder, right-click the Web site or virtual directory that you want, and then click Properties.
* Click the appropriate tab: Home Directory, Virtual Directory, or Directory.
* In the Application settings area, click Configuration, and then click the Mappings tab.
* To install a wildcard application map, do the following:
  * On the Mappings tab, click Add or Insert.
  * Type the path to the DLL in the Executable text box or click Browse to navigate to it (for example, the ASP.NET 2.0 dll is at c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll on my machine)
  * For extension, use ".*" without quotes, of course
  * Select which verbs you want to look for (GET,HEAD,POST,DEBUG are the usual for ASP.NET, you decide)
  * Make sure "Script engine" or "Application engine" is selected
  * Uncheck "Check that file exists"
  * Click okay.
I may be off on this, but if I am, hopefully someone will correct me. :)
A: You should use the IIS wildcard redirection; you will need something like this: *; www.example.com/*; www.example.com/default.aspx?article=$0 There is a reasonable reference at Microsoft. If you're using Apache, I think you'll need to modify the htaccess file.
A: Are you currently testing the site in the Visual Studio web server? That usually runs the site on "localhost:nnnnn" where "nnnnn" is a port number (as above is 1234); it doesn't set it to run without one. If you've got IIS installed on the machine concerned, publish your project to it and you should be able to verify that it works without the "nnnnn", as there doesn't look to be anything in your code that would cause it to not do so.
A: Have you made sure the web.config files are the same for each website (assuming :1234 is different to :80)? Also, have you tried localhost:80?
A: Perhaps you want to take a look at Routing. See:
* http://msdn.microsoft.com/en-us/library/cc668201.aspx
* http://blogs.msdn.com/mikeormond/archive/2008/05/14/using-asp-net-routing-independent-of-mvc.aspx
A: Your http://localhost/example/abc is not invoking the Global.asax like you expect. Typically http://localhost is running on port 80 (:80).
If you want to run your site on port 80, you will need to deploy your site in IIS to run there.
A: You need to set up a handler mapping in IIS to forward all unknown extensions to ASP.NET. The first one works because Cassini is handling all requests; the second one doesn't work because IIS is looking for that directory, and it doesn't exist, instead of the .NET framework running the code you have. Here's information on how to do URL rewriting in ASP.NET. If possible though, I would suggest using the new Application Request Routing or UrlRewrite.net.
A: IIS, by default, doesn't hand all requests over to ASP.NET for handling. Only some resource extensions, among them "aspx", will be passed over to ASP.NET for handling. What's happening when you request http://localhost/example/abc is that IIS tries to locate the directory to see if you have a default file (i.e. default.aspx, index.html) to load from that directory. Since it can't find the directory with the junk "abc" tag in it, it never finds the default.aspx file to load. When you try to load http://localhost/example/abc/default.aspx, IIS sees the "aspx" extension and immediately hands it over to the ASP.NET runtime for handling. The reason that the http://localhost/example/abc request doesn't load is that it never gets handed to ASP.NET, so of course the global.asax never sees it. The Cassini-hosted site handles all requests, thus that call does get handled by ASP.NET and the global.asax file. I agree with Darren Kopp, who suggested that you need to set up handler mapping in IIS to forward unknown extensions to ASP.NET.
{ "language": "en", "url": "https://stackoverflow.com/questions/121813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I replace text inside a div element? I need to set the text within a DIV element dynamically. What is the best, browser-safe approach? I have prototypejs and scriptaculous available.

<div id="panel">
  <div id="field_name">TEXT GOES HERE</div>
</div>

Here's what the function will look like:

function showPanel(fieldName) {
    var fieldNameElement = document.getElementById('field_name');
    //Make replacement here
}

A:

function showPanel(fieldName) {
    var fieldNameElement = document.getElementById("field_name");
    while (fieldNameElement.childNodes.length >= 1) {
        fieldNameElement.removeChild(fieldNameElement.firstChild);
    }
    fieldNameElement.appendChild(fieldNameElement.ownerDocument.createTextNode(fieldName));
}

The advantages of doing it this way:
* It only uses the DOM, so the technique is portable to other languages, and doesn't rely on the non-standard innerHTML
* fieldName might contain HTML, which could be an attempted XSS attack. If we know it's just text, we should be creating a text node, instead of having the browser parse it for HTML
If I were going to use a javascript library, I'd use jQuery, and do this:

$("div#field_name").text(fieldName);

Note that @AnthonyWJones' comment is correct: "field_name" isn't a particularly descriptive id or variable name.
A: If you really want us to just continue where you left off, you could do:

if (fieldNameElement)
    fieldNameElement.innerHTML = 'some HTML';

A: nodeValue is also a standard DOM property you can use:

function showPanel(fieldName) {
    var fieldNameElement = document.getElementById('field_name');
    if (fieldNameElement.firstChild)
        fieldNameElement.firstChild.nodeValue = "New Text";
}

A: I would use Prototype's update method, which supports plain text, an HTML snippet or any JavaScript object that defines a toString method.

$("field_name").update("New text");

* Element.update documentation
A:

el.innerHTML = '';
el.appendChild(document.createTextNode("yo"));

A: You can simply use:

fieldNameElement.innerHTML = "My new text!";

A: Updated for everyone reading this in 2013 and later: This answer has a lot of SEO, but all the answers are severely out of date and depend on libraries to do things that all current browsers do out of the box. To replace text inside a div element, use Node.textContent, which is provided in all current browsers.

fieldNameElement.textContent = "New text";

A: If you're inclined to start using a lot of JavaScript on your site, jQuery makes playing with the DOM extremely simple. http://docs.jquery.com/Manipulation Makes it as simple as:

$("#field-name").text("Some new text.");

A: Use innerText if you can't assume structure - Use Text#data to update existing text Performance Test
A:

$('field_name').innerHTML = 'Your text.';

One of the nifty features of Prototype is that $('field_name') does the same thing as document.getElementById('field_name'). Use it! :-) John Topley's answer using Prototype's update function is another good solution.
A: The quick answer is to use innerHTML (or Prototype's update method, which is pretty much the same thing). The problem with innerHTML is you need to escape the content being assigned.
Depending on your targets, you will need to do that with other code or, alternatively, use the text-only properties. In IE:

document.getElementById("field_name").innerText = newText;

In FF:

document.getElementById("field_name").textContent = newText;

(Actually, for FF I have the following present in my code:)

HTMLElement.prototype.__defineGetter__("innerText", function () {
    return this.textContent;
});
HTMLElement.prototype.__defineSetter__("innerText", function (inputText) {
    this.textContent = inputText;
});

Now I can just use innerText. If you need the widest possible browser support then this is not a complete solution, but neither is using innerHTML in the raw.
A:

function showPanel(fieldName) {
    var fieldNameElement = document.getElementById('field_name');
    fieldNameElement.removeChild(fieldNameElement.firstChild);
    var newText = document.createTextNode("New Text");
    fieldNameElement.appendChild(newText);
}

A: Here's an easy jQuery way:

var el = $('#yourid .yourclass');
el.html(el.html().replace(/Old Text/ig, "New Text"));

A: In HTML put this:

<div id="field_name">TEXT GOES HERE</div>

In Javascript put this:

var fieldNameElement = document.getElementById('field_name');
if (fieldNameElement) {
    fieldNameElement.innerHTML = 'some HTML';
}
{ "language": "en", "url": "https://stackoverflow.com/questions/121817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "195" }
Q: Setting a key field on a view I am creating an SQL view for a file that strips out the spaces in a particular field. My question is if there is a way to set a key on that new view so a person can still CHAIN the file. We are on V5R3.
A: Okay, found the answer at http://archive.midrange.com/midrange-l/200809/msg01062.html. It is not possible at V5R3. Supposedly at V6R1 this is possible.
A: Could you accomplish the same thing using a logical file or with an OPNQRYF statement? Both of those allow you to set key fields and may be able to strip out the spaces in a file.
A: Could you make a copy of the field with the spaces stripped out? You can update the stripped-out copy using a file trigger. Then create an index/logical file over the stripped-out copy of the field.
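For reference, a minimal SQL sketch of the trigger-maintained stripped copy suggested above (library, file and column names are illustrative, and the REPLACE scalar function is assumed to be available on your DB2 for i release):

-- Keep a stripped copy of the key field in a new column
ALTER TABLE MYLIB.MYFILE ADD COLUMN NAME_STRIPPED CHAR(30);

UPDATE MYLIB.MYFILE SET NAME_STRIPPED = REPLACE(NAME, ' ', '');

-- Index the stripped copy so it can be keyed/CHAINed efficiently
CREATE INDEX MYLIB.MYFILE_IX1 ON MYLIB.MYFILE (NAME_STRIPPED);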
{ "language": "en", "url": "https://stackoverflow.com/questions/121821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Any issues using an IBM DataPower ESB w/ WCF development? I'm looking to implement an ESB and wanted to get thoughts related to "how" my web services might change (WCF) or how my client apps that consume these services might "need to be revised" (other than a new service ref to the ESB path). The device I'm working with specifically is the "WebSphere DataPower XML Security Gateway XS40".
A: I assume you picked the XS40 (the yellow one) for the security aspects of the gateway, that is, enforcing WS-Security, WS-Policy, etc. While the DataPower box can be configured to support these, your messages will have to include the WS-Security header information. This information typically goes in the Security block of the SOAP header and can hold a signature, RSA key, username tokens, or X.509 certificates. More information about WS-Security can be found in the 1.0 spec. (Keep in mind that different ESB 'products' may provide support for different versions of the specification.) Now, if you're just looking to use the DataPower box for content-based routing (or proxying of web service messages), you'll need to make sure your messages have enough information embedded for the DataPower box to route each message to the correct service. So, assuming that your WCF communication is configured to use SOAP messages (not binary .NET remoting), DataPower shouldn't have any trouble deciphering what's in your messages (XPath) and routing appropriately.
A: Assuming that I am going to use basicHttpBinding for my WCF service - this is because I am going to use MTOM encoding to transfer a document, and MTOM-encoded transfers can only be done with basicHttpBinding - will I have a problem with IBM DataPower in this scenario, since DP enforces WS-*?
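For reference, a minimal sketch of the WCF configuration the follow-up question describes (binding, service and contract names are illustrative):

<!-- basicHttpBinding with MTOM message encoding -->
<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <binding name="MtomBinding" messageEncoding="Mtom" />
    </basicHttpBinding>
  </bindings>
  <services>
    <service name="MyNamespace.DocumentService">
      <endpoint address="" binding="basicHttpBinding"
                bindingConfiguration="MtomBinding"
                contract="MyNamespace.IDocumentService" />
    </service>
  </services>
</system.serviceModel>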
{ "language": "en", "url": "https://stackoverflow.com/questions/121827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Problem with a gridview, paging and "object reference not set" error I'm stuck with the following problem. I'm trying to implement a basic GridView paged result set, which connects to an Oracle database. By itself, the GridView, and the paged results, work fine. The problem comes when I try to put it in a page layout class that we have at work. We have ClassA, which inherits from Page, and is a corporate standard. Then I have ClassB, which inherits from ClassA and which includes application-specific code. The page that the GridView is on inherits from ClassB. This all seems to work fine in other pages, and I don't think it's the source of the problem, but I thought I'd mention it. What happens is that the first time the page with the GridView loads, everything looks normal. The query runs and the first 10 records are displayed, with the numbers for paging below. When I click on "2" or any of the other pages, I get the "yellow screen of death" with the following error: "Object reference not set to an instance of an object". The object being referred to in that error line is "Me", the Page object (ASP.pagename_aspx in the debugger). I don't believe that the exact line it fails on is that important, because I've switched the order of a few statements around and it just fails on the earliest one. I've traced through with the debugger and it looks normal, only that on page 1 it works fine, and on page 2 it fails. I have implemented the PageIndexChanging event (again, it works by itself if I remove the inheritance from ClassB). Also, if I try inheriting directly from ClassA (bypassing ClassB entirely), I still get the problem. Any ideas? Thanks.
A: I ran into a similar situation where the base (ClassA in your example) had variables that were set up to handle all the paging and sorting bits, and the GridView was wired up to events that used those variables. Not setting the proper base class variables in my page caused the exact same sort of error.
A: When I've had similar problems in the past, it has usually been a databinding problem (not calling DataBind() at the right time, so when it tries to look at the next page the DataSource is null).
A: I agree with @DotNetDaddy in that you need to make sure you set the datasource on post-back, as this is almost certainly the reason for the "fun" yellow screen of death.
The below is a very simple example that shows sorting and paging for a GridView in .NET 2.0+. First, the exact markup required for this GridView to work correctly with my VB code:

<asp:GridView ID="gridSuppliers" EnableViewState="false" runat="server"
    OnPageIndexChanging="gridSuppliers_PageIndexChanging" AutoGenerateColumns="false"
    AllowPaging="true" AllowSorting="true" CssClass="datatable" CellPadding="0"
    CellSpacing="0" BorderWidth="0" GridLines="None">...</asp:GridView>

Next is the code-behind file with the required sorting/paging implementation for a collection-based databind:

Partial Public Class _Default
    Inherits System.Web.UI.Page
    Implements ISupplierView

    Private presenter As SupplierPresenter

    Protected Overrides Sub OnInit(ByVal e As System.EventArgs)
        MyBase.OnInit(e)
        presenter = New SupplierPresenter(Me)
    End Sub

    Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
        presenter.OnViewLoad()
    End Sub

    Protected Sub gridSuppliers_PageIndexChanging(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.GridViewPageEventArgs) Handles gridSuppliers.PageIndexChanging
        gridSuppliers.PageIndex = e.NewPageIndex
        presenter.PopulateSupplierList()
    End Sub

    Private Sub gridSuppliers_Sorting(ByVal sender As Object, ByVal e As GridViewSortEventArgs) Handles gridSuppliers.Sorting
        If DirectCast(ViewState("PreviousSortExpression"), String) = e.SortExpression Then
            If DirectCast(ViewState("PreviousSortDirection"), String) = "Ascending" Then
                e.SortDirection = System.Web.UI.WebControls.SortDirection.Descending
                ViewState("PreviousSortDirection") = "Descending"
            Else
                e.SortDirection = System.Web.UI.WebControls.SortDirection.Ascending
                ViewState("PreviousSortDirection") = "Ascending"
            End If
        Else
            e.SortDirection = System.Web.UI.WebControls.SortDirection.Ascending
            ViewState("PreviousSortDirection") = "Ascending"
        End If
        ViewState("PreviousSortExpression") = e.SortExpression

        Dim gv As GridView = DirectCast(sender, GridView)
        If e.SortExpression.Length > 0 Then
            For Each field As DataControlField In gv.Columns
                If field.SortExpression = e.SortExpression Then
                    ViewState("PreviousHeaderIndex") = gv.Columns.IndexOf(field)
                    Exit For
                End If
            Next
        End If
        presenter.PopulateSupplierList()
    End Sub

#Region "ISupplierView Properties"

    Private ReadOnly Property PageIsPostBack() As Boolean Implements ISupplierView.PageIsPostBack
        Get
            Return Page.IsPostBack
        End Get
    End Property

    Private ReadOnly Property SortExpression() As String Implements ISupplierView.SortExpression
        Get
            If ViewState("PreviousSortExpression") Is Nothing Then
                ViewState("PreviousSortExpression") = "CompanyName"
            End If
            Return DirectCast(ViewState("PreviousSortExpression"), String)
        End Get
    End Property

    Public ReadOnly Property SortDirection() As String Implements Library.ISupplierView.SortDirection
        Get
            If ViewState("PreviousSortDirection") Is Nothing Then
                ViewState("PreviousSortDirection") = "Ascending"
            End If
            Return DirectCast(ViewState("PreviousSortDirection"), String)
        End Get
    End Property

    Public Property Suppliers() As System.Collections.Generic.List(Of Library.Supplier) Implements Library.ISupplierView.Suppliers
        Get
            Return DirectCast(gridSuppliers.DataSource(), List(Of Supplier))
        End Get
        Set(ByVal value As System.Collections.Generic.List(Of Library.Supplier))
            gridSuppliers.DataSource = value
            gridSuppliers.DataBind()
        End Set
    End Property

#End Region

End Class

And finally, the presenter class used in the code-behind:

Public Class SupplierPresenter
    Private mView As ISupplierView
    Private mSupplierService As ISupplierService

    Public Sub New(ByVal View As ISupplierView)
        Me.New(View, New SupplierService())
    End Sub

    Public Sub New(ByVal View As ISupplierView, ByVal SupplierService As ISupplierService)
        mView = View
        mSupplierService = SupplierService
    End Sub

    Public Sub OnViewLoad()
        If mView.PageIsPostBack = False Then
            PopulateSupplierList()
        End If
    End Sub

    Public Sub PopulateSupplierList()
        Try
            Dim SupplierList As List(Of Supplier) = mSupplierService.GetSuppliers()
            SupplierList.Sort(New GenericComparer(Of Supplier)(mView.SortExpression, mView.SortDirection))
            mView.Suppliers = SupplierList
        Catch ex As Exception
            Throw ex
        End Try
    End Sub
End Class

And the class required to sort a generic collection, referenced in the presenter:

Imports System.Reflection
Imports System.Web.UI.WebControls

Public Class GenericComparer(Of T)
    Implements IComparer(Of T)

    Private mDirection As String
    Private mExpression As String

    Public Sub New(ByVal Expression As String, ByVal Direction As String)
        mExpression = Expression
        mDirection = Direction
    End Sub

    Public Function Compare(ByVal x As T, ByVal y As T) As Integer Implements System.Collections.Generic.IComparer(Of T).Compare
        Dim propertyInfo As PropertyInfo = GetType(T).GetProperty(mExpression)
        Dim obj1 As IComparable = DirectCast(propertyInfo.GetValue(x, Nothing), IComparable)
        Dim obj2 As IComparable = DirectCast(propertyInfo.GetValue(y, Nothing), IComparable)

        If mDirection = "Ascending" Then
            Return obj1.CompareTo(obj2)
        Else
            Return obj2.CompareTo(obj1)
        End If
    End Function
End Class

A: I lost my original unregistered login which I used to post this question. Anyway, Harper Shelby's answer turned out to be correct. There was an unset variable in that base class (a custom object that is our corporate standard) that caused the problem (and no helpful error message). If an admin, or someone with the right powers, sees this, you could mark Harper's answer as the accepted one and close this. Thanks to everyone for their help.
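For anyone hitting the same wall later, a minimal sketch of the failure mode described in that resolution; the class and member names here are invented, since the actual corporate base class was never shown:

Public Class ClassA
    Inherits System.Web.UI.Page

    ' Suppose the corporate base class expects every derived page to assign this.
    ' Nothing fails on the first (non-postback) load, but the first handler that
    ' dereferences it on postback - e.g. PageIndexChanging - dies with
    ' "Object reference not set to an instance of an object".
    Protected SiteContext As Object
End Class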
{ "language": "en", "url": "https://stackoverflow.com/questions/121828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: SQL Server 2005: Importing data from SQL Server 2000 In SQL Server 2000, you have the "All Tasks... - Export Data" option. Where is this option in the SQL Server 2005 Management Studio? Or, is there a SQL Server 2005 way of doing this? EDIT: I am using the Express edition. EDIT: Joel's response answers my question but Mike's answer gives a great alternative to those of us using the Express edition (vote him up!!).
A: You could use the excellent SQL Server Database Publishing Wizard. http://www.microsoft.com/downloads/details.aspx?familyid=56E5B1C5-BF17-42E0-A410-371A838E570A&displaylang=en It allows you to generate a script which contains data, schema, or both. The script can be targeted towards SQL 2000 or SQL 2005. Some use it for web hosting environments. I use it to move data when I have no other option.
A: If you're using the Express edition of Management Studio, the Import and Export features aren't available.
A: DTS has been replaced by SSIS on the business intelligence end of things.
A: Probably easiest to just do a backup in SQL 2000 and then restore the backup into SQL 2005. Those options are available in the Express Edition.
A: There is always the BCP option.
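To make the backup/restore suggestion concrete, a rough sketch; the database name, logical file names and paths below are placeholders, so run RESTORE FILELISTONLY against the .bak first to get the real logical names:

-- On the SQL Server 2000 instance
BACKUP DATABASE MyDb TO DISK = 'C:\backups\MyDb.bak'

-- On the SQL Server 2005 (Express) instance
RESTORE DATABASE MyDb FROM DISK = 'C:\backups\MyDb.bak'
WITH MOVE 'MyDb_Data' TO 'C:\Data\MyDb.mdf',
     MOVE 'MyDb_Log'  TO 'C:\Data\MyDb.ldf'

Note that this is a one-way trip: a database restored on 2005 is upgraded in place and cannot be backed up and restored back onto 2000.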
{ "language": "en", "url": "https://stackoverflow.com/questions/121837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What software development process do you use? I have always used the agile Feature Driven Development process for developing software. What does everyone else use, and why do you prefer it? I prefer FDD because that's what I started with fresh out of college. In college, everything was very free-form and my "customer" was typically my professor, who might not have had much industry experience outside of doing research for the university. Now, my customer(s) are not so forgiving and I do a lot of work in the medical field. Being agile and having a high level of quality is a must!
A: Agile development methodologies with a combination of XP engineering practices:
* TDD coupled with refactoring
* YAGNI (You ain't gonna need it)
* KISS (Keep it simple, stupid)
* Refactor to design patterns
* Pair programming with switching pairs
* Shared code base
* Deploy early and often
A: Whatever the current project requires. I do a lot of consulting on my own time for various (mostly PHP-based) web dev. I haven't devoted the time to get to TDD for those projects yet, and many of them are using existing frameworks that don't really make TDD all that easy. At work we're not tooled for TDD yet, so we use a hybrid of agile and old-style spec-based processes. Trying to get a movement towards TDD, but we're a small shop with well-entrenched existing projects (lots of maintenance work) and integration work with ERP systems. I think I could get TDD going on my own integration work (and am making baby steps in that direction) but the other stuff is largely a lost cause.
A: I go with Agile Scrum. It gives me the feeling of being connected to the team, and good control over the milestones and individual tasks. Morning scrums are very useful. We use the Agile Scrum project template in Team System: http://www.scrumforteamsystem.com/en/default.aspx
A: There seems to be some confusion here: TDD is more about how you implement code and not about managing the overall development of a software project. TDD will not help you decide which features to schedule, when to deliver, or how to set priorities. In contrast, things like Lean/Agile or even Waterfall are about these higher-level issues. (My vote is for Scrum, which falls firmly into the Lean/Agile area.) XP (Extreme Programming) is interesting because it blends ideas from both of these areas.
A: I guess I am old school. I develop to client spec. A vigorous design phase followed by a heads-down development, test, bug-fix cycle. Then, implement. Once the spec is defined and agreed, no further changes can occur. All changes have to wait until development and bug fixes have been completed. This prevents scope creep and allows the software to get written, tested, debugged and implemented. At that point changes become enhancements, new features, etc. I have come to find that for almost all my clients over the past 10 years, about 90% of the things that they would have "thrown in" during development, creating scope creep, are thought of as not being necessary. I can't tell you how many clients have thanked me for keeping them at bay. So I don't know what process you would call this, but it works for me and for many other developers I know.
A: I'm a fan of Lean Software Development, which is promoted by the Poppendiecks, largely based on principles from lean manufacturing, with Toyota as the poster child. It's got a lot in common with the other Agile methodologies; the focus is on eliminating waste, making use of queueing theory, and a "just in time" mindset (e.g. specifying a story at the last responsible moment). Lean is often associated with Kanban, which is a method for tracking tasks through a pipeline.
A: Design by Contract with a complement of unit testing.
A: We use waterfall where I work, but after some pushing on my part we're moving more towards an agile/TDD/CI hybrid model for some of our new projects. God willing, we'll be able to ditch the waterfall method. Every maintenance release we do, our main customer just ignores requirements deadlines and hands us requirement changes right at the last second, and then just stares blankly at us while we explain why they can't keep doing that.
A: At work we use the ICONIX process. It is a subset of agile techniques and it is behavioural-requirements driven. The ICONIX process aims to be as low-ceremony as possible, having as little documentation as possible, in order to allow you to easily keep it up to date (this is a big difference from other agile processes; for example, XP practitioners often do not seem to maintain documentation after the first draft, claiming that their code is the documentation).
Here's a practical overview of the process:
1. Quick draft of functional requirements
2. Quick definition of a domain model
3. Model use cases on the basis of the previous steps
4. Optional - draw a throw-away robustness diagram for each use case, just to understand the relations between your classes
5. Draw a sequence diagram for each use case
6. Model your test cases on the use cases
7. Implement
8. Test
At each step you review your work as a whole, updating your domain model (it's impossible to get it right first time) and adding comments on your use cases. By the end of step 5 you end up with ready-to-implement classes and logic, with just a little documentation to maintain if you refactor or change anything:
* Use case diagram
* Sequence diagram for each use case
* Test case diagram (or test plan)
If you need to add a feature, you add a new use case and follow the whole process.
Resources: Iconix process website, Iconix Software Engineering website
Books references: AGILE Development with ICONIX Process
A: Code and Fix!! Just kidding, TDD is really a wonderful way to go.
A: Test Driven Design (TDD). The confidence you get from knowing that a code change has not broken something subtle is great.
A: I second the vote for Agile. I am exploring Lean these days, but like with any development process, it's not something you can just drop in on your current group. However, there are features of Lean and Agile that can be eased into your current processes and gain immediate value. My former project used the Waterfall method and we were proud of it. They've since weaned themselves off of Waterfall and on to Prototype, which is a good step.
A: I work for a company that does both web and systems development. Our development model is Rapid Development. We use the more modern definition of it, so it is similar to Agile development, without the XP concepts.
A: We use Scrum too... I think standups can be good in some respects, but sometimes the quick 15 minutes becomes at least 30.
A: My personal leanings over the past few years have been toward Lean Development, with a heavy influence of everything that I've learned from XP. I think it's important to note here that Scrum is insufficient as a development process, as it does not inform the work of software development itself, but rather the work of trying to manage the flow of software development tasks. My thinking has been informed by ICONIX as well.
I think it's a great way to approach a use-case and diagram driven environment without getting bogged down in counter-productive bureaucracy.
{ "language": "en", "url": "https://stackoverflow.com/questions/121839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Where can I learn about logarithms? I hear logarithms mentioned quite a lot in the programming context. They seem to be the solution to many problems and yet I can't seem to find a real-world way of making use of them. I've read the Wikipedia entry and that, quite frankly, leaves me none the wiser. So, where can I learn about the real-world programming problems that logarithms solve? Has anyone got any examples of problems they faced that were solved by implementing a logarithm?
A: Logs are a type of meta-arithmetic. It's a way of thinking of every number as a (possibly fixed) base raised to an exponent. Operations are performed on the exponents alone. This means that you can do multiplication and division by doing addition and subtraction of the logs. In other words, you put your data into log space, perform a suite of arithmetic, and pull it back into non-log space. If the floating-point precision loss is acceptable and the overhead of transforming into and out of log space is cheap, then you may have an overall win in time. One slick trick you can do with logs is calculate the number of characters a number will take when printed, by taking its log base 2 and dividing by the log base 2 of 10 (equivalently, multiplying by the constant log10(2)), which is constant time compared with a loop of divisions by ten.
A: I've seen logarithms used for displaying tag clouds. This is a page that explains it: Tag Cloud Font Distribution Algorithm
A: I assume you have heard logarithms mentioned in the context of time consumption. A concrete example would be search algorithms. Given a set of ordered data (think a sorted array of ints), you want to find the index key to a value in that data. We can benefit from the fact that the array is sorted (1, 2, 6, 192, 404, 9595, 50000 for example). Let's say we want to find the index to the value 2. We can minimize our search space by culling (ignoring) half the array each step. We start this search by testing the value at the middle of the array. There are 7 values in the array, so we take the index 7/2 = 3.5 = 3 as an int. array[3] is 192. The value we are looking for is 2, therefore we want to continue the search in the lower half of the search space. We completely ignore indexes 4, 5, 6 since they are all higher than 192, and in turn also higher than 2. Now we have a search space that looks like (1, 2, 6). We then index into the middle again (repeat the process), and we find the 2 instantly. The search is complete; the index to 2 is 1. This is a very small example, but it shows how such an algorithm works. For 16 values, you need to search at maximum 4 times. For 32 values, you search at most 5 times, for 64 values 6 times, and so on... 1,048,576 values are searched in 20 steps. This is far quicker than having to compare each item in the array separately. Of course, this only works for sorted collections of data.
A: I recommend e: The Story of a Number for a good foundation of the importance of logarithms, their discovery and relevance to natural phenomena.
A: Another way of looking at it is looking at the number of base multipliers in a number. I am sure you can see how this all relates in the following examples.
Decimal (base 10):
* log10(1) = 0 , (10^0) = 1
* log10(10) = 1 , (10^1) = 10
* log10(100) = 2 , (10^2) = 100
* log10(1000) = 3 , (10^3) = 1000
* log10(10000) = 4 , (10^4) = 10000
* log10(100000) = 5 , (10^5) = 100000
Binary (base 2):
* log2(1) = 0 , (2^0) = 1
* log2(2) = 1 , (2^1) = 2
* log2(4) = 2 , (2^2) = 4
* log2(8) = 3 , (2^3) = 8
* log2(16) = 4 , (2^4) = 16
* log2(32) = 5 , (2^5) = 32
* log2(64) = 6 , (2^6) = 64
* log2(128) = 7 , (2^7) = 128
Hexadecimal (base 16):
* log16(1) = 0 , (16^0) = 1
* log16(16) = 1 , (16^1) = 16
* log16(256) = 2 , (16^2) = 256
* log16(4096) = 3 , (16^3) = 4096
* log16(65536) = 4 , (16^4) = 65536
If you want to think in variables:
* logN(X) = Y
* (N^Y) = X
A: Many (many!) relationships in the real world are logarithmic. For instance, it would not surprise me if the distribution of reputation scores on Stack Overflow is log-normal. The vast majority of users will have reputation scores of 1 and a handful of people will have unattainably high reputation. If you apply a logarithmic transformation to that distribution, it would likely be nearly a linear relation. A quick scan of https://stackoverflow.com/users?page=667 shows this to be true. You might be more familiar with The Long Tail concept, which is an application of the logarithmic distribution.
A: The only problem I can recall is having to calculate the product of a column in SQL. SQL Server does not have a PRODUCT() aggregate function, so this was accomplished using a sum of the logarithms (using the LOG10() function) of each value. The main drawback was that all numbers in the column had to be positive and non-zero (you cannot calculate a logarithm on a negative number or zero).
A: The most obvious usage in every programming example is precision. Put simply, consider storing unsigned integers. How many bits do you need to store X? Well, the maximum value you can store in n bits is 2^n - 1, so you need floor(log2(X)) + 1 bits to store X. Now you can pick short, int, word, long etc. with ease.
A: One example, out of many: calculating compound interest at a very small rate with a large number of periods. You can do it the most straightforward way, even using fast exponentiation, but accuracy may suffer, due to the way floats are stored, and calculating s * r to the power n still takes O(ln(n)) operations. With logarithms, it's somewhat more accurate:

A = ln(s * r^n) = ln(s) + n * ln(r)

Two lookups in your logarithm database give you ln(s) and ln(r), with ln(r) being very small, and floats work at their best accuracy near 0. result = exp(A), a reverse lookup here. It's also the only really efficient way if you work with non-integer exponents - to extract cubic roots, for example.
A: Check out MIT's OpenCourseWare: Introduction to Algorithms. Free education. Awesome.
A: One of the more "cool" applications of logarithms I've found is Spiral Storage. It's a hash table that allows you to split one bucket at a time as the table grows, relocating less than half of the records in that bucket to the same, new bucket. Unlike linear hashing, where the performance varies cyclically and all of the buckets tend to be split at around the same time, spiral hashing allows nice, smooth growth of the table. It was published about 30 years ago by G. N. N. Martin, about whom I haven't been able to learn much besides the fact that he also invented Range Encoding. Seems like a smart guy! I haven't been able to get a copy of his original paper, but Per-Åke Larson's paper "Dynamic hash tables" has a very clear description.
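To make the PRODUCT()-via-logarithms answer above concrete, this is the usual shape of the trick (the table and column names are made up, and the positive/non-zero caveat from that answer still applies):

-- product of a column = 10 ^ (sum of the base-10 logs)
SELECT POWER(10.0, SUM(LOG10(col))) AS product
FROM MyTable;

-- equivalently, with natural logs:
-- SELECT EXP(SUM(LOG(col))) FROM MyTable;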
A: Logarithms in programming are also frequently used in describing the efficiency of an algorithm using Big O notation. For instance, a binary search algorithm would have a worst-case scenario of O(log(n)) (on a sorted set), whereas a linear search's worst case is O(n).
A: Say you've got $1000, and it's in a savings account with 2.4% interest. How many years do you have to wait until you have $2000 to buy a new laptop?

1000 × 1.024^x = 2000
1.024^x = 2
x = log base 1.024 of 2 ≈ 29.23 years

A: In my own research I came upon a few useful resources:
Khan Academy logarithms section. This is a terrific set of lessons on logarithms. This comment from a 6th grader sums it up nicely: "Thank you so much. This week, my math teacher told me to challenge myself, so I tried logarithms. At first I was like, 'I can't do this, it's too hard'. Then I watched the video, and now they're even fun! I'm in 6th grade, my math teacher is impressed. I can't thank you enough."
Ruby Quiz #105: Tournament Matchups. This article contains a good example of using a base-2 log to determine the number of rounds required to complete a knock-out tournament given x teams.
An Intuitive Guide To Exponential Functions & E. An excellent, intuitive (as you'd expect, given the title) guide to e, the base of the natural logarithm. Lots of illustrations and examples make this a gem of an article.
Demystifying the Natural Logarithm (ln). This is the follow-up to the article about e, and discusses the natural logarithm (ln) which, using the intuitive explanation given in the article, "gives you the time needed to reach a certain level of growth". There's actually loads of good content on the Better Explained site. Truly, a splendid resource.
Another tool that I had actually come across before but had since completely forgotten about is Instacalc. It seems to be by the same person - Kalid Azad - who authors the Better Explained site. It's a really useful tool when hacking about with maths.
A: Logarithms are used quite often in charts and graphs, when one or both axes cover a large range of values. Some natural phenomena are best expressed on a logarithmic scale; some examples are sound pressure levels (SPL in dB) and earthquake magnitude (Richter scale).
A: As an example of what Chris is talking about, an algorithm that changes complexity based on the number of bits in a value is (probably) going to have an efficiency described by O(log(n)). Another everyday example of exponents (and hence logarithms) is in the format of IEEE floating-point numbers.
A: A logarithmic function is simply the inverse of an exponential function, in the same sense that subtraction is the inverse of addition. Just as this equation:

a = b + c

states the same fact as this equation:

a - c = b

this equation:

b ** p = x    (where ** is raising to a power)

states the same fact as this equation:

log [base b] (x) = p

Although b can be any number (e.g. log [base 10] (10,000) = 4), the "natural" base for mathematics is e (2.718281828...), about which see here. "Common" logarithms, used more in engineering, use a base of 10. A quick-and-dirty (emphasis on dirty) interpretation of the common (base 10) logarithm of some number x is that it is one less than the number of decimal digits required to express numbers the size of x.
A: Demystifying the Natural Logarithm (ln) at BetterExplained is the best I have found. It builds the concepts up from the ground and helps you understand what's underneath. After that, everything seems a cakewalk.
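The savings-account answer above is exactly the kind of arithmetic you would hand to a logarithm in code; a quick, purely illustrative sketch:

import math

principal, target, rate = 1000.0, 2000.0, 0.024

# solve principal * (1 + rate)**years = target for years
years = math.log(target / principal) / math.log(1 + rate)
print(round(years, 2))  # ~29.23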
A: Here are some sites that I have used:
* http://www.helpalgebra.com/articles/propertiesoflogarithms.htm
* http://www.math.unc.edu/Faculty/mccombs/web/alg/classnotes/logs/lognotation.html
* http://www.math.unc.edu/Faculty/mccombs/web/alg/classnotes/logs/logprops.html
* http://abacus.bates.edu/acad/acad_support/msw/exps_and_logs.pdf
* http://people.hofstra.edu/Stefan_Waner/Realworld/calctopic1/logs.html
I have used logarithms for calculating the yearly appreciation on a house, to determine whether the seller was being fair.
House Appreciation Equations
Here is the basic equation:
* Previous price = p
* New price = n
* Appreciation rate = r
* Years of appreciation = y

p * (1 + r)^y = n

So, if the price 6 years ago was $191,000 (found by checking your county auditor's site) and the asking price is $284,000, what is the appreciation rate (which would not take into account any one-time improvement costs)?

191,000 * (1 + r)^6 = 284,000
(1 + r)^6 = 284,000 / 191,000 = 1.486

Using a property of exponents and logarithms...

6 * log(1 + r) = log 1.486
log(1 + r) = (log 1.486) / 6 = 0.02866

Using another property of exponents and logarithms...

10^0.02866 = 1 + r
1.068 = 1 + r
r = 1.068 - 1 = 0.068 = 6.8% (kind of high!)

To determine what a reasonable price would be, use 4% and allow for whatever improvements they made (which should be listed on the web if they were major, but that wouldn't include bathroom/kitchen remodeling, etc.):

191,000 * (1 + 0.04)^6 = n
n = 241,675 + reasonable cost of improvements

which of course will depreciate over time and should not represent 100% of the cost of the improvement.
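The appreciation arithmetic above collapses to a couple of lines once you notice it's just solving the same equation for r; a small sketch using the thread's numbers:

# r = (new_price / old_price) ** (1 / years) - 1
old_price, new_price, years = 191000, 284000, 6
r = (new_price / old_price) ** (1.0 / years) - 1
print("%.1f%%" % (r * 100))  # ~6.8%

The ** operator is doing the log/exp dance internally; written out with logs it is r = 10 ** (math.log10(new_price / old_price) / years) - 1.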
{ "language": "en", "url": "https://stackoverflow.com/questions/121847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Pros and cons of using one file for entire webpage? I'm not sure how I should express this, but I'll give it a try. I recently started coding my portfolio in object-oriented PHP and I'm wondering if it's according to best practices to use a single page where the content changes depending on SQL data and the $_GET variable? If so/not, why? Edit: Take a look at my next post, more in-depth details.
A:
* Not scalable
* Hard to manage code
* Parser has to parse everything
* Perfect example of Code Smell
* One error crashes your whole site
A: If you mean a single landing page (e.g. index.php) which then uses session variables etc. to figure out what code needs to be included, then yes, this is an often-used technique. Edit: and by the above I mean what Daniel Papasian explains in detail in his excellent post. If you mean placing all of your HTML, SQL and PHP in a single file, then no, for the reasons pointed out by GateKiller.
A: The actual page file should contain only what is different about that page from a standard "page" on your site (e.g. the page title; the index page may have code to get the latest news, etc.). Everything which is (or may be) used in more than one place should be moved to external PHP files and included. Examples are:
* Database information (password, username, etc.)
* Header/footer
* Login code
This makes the code much easier to manage. E.g. if you change the database password, it's only one file that needs updating, or if you decide to add a banner to the header, it's again only one file, not all the pages, that needs changing. It also makes adding new features much less work; e.g. a new page may simply be:

<?php
require ('config.php');
require ('start.php');
require ('header.php');
// custom page stuff
require ('footer.php');
?>

Or adding auto-login via cookies is a simple change to the Login() function (creating a cookie) and start.php (checking for the cookie + calling Login()). Also, you can easily transfer these files to other projects in the future.
A: Are you asking about using the front controller pattern, where a single file serves all of your requests? Often this is done with an index.php and mod_rewrite getting all of the requests, with the rest of the URL being given to it as a parameter in the query string. http://www.onlamp.com/pub/a/php/2004/07/08/front_controller.html I would tend to recommend this pattern be used for applications, because it gives you a single place to handle things like authentication, and often you'll need to integrate things at a tighter level where having new features be classes that are registered with the controller via some mechanism makes a lot of sense. The concerns about the URLs others have mentioned aren't really accurate, because there is no real relationship between URL structure and file structure, unless you're using ancient techniques of building websites. A good chunk of Apache functionality is based on the concept that file/directory structure and URL structure are distinct concepts (the alias module, the rewrite module, content negotiation, and so on).
A: Everything GateKiller mentioned, plus you also can't utilize late binding.
A: Hard to manage code: if you are using version control, it will be a LOT harder to roll back any changes that might have happened to a single "page" of your site, since you would have to merge back in anything that might have come after.
A: It's not as search-engine friendly unless you use mod_rewrite.
A: I tend to disagree with most - if your site is managed by a custom CMS or something similar, there's no reason not to use one page. I did a similar thing with a CMS I wrote a while back. All clients had a single default.asp page that queried the database for theme, content, attachments, and member permissions. To make a change, I just made the change once and copied it to my other clients if the change required it. This of course wouldn't work in most scenarios. If you have a website that does a lot of DIFFERENT things (my CMS just repeated certain functions while loading the page), then multiple pages really are the only way to go.
A: For those of you who are interested, there is a framework that uses this exact model, originally for ColdFusion. There is still a community for this methodology; version 5.5 was released about a year ago (Dec 2007). FuseBox Framework site. Wikipedia entry.
A: This screendump and the following explanation might give a better idea of what my code looks like at the moment. I use the same model as the one that 'Internet Friend', Daniel Papasian and a few others mention: the front controller. My index page looks like this:

require_once 'config.php';
require_once 'class_lib/template.php';

$template = new template($config);
$template->dataQuery();
$template->pageCheck();
$template->titleAssembly();
$template->cssAssembly();
$template->metaAssembly();
$template->menuAssembly();
$template->content();
echo $template->publish();

The class constructor opens the main template file and loads it into a variable that each method can manipulate by replacing tags with generated code. Ugly URLs aren't really an issue since I'll be using mod_rewrite to clean them up. However, Papasian has a point: this method would be more suitable for web-based applications and the like. I apologize for not being very specific with my question in the first place. Furthermore, a big 'thank you' to everyone who dropped a few lines to help out.
A: I often use a PHP file without the .php extension (i.e. site) and add

<Files site>
ForceType application/x-httpd-php
</Files>

to the .htaccess, which makes Apache interpret the file as a PHP file. You can pass vars to the file within the URL: http://www.yourdomain.com/site/var1/var2/var3 Use

$var_array = explode("/", $_SERVER['REQUEST_URI']);
$var1 = $var_array[1];
$var2 = $var_array[2];
$var3 = $var_array[3];

to get the vars. This way you can use a single file with search-engine-friendly URLs without mod_rewrite.
A: re: URL and file structure. I converted a site where all the content was in a database and accessed with the index?p=434 model. There was no benefit to using the database, and the site was confusing to the people who had to add content, since they had to edit content with the browser and pages were just numbers. I pulled all the content out and put it in separate files. Each had a sensible name and was organized into folders. Each file looked something like this:

require('sitelib');
do_header('about', 'About Us');
// content here
do_footer();

The client loved it. They were able to use any HTML editor to go in, find the right file and make the change. And they were able to make new pages. All to say: sometimes it is useful to have the URL and file structures match.
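Since a couple of answers lean on mod_rewrite without showing the wiring, here is a minimal sketch of the front-controller arrangement; the rewrite rule, page whitelist and file names are all invented for illustration, not taken from the posters' actual sites:

# .htaccess - route everything that isn't a real file to index.php
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ index.php?page=$1 [QSA,L]

<?php
// index.php - whitelist the pages so $_GET can't pull in arbitrary files
$pages = array('home', 'about', 'portfolio');
$page = isset($_GET['page']) ? $_GET['page'] : 'home';
if (!in_array($page, $pages)) {
    $page = 'home'; // or send a 404
}
require 'header.php';
require 'pages/' . $page . '.php';
require 'footer.php';
?>

The whitelist is the important part: a front controller that passes $_GET straight into require() is an invitation to file-inclusion attacks.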
{ "language": "en", "url": "https://stackoverflow.com/questions/121849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Most efficient T-SQL way to pad a varchar on the left to a certain length? As compared to, say:

REPLICATE(@padchar, @len - LEN(@str)) + @str

A: I'm not sure that the method that you give is really inefficient, but an alternate way, as long as it doesn't have to be flexible in the length or padding character, would be (assuming that you want to pad it with "0" to 10 characters):

DECLARE @pad_characters VARCHAR(10)
SET @pad_characters = '0000000000'
SELECT RIGHT(@pad_characters + @str, 10)

A: Perhaps overkill, but I have these UDFs to pad left and right:

ALTER FUNCTION [dbo].[fsPadLeft](@var varchar(200), @padChar char(1) = '0', @len int)
RETURNS varchar(300)
AS
BEGIN
    RETURN replicate(@padChar, @len - Len(@var)) + @var
END

and to pad right:

ALTER FUNCTION [dbo].[fsPadRight](@var varchar(200), @padchar char(1) = '0', @len int)
RETURNS varchar(201)
AS
BEGIN
    --select @padChar = ' ', @len = 200, @var = 'hello'
    RETURN @var + replicate(@padChar, @len - Len(@var))
END

A: I know this was originally asked back in 2008, but there are some new functions that were introduced with SQL Server 2012. The FORMAT function simplifies padding left with zeros nicely. It will also perform the conversion for you:

declare @n as int = 2
select FORMAT(@n, 'd10') as padWithZeros

Update: I wanted to test the actual efficiency of the FORMAT function myself. I was quite surprised to find the efficiency was not very good compared to the original answer from AlexCuse. Although I find the FORMAT function cleaner, it is not very efficient in terms of execution time. The Tally table I used has 64,000 records. Kudos to Martin Smith for pointing out execution time efficiency.

SET STATISTICS TIME ON
select FORMAT(N, 'd10') as padWithZeros from Tally
SET STATISTICS TIME OFF

SQL Server Execution Times: CPU time = 2157 ms, elapsed time = 2696 ms.

SET STATISTICS TIME ON
select right('0000000000' + rtrim(cast(N as varchar(5))), 10) from Tally
SET STATISTICS TIME OFF

SQL Server Execution Times: CPU time = 31 ms, elapsed time = 235 ms.

A: Several people gave versions of this:

right('XXXXXXXXXXXX' + @str, @n)

Be careful with that, because it will truncate your actual data if it is longer than n.

A: This is simply an inefficient use of SQL, no matter how you do it. Perhaps something like

right('XXXXXXXXXXXX' + rtrim(@str), @n)

where X is your padding character and @n is the number of characters in the resulting string (assuming you need the padding because you are dealing with a fixed length). But as I said, you should really avoid doing this in your database.

A: I hope this helps someone.

STUFF(character_expression, start, length, character_expression)

select stuff(@str, 1, 0, replicate('0', @n - len(@str)))

A: Probably overkill; I often use this UDF:

CREATE FUNCTION [dbo].[f_pad_before](@string VARCHAR(255), @desired_length INTEGER, @pad_character CHAR(1))
RETURNS VARCHAR(255)
AS
BEGIN
    -- Prefix the required number of spaces to bulk up the string
    -- and then replace the spaces with the desired character
    RETURN ltrim(rtrim(
        CASE WHEN LEN(@string) < @desired_length
             THEN REPLACE(SPACE(@desired_length - LEN(@string)), ' ', @pad_character) + @string
             ELSE @string
        END))
END

So that you can do things like:

select dbo.f_pad_before('aaa', 10, '_')

A: This is a simple way to pad left:

REPLACE(STR(FACT_HEAD.FACT_NO, x, 0), ' ', y)

where x is the padded length and y is the pad character. Sample:

REPLACE(STR(FACT_HEAD.FACT_NO, 3, 0), ' ', 0)

A: I liked vnRocks' solution; here it is in the form of a UDF:

create function PadLeft(
      @String varchar(8000)
     ,@NumChars int
     ,@PadChar char(1) = ' ')
returns varchar(8000)
as
begin
    return stuff(@String, 1, 0, replicate(@PadChar, @NumChars - len(@String)))
end

A:

@padstr = REPLICATE(@padchar, @len) -- this can be cached, done only once
SELECT RIGHT(@padstr + @str, @len)

A: select right(replicate(@padchar, @len) + @str, @len)
A: In SQL Server 2005 and later you could create a CLR function to do this.
A: How about this:

replace(space(3 - len(MyField)), ' ', '0') + MyField

where 3 is the number of characters to pad to with zeros.
A: I use this one. It allows you to determine the length you want the result to be, as well as a default padding character if one is not provided. Of course you can customize the length of the input and output for whatever maximums you are running into.

/*===============================================================
 Author      : Joey Morgan
 Create date : November 1, 2012
 Description : Pads the string @MyStr with the character in
             : @PadChar so all results have the same length
================================================================*/
CREATE FUNCTION [dbo].[svfn_AMS_PAD_STRING]
    (
      @MyStr VARCHAR(25),
      @LENGTH INT,
      @PadChar CHAR(1) = NULL
    )
RETURNS VARCHAR(25)
AS
BEGIN
    SET @PadChar = ISNULL(@PadChar, '0');
    DECLARE @Result VARCHAR(25);
    SELECT @Result = RIGHT(SUBSTRING(REPLICATE('0', @LENGTH), 1,
                           (@LENGTH + 1) - LEN(RTRIM(@MyStr))) + RTRIM(@MyStr), @LENGTH)
    RETURN @Result
END

Your mileage may vary. :-)
Joey Morgan
Programmer/Analyst Principal I
WellPoint Medicaid Business Unit

A: Here's my solution, which avoids truncated strings and uses plain ol' SQL. Thanks to @AlexCuse, @Kevin and @Sklivvz, whose solutions are the foundation of this code.

--[@charToPadStringWith] is the character you want to pad the string with.
declare @charToPadStringWith char(1) = 'X';

-- Generate a table of values to test with.
declare @stringValues table (RowId int IDENTITY(1,1) NOT NULL PRIMARY KEY, StringValue varchar(max) NULL);
insert into @stringValues (StringValue) values (null), (''), ('_'), ('A'), ('ABCDE'), ('1234567890');

-- Generate a table to store testing results in.
declare @testingResults table (RowId int IDENTITY(1,1) NOT NULL PRIMARY KEY, StringValue varchar(max) NULL, PaddedStringValue varchar(max) NULL);

-- Get the length of the longest string, then pad all strings based on that length.
declare @maxLengthOfPaddedString int = (select MAX(LEN(StringValue)) from @stringValues);
declare @longestStringValue varchar(max) = (select top(1) StringValue from @stringValues where LEN(StringValue) = @maxLengthOfPaddedString);
select [@longestStringValue] = @longestStringValue, [@maxLengthOfPaddedString] = @maxLengthOfPaddedString;

-- Loop through each of the test string values, apply padding to it, and store the results in [@testingResults].
while (1=1)
begin
    declare @stringValueRowId int, @stringValue varchar(max);

    -- Get the next row in the [@stringValues] table.
    select top(1) @stringValueRowId = RowId, @stringValue = StringValue
    from @stringValues
    where RowId > isnull(@stringValueRowId, 0)
    order by RowId;

    if (@@ROWCOUNT = 0) break;

    -- Here is where the padding magic happens.
    declare @paddedStringValue varchar(max) = RIGHT(REPLICATE(@charToPadStringWith, @maxLengthOfPaddedString) + @stringValue, @maxLengthOfPaddedString);

    -- Add to the list of results.
    insert into @testingResults (StringValue, PaddedStringValue) values (@stringValue, @paddedStringValue);
end

-- Get all of the testing results.
select * from @testingResults;

A: I know this isn't adding much to the conversation at this point, but I'm running a file generation procedure and it's going incredibly slow. I've been using REPLICATE, and saw this trim method and figured I'd give it a shot. You can see in my code where the switch between the two is, in addition to the new @padding variable (and the limitation that now exists). I ran my procedure with the function in both states with the same results in execution time. So at least in SQL Server 2016, I'm not seeing any difference in efficiency that others found. Anyways, here's my UDF that I wrote years ago, plus the changes from today, which is much the same as others' except that it has a LEFT/RIGHT param option and some error checking.

CREATE FUNCTION PadStringTrim (
    @inputStr varchar(500),
    @finalLength int,
    @padChar varchar(1),
    @padSide varchar(1)
)
RETURNS VARCHAR(500)
AS
BEGIN
    -- the point of this function is to avoid using REPLICATE, which is extremely slow in SQL Server
    -- to get away from this, though, we now have a limitation on how much padding we can add,
    -- so I've settled on a hundred-character pad
    DECLARE @padding VARCHAR(100) = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
    SET @padding = REPLACE(@padding, 'X', @padChar)
    SET @inputStr = RTRIM(LTRIM(@inputStr))

    IF LEN(@inputStr) > @finalLength
        RETURN '!ERROR!' -- can search for ! in the returned text
    ELSE IF (@finalLength > LEN(@inputStr))
        IF @padSide = 'L'
            SET @inputStr = RIGHT(@padding + @inputStr, @finalLength)
            --SET @inputStr = REPLICATE(@padChar, @finalLength - LEN(@inputStr)) + @inputStr
        ELSE IF @padSide = 'R'
            SET @inputStr = LEFT(@inputStr + @padding, @finalLength)
            --SET @inputStr = @inputStr + REPLICATE(@padChar, @finalLength - LEN(@inputStr))

    -- if LEN(@inputStr) = @finalLength we just return it
    RETURN @inputStr;
END

-- SELECT dbo.PadStringTrim( tblAccounts.account, 20, '~' , 'R' ) from tblAccounts
-- SELECT dbo.PadStringTrim( tblAccounts.account, 20, '~' , 'L' ) from tblAccounts

A: I have one function that left-pads with x decimals:

CREATE FUNCTION [dbo].[LPAD_DEC]
(
    -- Add the parameters for the function here
    @pad nvarchar(MAX),
    @string nvarchar(MAX),
    @length int,
    @dec int
)
RETURNS nvarchar(max)
AS
BEGIN
    -- Declare the return variable here
    DECLARE @resp nvarchar(max)

    IF LEN(@string) = @length
    BEGIN
        IF CHARINDEX('.', @string) > 0
        BEGIN
            SELECT @resp = CASE SIGN(@string)
                WHEN -1 THEN
                    -- Large negative numbers with decimals
                    concat('-', SUBSTRING(replicate(@pad, @length), 1, @length - len(@string)), ltrim(str(abs(@string), @length, @dec)))
                ELSE
                    -- Large positive numbers with decimals
                    concat(SUBSTRING(replicate(@pad, @length), 1, @length - len(@string)), ltrim(str(@string, @length, @dec)))
            END
        END
        ELSE
        BEGIN
            SELECT @resp = CASE SIGN(@string)
                WHEN -1 THEN
                    -- Large negative numbers without decimals
                    concat('-', SUBSTRING(replicate(@pad, @length), 1, (@length - 3) - len(@string)), ltrim(str(abs(@string), @length, @dec)))
                ELSE
                    -- Large positive numbers with decimals
                    concat(SUBSTRING(replicate(@pad, @length), 1, @length - len(@string)), ltrim(str(@string, @length, @dec)))
            END
        END
    END
    ELSE IF CHARINDEX('.', @string) > 0
    BEGIN
        SELECT @resp = CASE SIGN(@string)
            WHEN -1 THEN
                -- Negative numbers with decimals
                concat('-', SUBSTRING(replicate(@pad, @length), 1, @length - len(@string)), ltrim(str(abs(@string), @length, @dec)))
            ELSE
                -- Positive numbers with decimals
                concat(SUBSTRING(replicate(@pad, @length), 1, @length - len(@string)), ltrim(str(abs(@string), @length, @dec)))
        END
    END
    ELSE
    BEGIN
        SELECT @resp = CASE SIGN(@string)
            WHEN -1 THEN
                -- Negative numbers without decimals
                concat('-', SUBSTRING(replicate(@pad, @length - 3), 1, (@length - 3) - len(@string)), ltrim(str(abs(@string), @length, @dec)))
            ELSE
                -- Positive numbers without decimals
                concat(SUBSTRING(replicate(@pad, @length), 1, (@length - 3) - len(@string)), ltrim(str(abs(@string), @length, @dec)))
        END
    END

    RETURN @resp
END

A: Here is my solution. I can pad any character and it is fast. Went with simplicity. You can change the variable size to meet your needs. Updated with a parameter to handle what to return if the input is null (passing null for it returns null).

CREATE OR ALTER FUNCTION code.fnConvert_PadLeft(
    @in_str nvarchar(1024),
    @pad_length int,
    @pad_char nchar(1) = ' ',
    @rtn_null NVARCHAR(1024) = '')
RETURNS NVARCHAR(1024)
AS
BEGIN
    DECLARE @rtn NCHAR(1024) = ' '
    RETURN RIGHT(REPLACE(@rtn, ' ', @pad_char) + ISNULL(@in_str, @rtn_null), @pad_length)
END
GO

CREATE OR ALTER FUNCTION code.fnConvert_PadRight(
    @in_str nvarchar(1024),
    @pad_length int,
    @pad_char nchar(1) = ' ',
    @rtn_null NVARCHAR(1024) = '')
RETURNS NVARCHAR(1024)
AS
BEGIN
    DECLARE @rtn NCHAR(1024) = ' '
    RETURN LEFT(ISNULL(@in_str, @rtn_null) + REPLACE(@rtn, ' ', @pad_char), @pad_length)
END
GO

-- Example
SET STATISTICS time ON
SELECT code.fnConvert_PadLeft('88', 10, '0', ''),
       code.fnConvert_PadLeft(null, 10, '0', ''),
       code.fnConvert_PadLeft(null, 10, '0', null),
       code.fnConvert_PadRight('88', 10, '0', ''),
       code.fnConvert_PadRight(null, 10, '0', ''),
       code.fnConvert_PadRight(null, 10, '0', NULL)

0000000088  0000000000  NULL  8800000000  0000000000  NULL

A: Here is how I would normally pad a varchar:

WHILE Len(@String) < 8
BEGIN
    SELECT @String = '0' + @String
END

A: To provide numerical values rounded to two decimal places but right-padded with zeros if required, I have:

DECLARE @value decimal(9,2) = 20.1
-- shift into an integer so the string surgery below works
DECLARE @shifted int = ROUND(@value, 2) * 100
PRINT LEFT(CAST(@shifted AS VARCHAR(20)), LEN(@shifted) - 2) + '.' + RIGHT(CAST(@shifted AS VARCHAR(20)), 2)

If anyone can think of a neater way, that would be appreciated - the above seems clumsy. Note: in this instance, I'm using SQL Server to email reports in HTML format, and so wish to format the information without involving an additional tool to parse the data.
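One answer above points at SQL CLR without showing any code; a minimal sketch of what such a function might look like (the class and method names are made up, and you would still need CREATE ASSEMBLY plus CREATE FUNCTION ... EXTERNAL NAME to register it with the server):

using Microsoft.SqlServer.Server;
using System.Data.SqlTypes;

public class PadFunctions
{
    [SqlFunction(IsDeterministic = true)]
    public static SqlString PadLeft(SqlString input, SqlInt32 length, SqlString padChar)
    {
        if (input.IsNull || length.IsNull || padChar.IsNull)
            return SqlString.Null;

        // string.PadLeft does the real work; it never truncates,
        // so strings already longer than @length come back unchanged.
        return new SqlString(input.Value.PadLeft(length.Value, padChar.Value[0]));
    }
}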
{ "language": "en", "url": "https://stackoverflow.com/questions/121864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "217" }
Q: Why does Django generate HTTP 500 errors for static media when Debug is set to False? I'm preparing to deploy my Django app and I noticed that when I change the "DEBUG" setting to False, all references to static files (i.e., JavaScript, CSS, etc.) result in HTTP 500 errors. Any idea what's causing that issue (and how to fix it)?
A: It sounds like you might be trying to serve your static media using the Django development server. Take a look at http://docs.djangoproject.com/en/dev/howto/deployment/ for some deployment scenarios/howtos and http://docs.djangoproject.com/en/dev/howto/static-files/ for how to serve static files (but note the disclaimer about NOT using those methods in production). In general, I'd look at your server logs and see where it's trying to fetch the files from. I suspect the 500 errors are really 404 errors, but they become 500 errors because Django can't find or render the 404.html template. If that's not the case, it would be helpful if you could post the specific 500 error you're getting.
A: I would highly recommend letting your web server handle the static requests, without them getting to Django. In my urls.py, I only add the static request handler when DEBUG is set to True. Technically, Django serving the static files works fine though. Definitely read the short docs page, http://docs.djangoproject.com/en/dev/howto/static-files/. You'll want to use an entry like this in urls.py:

(r'^static/(?P<path>.*)$', 'django.views.static.serve', {'document_root': '/path/to/media'})

A: You must also check your URLs all over the place. When DEBUG is set to False, all URLs without a trailing "/" are treated as errors, unlike when you have DEBUG = True, in which case Django will append "/" everywhere it is missing. So, in short, make sure all links end with a slash EVERYWHERE.
A: Turns out I'd commented out the SECRET_KEY variable. There was no way for me to know though. Just had to try things out.
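To make the "only when DEBUG" wiring concrete, a sketch in the urls.py idiom of that Django era; settings.MEDIA_ROOT is assumed to point at your media directory:

from django.conf import settings

if settings.DEBUG:
    urlpatterns += patterns('',
        (r'^static/(?P<path>.*)$', 'django.views.static.serve',
         {'document_root': settings.MEDIA_ROOT}),
    )

In production the same /static/ prefix would instead be mapped by Apache or nginx directly onto the files, so the pattern never needs to exist there.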
{ "language": "en", "url": "https://stackoverflow.com/questions/121866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Automatic casts I am currently suffering a brain fart. I've done this before but I can't remember the exact syntax, and I can't look at the code I wrote because I was working at another company at the time. I have this arrangement:

class P {
    // stuff
};

class PW : public P {
    // more stuff
};

class PR : public P {
    // more stuff
};

class C {
public:
    P GetP() const { return p; }
private:
    P p;
};

// ...
P p = c.GetP( );  // valid
PW p = c.GetP( ); // invalid
PR p = c.GetP( ); // invalid
// ...

Now I would like to make P interchangeable with PW and PR (and thus PW and PR can be interchanged). I could probably get away with casts, but this code change has occurred quite a few times in this module alone. I am pretty sure it is an operator, but for the life of me I can't remember which. How do I make P interchangeable with PW and PR with a minimal amount of code?
Update: To give a bit more clarification. P stands for Project, and the R and W stand for Reader and Writer respectively. All the Reader has is the code for loading - no variables - and the Writer has code for simply writing. It needs to be separate because the reading and writing sections have various manager classes and dialogs, none of which is Project's real concern, which is the manipulation of project files.
Update: I also need to be able to call the methods of P and PW. So if P has a method a() and PW has a method b(), then I could do:

PW p = c.GetP();
p.a();
p.b();

It's basically to make the conversion transparent.
A: You're trying to coerce actual variables, rather than pointers. To do that would require a cast. However, if your class definition looked like this:

class C {
public:
    P* GetP() const { return p; }
private:
    P* p;
};

then, whether p was a pointer to a P, a PW, or a PR, your function wouldn't change, and any (virtual) functions called on the P* returned by the function would use the implementation from P, PW or PR depending on what the member p was. I guess the key thing to remember is the Liskov Substitution Principle. Since PW and PR are subclasses of P, they can be treated as if they were Ps. However, PWs cannot be treated as PRs, and vice versa.
A: If you want to get this part to compile:

// ...
P p = c.GetP( );  // valid
PW p = c.GetP( ); // invalid
PR p = c.GetP( ); // invalid
// ...

you need to be able to construct/convert a P into a PW or a PR. You need to do something like this:

class PW : public P {
    PW(const P &);
    // more stuff
};

class PR : public P {
    PR(const P &);
    // more stuff
};

Or did you mean something more like:

class P {
    operator PW() const;
    operator PR() const;
    // stuff
};

A: In the code above, you have the opposite of the slicing problem. What you're trying to do is assign from a P to a PW or PR that contains more information than the source object. How do you do this? Say P only has 3 member variables, but PW has 12 additional members - where do the values for these come from when you write PW p = c.GetP()? If this assignment actually is valid, which should really indicate some kind of design weirdness, then I would implement PW::operator=(const P&) and PR::operator=(const P&), plus PW::PW(const P&) and PR::PR(const P&). But I wouldn't sleep too well that night.
A: To make PW and PR usable via a P you need to use references (or pointers), so you really need to change the interface of C so it returns a reference. The main problem in the old code was that you were copying a P into a PW or a PR. This is not going to work, as PW and PR potentially have more information than a P, and from a type perspective an object of type P is not a PW or a PR (though PW and PR are both P). Change the code to this and it will compile. If you want to return different objects derived from a P class at runtime, then the class C must potentially be able to store all the different types you expect and be specialized at runtime. So in the class below I allow you to specialize by passing in a pointer to an object that will be returned by reference. To make sure the object is exception-safe I have wrapped the pointer in a smart pointer.

class C {
public:
    C(std::auto_ptr<P> x): p(x) {
        if (p.get() == NULL) { throw BadInit; }
    }

    // Return a reference.
    P& GetP() const { return *p; }

private:
    // I use auto_ptr just as an example;
    // there are many different valid ways to do this.
    // Once the object is correctly initialized, p is always valid.
    std::auto_ptr<P> p;
};

// ...
P& p = c.GetP( );                       // valid
PW& pw = dynamic_cast<PW&>(c.GetP( ));  // valid; throws an exception if not a PW
PR& pr = dynamic_cast<PR&>(c.GetP( ));  // valid; throws an exception if not a PR
// ...

A: This kind of does something sensible, given that everything is being passed by value. Not sure if it's what you were thinking of.

class P {
public:
    template <typename T>
    operator T() const {
        T t;
        static_cast<T&>(t) = *this;
        return t;
    }
};

A: Perhaps you mean the dynamic_cast operator?
A: They are not fully interchangeable. PW is a P. PR is a P. But P is not necessarily a PW, and it is not necessarily a PR. You can use static_cast to cast pointers from PW * to P *, or from PR * to P *. You should not use static_cast to cast actual objects to their super-class because of "slicing". E.g. if you cast an object of PW to P, the extra stuff in PW will be "sliced" off. You also cannot use static_cast to cast from P * to PW *. If you really have to do it, use dynamic_cast, which will check at run-time whether the object is actually of the right sub-class, and give you a run-time error if it is not.
A: I'm not sure what you mean exactly, but bear with me. They kind of already are. Just call everything P and you can pretend PR and PW are P's. PR and PW are still different though. Making all three equivalent would result in trouble with the Liskov principle. But then why would you give them different names if they are truly equivalent?
A: The second and third would be invalid because each is an implicit downcast - which is a dangerous thing in C++. This is because the class which you're casting to has more functionality than the class being assigned, so unless you explicitly cast it yourself, the C++ compiler will throw an error (at least it should). Of course, this is simplifying things slightly (you can use RTTI for certain things which may relate to what you want to do safely without invoking the wrath of bad object'ness) - but simplicity is always a good way to approach problems. Of course, as stated in a few other solutions, you can get around this problem - but I think before you try to get around the problem, you may want to rethink the design.
A: Use a reference or pointer to P rather than an object:

class C {
public:
    P* GetP() const { return p; }
private:
    P* p;
};

This will allow a PW* or a PR* to be bound to C.p. However, if you need to go from a P to a PW or PR, you need to use dynamic_cast<PW*>(p), which will return the PW* version of p, or NULL if p is not a PW* (for instance, because it's a PR*). dynamic_cast has some overhead, though, and it's best avoided if possible (use virtuals). You can also use the typeid() operator to determine the run-time type of an object, but it has several issues, including that you must include <typeinfo> and that it can't detect extra derivation.
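Pulling the reference/pointer answers together, a compilable sketch of the pattern being recommended, with the class bodies trimmed to the bare minimum:

#include <iostream>

struct P {
    virtual ~P() {}                 // polymorphic base, so dynamic_cast works
    void a() { std::cout << "P::a\n"; }
};
struct PW : P { void b() { std::cout << "PW::b\n"; } };
struct PR : P { void r() { std::cout << "PR::r\n"; } };

int main() {
    PW pw;
    P& p = pw;                      // a PW is-a P: no cast needed going up
    p.a();

    if (PW* w = dynamic_cast<PW*>(&p))   // checked downcast
        w->b();

    if (dynamic_cast<PR*>(&p) == 0)      // fails cleanly: this P is not a PR
        std::cout << "not a PR\n";
    return 0;
}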
{ "language": "en", "url": "https://stackoverflow.com/questions/121922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Sharepoint WebPart with AjaxToolkit's Accordion control Do you guys have any resources on creating a SharePoint web part that uses the AjaxToolkit controls? I need to create a web part that uses the Accordion control and I can't find any complete tutorial or walkthrough. I prefer a tutorial/article that doesn't use SmartPart. TIA!
A: Check out: http://www.codeplex.com/sharepointajax
A: The Ajax toolkit and SharePoint don't play very nicely together. The main reason for this is the lack of a DOCTYPE declaration in SharePoint's default master page (why they did this, I'll never know). Your best bet, in my humble opinion, is to abandon the Ajax Control Toolkit and use jQuery. If you follow the link to jQuery UI, you will find that they have implemented an accordion control that works very nicely across a wide range of browsers/environments.
A: Telerik's RadControls for ASP.NET AJAX can be deployed in the SharePoint environment and have a menu control with the Accordion behavior out of the box. It may save you some time.
{ "language": "en", "url": "https://stackoverflow.com/questions/121935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: RSS/ATOM feed reader for windows (or cross-platform) Looking for a good RSS/feed reader for Windows (if there is a good cross-platform one, I would be really amazed), or good web services (I don't like the Google one). I want something simplistic and minimalistic.
A: I wouldn't call it minimalistic, but FeedDemon is the best I've found for Windows.
A: I like the NewsGator family of tools (http://newsgator.com). I mostly use the Mac and web-based versions, but thought FeedDemon was good, too, for the Windows environment. All keep a common subscription list, so you can bounce back and forth as needed.
A: I am using Google Reader and am quite happy with it. It's browser-based but has good keyboard support. Check http://www.google.com/reader
A: I use Netvibes. It's an online RSS collator that allows you to split your feeds up into multiple topics. I used to use Thunderbird, but I can access Netvibes from wherever I am. I can also set it up to keep track of my email and social networking sites, as there are a bunch of widgets available for it.
A: Some people enjoy RSS Bandit, but I'm on a Mac now so I'm not sure how current it is.
A:
* Outlook 2007
* Attensa (can plug into Outlook 2003)
* NewsGator
* Internet Explorer 8
* Mozilla Firefox
I am yet to find a good free Windows RSS reader with decent offline capabilities.
A: Check out GreatNews
A: I am using Wizz RSS, a Firefox plugin. Works OK. I am using it mainly to access the Sun Java forums. http://www.wizzrss.com/Welcome.php
A: Why use an RSS reader when you can just get the articles? Go to the link: TNB
A: If you have a Google account already, seriously consider Google Reader. It's available everywhere, with Gears installed it is available offline, and it binds well to Firefox and Chrome.
{ "language": "en", "url": "https://stackoverflow.com/questions/121937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Assembly.GetCallingAssembly() and static constructors? Ok, so I just ran into the following problem that raised an eyebrow. For various reasons I have a testing setup where Testing classes in a TestingAssembly.dll depend on the TestingBase class in a BaseTestingAssembly.dll. One of the things the TestBase does in the meantime is look for a certain embedded resource in its own and the calling assembly. So my BaseTestingAssembly contained the following lines... public class TestBase { private static Assembly _assembly; private static Assembly _calling_assembly; static TestBase() { _assembly = Assembly.GetExecutingAssembly(); _calling_assembly = Assembly.GetCallingAssembly(); } } Static since I figured, these assemblies would be the same over the application's lifetime so why bother recalculating them on every single test. When running this however I noticed that both _assembly and _calling_assembly were being set to BaseTestingAssembly rather than BaseTestingAssembly and TestingAssembly respectively. Setting the variables to non-static and having them initialized in a regular constructor fixed this but I am confused why this happened to begin with. I thought static constructors run the first time a static member gets referenced. This could only have been from my TestingAssembly which should then have been the caller. Does anyone know what might have happened? A: The static constructor is called by the runtime and not directly by user code. You can see this by setting a breakpoint in the constructor and then running in the debugger. The function immediately above it in the call chain is native code. Edit: There are a lot of ways in which static initializers run in a different environment than other user code. Some other ways are *They're implicitly protected against race conditions resulting from multithreading *You can't catch exceptions from outside the initializer In general, it's probably best not to use them for anything too sophisticated. You can implement single-init with the following pattern: private static Assembly _assembly; private static Assembly Assembly { get { if (_assembly == null) _assembly = Assembly.GetExecutingAssembly(); return _assembly; } } private static Assembly _calling_assembly; private static Assembly CallingAssembly { get { if (_calling_assembly == null) _calling_assembly = Assembly.GetCallingAssembly(); return _calling_assembly; } } Add locking if you expect multithreaded access. A: I think the answer is here in the discussion of C# static constructors. My best guess is that the static constructor is getting called from an unexpected context because: The user has no control on when the static constructor is executed in the program A: Assembly.GetCallingAssembly() simply returns the assembly of the second entry in the call stack. That can vary depending upon where and how your method/getter/constructor is called. Here is what I did in a library to get the assembly of the first method that is not in my library. (This even works in static constructors.) private static Assembly GetMyCallingAssembly() { Assembly me = Assembly.GetExecutingAssembly(); StackTrace st = new StackTrace(false); foreach (StackFrame frame in st.GetFrames()) { MethodBase m = frame.GetMethod(); if (m != null && m.DeclaringType != null && m.DeclaringType.Assembly != me) return m.DeclaringType.Assembly; } return null; }
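Building on the "add locking" note above, one hedged way to make the lazy getter thread-safe (a pattern sketch only; as the answers explain, GetCallingAssembly() results depend on who calls the member, so only the executing-assembly case is shown):

private static readonly object _assemblyLock = new object();
private static Assembly _assembly;
private static Assembly ExecutingAssembly
{
    get
    {
        lock (_assemblyLock)
        {
            // Computed once; subsequent calls return the cached value.
            if (_assembly == null)
                _assembly = Assembly.GetExecutingAssembly();
            return _assembly;
        }
    }
}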
{ "language": "en", "url": "https://stackoverflow.com/questions/121946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: C# .NET 3.5 GUI design I'm looking for some programming guides to C# GUI design. I come from the Java camp (where I can happily hand-code Swing/AWT GUIs) and thus don't have a clue where to start :( Also, what difference (if any) is there between the Windows Presentation Foundation and WinForms? A: Chris Sells seems to be 'dah man' with regard to Windows Forms and WPF: http://www.sellsbrothers.com/writing/ http://www.sellsbrothers.com/writing/wfbook http://www.sellsbrothers.com/writing/wpfbook It's also worth taking a look at Charles Petzold: http://www.charlespetzold.com/winforms/index.html MS also have a heap of stuff related to design guidelines and usability from a Windows perspective: http://msdn.microsoft.com/en-us/library/aa152962.aspx A: WPF is a totally different and new way to look into UI architecture and implementation. The cool concept of collaborative development by a designer and a C# developer is its biggest advantage (XAML markup actually gives this ability). When you develop a control/UI it will be in a 'lookless' manner and a Designer/Integrator can take that same project (XAML) and style it for a greater look and feel. So in short, WPF or Silverlight is a paradigm shift in the way we do Windows UI development. So .NET 3.5 UI design can be done in two ways. 1) The WinForms way 2) The WPF-XAML way. I think for a futuristic and modern UI you definitely need WPF rather than WinForms. A: Actually you will probably be more comfortable with hand-coding WPF with your background, I also have done my share of swing interfaces with Java and, although winforms makes it really easy to draw up an interface, I was able to get into WPF quickly because a lot of the layout concepts were the same as Java. Some winforms-only programmers really struggle getting into WPF because of the different layout paradigm. A: Windows Presentation Foundation is a vector-based system that is part of .NET 3.0. It allows you to define your UI in XAML, and can do all sorts of animation, 3D, etc. very easily. It's much newer and still being evaluated by a lot of folks. Windows Forms is a wrapper over older windows UI classes (Win32/MFC or whatever). It came with .NET 1.0 and uses C# to define all the UI and its layout. It's the tried and true UI method. A: No mention of GUI design would be complete without mention of Alan Cooper's About Face; although at first glance it looks out of date (most of the screenshots are Windows 3.1), its content is still valid today
{ "language": "en", "url": "https://stackoverflow.com/questions/121947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Determine site's absolute, fully-qualified url in asp.net How can I consistently get the absolute, fully-qualified root or base url of the site regardless of whether the site is in a virtual directory and regardless of where my code is in the directory structure? I've tried every variable and function I can think of and haven't found a good way. I want to be able to get the url of the current site, i.e. http://www.example.com or if it's a virtual directory, http://www.example.com/DNN/ Here's some of the things I've tried and the result. The only one that includes the whole piece that I want (http://localhost:4471/DNN441) is Request.Url.AbsoluteUri: * *Request.PhysicalPath: C:\WebSites\DNN441\Default.aspx *Request.ApplicationPath: /DNN441 *Request.PhysicalApplicationPath: C:\WebSites\DNN441\ *MapPath: C:\WebSites\DNN441\DesktopModules\Articles\Templates\Default.aspx *RawURL: /DNN441/ModuleTesting/Articles/tabid/56/ctl/Details/mid/374/ItemID/1/Default.aspx *Request.Url.AbsoluteUri: http://localhost:4471/DNN441/Default.aspx *Request.Url.AbsolutePath: /DNN441/Default.aspx *Request.Url.LocalPath: /DNN441/Default.aspx *Request.Url.Host: localhost *Request.Url.PathAndQuery: /DNN441/Default.aspx?TabId=56&ctl=Details&mid=374&ItemID=1 A: The accepted answer assumes that the current request is already at the server/virtual root. Try this: Request.Url.GetLeftPart(UriPartial.Authority) + Request.ApplicationPath A: There is some excellent discussion and ideas on Rick Strahl's blog EDIT: I should add that the ideas work with or without a valid HttpContext. EDIT2: Here's the specific comment / code on that post that answers the question A: Found this code here: string appPath = null; appPath = string.Format("{0}://{1}{2}{3}", Request.Url.Scheme, Request.Url.Host, Request.Url.Port == 80 ? string.Empty : ":" + Request.Url.Port, Request.ApplicationPath); A: In reading through the answer provided in Rick Strahl's Blog I found what I really needed was quite simple. First you need to determine the relative path (which for me was the easy part), and pass that into the function defined below: VB.NET Public Shared Function GetFullyQualifiedURL(ByVal s as string) As String Dim Result as URI = New URI(HttpContext.Current.Request.Url, s) Return Result.ToString End Function C# public static string GetFullyQualifiedURL(string s) { Uri Result = new Uri(HttpContext.Current.Request.Url, s); return Result.ToString(); } A: Have you tried AppSettings.RootUrl which is usually configured in the web.config file? A: Are you talking about for use as links? If so, then doing this <a href='/'>goes to root</a> will take you to the default file of the web root. And on the server side, passing "~/" to the Control.ResolveUrl method will provide what you're looking for. (http://msdn.microsoft.com/en-us/library/system.web.ui.control.resolveurl.aspx) A: I have no way to validate this at the moment but have you tried "Request.Url.AbsoluteUri" from another machine? It occurs to me that as far as your machine is concerned its browser is requesting from localhost. I could be wrong though, but I think the request is relative to the browser and not the webserver.
{ "language": "en", "url": "https://stackoverflow.com/questions/121962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How to detect screen resolution in (classic) ASP I want to detect users' screen size and pass this into a charting application (Chart Director by http://www.advsofteng.com) to control how big an image to display. I have to use ASP, but I can only think to use JavaScript to detect screen-size and then pass this into the server-side script. Is there an easier way? Thanks A: No, the server knows nothing about the client other than basic info like IP and browser version. Screen resolution can easily be determined via javascript and passed to the server though, using ajax, or via form submission. A: Here's a couple of links that should help http://www.javascriptkit.com/howto/newtech3.shtml http://www.devcity.net/Articles/64/1/screenresolution.aspx A: Here is my simple solution: javascript code: document.cookie = "screen_w=" + screen.availWidth ; document.cookie = "screen_h=" + screen.availHeight; asp code: screen_w = request.Cookies("screen_w") screen_h = request.Cookies("screen_h") A: No, this is not possible for desktop browsers. I suggest embedding an image that's appropriate for typical screen resolutions, then detecting the canvas size, and rewriting the image's src attribute to reflect that. In any case, you don't want to look at the screen resolution, you want to look at the canvas size. Not everybody uses their browser with a maximised window, especially those with large screens. If you use the screen resolution, then you'll end up serving images that are way too big for some people. A: You can't get the users screen size only the browser window size, and thats doable using javascript. A: The best way that I have found to do this is to create a dummy asp page which simply has the following javascript code in the header: <script type="text/javascript" language="JavaScript"> document.cookie = "screen_w=" + screen.width; location.href = "second_page.asp" </script> Then in second_page.asp you can lookup the cookie, which was set in the first page. If you set and then request in the same page, it won't work first time. myscreenwidth = request.Cookies("screen_w")
{ "language": "en", "url": "https://stackoverflow.com/questions/121979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I add a REGEX match to my J2ME project? The question pretty much sums it up. Just want to run a regular expression match on a string in J2ME. A: JRegex will not run on CLDC device. Try this one: code.google.com/p/regexp-me/. A: Will JRegex run in J2ME? UPDATE: Based on Rory's comment, I recommend JRegex, then. A: JRegex doesn't compile on J2ME (at least in the blackberry dev environment). I'm interested in a regex library as well but JRegex doesn't seem to work for this purpose. A: It's hard to tell what runs on J2ME due to the extreme old age of J2ME's API support, but try Apache ORO. That's sufficiently old, crusty and obsolete that it may, if you're lucky, work on J2ME.
{ "language": "en", "url": "https://stackoverflow.com/questions/121988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I write this in Ruby/Python? Or, can you translate my LINQ to Ruby/Python? Yesterday, I asked this question and never really got an answer I was really happy with. I really would like to know how to generate a list of N unique random numbers using a functional language such as Ruby without having to be extremely imperative in style. Since I didn't see anything I really liked, I've written the solution I was looking for in LINQ: static void Main(string[] args) { var temp = from q in GetRandomNumbers(100).Distinct().Take(5) select q; } private static IEnumerable<int> GetRandomNumbers(int max) { Random r = new Random(); while (true) { yield return r.Next(max); } } Can you translate my LINQ to Ruby? Python? Any other functional programming language? Note: Please try not to use too many loops and conditionals - otherwise the solution is trivial. Also, I'd rather see a solution where you don't have to generate an array much bigger than N so you can then just remove the duplicates and trim it down to N. I know I'm being picky, but I'd really like to see some elegant solutions to this problem. Thanks! Edit: Why all the downvotes? Originally my code sample had the Distinct() after the Take() which, as many pointed out, could leave me with an empty list. I've changed the order in which those methods are called to reflect what I meant in the first place. Apology: I've been told this post came across as rather snobbish. I wasn't trying to imply that LINQ is better than Ruby/Python; or that my solution is much better than everyone else's. My intent is just to learn how to do this (with certain constraints) in Ruby. I'm sorry if I came across as a jerk. A: In Ruby: a = (0..100).entries.sort_by {rand}.slice! 0, 5 Update: Here is a slightly different way: a = (0...100).entries.sort_by{rand}[0...5] EDIT: and In Ruby 1.9 you can do this: Array(0..100).sample(5) A: Hmm... How about (Python): s = set() while len(s) <= N: s.update((random.random(),)) A: I will forgo the simplest solutions using the 'random' module since I take it that's not really what you are after. Here's what I think you are looking for in Python: >>> import random >>> >>> def getUniqueRandomNumbers(num, highest): ... seen = set() ... while len(seen) < num: ... i = random.randrange(0, highest) ... if i not in seen: ... seen.add(i) ... yield i ... >>> To show you how it works: >>> list(getUniqueRandomNumbers(10, 100)) [81, 57, 98, 47, 93, 31, 29, 24, 97, 10] A: Here's another Ruby solution: a = (1..5).collect { rand(100) } a & a I think, with your LINQ statement, the Distinct will remove duplicates after 5 have already been taken, so you aren't guaranteed to get 5 back. Someone can correct me if I'm wrong, though. A: EDIT: Ok, just for fun, a shorter and faster one (and still using iterators). def getRandomNumbers(max, size) : pool = set() return ((lambda x : pool.add(x) or x)(random.randrange(max)) for x in xrange(size) if len(pool) < size) print [x for x in getRandomNumbers(100, 5)] [0, 10, 19, 51, 18] Yeah, I know, one-liners should be left to perl lovers, but I think this one is quite powerful, isn't it? Old message here: My god, how complicated is all that! Let's be pythonic: import random def getRandomNumber(max, size, min=0) : # using () and xrange = using iterators return (random.randrange(min, max) for x in xrange(size)) print set(getRandomNumber(100, 5)) # set() removes duplicates set([88, 99, 29, 70, 23]) Enjoy EDIT: As commentators noticed, this is an exact translation of the question's code.
To avoid the problem we got by removing duplicates after generating the list, resulting in too little data, you can choose another way: def getRandomNumbers(max, size) : pool = [] while len(pool) < size : tmp = random.randrange(max) if tmp not in pool : yield pool.append(tmp) or tmp print [x for x in getRandomNumbers(5, 5)] [2, 1, 0, 3, 4] A: >>> import random >>> print random.sample(xrange(100), 5) [61, 54, 91, 72, 85] This should yield 5 unique values in the range 0 to 99. The xrange object generates values as requested so no memory is used for values that aren't sampled. A: In Ruby 1.9: Array(0..100).sample(5) A: Python with Numeric Python: from numpy import * a = random.random_integers(0, 100, 5) b = unique(a) Voilà! Sure you could do something similar in a functional programming style but... why? A: import random def makeRand(n): rand = random.Random() while 1: yield rand.randint(0,n) gen = makeRand(100) terms = [ gen.next() for n in range(5) ] print "raw list" print terms print "de-duped list" print list(set(terms)) # produces output similar to this # # raw list # [22, 11, 35, 55, 1] # de-duped list # [35, 11, 1, 22, 55] A: Well, first you rewrite LINQ in Python. Then your solution is a one-liner :) from random import randrange def Distinct(items): seen = {} for i in items: if not seen.has_key(i): yield i seen[i] = 1 def Take(num, items): for i in items: if num > 0: yield i num = num - 1 else: break def ToArray(items): return [i for i in items] def GetRandomNumbers(max): while 1: yield randrange(max) print ToArray(Take(5, Distinct(GetRandomNumbers(100)))) If you put all the simple methods above into a module called LINQ.py, you can impress your friends. (Disclaimer: of course, this is not actually rewriting LINQ in Python. People have the misconception that LINQ is just a bunch of trivial extension methods and some new syntax. The really advanced part of LINQ, however, is automatic SQL generation so that when you're querying a database, it's the database that implements Distinct() rather than the client side.) A: Here's a transliteration from your solution to Python. First, a generator that creates Random numbers. This isn't very Pythonic, but it's a good match with your sample code. >>> import random >>> def getRandomNumbers( max ): ... while True: ... yield random.randrange(0,max) Here's a client loop that collects a set of 5 distinct values. This is -- again -- not the most Pythonic implementation. >>> distinctSet= set() >>> for r in getRandomNumbers( 100 ): ... distinctSet.add( r ) ... if len(distinctSet) == 5: ... break ... >>> distinctSet set([81, 66, 28, 53, 46]) It's not clear why you want to use a generator for random numbers -- that's one of the few things that's so simple that a generator doesn't simplify it. A more Pythonic version might be something like: distinctSet= set() while len(distinctSet) != 5: distinctSet.add( random.randrange(0,100) ) If the requirements are to generate 5 values and find distinct among those 5, then something like distinctSet= set( [random.randrange(0,100) for i in range(5) ] ) A: Maybe this will suit your needs and look a bit more linqish: from numpy import random,unique def GetRandomNumbers(total=5): while True: yield unique(random.random(total*2))[:total] randomGenerator = GetRandomNumbers() myRandomNumbers = randomGenerator.next() A: Here's another python version, more closely matching the structure of your C# code. There isn't a builtin for giving distinct results, so I've added a function to do this.
import itertools, random def distinct(seq): seen=set() for item in seq: if item not in seen: seen.add(item) yield item def getRandomNumbers(max): while 1: yield random.randint(0,max) for item in itertools.islice(distinct(getRandomNumbers(100)), 5): print item A: I can't really read your LINQ, but I think you're trying to get 5 random numbers up to 100 and then remove duplicates. Here's a solution for that: def random(max) (rand * max).to_i end # Get 5 random numbers between 0 and 100 a = (1..5).inject([]){|acc,i| acc << random( 100)} # Remove Duplicates a = a & a But perhaps you're actually looking for 5 distinct random numbers between 0 and 100. In which case: def random(max) (rand * max).to_i end a = [] while( a.size < 5) a << random( 100) a = a & a end Now, this one might violate your sense of "not too many loops," but presumably Take and Distinct are just hiding the looping from you. It would be easy enough to just add methods to Enumerable to hide the while loop.
{ "language": "en", "url": "https://stackoverflow.com/questions/122033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How to find GridItem from click on PropertyGrid? I'm trying to do some custom UI behavior on a Windows Forms PropertyGrid control. I'd like to be able to respond to clicks and double-clicks on the GridItems to change the state. For example, to flip a binary or ternary variable through its states. I can get at the underlying view by looking up a child of typename "PropertyGridView" and can hook its Click event. Only problem is then what do I do? I can't find any functions that map mouse coordinates onto grid items. There is a SelectedGridItem but this isn't helpful. There are many places you can click on a control that do not update this property, and so responding to a Click assuming SelectedGridItem is updated will get a lot of incorrect results. Aside from purchasing a commercial property grid control or switching to a gridview of some kind, is there anything I can do here? The PropertyGrid is almost exactly what I need. I'm even considering wandering through with Reflector and doing some very unfriendly things with this control to get the data out that I need. :) More info: I do know about using custom UITypeEditor classes, and am already doing this in other areas (color picker). Unfortunately doing custom UI work requires an extra click (to browse or drop-down some UI). For example, I have embedded a checkbox using UITypeEditor.PaintValue and would really like to be able to just click on it to check/uncheck. A: If you need to flip the values of a simple type you can have an enumeration value displayed in the property grid. This will appear automatically as a drop down list. If you need to create some more clever UI editor I suggest you take a look at the following articles that explain how to create custom UI in the property grid: http://msdn.microsoft.com/en-us/library/aa302334.aspx http://msdn.microsoft.com/en-us/library/aa302326.aspx If you want to handle a value change in the property grid to do something in the application or change the values in the property grid you can handle the PropertyValueChanged event that is raised after each change in the property grid. Handling the mouse click and the mouse double click are not necessary once you can create your own UI editor. UI editors can be drop down editors or modal editors. Again, I strongly suggest you read the above articles. They are quite good.
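As a hedged illustration of the PropertyValueChanged route mentioned above (propertyGrid1 is an assumed control name and the handler body is only a sketch):

// e.g. in the form's constructor:
propertyGrid1.PropertyValueChanged += new PropertyValueChangedEventHandler(OnGridValueChanged);

private void OnGridValueChanged(object s, PropertyValueChangedEventArgs e)
{
    // e.ChangedItem is the GridItem that was just edited.
    Console.WriteLine("{0}: {1} -> {2}",
        e.ChangedItem.Label, e.OldValue, e.ChangedItem.Value);
}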
{ "language": "en", "url": "https://stackoverflow.com/questions/122036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Has anyone already done the work to make STLPort build with VS2008 and/or an x64 build with VS2005? At present it seems that VS2008 still isn't supported either in the 5.1.5 release or in the STLPort CVS repository. If someone has already done this work then it would be useful to share, if possible :) Likewise it would be useful to know about the changes required for a VS2005 or 2008 x64 build. A: Seems so. A: It turns out that x64 support, whilst not explicitly stated, just works. If you set your environment up to use the x64 tools by running \Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\amd64\vcvarsamd64.bat then run configure.bat for your compiler and build as normal you end up with appropriate libs and dlls. Unfortunately the x64 libs build to the same names as the x86 libs so it's not possible to have a 'side by side' installation of STLPort to allow you to build with either x86 or x64. Edit: I've written up what you need to do to provide side-by-side x64 and x86 support as well as packaging up the changes required for vs2008 builds on my blog. See here: http://www.lenholgate.com/blog/2008/10/stlport-515-and-vs2008-and-x64.html For other versions of Visual Studio see here: http://www.lenholgate.com/blog/2005/12/stlport-50-and-multiple-vc-versions.html, here: http://www.lenholgate.com/blog/2007/05/stlport-513-and-multiple-vc-versions.html and here: http://www.lenholgate.com/blog/2010/07/stlport-521-and-vs2010-and-x64.html
{ "language": "en", "url": "https://stackoverflow.com/questions/122057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: favicon not working in IE I have a site using a custom favicon.ico. The favicon displays as expected in all browsers except IE. When trying to display the favicon in IE, I get the big red x; when displaying the favicon in another browser, it displays just fine. The page source includes the favicon <link> tag and it does work in other browsers. Thanks for your thoughts. EDIT: SOLVED: The source of the issue was the file was a jpg renamed to ico. I created the file as an ico and it is working as expected. Thanks for your input. A: Should anyone make it down to this answer: Same issue: didn't work in IE (including IE 10), worked everywhere else. Turns out that the file was not a "real" .ico file. I fixed this by uploading it to http://www.favicon.cc/ and then downloading it again. First I tested it by generating a random .ico file on this site and using that instead of my original file. Saw that it worked. A: Right, you've not been that helpful (providing source would have been really useful!) but here you go... Some things to check: Is the code like this: <link rel="icon" href="http://www.example.com/favicon.ico" type="image/x-icon" /> <link rel="shortcut icon" href="http://www.example.com/favicon.ico" type="image/x-icon" /> Is it in the <head>? Is the image a real ico file? (renaming a bitmap is not a real .ico! Mildly different format) Does it work when you add the page as a bookmark? A: Did you try putting the icon at the URI "/favicon.ico"? IE might not know about the link tag way of referring to it. More info from W3. A: If you tried everything above and it still doesn’t work in IE, check your IIS settings if you are using a Windows Server. Make sure that the HTTP Headers > “Enable content expiration” setting IS NOT SET to “Expire immediately” A: I know this is a really old topic now, but as it's the first one that came up on my google search I just wanted to add my solution to it: I had this problem as well with an icon that was supplied by a client. It displayed in all browsers apart from IE. Adding the link or meta tags didn't work, so I started to look at the format of the icon file. It appeared to be a valid icon file (not just a renamed image), but what fixed it in the end was to convert it to an image, save it as a GIF, and then convert it back to an icon. Also make sure to clear the IE cache while you're testing. A: In IE and FireFox the favicon.ico is only requested at the first page visited on the site, which means that if the favicon.ico requires log-in (for example your site is a closed site and requires log in) then the icon will not be displayed. The solution is to add an exception for the favicon.ico, for example in ASP.Net you add in the web.config: <location path="favicon.ico"> <system.web> <authorization> <allow users="*" /> </authorization> </system.web> </location> A: I had this exact problem and nothing seemed to work. After clearing the browser cache countless times and even updating IE to v9 I found this: http://favicon.htmlkit.com/favicon/ The above link solved the problem perfectly for me! A: <link rel="shortcut icon" type="image/x-icon" href="FolderName/favicon.ico" /> *Your favicon.ico link must be placed between the head tags *size: 16 x 16 *and for Internet Explorer it must be transparent (the outer white part should not be visible) A: None of the above solutions worked for me. First of all I made sure the icon was in the right format using the website to create favicons suggested above.
Then I renamed the icon from 'favicon.ico' to 'myicon.ico' and added the following code to my page (within the <head> tags): <link rel="shortcut icon" href="myicon.ico" type="image/x-icon" /> The icon is in the same folder as the page. This solved the problem for me. The issue behind the scenes probably had something to do with the caching of IE, but I'm not sure. A: Care to share the URL? Many browsers cope with favicons in (e.g.) png format while IE often had troubles. - Also older versions of IE did not check the html source for the location of the favicon but just single-mindedly tried to get "/favicon.ico" from the webserver. A: I once used a PNG as a favicon.ico and it displayed in all browsers except IE. Maybe something in the file causes it to not be recognized by IE. Also make sure it's 32x32. Don't know if it matters though. But it's something I had to make sure of in order to see it in IE. Hope it helps. Try to use an ico file from some place else just to see if that works. A: This seems to be an ASPX pages problem; I have never been able to show a favicon in any page for IE (all others yes: Chrome, FF and Safari). The only sites that I've seen that are the exception to that rule are bing.com, msdn.com and others that belong to MS and run on asp.net; there is something that they are not telling us! Even well-known sites can't show it in IE, e.g. manu.com (most browsed sports team in the world), an aspx site, fails to display the favicon in IE. http://www.manutd.com/favicon.ico does show the icon. Please prove me wrong. A: THE SOLUTION: *I created an icon from an existing png file by simply changing the extension of the image from png to ico. I use the drupal 7 bartik theme, so I uploaded the shortcut icon to the server and it WORKED for Chrome and Firefox but not IE. Also, the image icon was white-blank on the desktop. *Then I took the advice of some guys here and reduced the size of the image to 32x32 pixels using an image editor (GIMP 2). *I uploaded the icon in the same way as earlier, and it worked fine for all browsers. I love you guys on stackoverflow, you helped me solve LOTS of problems. THANK YOU! A: Thanks for all your help. I tried different options but the below one worked for me. <link rel="shortcut icon" href="/favicon.ico" > <link rel="icon" type="image/x-icon" href="/favicon.ico" > I have added the above two lines in the header of my page and it worked in all browsers. Thanks A: Maybe this helps others. For me the ICON was not getting displayed in IE, even after following all steps. Finally I found a note in MSDN Troubleshooting Shortcut Icons. Verify that Internet Explorer can store the shortcut icon in the Temporary Internet Files folder. If you have set Internet Explorer to not keep a cache, then it will not be able to store the icon and will display the default Internet Explorer shortcut icon instead. I was using IE in "In Private" mode; once I verified in normal mode.... Fav Icon displayed properly. A: Regarding incompatibilities with IE9 I came across this blog post which gives tips for creating a favicon that is recognised by IE9. In essence, try creating a favicon with the following site: http://www.xiconeditor.com/ A: Check the response headers for your favicon. They must not include "Cache-Control: no-cache".
You can check this from the command line using: curl -I http://example.com/favicon.ico or wget --server-response --spider http://example.com/favicon.ico (or use some other tool that will show you response headers) If you see "Cache-Control: no-cache" in there, adjust your server configuration to either remove that header from the favicon response or set a max-age (a hedged Apache example follows at the end of these answers). A: Also - certificate errors (https) can prevent the favicon from appearing. The security team changed our server settings and I started getting "There is a problem with this website’s security certificate." Clicking on "Continue to this website (not recommended)." took me to the website but would NOT show the favicon. A: I'm seeing different behaviors between Windows 10 and Windows Server 2016 and between IE and Edge. I tested using www.microsoft.com. Windows Server 2016 IE 11: Favorites: site icon Address bar: site icon Browser tab: site icon Windows 10 IE 11: Favorites: site icon Address bar: generic blue-E icon Browser tab: generic blue-E icon Windows 10 Edge: Favorites: site icon Address bar: no icon Browser tab: site icon What's the deal with Windows 10 IE showing the generic icon? A: This works cross-browser for me (IE11, EDGE, CHROME, FIREFOX, OPERA); use https://www.icoconverter.com/ to create the .ico file <link data-senna-track="temporary" href="${favicon_url}" rel="Shortcut Icon" /> <link rel="icon" href="${favicon_url}" type="image/x-icon" /> <link rel="shortcut icon" href="${favicon_url}" type="image/x-icon" /> A: Try something like: Add to html: <link id="shortcutIcon" rel="shortcut icon" type="image/x-icon"> <link id="icon" rel="icon" type="image/x-icon"> Add this minified script after those link tags: <script type="text/javascript"> (function(b,c,d,a){a=c+d+b,document.getElementById('shortcutIcon').href=a,document.getElementById('icon').href=a;}(Math.random()*100,(document.querySelector('base')||{}).href,'/assets/images/favicon.ico?v=')); </script> where * '/assets/images/favicon.ico' is the relative path to the .ico *?v='Math.random()*100' - to force the browser to update favicon.ico Before testing, clear history (Ctrl + Shift + Del). A: Run Internet Explorer as Administrator. If you open IE in normal mode then the favicon will not display in IE 11 (Win 7). I am not sure about the behavior on other browser versions.
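Picking up the Cache-Control answer above: if the site runs on Apache, one hedged way to adjust that header is a scoped directive (this assumes mod_headers is enabled; the max-age value is an arbitrary example, one week):

<Files "favicon.ico">
    # Drop any no-cache header and allow the icon to be cached.
    Header unset Cache-Control
    Header set Cache-Control "max-age=604800, public"
</Files>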
{ "language": "en", "url": "https://stackoverflow.com/questions/122067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: Sql query to determine status? I have a table in a MSSQL database that looks like this: Timestamp (datetime) Message (varchar(20)) Once a day, a particular process inserts the current time and the message 'Started' when it starts. When it is finished it inserts the current time and the message 'Finished'. What is a good query or set of statements that, given a particular date, returns: * *0 if the process never started *1 if the process started but did not finish *2 if the process started and finished There are other messages in the table, but 'Started' and 'Finished' are unique to this one process. EDIT: For bonus karma, raise an error if the data is invalid, for example there are two 'Started' messages, or there is a 'Finished' without a 'Started'. A: Select Count(Message) As Status From Process_monitor Where TimeStamp >= '20080923' And TimeStamp < '20080924' And (Message = 'Started' or Message = 'Finished') You could modify this slightly to detect invalid conditions, like multiple starts, finishes, starts without a finish, etc... Select Case When SumStarted = 0 And SumFinished = 0 Then 'Not Started' When SumStarted = 1 And SumFinished = 0 Then 'Started' When SumStarted = 1 And SumFinished = 1 Then 'Finished' When SumStarted > 1 Then 'Multiple Starts' When SumFinished > 1 Then 'Multiple Finish' When SumFinished > 0 And SumStarted = 0 Then 'Finish Without Start' End As StatusMessage From ( Select Sum(Case When Message = 'Started' Then 1 Else 0 End) As SumStarted, Sum(Case When Message = 'Finished' Then 1 Else 0 End) As SumFinished From Process_monitor Where TimeStamp >= '20080923' And TimeStamp < '20080924' And (Message = 'Started' or Message = 'Finished') ) As AliasName A: DECLARE @TargetDate datetime SET @TargetDate = '2008-01-01' DECLARE @Messages varchar(max) SET @Messages = '' SELECT @Messages = @Messages + '|' + Message FROM process_monitor WHERE @TargetDate <= Timestamp and Timestamp < DateAdd(dd, 1, @TargetDate) and Message in ('Finished', 'Started') ORDER BY Timestamp desc SELECT CASE WHEN @Messages = '|Finished|Started' THEN 2 WHEN @Messages = '|Started' THEN 1 WHEN @Messages = '' THEN 0 ELSE -1 END A: You are missing a column that uniquely identifies the process. Lets add a int column called ProcessID. You would also need another table to identify processes. If you were relying on your original table, you'd never know about processes that never started because there wouldn't be any row for that process. select ProcessID, ProcessName, CASE WHEN (Select COUNT(*) from ProcessActivity where ProcessActivity.processid = Processes.processid and Message = 'STARTED') = 1 And (Select COUNT(*) from ProcessActivity where ProcessActivity.processid = Processes.processid and Message = 'FINISHED') = 0 THEN 1 WHEN (Select COUNT(*) from ProcessActivity where ProcessActivity.processid = Processes.processid and Message = 'STARTED') = 1 And (Select COUNT(*) from ProcessActivity where ProcessActivity.processid = Processes.processid and Message = 'FINISHED') = 1 THEN 2 ELSE 0 END as Status From Processes A: select count(*) from process_monitor where timestamp > yesterday and timestamp < tomorrow. Alternately, you could use a self join with a max to show the newest message for a particular day: select * from process_monitor where timestamp=(select max(timestamp) where timestamp<next_day);
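For the bonus part of the question (raising an error on invalid data), a hedged T-SQL sketch that builds on the counting approach above; the table and column names follow the earlier answers, the date literals are the same example day, and the error text is illustrative:

DECLARE @Started int, @Finished int;

-- COUNT(expr) ignores NULLs, so each variable is a clean per-message count.
SELECT @Started  = COUNT(CASE WHEN Message = 'Started'  THEN 1 END),
       @Finished = COUNT(CASE WHEN Message = 'Finished' THEN 1 END)
FROM Process_monitor
WHERE [Timestamp] >= '20080923' AND [Timestamp] < '20080924';

-- Two starts, two finishes, or a finish without a start are all invalid.
IF @Started > 1 OR @Finished > 1 OR @Finished > @Started
    RAISERROR('Invalid Started/Finished sequence for this date.', 16, 1);
ELSE
    SELECT @Started + @Finished AS Status; -- 0, 1 or 2 as the question requires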
{ "language": "en", "url": "https://stackoverflow.com/questions/122088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Xsd to rnc (or rng) conversion (unix command line) A brief search shows that all available (Unix command line) tools that convert from xsd (XML Schema) to rng (RelaxNG) or rnc (compact RelaxNG) have problems of some sort. First, if I use rngconv: $ wget https://msv.dev.java.net/files/documents/61/31333/rngconv.20060319.zip $ unzip rngconv.20060319.zip $ cd rngconv-20060319/ $ java -jar rngconv.jar my.xsd > my.rng It does not have a way to de-normalize elements so all end up being alternative start elements (it also seems to be a bit buggy). Trang is an alternative, but it doesn't support xsd files on the input, only on the output (why?). It supports DTD, however. Converting to DTD first comes to mind, but a solid xsd2dtd is hard to find as well. The one below: $ xsltproc http://crism.maden.org/consulting/pub/xsl/xsd2dtd.xsl in.xsd > out.dtd Seems to be buggy. All this is very surprising. For all these years of XML (ab)use, are there no decent command line tools for these trivial basic tasks? Are people using only editors? Do those work? I much prefer the command line, especially because I'd like to automate these tasks. Any enlightening comments on this? A: Converting XSD is a very hard task; the XSD specification is a bit of a nightmare and extremely complex. From some quick research, it seems that it is easy to go from RelaxNG to XSD, but that the reverse may not be true or even possible (which explains your question about Trang). I don't understand your question about editors - if you are asking if most people end up converting between XSD and RNG by hand, then yes, I expect so. The best advice may be to avoid XSD if possible, or at least use RNG as the definitive document and generate the XSD from that. You might also want to take a look at schematron. A: True, trang does not accept xsd on the input side. Trang can however take a set of xml files which should meet the spec and generate a rnc or rng schema which they would all be valid against. Downsides: *It requires many compliant xml files (I'd imagine the more the better) *Resulting schema could probably still use some tweaking. Sample Case: If my compliant xml files are stashed in 1.xml 2.xml 3.xml 4.xml 5.xml the following command would tell trang to output a rnc schema that would be valid for all of them: java -jar trang.jar -I xml -O rnc 1.xml 2.xml 3.xml 4.xml 5.xml foo.rnc Conclusion If you have a nice test set of xml files which meet your schema (or you can easily create them) this may be the best option available. I wish you the best of luck. A: There is an online converter (XSD -> RNG) added to the list at http://relaxng.org/#conversion. I have tried to convert maven-v4_0_0.xsd for validation of pom.xml files in emacs, without any luck though. The site also contains the XSL stylesheet that you could use with xsltproc, can't vouch for the quality of the outcome... A: Again, regarding editors: I see that there's no way to do this with Oxygen which seems to be a popular tool. A: Another possibility might be http://www.brics.dk/schematools/, but I haven't tried it yet.
{ "language": "en", "url": "https://stackoverflow.com/questions/122089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Apache Mod-Rewrite Primers? I am wondering what primers/guides/tutorials/etc. are out there for learning to rewrite URLs using Apache/.htaccess? Where is a good place to start? My primary interest is learning how to point certain directories to others, and how to use portions of a URL as parameters to a script (i.e. "/some/subdirs/like/this" => "script.php?a=some&b=subdirs&c=like&d=this"). A: I found this to be pretty useful: http://www.addedbytes.com/apache/url-rewriting-for-beginners/ A: I would go straight to the horse's mouth: http://httpd.apache.org/docs/1.3/mod/mod_rewrite.html but as a gentler introduction: http://www.workingwith.me.uk/articles/scripting/mod_rewrite A: The Apache Documentation site has a good introduction to using mod_rewrite. It covers how the directive works and has quite a few examples, eg: RewriteRule ^/games.* /usr/local/games/web RewriteRule ^/product/(.*)/view$ /var/web/productdb/$1 It covers everything from the basic syntax for changing the URI (which is what you seemed to be asking about) as well as using regular expressions, conditions and responding with redirects. The apache documents have always been useful to me. O'Reilly's Apache: The Definitive Guide is also a good physical resource. A: What's wrong with the manual? A: The Apache manual has lots of examples. *URL Rewriting Guide *URL Rewriting Guide - Advanced topics
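For the specific mapping the question asks about, a hedged .htaccess sketch (script.php and the four-segment pattern mirror the question's example; this assumes mod_rewrite is enabled, and depending on where the rules live you may also need a RewriteBase):

RewriteEngine On
# /some/subdirs/like/this -> script.php?a=some&b=subdirs&c=like&d=this
RewriteRule ^([^/]+)/([^/]+)/([^/]+)/([^/]+)/?$ script.php?a=$1&b=$2&c=$3&d=$4 [L,QSA]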
{ "language": "en", "url": "https://stackoverflow.com/questions/122097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Best practice for parameterizing GWT app? I have a Google Web Toolkit (GWT) application and when I link to it, I want to pass some arguments/parameters that it can use to dynamically retrieve data. E.g. if it were a stock chart application, I would want my link to contain the symbol and then have the GWT app read that and make a request to some stock service. E.g. http://myapp/gwt/StockChart?symbol=GOOG would be the link to my StockChart GWT app and it would make a request to my stock info web service for the GOOG stock. So far, I've been using the server-side code to add Javascript variables to the page and then I've read those variables using JSNI (JavaScript Native Interface). For example: In the host HTML: <script type="text/javascript"> var stockSymbol = '<%= request.getParameter("symbol") %>'; </script> In the GWT code: public static native String getSymbol() /*-{ return $wnd.stockSymbol; }-*/; (Although this code is based on real code that works, I've modified it for this question so I might have goofed somewhere) However, this doesn't always work well in hosted mode (especially with arrays) and since JSNI wasn't around in version 1.4 and previous, I'm guessing there's another/better way. A: If you want to read query string parameters from the request you can use the com.google.gwt.user.client.Window class: // returns whole query string public static String getQueryString() { return Window.Location.getQueryString(); } // returns specific parameter public static String getQueryString(String name) { return Window.Location.getParameter(name); } A: It is also a nice option to 'parameterize' a GWT application using hash values. So, instead of http://myapp/gwt/StockChart?symbol=GOOG use http://myapp/gwt/StockChart#symbol=GOOG There is some nice tooling support for such 'parameters' through GWT's History Mechanism.
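Building on the hash-token answer above, a hedged sketch of reading such 'parameters' from the fragment in GWT Java (History is com.google.gwt.user.client.History; the key=value format is an application-level convention assumed here, not something GWT defines):

// URL such as http://myapp/gwt/StockChart#symbol=GOOG
String token = History.getToken(); // "symbol=GOOG"
String symbol = null;
for (String pair : token.split("&")) {
    String[] kv = pair.split("=", 2);
    if (kv.length == 2 && "symbol".equals(kv[0])) {
        symbol = kv[1]; // pass this to the stock service request
    }
}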
{ "language": "en", "url": "https://stackoverflow.com/questions/122098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What GUI Framework would you recommend Management wants us to switch to Infragistics. But my boss says that any 3rd party is destined to fail, and at that point there won't be anything we can do about it. Is he right, is he wrong? I don't know. What are your opinions, is it better to use something developed inside the company, or do you prefer something like DevExpress, Infragistics, Krypton to name a few or is there something else even better that you would recommend instead? A: When shopping for 3rd party controls, look for the option to purchase source code (for a reasonable price). With source code you should be able to make any necessary changes to the components to keep them running well in your environment. The Krypton Suite of controls from Component Factory does just this. Phil offers the source code for the entire suite for an amazing price (currently less than $400). I have used the Krypton Suite in my development for a year now and I have been extremely pleased with it. Krypton gives me the power to create shrink wrapped software with Office 2007 UI look and feel with consistency far beyond any other toolkit I tried. Phil is also very active in the support forums and provides you a direct link to the development path of the software. A: .NET 3.5 SP1 is really mature enough to do ASP.NET and standalone UI development (WPF and Silverlight). What are your main criteria for going with third party components and frameworks? If you just need some charting or any other financial domain tools and controls, then yeah, you need to consider third party components for a faster turnaround. Other than that, I would say the .NET framework itself has rich libraries to do most things. A: I don't think it's entirely bad to rely on third parties. Some are very reputable and will do a great job of supporting you. But on the other hand, some are terrible to work with even if they stay in business. I don't know anything about the frameworks you've mentioned, though. Have you considered an open source framework? That way you can still work on it yourself if all else fails. Of course, you have to take into account licensing requirements when doing this, but I think it's definitely something you should look into if it's appropriate for the project. A: As long as you have access to the source code of the library and are able to modify it and distribute the modified library without paying any royalties, your boss's fears are unfounded. I'd go for DevExpress myself, but they are quite pricey when compared to the other frameworks. A: I think that your boss's concern could be better phrased as "One of these days we're gonna need to change the 3rd party control in a way that requires modifying its source." Depending on the license that comes with the third party control, this might get sticky. For something like a UI control, in my experience .NET makes it pretty easy to make whatever it is you need anyway. Maybe as a way to resolve the debate you could propose to knock out a quick prototype of whatever control(s) you'd otherwise need to borrow. That will give you insight as to (a) whether the third party library is needed and (b) what requirements you have of a third party library should you choose to go in that direction. A: The important thing to plan for when using 3rd party controls is continuity. If they do go under, what does that mean to you? How much are you relying on their framework, and how much work would it be to switch to something else? Is there something else that does what you need?
If you have source code for the component in question, it puts you in a much better position - you can at least fix bugs and possibly even maintain/extend it yourself. On the other end of the spectrum is tightly controlled software, where you have to renew every year and it expires if you don't. If you're using something like this, if they go under, it forces your hand, and you'll have to do something. It's really a balancing act: saving you work/money vs the probability of them disappearing vs relying on a 3rd party. I once had a boss that was like yours. The thing they miss is that you're entirely relying on a 3rd party anyway. If you're using .NET, you're trusting that Microsoft is not going to go under (probably not..), discontinue it (maybe), or radically change it (quite possible). Of course he's gone now, and we've since started using a handful of 3rd party controls (some open source) that have saved us hundreds of hours of dev time, or just let us do features we never would have done otherwise (because it would take too long). A: If your company makes UI, then by all means develop and maintain your controls in house. If that is not your main business objective you should find a code vendor that offers source code for the control (DevExpress, Telerik...). And when you implement these controls, give yourself a layer of abstraction so that it is simpler to switch vendors in the future. A: Thank you very much for your insights. I've used Krypton myself but only the part that is free. I think it's more of a "We want it to look slick". I share my boss's concern with using 3rd party controls, but I also agree with you guys in that it's far better to concentrate on the task than on providing controls that look good. The question the manager faced my boss with when he said that it's a bad idea to use Infragistics was "But can you do something similar in a reasonable amount of time?". The answer was obviously "No". I'll try suggesting Krypton Toolkit, I've used it a bit in the past as I've said before, I did have some trouble with it. I think the datagrid is the main focus of the problem, since most of the toolkits have options for customizing the appearance. Again, thanks so much for your answers. A: I think when you buy third party controls you should think about the value you receive for the money you pay. There are dinosaur vendors like Telerik, Syncfusion, DevExpress, Infragistics which offer GUI with good quality and support, but they are not very cheap, because the brand costs money. There are other great WinForms suites coming from smaller vendors such as ComponentFactory, DevComponents, VIBlend, Nevron which offer good-looking controls for WinForms in Office 2007 style but much cheaper.
{ "language": "en", "url": "https://stackoverflow.com/questions/122099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is the most efficient way to deep clone an object in JavaScript? What is the most efficient way to clone a JavaScript object? I've seen obj = eval(uneval(o)); being used, but that's non-standard and only supported by Firefox. I've done things like obj = JSON.parse(JSON.stringify(o)); but question the efficiency. I've also seen recursive copying functions with various flaws. I'm surprised no canonical solution exists. A: Cloning an object was always a concern in JS, but that was all before ES6. I list different ways of copying an object in JavaScript below; imagine you have the Object below and would like to have a deep copy of that: var obj = {a:1, b:2, c:3, d:4}; There are a few ways to copy this object, without changing the origin: *ES5+, Using a simple function to do the copy for you: function deepCopyObj(obj) { if (null == obj || "object" != typeof obj) return obj; if (obj instanceof Date) { var copy = new Date(); copy.setTime(obj.getTime()); return copy; } if (obj instanceof Array) { var copy = []; for (var i = 0, len = obj.length; i < len; i++) { copy[i] = deepCopyObj(obj[i]); } return copy; } if (obj instanceof Object) { var copy = {}; for (var attr in obj) { if (obj.hasOwnProperty(attr)) copy[attr] = deepCopyObj(obj[attr]); } return copy; } throw new Error("Unable to copy this object."); } *ES5+, using JSON.parse and JSON.stringify. var deepCopyObj = JSON.parse(JSON.stringify(obj)); *Angular: var deepCopyObj = angular.copy(obj); *jQuery: var deepCopyObj = jQuery.extend(true, {}, obj); *Underscore.js & Lodash: var deepCopyObj = _.cloneDeep(obj); //latest version of Underscore.js makes shallow copy Hope these help… A: I use the npm clone library. Apparently it also works in the browser. https://www.npmjs.com/package/clone let a = clone(b) A: In my experience, a recursive version vastly outperforms JSON.parse(JSON.stringify(obj)). Here is a modernized recursive deep object copy function which can fit on a single line: function deepCopy(obj) { return Object.keys(obj).reduce((v, d) => Object.assign(v, { [d]: (obj[d] && obj[d].constructor === Object) ? deepCopy(obj[d]) : obj[d] }), {}); } This is performing around 40 times faster than the JSON.parse... method. A: var clone = function() { var newObj = (this instanceof Array) ? [] : {}; for (var i in this) { if (this[i] && typeof this[i] == "object") { newObj[i] = this[i].clone(); } else { newObj[i] = this[i]; } } return newObj; }; Object.defineProperty( Object.prototype, "clone", {value: clone, enumerable: false}); A: There’s a library (called “clone”), that does this quite well. It provides the most complete recursive cloning/copying of arbitrary objects that I know of. It also supports circular references, which is not covered by the other answers, yet. You can find it on npm, too. It can be used for the browser as well as Node.js. Here is an example on how to use it: Install it with npm install clone or package it with Ender. ender build clone [...] You can also download the source code manually. Then you can use it in your source code. var clone = require('clone'); var a = { foo: { bar: 'baz' } }; // initial value of a var b = clone(a); // clone a -> b a.foo.bar = 'foo'; // change a console.log(a); // { foo: { bar: 'foo' } } console.log(b); // { foo: { bar: 'baz' } } (Disclaimer: I’m the author of the library.)
A: Structured Cloning 2022 update: The structuredClone global function is already available in Firefox 94, Node 17 and Deno 1.14. The HTML standard includes an internal structured cloning/serialization algorithm that can create deep clones of objects. It is still limited to certain built-in types, but in addition to the few types supported by JSON it also supports Dates, RegExps, Maps, Sets, Blobs, FileLists, ImageDatas, sparse Arrays, Typed Arrays, and probably more in the future. It also preserves references within the cloned data, allowing it to support cyclical and recursive structures that would cause errors for JSON. Support in Node.js: The structuredClone global function is provided by Node 17.0: const clone = structuredClone(original); Previous versions: The v8 module in Node.js (as of Node 11) exposes the structured serialization API directly, but this functionality is still marked as "experimental", and subject to change or removal in future versions. If you're using a compatible version, cloning an object is as simple as: const v8 = require('v8'); const structuredClone = obj => { return v8.deserialize(v8.serialize(obj)); }; Direct Support in Browsers: Available in Firefox 94 The structuredClone global function will soon be provided by all major browsers (having previously been discussed in whatwg/html#793 on GitHub). It looks / will look like this: const clone = structuredClone(original); Until this is shipped, browsers' structured clone implementations are only exposed indirectly. Asynchronous Workaround: Usable. The lower-overhead way to create a structured clone with existing APIs is to post the data through one port of a MessageChannel. The other port will emit a message event with a structured clone of the attached .data. Unfortunately, listening for these events is necessarily asynchronous, and the synchronous alternatives are less practical. class StructuredCloner { constructor() { this.pendingClones_ = new Map(); this.nextKey_ = 0; const channel = new MessageChannel(); this.inPort_ = channel.port1; this.outPort_ = channel.port2; this.outPort_.onmessage = ({data: {key, value}}) => { const resolve = this.pendingClones_.get(key); resolve(value); this.pendingClones_.delete(key); }; this.outPort_.start(); } cloneAsync(value) { return new Promise(resolve => { const key = this.nextKey_++; this.pendingClones_.set(key, resolve); this.inPort_.postMessage({key, value}); }); } } const structuredCloneAsync = window.structuredCloneAsync = StructuredCloner.prototype.cloneAsync.bind(new StructuredCloner); Example Use: const main = async () => { const original = { date: new Date(), number: Math.random() }; original.self = original; const clone = await structuredCloneAsync(original); // They're different objects: console.assert(original !== clone); console.assert(original.date !== clone.date); // They're cyclical: console.assert(original.self === original); console.assert(clone.self === clone); // They contain equivalent values: console.assert(original.number === clone.number); console.assert(Number(original.date) === Number(clone.date)); console.log("Assertions complete."); }; main(); Synchronous Workarounds: Awful! There are no good options for creating structured clones synchronously. Here are a couple of impractical hacks instead. history.pushState() and history.replaceState() both create a structured clone of their first argument, and assign that value to history.state.
You can use this to create a structured clone of any object like this: const structuredClone = obj => { const oldState = history.state; history.replaceState(obj, null); const clonedObj = history.state; history.replaceState(oldState, null); return clonedObj; }; Example Use: 'use strict'; const main = () => { const original = { date: new Date(), number: Math.random() }; original.self = original; const clone = structuredClone(original); // They're different objects: console.assert(original !== clone); console.assert(original.date !== clone.date); // They're cyclical: console.assert(original.self === original); console.assert(clone.self === clone); // They contain equivalent values: console.assert(original.number === clone.number); console.assert(Number(original.date) === Number(clone.date)); console.log("Assertions complete."); }; const structuredClone = obj => { const oldState = history.state; history.replaceState(obj, null); const clonedObj = history.state; history.replaceState(oldState, null); return clonedObj; }; main(); Though synchronous, this can be extremely slow. It incurs all of the overhead associated with manipulating the browser history. Calling this method repeatedly can cause Chrome to become temporarily unresponsive. The Notification constructor creates a structured clone of its associated data. It also attempts to display a browser notification to the user, but this will silently fail unless you have requested notification permission. In case you already have permission for other purposes, we immediately close the notification we've created. const structuredClone = obj => { const n = new Notification('', {data: obj, silent: true}); n.onshow = n.close.bind(n); return n.data; }; Example Use: 'use strict'; const main = () => { const original = { date: new Date(), number: Math.random() }; original.self = original; const clone = structuredClone(original); // They're different objects: console.assert(original !== clone); console.assert(original.date !== clone.date); // They're cyclical: console.assert(original.self === original); console.assert(clone.self === clone); // They contain equivalent values: console.assert(original.number === clone.number); console.assert(Number(original.date) === Number(clone.date)); console.log("Assertions complete."); }; const structuredClone = obj => { const n = new Notification('', {data: obj, silent: true}); n.close(); return n.data; }; main(); A: Recursion can get expensive in JavaScript (deeply nested objects risk overflowing the call stack), yet most answers I have found use recursion, while the JSON approach will skip the non-JSON-convertible parts (functions, etc.). So I did a little research and found this trampoline technique to avoid recursion. Here's the code: /* * Trampoline to avoid recursion in JavaScript, see: * https://www.integralist.co.uk/posts/functional-recursive-javascript-programming/ */ function trampoline() { var func = arguments[0]; var args = []; for (var i = 1; i < arguments.length; i++) { args[i - 1] = arguments[i]; } var currentBatch = func.apply(this, args); var nextBatch = []; while (currentBatch && currentBatch.length > 0) { currentBatch.forEach(function(eachFunc) { var ret = eachFunc(); if (ret && ret.length > 0) { nextBatch = nextBatch.concat(ret); } }); currentBatch = nextBatch; nextBatch = []; } }; /* * Deep clone an object using the trampoline technique. * * @param target {Object} Object to clone * @return {Object} Cloned object.
*/ function clone(target) { if (typeof target !== 'object') { return target; } if (target == null || Object.keys(target).length == 0) { return target; } function _clone(b, a) { var nextBatch = []; for (var key in b) { if (typeof b[key] === 'object' && b[key] !== null) { if (b[key] instanceof Array) { a[key] = []; } else { a[key] = {}; } nextBatch.push(_clone.bind(null, b[key], a[key])); } else { a[key] = b[key]; } } return nextBatch; }; var ret = target instanceof Array ? [] : {}; (trampoline.bind(null, _clone))(target, ret); return ret; }; A: Single-line ECMAScript 6 solution (special object types like Date/Regex not handled): const clone = (o) => typeof o === 'object' && o !== null ? // only clone objects (Array.isArray(o) ? // if cloning an array o.map(e => clone(e)) : // clone each of its elements Object.keys(o).reduce( // otherwise reduce every key in the object (r, k) => (r[k] = clone(o[k]), r), {} // and save its cloned value into a new object ) ) : o; // return non-objects as is var x = { nested: { name: 'test' } }; var y = clone(x); console.log(x.nested !== y.nested); A: Lodash has a function that handles that for you like so. var foo = {a: 'a', b: {c:'d', e: {f: 'g'}}}; var bar = _.cloneDeep(foo); // bar = {a: 'a', b: {c:'d', e: {f: 'g'}}} Read the docs here. A: ES 2017 example: let objectToCopy = someObj; let copyOfObject = {}; Object.defineProperties(copyOfObject, Object.getOwnPropertyDescriptors(objectToCopy)); // copyOfObject will now be the same as objectToCopy A: Native deep cloning There's now a JS standard called "structured cloning" that works experimentally in Node 11 and later, is landing in browsers, and has polyfills for existing systems. structuredClone(value) If needed, load the polyfill first: import structuredClone from '@ungap/structured-clone'; See this answer for more details. Older answers Fast cloning with data loss - JSON.parse/stringify If you do not use Dates, functions, undefined, Infinity, RegExps, Maps, Sets, Blobs, FileLists, ImageDatas, sparse Arrays, Typed Arrays or other complex types within your object, a very simple one-liner to deep clone an object is: JSON.parse(JSON.stringify(object)) const a = { string: 'string', number: 123, bool: false, nul: null, date: new Date(), // stringified undef: undefined, // lost inf: Infinity, // forced to 'null' re: /.*/, // lost } console.log(a); console.log(typeof a.date); // Date object const clone = JSON.parse(JSON.stringify(a)); console.log(clone); console.log(typeof clone.date); // result of .toISOString() See Corban's answer for benchmarks. Reliable cloning using a library Since cloning objects is not trivial (complex types, circular references, functions, etc.), most major libraries provide a function to clone objects. Don't reinvent the wheel - if you're already using a library, check if it has an object cloning function. For example, * *lodash - cloneDeep; can be imported separately via the lodash.clonedeep module and is probably your best choice if you're not already using a library that provides a deep cloning function *AngularJS - angular.copy *jQuery - jQuery.extend(true, { }, oldObject); .clone() only clones DOM elements *just library - just-clone; Part of a library of zero-dependency npm modules that each do just one thing. Guilt-free utilities for every occasion. A: I know this is an old post, but I thought this may be of some help to the next person who stumbles along. An object literal creates a brand-new object each time it is evaluated, so an unassigned literal keeps no shared reference in memory.
So to make an object that you want to share among other objects, you'll have to create a factory like so: var a = function(){ return { father:'zacharias' }; }, b = a(), c = a(); c.father = 'johndoe'; alert(b.father); A: Assuming that you have only properties and not any functions in your object, you can just use: var newObject = JSON.parse(JSON.stringify(oldObject)); A: If you're using it, the Underscore.js library has a clone method. var newObject = _.clone(oldObject); A: This is the fastest method I have created that doesn't use the prototype, so it will maintain hasOwnProperty in the new object. The solution is to iterate the top-level properties of the original object, make two copies, delete each property from the original and then reset the original object and return the new copy. It only has to iterate as many times as there are top-level properties. This saves all the if conditions to check if each property is a function, object, string, etc., and doesn't have to iterate each descendant property. The only drawback is that the original object must be supplied with its original created namespace, in order to reset it. function copyDeleteAndReset(namespace, strObjName){ var obj = namespace[strObjName], objNew = {}, objOrig = {}; for(var i in obj){ if(obj.hasOwnProperty(i)){ objNew[i] = objOrig[i] = obj[i]; delete obj[i]; } } namespace[strObjName] = objOrig; return objNew; } var namespace = {}; namespace.objOrig = { '0':{ innerObj:{a:0,b:1,c:2} } } var objNew = copyDeleteAndReset(namespace,'objOrig'); objNew['0'] = 'NEW VALUE'; console.log(objNew['0']); // 'NEW VALUE' console.log(namespace.objOrig['0']); // innerObj:{a:0,b:1,c:2} A: There are a lot of answers, but none of them gave the desired effect I needed. I wanted to utilize the power of jQuery's deep copy... However, when it runs into an array, it simply copies the reference to the array and deep copies the items in it. To get around this, I made a nice little recursive function that will create a new array automatically. (It even checks for kendo.data.ObservableArray if you want it to! Though, make sure you call kendo.observable(newItem) if you want the Arrays to be observable again.) So, to fully copy an existing item, you just do this: var newItem = jQuery.extend(true, {}, oldItem); createNewArrays(newItem); function createNewArrays(obj) { for (var prop in obj) { if ((kendo != null && obj[prop] instanceof kendo.data.ObservableArray) || obj[prop] instanceof Array) { var copy = []; $.each(obj[prop], function (i, item) { var newChild = $.extend(true, {}, item); createNewArrays(newChild); copy.push(newChild); }); obj[prop] = copy; } } } A: I usually use var newObj = JSON.parse( JSON.stringify(oldObject) ); but, here's a more proper way: var o = {}; var oo = Object.create(o); (o === oo); // => false Watch out for legacy browsers! A: For future reference, the current draft of ECMAScript 6 introduces Object.assign as a way of cloning objects. Example code would be: var obj1 = { a: true, b: 1 }; var obj2 = Object.assign({}, obj1); console.log(obj2); // { a: true, b: 1 } At the time of writing support is limited to Firefox 34 in browsers so it’s not usable in production code just yet (unless you’re writing a Firefox extension of course).
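To make the shallow-copy caveat concrete, here is a quick sketch (my own example, not from the spec) showing that Object.assign copies nested objects by reference:

var user = { name: 'Ada', address: { city: 'London' } };
var copy = Object.assign({}, user);

copy.name = 'Grace';          // top-level property: independent
copy.address.city = 'Paris';  // nested object: shared reference!

console.log(user.name);         // 'Ada'  — unaffected
console.log(user.address.city); // 'Paris' — mutated through the copy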
A: There are so many ways to achieve this, but if you want to do this without any library, you can use the following: const cloneObject = (oldObject) => { let newObject = oldObject; if (oldObject && typeof oldObject === 'object') { if(Array.isArray(oldObject)) { newObject = []; } else if (Object.prototype.toString.call(oldObject) === '[object Date]' && !isNaN(oldObject)) { newObject = new Date(oldObject.getTime()); } else { newObject = {}; for (let i in oldObject) { newObject[i] = cloneObject(oldObject[i]); } } } return newObject; } Let me know what you think. A: Object.assign({}, sourceObj) only clones the object correctly if none of its properties are of reference type. For example: obj={a:"lol",b:["yes","no","maybe"]} clonedObj = Object.assign({},obj); clonedObj.b.push("skip") // changes will be reflected in the actual obj as well, because b is copied by reference. obj.b // logs yes,no,maybe,skip So deep cloning cannot be achieved this way. The best solution that works is var obj = JSON.stringify(yourSourceObj) var cloned = JSON.parse(obj); A: Here's a version of ConroyP's answer above that works even if the constructor has required parameters: //If Object.create isn't already defined, we just do the simple shim, //without the second argument, since that's all we need here var object_create = Object.create; if (typeof object_create !== 'function') { object_create = function(o) { function F() {} F.prototype = o; return new F(); }; } function deepCopy(obj) { if(obj == null || typeof(obj) !== 'object'){ return obj; } //make sure the returned object has the same prototype as the original var ret = object_create(obj.constructor.prototype); for(var key in obj){ ret[key] = deepCopy(obj[key]); } return ret; } This function is also available in my simpleoo library. Edit: Here's a more robust version (thanks to Justin McCandless this now supports cyclic references as well): /** * Deep copy an object (make copies of all its object properties, sub-properties, etc.) * An improved version of http://keithdevens.com/weblog/archive/2007/Jun/07/javascript.clone * that doesn't break if the constructor has required parameters * * It also borrows some code from http://stackoverflow.com/a/11621004/560114 */ function deepCopy(src, /* INTERNAL */ _visited, _copiesVisited) { if(src === null || typeof(src) !== 'object'){ return src; } //Honor native/custom clone methods if(typeof src.clone == 'function'){ return src.clone(true); } //Special cases: //Date if(src instanceof Date){ return new Date(src.getTime()); } //RegExp if(src instanceof RegExp){ return new RegExp(src); } //DOM Element if(src.nodeType && typeof src.cloneNode == 'function'){ return src.cloneNode(true); } // Initialize the visited objects arrays if needed. // This is used to detect cyclic references. if (_visited === undefined){ _visited = []; _copiesVisited = []; } // Check if this object has already been visited var i, len = _visited.length; for (i = 0; i < len; i++) { // If so, get the copy we already made if (src === _visited[i]) { return _copiesVisited[i]; } } //Array if (Object.prototype.toString.call(src) == '[object Array]') { //[].slice() by itself would soft clone var ret = src.slice(); //add it to the visited array _visited.push(src); _copiesVisited.push(ret); var i = ret.length; while (i--) { ret[i] = deepCopy(ret[i], _visited, _copiesVisited); } return ret; } //If we've reached here, we have a regular object //make sure the returned object has the same prototype as the original var proto = (Object.getPrototypeOf ?
Object.getPrototypeOf(src): src.__proto__); if (!proto) { proto = src.constructor.prototype; //this line would probably only be reached by very old browsers } var dest = object_create(proto); //add this object to the visited array _visited.push(src); _copiesVisited.push(dest); for (var key in src) { //Note: this does NOT preserve ES5 property attributes like 'writable', 'enumerable', etc. //For an example of how this could be modified to do so, see the singleMixin() function dest[key] = deepCopy(src[key], _visited, _copiesVisited); } return dest; } //If Object.create isn't already defined, we just do the simple shim, //without the second argument, since that's all we need here var object_create = Object.create; if (typeof object_create !== 'function') { object_create = function(o) { function F() {} F.prototype = o; return new F(); }; } A: This is my version of object cloner. This is a stand-alone version of the jQuery method, with only a few tweaks and adjustments. Check out the fiddle. I used a lot of jQuery until the day I realized that I'd use only this function most of the time x_x. The usage is the same as described in the jQuery API: * *Non-deep clone: extend(object_dest, object_source); *Deep clone: extend(true, object_dest, object_source); One extra function is used to determine whether an object is suitable for cloning. /** * This is a quasi clone of jQuery's extend() function. * by Romain WEEGER for wJs library - www.wexample.com * @returns {*|{}} */ function extend() { // Make a copy of arguments to avoid JavaScript inspector hints. var to_add, name, copy_is_array, clone, // The target object who receive parameters // form other objects. target = arguments[0] || {}, // Index of first argument to mix to target. i = 1, // Mix target with all function arguments. length = arguments.length, // Define if we merge object recursively. deep = false; // Handle a deep copy situation. if (typeof target === 'boolean') { deep = target; // Skip the boolean and the target. target = arguments[ i ] || {}; // Use next object as first added. i++; } // Handle case when target is a string or something (possible in deep copy) if (typeof target !== 'object' && typeof target !== 'function') { target = {}; } // Loop through arguments. for (; i < length; i += 1) { // Only deal with non-null/undefined values if ((to_add = arguments[ i ]) !== null) { // Extend the base object. for (name in to_add) { // We do not wrap for loop into hasOwnProperty, // to access to all values of object. // Prevent never-ending loop. if (target === to_add[name]) { continue; } // Recurse if we're merging plain objects or arrays. if (deep && to_add[name] && (is_plain_object(to_add[name]) || (copy_is_array = Array.isArray(to_add[name])))) { if (copy_is_array) { copy_is_array = false; clone = target[name] && Array.isArray(target[name]) ? target[name] : []; } else { clone = target[name] && is_plain_object(target[name]) ? target[name] : {}; } // Never move original objects, clone them. target[name] = extend(deep, clone, to_add[name]); } // Don't bring in undefined values. else if (to_add[name] !== undefined) { target[name] = to_add[name]; } } } } return target; } /** * Check to see if an object is a plain object * (created using "{}" or "new Object"). * Forked from jQuery.
* @param obj * @returns {boolean} */ function is_plain_object(obj) { // Not plain objects: // - Any object or value whose internal [[Class]] property is not "[object Object]" // - DOM nodes // - window if (obj === null || typeof obj !== "object" || obj.nodeType || (obj !== null && obj === obj.window)) { return false; } // Support: Firefox <20 // The try/catch suppresses exceptions thrown when attempting to access // the "constructor" property of certain host objects, i.e. |window.location| // https://bugzilla.mozilla.org/show_bug.cgi?id=814622 try { if (obj.constructor && !this.hasOwnProperty.call(obj.constructor.prototype, "isPrototypeOf")) { return false; } } catch (e) { return false; } // If the function hasn't returned already, we're confident that // |obj| is a plain object, created by {} or constructed with new Object return true; } A: Here is my way of deep cloning an object with ES2015 default parameters and the spread operator const makeDeepCopy = (obj, copy = Array.isArray(obj) ? [ ...obj ] : { ...obj }) => { for (let item in copy) { if (copy[item] && typeof copy[item] === 'object') { copy[item] = makeDeepCopy(copy[item]) } } return copy } const testObj = { "type": "object", "properties": { "userId": { "type": "string", "chance": "guid" }, "emailAddr": { "type": "string", "chance": { "email": { "domain": "fake.com" } }, "pattern": ".+@fake.com" } }, "required": [ "userId", "emailAddr" ] } const makeDeepCopy = (obj, copy = Array.isArray(obj) ? [ ...obj ] : { ...obj }) => { for (let item in copy) { if (copy[item] && typeof copy[item] === 'object') { copy[item] = makeDeepCopy(copy[item]) } } return copy } console.log(makeDeepCopy(testObj)) A: Looking through this long list of answers, nearly all the solutions have been covered except one that I am aware of. This is the list of VANILLA JS ways of deep cloning an object. * *JSON.parse(JSON.stringify( obj ) ); *Through history.state with pushState or replaceState *Web Notifications API, but this has the downside of asking the user for permissions. *Doing your own recursive loop through the object to copy each level. *The answer I didn't see -> Using ServiceWorkers. The messages (objects) passed back and forth between the page and the ServiceWorker script will be deep clones of any object. A: Hope this helps.
function deepClone(obj) { /* * Duplicates an object */ var ret = null; if (obj !== Object(obj)) { // primitive types return obj; } if (obj instanceof String || obj instanceof Number || obj instanceof Boolean) { // string objects ret = obj; // for ex: obj = new String("Spidergap") } else if (obj instanceof Date) { // date ret = new Date(obj.getTime()); } else ret = Object.create(obj.constructor.prototype); var prop = null; var allProps = Object.getOwnPropertyNames(obj); //gets non enumerables also var props = {}; for (var i in allProps) { prop = allProps[i]; props[prop] = false; } for (i in obj) { props[i] = i; } //now props contain both enums and non enums var propDescriptor = null; var newPropVal = null; // value of the property in new object for (i in props) { prop = obj[i]; propDescriptor = Object.getOwnPropertyDescriptor(obj, i); if (Array.isArray(prop)) { //not backward compatible prop = prop.slice(); // to copy the array } else if (prop instanceof Date == true) { prop = new Date(prop.getTime()); } else if (prop instanceof Object == true) { if (prop instanceof Function == true) { // function if (!Function.prototype.clone) { Function.prototype.clone = function() { var that = this; var temp = function tmp() { return that.apply(this, arguments); }; for (var ky in this) { temp[ky] = this[ky]; } return temp; } } prop = prop.clone(); } else // normal object { prop = deepClone(prop); } } newPropVal = { value: prop }; if (propDescriptor) { /* * If property descriptors are there, they must be copied */ newPropVal.enumerable = propDescriptor.enumerable; newPropVal.writable = propDescriptor.writable; } if (!ret.hasOwnProperty(i)) // when String or other predefined objects Object.defineProperty(ret, i, newPropVal); // non enumerable } return ret; } https://github.com/jinujd/Javascript-Deep-Clone A: My scenario was a bit different. I had an object with nested objects as well as functions. Therefore, Object.assign() and JSON.stringify() were not solutions to my problem. Using third-party libraries was not an option for me either. Hence, I decided to make a simple function that uses built-in methods to copy an object with its literal properties, its nested objects, and functions. let deepCopy = (target, source) => { Object.assign(target, source); // check if there are any nested objects Object.keys(source).forEach((prop) => { /** * assign function copies functions and * literals (int, strings, etc...) * except for objects and arrays, so: */ if (typeof(source[prop]) === 'object') { // check if the item is, in fact, an array if (Array.isArray(source[prop])) { // clear the copied reference of the nested array target[prop] = Array(); // iterate the array's items and copy them over source[prop].forEach((item, index) => { // array's items could be objects too!
if (typeof(item) === 'object') { // clear the copied reference of nested objects target[prop][index] = Object(); // and redo the process for nested objects deepCopy(target[prop][index], item); } else { target[prop].push(item); } }); // otherwise, treat it as an object } else { // clear the copied reference of nested objects target[prop] = Object(); // and redo the process for nested objects deepCopy(target[prop], source[prop]); } } }); }; Here's a test code: let a = { name: 'Human', func: () => { console.log('Hi!'); }, prop: { age: 21, info: { hasShirt: true, hasHat: false } }, mark: [89, 92, { exam: [1, 2, 3] }] }; let b = Object(); deepCopy(b, a); a.name = 'Alien'; a.func = () => { console.log('Wassup!'); }; a.prop.age = 1024; a.prop.info.hasShirt = false; a.mark[0] = 87; a.mark[1] = 91; a.mark[2].exam = [4, 5, 6]; console.log(a); // updated props console.log(b); For efficiency-related concerns, I believe this is the simplest and most efficient solution to the problem I had. I would appreciate any comments on this algorithm that could make it more efficient. A: If there wasn't any builtin one, you could try: function clone(obj) { if (obj === null || typeof (obj) !== 'object' || 'isActiveClone' in obj) return obj; if (obj instanceof Date) var temp = new obj.constructor(); //or new Date(obj); else var temp = obj.constructor(); for (var key in obj) { if (Object.prototype.hasOwnProperty.call(obj, key)) { obj['isActiveClone'] = null; temp[key] = clone(obj[key]); delete obj['isActiveClone']; } } return temp; } A: The following creates a second, independent instance of the same object. I found it and am using it currently. It's simple and easy to use. var objToCreate = JSON.parse(JSON.stringify(cloneThis)); A: I think that this is the best solution if you want to generalize your object cloning algorithm. It can be used with or without jQuery, although I recommend leaving jQuery's extend method out if you want the cloned object to have the same "class" as the original one.
function clone(obj){ if(typeof(obj) == 'function')//it's a simple function return obj; // or if it's not an object (but could be an array... even if in JavaScript arrays are objects) if(typeof(obj) != 'object' || obj.constructor.toString().indexOf('Array')!=-1) if(JSON != undefined)//if we have the JSON obj try{ return JSON.parse(JSON.stringify(obj)); }catch(err){ return JSON.parse('"'+JSON.stringify(obj)+'"'); } else try{ return eval(uneval(obj)); }catch(err){ return eval('"'+uneval(obj)+'"'); } // I used to rely on jQuery for this, but the "extend" function returns //an object similar to the one cloned, //but that was not an instance (instanceof) of the cloned class /* if(jQuery != undefined)//if we use the jQuery plugin return jQuery.extend(true,{},obj); else//we recursively clone the object */ return (function _clone(obj){ if(obj == null || typeof(obj) != 'object') return obj; function temp () {}; temp.prototype = obj; var F = new temp; for(var key in obj) F[key] = clone(obj[key]); return F; })(obj); } A: Use Object.create() to get the prototype and support for instanceof, and use a for() loop to get enumerable keys: function cloneObject(source) { var key,value; var clone = Object.create(source); for (key in source) { if (source.hasOwnProperty(key) === true) { value = source[key]; if (value!==null && typeof value==="object") { clone[key] = cloneObject(value); } else { clone[key] = value; } } } return clone; } A: For future reference, one can use this code. ES6: _clone: function(obj){ let newObj = {}; for(let i in obj){ if(typeof(obj[i]) === 'object' && Object.keys(obj[i]).length){ newObj[i] = this._clone(obj[i]); } else{ newObj[i] = obj[i]; } } return Object.assign({},newObj); } ES5: function clone(obj){ var newObj = {}; for(var i in obj){ if(typeof(obj[i]) === 'object' && Object.keys(obj[i]).length){ newObj[i] = clone(obj[i]); } else{ newObj[i] = obj[i]; } } return Object.assign({},newObj); } E.g. var obj ={a:{b:1,c:3},d:4,e:{f:6}} var xc = clone(obj); console.log(obj); //{a:{b:1,c:3},d:4,e:{f:6}} console.log(xc); //{a:{b:1,c:3},d:4,e:{f:6}} xc.a.b = 90; console.log(obj); //{a:{b:1,c:3},d:4,e:{f:6}} console.log(xc); //{a:{b:90,c:3},d:4,e:{f:6}} A: class Handler { static deepCopy (obj) { if (Object.prototype.toString.call(obj) === '[object Array]') { const result = []; for (let i = 0, len = obj.length; i < len; i++) { result[i] = Handler.deepCopy(obj[i]); } return result; } else if (Object.prototype.toString.call(obj) === '[object Object]') { const result = {}; for (let prop in obj) { result[prop] = Handler.deepCopy(obj[prop]); } return result; } return obj; } } A: Without touching the prototypical inheritance you may deep clone objects and arrays as follows: function objectClone(o){ var ot = Array.isArray(o); return o !== null && typeof o === "object" ? Object.keys(o) .reduce((r,k) => o[k] !== null && typeof o[k] === "object" ? (r[k] = objectClone(o[k]),r) : (r[k] = o[k],r), ot ? [] : {}) : o; } var obj = {a: 1, b: {c: 2, d: {e: 3, f: {g: 4, h: null}}}}, arr = [1,2,[3,4,[5,6,[7]]]], nil = null, clobj = objectClone(obj), clarr = objectClone(arr), clnil = objectClone(nil); console.log(clobj, obj === clobj); console.log(clarr, arr === clarr); console.log(clnil, nil === clnil); clarr[2][2][2] = "seven"; console.log(arr, clarr); A: What about asynchronous object cloning done by a Promise?
async function clone(thingy /**/) { if(thingy instanceof Promise) { throw Error("This function cannot clone Promises."); } return thingy; } A: For a shallow copy there is a great, simple method introduced in the ECMAScript 2018 standard. It involves the use of the spread operator: let obj = {a : "foo", b:"bar" , c:10 , d:true , e:[1,2,3] }; let objClone = { ...obj }; I have tested it in the Chrome browser; both objects are stored in different locations, so changing immediate child values in either will not change the other. Though (in the example) changing a value in e will affect both copies. This technique is very simple and straightforward. I consider this a true Best Practice for this question once and for all. A: This is my solution without using any library or native JavaScript function. function deepClone(obj) { if (typeof obj !== "object") { return obj; } else { let newObj = typeof obj === "object" && obj.length !== undefined ? [] : {}; for (let key in obj) { if (key) { newObj[key] = deepClone(obj[key]); } } return newObj; } } A: Crockford suggests (and I prefer) using this function: function object(o) { function F() {} F.prototype = o; return new F(); } var newObject = object(oldObject); It's terse, works as expected and you don't need a library. EDIT: This is a polyfill for Object.create, so you also can use this. var newObject = Object.create(oldObject); NOTE: If you use some of this, you may have problems with some iterations that use hasOwnProperty. That's because Object.create creates a new empty object that inherits from oldObject. But it is still useful and practical for cloning objects. For example, if oldObject.a = 5; newObject.a; // is 5 but: oldObject.hasOwnProperty(a); // is true newObject.hasOwnProperty(a); // is false A: Check out this benchmark: http://jsben.ch/#/bWfk9 In my previous tests where speed was a main concern I found JSON.parse(JSON.stringify(obj)) to be the slowest way to deep clone an object (it is slower than jQuery.extend with deep flag set true by 10-20%). jQuery.extend is pretty fast when the deep flag is set to false (shallow clone). It is a good option, because it includes some extra logic for type validation and doesn't copy over undefined properties, etc., but this will also slow you down a little. If you know the structure of the objects you are trying to clone or can avoid deep nested arrays you can write a simple for (var i in obj) loop to clone your object while checking hasOwnProperty and it will be much much faster than jQuery. Lastly if you are attempting to clone a known object structure in a hot loop you can get MUCH MUCH MORE PERFORMANCE by simply in-lining the clone procedure and manually constructing the object. JavaScript trace engines suck at optimizing for..in loops and checking hasOwnProperty will slow you down as well. Manual clone when speed is an absolute must. var clonedObject = { knownProp: obj.knownProp, .. } Beware using the JSON.parse(JSON.stringify(obj)) method on Date objects - JSON.stringify(new Date()) returns a string representation of the date in ISO format, which JSON.parse() doesn't convert back to a Date object. See this answer for more details. Additionally, please note that, in Chrome 65 at least, native cloning is not the way to go. According to JSPerf, performing native cloning by creating a new function is nearly 800x slower than using JSON.stringify which is incredibly fast all the way across the board. Update for ES6 If you are using JavaScript ES6 try this native method for cloning or shallow copy.
Object.assign({}, obj); A: function clone(obj) { var clone = {}; clone.prototype = obj.prototype; for (var property in obj) clone[property] = obj[property]; return clone; } A: Lodash has a nice _.cloneDeep(value) method: var objects = [{ 'a': 1 }, { 'b': 2 }]; var deep = _.cloneDeep(objects); console.log(deep[0] === objects[0]); // => false A: Shallow copy one-liner (ECMAScript 5th edition): var origin = { foo : {} }; var copy = Object.keys(origin).reduce(function(c,k){c[k]=origin[k];return c;},{}); console.log(origin, copy); console.log(origin == copy); // false console.log(origin.foo == copy.foo); // true And shallow copy one-liner (ECMAScript 6th edition, 2015): var origin = { foo : {} }; var copy = Object.assign({}, origin); console.log(origin, copy); console.log(origin == copy); // false console.log(origin.foo == copy.foo); // true A: There seems to be no ideal deep clone operator yet for array-like objects. As the code below illustrates, John Resig's jQuery cloner turns arrays with non-numeric properties into objects that are not arrays, and RegDwight's JSON cloner drops the non-numeric properties. The following tests illustrate these points on multiple browsers: function jQueryClone(obj) { return jQuery.extend(true, {}, obj) } function JSONClone(obj) { return JSON.parse(JSON.stringify(obj)) } var arrayLikeObj = [[1, "a", "b"], [2, "b", "a"]]; arrayLikeObj.names = ["m", "n", "o"]; var JSONCopy = JSONClone(arrayLikeObj); var jQueryCopy = jQueryClone(arrayLikeObj); alert("Is arrayLikeObj an array instance?" + (arrayLikeObj instanceof Array) + "\nIs the jQueryClone an array instance? " + (jQueryCopy instanceof Array) + "\nWhat are the arrayLikeObj names? " + arrayLikeObj.names + "\nAnd what are the JSONClone names? " + JSONCopy.names) A: Requires new-ish browsers, but... Let's extend the native Object and get a real .extend(); Object.defineProperty(Object.prototype, 'extend', { enumerable: false, value: function(){ var that = this; Array.prototype.slice.call(arguments).map(function(source){ var props = Object.getOwnPropertyNames(source), i = 0, l = props.length, prop; for(; i < l; ++i){ prop = props[i]; if(that.hasOwnProperty(prop) && typeof(that[prop]) === 'object'){ that[prop] = that[prop].extend(source[prop]); }else{ Object.defineProperty(that, prop, Object.getOwnPropertyDescriptor(source, prop)); } } }); return this; } }); Just pop that in prior to any code that uses .extend() on an object. Example: var obj1 = { node1: '1', node2: '2', node3: 3 }; var obj2 = { node1: '4', node2: 5, node3: '6' }; var obj3 = ({}).extend(obj1, obj2); console.log(obj3); // Object {node1: "4", node2: 5, node3: "6"} A: This is a solution with recursion: const obj = { a: { b: { c: { d: ['1', '2'] } } }, e: 'Saeid' } const Clone = function (obj) { const container = Array.isArray(obj) ? [] : {} const keys = Object.keys(obj) for (let i = 0; i < keys.length; i++) { const key = keys[i] if(typeof obj[key] == 'object') { container[key] = Clone(obj[key]) } else container[key] = obj[key] } return container } console.log(Clone(obj)) A: When your object is nested and contains Date objects, other structured objects, or objects with properties that are themselves objects, then using JSON.parse(JSON.stringify(object)) or Object.assign({}, obj) or $.extend(true, {}, obj) will not give you what you expect. In that case use Lodash. It is simple and easy. var obj = {a: 25, b: {a: 1, b: 2}, c: new Date(), d: anotherNestedObject }; var A = _.cloneDeep(obj); Now A will be your new clone of obj without any references.
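A quick sketch (my own example) verifying that the Lodash clone really is detached from the original, including the Date:

var obj = { a: 25, b: { a: 1, b: 2 }, c: new Date() };
var A = _.cloneDeep(obj);

A.b.a = 99;             // mutate the clone's nested object
A.c.setFullYear(1999);  // mutate the clone's Date

console.log(obj.b.a);             // 1 — the original is untouched
console.log(obj.c.getFullYear()); // still the current year, not 1999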
A: If you find yourself doing this type of thing regularly (e.g. creating undo/redo functionality), it might be worth looking into Immutable.js const map1 = Immutable.fromJS( { a: 1, b: 2, c: { d: 3 } } ); const map2 = map1.setIn( [ 'c', 'd' ], 50 ); console.log( `${ map1.getIn( [ 'c', 'd' ] ) } vs ${ map2.getIn( [ 'c', 'd' ] ) }` ); // "3 vs 50" https://codepen.io/anon/pen/OBpqNE?editors=1111 A: With the proposal of the new method Object.fromEntries(), which is supported in newer versions of some browsers (reference), I want to contribute the following recursive approach: const obj = { key1: {key11: "key11", key12: "key12", key13: {key131: 22}}, key2: {key21: "key21", key22: "key22"}, key3: "key3", key4: [1,2,3, {key: "value"}] } const cloneObj = (obj) => { if (Object(obj) !== obj) return obj; else if (Array.isArray(obj)) return obj.map(cloneObj); return Object.fromEntries(Object.entries(obj).map( ([k,v]) => ([k, cloneObj(v)]) )); } // Clone the original object. let newObj = cloneObj(obj); // Make changes on the original object. obj.key1.key11 = "TEST"; obj.key3 = "TEST"; obj.key1.key13.key131 = "TEST"; obj.key4[1] = "TEST"; obj.key4[3].key = "TEST"; // Display both objects on the console. console.log("Original object: ", obj); console.log("Cloned object: ", newObj); A: Just because I didn't see AngularJS mentioned and thought that people might want to know... angular.copy also provides a method of deep copying objects and arrays. A: I have two good answers depending on whether your objective is to clone a "plain old JavaScript object" or not. Let's also assume that your intention is to create a complete clone with no prototype references back to the source object. If you're not interested in a complete clone, then you can use many of the Object.clone() routines provided in some of the other answers (Crockford's pattern). For plain old JavaScript objects, a tried and true good way to clone an object in modern runtimes is quite simply: var clone = JSON.parse(JSON.stringify(obj)); Note that the source object must be a pure JSON object. This is to say, all of its nested values must be JSON-safe: primitives (booleans, strings, numbers, null) or plain arrays and objects of them. Any functions or special objects like RegExp or Date will not be cloned. Is it efficient? Heck yes. We've tried all kinds of cloning methods and this works best. I'm sure some ninja could conjure up a faster method. But I suspect we're talking about marginal gains. This approach is just simple and easy to implement. Wrap it into a convenience function and if you really need to squeeze out some gain, go for it at a later time. Now, for non-plain JavaScript objects, there isn't a really simple answer. In fact, there can't be because of the dynamic nature of JavaScript functions and inner object state. Deep cloning a JSON structure with functions inside requires you to recreate those functions and their inner context. And JavaScript simply doesn't have a standardized way of doing that. The correct way to do this, once again, is via a convenience method that you declare and reuse within your code. The convenience method can be endowed with some understanding of your own objects so you can make sure to properly recreate the graph within the new object. We've written our own, but the best general approach I've seen is covered here: http://davidwalsh.name/javascript-clone This is the right idea.
The author (David Walsh) has commented out the cloning of generalized functions. This is something you might choose to do, depending on your use case. The main idea is that you need to special handle the instantiation of your functions (or prototypal classes, so to speak) on a per-type basis. Here, he's provided a few examples for RegExp and Date. Not only is this code brief, but it's also very readable. It's pretty easy to extend. Is this efficient? Heck yes. Given that the goal is to produce a true deep-copy clone, then you're going to have to walk the members of the source object graph. With this approach, you can tweak exactly which child members to treat and how to manually handle custom types. So there you go. Two approaches. Both are efficient in my view. A: Only when you can use ECMAScript 6 or transpilers. Features: * *Won't trigger getter/setter while copying. *Preserves getter/setter. *Preserves prototype informations. *Works with both object-literal and functional OO writing styles. Code: function clone(target, source){ for(let key in source){ // Use getOwnPropertyDescriptor instead of source[key] to prevent from trigering setter/getter. let descriptor = Object.getOwnPropertyDescriptor(source, key); if(descriptor.value instanceof String){ target[key] = new String(descriptor.value); } else if(descriptor.value instanceof Array){ target[key] = clone([], descriptor.value); } else if(descriptor.value instanceof Object){ let prototype = Reflect.getPrototypeOf(descriptor.value); let cloneObject = clone({}, descriptor.value); Reflect.setPrototypeOf(cloneObject, prototype); target[key] = cloneObject; } else { Object.defineProperty(target, key, descriptor); } } let prototype = Reflect.getPrototypeOf(source); Reflect.setPrototypeOf(target, prototype); return target; } A: I am late to answer this question, but I have another way of cloning the object: function cloneObject(obj) { if (obj === null || typeof(obj) !== 'object') return obj; var temp = obj.constructor(); // changed for (var key in obj) { if (Object.prototype.hasOwnProperty.call(obj, key)) { obj['isActiveClone'] = null; temp[key] = cloneObject(obj[key]); delete obj['isActiveClone']; } } return temp; } var b = cloneObject({"a":1,"b":2}); // calling which is much better and faster than: var a = {"a":1,"b":2}; var b = JSON.parse(JSON.stringify(a)); and var a = {"a":1,"b":2}; // Deep copy var newObject = jQuery.extend(true, {}, a); I have benchmarked the code; you can run the tests and compare the results yourself. References: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/hasOwnProperty A: The efficient way to clone (not deep-clone) an object in one line of code The Object.assign method is part of the ECMAScript 2015 (ES6) standard and does exactly what you need. var clone = Object.assign({}, obj); The Object.assign() method is used to copy the values of all enumerable own properties from one or more source objects to a target object. Read more...
The polyfill to support older browsers: if (!Object.assign) { Object.defineProperty(Object, 'assign', { enumerable: false, configurable: true, writable: true, value: function(target) { 'use strict'; if (target === undefined || target === null) { throw new TypeError('Cannot convert first argument to object'); } var to = Object(target); for (var i = 1; i < arguments.length; i++) { var nextSource = arguments[i]; if (nextSource === undefined || nextSource === null) { continue; } nextSource = Object(nextSource); var keysArray = Object.keys(nextSource); for (var nextIndex = 0, len = keysArray.length; nextIndex < len; nextIndex++) { var nextKey = keysArray[nextIndex]; var desc = Object.getOwnPropertyDescriptor(nextSource, nextKey); if (desc !== undefined && desc.enumerable) { to[nextKey] = nextSource[nextKey]; } } } return to; } }); } A: This isn't generally the most efficient solution, but it does what I need. Simple test cases below... function clone(obj, clones) { // Makes a deep copy of 'obj'. Handles cyclic structures by // tracking cloned obj's in the 'clones' parameter. Functions // are included, but not cloned. Function members are cloned. var new_obj, already_cloned, t = typeof obj, i = 0, l, pair; clones = clones || []; if (obj === null) { return obj; } if (t === "object" || t === "function") { // check to see if we've already cloned obj for (i = 0, l = clones.length; i < l; i++) { pair = clones[i]; if (pair[0] === obj) { already_cloned = pair[1]; break; } } if (already_cloned) { return already_cloned; } else { if (t === "object") { // create new object new_obj = new obj.constructor(); } else { // Just use functions as is new_obj = obj; } clones.push([obj, new_obj]); // keep track of objects we've cloned for (var key in obj) { // clone object members if (obj.hasOwnProperty(key)) { new_obj[key] = clone(obj[key], clones); } } } } return new_obj || obj; } Cyclic array test... a = [] a.push("b", "c", a) aa = clone(a) aa === a //=> false aa[2] === a //=> false aa[2] === a[2] //=> false aa[2] === aa //=> true Function test... f = new Function f.a = a ff = clone(f) ff === f //=> true ff.a === a //=> false A: For the people who want to use the JSON.parse(JSON.stringify(obj)) version, but without losing the Date objects, you can use the second argument of the parse method to convert the strings back to Dates: function clone(obj) { var regExp = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z$/; return JSON.parse(JSON.stringify(obj), function(k, v) { if (typeof v === 'string' && regExp.test(v)) return new Date(v) return v; }) } // usage: var original = { a: [1, null, undefined, 0, {a:null}, new Date()], b: { c(){ return 0 } } } var cloned = clone(original) console.log(cloned) A: I disagree with the answer with the greatest votes here. A Recursive Deep Clone is much faster than the JSON.parse(JSON.stringify(obj)) approach mentioned.
* *Jsperf ranks it number one here: https://jsperf.com/deep-copy-vs-json-stringify-json-parse/5 *Jsben from the answer above updated to show that a recursive deep clone beats all the others mentioned: http://jsben.ch/13YKQ And here's the function for quick reference: function cloneDeep (o) { let newO let i if (typeof o !== 'object') return o if (!o) return o if (Object.prototype.toString.apply(o) === '[object Array]') { newO = [] for (i = 0; i < o.length; i += 1) { newO[i] = cloneDeep(o[i]) } return newO } newO = {} for (i in o) { if (o.hasOwnProperty(i)) { newO[i] = cloneDeep(o[i]) } } return newO } A: // obj target object, vals source object var setVals = function (obj, vals) { if (obj && vals) { for (var x in vals) { if (vals.hasOwnProperty(x)) { if (obj[x] && typeof vals[x] === 'object') { obj[x] = setVals(obj[x], vals[x]); } else { obj[x] = vals[x]; } } } } return obj; }; A: Here is a comprehensive clone() method that can clone any JavaScript object. It handles almost all the cases: function clone(src, deep) { var toString = Object.prototype.toString; if (src === null || (typeof src != "object" && typeof src != "function")) { // Any non-object (Boolean, String, Number), null, undefined, NaN return src; } // Honor native/custom clone methods if (src.clone && toString.call(src.clone) == "[object Function]") { return src.clone(deep); } // DOM elements if (src.nodeType && toString.call(src.cloneNode) == "[object Function]") { return src.cloneNode(deep); } // Date if (toString.call(src) == "[object Date]") { return new Date(src.getTime()); } // RegExp if (toString.call(src) == "[object RegExp]") { return new RegExp(src); } // Function if (toString.call(src) == "[object Function]") { //Wrap in another method to make sure == is not true; //Note: Huge performance issue due to closures, comment this :) return (function(){ src.apply(this, arguments); }); } var ret, index; //Array if (toString.call(src) == "[object Array]") { //[].slice(0) would soft clone ret = src.slice(); if (deep) { index = ret.length; while (index--) { ret[index] = clone(ret[index], true); } } } //Object else { ret = src.constructor ? new src.constructor() : {}; for (var prop in src) { ret[prop] = deep ? clone(src[prop], true) : src[prop]; } } return ret; }; A: Deep copy by performance: Ranked from best to worst * *spread operator ... (primitive arrays - only) *splice(0) (primitive arrays - only) *slice() (primitive arrays - only) *concat() (primitive arrays - only) *custom function, as seen below (any array) *jQuery's $.extend() (any array) *JSON.parse(JSON.stringify()) (primitive and literal arrays - only) *Underscore's _.clone() (primitive and literal arrays - only) *Lodash's _.cloneDeep() (any array) Where: * *primitives = strings, numbers, and booleans *literals = object literals {}, array literals [] *any = primitives, literals, and prototypes Deep copy an array of primitives: let arr1a = [1, 'a', true]; To deep copy arrays with primitives only (i.e. numbers, strings, and booleans), reassignment, slice(), concat(), and Underscore's clone() can be used.
Where spread has the fastest performance: let arr1b = [...arr1a]; And where slice() has better performance than concat(): https://jsbench.me/x5ktn7o94d/ let arr1d = arr1a.slice(); let arr1e = arr1a.concat(); let arr1c = arr1a.splice(0); // caution: splice(0) returns the elements but empties arr1a, so do it last Deep copy an array of primitive and object literals: let arr2a = [1, 'a', true, {}, []]; let arr2b = JSON.parse(JSON.stringify(arr2a)); Deep copy an array of primitive, object literals, and prototypes: let arr3a = [1, 'a', true, {}, [], new Object()]; Write a custom function (has faster performance than $.extend() or JSON.parse): function copy(aObject) { // Prevent undefined objects // if (!aObject) return aObject; let bObject = Array.isArray(aObject) ? [] : {}; let value; for (const key in aObject) { // Prevent self-references to parent object // if (Object.is(aObject[key], aObject)) continue; value = aObject[key]; bObject[key] = (typeof value === "object") ? copy(value) : value; } return bObject; } let arr3b = copy(arr3a); Or use third-party utility functions: let arr3c = $.extend(true, [], arr3a); // jQuery Extend let arr3d = _.cloneDeep(arr3a); // Lodash Note: jQuery's $.extend also has better performance than JSON.parse(JSON.stringify()): * *js-deep-copy *jquery-extend-vs-json-parse A: This is what I'm using: function cloneObject(obj) { var clone = {}; for(var i in obj) { if(typeof(obj[i])=="object" && obj[i] != null) clone[i] = cloneObject(obj[i]); else clone[i] = obj[i]; } return clone; } A: AngularJS Well if you're using angular you could do this too var newObject = angular.copy(oldObject); A: Code: // extends 'to' with members from 'from'. If 'to' is null, a deep clone of 'from' is returned function extend(from, to) { if (from == null || typeof from != "object") return from; if (from.constructor != Object && from.constructor != Array) return from; if (from.constructor == Date || from.constructor == RegExp || from.constructor == Function || from.constructor == String || from.constructor == Number || from.constructor == Boolean) return new from.constructor(from); to = to || new from.constructor(); for (var name in from) { to[name] = typeof to[name] == "undefined" ? extend(from[name], null) : to[name]; } return to; } Test: var obj = { date: new Date(), func: function(q) { return 1 + q; }, num: 123, text: "asdasd", array: [1, "asd"], regex: new RegExp(/aaa/i), subobj: { num: 234, text: "asdsaD" } } var clone = extend(obj); A: In JavaScript, you can write your deepCopy method like this: function deepCopy(src) { let target = Array.isArray(src) ? [] : {}; for (let prop in src) { let value = src[prop]; if(value && typeof value === 'object') { target[prop] = deepCopy(value); } else { target[prop] = value; } } return target; } A: Deep copying objects in JavaScript (I think the best and the simplest) 1. Using JSON.parse(JSON.stringify(object)); var obj = { a: 1, b: { c: 2 } } var newObj = JSON.parse(JSON.stringify(obj)); obj.b.c = 20; console.log(obj); // { a: 1, b: { c: 20 } } console.log(newObj); // { a: 1, b: { c: 2 } } 2. Using a hand-written method function cloneObject(obj) { var clone = {}; for(var i in obj) { if(obj[i] != null && typeof(obj[i])=="object") clone[i] = cloneObject(obj[i]); else clone[i] = obj[i]; } return clone; } var obj = { a: 1, b: { c: 2 } } var newObj = cloneObject(obj); obj.b.c = 20; console.log(obj); // { a: 1, b: { c: 20 } } console.log(newObj); // { a: 1, b: { c: 2 } } 3.
Using Lo-Dash's _.cloneDeep var obj = { a: 1, b: { c: 2 } } var newObj = _.cloneDeep(obj); obj.b.c = 20; console.log(obj); // { a: 1, b: { c: 20 } } console.log(newObj); // { a: 1, b: { c: 2 } } 4. Using the Object.assign() method var obj = { a: 1, b: 2 } var newObj = Object.assign({}, obj); obj.b = 20; console.log(obj); // { a: 1, b: 20 } console.log(newObj); // { a: 1, b: 2 } BUT WRONG WHEN var obj = { a: 1, b: { c: 2 } } var newObj = Object.assign({}, obj); obj.b.c = 20; console.log(obj); // { a: 1, b: { c: 20 } } console.log(newObj); // { a: 1, b: { c: 20 } } --> WRONG // Note: Properties on the prototype chain and non-enumerable properties cannot be copied. 5. Using Underscore.js's _.clone var obj = { a: 1, b: 2 } var newObj = _.clone(obj); obj.b = 20; console.log(obj); // { a: 1, b: 20 } console.log(newObj); // { a: 1, b: 2 } BUT WRONG WHEN var obj = { a: 1, b: { c: 2 } } var newObj = _.clone(obj); obj.b.c = 20; console.log(obj); // { a: 1, b: { c: 20 } } console.log(newObj); // { a: 1, b: { c: 20 } } --> WRONG // (Create a shallow-copied clone of the provided plain object. Any nested objects or arrays will be copied by reference, not duplicated.) JSBEN.CH Performance Benchmarking Playground 1~3 http://jsben.ch/KVQLd A: As this question is getting a lot of attention, and the answers mostly reference built-in features such as Object.assign or custom code to deep clone, I would like to share some libraries for deep cloning: 1. esclone npm install --savedev esclone https://www.npmjs.com/package/esclone Example use in ES6: import esclone from "esclone"; const rockysGrandFather = { name: "Rockys grand father", father: "Don't know :(" }; const rockysFather = { name: "Rockys Father", father: rockysGrandFather }; const rocky = { name: "Rocky", father: rockysFather }; const rockyClone = esclone(rocky); Example use in ES5: var esclone = require("esclone") var foo = new String("abcd") var fooClone = esclone.default(foo) console.log(fooClone) console.log(foo === fooClone) 2. deep copy npm install deep-copy https://www.npmjs.com/package/deep-copy Example: var dcopy = require('deep-copy') // deep copy object var copy = dcopy({a: {b: [{c: 5}]}}) // deep copy array var copy = dcopy([1, 2, {a: {b: 5}}]) 3. clone-deep $ npm install --save clone-deep https://www.npmjs.com/package/clone-deep Example: var cloneDeep = require('clone-deep'); var obj = {a: 'b'}; var arr = [obj]; var copy = cloneDeep(arr); obj.c = 'd'; console.log(copy); //=> [{a: 'b'}] console.log(arr); A: function clone(obj) { var copy; // Handle the 3 simple types, and null or undefined if (null == obj || "object" != typeof obj) return obj; // Handle Date if (obj instanceof Date) { copy = new Date(); copy.setTime(obj.getTime()); return copy; } // Handle Array if (obj instanceof Array) { copy = []; for (var i = 0, len = obj.length; i < len; i++) { copy[i] = clone(obj[i]); } return copy; } // Handle Object if (obj instanceof Object) { copy = {}; for (var attr in obj) { if (obj.hasOwnProperty(attr)) copy[attr] = clone(obj[attr]); } return copy; } throw new Error("Unable to copy obj! Its type isn't supported."); } Use the method above instead of JSON.parse(JSON.stringify(obj)), which is slower. See also: How do I correctly clone a JavaScript object? A: How about merging the keys of the object with its values? function deepClone(o) { var keys = Object.keys(o); var values = Object.values(o); var clone = {}; keys.forEach(function(key, i) { clone[key] = typeof values[i] == 'object' ?
Object.create(values[i]) : values[i]; }); return clone; } Note: this method only copies one level deep. Given something like {a: {b: {c: null}}}, it will only clone the objects directly inside the one you pass in, so deepClone(a.b).c is technically a reference to a.b.c, while deepClone(a).b is a clone, not a reference. A: Cloning an object using today's JavaScript: ECMAScript 2015 (formerly known as ECMAScript 6) var original = {a: 1}; // Method 1: New object with original assigned. var copy1 = Object.assign({}, original); // Method 2: New object with spread operator assignment. var copy2 = {...original}; Old browsers may not support ECMAScript 2015. A common solution is to use a JavaScript-to-JavaScript compiler like Babel to output an ECMAScript 5 version of your JavaScript code. As pointed out by @jim-hall, this is only a shallow copy. Properties of properties are copied as a reference: changing one would change the value in the other object/instance.
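A short sketch (my own example) of the shallow-copy behavior just described:

const original = { a: 1, nested: { b: 2 } };
const copy1 = Object.assign({}, original);
const copy2 = { ...original };

copy1.a = 10;          // top level: independent
copy2.nested.b = 20;   // one level down: shared

console.log(original.a);        // 1
console.log(original.nested.b); // 20 — both copies share the nested object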
{ "language": "en", "url": "https://stackoverflow.com/questions/122102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5171" }
Q: ASP.Net MVC Keeping action parameters between postbacks Say I have a page that displays search results. I search for stackoverflow and it returns 5000 results, 10 per page. Now I find myself doing this when building links on that page: <%=Html.ActionLink("Page 1", "Search", new { query=ViewData["query"], page etc..%> <%=Html.ActionLink("Page 2", "Search", new { query=ViewData["query"], page etc..%> <%=Html.ActionLink("Page 3", "Search", new { query=ViewData["query"], page etc..%> <%=Html.ActionLink("Next", "Search", new { query=ViewData["query"], page etc..%> I don't like this; I have to build my links with careful consideration of what was posted previously etc.. What I'd like to do is <%=Html.BuildActionLinkUsingCurrentActionPostData ("Next", "Search", new { Page = 1}); where the anonymous dictionary overrides anything currently set by the previous action. Essentially I care about what the previous action parameters were, because I want to reuse them. It sounds simple, but start adding sort and loads of advanced search options and it starts getting messy. I'm probably missing something obvious. A: I had a similar problem inside an HtmlHelper; I wanted to generate links that linked back to the current page, with a small adjustment in parameters (think incrementing the page number). So if I had URL /Item/?sort=Name&page=0, I wanted to be able to create links to the same page, but just change the page parameter, and have the sort parameter automatically included (i.e. /Item/?sort=Name&page=1). My solution was this (for use in an HtmlHelper extension method, but since you can access the same data almost anywhere in MVC, you can adapt it easily to your uses): private static RouteValueDictionary CreateRouteToCurrentPage(HtmlHelper html) { RouteValueDictionary routeValues = new RouteValueDictionary(html.ViewContext.RouteData.Values); NameValueCollection queryString = html.ViewContext.HttpContext.Request.QueryString; foreach (string key in queryString.Cast<string>()) { routeValues[key] = queryString[key]; } return routeValues; } What the method does is take the RouteValueDictionary for the current request and create a copy of it. Then it adds each of the query parameters found in the query string to this route. It does this because, for some reason, the current request's RouteValueDictionary does not contain them (you'd think it would, but it doesn't). You can then take the resultant dictionary, modify only a part of it, for example: routeValues["page"] = 2; and then give that dictionary to the out-of-the-box HtmlHelper methods for them to generate you a URL/etc. A: Whenever you find yourself writing redundant code in your Views, write a helper. The helper could explicitly copy the parameters, as you're doing it now, or it could iterate the entire collection and copy automatically. If it were me, I would probably choose the former. Then you can just call your new helper, instead of rebuilding the parameters every time you make a link. A: I'm a little iffy as to what you are actually trying to do here. I think you are trying to automate the process of creating a list of links with only small changes between them — apparently, in your case, the id number of "Page".
One way to do it, although possibly not the best, is like so (my code makes use of a basic and contrived Product list, and the ViewPage and PartialViewPage both use strongly typed models): On your ViewPage you would add code like this: <div id="product_list"> <% foreach (TestMVC.Product product in ViewData.Model) { %> <% Html.RenderPartial("ProductEntry", product); %> <% } %> </div> Your Partial View, in my case "ProductEntry", would then look like this: <div class="product"> <div class="product-name"> <%= Html.ActionLink(ViewData.Model.ProductName, "Detail", new { id = ViewData.Model.id })%> </div> <div class="product-desc"> <%= ViewData.Model.ProductDescription %> </div> </div> All I'm doing in that Partial View is consuming the model/viewdata that was passed from the parent view by the call to Html.RenderPartial. In your parent view you could modify a parameter on your model object before the call to Html.RenderPartial in order to set the specific value you are interested in. Hope this helps. A: The following helper method does just that: public static string EnhancedActionLink(this HtmlHelper helper, string linkText, string actionName, string controllerName, bool keepQueryStrings) { ViewContext context = helper.ViewContext; IDictionary<string, object> htmlAttributes = null; RouteValueDictionary routeValues = null; string actionLink = string.Empty; if (keepQueryStrings && context.RequestContext.HttpContext.Request.QueryString.Keys.Count > 0) { routeValues = new RouteValueDictionary(context.RouteData.Values); foreach (string key in context.RequestContext.HttpContext.Request.QueryString.Keys) { routeValues[key] = context.RequestContext.HttpContext.Request.QueryString[key]; } } actionLink = helper.ActionLink(linkText, actionName, controllerName, routeValues, htmlAttributes); return actionLink; } A: Take a look at this; it's a good example: http://nerddinnerbook.s3.amazonaws.com/Part8.htm A: After hours spent trying different solutions, only this one worked for me: MVC ActionLink add all (optional) parameters from current URL A: Here's the ActionLink extension: public static class ActionLinkExtension { public static MvcHtmlString ActionLinkWithQueryString(this HtmlHelper helper, string linkText, string action, string controller, object routeValues) { var context = helper.ViewContext; var currentRouteValues = new RouteValueDictionary(context.RouteData.Values); foreach (string key in context.HttpContext.Request.QueryString.Keys) { currentRouteValues[key] = context.HttpContext.Request.QueryString[key]; } var newRouteValues = new RouteValueDictionary(routeValues); foreach (var route in newRouteValues) { if (!currentRouteValues.ContainsKey(route.Key)) { currentRouteValues.Add(route.Key, route.Value); } else { currentRouteValues[route.Key] = route.Value; } } return helper.ActionLink(linkText, action, controller, currentRouteValues, null); } } A: Not exactly an answer, but worth pointing out: If you want paging functionality, use the PagedList NuGet package (no need to re-invent the wheel). The following link provides a really nice example of how to use it: ASP.NET Tutorial This is especially useful to you because query strings are saved in the URL when switching between pages.
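As a usage sketch for the ActionLinkWithQueryString extension above (the action and controller names are illustrative, and the extension's namespace must be imported in the view):
<%-- Keeps the current query string (e.g. ?query=stackoverflow&sort=date) and overrides only "page". --%>
<%= Html.ActionLinkWithQueryString("Next", "Search", "Home", new { page = 2 }) %>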
{ "language": "en", "url": "https://stackoverflow.com/questions/122104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How to filter a Java Collection (based on predicate)? I want to filter a java.util.Collection based on a predicate. A: Use CollectionUtils.filter(Collection,Predicate), from Apache Commons. A: Java 8 (2014) solves this problem using streams and lambdas in one line of code: List<Person> beerDrinkers = persons.stream() .filter(p -> p.getAge() > 16).collect(Collectors.toList()); Here's a tutorial. Use Collection#removeIf to modify the collection in place. (Notice: In this case, the predicate will remove objects that satisfy the predicate): persons.removeIf(p -> p.getAge() <= 16); lambdaj allows filtering collections without writing loops or inner classes: List<Person> beerDrinkers = select(persons, having(on(Person.class).getAge(), greaterThan(16))); Can you imagine something more readable? Disclaimer: I am a contributor to lambdaj A: Are you sure you want to filter the Collection itself, rather than an iterator? See org.apache.commons.collections.iterators.FilterIterator or, using version 4 of Apache Commons, org.apache.commons.collections4.iterators.FilterIterator A: The setup: public interface Predicate<T> { public boolean filter(T t); } static <T> void filterCollection(Collection<T> col, Predicate<T> predicate) { for (Iterator<T> i = col.iterator(); i.hasNext();) { T obj = i.next(); if (predicate.filter(obj)) { i.remove(); } } } The usage: List<MyObject> myList = ...; filterCollection(myList, new Predicate<MyObject>() { public boolean filter(MyObject obj) { return obj.shouldFilter(); } }); A: Let’s look at how to filter a built-in JDK List and a MutableList using Eclipse Collections. List<Integer> jdkList = Arrays.asList(1, 2, 3, 4, 5); MutableList<Integer> ecList = Lists.mutable.with(1, 2, 3, 4, 5); If you wanted to filter the numbers less than 3, you would expect the following outputs. List<Integer> selected = Lists.mutable.with(1, 2); List<Integer> rejected = Lists.mutable.with(3, 4, 5); Here’s how you can filter using a Java 8 lambda as the Predicate. Assert.assertEquals(selected, Iterate.select(jdkList, each -> each < 3)); Assert.assertEquals(rejected, Iterate.reject(jdkList, each -> each < 3)); Assert.assertEquals(selected, ecList.select(each -> each < 3)); Assert.assertEquals(rejected, ecList.reject(each -> each < 3)); Here’s how you can filter using an anonymous inner class as the Predicate. Predicate<Integer> lessThan3 = new Predicate<Integer>() { public boolean accept(Integer each) { return each < 3; } }; Assert.assertEquals(selected, Iterate.select(jdkList, lessThan3)); Assert.assertEquals(selected, ecList.select(lessThan3)); Here are some alternatives to filtering JDK lists and Eclipse Collections MutableLists using the Predicates factory. Assert.assertEquals(selected, Iterate.select(jdkList, Predicates.lessThan(3))); Assert.assertEquals(selected, ecList.select(Predicates.lessThan(3))); Here is a version that doesn't allocate an object for the predicate, by using the Predicates2 factory instead with the selectWith method that takes a Predicate2. Assert.assertEquals( selected, ecList.selectWith(Predicates2.<Integer>lessThan(), 3)); Sometimes you want to filter on a negative condition. There is a special method in Eclipse Collections for that called reject. Assert.assertEquals(rejected, Iterate.reject(jdkList, lessThan3)); Assert.assertEquals(rejected, ecList.reject(lessThan3)); The method partition will return two collections, containing the elements selected by and rejected by the Predicate.
PartitionIterable<Integer> jdkPartitioned = Iterate.partition(jdkList, lessThan3); Assert.assertEquals(selected, jdkPartitioned.getSelected()); Assert.assertEquals(rejected, jdkPartitioned.getRejected()); PartitionList<Integer> ecPartitioned = ecList.partition(lessThan3); Assert.assertEquals(selected, ecPartitioned.getSelected()); Assert.assertEquals(rejected, ecPartitioned.getRejected()); Note: I am a committer for Eclipse Collections. A: How about some plain and straightforward Java? List<Customer> list ...; List<Customer> newList = new ArrayList<>(); for (Customer c : list){ if (c.getName().equals("dd")) newList.add(c); } Simple, readable and easy (and works in Android!) But if you're using Java 8 you can do it in a sweet one line: List<Customer> newList = list.stream().filter(c -> c.getName().equals("dd")).collect(toList()); Note that toList() is statically imported. A: Since Java 9, Collectors.filtering is available: public static <T, A, R> Collector<T, ?, R> filtering(Predicate<? super T> predicate, Collector<? super T, A, R> downstream) Thus filtering should be: collection.stream().collect(Collectors.filtering(predicate, collector)) Example: List<Integer> oddNumbers = List.of(1, 19, 15, 10, -10).stream() .collect(Collectors.filtering(i -> i % 2 == 1, Collectors.toList())); A: "Best" way is too wide a request. Is it "shortest"? "Fastest"? "Readable"? Filter in place or into another collection? The simplest (but not most readable) way is to iterate it and use the Iterator.remove() method: Iterator<Foo> it = col.iterator(); while( it.hasNext() ) { Foo foo = it.next(); if( !condition(foo) ) it.remove(); } Now, to make it more readable, you can wrap it into a utility method. Then invent an IPredicate interface, create an anonymous implementation of that interface and do something like: CollectionUtils.filterInPlace(col, new IPredicate<Foo>(){ public boolean keepIt(Foo foo) { return foo.isBar(); } }); where filterInPlace() iterates the collection and calls Predicate.keepIt() to learn if the instance is to be kept in the collection. I don't really see a justification for bringing in a third-party library just for this task. A: Consider Google Collections for an updated Collections framework that supports generics. UPDATE: The Google Collections library is now deprecated. You should use the latest release of Guava instead. It still has all the same extensions to the collections framework including a mechanism for filtering based on a predicate. A: The Collections2.filter(Collection,Predicate) method in Google's Guava library does just what you're looking for. A: With the ForEach DSL you may write import static ch.akuhn.util.query.Query.select; import static ch.akuhn.util.query.Query.$result; import ch.akuhn.util.query.Select; Collection<String> collection = ... for (Select<String> each : select(collection)) { each.yield = each.value.length() > 3; } Collection<String> result = $result(); Given a collection of [The, quick, brown, fox, jumps, over, the, lazy, dog] this results in [quick, brown, jumps, over, lazy], i.e. all strings longer than three characters. All iteration styles supported by the ForEach DSL are * *AllSatisfy *AnySatisfy *Collect *Count *CutPieces *Detect *GroupedBy *IndexOf *InjectInto *Reject *Select For more details, please refer to https://www.iam.unibe.ch/scg/svn_repos/Sources/ForEach A: Wait for Java 8: List<Person> olderThan30 = //Create a Stream from the personList personList.stream(). //filter the element to select only those with age >= 30 filter(p -> p.age >= 30).
//put those filtered elements into a new List. collect(Collectors.toList()); A: This, combined with the lack of real closures, is my biggest gripe with Java. Honestly, most of the methods mentioned above are pretty easy to read and REALLY efficient; however, after spending time with .Net, Erlang, etc... list comprehension integrated at the language level makes everything so much cleaner. Without additions at the language level, Java just can't be as clean as many other languages in this area. If performance is a huge concern, Google Collections is the way to go (or write your own simple predicate utility). Lambdaj syntax is more readable for some people, but it is not quite as efficient. And then there is a library I wrote. I will ignore any questions in regard to its efficiency (yeah, it's that bad)...... Yes, I know it's clearly reflection based, and no, I don't actually use it, but it does work: LinkedList<Person> list = ...... LinkedList<Person> filtered = Query.from(list).where(Condition.ensure("age", Op.GTE, 21)); OR LinkedList<Person> list = .... LinkedList<Person> filtered = Query.from(list).where("x => x.age >= 21"); A: In Java 8, you can directly use this filter method and then do that. List<String> lines = Arrays.asList("java", "pramod", "example"); List<String> result = lines.stream() .filter(line -> !"pramod".equals(line)) .collect(Collectors.toList()); result.forEach(System.out::println); A: Assuming that you are using Java 1.5, and that you cannot add Google Collections, I would do something very similar to what the Google guys did. This is a slight variation on Jon's comments. First add this interface to your codebase. public interface IPredicate<T> { boolean apply(T type); } Its implementers can answer when a certain predicate is true of a certain type. E.g. If T were User and AuthorizedUserPredicate<User> implements IPredicate<T>, then AuthorizedUserPredicate#apply returns whether the passed in User is authorized. Then in some utility class, you could say public static <T> Collection<T> filter(Collection<T> target, IPredicate<T> predicate) { Collection<T> result = new ArrayList<T>(); for (T element: target) { if (predicate.apply(element)) { result.add(element); } } return result; } So, assuming that you have the above, its use might be IPredicate<User> isAuthorized = new IPredicate<User>() { public boolean apply(User user) { // binds a boolean method in User to a reference return user.isAuthorized(); } }; // allUsers is a Collection<User> Collection<User> authorizedUsers = filter(allUsers, isAuthorized); If performance on the linear check is of concern, then I might want to have a domain object that has the target collection. The domain object that has the target collection would have filtering logic for the methods that initialize, add and set the target collection. UPDATE: In the utility class (let's say Predicate), I have added a select method with an option for default value when the predicate doesn't return the expected value, and also a static property for params to be used inside the new IPredicate.
public class Predicate { public static Object predicateParams; public static <T> Collection<T> filter(Collection<T> target, IPredicate<T> predicate) { Collection<T> result = new ArrayList<T>(); for (T element : target) { if (predicate.apply(element)) { result.add(element); } } return result; } public static <T> T select(Collection<T> target, IPredicate<T> predicate) { T result = null; for (T element : target) { if (!predicate.apply(element)) continue; result = element; break; } return result; } public static <T> T select(Collection<T> target, IPredicate<T> predicate, T defaultValue) { T result = defaultValue; for (T element : target) { if (!predicate.apply(element)) continue; result = element; break; } return result; } } The following example looks for missing objects between collections: List<MyTypeA> missingObjects = (List<MyTypeA>) Predicate.filter(myCollectionOfA, new IPredicate<MyTypeA>() { public boolean apply(MyTypeA objectOfA) { Predicate.predicateParams = objectOfA.getName(); return Predicate.select(myCollectionB, new IPredicate<MyTypeB>() { public boolean apply(MyTypeB objectOfB) { return objectOfB.getName().equals(Predicate.predicateParams.toString()); } }) == null; } }); The following example looks for an instance in a collection, and returns the first element of the collection as default value when the instance is not found: MyType myObject = Predicate.select(collectionOfMyType, new IPredicate<MyType>() { public boolean apply(MyType objectOfMyType) { return objectOfMyType.isDefault(); }}, collectionOfMyType.get(0)); UPDATE (after Java 8 release): It's been several years since I (Alan) first posted this answer, and I still cannot believe I am collecting SO points for this answer. At any rate, now that Java 8 has introduced closures to the language, my answer would now be considerably different, and simpler. With Java 8, there is no need for a distinct static utility class. So if you want to find the 1st element that matches your predicate: final UserService userService = ... // perhaps injected IoC final Optional<UserModel> userOption = userCollection.stream().filter(u -> { boolean isAuthorized = userService.isAuthorized(u); return isAuthorized; }).findFirst(); The JDK 8 API for optionals has the ability to get(), isPresent(), orElse(defaultUser), orElseGet(userSupplier) and orElseThrow(exceptionSupplier), as well as other 'monadic' functions such as map, flatMap and filter. If you want to simply collect all the users which match the predicate, then use the Collectors to terminate the stream in the desired collection. final UserService userService = ... // perhaps injected IoC final List<UserModel> userOption = userCollection.stream().filter(u -> { boolean isAuthorized = userService.isAuthorized(u); return isAuthorized; }).collect(Collectors.toList()); See here for more examples on how Java 8 streams work. A: I wrote an extended Iterable class that supports applying functional algorithms without copying the collection content. Usage: List<Integer> myList = Arrays.asList(1, 2, 3, 4, 5); Iterable<Integer> filtered = Iterable.wrap(myList).select(new Predicate1<Integer>() { public Boolean call(Integer n) throws FunctionalException { return n % 2 == 0; } }); for( int n : filtered ) { System.out.println(n); } The code above will actually execute for( int n : myList ) { if( n % 2 == 0 ) { System.out.println(n); } } A: JFilter http://code.google.com/p/jfilter/ is best suited for your requirement.
JFilter is a simple and high performance open source library to query collections of Java beans. Key features * *Support of collection (java.util.Collection, java.util.Map and Array) properties. *Support of collections inside collections of any depth. *Support of inner queries. *Support of parameterized queries. *Can filter 1 million records in a few 100 ms. *The filter (query) is given in a simple JSON format; it is like MongoDB queries. Following are some examples. *{ "id":{"$le":"10"} * *where object id property is less than or equal to 10. *{ "id": {"$in":["0", "100"]}} * *where object id property is 0 or 100. *{"lineItems":{"lineAmount":"1"}} * *where lineItems collection property of parameterized type has lineAmount equal to 1. *{ "$and":[{"id": "0"}, {"billingAddress":{"city":"DEL"}}]} * *where id property is 0 and billingAddress.city property is DEL. *{"lineItems":{"taxes":{ "key":{"code":"GST"}, "value":{"$gt": "1.01"}}}} * *where the lineItems collection property of parameterized type has a taxes map type property of parameterized type with code equal to GST and value greater than 1.01. *{'$or':[{'code':'10'},{'skus': {'$and':[{'price':{'$in':['20', '40']}}, {'code':'RedApple'}]}}]} * *Select all products where product code is 10, or sku price is in 20 and 40 and sku code is "RedApple". A: Use Collection Query Engine (CQEngine). It is by far the fastest way to do this. See also: How do you query object collections in Java (Criteria/SQL-like)? A: Some really great answers here. Me, I'd like to keep things as simple and readable as possible: public abstract class AbstractFilter<T> { /** * Method that returns whether an item is to be excluded or not. * @param item an item from the given collection. * @return true if this item is to be excluded from the collection, false if it is to be kept. */ protected abstract boolean excludeItem(T item); public void filter(Collection<T> collection) { if (CollectionUtils.isNotEmpty(collection)) { Iterator<T> iterator = collection.iterator(); while (iterator.hasNext()) { if (excludeItem(iterator.next())) { iterator.remove(); } } } } } A: Using Java 8, specifically lambda expressions, you can do it simply like the below example: myProducts.stream().filter(prod -> prod.price>10).collect(Collectors.toList()) where for each product inside the myProducts collection, if prod.price>10, this product is added to the new filtered list. A: Since the early release of Java 8, you could try something like: Collection<T> collection = ...; Stream<T> stream = collection.stream().filter(...); For example, if you had a list of integers and you wanted to filter the numbers that are > 10 and then print out those numbers to the console, you could do something like: List<Integer> numbers = Arrays.asList(12, 74, 5, 8, 16); numbers.stream().filter(n -> n > 10).forEach(System.out::println); A: I'll throw RxJava in the ring, which is also available on Android. RxJava might not always be the best option, but it will give you more flexibility if you wish to add more transformations on your collection or handle errors while filtering. Observable.from(Arrays.asList(1, 2, 3, 4, 5)) .filter(new Func1<Integer, Boolean>() { public Boolean call(Integer i) { return i % 2 != 0; } }) .subscribe(new Action1<Integer>() { public void call(Integer i) { System.out.println(i); } }); Output: 1 3 5 More details on RxJava's filter can be found here.
A: The simple pre-Java8 solution: ArrayList<Item> filtered = new ArrayList<Item>(); for (Item item : items) if (condition(item)) filtered.add(item); Unfortunately this solution isn't fully generic, outputting a list rather than the type of the given collection. Also, bringing in libraries or writing functions that wrap this code seems like overkill to me unless the condition is complex, but then you can write a function for the condition. A: https://code.google.com/p/joquery/ Supports different possibilities. Given a collection, Collection<Dto> testList = new ArrayList<>(); of type, class Dto { private int id; private String text; public int getId() { return id; } public String getText() { return text; } } Filter Java 7 Filter<Dto> query = CQ.<Dto>filter(testList) .where() .property("id").eq().value(1); Collection<Dto> filtered = query.list(); Java 8 Filter<Dto> query = CQ.<Dto>filter(testList) .where() .property(Dto::getId) .eq().value(1); Collection<Dto> filtered = query.list(); Also, Filter<Dto> query = CQ.<Dto>filter() .from(testList) .where() .property(Dto::getId).between().value(1).value(2) .and() .property(Dto::getText).in().value(new String[]{"a","b"}); Sorting (also available for Java 7) Filter<Dto> query = CQ.<Dto>filter(testList) .orderBy() .property(Dto::getId) .property(Dto::getText) Collection<Dto> sorted = query.list(); Grouping (also available for Java 7) GroupQuery<Integer,Dto> query = CQ.<Dto,Dto>query(testList) .group() .groupBy(Dto::getId) Collection<Grouping<Integer,Dto>> grouped = query.list(); Joins (also available for Java 7) Given, class LeftDto { private int id; private String text; public int getId() { return id; } public String getText() { return text; } } class RightDto { private int id; private int leftId; private String text; public int getId() { return id; } public int getLeftId() { return leftId; } public String getText() { return text; } } class JoinedDto { private int leftId; private int rightId; private String text; public JoinedDto(int leftId,int rightId,String text) { this.leftId = leftId; this.rightId = rightId; this.text = text; } public int getLeftId() { return leftId; } public int getRightId() { return rightId; } public String getText() { return text; } } Collection<LeftDto> leftList = new ArrayList<>(); Collection<RightDto> rightList = new ArrayList<>(); These can be joined like: Collection<JoinedDto> results = CQ.<LeftDto, LeftDto>query().from(leftList) .<RightDto, JoinedDto>innerJoin(CQ.<RightDto, RightDto>query().from(rightList)) .on(LeftDto::getId, RightDto::getLeftId) .transformDirect(selection -> new JoinedDto(selection.getLeft().getId() , selection.getRight().getId() , selection.getLeft().getText()) ) .list(); Expressions Filter<Dto> query = CQ.<Dto>filter() .from(testList) .where() .exec(s -> s.getId() + 1).eq().value(2); A: My answer builds on that from Kevin Wong, here as a one-liner using CollectionUtils from Spring and a Java 8 lambda expression. CollectionUtils.filter(list, p -> ((Person) p).getAge() > 16); This is as concise and readable as any alternative I have seen (without using aspect-based libraries). Spring CollectionUtils is available from Spring version 4.0.2.RELEASE, and remember you need JDK 1.8 and language level 8+. A: I needed to filter a list depending on the values already present in the list. For example, remove all following values that are less than the current value: {2 5 3 4 7 5} -> {2 5 7}. Or, for example, remove all duplicates: {3 5 4 2 3 5 6} -> {3 5 4 2 6}.
public class Filter { public static <T> void List(List<T> list, Chooser<T> chooser) { List<Integer> toBeRemoved = new ArrayList<>(); leftloop: for (int right = 1; right < list.size(); ++right) { for (int left = 0; left < right; ++left) { if (toBeRemoved.contains(left)) { continue; } Keep keep = chooser.choose(list.get(left), list.get(right)); switch (keep) { case LEFT: toBeRemoved.add(right); continue leftloop; case RIGHT: toBeRemoved.add(left); break; case NONE: toBeRemoved.add(left); toBeRemoved.add(right); continue leftloop; } } } Collections.sort(toBeRemoved, new Comparator<Integer>() { @Override public int compare(Integer o1, Integer o2) { return o2 - o1; } }); for (int i : toBeRemoved) { if (i >= 0 && i < list.size()) { list.remove(i); } } } public static <T> void List(List<T> list, Keeper<T> keeper) { Iterator<T> iterator = list.iterator(); while (iterator.hasNext()) { if (!keeper.keep(iterator.next())) { iterator.remove(); } } } public interface Keeper<E> { boolean keep(E obj); } public interface Chooser<E> { Keep choose(E left, E right); } public enum Keep { LEFT, RIGHT, BOTH, NONE; } } This will be used like this: List<String> names = new ArrayList<>(); names.add("Anders"); names.add("Stefan"); names.add("Anders"); Filter.List(names, new Filter.Chooser<String>() { @Override public Filter.Keep choose(String left, String right) { return left.equals(right) ? Filter.Keep.LEFT : Filter.Keep.BOTH; } }); A: In my case, I was looking for a list with a specific null field excluded. This could be done with a for loop, filling a temporary list with the objects that have no null address. But thanks to Java 8 streams: List<Person> personsList = persons.stream() .filter(p -> p.getAddress() != null).collect(Collectors.toList()); A: With Guava: Collection<Integer> collection = Lists.newArrayList(1, 2, 3, 4, 5); Iterators.removeIf(collection.iterator(), new Predicate<Integer>() { @Override public boolean apply(Integer i) { return i % 2 == 0; } }); System.out.println(collection); // Prints 1, 3, 5 A: A more lightweight alternative to Java collection streams is the Ocl.java library, which uses vanilla collections and lambdas: https://github.com/eclipse/agileuml/blob/master/Ocl.java For example, a simple filter and sum on an ArrayList words could be: ArrayList<Word> sel = Ocl.selectSequence(words, w -> w.pos.equals("NN")); int total = Ocl.sumint(Ocl.collectSequence(sel, w -> w.text.length())); Where Word has String pos; String text; attributes. Efficiency seems similar to the streams option, e.g., 10000 words are processed in about 50 ms in both versions. There are equivalent OCL libraries for Python, Swift, etc. Basically, Java collection streams re-invented the OCL operations ->select, ->collect, etc., which have existed in OCL since 1998.
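Several answers above demonstrate a select/reject or partition idiom from third-party libraries; for completeness, here is a minimal sketch of the plain JDK 8 equivalent using Collectors.partitioningBy (the values are illustrative):
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class PartitionExample {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
        // Splits the elements into those matching the predicate (key true)
        // and those rejected by it (key false), in a single pass.
        Map<Boolean, List<Integer>> parts = numbers.stream()
                .collect(Collectors.partitioningBy(n -> n < 3));
        System.out.println(parts.get(true));  // [1, 2]
        System.out.println(parts.get(false)); // [3, 4, 5]
    }
}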
{ "language": "en", "url": "https://stackoverflow.com/questions/122105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "749" }
Q: Checkout one file from Subversion "It is not possible to check out a single file. The finest level of checkouts you can do is at the directory level." How do I get around this issue when using Subversion? We have this folder in Subversion where we keep all our images. I just want to check out one file (image) from it. This folder is really big and has a ton of other stuff which I don't need right now. A: A TortoiseSVN equivalent solution of the accepted answer (I had written this in an internal document for my company as we are newly adopting SVN) follows. I thought it would be helpful to share here as well: Checking out a single file: Subversion does not support checkout of a single file; it only supports checkout of directory structures. (Reference: http://subversion.tigris.org/faq.html#single-file-checkout). This is because with every directory that is checked out as a working copy, the metadata regarding modifications/file revisions is stored as an internal hidden folder (.svn/_svn). This is not supported currently (v1.6) for single files. Alternate recommended strategy: You will have to do the checkout directory part only once; after that you can directly go and check out your single files. Do a sparse checkout of the parent folder and directory structure. A sparse checkout is basically checking out only the folder structure without populating the content files. So you check out only the directory structures and need not check out ALL the files, as was the concern. Reference: http://tortoisesvn.net/docs/release/TortoiseSVN_en/tsvn-dug-checkout.html Step 1: Proceed to the repository browser. Step 2: Right click the parent folder within the repository containing all the files that you wish to work on and select Checkout. Step 3: Within the new popup window, ensure that the checkout directory points to the correct location on your local PC. There will also be a dropdown menu labeled “checkout depth”. Choose “Only this item” or “Immediate children, including folders” depending on your requirement. The second option is recommended since, if you want to work on a nested folder, you can proceed directly the next time; otherwise you will have to follow this whole procedure again for the nested folder. Step 4: The parent folder(s) should now be available within your locally chosen folder and is now being monitored by SVN (a hidden folder “.svn” or “_svn” should now be present). Within the repository now, right click the single file that you wish to have checked out alone and select the “Update Item to revision” option. The single file can now be worked on and checked back into the repository. I hope this helps. A: An update, in case what you really need can be covered by having the file included in a checkout of another folder. Since SVN 1.6 you can make file externals, a kind of svn link. It means that you can have another versioned folder that includes a single file. Committing changes to the file in a checkout of this folder is also possible. It's very simple: check out the folder where you want to include the file, and simply add a property to the folder svn propedit svn:externals . with content like this: file.txt /repos/path/to/file.txt After you commit this, the file will appear in future checkouts of the folder. Basically it works, but there are some limitations as described in the documentation linked above. A: Use svn cat or svn export. For that, you don't need to fetch the working directory, but you will not be able to commit any changes you make.
If you need to make changes and commit them, you need a working directory, but you don't have to fetch it completely. Check out the revision where the directory was still/almost empty, and then use 'svn cat' to extract the file from HEAD. A: cd C:\path\dir svn checkout https://server/path/to/trunk/dir/dir/parent_dir --depth empty cd C:\path\dir\parent_dir svn update filename.log (Edit filename.log) svn commit -m "this is a comment." A: The simple answer is that you svn export the file instead of checking it out. But that might not be what you want. You might want to work on the file and check it back in, without having to download GB of junk you don't need. If you have Subversion 1.5+, then do a sparse checkout: svn checkout <url_of_big_dir> <target> --depth empty cd <target> svn up <file_you_want> For an older version of SVN, you might benefit from the following: * *Checkout the directory using a revision back in the distant past, when it was less full of junk you don't need. *Update the file you want, to create a mixed revision. This works even if the file didn't exist in the revision you checked out. *Profit! An alternative (for instance if the directory has too much junk right from the revision in which it was created) is to do a URL->URL copy of the file you want into a new place in the repository (effectively this is a working branch of the file). Check out that directory and do your modifications. I'm not sure whether you can then merge your modified copy back entirely in the repository without a working copy of the target - I've never needed to. If so, then do that. If not, then unfortunately you may have to find someone else who does have the whole directory checked out and get them to do it. Or maybe by the time you've made your modifications, the rest of it will have finished downloading... A: I'd just browse it and export the single file. If you have HTTP access, just use the web browser to find the file and grab it. If you need to get it back in after editing it, that might be slightly more tedious, but I'm sure there might be an svn import function... A: Go to the repo-browser, right-click the file and use 'Save As'. I'm using TortoiseSVN though. A: Steve Jessop's answer did not work for me. I read the help files for SVN, and if you just have an image you probably don't want to check it in again unless you're doing Photoshop, so export is a better command than checkout as it's unversioned (but that is minor). And the --depth ARG should not be empty but files, to get the files in the immediate directory. So you'll get all the files, not just the one, but empty returns nothing from the repository. svn co --depth files <source> <local dest> or svn export --depth files <source> <local dest> As for the other answers, cat lets you read the content, which is good only for text, not images of all things. A: Do something like this: mkdir <your directory>/repos/test svn cat http://svn.red-bean.com/repos/test/readme.txt > <your directory>/repos/test/readme.txt Basically the idea is to create the directory where you want to grab the file from SVN. Use the svn cat command and redirect the output to the same named file. By default, cat will dump the content to stdout. A: If you just want a file without revision information use svn export <URL> A: With Subversion 1.5, it becomes possible to check out (all) the files of a directory without checking out any subdirectories (the various --depth flags). Not quite what you asked for, but a form of "less than all."
A: Using the sparse check out technique, you CAN check out a particular file that is already checked out or exists...with a simple trick: After checkout of the top level of your repository using the 'this item only' option, in Windows Explorer you MUST first right-click on the file you need to update; choose Repo Browser in the context menu; find that file AGAIN in the repository browser, and right-click. You should now see "Update item to revision" in the context menu. I'm not sure whether it is an undocumented feature or simply a bug. It took me extended after-work hours to finally find this trick. I'm using TortoiseSVN 1.6.2. A: This issue is covered by issue #823 "svn checkout a single file", originally reported July 27, 2002 by Eric Gillespie [1]. There is a Perl script attached [2] that lets you check out a single file from svn, make changes, and commit the changes back to the repository, but you can't run "svn up" to check out the rest of the directory. It's been tested with svn-1.3, 1.4 and 1.6. Note the Subversion project originally hosted on tigris.org got moved to apache.org. The original URL of issue #823 was on tigris.org at this now-defunct URL [3]. The Internet Archive Wayback Machine has a copy of this original link [4]. 1 - https://issues.apache.org/jira/browse/SVN-823?issueNumber=823 2 - https://issues.apache.org/jira/secure/attachment/12762717/2_svn-co-one-file.txt 3 - http://subversion.tigris.org/issues/show_bug.cgi?id=823 4 - https://web.archive.org/web/20170401115732/http://subversion.tigris.org/issues/show_bug.cgi?id=823 A: I wanted to check out a single file to a directory which was not part of a working copy. Let's get the file at the following URL: http://subversion.repository.server/repository/module/directory/myfile svn co http://subversion.repository.server/repository/module/directory/myfile /directoryb So I checked out the given directory containing the target file I wanted to get to a dummy directory (say etcb for the URL ending with /etc). Then I emptied the file .svn/entries of all the files of the target directory I didn't need, to leave just the file I wanted. In this .svn/entries file, you have a record for each file with its attributes, so leave just the record concerning the file you want to get, and save. Now you just need to copy the '.svn' folder to the directory which will be a new "working copy". Then you just need to: cp -r .svn /directory cd /directory svn update myfile Now the directory directory is under version control. Do not forget to remove the directory directoryb, which was just a ''temporary working copy''. A: Since none of the other answers worked for me I did it using this hack: $ cd /yourfolder $ svn co https://path-to-folder-which-has-your-files/ --depth files This will create a new local folder which has only the files from the remote path. Then you can make a symbolic link to the files you want to have here. A: You can do it in two steps: *Checkout an empty working copy with meta information only: $ cd /tmp $ svn co --depth empty http://svn.your.company.ca/training/trunk/sql *Run svn up to update the specified file: $ cd sql $ svn up showSID.sql A: If you want to view readme.txt in your repository without checking it out: $ svn cat http://svn.red-bean.com/repos/test/readme.txt This is a README file. You should read this.
Tip: If your working copy is out of date (or you have local modifications) and you want to see the HEAD revision of a file in your working copy, svn cat will automatically fetch the HEAD revision when you give it a path: $ cat foo.c This file is in my local working copy and has changes that I've made. $ svn cat foo.c Latest revision fresh from the repository! Source A: Try svn export instead of svn checkout. That works for single files. The reason for the limitation is that checkout creates a working copy, which contains meta-information about the repository, revision, attributes, etc. That metadata is stored in subdirectories named '.svn'. And single files don't have subdirectories. A: If you just want to export the file, and you won't need to update it later, you can do it without having to use SVN commands. Using the TortoiseSVN Repository Browser, select the file, right click, and then select "Copy URL to clipboard". Paste that URL into your browser, and after login you should be prompted with the file download. This way you can also select the desired revision and download an older version of the file. Note that this is valid if your SVN server has a web interface.
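Pulling the sparse-checkout answers above together, the end-to-end workflow for editing and committing a single file looks roughly like this (the repository URL and file name are illustrative):
# Check out only the directory's metadata, no files.
svn checkout http://svn.example.com/repos/images --depth empty images
cd images
# Pull just the one file into the working copy.
svn update logo.png
# ...edit logo.png...
svn commit -m "Update logo" logo.png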
{ "language": "en", "url": "https://stackoverflow.com/questions/122107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "340" }
Q: How do you store the markdown using WMD in ASP.NET? I have implemented the WMD control that Stack Overflow uses into a project of mine, it almost works like a charm, but when I save the changes to the database it is saving the HTML version and not the Markdown version. So where I have this in my text box: **boldtext** It is really saving this: <b>boldtext</b> How do I make it save the Markdown version? A: Before you include wmd.js, or whatever you've named the WMD editor JavaScript code locally, add one line of JavaScript code: wmd_options = {"output": "Markdown"}; This will force the output of the editor to Markdown. A: If you're using the new WMD from http://code.google.com/p/wmd-new/, open wmd.js and add this line: wmd.wmd_env.output = 'markdown'; Excerpt: ... wmd.ieCachedRange = null; // cached textarea selection wmd.ieRetardedClick = false; // flag wmd.wmd_env.output = 'markdown'; // force markdown output // Returns true if the DOM element is visible, false if it's hidden. // Checks if display is anything other than none. util.isVisible = function (elem) { ... That should do the trick.
{ "language": "en", "url": "https://stackoverflow.com/questions/122108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How can I split up a PDF file into pages (preferably C#) My client has a multi-page PDF file. They need it split by page. Does anyone know of a way to do this - preferably in C#? A: I did this using iTextSharp -- there are commercial options that may have a good API, but this is open source and free, and not hard to use. Check out this code, it's one of their code samples -- it's pretty good. It splits a PDF file into two files at the passed-in page number. You can modify it to loop and split page by page. A: Haven't played with it, but you can look at Aspose.Pdf.Kit for .NET and Java. It is commercial so you'll need to pay licensing fees, but if you need commercial support it might work for you. A: Siberix offers a reasonably priced commercial library for creating PDFs on the fly in .NET: http://siberix.com You can create the PDFs programmatically or through an XML transformation (and a combination of both IIRC). I've used their library on a couple of projects and have found that not only is their library easy to work with, but their email support is incredible. And the license is quite cheap as well. A: PDFsharp is an open source library which may be what you're after: Key Features *Creates PDF documents on the fly from any .NET language *Easy to understand object model to compose documents *One source code for drawing on a PDF page as well as in a window or on the printer *Modify, merge, and split existing PDF files This sample shows how to convert a PDF document with n pages into n documents with one page each.
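For reference, here is a minimal sketch of the page-by-page split using the classic iTextSharp 5.x API mentioned in the first answer. The class name, method name and file paths are illustrative, and you should check the API of whatever iTextSharp version you use before relying on this:
using System.IO;
using iTextSharp.text;
using iTextSharp.text.pdf;

public static class PdfSplitter
{
    // Writes each page of the input PDF to its own single-page PDF.
    // outputPattern is a String.Format pattern, e.g. @"C:\docs\page-{0}.pdf".
    public static void SplitToPages(string inputPath, string outputPattern)
    {
        PdfReader reader = new PdfReader(inputPath);
        try
        {
            for (int page = 1; page <= reader.NumberOfPages; page++)
            {
                Document document = new Document(reader.GetPageSizeWithRotation(page));
                using (FileStream stream = new FileStream(string.Format(outputPattern, page), FileMode.Create))
                {
                    PdfCopy copy = new PdfCopy(document, stream);
                    document.Open();
                    copy.AddPage(copy.GetImportedPage(reader, page));
                    document.Close(); // also closes the PdfCopy writer
                }
            }
        }
        finally
        {
            reader.Close();
        }
    }
}
// Usage: PdfSplitter.SplitToPages(@"C:\docs\input.pdf", @"C:\docs\page-{0}.pdf");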
{ "language": "en", "url": "https://stackoverflow.com/questions/122109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Is there a way to get jadclipse working with Eclipse 3.4? I'm a big fan of the Jadclipse plugin and I'd really like to upgrade to Eclipse 3.4 but the plugin currently does not work. Are there any other programs out there that let you use Jad to view the source of code you navigate to from Eclipse? (Very useful when delving into ambiguous code in stack traces). A: Update your Eclipse 3.4 for JadClipse from Help -> Software Updates: http://webobjects.mdimension.com/jadclipse/3.3 Restart Eclipse and set the JadClipse properties. It doesn't just work out of the box; this is the solution. A: I'm successfully using JadClipse with Eclipse 3.4 Eclipse 3.4.0.I20080617-2000 JadClipse 3.3.0 It just works! EDIT: Actually, see OlegSOM's answer below for the additional steps that you might need to remember to take, if like me you forget to read documentation sometimes! A: Never mind my question above - my problem was my settings for the path to jad.exe and the temp directory. In case anyone else has the same problem I did, make sure the path to the decompiler is correct (like "C:...\jad.exe") and leave the temp directory alone (for me it's "C:\Documents and Settings\{user}\.net.sf.jadclipse"). This is a pretty good utility - infinitely more useful than the default class viewer! A: I can't get the plugin to work with Ganymede (Linux version). When setting the JadClipse class viewer I get the following error in the log file of the workspace (.metadata/.log): java.lang.IncompatibleClassChangeError at net.sf.jadclipse.JadclipseClassFileEditor.doOpenBuffer(JadclipseClassFileEditor.java:101) at net.sf.jadclipse.JadclipseClassFileEditor.doSetInput(JadclipseClassFileEditor.java:45) at net.sf.jadclipse.JadclipseActionBarContributor.setActiveEditor(JadclipseActionBarContributor.java:87) at org.eclipse.ui.internal.EditorActionBars.partChanged(EditorActionBars.java:335) at org.eclipse.ui.internal.WorkbenchPage$3.run(WorkbenchPage.java:628) .....(I don't think the rest of the stack trace is important) Perhaps JadClipse isn't compatible with the version of its Eclipse dependencies (on this line JadClipse makes a call to a class defined in the JDT plugin), but I didn't have the time to figure this out. EDIT: I've simply recompiled the jar using the svn repository and created a new jar for Java 1.5, and it seems to work (Download here). Just download my jar, put it in the plugins folder of Eclipse and remove the old one. A: I had a problem running JadClipse in Eclipse Ganymede. It turns out the Groovy plugin had conflicted with JadClipse. After removing the Groovy plugin, JadClipse ran just fine. Btw here's the problem: Cannot complete the request. See the details.
Unsatisfied dependency: [org.codehaus.groovy.eclipse.feature.feature.group 2.0.0.20090814-1100-e34-N] requiredCapability: org.eclipse.equinox.p2.iu/org.codehaus.groovy.eclipse.core.help/[2.0.0.20090814-1100-e34-N,2.0.0.20090814-1100-e34-N] Unsatisfied dependency: [org.codehaus.groovy.eclipse.feature.feature.group 2.0.0.20090814-1100-e34-N] requiredCapability: org.eclipse.equinox.p2.iu/org.codehaus.groovy.jdt.patch.feature.group/[2.0.0.20090814-1100-e34-N,2.0.0.20090814-1100-e34-N] Unsatisfied dependency: [org.codehaus.groovy.jdt.patch.feature.group 2.0.0.20090814-1100-e34-N] requiredCapability: org.eclipse.equinox.p2.iu/org.eclipse.jdt.feature.group/[3.4.2.r342_v20081217-7o7tEAoEEDWEm5HTrKn-svO4BbDI,3.4.2.r342_v20081217-7o7tEAoEEDWEm5HTrKn-svO4BbDI] Unsatisfied dependency: [org.codehaus.groovy.eclipse.core.help 2.0.0.20090814-1100-e34-N] requiredCapability: osgi.bundle/org.eclipse.help/3.3.102 A: I was just able to successfully install JadClipse with Ganymede. In order to do this I: 1) Installed via Help -> Software Updates: http://webobjects.mdimension.com/jadclipse/3.3 2) Put the Jad executable into a directory that is in the execution path of your operating system. Alternatively, you can configure the path to the Jad executable under Window > Preferences... > Java > JadClipse > Path to Decompiler. (Set the full path, e.g. C:\Program Files\Jad\jad.exe) 3) Went to Window > Preferences... > General > Editors > File Associations and made sure that the JadClipse Class File Viewer has the default file association for *.class files. 4) Restarted Eclipse (eclipse -clean). It is now working perfectly for me! A: What worked for me is that I went to Window > Preferences... > General > Editors > File Associations and reset the default. I set the default to "Class File Viewer" and then back to "JadClipse Class File Viewer". Now it works for some reason. :) If you're out of luck, try that. A: Follow the instructions in this link: http://www.devx.com/Java/Article/22657 But when downloading the jadclipse plugin for Eclipse from http://sourceforge.net/projects/jadclipse/, just download this jar, "net.sf.jadclipse_3.3.0.jar", and put it in the Eclipse plugins folder. The rest is the same way it is in the first link. A: I followed bhupendra's method (add via Help > Software Updates > http://webobjects.mdimension.com/jadclipse/3.3) and it worked for me. Using the jar file directly (even restarting with -clean) didn't work. A: To resolve the problem: Go to Window > Preferences... > General > Editors > File Associations and make sure that the JadClipse Class File Viewer has the default file association for *.class files. Restart Eclipse (eclipse -clean). A: Using this update site with MyEclipse 8.5 seems to work fine: http://webobjects.mdimension.com/jadclipse/3.3 FYI Jeff
{ "language": "en", "url": "https://stackoverflow.com/questions/122110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What is the popen equivalent to read and write to a child process in Windows? Ruby's standard popen3 module does not work on Windows. Is there a maintained replacement that allows for separating stdin, stdout, and stderr? A: The POpen4 gem has a common interface between Unix and Windows. The following example (from their website) works like a charm. require 'rubygems' require 'popen4' status = POpen4::popen4("cmd") do |stdout, stderr, stdin, pid| stdin.puts "echo hello world!" stdin.puts "echo ERROR! 1>&2" stdin.puts "exit" stdin.close puts "pid : #{ pid }" puts "stdout : #{ stdout.read.strip }" puts "stderr : #{ stderr.read.strip }" end puts "status : #{ status.inspect }" puts "exitstatus : #{ status.exitstatus }" A: popen3 works with MRI 1.9.x on Windows. See http://en.wikibooks.org/wiki/Ruby_Programming/Running_Multiple_Processes
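To illustrate the last point, the standard-library equivalent on MRI 1.9+ is Open3.popen3, which separates all three streams; a minimal sketch mirroring the POpen4 example above:
require 'open3'

Open3.popen3("cmd") do |stdin, stdout, stderr, wait_thr|
  stdin.puts "echo hello world!"
  stdin.puts "echo ERROR! 1>&2"
  stdin.puts "exit"
  stdin.close
  puts "pid        : #{wait_thr.pid}"
  puts "stdout     : #{stdout.read.strip}"
  puts "stderr     : #{stderr.read.strip}"
  puts "exitstatus : #{wait_thr.value.exitstatus}"
end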
{ "language": "en", "url": "https://stackoverflow.com/questions/122115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you get the DataSource that a report uses in SQL Server Reporting Services 2005 In order to create the proper queries I need to be able to run a query against the same data source that the report is using. How do I get that information programmatically? Preferably the connection string or pieces of data used to build the connection string. A: DataSourceDefinition dataSourceDefinition = reportingService.GetDataSourceContents("DataSourceName"); string connectionString = dataSourceDefinition.ConnectString; A: If you have the right privileges you can go to http://servername/reports/ and view the data source connection details through there. A: If you're using Visual Studio, just look at the Data tab. If you just have access to the report on the SSRS server, you can navigate to the report, click the Properties tab, then the Data Sources option on the left. If it's a custom data source you can get the connection info from there. If it's shared, you'll need to navigate to the data source path shown, and can get the connection info from there. EDIT: Also, if you just have the report file itself you should be able to open it in Notepad and find the data source information inside. Unless it uses a shared data source I guess... in which case you'll need to find that. EDIT: This answer applied to the question as originally written, before "programmatically" was added.
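Putting the first answer in context, here is a sketch against the ReportingService2005 SOAP proxy that handles both shared and embedded data sources; the report path is illustrative, and "rs" is assumed to be an instance of the generated proxy class:
// "rs" is a ReportingService2005 web-service proxy instance.
DataSource[] sources = rs.GetItemDataSources("/MyFolder/MyReport");
foreach (DataSource source in sources)
{
    DataSourceReference reference = source.Item as DataSourceReference;
    DataSourceDefinition definition = reference != null
        ? rs.GetDataSourceContents(reference.Reference) // shared data source
        : (DataSourceDefinition)source.Item;            // embedded definition
    Console.WriteLine("{0}: {1}", source.Name, definition.ConnectString);
}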
{ "language": "en", "url": "https://stackoverflow.com/questions/122127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Calling WCF Service from Silverlight I'm calling a locally hosted WCF service from Silverlight and I get the exception below. I've created a clientaccesspolicy.xml, which is situated in the root of my host. <?xml version="1.0" encoding="utf-8"?> <access-policy> <cross-domain-access> <policy> <allow-from http-request-headers="*"> <domain uri="*"/> </allow-from> <grant-to> <resource path="/" include-subpaths="true"/> </grant-to> </policy> </cross-domain-access> </access-policy> An error occurred while trying to make a request to URI 'http://localhost:8005/Service1.svc'. This could be due to a cross domain configuration error. Please see the inner exception for more details. ---> {System.Security.SecurityException ---> System.Security.SecurityException: Security error. at MS.Internal.InternalWebRequest.Send() at System.Net.BrowserHttpWebRequest.BeginGetResponseImplementation() at System.Net.BrowserHttpWebRequest.InternalBeginGetResponse(AsyncCallback callback, Object state) at System.Net.AsyncHelper.<>c__DisplayClass4.b__3(Object sendState) --- End of inner exception stack trace --- at System.Net.AsyncHelper.BeginOnUI(BeginMethod beginMethod, AsyncCallback callback, Object state) at System.Net.BrowserHttpWebRequest.BeginGetResponse(AsyncCallback callback, Object state) at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelAsyncRequest.CompleteSend(IAsyncResult result) at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelAsyncRequest.OnSend(IAsyncResult result)} Any ideas on how to progress? A: There are some debugging techniques listed here... and one more useful post... A: I know the service is working correctly, because I added it as a reference to a basic website and it worked. I'll try to play with Fiddler, although there is a slight issue as the XAML control is not embedded into a web page; it's using the built-in test page renderer. Here are a few pointers that I've found that need to be checked: Adding a clientaccesspolicy.xml as shown in my question. Adding a crossdomain.xml to the host root: <!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd"> <cross-domain-policy> <allow-http-request-headers-from domain="*" headers="*"/> </cross-domain-policy> Ensure the binding is basicHttpBinding, as this is the only one supported by Silverlight (currently). The service needs this attribute: [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] Useful reads: http://weblogs.asp.net/tolgakoseoglu/archive/2008/03/18/silverlight-2-0-and-wcf.aspx http://timheuer.com/blog/archive/2008/06/06/changes-to-accessing-services-in-silverlight-2-beta-2.aspx http://silverlight.net/forums/t/19191.aspx http://timheuer.com/blog/archive/2008/04/09/silverlight-cannot-access-web-service.aspx A: Some debugging techniques available via a webcast I did that attempted to demonstrate some of the techniques I wrote about: https://www.livemeeting.com/cc/mseventsbmo/view?id=1032386656&role=attend&pw=F3D2F263 A: Don't know if your issue is the same, but I've just blogged about the major pain I had this weekend trying to get cross-domain happening with my SL app talking to my Console hosted WCF service.
http://wallism.wordpress.com/2009/03/01/silverlight-communication-exception/ In a nutshell though, you must have a crossdomain.xml and don't have 'headers="*"' Bad: <allow-access-from domain="*" headers="*" /> Good: <allow-access-from domain="*" /> <allow-http-request-headers-from domain="*" headers="*" /> Instead of * for headers you can have "SOAPAction" (works either way) Oh, and when you do get it working you may want to make it a bit more secure :-) Good luck! A: I'd start by making sure Silverlight is actually finding your client access policy file by inspecting the network calls using Fiddler, Firebug, or a similar tool. A: If you are using a WCF service at the same location the Silverlight app was served from, you don't need a cross domain policy. I have had similar errors when returning LINQ to SQL data from the client where there was a relation between multiple entities. First make sure your WCF service is working properly. Do this by creating a simple ping function that just echoes its input. Make sure you can call this first. If this works and your other function doesn't, it's something with either the parameters or the return value of the function. If the first function also fails, use a tool like Fiddler to see what data is sent over the wire. Use a . at the end of the host to see the data from localhost. So something like http://localhost:1234./default.aspx and use the same for the WCF address. A: I have the same problem. I do see that clientaccesspolicy.xml is fetched by the Silverlight client app successfully. I did ensure clientaccesspolicy.xml is not malformed by requesting it directly via Firefox. The policy is wide open, same as the one above. Now here comes the bizarre twist. If I remove clientaccesspolicy.xml and instead add the Flash style crossdomain.xml policy file then it works. I do see, through inspecting the network, how clientaccesspolicy.xml is requested first unsuccessfully and then Silverlight falls back to crossdomain.xml. So I do have a workaround, but I prefer making clientaccesspolicy.xml work so that there is no additional unneeded network round trip. Any suggestions? A: I found the book Data-Driven Services with Silverlight 2 by John Papa to go over this extensively. I had the same issues and this excellent book explains it all. A: Make sure the endpoints and binding of the WCF service are defined correctly. Calling a WCF service from the same application does not need a cross domain policy file. A: I had a similar problem and removing the service reference and adding it back again solved the problem for me.
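For the basicHttpBinding pointer above, a minimal illustrative web.config fragment on the service side might look like this; the service and contract names are placeholders:
<system.serviceModel>
  <services>
    <service name="MyApp.Service1">
      <!-- Silverlight 2 can only consume basicHttpBinding endpoints. -->
      <endpoint address="" binding="basicHttpBinding" contract="MyApp.IService1" />
    </service>
  </services>
</system.serviceModel>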
{ "language": "en", "url": "https://stackoverflow.com/questions/122144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How can I disable work item creation at the end of a failed Team Foundation Build? I'm using Team Foundation Build but we aren't yet using TFS for problem tracking, so I would like to disable the work item creation on a failed build. Is there any way to do this? I tried commenting out the work item info in the TFSBuild.proj file for the build type but that didn't do the trick. A: Try adding this inside the PropertyGroup in your TFSBuild.proj: <SkipWorkItemCreation>true</SkipWorkItemCreation> If you are curious as to how this works, Microsoft.TeamFoundation.Build.targets contains the following: <Target Name="CoreCreateWorkItem" Condition=" '$(SkipWorkItemCreation)'!='true' and '$(IsDesktopBuild)'!='true' " DependsOnTargets="$(CoreCreateWorkItemDependsOn)"> <PropertyGroup> <WorkItemTitle>$(WorkItemTitle) $(BuildNumber)</WorkItemTitle> <BuildLogText>$(BuildlogText) &lt;a href='file:///$(DropLocation)\$(BuildNumber)\BuildLog.txt'&gt;$(DropLocation)\$(BuildNumber)\BuildLog.txt&lt;/a &gt;.</BuildLogText> <ErrorWarningLogText Condition="!Exists('$(MSBuildProjectDirectory)\ErrorsWarningsLog.txt')"></ErrorWarningLogText> <ErrorWarningLogText Condition="Exists('$(MSBuildProjectDirectory)\ErrorsWarningsLog.txt')">$(ErrorWarningLogText) &lt;a href='file:///$(DropLocation)\$(BuildNumber)\ErrorsWarningsLog.txt'&gt;$(DropLocation)\$(BuildNumber)\ErrorsWarningsLog.txt&lt;/a &gt;.</ErrorWarningLogText> <WorkItemDescription>$(DescriptionText) %3CBR%2F%3E $(BuildlogText) %3CBR%2F%3E $(ErrorWarningLogText)</WorkItemDescription> </PropertyGroup> <CreateNewWorkItem TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" BuildNumber="$(BuildNumber)" Description="$(WorkItemDescription)" TeamProject="$(TeamProject)" Title="$(WorkItemTitle)" WorkItemFieldValues="$(WorkItemFieldValues)" WorkItemType="$(WorkItemType)" ContinueOnError="true" /> </Target> You can override any of this functionality in your own build script, but Microsoft provide the handy SkipWorkItemCreation condition at the top, which you can use to cancel execution of the whole target. A: If you are using TFS 2010 or above, you can do this in the build definition itself. In the Process tab of the Build Definition, set the Create Work Item on Failure property to false (under the Advanced section).
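For clarity, here is where that property sits in a TFSBuild.proj; this is an illustrative skeleton, and your real file will contain many other properties and imports:
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- ...existing build properties... -->
    <SkipWorkItemCreation>true</SkipWorkItemCreation>
  </PropertyGroup>
</Project>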
{ "language": "en", "url": "https://stackoverflow.com/questions/122154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Is there an easy way to attach source in Eclipse? I'm a big fan of the way Visual Studio will give you the comment documentation / parameter names when completing code that you have written and ALSO code that you are referencing (various libraries/assemblies). Is there an easy way to get inline javadoc/parameter names in Eclipse when doing code complete or hovering over methods? Via plugin? Via some setting? It's extremely annoying to use a lot of libraries (as happens often in Java) and then have to go to the website or local javadoc location to look up information when you have it in the source jars right there!
A: 1) Hold Control and left-click on the method you want to see. Eclipse will then bring you to the Source Not Found page.
2) Click on "Attach Source".
3) (screenshot omitted)
4) Navigate to C:\Program Files\Java\jdk-9.0.1\lib\src.zip
5) Click OK.
Now you should see the source code.
A: I've found that sometimes you point to the directory you'd assume was correct, and it still states that it can't find the file in the attached source. At these times, I've realized that the last path element was "src". Just removing this path element (thus indeed pointing one level above the actual path where the "org" or "com" folder is located) magically makes it work. Somehow, Eclipse seems to imply this "src" path element if present, and if you then have it included in the source path, Eclipse chokes. Or something like that.
A: Yes, there is an easy way: go to http://sourceforge.net/projects/jdk7src/ and download the zip file, then attach it in Eclipse. Give the path where you downloaded the zip file; you can then browse through the source.
A: When you add a jar file to a classpath you can attach a source directory, zip or jar file to that jar. In the Java Build Path properties, on the Libraries tab, expand the entry for the jar and you'll see there's an item for the source attachment. Select this item and then click the Edit button. This lets you select the folder, jar or zip that contains the source. Additionally, if you select a class or a method in the jar and CTRL+CLICK on it (or press F3) then you'll go into the bytecode view, which has an option to attach the source code. Doing these things will give you all the parameter names as well as full javadoc. If you don't have the source but do have the javadoc, you can attach the javadoc via the first method. It can even reference an external URL if you don't have it downloaded.
A:
* Click on the Java code you want to see. (Click on List to open List.java if you want to check the source code for java.util.List.)
* Click on the "Attach Source" button.
* You will be asked to "Select the location (folder, JAR or zip) containing the source for rt.jar".
* Select the "External location" option. Locate the src.zip file.
* The path for src.zip is: *\Java\jdk1.8.0_45\src.zip
A:
* Put the source files into a zip file (as is done for the Java source)
* Go to Project properties -> Libraries
* Select Source attachment and click 'Edit'
* On Source Attachment Configuration click 'Variable'
* On "Variable Selection" click 'New'
* Put a meaningful name and select the zip file created in step 1
A: Short answer would be yes. You can attach source using the properties for a project.
Go to Properties (for the project) -> Java Build Path -> Libraries. Select the library you want to attach source/javadoc for and then expand it; you'll see a list like so:
Source Attachment: (none)
Javadoc location: (none)
Native library location: (none)
Access rules: (No restrictions)
Select Javadoc location and then click Edit on the right hand side. It should be quite straightforward from there.
A: An easy way of doing this is:
* Download the respective SRC files/folder.
* In the Eclipse editor, Ctrl+click or F3 on a method/class you need the source for. A new tab opens up which says "No attached source found".
* Click the "attach sources" button, click the "attach source folder" button, and browse to the location of the downloaded SRC folder. Done!
(P.S.: Button labels may vary slightly, but this is pretty much it.)
A: Up until yesterday I was stuck painstakingly downloading source zips for tons of jars and attaching them manually for every project. Then a colleague turned me on to the Java Source Attacher. It does what Eclipse should do - a right-click context menu that says "Attach Java Source". It automatically downloads the source for you and attaches it. I've only hit a couple of libraries it doesn't know about, and when that happens it lets you contribute the URL back to the community so no one else will have a problem with that library.
A: Another option is to right-click on your jar file, which would be under (Your Project)->Referenced Libraries->(your jar), and click on Properties. Then click on Java Source Attachment, and in the location path put in the location of your source jar file. This is just another approach to attaching your source file.
A: Another way is to add the folder to your source lookup path: http://help.eclipse.org/helios/index.jsp?topic=%2Forg.eclipse.jdt.doc.user%2Freference%2Fviews%2Fdebug%2Fref-editsourcelookup.htm
A: If you build your libraries with gradle, you can attach sources to the jars you create and then Eclipse will automatically use them. See the answer by @MichaelOryl How to build sources jar with gradle. Copied here for your reference:
jar {
    from sourceSets.main.allSource
}
The solution shown is for use with the gradle java plugin. Mileage may vary if you're not using that plugin.
A: Another thought for making that easier when using an automated build: when you create a jar of one of your projects, also create a source files jar:
project.jar
project-src.jar
Instead of going into the build path options dialog to add a source reference to each jar, try the following: add one source reference through the dialog, then edit your .classpath and, using the first jar entry as a template, add the source jar files to each of your other jars. This way you can use Eclipse's navigation aids to their fullest while still using something more standalone to build your projects.
A: I was going to ask for an alternative to attaching sources to the same JAR being used across multiple projects. Originally, I had thought that the only alternative is to re-build the JAR with the source included, but looking at the "user library" feature, you don't have to build the JAR with the source. This is an alternative when you have multiple projects (related or not) that reference the same JAR. Create a "user library" for each JAR and attach the source to them. Next time you need a specific JAR, instead of using "Add JARs..." or "Add External JARs..." you add the "user library" that you have previously created.
You would only have to attach the source ONCE per JAR and can re-use it for any number of projects.
A: For those who are writing Eclipse plugins and want to include the source: in the feature Export Wizard there is an option for including the source.
A: Just click on Attach Source and select the folder path; the name will be the same as the folder name (in my case). Remember one thing: you need to select the path up to the project's base location, with a "\" suffix, e.g. D:\MyProject\
A: It may seem like overkill, but if you use Maven and include source, the mvn eclipse plugin will generate all the source configuration needed to give you all the in-line documentation you could ask for.
{ "language": "en", "url": "https://stackoverflow.com/questions/122160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "134" }
Q: What is the simplest way to handle images in Drupal? I am used to TYPO3, where I can just upload an image within the content element and then just set the size and so on. Is there a way to handle images in Drupal somehow like this?
A: Image upload support for Drupal is a bit of a jungle. The most basic way to do this is with image.module and img_assist. This will add a link below each textarea allowing an upload, and when you upload one it inserts a custom tag into the content body that specifies the image, its title, its size and alignment, etc. Note that this inserts the image reference into the body text of the node and gives the author control over how & where the image appears. This may be all you need if the site is just for your personal use and you're looking to do something simple like insert images into your blog posts.
An alternative (especially if others are going to create content and you want it to always look good without a lot of hassle and user training) is to restrict the placement of images in your theme--let people upload images as attachments, and render the images in standard slots outside of the body text. This is often done using the CCK imagefield to allow specifying up to N images--so you add separate fields to the Create Content node where the relevant images are specified. One can be marked as special and you can pull that one out to be the thumbnail that goes with the teaser. (IIRC, imagefield may not be ready for D6 yet.)
To make this scenario work better, you probably want images auto-resized to a standard size that fits into your theme, and a thumbnail version to be auto-generated. A module like imagecache can do this, though it's not the easiest thing to set up.
The IMCE module is a DHTML/JavaScript uploader UI that allows the user to browse previously uploaded images on the server. (There's control over what folders they can see.) IMCE has an associated CCK IMCE ImageField field type to replace the regular imagefield. IMCE also integrates with TinyMCE and FCKeditor to replace their own uploader UIs. (IMCE and IMCE's imagefield seem to work on D6.)
Some people swear by the Asset module for uploading & selection of previously uploaded content; I believe it can also help embed images hosted on Flickr and videos from YouTube. Currently only available for Drupal 5.
A: Not sure exactly what you're looking to do, but a good place to start would be by taking a look at the filefield and imagecache modules for use with CCK content types. Scald also looks promising, but is still awaiting an official release.
A: Here's a detailed discussion of this topic.
A: The simplest way, IMHO, is using the image assist module.
A: I'm just starting to learn about Drupal, but here are some of my finds in the Drupal-image-gallery department - perhaps one of these will do what you want:
Flickrup
PROG Gallery
Gallery2
A: Take a look at this screencast: http://www.youtube.com/watch?v=TLB9-1t_mrE It cleared up a bunch of confusion I was facing when first tackling this issue. Following the setup outlined in the video will probably get you very close to what you want.
{ "language": "en", "url": "https://stackoverflow.com/questions/122173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Using ASP.Net 3.5 SP1 Routing with SharePoint 2007 I'm trying to set up some friendly URLs on a SharePoint website. I know that I can do the ASP.Net 2.0 friendly URLs using RewritePath, but I was wondering if it was possible to make use of the System.Web.Routing that comes with ASP.NET 3.5 SP1. I think I've figured out how to get my route table loaded, but I'm not clear on what method to use to get the correct IHttpHandler to pass out. Thanks!
A: I've been asked to look at this as part of a SharePoint evaluation process. My understanding is that the URI template is essentially the host name followed by the recursive folder structure. This is further complicated by SharePoint truncating the URI at 255 characters. So if you have a particularly deep or verbose folder structure then your URI can become invalid. I was thinking about essentially prettifying / tidying up the URI by following a human-readable convention and converting to the SharePoint convention, i.e.:
http://myhostname.com/docs/human-resources/case-files/2009/reviews/ed-blackburn.docx
converts to SharePoint's:
http://myhostname.com/human%20resources/case%20files/2009/reviews/ed%20blackburn.docx
Any additional required services can be controlled by the controller. If longer than 255 characters, some kind of tinyurl approach would be my initial suggestion.
A: It should be as easy as the below.
var route = new Route("blah/{*path}", new MyRouteHandler());
RouteTable.Routes.Add(route);

public class MyRouteHandler : IRouteHandler
{
    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        // return some HTTP handler here
    }
}
Then register System.Web.Routing.UrlRoutingModule under HTTP modules in web.config and you should be good to go.
<add name="Routing" type="System.Web.Routing.UrlRoutingModule, System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
A: I ended up taking what Ryan had:
var route = new Route("blah/{*path}", new MyRouteHandler());
RouteTable.Routes.Add(route);

public class MyRouteHandler : IRouteHandler
{
    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        // rewrite to some known SharePoint path
        HttpContext.Current.RewritePath("~/Pages/Default.aspx");
        // return some HTTP handler here
        return new DefaultHttpHandler();
    }
}
That seems to work OK for me.
{ "language": "en", "url": "https://stackoverflow.com/questions/122175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the best practice for an environment based configuration file in Ruby on Rails I have several properties that are dependent on the environment the application is running in. For example, there are links to another application that is being developed concurrently that get displayed in the header and footer. I want those links to be different depending on what environment they are in. This is slightly different than the way we use the out-of-box environment configuration files because our system administrator has mongrel running in 'Production' mode even on the development server. I only run mongrel in 'Development' mode on my laptop. Is there a standard way for handling situations like this? Or should we run mongrel in "Development" mode on the dev server and so on up the line? In that case, what happens if we have an extra level in our env hierarchy? (Dev, Test, UAT, Production)
A: You can go with a custom config file. Check out this thread.
A: Running in production mode on UAT is definitely correct; you want that to work as closely to production as possible. I assume the test server is not a server where you run CI on the project test suite but more some kind of integration server where people from inside the team can test new features before the users get their hands on it: this is more of a mixed case, but I would probably have it run in dev mode, if only for the clearer error messages and improved logging (a lot of bugs are bound to be found there and you will want maximum information). I guess the dev server is some kind of integration server for the devs themselves; here again, running it in dev mode would probably be more beneficial with regard to the errors raised and logs. As for the answer to your specific question, I would definitely have a look at the thread mentioned by @webmat since you should find your answer there, and you could also have a look here.
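A rough sketch of the custom-config-file approach mentioned in the first answer (the file name and keys are assumptions; RAILS_ROOT and RAILS_ENV are the constants of that Rails era -- newer versions use Rails.root and Rails.env instead):
# config/app_links.yml:
#   development:
#     other_app_url: http://localhost:3001
#   uat:
#     other_app_url: http://uat.other.example.com
#   production:
#     other_app_url: http://other.example.com
require 'yaml'
APP_LINKS = YAML.load_file(
  File.join(RAILS_ROOT, 'config', 'app_links.yml')
)[RAILS_ENV]
# then in a view or helper: APP_LINKS['other_app_url']
Load it once at startup (config/environment.rb or an initializer); an extra level like UAT then becomes just one more top-level key in the YAML file plus a matching config/environments/uat.rb.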
{ "language": "en", "url": "https://stackoverflow.com/questions/122178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How best update a running application on a remote machine So my build machine spits out a new executable and I would like to update my test machine with the new build. In order to do this I would need to somehow kill the process on the remote machine, copy over the new binary and start it running. And for some reason pskill and psexec don't work due to some weird IT setup. What would be a good way to do this? A: You could have your executable regularly poll some drop location for the presence of a new version, and when one is found shut down cleanly and pass control to the new version, e.g. using an exec() call.
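To make that concrete, a minimal C# sketch of the poll-and-exec idea (the drop path is an assumption, and a real version needs error handling plus a strategy for cleaning up the old binary):
// inside a timer tick or background loop in the running app
string dropPath = @"\\buildserver\drop\MyApp.exe";                  // assumed location
string staged = Path.Combine(Path.GetTempPath(), "MyApp.new.exe");

if (File.GetLastWriteTimeUtc(dropPath) >
    File.GetLastWriteTimeUtc(Application.ExecutablePath))
{
    File.Copy(dropPath, staged, true);  // copy while the old build is still running
    Process.Start(staged);              // hand control to the new build
    Environment.Exit(0);                // then shut down cleanly
}
(Application.ExecutablePath assumes a WinForms app; the snippet needs System.IO, System.Diagnostics, and System.Windows.Forms.)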
{ "language": "en", "url": "https://stackoverflow.com/questions/122187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: For list controls, should find-as-you-type match at the start of an entry, or anywhere in an entry? I have a list control in GTK+ (a gtk.TreeView with one column), with "find-as-you-type" enabled (so typing any text will open a small search field for searching through the list entries). Now, if the user enters some search text like "abc", should I search only for entries starting with "abc", or should I search for entries that contain "abc" somewhere in their text? (links to relevant Human Interface Guidelines appreciated)
A: As Omer Kooheji said, the correct answer depends a lot on what the listbox contains. However, on the basis of the Principle of Least Astonishment, I would recommend matching at the start of an entry; that's the way it happens with most list boxes (on the Web, in time zone selection in Linux installations, etc.), so that is the behaviour that most users would expect. However, that is generic advice without knowing the exact application. If your application is such that people might not know the exact start but might know some substring in between, it obviously makes much more sense to match the input anywhere.
A: As a user, I appreciate a "contains" search rather than a "starts with". Sometimes you can't remember exactly what you're looking for and it's more helpful to suggest things that are similar to your search query rather than using it as a straight filter. There are times when there are multiple ways to list something as well, e.g.:
Shining, The - King, Stephen
The Shining - Stephen King
King, Stephen - The Shining
etc. In my opinion, typing in "Shining" should return any of those results.
A: Surely the correct answer is it depends? What is your subject matter? I'd vote for anywhere in the text, or matching any whole word in the text (i.e. not having to match the middle of words) if you are feeling lazy. Although there are some instances where an exact match from the first word is preferable.
A: Ideally, a find-as-you-type will do partial matching on ordered characters until a word boundary is reached. For example (in pseudo-code):
var input = getInput();
input =~ s/(.)/$1.*/g;
return find_items(input); // Assuming this takes a regexp as its input
This means that for input = "Shing" and a database containing {..., Sine, Shining, 'The Shining', ...}, the output will be {Shining, 'The Shining'}.
When a word boundary is reached, the matching should change to match contiguous word parts. Roughly:
var input = getInput();
input =~ s/(\w+)/$1.*/g;
return find_items(input); // Assuming this takes a regexp as its input
Such that for input = "Th Shi" and the same database as above, the output will be {'The Shining'}.
Edit (Addressing the UI Guidelines request): You could do worse than watching this video
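A PyGTK-flavored sketch of the first matching mode above, wired into the gtk.TreeView search (note the PyGTK 2.x convention that a search-equal function returns False on a match; the single-column handling is an assumption):
import re

def subsequence_pattern(key):
    # "Shing" -> S.*h.*i.*n.*g (ordered-character matching, as in the pseudo-code)
    return re.compile('.*'.join(re.escape(c) for c in key), re.IGNORECASE)

def search_equal_func(model, column, key, rowiter):
    # return False when the row matches, True otherwise
    text = model.get_value(rowiter, column) or ''
    return subsequence_pattern(key).search(text) is None

treeview.set_search_equal_func(search_equal_func)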
{ "language": "en", "url": "https://stackoverflow.com/questions/122192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Automated Test Framework - Windows CE Looking for a way to drive a Compact Framework app running on a CE device. All we basically need is to be able to write little scripts that press buttons and click on things. Anyone know of such a tool, or is it best to just hack one up ourselves?
A: Unfortunately there are no nice, unified tools (that I've found anyway) for testing CF apps. No one provides mocking, since the CF CLR is missing things like Emit, making the task difficult for a small market. Microsoft provides unit test capabilities in Studio and Team Foundation Server for smart device apps, but they don't do UI, debugging the tests is amazingly painful and just running tests is slow, so they tend to be good for regression tests and not much else. Microsoft provides some tools and a tool framework for desktop-driven testing in the CE Test Kit (CETK), including the DATK that Alan alludes to. They also provide things like the Hopper Test Tool, which they use as part of their logo testing. If none of these seem to work for you, a fairly rapid way to set up testing that's still driven from the PC (which I think all testing should be, else it tends to be painful to run, tough to automate, and a bear to log pass/fail data) is to use the CoreCon APIs or the Remote Tools Framework to build your communication pipe and test framework. I sincerely hope that the VSD (Studio for Devices) team is dogfooding TFS and that we get a much richer toolset with the next release of Studio.
A: The Windows Mobile 6 SDK (assuming you're CE6-based) comes with the Windows Mobile TestKit, which has tools for writing UI automation. If you're CE5-based, Platform Builder (the tools used to build devices) comes with something called the DATK (Device Automation Toolkit) - this was the predecessor to the WMTK mentioned above.
A: Look at TestComplete - they said that the new version 7 would be able to test Windows Mobile applications.
A: You can automate CE and Windows Mobile at the GUI level using a tool such as Eggplant in conjunction with a remote control tool such as SOTI Pocket Controller or MS Remote Display Controller. Personally, I'd prefer an object-based tool to an image-matching tool, for reasons of robustness and maintainability. You can also automate directly with SOTI but I found it cumbersome, as explained here.
A: Slightly off topic, but we (www.orbiz.biz, if it's still alive) did a kind of a port of NUnit, so we had a runner on the device, and executed the CF code on the device and ran tests. It worked quite well - I don't think it was a big change from the original one, so newer NUnit versions might work with the newer CF. Sorry, I don't have the code, and the company kinda doesn't exist anymore; otherwise, I'd be happy to share what we had :(
{ "language": "en", "url": "https://stackoverflow.com/questions/122198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How can you hook a SharePoint 2007 feature into the Application_Start of a site? I was wondering if there is a good way to hook into the Application_Start of a SharePoint 2007 site when developing a feature? I know I can directly edit the Global.asax file in the site root, but is there a way to do this so that it gets deployed with the feature? Thanks! A: My gut feeling on this is that it won't be possible. Application_Start is called by the runtime as the asp.net engine is starting up, so there most likely can't be any way to hook the handler outside of modifying the Global.asax - e.g. the hook must be declarative and persistent as it has to survive the application stopping/unloading. So, if you have to write to the global.asax, I guess you could write a Feature EventReceiver to perform the modification. That aside, can you give more details on the why? Perhaps there are other angles of attack. The idea of modifying the global.asax on the fly makes me feel ill. That can't be good. Oisin A: This is actually possible, but it doesn't involve the Global.asax file. Many of Microsoft's examples demonstrate wiring code in via the Global.asax, but this is not a best-practices approach when it comes to SharePoint. Ideally, your code should get packaged as a Feature and deployed via WSP (as you already know). The key lies in implementing the code in question as an HttpModule (i.e., a type that implements the IHttpModule interface) and wiring it into the ASP.NET pipeline servicing your SharePoint application. Roughly speaking, these are the steps: * *Create a class that implements the IHttpModule interface. *Implement the Init method in your HttpModule; this is called when the HttpApplication (in this case, the SPHttpApplication) is setup, and it gives you an opportunity to carry out processing, wire-up event delegates for other pipeline events, etc. *Create an SPFeatureReceiver that will add and remove your HttpModule from target web.config files on activation and deactivation, respectively. This is carried out using the SPWebConfigModification type to update the <httpModules> node in target web.config files. *Package all as a Feature and deploy via WSP. For more information on HttpModule development, see http://msdn.microsoft.com/en-us/library/ms227673.aspx. For some additional detail on the SPWebConfigModification type, see http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.administration.spwebconfigmodification.aspx. Result: a class that can handle application startup and is deployable via Feature. No manual file hacking required. I've successfully used this in a number of scenarios -- most recently with a custom caching provider (IVaryByCustomHandler) that needed to register itself for callbacks with the SPHttpApplication when it started. Though your question is a bit older, I hope this helps!
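A condensed C# sketch of steps 1 and 3 above, for a web-application-scoped feature (module, assembly, and owner names are placeholders; this targets the classic <httpModules> section -- IIS 7 integrated mode would use configuration/system.webServer/modules instead):
using System.Web;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

public class MyStartupModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // Called when the SPHttpApplication spins up -- do one-time startup
        // work here, or wire up delegates for later pipeline events.
    }
    public void Dispose() { }
}

public class StartupModuleReceiver : SPFeatureReceiver
{
    private const string Owner = "MyStartupFeature";

    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        SPWebApplication webApp = (SPWebApplication)properties.Feature.Parent;
        SPWebConfigModification mod = new SPWebConfigModification();
        mod.Path = "configuration/system.web/httpModules";
        mod.Name = "add[@name='MyStartupModule']";
        mod.Value = "<add name=\"MyStartupModule\" type=\"MyNamespace.MyStartupModule, MyAssembly\" />";
        mod.Owner = Owner;
        mod.Sequence = 0;
        mod.Type = SPWebConfigModification.SPWebConfigModificationType.EnsureChildNode;
        webApp.WebConfigModifications.Add(mod);
        webApp.Update();
        webApp.WebService.ApplyWebConfigModifications();
    }

    public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
    {
        SPWebApplication webApp = (SPWebApplication)properties.Feature.Parent;
        // Remove only the modifications this feature owns
        for (int i = webApp.WebConfigModifications.Count - 1; i >= 0; i--)
        {
            if (webApp.WebConfigModifications[i].Owner == Owner)
                webApp.WebConfigModifications.RemoveAt(i);
        }
        webApp.Update();
        webApp.WebService.ApplyWebConfigModifications();
    }

    public override void FeatureInstalled(SPFeatureReceiverProperties properties) { }
    public override void FeatureUninstalling(SPFeatureReceiverProperties properties) { }
}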
{ "language": "en", "url": "https://stackoverflow.com/questions/122205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How can I get the IP Address of a local computer? In C++, what's the easiest way to get the local computer's IP address and subnet mask? I want to be able to detect the local machine's IP address in my local network. In my particular case, I have a network with a subnet mask of 255.255.255.0 and my computer's IP address is 192.168.0.5. I need to get these had two values programmatically in order to send a broadcast message to my network (in the form 192.168.0.255, for my particular case) Edit: Many answers were not giving the results I expected because I had two different network IP's. Torial's code did the trick (it gave me both IP addresses). Edit 2: Thanks to Brian R. Bondy for the info about the subnet mask. A: The question is trickier than it appears, because in many cases there isn't "an IP address for the local computer" so much as a number of different IP addresses. For example, the Mac I'm typing on right now (which is a pretty basic, standard Mac setup) has the following IP addresses associated with it: fe80::1%lo0 127.0.0.1 ::1 fe80::21f:5bff:fe3f:1b36%en1 10.0.0.138 172.16.175.1 192.168.27.1 ... and it's not just a matter of figuring out which of the above is "the real IP address", either... they are all "real" and useful; some more useful than others depending on what you are going to use the addresses for. In my experience often the best way to get "an IP address" for your local computer is not to query the local computer at all, but rather to ask the computer your program is talking to what it sees your computer's IP address as. e.g. if you are writing a client program, send a message to the server asking the server to send back as data the IP address that your request came from. That way you will know what the relevant IP address is, given the context of the computer you are communicating with. That said, that trick may not be appropriate for some purposes (e.g. when you're not communicating with a particular computer) so sometimes you just need to gather the list of all the IP addresses associated with your machine. The best way to do that under Unix/Mac (AFAIK) is by calling getifaddrs() and iterating over the results. Under Windows, try GetAdaptersAddresses() to get similar functionality. For example usages of both, see the GetNetworkInterfaceInfos() function in this file. A: Winsock specific: // Init WinSock WSADATA wsa_Data; int wsa_ReturnCode = WSAStartup(0x101,&wsa_Data); // Get the local hostname char szHostName[255]; gethostname(szHostName, 255); struct hostent *host_entry; host_entry=gethostbyname(szHostName); char * szLocalIP; szLocalIP = inet_ntoa (*(struct in_addr *)*host_entry->h_addr_list); WSACleanup(); A: Also, note that "the local IP" might not be a particularly unique thing. If you are on several physical networks (wired+wireless+bluetooth, for example, or a server with lots of Ethernet cards, etc.), or have TAP/TUN interfaces setup, your machine can easily have a whole host of interfaces. A: The problem with all the approaches based on gethostbyname is that you will not get all IP addresses assigned to a particular machine. Servers usually have more than one adapter. Here is an example of how you can iterate through all Ipv4 and Ipv6 addresses on the host machine: void ListIpAddresses(IpAddresses& ipAddrs) { IP_ADAPTER_ADDRESSES* adapter_addresses(NULL); IP_ADAPTER_ADDRESSES* adapter(NULL); // Start with a 16 KB buffer and resize if needed - // multiple attempts in case interfaces change while // we are in the middle of querying them. 
DWORD adapter_addresses_buffer_size = 16 * 1024;  // 16 KB
    for (int attempts = 0; attempts != 3; ++attempts)
    {
        adapter_addresses = (IP_ADAPTER_ADDRESSES*)malloc(adapter_addresses_buffer_size);
        assert(adapter_addresses);

        DWORD error = ::GetAdaptersAddresses(
            AF_UNSPEC,
            GAA_FLAG_SKIP_ANYCAST | GAA_FLAG_SKIP_MULTICAST | GAA_FLAG_SKIP_DNS_SERVER | GAA_FLAG_SKIP_FRIENDLY_NAME,
            NULL,
            adapter_addresses,
            &adapter_addresses_buffer_size);

        if (ERROR_SUCCESS == error)
        {
            // We're done here, people!
            break;
        }
        else if (ERROR_BUFFER_OVERFLOW == error)
        {
            // Try again with the new size
            free(adapter_addresses);
            adapter_addresses = NULL;
            continue;
        }
        else
        {
            // Unexpected error code - log and throw
            free(adapter_addresses);
            adapter_addresses = NULL;
            // @todo LOG_AND_THROW_HERE();
        }
    }

    // Iterate through all of the adapters
    for (adapter = adapter_addresses; NULL != adapter; adapter = adapter->Next)
    {
        // Skip loopback adapters
        if (IF_TYPE_SOFTWARE_LOOPBACK == adapter->IfType)
        {
            continue;
        }

        // Parse all IPv4 and IPv6 addresses
        for (IP_ADAPTER_UNICAST_ADDRESS* address = adapter->FirstUnicastAddress;
             NULL != address;
             address = address->Next)
        {
            auto family = address->Address.lpSockaddr->sa_family;
            if (AF_INET == family)
            {
                // IPv4
                SOCKADDR_IN* ipv4 = reinterpret_cast<SOCKADDR_IN*>(address->Address.lpSockaddr);
                char str_buffer[INET_ADDRSTRLEN] = {0};
                inet_ntop(AF_INET, &(ipv4->sin_addr), str_buffer, INET_ADDRSTRLEN);
                ipAddrs.mIpv4.push_back(str_buffer);
            }
            else if (AF_INET6 == family)
            {
                // IPv6
                SOCKADDR_IN6* ipv6 = reinterpret_cast<SOCKADDR_IN6*>(address->Address.lpSockaddr);
                char str_buffer[INET6_ADDRSTRLEN] = {0};
                inet_ntop(AF_INET6, &(ipv6->sin6_addr), str_buffer, INET6_ADDRSTRLEN);
                std::string ipv6_str(str_buffer);

                // Detect and skip non-external addresses
                bool is_link_local(false);
                bool is_special_use(false);
                if (0 == ipv6_str.find("fe"))
                {
                    char c = ipv6_str[2];
                    if (c == '8' || c == '9' || c == 'a' || c == 'b')
                    {
                        is_link_local = true;
                    }
                }
                else if (0 == ipv6_str.find("2001:0:"))
                {
                    is_special_use = true;
                }
                if (! (is_link_local || is_special_use))
                {
                    ipAddrs.mIpv6.push_back(ipv6_str);
                }
            }
            else
            {
                // Skip all other types of addresses
                continue;
            }
        }
    }

    // Cleanup
    free(adapter_addresses);
    adapter_addresses = NULL;
    // Cheers!
}
A: You can use gethostname followed by gethostbyname to get your local interface's internal IP. This returned IP may be different from your external IP, though. To get your external IP you would have to communicate with an external server that will tell you what your external IP is, because the external IP is not yours but your router's.
//Example: b1 == 192, b2 == 168, b3 == 0, b4 == 100
struct IPv4
{
    unsigned char b1, b2, b3, b4;
};

bool getMyIP(IPv4 & myIP)
{
    char szBuffer[1024];

    #ifdef WIN32
    WSADATA wsaData;
    WORD wVersionRequested = MAKEWORD(2, 0);
    if(::WSAStartup(wVersionRequested, &wsaData) != 0)
        return false;
    #endif

    if(gethostname(szBuffer, sizeof(szBuffer)) == SOCKET_ERROR)
    {
        #ifdef WIN32
        WSACleanup();
        #endif
        return false;
    }

    struct hostent *host = gethostbyname(szBuffer);
    if(host == NULL)
    {
        #ifdef WIN32
        WSACleanup();
        #endif
        return false;
    }

    //Obtain the computer's IP
    myIP.b1 = ((struct in_addr *)(host->h_addr))->S_un.S_un_b.s_b1;
    myIP.b2 = ((struct in_addr *)(host->h_addr))->S_un.S_un_b.s_b2;
    myIP.b3 = ((struct in_addr *)(host->h_addr))->S_un.S_un_b.s_b3;
    myIP.b4 = ((struct in_addr *)(host->h_addr))->S_un.S_un_b.s_b4;

    #ifdef WIN32
    WSACleanup();
    #endif
    return true;
}
You can also always just use 127.0.0.1, which represents the local machine.
Subnet mask in Windows: You can get the subnet mask (and gateway and other info) by querying subkeys of this registry entry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces
Look for the registry value SubnetMask.
Other methods to get interface information in Windows: You could also retrieve the information you're looking for by using WSAIoctl with this option: SIO_GET_INTERFACE_LIST
A: You cannot do that in Standard C++. I'm posting this because it is the only correct answer. Your question asks how to do it in C++. Well, you can't do it in C++. You can do it in Windows, POSIX, Linux, Android, but all those are OS-specific solutions and not part of the language standard. Standard C++ does not have a networking layer at all. I assume you have the wrong assumption that the C++ Standard defines the same scope of features as other language standards, such as Java's. While Java might have built-in networking (and even a GUI framework) in the language's own standard library, C++ does not. While there are third-party APIs and libraries which can be used by a C++ program, this is in no way the same as saying that you can do it in C++. Here is an example to clarify what I mean. You can open a file in C++ because it has an fstream class as part of its standard library. This is not the same thing as using CreateFile(), which is a Windows-specific function available only via the WINAPI.
A: from torial: If you use winsock, here's a way: http://tangentsoft.net/wskfaq/examples/ipaddr.html As for the subnet portion of the question: there is no platform-agnostic way to retrieve the subnet mask, as the POSIX socket API (which all modern operating systems implement) does not specify this. So you will have to use whatever method is available on the platform you are using.
A: I suggest my code.
DllExport void get_local_ips(boost::container::vector<wstring>& ips)
{
    IP_ADAPTER_ADDRESSES* adapters = NULL;
    IP_ADAPTER_ADDRESSES* adapter = NULL;
    IP_ADAPTER_UNICAST_ADDRESS* adr = NULL;
    ULONG adapter_size = 0;
    ULONG err = 0;
    SOCKADDR_IN* sockaddr = NULL;

    err = ::GetAdaptersAddresses(AF_UNSPEC,
        GAA_FLAG_SKIP_ANYCAST | GAA_FLAG_SKIP_MULTICAST | GAA_FLAG_SKIP_DNS_SERVER | GAA_FLAG_SKIP_FRIENDLY_NAME,
        NULL, NULL, &adapter_size);
    adapters = (IP_ADAPTER_ADDRESSES*)malloc(adapter_size);
    err = ::GetAdaptersAddresses(AF_UNSPEC,
        GAA_FLAG_SKIP_ANYCAST | GAA_FLAG_SKIP_MULTICAST | GAA_FLAG_SKIP_DNS_SERVER | GAA_FLAG_SKIP_FRIENDLY_NAME,
        NULL, adapters, &adapter_size);

    for (adapter = adapters; NULL != adapter; adapter = adapter->Next)
    {
        if (adapter->IfType == IF_TYPE_SOFTWARE_LOOPBACK) continue; // Skip loopback
        if (adapter->OperStatus != IfOperStatusUp) continue;        // Live connections only

        for (adr = adapter->FirstUnicastAddress; adr != NULL; adr = adr->Next)
        {
            sockaddr = (SOCKADDR_IN*)(adr->Address.lpSockaddr);
            char ipstr[INET6_ADDRSTRLEN] = { 0 };
            wchar_t ipwstr[INET6_ADDRSTRLEN] = { 0 };
            inet_ntop(AF_INET, &(sockaddr->sin_addr), ipstr, INET_ADDRSTRLEN);
            mbstowcs(ipwstr, ipstr, INET6_ADDRSTRLEN);
            wstring wstr(ipwstr);
            if (wstr != L"0.0.0.0")
                ips.push_back(wstr);
        }
    }

    free(adapters);
    adapters = NULL;
}
A: A modified version of this answer. Added headers and libs.
It's also based on these pages: GetAdaptersAddresses IP_ADAPTER_ADDRESSES_LH IP_ADAPTER_UNICAST_ADDRESS_LH In short, to get an IPv4 address, you call GetAdaptersAddresses() to get the adapters, then run through the IP_ADAPTER_UNICAST_ADDRESS structures starting with FirstUnicastAddress and get the Address field to then convert it to a readable format with inet_ntop(). Prints info in the format: [ADAPTER]: Realtek PCIe [NAME]: Ethernet 3 [IP]: 123.123.123.123 Can be compiled with: cl test.cpp or, if you need to add libs dependencies in the command line: cl test.cpp Iphlpapi.lib ws2_32.lib #include <winsock2.h> #include <iphlpapi.h> #include <stdio.h> #include <ws2tcpip.h> // Link with Iphlpapi.lib and ws2_32.lib #pragma comment(lib, "Iphlpapi.lib") #pragma comment(lib, "ws2_32.lib") void ListIpAddresses() { IP_ADAPTER_ADDRESSES* adapter_addresses(NULL); IP_ADAPTER_ADDRESSES* adapter(NULL); DWORD adapter_addresses_buffer_size = 16 * 1024; // Get adapter addresses for (int attempts = 0; attempts != 3; ++attempts) { adapter_addresses = (IP_ADAPTER_ADDRESSES*) malloc(adapter_addresses_buffer_size); DWORD error = ::GetAdaptersAddresses(AF_UNSPEC, GAA_FLAG_SKIP_ANYCAST | GAA_FLAG_SKIP_MULTICAST | GAA_FLAG_SKIP_DNS_SERVER | GAA_FLAG_SKIP_FRIENDLY_NAME, NULL, adapter_addresses, &adapter_addresses_buffer_size); if (ERROR_SUCCESS == error) { break; } else if (ERROR_BUFFER_OVERFLOW == error) { // Try again with the new size free(adapter_addresses); adapter_addresses = NULL; continue; } else { // Unexpected error code - log and throw free(adapter_addresses); adapter_addresses = NULL; return; } } // Iterate through all of the adapters for (adapter = adapter_addresses; NULL != adapter; adapter = adapter->Next) { // Skip loopback adapters if (IF_TYPE_SOFTWARE_LOOPBACK == adapter->IfType) continue; printf("[ADAPTER]: %S\n", adapter->Description); printf("[NAME]: %S\n", adapter->FriendlyName); // Parse all IPv4 addresses for (IP_ADAPTER_UNICAST_ADDRESS* address = adapter->FirstUnicastAddress; NULL != address; address = address->Next) { auto family = address->Address.lpSockaddr->sa_family; if (AF_INET == family) { SOCKADDR_IN* ipv4 = reinterpret_cast<SOCKADDR_IN*>(address->Address.lpSockaddr); char str_buffer[16] = {0}; inet_ntop(AF_INET, &(ipv4->sin_addr), str_buffer, 16); printf("[IP]: %s\n", str_buffer); } } printf("\n"); } free(adapter_addresses); adapter_addresses = NULL; } int main() { ListIpAddresses(); return 0; } A: I was able to do it using DNS service under VS2013 with the following code: #include <Windns.h> WSADATA wsa_Data; int wsa_ReturnCode = WSAStartup(0x101, &wsa_Data); gethostname(hostName, 256); PDNS_RECORD pDnsRecord; DNS_STATUS statsus = DnsQuery(hostName, DNS_TYPE_A, DNS_QUERY_STANDARD, NULL, &pDnsRecord, NULL); IN_ADDR ipaddr; ipaddr.S_un.S_addr = (pDnsRecord->Data.A.IpAddress); printf("The IP address of the host %s is %s \n", hostName, inet_ntoa(ipaddr)); DnsRecordListFree(&pDnsRecord, DnsFreeRecordList); I had to add Dnsapi.lib as addictional dependency in linker option. Reference here. A: Can't you just send to INADDR_BROADCAST? Admittedly, that'll send on all interfaces - but that's rarely a problem. Otherwise, ioctl and SIOCGIFBRDADDR should get you the address on *nix, and WSAioctl and SIO_GET_BROADCAST_ADDRESS on win32. 
A: In Dev-C++, I used pure C with WIN32, with this piece of code:
case IDC_IP:
    gethostname(szHostName, 255);
    host_entry = gethostbyname(szHostName);
    szLocalIP = inet_ntoa(*(struct in_addr *)*host_entry->h_addr_list);
    //WSACleanup();
    writeInTextBox("\n");
    writeInTextBox("IP: ");
    writeInTextBox(szLocalIP);
    break;
When I click the 'show ip' button, it works. But the second time, the program quits (without warning or error). When I comment out WSACleanup() as above, the program does not quit, even when clicking the same button repeatedly as fast as possible. The likely cause is that WSACleanup() tears down Winsock after the first click, so the next gethostbyname call fails and the NULL host_entry gets dereferenced; call WSACleanup() once at program exit rather than in the button handler.
{ "language": "en", "url": "https://stackoverflow.com/questions/122208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "59" }
Q: How can I mock/fake/stub sealed OracleException with no public constructor? In my tests I need to test what happens when an OracleException is thrown (due to a stored procedure failure). I am trying to setup Rhino Mocks to Expect.Call(....).Throw(new OracleException()); For whatever reason however, OracleException seems to be sealed with no public constructor. What can I do to test this? Edit: Here is exactly what I'm trying to instantiate: public sealed class OracleException : DbException { private OracleException(string message, int code) { ...} } A: For oracle's managed data access (v 4.121.1.0) the constructor changed again var ci = typeof(OracleException).GetConstructor(BindingFlags.NonPublic | BindingFlags.Instance, null, new Type[] { typeof(int), typeof(string), typeof(string), typeof(string) }, null); var c = (OracleException)ci.Invoke(new object[] { 1234, "", "", "" }); A: Here is how you do it: ConstructorInfo ci = typeof(OracleException).GetConstructor(BindingFlags.NonPublic | BindingFlags.Instance, null, new Type[] {typeof(string), typeof(int)}, null); var c = (OracleException)ci.Invoke(new object[] { "some message", 123 }); Thanks to all that helped, you have been upvoted A: Use reflection to instantiate OracleException. See this blog post A: I'm using the Oracle.DataAccess.Client data provider client. I am having trouble constructing a new instance of an OracleException object, but it keeps telling me that there are no public constructors. I tried all of the ideas shown above and keep getting a null reference exception. object[] args = { 1, "Test Message" }; ConstructorInfo ci = typeof(OracleException).GetConstructor(BindingFlags.NonPublic | BindingFlags.Instance, null, System.Type.GetTypeArray(args), null); var e = (OracleException)ci.Invoke(args); When debugging the test code, I always get a NULL value for 'ci'. Has Oracle changed the library to not allow this? What am I doing wrong and what do I need to do to instantiate an OracleException object to use with NMock? By the way, I'm using the Client library for version 10g. Thanks, Charlie A: It seems that Oracle changed their constructors in later versions, therefore the solution above will not work. If you only want to set the error code, the following will do the trick for 2.111.7.20: ConstructorInfo ci = typeof(OracleException) .GetConstructor( BindingFlags.NonPublic | BindingFlags.Instance, null, new Type[] { typeof(int) }, null ); Exception ex = (OracleException)ci.Invoke(new object[] { 3113 }); A: Use reflection to instantiate the OracleException object? Replace new OracleException() with object[] args = ... ; (OracleException)Activator.CreateInstance(typeof(OracleException), args) A: You can always get all the constructors like this ConstructorInfo[] all = typeof(OracleException).GetConstructors( BindingFlags.NonPublic | BindingFlags.Instance);` For Oracle.DataAccess 4.112.3.0 this returned 7 constructors The one I wanted was the second one in the list which took 5 arguments, int, string, string, string, int. 
I was surprised by the fifth argument because in ILSpy it looked like this internal OracleException(int errCode, string dataSrc, string procedure, string errMsg) { this.m_errors = new OracleErrorCollection(); this.m_errors.Add(new OracleError(errCode, dataSrc, procedure, errMsg)); } So, to get the constructor I wanted I ended up using ConstructorInfo constructorInfo = typeof(OracleException).GetConstructor( BindingFlags.NonPublic | BindingFlags.Instance, null, new Type[] { typeof(int), typeof(string), typeof(string), typeof(string), typeof(int) }, null);` A: Good solution George. This also works for SqlException too: ConstructorInfo ci = typeof( SqlErrorCollection ).GetConstructor( BindingFlags.NonPublic | BindingFlags.Instance, null, new Type[] { }, null ); SqlErrorCollection errorCollection = (SqlErrorCollection) ci.Invoke(new object[]{}); ci = typeof( SqlException ).GetConstructor( BindingFlags.NonPublic | BindingFlags.Instance, null, new Type[] { typeof( string ), typeof( SqlErrorCollection ) }, null ); return (SqlException) ci.Invoke( new object[] { "some message", errorCollection } ); -dave A: Can you write a trivial stored procedure that fails/errors each time, then use that to test?
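Tying this back to the original Rhino Mocks goal, a sketch along these lines should work once you have matched the constructor signature of your ODP.NET version (IStoredProcRunner is a made-up interface standing in for whatever your code under test calls):
ConstructorInfo ci = typeof(OracleException).GetConstructor(
    BindingFlags.NonPublic | BindingFlags.Instance,
    null, new Type[] { typeof(string), typeof(int) }, null);
OracleException oracleEx =
    (OracleException)ci.Invoke(new object[] { "ORA-20000: boom", 20000 });

MockRepository mocks = new MockRepository();
IStoredProcRunner runner = mocks.StrictMock<IStoredProcRunner>();
Expect.Call(runner.RunProcedure()).Throw(oracleEx);
mocks.ReplayAll();
// exercise the code under test and assert that it handles the OracleException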
{ "language": "en", "url": "https://stackoverflow.com/questions/122215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: PHP unserialize keeps throwing same error over 100 times part 2 So I have a large 2d array that I serialize, but when I attempt to unserialize the array it just throws the same error to the point of nearly crashing Firefox. The error is:
Warning: unserialize() [function.unserialize]: Node no longer exists in /var/www/dev/wc_paul/inc/analyzerTester.php on line 24
I would include the entire serialized array that I echo out, but last time I tried that on this form it crashed my Firefox. Does anyone have any idea why this might be happening? I'm sure this is an array. However, it was originally an XML response from another server that I then pulled values from to build the array. If it can't be serialized I can accept that, I guess... but how should I go about saving it then?
A: Usually, when you get an error message, you can figure out a great deal by simply searching the web for that very message. For example, when you put Node no longer exists into Google, you end up with a concise explanation of why this is happening, along with a solution, as the very first hit.
A: To answer your second question about how else you could save the data: why not output the XML response directly to a file and save it locally, then read from the local file when required?
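The usual culprit behind "Node no longer exists" is that the "values" pulled from the response are still SimpleXMLElement objects, which cannot survive serialize()/unserialize(); casting them to plain types while building the array avoids the problem entirely. A sketch (paths and field names are made up):
// when building the 2d array from the XML response, cast each value
$row[] = (string) $node->someField;   // don't store the SimpleXMLElement itself
$row[] = (int) $node->someCount;

// alternatively, cache the raw XML to a file and re-parse on demand
file_put_contents('/var/www/dev/cache/response.xml', $rawXml);
$xml = simplexml_load_file('/var/www/dev/cache/response.xml');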
{ "language": "en", "url": "https://stackoverflow.com/questions/122216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: Invalid file descriptor problem with Git on Windows I've been using Git on Linux for about a year, and everything works fine. Recently, a colleague joined development, and he's using Windows. Everything works fine there as well, but sometimes when he tries to push changes to a remote repository (bare) on a Linux server, it bails out with an 'Invalid file descriptor' message. I update the same remote repository using Linux git without any problems. We tried WinGit 0.2 and MSysGit (downloaded today, uses Git 1.5.6). Both have the same problem. I should mention that the network is working without any problems. I can clone the whole repository again from scratch. I just cannot push any changes to it. Has anyone seen something like this before?
A: I'm not a git user so this is a complete guess: has the TCP connection been broken? Try capturing network traffic with Wireshark.
A: Maybe you have a problem with your antivirus. I had the same problem on my machine; I was (and still am) using NOD32. Just disable the threat protection module IMON; that could fix the problem.
A: git on Win32 is known to be iffy. Have you tried the latest msysgit? It's a port of 1.6.0.2 (released September 23rd). Also, is there any way you could get a more verbose / trace output from the failing git command?
{ "language": "en", "url": "https://stackoverflow.com/questions/122226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: .NET can't find the cookie that I created in classic ASP in some cases I'm the only developer supporting a website that's a mix of classic ASP and .NET. I had to add some .NET pages to a classic ASP application. This application requires users to log in. The login page, written in classic ASP, creates a cookie that the .NET pages use to identify the logged-in user, and stores information in session vars for the other classic ASP pages to use. In classic ASP the cookie code is as follows:
response.cookies("foo")("value1") = value1
response.cookies("foo")("value2") = value2
response.cookies("foo").Expires = DateAdd("N", 15, Now())
response.cookies("foo").Path = "/"
In the .NET code-behind Page_Load code, I check for the cookie using the code below.
if(!IsPostBack)
{
    if(Request.Cookies["foo"] != null)
    {
        ...
    }
    else
    {
        //redirect to cookie creation page, cookiefoo.asp
    }
}
The vast majority of the time this works with no problems. However, we have some users that get redirected to the cookie creation page because the Page_Load code can't find the cookie. No matter how many times the user is redirected to the cookie creation page, the referring page still can't find the cookie, foo. The problem is happening in IE7, and I've tried modifying the privacy and cookie settings in the browser but can't seem to recreate the problem the user is having. Does anyone have any ideas why this could be happening with IE7? Thanks.
A: Using Fiddler is a good way to try to figure out what's going on. You can see the exact Set-Cookie that the browser is getting.
A: One thing about cookies that can be very hard to debug is that cookies are case-sensitive when it comes to the domain. So, a cookie that was issued for "www.abc.com/dir1/alpha/" cannot be accessed from code if the user typed "www.abc.com/dir1/Alpha/". Another thing to watch out for in your ASP.NET pages is Page.Response.Redirect("~/alpha"); ASP.NET will modify the case of the relative URL based on the case of the actual filepath. If you have mixed cases in your directory structure, then use Page.Response.Redirect(Page.ResolveUrl("~/alpha").ToLower());
A: General rule of cross-platform cookie testing: dump all the cookies to screen so you can see your data. It might not explain exactly why it's happening, but it should tell you what is happening.
A: Okay, second answer: if it's a really small subset of users, you might want to see what software they have installed on their computers. There might be a common anti-spyware/adware application on all their machines that is messing with your cookies. If you can't get answers from those users on that, or there's nothing suspect, it might be worth creating a special little test script to write a cookie and post the result back to you on the next page load. You want to keep it as simple as possible (no user interaction after loading the link), so make it:
* Set the cookie in classic ASP.
* Redirect to another ASP page.
* Read the cookies and email them to you.
* Redirect to an ASP.NET page.
* Read the cookie and email it.
You might want to set a cookie in ASP.NET too and try reading it back out.
A: The code you posted should break if the IE7 privacy settings are set to block first-party cookies. Did you try that in your experiments? Using Fiddler, as Lou said, is a good idea, although the Set-Cookie response header is less interesting than the subsequent Cookie request headers, for seeing whether the client is forwarding them to the server.
A: I recall this happening in a similar application I had written; some pages were classic ASP, while others were ASP.NET. The cookies were seemingly disappearing between the .NET and classic ASP pages. IIRC, the .NET side needed to Server.HTMLEncode/HTMLDecode the cookies. Sorry, I don't have the exact code I used, but this should get you going in the right direction.
A: Decades later (this surely deserves a medal): it turned out for me that it was special characters in the cookie key. Specifically, the case that got my attention was a cookie with the key utlização; it would always come back as null. So we added a failsafe, .Replace("ç", "c").Replace("ã", "a"), when generating the cookie key. And it fixed it. The cookie had always shown in the browser, so it took a while to observe this pattern. Thanks MS.
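A sketch of the decode step the first answer alludes to (whether UrlDecode, HtmlDecode, or no decoding at all is needed depends on how the classic ASP side wrote the values -- treat this as a starting point rather than the original code):
HttpCookie foo = Request.Cookies["foo"];
if (foo != null)
{
    // classic ASP dictionary cookies arrive as subkeys (value1=...&value2=...)
    string value1 = HttpUtility.UrlDecode(foo.Values["value1"]);
    string value2 = HttpUtility.UrlDecode(foo.Values["value2"]);
}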
{ "language": "en", "url": "https://stackoverflow.com/questions/122229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the comparative robustness of C++ Builder versions? Our development team works with Borland C++ Builder 6 and CodeGear C++ Builder 2007 (as well as Visual Studio). I hear a lot of comments that the Builder 2007 IDE crashes a lot more than BCB6 does. Does anyone out there have any experience of the C++ Builder 2009 IDE yet, particularly with a decent-sized application, and if so how does this compare with 6 or 2007 in terms of overall robustness?
A: My experiences with BCB2009 so far have been mostly positive. The IDE seems stable, and installs etc. are much faster. However, I haven't yet moved a major project over to 2009, but I can almost guarantee you it will be non-trivial because of the Unicode changes. You will need to switch to new versions of any third-party libraries/components that you use, as well as sanity-check all string use in your existing code. There are no 'best practice' guidelines for doing this yet, either. Of course, if you were not using the VCL, this wouldn't be a problem - but then, why would you use BCB...?
A: I haven't used C++ Builder 2009, but maybe this will help you. According to Chris Pattinson (QA manager at CodeGear), they made over 4000 fixes in both Delphi and C++ Builder 2009 (see this blog: http://blogs.codegear.com/chrispattinson/2008/09/19/38897). He links to an article on dn.codegear.com which details all of the fixes for C++ Builder; see http://dn.codegear.com/article/38715
{ "language": "en", "url": "https://stackoverflow.com/questions/122234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Handling a colon in an element ID in a CSS selector JSF is setting the ID of an input field to search_form:expression. I need to specify some styling on that element, but that colon looks like the beginning of a pseudo-element to the browser, so it gets marked invalid and ignored. Is there any way to escape the colon or something?
input#search_form:expression {
    ///...
}
A: Using a backslash before the colon doesn't work in many versions of IE (particularly 6 and 7; possibly others). A workaround is to use the hexadecimal code for the colon - which is \3A
example:
input#search_form\3A expression { }
This works in all browsers: including IE6+ (and possibly earlier?), Firefox, Chrome, Opera, etc. It's part of the CSS2 standard.
A: You can escape it with a backslash
input#search_form\:expression {
    ///...
}
From the CSS Spec
4.1.3 Characters and case
The following rules always hold:
All CSS style sheets are case-insensitive, except for parts that are not under the control of CSS. For example, the case-sensitivity of values of the HTML attributes "id" and "class", of font names, and of URIs lies outside the scope of this specification. Note in particular that element names are case-insensitive in HTML, but case-sensitive in XML.
In CSS, identifiers (including element names, classes, and IDs in selectors) can contain only the characters [a-z0-9] and ISO 10646 characters U+00A1 and higher, plus the hyphen (-) and the underscore (_); they cannot start with a digit, or a hyphen followed by a digit. Identifiers can also contain escaped characters and any ISO 10646 character as a numeric code (see next item). For instance, the identifier "B&W?" may be written as "B\&W\?" or "B\26 W\3F". Note that Unicode is code-by-code equivalent to ISO 10646 (see [UNICODE] and [ISO10646]).
In CSS 2.1, a backslash (\) character indicates three types of character escapes. First, inside a string, a backslash followed by a newline is ignored (i.e., the string is deemed not to contain either the backslash or the newline). Second, it cancels the meaning of special CSS characters. Any character (except a hexadecimal digit) can be escaped with a backslash to remove its special meaning. For example, "\"" is a string consisting of one double quote. Style sheet preprocessors must not remove these backslashes from a style sheet since that would change the style sheet's meaning.
Third, backslash escapes allow authors to refer to characters they can't easily put in a document. In this case, the backslash is followed by at most six hexadecimal digits (0..9A..F), which stand for the ISO 10646 ([ISO10646]) character with that number, which must not be zero. (It is undefined in CSS 2.1 what happens if a style sheet does contain a character with Unicode codepoint zero.) If a character in the range [0-9a-f] follows the hexadecimal number, the end of the number needs to be made clear. There are two ways to do that:
with a space (or other whitespace character): "\26 B" ("&B"). In this case, user agents should treat a "CR/LF" pair (U+000D/U+000A) as a single whitespace character.
by providing exactly 6 hexadecimal digits: "\000026B" ("&B")
In fact, these two methods may be combined. Only one whitespace character is ignored after a hexadecimal escape. Note that this means that a "real" space after the escape sequence must itself either be escaped or doubled.
If the number is outside the range allowed by Unicode (e.g., "\110000" is above the maximum 10FFFF allowed in current Unicode), the UA may replace the escape with the "replacement character" (U+FFFD). If the character is to be displayed, the UA should show a visible symbol, such as a "missing character" glyph (cf. 15.2, point 5).
Note: Backslash escapes, where allowed, are always considered to be part of an identifier or a string (i.e., "\7B" is not punctuation, even though "{" is, and "\32" is allowed at the start of a class name, even though "2" is not). The identifier "te\st" is exactly the same identifier as "test".
A: I had the same problem with colons, and I was unable to change them (couldn't access the code outputting the colons), and I wanted to fetch them with CSS3 selectors with jQuery. I put it here because it might be helpful for someone:
input[id="something:something"]
worked fine in jQuery selectors, and it might work in stylesheets as well (might have browser issues).
A: In JSF 2.0, you can specify the separator using the web.xml file as the init-param javax.faces.SEPARATOR_CHAR. Read this:
* Is it possible to change the element id separator in JSF?
A: Backslash:
input#search_form\:expression {
    ///...
}
* See also Using Namespaces with CSS (MSDN)
A: This article will tell you how to escape any character in CSS. Now, there's even a tool for it: http://mothereff.in/css-escapes#0search%5fform%3Aexpression
TL;DR All the other answers to this question are incorrect. You need to escape both the underscore (to prevent IE6 from ignoring the rule altogether in some edge cases) and the colon character for the selector to work properly across different browsers. Technically, the colon character can be escaped as \:, but that doesn't work in IE < 8, so you'll have to use \3a:
#search\_form\3a expression {}
A: I work in an ADF framework and I oftentimes have to use jQuery to select elements. This format works for me. It works in IE8 also.
$('[id*="gantt1::majorAxis"]').css('border-top', 'solid 1px ' + mediumGray);
A: I found only this format worked for me for IE7 (Firefox too), and I use JSF/Icefaces 1.8.2. Say form id=FFF, element id=EEE:
var jq = jQuery.noConflict();
jq(document).ready(function() {
    jq("[id=FFF:EEE]").someJQueryLibFunction({
        // jQuery lib function options go here
    })
});
{ "language": "en", "url": "https://stackoverflow.com/questions/122238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "160" }
Q: Floated Divs Obeying/Not Obeying Vertical-Align Within a table cell that is vertical-align:bottom, I have one or two divs. Each div is floated right. Supposedly, the divs should not align to the bottom, but they do (which I don't understand, but is good). However, when I have two floated divs in the cell, they align themselves to the same top line. I want the first, smaller, div to sit all the way at the bottom. Another acceptable solution is to make it full height of the table cell. It's difficult to explain, so here's the code: <style type="text/css"> table { border-collapse: collapse; } td { border:1px solid black; vertical-align:bottom; } .h { float:right; background: #FFFFCC; } .ha { float:right; background: #FFCCFF; } </style> <table> <tr> <td> <div class="ha">@</div> <div class="h">Title Text<br />Line 2</div> </td> <td> <div class="ha">@</div> <div class="h">Title Text<br />Line 2<br />Line 3</div> </td> <td> <div class="h">Title Text<br />Line 2</div> </td> <td> <div class="h">Title Text<br />Line 2</div> </td> <td> <div class="h">Title Text<br />Line 2</div> </td> </tr> <tr> <td> <div class="d">123456789</div> </td> <td> <div class="d">123456789</div> </td> <td> <div class="d">123456789</div> </td> <td> <div class="d">123456789</div> </td> <td> <div class="d">123456789</div> </td> </tr> </table> Here are the problems: * *Why does the @ sign sit at the same level as the yellow div? *Supposedly vertical-align doesn't apply to block elements (like a floated div). But it does! *How can I make the @ sit at the bottom or make it full height of the table cell? I am testing in IE7 and FF2. Target support is IE6/7, FF2/3. Clarification: The goal is to have the red @ on the bottom line of the table cell, next to the yellow box. Using clear on either div will put them on different lines. Additionally, the cells can have variable lines of text - therefore, line-height will not help. A: I've found this article to be extremely useful in understanding and troubleshooting vertical-align: Understanding vertical-align, or "How (Not) To Vertically Center Content" A: I never answered the first two questions, so feel free to give your answers below. But I did solve the last problem, of how to make it work. I added a containing div to the two divs inside the table cells like so: <table> <tr> <td> <div class="t"> <div class="h">Title Text<br />Line 2</div> <div class="ha">@</div> </div> </td> Then I used the following CSS <style type="text/css"> table { border-collapse: collapse; } td { border:1px solid black; vertical-align:bottom; } .t { position: relative; width:150px; } .h { background: #FFFFCC; width:135px; margin-right:15px; text-align:right; } .ha { background: #FFCCFF; width:15px; height:18px; position:absolute; right:0px; bottom:0px; } </style> The key to it all is that for a div to be positioned absolutely relative to its parent, the parent must be declared position:relative A: Add clear: both to the second element. If you want the @ to be below the yellow box, put it last in the HTML code. A: If you don't want both divs on the same line then don't float them both right. If you put the @ below the text in the markup and then set it to clear the float, it would put it below the text. A: http://www.w3.org/TR/CSS2/visudet.html#line-height This property affects the vertical positioning inside a line box of the boxes generated by an inline-level element.
The following values only have meaning with respect to a parent inline-level element, or to a parent block-level element, if that element generates anonymous inline boxes; they have no effect if no such parent exists. There is always confusion about the vertical-align property in CSS, because in most cases it doesn't do what you expect it to do. This is because it isn't the same as valign, which is allowable in many HTML 4 tags. For further information, you can check out: http://www.ibloomstudios.com/articles/vertical-align_misuse/ http://www.ibloomstudios.com/articles/applied_css_vertical-align/ The link which David Alpert posted is incredibly useful in this matter.
{ "language": "en", "url": "https://stackoverflow.com/questions/122239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: VS2005 "Add New Item..." dialog - default item After installing a third-party SDK, it very discourteously makes one if its templates the default item in "Add New Item..." dialog in Visual Studio 2005. This is also the case for all other similar dialogs - "Add Class...", "Add User Control..." etc. Is there a way to change this behavior? A: You may have to manually modify the SortOrder on the Item templates yourself. You can do this by following these directions: 1) Find the Item Template(s) Item Templates for VS2005 are stored in the following locations: (Installed Templates) <VisualStudioInstallDir>\Common7\IDE\ItemTemplates\Language\Locale\ (Custom Templates) My Documents\Visual Studio 2005\Templates\ItemTemplates\Language\ 2) Open the template zip file to modify the .vstemplate file. Each Item Template is stored in a .zip file, so you will need to open the zip file that pertains to the template you want to modify. Open the template's .vstemplate file and find the SortOrder property under the TemplateData section. The following is a sample file: <TemplateData> <Name>SomeITem</Name> <Description>Description</Description> <ProjectType>>CSharp</ProjectType> <SortOrder>1000</SortOrder> <DefaultName></DefaultName> <ProvideDefaultName>true</ProvideDefaultName> </TemplateData> Modify the SortOrder value using the following rules: * *The default value is 100, and all values must be multiples of 10. *The SortOrder element is ignored for user-created templates. All user-created templates are sorted alphabetically. *Templates that have low sort order values appear in either the New Project or New Add Item dialog box before templates that have high sort order values. Once you've made edits to the template definitions you'll need to open a command prompt and navigate to the directory that contains devenv.exe, and type "devenv /setup". This presumably rebuilds some internal settings and until you do this you won't see any difference. A: I've just noticed this file on my PC: C:\Program Files\Microsoft Visual Studio 8\VC\VCNewItems\NewItems.vsdir It's a text file, so you could check if the offending third-party stuff is in there. A: Try looking at the registry under HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\ I see some relevant entries on my machine under HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0 for VS2008.
{ "language": "en", "url": "https://stackoverflow.com/questions/122253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: accessing constants in JSP (without scriptlet) I have a class that defines the names of various session attributes, e.g. class Constants { public static final String ATTR_CURRENT_USER = "current.user"; } I would like to use these constants within a JSP to test for the presence of these attributes, something like: <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %> <%@ page import="com.example.Constants" %> <c:if test="${sessionScope[Constants.ATTR_CURRENT_USER] eq null}"> <%-- Do something --%> </c:if> But I can't seem to get the syntax correct. Also, to avoid repeating the rather lengthy tests above in multiple places, I'd like to assign the result to a local (page-scoped) variable, and refer to that instead. I believe I can do this with <c:set>, but again I'm struggling to find the correct syntax. UPDATE: Further to the suggestion below, I tried: <c:set var="nullUser" scope="session" value="${sessionScope[Constants.ATTR_CURRENT_USER] eq null}" /> which didn't work. So instead, I tried substituting the literal value of the constant. I also added the constant to the content of the page, so I could verify the constant's value when the page is being rendered <c:set var="nullUser" scope="session" value="${sessionScope['current.user'] eq null}" /> <%= "Constant value: " + WebHelper.ATTR_CURRENT_PARTNER %> This worked fine and it printed the expected value "current.user" on the page. I'm at a loss to explain why using the String literal works, but a reference to the constant doesn't, when the two appear to have the same value. Help..... A: You can define Constants.ATTR_CURRENT_USER as a variable with c:set, just as below: <c:set var="ATTR_CURRENT_USER" value="<%=Constants.ATTR_CURRENT_USER%>" /> <c:if test="${sessionScope[ATTR_CURRENT_USER] eq null}"> <%-- Do something --%> </c:if> A: the topic is quite old, but anyway..:) I found a nice solution to have Constants available through JSTL. You should prepare a map using reflection and put it wherever you want. The map will always contain all the constants you define in the Constants class. You can put it into ServletContext using a listener and enjoy constants in JSTL like: ${CONSTANTS["CONSTANT_NAME_IN_JAVA_CLASS_AS_A_STRING"]} CONSTANTS here is the key you used when putting the map into the context :-) The following is a piece of my code building a map of the constant fields: Map<String, Object> map = new HashMap<String, Object>(); Class c = Constants.class; Field[] fields = c.getDeclaredFields(); for (Field field : fields) { int modifier = field.getModifiers(); if (Modifier.isPublic(modifier) && Modifier.isStatic(modifier) && Modifier.isFinal(modifier)) { try { map.put(field.getName(), field.get(null));//Obj param of get method is ignored for static fields } catch (IllegalAccessException e) { /* ignorable due to modifiers check */ } } } A: It's not working in your example because the ATTR_CURRENT_USER constant is not visible to the JSTL tags, which expect properties to be exposed by getter functions. I haven't tried it, but the cleanest way to expose your constants appears to be the unstandard tag library. ETA: Old link I gave didn't work.
New links can be found in this answer: Java constants in JSP Code snippets to clarify the behavior you're seeing: Sample class: package com.example; public class Constants { // attribute, visible to the scriptlet public static final String ATTR_CURRENT_USER = "current.user"; // getter function; // name modified to make it clear, later on, // that I am calling this function // and not accessing the constant public String getATTR_CURRENT_USER_FUNC() { return ATTR_CURRENT_USER; } } Snippet of the JSP page, showing sample usage: <%-- Set up the current user --%> <% session.setAttribute("current.user", "Me"); %> <%-- scriptlets --%> <%@ page import="com.example.Constants" %> <h1>Using scriptlets</h1> <h3>Constants.ATTR_CURRENT_USER</h3> <%=Constants.ATTR_CURRENT_USER%> <br /> <h3>Session[Constants.ATTR_CURRENT_USER]</h3> <%=session.getAttribute(Constants.ATTR_CURRENT_USER)%> <%-- JSTL --%> <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %> <jsp:useBean id="cons" class="com.example.Constants" scope="session"/> <h1>Using JSTL</h1> <h3>Constants.getATTR_CURRENT_USER_FUNC()</h3> <c:out value="${cons.ATTR_CURRENT_USER_FUNC}"/> <h3>Session[Constants.getATTR_CURRENT_USER_FUNC()]</h3> <c:out value="${sessionScope[cons.ATTR_CURRENT_USER_FUNC]}"/> <h3>Constants.ATTR_CURRENT_USER</h3> <c:out value="${sessionScope[Constants.ATTR_CURRENT_USER]}"/> <%-- Commented out, because otherwise will error: The class 'com.example.Constants' does not have the property 'ATTR_CURRENT_USER'. <h3>cons.ATTR_CURRENT_USER</h3> <c:out value="${sessionScope[cons.ATTR_CURRENT_USER]}"/> --%> <hr /> This outputs: Using scriptlets Constants.ATTR_CURRENT_USER current.user Session[Constants.ATTR_CURRENT_USER] Me Using JSTL Constants.getATTR_CURRENT_USER_FUNC() current.user Session[Constants.getATTR_CURRENT_USER_FUNC()] Me Constants.ATTR_CURRENT_USER A: Plug in a custom EL Resolver into the EL resolver chain, which will resolve the constants. An EL Resolver is a Java class extending the javax.el.ELResolver class. Thanks, A: Static properties aren't accessible in EL. The workaround I use is to create a non-static variable which assigns itself to the static value. public final static String MANAGER_ROLE = "manager"; public String manager_role = MANAGER_ROLE; I use lombok to generate the getter and setter so that's pretty well it. Your EL looks like this: ${bean.manager_role} Full code at https://rogerkeays.com/access-java-static-methods-and-constants-from-el A: I am late to the discussion, but my approach is a little different. I use a custom tag handler to give JSP pages the constant values (numeric or string) they need.
Here is how I did it: Suppose I have a class that keeps all the constants: public class AppJspConstants implements Serializable { public static final int MAXLENGTH_SIGNON_ID = 100; public static final int MAXLENGTH_PASSWORD = 100; public static final int MAXLENGTH_FULLNAME = 30; public static final int MAXLENGTH_PHONENUMBER = 30; public static final int MAXLENGTH_EXTENSION = 10; public static final int MAXLENGTH_EMAIL = 235; } I also have this extremely simple custom tag: public class JspFieldAttributes extends SimpleTagSupport { public void doTag() throws JspException, IOException { getJspContext().setAttribute("maxlength_signon_id", AppJspConstants.MAXLENGTH_SIGNON_ID); getJspContext().setAttribute("maxlength_password", AppJspConstants.MAXLENGTH_PASSWORD); getJspContext().setAttribute("maxlength_fullname", AppJspConstants.MAXLENGTH_FULLNAME); getJspContext().setAttribute("maxlength_phonenumber", AppJspConstants.MAXLENGTH_PHONENUMBER); getJspContext().setAttribute("maxlength_extension", AppJspConstants.MAXLENGTH_EXTENSION); getJspContext().setAttribute("maxlength_email", AppJspConstants.MAXLENGTH_EMAIL); getJspBody().invoke(null); } } Then I have a StringHelper.tld. Inside, I have this: <tag> <name>fieldAttributes</name> <tag-class>package.path.JspFieldAttributes</tag-class> <body-content>scriptless</body-content> <info>This tag provides HTML field attributes that CSS is unable to do.</info> </tag> On the JSP, I include the StringHelper.tld the normal way: <%@ taglib uri="/WEB-INF/tags/StringHelper.tld" prefix="stringHelper" %> Finally, I use the tag and apply the needed values using EL. <stringHelper:fieldAttributes> [snip] <form:input path="emailAddress" cssClass="formeffect" cssErrorClass="formEffect error" maxlength="${maxlength_email}"/>&nbsp; <form:errors path="emailAddress" cssClass="error" element="span"/> [snip] </stringHelper:fieldAttributes> A: First, your syntax had an extra "]" which was causing an error. To fix that, and to set a variable, you would do this: <c:set var="nullUser" scope="session" value="${sessionScope[Constants.ATTR_CURRENT_USER] eq null}" /> <c:if test="${nullUser}"> <h2>First Test</h2> </c:if> <c:if test="${nullUser}"> <h2>Another Test</h2> </c:if>
{ "language": "en", "url": "https://stackoverflow.com/questions/122254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: IMAP: how to move a message from one folder to another (using the IMAP commands, not with the assistance of any other mail package) A: I guess you COPY the message to the new folder and then delete (EXPUNGE) it in the old one. See RFC 3501. Hint: there's no DELETE command that does what you mean, you have to flag the message as deleted and then EXPUNGE the mailbox. Have a look at the RFC. Be careful with DELETE, as it deletes whole mailboxes, not single mails. A: There are multiple ways to do that. The best one is the UID MOVE command defined in RFC 6851 from early 2013: C: a UID MOVE 42:69 foo S: * OK [COPYUID 432432 42:69 1202:1229] S: * 22 EXPUNGE S: (more expunges) S: a OK Done Presence of this extension is indicated by the MOVE capability. If it isn't available, but UIDPLUS (RFC 4315) is, the second best option is to use the combination of UID STORE, UID COPY and UID EXPUNGE: C: a01 UID COPY 42:69 foo S: a01 OK [COPYUID 432432 42:69 1202:1229] Copied C: a02 UID STORE 42:69 +FLAGS.SILENT (\Deleted) S: a02 OK Stored C: a03 UID EXPUNGE 42:69 S: * 10 EXPUNGE S: * 10 EXPUNGE S: * 10 EXPUNGE S: a03 Expunged If UIDPLUS is missing, there is nothing reasonable that you can do -- the EXPUNGE command permanently removes all messages which are marked for deletion, including those which you have not touched. The best thing is to just use the UID COPY and UID STORE in that case. A: I'm not sure how well-versed you are in imap-speak, but basically after login, "SELECT" the source mailbox, "COPY" the messages, and "EXPUNGE" the messages (or "DELETE" the old mailbox if it is empty now :-). a login a s b select source c copy 1 othermbox d store 1 +flags (\Deleted) e expunge would be an example of messages to send. (Note: imap messages require a unique prefix before each command, thus the "a b c" in front) See RFC 2060 for details. A: If you have the uid of the email which is going to be moved. import imaplib obj = imaplib.IMAP4_SSL('imap.gmail.com', 993) obj.login('username', 'password') obj.select(src_folder_name) apply_lbl_msg = obj.uid('COPY', msg_uid, desti_folder_name) if apply_lbl_msg[0] == 'OK': mov, data = obj.uid('STORE', msg_uid, '+FLAGS', '(\Deleted)') obj.expunge() Where msg_uid is the uid of the mail.
{ "language": "en", "url": "https://stackoverflow.com/questions/122267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: WPF hosting windows forms elements inside a ScrollViewer When putting a ScrollViewer inside a window (not keeping all the window's size), inside the ScrollViewer there's (with other stuff) a WinFormsHost and a control inside (let's say a DateTimePicker). When scrolling, the inner WinForms control keeps being visible when there's no longer a reason (it's outside of the scrolling region), so it "floats" above what's outside of the ScrollViewer. Any solutions for that? A: According to this msdn link, WindowsFormsHost elements are always drawn on top of other WPF elements, and they are unaffected by z-order. I don't think there's an easy solution. You might want to consider having the Windows Forms control handle the scrolling itself instead of using WPF's ScrollViewer. A: Yes, there's a solution. Use the following custom WindowsFormsHost class. class WindowsFormsHostEx : WindowsFormsHost { private PresentationSource _presentationSource; public WindowsFormsHostEx() { PresentationSource.AddSourceChangedHandler(this, SourceChangedEventHandler); } protected override void OnWindowPositionChanged(Rect rcBoundingBox) { base.OnWindowPositionChanged(rcBoundingBox); if (ParentScrollViewer == null) return; // detach before attaching so repeated position changes don't stack up duplicate handlers ParentScrollViewer.ScrollChanged -= ParentScrollViewer_ScrollChanged; ParentScrollViewer.ScrollChanged += ParentScrollViewer_ScrollChanged; ParentScrollViewer.SizeChanged -= ParentScrollViewer_SizeChanged; ParentScrollViewer.SizeChanged += ParentScrollViewer_SizeChanged; ParentScrollViewer.Loaded -= ParentScrollViewer_Loaded; ParentScrollViewer.Loaded += ParentScrollViewer_Loaded; if (Scrolling || Resizing) { GeneralTransform tr = RootVisual.TransformToDescendant(ParentScrollViewer); var scrollRect = new Rect(new Size(ParentScrollViewer.ViewportWidth, ParentScrollViewer.ViewportHeight)); var intersect = Rect.Intersect(scrollRect, tr.TransformBounds(rcBoundingBox)); if (!intersect.IsEmpty) { tr = ParentScrollViewer.TransformToDescendant(this); intersect = tr.TransformBounds(intersect); } else intersect = new Rect(); int x1 = (int)Math.Round(intersect.Left); int y1 = (int)Math.Round(intersect.Top); int x2 = (int)Math.Round(intersect.Right); int y2 = (int)Math.Round(intersect.Bottom); SetRegion(x1, y1, x2, y2); this.Scrolling = false; this.Resizing = false; } } private void ParentScrollViewer_Loaded(object sender, RoutedEventArgs e) { this.Resizing = true; } private void ParentScrollViewer_SizeChanged(object sender, SizeChangedEventArgs e) { this.Resizing = true; } private void ParentScrollViewer_ScrollChanged(object sender, ScrollChangedEventArgs e) { if (e.VerticalChange != 0 || e.HorizontalChange != 0 || e.ExtentHeightChange != 0 || e.ExtentWidthChange != 0) Scrolling = true; } protected override void Dispose(bool disposing) { base.Dispose(disposing); if (disposing) PresentationSource.RemoveSourceChangedHandler(this, SourceChangedEventHandler); } private void SourceChangedEventHandler(Object sender, SourceChangedEventArgs e) { ParentScrollViewer = FindParentScrollViewer(); } private ScrollViewer FindParentScrollViewer() { DependencyObject vParent = this; ScrollViewer parentScroll = null; while (vParent != null) { parentScroll = vParent as ScrollViewer; if (parentScroll != null) break; vParent = LogicalTreeHelper.GetParent(vParent); } return parentScroll; } private void SetRegion(int x1, int y1, int x2, int y2) { SetWindowRgn(Handle, CreateRectRgn(x1, y1, x2, y2), true); } private Visual RootVisual { get { _presentationSource = PresentationSource.FromVisual(this); return _presentationSource.RootVisual; } } private ScrollViewer ParentScrollViewer { get; set; } private bool Scrolling { get; set; } private bool Resizing { get; set; } [DllImport("User32.dll", SetLastError = true)] static extern
int SetWindowRgn(IntPtr hWnd, IntPtr hRgn, bool bRedraw); [DllImport("gdi32.dll")] static extern IntPtr CreateRectRgn(int nLeftRect, int nTopRect, int nRightRect, int nBottomRect); } A: Just add the VerticalScrollBarVisibility="Auto" property to the ScrollViewer control and set Height to your max height. That's all.
{ "language": "en", "url": "https://stackoverflow.com/questions/122271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Constructor parameters for controllers without a DI container for ASP.NET MVC Does anyone have any code examples on how to create controllers that have parameters other than using a Dependency Injection Container? I see plenty of samples with using containers like StructureMap, but nothing if you wanted to pass in the dependency class yourself. A: One way is to create a ControllerFactory: public class MyControllerFactory : DefaultControllerFactory { public override IController CreateController( RequestContext requestContext, string controllerName) { return [construct your controller here] ; } } Then, in Global.asax.cs: private void Application_Start(object sender, EventArgs e) { RegisterRoutes(RouteTable.Routes); ControllerBuilder.Current.SetControllerFactory( new MyNamespace.MyControllerFactory()); } A: You can use poor-man's dependency injection: public ProductController() : this( new Foo() ) { //the framework calls this } public ProductController(IFoo foo) { _foo = foo; } A: You can create an IModelBinder that spins up an instance from a factory - or, yes, the container. =)
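To make the factory answer concrete, here is a minimal sketch of a hand-wired CreateController override; ProductController, IProductRepository and SqlProductRepository are hypothetical names invented for illustration, not part of the original answers:

public class MyControllerFactory : DefaultControllerFactory
{
    public override IController CreateController(
        RequestContext requestContext, string controllerName)
    {
        if (controllerName == "Product")
        {
            // Build the dependency by hand and pass it to the
            // controller's parameterized constructor.
            IProductRepository repository = new SqlProductRepository();
            return new ProductController(repository);
        }

        // Anything else falls back to the default parameterless construction.
        return base.CreateController(requestContext, controllerName);
    }
}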
{ "language": "en", "url": "https://stackoverflow.com/questions/122273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Quickly check whether sudo permissions are available I want to be able to quickly check whether I both have sudo access and my password is already authenticated. I'm not worried about having sudo access specifically for the operation I'm about to perform, but that would be a nice bonus. Specifically what I'm trying to use this for is a script that I want to be runnable by a range of users. Some have sudo access. All know the root password. When they run my script, I want it to use sudo permissions without prompting for a password if that is possible, and otherwise to fall back to asking for the root password (because they might not have sudo access). My first non-working attempt was to fork off sudo -S true with STDIN closed or reading from /dev/null. But that still prompts for the password and waits a couple of seconds. I've tried several other things, including waiting 0.3sec to see whether it succeeded immediately, but everything I try ends up failing in some situation. (And not because my timeout is too short.) It's difficult to figure out what goes on, because I can't just strace like I normally would. One thing I know doesn't work is to close STDIN or attach it to a pipe before running sudo -S true. I was hoping that would make the password prompt immediately fail, but it still prompts and behaves strangely. I think it might want a terminal. A: With newer versions of sudo there's an option for this purpose: sudo -n true I use true here for a no-op, but you could use any command. A: I don't know what the ultimate reason is for needing to do this, but I think maybe you might need to rethink whatever the core reason is. Trying to do complicated, unusual things with permissions frequently leads to security holes. More specifically, whatever it is the script is trying to do might be better done with setuid instead of sudo. (I'd also have to wonder why so many people have the root password. Sudo is there specifically to avoid giving people the root password.) A: Running sudo -S true < /dev/null &>/dev/null seems to work, although it delays for a second before failing. A: getent group admin | grep $particular_user You could use whoami to get the current user. Edit: But that doesn't help you find if you're still authed to do sudo tasks... Hmm..
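As a sketch of the fallback flow the question describes, assuming a sudo new enough to support -n; do_the_work is a hypothetical placeholder for the real command:

#!/bin/sh
# Try cached/passwordless sudo first; fall back to su otherwise.
if sudo -n true 2>/dev/null; then
    sudo do_the_work
else
    echo "No usable sudo credentials; falling back to su." >&2
    su root -c do_the_work
fi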
{ "language": "en", "url": "https://stackoverflow.com/questions/122276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do you translate this regular-expression idiom from Perl into Python? I switched from Perl to Python about a year ago and haven't looked back. There is only one idiom that I've ever found I can do more easily in Perl than in Python: if ($var =~ /foo(.+)/) { # do something with $1 } elsif ($var =~ /bar(.+)/) { # do something with $1 } elsif ($var =~ /baz(.+)/) { # do something with $1 } The corresponding Python code is not so elegant since the if statements keep getting nested: m = re.search(r'foo(.+)', var) if m: # do something with m.group(1) else: m = re.search(r'bar(.+)', var) if m: # do something with m.group(1) else: m = re.search(r'baz(.+)', var) if m: # do something with m.group(1) Does anyone have an elegant way to reproduce this pattern in Python? I've seen anonymous function dispatch tables used, but those seem kind of unwieldy to me for a small number of regular expressions... A: Yeah, it's kind of annoying. Perhaps this will work for your case. import re class ReCheck(object): def __init__(self): self.result = None def check(self, pattern, text): self.result = re.search(pattern, text) return self.result var = 'bar stuff' m = ReCheck() if m.check(r'foo(.+)',var): print m.result.group(1) elif m.check(r'bar(.+)',var): print m.result.group(1) elif m.check(r'baz(.+)',var): print m.result.group(1) EDIT: Brian correctly pointed out that my first attempt did not work. Unfortunately, this attempt is longer. A: I'd suggest this, as it uses the least regex to accomplish your goal. It is still functional code, but no worse than your old Perl. import re var = "barbazfoo" m = re.search(r'(foo|bar|baz)(.+)', var) if m.group(1) == 'foo': print m.group(1) # do something with m.group(1) elif m.group(1) == "bar": print m.group(1) # do something with m.group(1) elif m.group(1) == "baz": print m.group(2) # do something with m.group(2) A: Starting with Python 3.8 and the introduction of assignment expressions (PEP 572) (the := operator), we can now capture the condition value re.search(pattern, text) in a variable match in order to both check if it's not None and then re-use it within the body of the condition: if match := re.search(r'foo(.+)', text): # do something with match.group(1) elif match := re.search(r'bar(.+)', text): # do something with match.group(1) elif match := re.search(r'baz(.+)', text): # do something with match.group(1) A: With thanks to this other SO question: import re class DataHolder: def __init__(self, value=None, attr_name='value'): self._attr_name = attr_name self.set(value) def __call__(self, value): return self.set(value) def set(self, value): setattr(self, self._attr_name, value) return value def get(self): return getattr(self, self._attr_name) string = u'test bar 123' save_match = DataHolder(attr_name='match') if save_match(re.search('foo (\d+)', string)): print "Foo" print save_match.match.group(1) elif save_match(re.search('bar (\d+)', string)): print "Bar" print save_match.match.group(1) elif save_match(re.search('baz (\d+)', string)): print "Baz" print save_match.match.group(1) A: Alternatively, something not using regular expressions at all: prefix, data = var[:3], var[3:] if prefix == 'foo': # do something with data elif prefix == 'bar': # do something with data elif prefix == 'baz': # do something with data else: # do something with var Whether that is suitable depends on your actual problem. Don't forget, regular expressions aren't the swiss army knife that they are in Perl; Python has different constructs for doing string manipulation.
A: def find_first_match(string, *regexes): for regex, handler in regexes: m = re.search(regex, string) if m: handler(m) return else: raise ValueError find_first_match( foo, (r'foo(.+)', handle_foo), (r'bar(.+)', handle_bar), (r'baz(.+)', handle_baz)) To speed it up, one could turn all regexes into one internally and create the dispatcher on the fly. Ideally, this would be turned into a class then. A: Here's the way I solved this issue: matched = False; m = re.match("regex1", s); if not matched and m: #do something matched = True; m = re.match("regex2", s); if not matched and m: #do something else matched = True; m = re.match("regex3", s); if not matched and m: #do yet something else matched = True; Not nearly as clean as the original pattern. However, it is simple, straightforward and doesn't require extra modules or that you change the original regexes. A: Using named groups and a dispatch table: r = re.compile(r'(?P<cmd>foo|bar|baz)(?P<data>.+)') def do_foo(data): ... def do_bar(data): ... def do_baz(data): ... dispatch = { 'foo': do_foo, 'bar': do_bar, 'baz': do_baz, } m = r.match(var) if m: dispatch[m.group('cmd')](m.group('data')) With a little bit of introspection you can auto-generate the regexp and the dispatch table. A: r""" This is an extension of the re module. It stores the last successful match object and lets you access its methods and attributes via this module. This module exports the following additional functions: expand Return the string obtained by doing backslash substitution on a template string. group Returns one or more subgroups of the match. groups Return a tuple containing all the subgroups of the match. start Return the indices of the start of the substring matched by group. end Return the indices of the end of the substring matched by group. span Returns a 2-tuple of (start(), end()) of the substring matched by group. This module defines the following additional public attributes: pos The value of pos which was passed to the search() or match() method. endpos The value of endpos which was passed to the search() or match() method. lastindex The integer index of the last matched capturing group. lastgroup The name of the last matched capturing group. re The regular expression object which was passed to search() or match(). string The string passed to match() or search(). """ import re as re_ from re import * from functools import wraps __all__ = re_.__all__ + [ "expand", "group", "groups", "start", "end", "span", "last_match", "pos", "endpos", "lastindex", "lastgroup", "re", "string" ] last_match = pos = endpos = lastindex = lastgroup = re = string = None def _set_match(match=None): global last_match, pos, endpos, lastindex, lastgroup, re, string if match is not None: last_match = match pos = match.pos endpos = match.endpos lastindex = match.lastindex lastgroup = match.lastgroup re = match.re string = match.string return match @wraps(re_.match) def match(pattern, string, flags=0): return _set_match(re_.match(pattern, string, flags)) @wraps(re_.search) def search(pattern, string, flags=0): return _set_match(re_.search(pattern, string, flags)) @wraps(re_.findall) def findall(pattern, string, flags=0): matches = re_.findall(pattern, string, flags) if matches: _set_match(matches[-1]) return matches @wraps(re_.finditer) def finditer(pattern, string, flags=0): for match in re_.finditer(pattern, string, flags): yield _set_match(match) def expand(template): if last_match is None: raise TypeError, "No successful match yet."
return last_match.expand(template) def group(*indices): if last_match is None: raise TypeError, "No successful match yet." return last_match.group(*indices) def groups(default=None): if last_match is None: raise TypeError, "No successful match yet." return last_match.groups(default) def groupdict(default=None): if last_match is None: raise TypeError, "No successful match yet." return last_match.groupdict(default) def start(group=0): if last_match is None: raise TypeError, "No successful match yet." return last_match.start(group) def end(group=0): if last_match is None: raise TypeError, "No successful match yet." return last_match.end(group) def span(group=0): if last_match is None: raise TypeError, "No successful match yet." return last_match.span(group) del wraps # Not needed past module compilation For example: if gre.match("foo(.+)", var): # do something with gre.group(1) elif gre.match("bar(.+)", var): # do something with gre.group(1) elif gre.match("baz(.+)", var): # do something with gre.group(1) A: How about using a dictionary? match_objects = {} if match_objects.setdefault( 'mo_foo', re_foo.search( text ) ): # do something with match_objects[ 'mo_foo' ] elif match_objects.setdefault( 'mo_bar', re_bar.search( text ) ): # do something with match_objects[ 'mo_bar' ] elif match_objects.setdefault( 'mo_baz', re_baz.search( text ) ): # do something with match_objects[ 'mo_baz' ] ... however, you must ensure there are no duplicate match_objects dictionary keys (mo_foo, mo_bar, ...), best by giving each regular expression its own name and naming the match_objects keys accordingly, otherwise the match_objects.setdefault() method would return an existing match object instead of creating a new match object by running re_xxx.search( text ). A: Expanding on the solution by Pat Notz a bit, I found it even more elegant to: - name the methods the same as re provides (e.g. search() vs. check()) and - implement the necessary methods like group() on the holder object itself: class Re(object): def __init__(self): self.result = None def search(self, pattern, text): self.result = re.search(pattern, text) return self.result def group(self, index): return self.result.group(index) Example Instead of e.g. this: m = re.search(r'set ([^ ]+) to ([^ ]+)', line) if m: vars[m.group(1)] = m.group(2) else: m = re.search(r'print ([^ ]+)', line) if m: print(vars[m.group(1)]) else: m = re.search(r'add ([^ ]+) to ([^ ]+)', line) if m: vars[m.group(2)] += vars[m.group(1)] One does just this: m = Re() ... if m.search(r'set ([^ ]+) to ([^ ]+)', line): vars[m.group(1)] = m.group(2) elif m.search(r'print ([^ ]+)', line): print(vars[m.group(1)]) elif m.search(r'add ([^ ]+) to ([^ ]+)', line): vars[m.group(2)] += vars[m.group(1)] Looks very natural in the end, does not need too many code changes when moving from Perl and avoids the problems with global state like some other solutions. A: A minimalist DataHolder: class Holder(object): def __call__(self, *x): if x: self.x = x[0] return self.x data = Holder() if data(re.search('foo (\d+)', string)): print data().group(1) or as a singleton function: def data(*x): if x: data.x = x[0] return data.x A: My solution would be: import re class Found(Exception): pass try: for m in re.finditer('bar(.+)', var): # Do something raise Found for m in re.finditer('foo(.+)', var): # Do something else raise Found except Found: pass A: Here is a RegexDispatcher class that dispatches its subclass methods by regular expression. Each dispatchable method is annotated with a regular expression, e.g.
def plus(self, regex: r"\+", **kwargs): ... In this case, the annotation is called 'regex' and its value is the regular expression to match on, '\+', which is the + sign. These annotated methods are put in subclasses, not in the base class. When the dispatch(...) method is called on a string, the class finds the method with an annotation regular expression that matches the string and calls it. Here is the class: import inspect import re class RegexMethod: def __init__(self, method, annotation): self.method = method self.name = self.method.__name__ self.order = inspect.getsourcelines(self.method)[1] # The line in the source file self.regex = self.method.__annotations__[annotation] def match(self, s): return re.match(self.regex, s) # Make it callable def __call__(self, *args, **kwargs): return self.method(*args, **kwargs) def __str__(self): return str.format("Line: %s, method name: %s, regex: %s" % (self.order, self.name, self.regex)) class RegexDispatcher: def __init__(self, annotation="regex"): self.annotation = annotation # Collect all the methods that have an annotation that matches self.annotation # For example, methods that have the annotation "regex", which is the default self.dispatchMethods = [RegexMethod(m[1], self.annotation) for m in inspect.getmembers(self, predicate=inspect.ismethod) if (self.annotation in m[1].__annotations__)] # Be sure to process the dispatch methods in the order they appear in the class! # This is because the order in which you test regexes is important. # The most specific patterns must always be tested BEFORE more general ones # otherwise they will never match. self.dispatchMethods.sort(key=lambda m: m.order) # Finds the FIRST match of s against a RegexMethod in dispatchMethods, calls the RegexMethod and returns def dispatch(self, s, **kwargs): for m in self.dispatchMethods: if m.match(s): return m(self.annotation, **kwargs) return None To use this class, subclass it to create a class with annotated methods. By way of example, here is a simple RPNCalculator that inherits from RegexDispatcher. The methods to be dispatched are (of course) the ones with the 'regex' annotation. The parent dispatch() method is invoked in call. 
from RegexDispatcher import * import math class RPNCalculator(RegexDispatcher): def __init__(self): RegexDispatcher.__init__(self) self.stack = [] def __str__(self): return str(self.stack) # Make RPNCalculator objects callable def __call__(self, expression): # Calculate the value of expression for t in expression.split(): self.dispatch(t, token=t) return self.top() # return the top of the stack # Stack management def top(self): return self.stack[-1] if len(self.stack) > 0 else [] def push(self, x): return self.stack.append(float(x)) def pop(self, n=1): return self.stack.pop() if n == 1 else [self.stack.pop() for n in range(n)] # Handle numbers def number(self, regex: r"[-+]?[0-9]*\.?[0-9]+(?:[eE][-+]?[0-9]+)?", **kwargs): self.stack.append(float(kwargs['token'])) # Binary operators def plus(self, regex: r"\+", **kwargs): a, b = self.pop(2) self.push(b + a) def minus(self, regex: r"\-", **kwargs): a, b = self.pop(2) self.push(b - a) def multiply(self, regex: r"\*", **kwargs): a, b = self.pop(2) self.push(b * a) def divide(self, regex: r"\/", **kwargs): a, b = self.pop(2) self.push(b / a) def pow(self, regex: r"exp", **kwargs): a, b = self.pop(2) self.push(a ** b) def logN(self, regex: r"logN", **kwargs): a, b = self.pop(2) self.push(math.log(a,b)) # Unary operators def neg(self, regex: r"neg", **kwargs): self.push(-self.pop()) def sqrt(self, regex: r"sqrt", **kwargs): self.push(math.sqrt(self.pop())) def log2(self, regex: r"log2", **kwargs): self.push(math.log2(self.pop())) def log10(self, regex: r"log10", **kwargs): self.push(math.log10(self.pop())) def pi(self, regex: r"pi", **kwargs): self.push(math.pi) def e(self, regex: r"e", **kwargs): self.push(math.e) def deg(self, regex: r"deg", **kwargs): self.push(math.degrees(self.pop())) def rad(self, regex: r"rad", **kwargs): self.push(math.radians(self.pop())) # Whole stack operators def cls(self, regex: r"c", **kwargs): self.stack=[] def sum(self, regex: r"sum", **kwargs): self.stack=[math.fsum(self.stack)] if __name__ == '__main__': calc = RPNCalculator() print(calc('2 2 exp 3 + neg')) print(calc('c 1 2 3 4 5 sum 2 * 2 / pi')) print(calc('pi 2 * deg')) print(calc('2 2 logN')) I like this solution because there are no separate lookup tables. The regular expression to match on is embedded in the method to be called as an annotation. For me, this is as it should be. It would be nice if Python allowed more flexible annotations, because I would rather put the regex annotation on the method itself rather than embed it in the method parameter list. However, this isn't possible at the moment. For interest, take a look at the Wolfram language in which functions are polymorphic on arbitrary patterns, not just on argument types. A function that is polymorphic on a regex is a very powerful idea, but we can't get there cleanly in Python. The RegexDispatcher class is the best I could do. A: import re s = '1.23 Million equals to 1230000' s = re.sub("([\d.]+)(\s*)Million", lambda m: str(round(float(m.groups()[0]) * 1000_000))+m.groups()[1], s) print(s) 1230000 equals to 1230000
{ "language": "en", "url": "https://stackoverflow.com/questions/122277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "48" }
Q: Silverlight development [Visual Studio 2008 vs Expression Blend] I'm a .Net developer and want to get into developing Silverlight applications. I have VS 2008 but wanted to know if I should/need Expression Blend. What are the pros/cons of having Blend alongside VS? A: I don't mean to push my own questions, but this might help you a little, especially if you are not familiar with Expression Studio or Expression Blend: "What exactly is Microsoft Expression Studio and how does it integrate with Visual Studio?" A: The major pro is that it is the visual design surface for the XAML for Silverlight. Right now, the visual design surface in VS for Silverlight is read-only. I always describe Blend as a graphical XAML editor. It is incredibly powerful for helping you to understand XAML, easily create animations that may be needed, template controls, position your application elements, etc. A: If you are going to do minimal XAML stuff then the free and excellent tool KaXAML will help you. Take a deep look here http://www.kaxaml.com/ A: Blend is very useful for designing, and its new visual state editing feature for controls can be very handy. However, the XAML it generates might be more difficult to maintain than custom-written XAML. Personally, I use Blend whenever I need to generate a certain visual effect, and then copy the XAML into the actual solution, but in an ideal environment, you should be providing the designer with raw XAML generated in VS, and he should edit it with Blend. A: You will want Blend for prototyping animations, making templates, changing colors. I don't use it every day; once you put in a valueconverter it renders that element unrenderable. To do anything with Blend I create a new project in Blend, mock up what I want and cut the xaml out. So it's rare that I use it. You don't NEED it though. Get the June preview or get it from MSDN, but don't go and buy it. If you get the preview you can use it for learning animations and xaml. After you work with xaml enough it's faster to go to the xaml and not use any of the tools. Designers use Adobe products and Macs... So the only thing you NEED is a converter.
{ "language": "en", "url": "https://stackoverflow.com/questions/122278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Can the Java File method "canWrite()" support locking? I have a Java application that monitors a folder for incoming XML files. When a new file is detected I need to test that the file is not currently being updated and is closed. My thought is to use File.canWrite() to test this. Is there any issue with doing this? Is this a good way to test that a file has been completely written? Other ideas I am throwing around are: * *Parse the incoming XML file and test that the closing tag is there. *Check for the EoF character. I just am not sure that any of these methods will handle all scenarios. A: Additionally, if you do a check followed by a write, then you have a race condition. The state could change between the check and the write. Sometimes it's best to try and do the thing you want and handle errors gracefully, perhaps with an n-attempt retry mechanism and an increasing fallback delay. Or redefine your test. In this case, you could perhaps test that the filesize hasn't changed over a period of time before processing it. Another option is to split the code into two: you could have another thread -- perhaps a Quartz task -- responsible for moving finished files into a different directory that your main code processes. A: One thing that appears to work in Windows is this - Create a File() object that represents the file in question (using constructor with full filename) - Create a second identical File Object, same way. - Try firstFile.renameTo(secondFile) This dummy renaming exercise seems to succeed with files that are not open for editing by another app (I tested with Word), but fails if they are open. And as the new filename = the old filename it doesn't create any other work. A: No, canWrite is not suitable for this purpose. In general the file will be writable even if another process is writing. You need a higher level protocol to coordinate the locking. If you plan to use this code on a single platform, you may be able to use NIO's FileLock facility. But read the documentation carefully, and note that on many platforms, the lock is only advisory. Another approach is to have one process write the file with a name that your process won't recognize, then rename the file to a recognizable name when the write is complete. On most platforms, the rename operation is atomic if the source and destination are the same file system volume. The name change might use a different file extension, or even moving the file from one directory to another (on the same volume). Since in this case you are working exclusively with XML, looking for a close tag would work, but it isn't foolproof: what if there are comments after the final markup, or the writer simply doesn't write valid XML? Looking for the EOF will not work. There will always be an EOF, even when the writer has just opened the file and hasn't written anything yet. If this weren't so, the easiest thing would be to allow the reader to start parsing as soon as the file showed up; it would simply block until the writer closed the file. But the file system doesn't work this way. Every file has an end, even if some process is currently moving it. A: As far as I know, there is no way to tell if another process currently has an open handle to a file from Java. One option is to use the FileLock class from new io. This isn't supported on all platforms, but if the files are local and the process writing the file cooperates, this should work for any platform supporting locks.
A: If you control both the reader and writer, then a potential locking technique would be to create a lock directory -- which is typically an atomic operation -- for the read and the write process duration. If you take this type of approach, you have to manage the potential failure of a process resulting in a "hanging" lock directory. As Cheekysoft mentioned, files are not atomic and are ill suited for locking. If you don't control the writer -- for instance if it's being produced by an FTP daemon -- then the rename technique or delay for time span technique are your best options.
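As a rough sketch of the cooperative NIO FileLock approach described above (remember the lock is only advisory on many platforms, and the writer must also take the lock for this to mean anything):

import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

public class LockProbe {
    // Returns true if an exclusive lock could be taken, i.e. no
    // cooperating writer currently holds one on this file.
    static boolean seemsFinished(File f) throws Exception {
        RandomAccessFile raf = new RandomAccessFile(f, "rw");
        try {
            FileLock lock = raf.getChannel().tryLock();
            if (lock == null) {
                return false; // another process holds the lock
            }
            lock.release();
            return true;
        } finally {
            raf.close();
        }
    }
}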
{ "language": "en", "url": "https://stackoverflow.com/questions/122282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How do you Schedule Index Updates in CouchDB As far as I understand, CouchDB indexes are updated when a view is queried. Assuming there are more reads than writes, isn't this bad for scaling? How would I configure CouchDB to update indexes on writes, or better yet, on a schedule? A: You can't and also, why would you want that? Think about it like that: * *When you import data into MySQL you can turn off indexes because it's more expensive to update the index for every row you insert, than it is to update the index for 100 writes (or however many rows you import) in a single run. *This is why CouchDB updates the index on read because it's less expensive to integrate those 100 changes at the same time, than each change when it's written. This is one of the advantages of CouchDB! :) I am not saying that this is a CouchDB only feature, but it's just smart to do this on read. One thing you could do is read with update=false, which is a dirty read and might not return what you expect. If you always do this, you could schedule a "regular" read through a cronjob and update your index with that. I just don't think it makes sense. A: CouchDB does regenerate views on update, but only on what has changed since the last read access to the view. Assuming your read volume greatly outweighs your write volume, this shouldn't be a problem. When you're changing large numbers of documents at once this could lead to the possibility of the first read requests taking a noticeable amount of time. To alleviate this a few different possibilities have been suggested. Most rely on registering with CouchDB's update notifications and triggering reads automatically. An example script for doing exactly that is available on the CouchDB wiki at [1]. [1] http://wiki.apache.org/couchdb/RegeneratingViewsOnUpdate A: a) "Scaling" is such an overloaded term. What "kind" of scaling are you referring to? (Either way, I can't see how it affects you negatively). b) Update on writes: Just query your view after a write. Note that adding a bunch of data to the index is more resource friendly (that's not specific to CouchDB). So you might want to trigger your view every N writes. c) Scheduled: Set up a cronjob that queries your view every M minutes. d) Wait for CouchDB to evolve to provide you with the infrastructure that allows you to set this up with a configuration parameter. e) (BEST OPTION). Get your hands dirty and help us out polishing CouchDB! Any contributions are highly appreciated. f) RTFM (blink :)
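As a sketch of the cronjob idea from the answers, a script like the following could be run every few minutes to keep a view's index warm. The database, design document and view names here are made up, and the view URL layout varies between CouchDB versions:

from urllib.request import urlopen
from urllib.error import URLError

# Hypothetical view: database "mydb", design doc "app", view "by_date".
VIEW_URL = "http://localhost:5984/mydb/_design/app/_view/by_date?limit=1"

try:
    urlopen(VIEW_URL).read()  # the read itself triggers index regeneration
except URLError as e:
    print("view warm-up failed:", e)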
{ "language": "en", "url": "https://stackoverflow.com/questions/122298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How can I get a summary of my cvs conflicts when doing a cvs update on the command line? Is there an easy way to get a conflict summary after running a cvs update? I work on a large project and after doing some work I need to do an update. The list of changes coming back from the cvs update command is several pages long and I'd like to see only the list of conflicts (starts with 'C') repeated at the end of the cvs update command output. The solution needs to work from the command line. If my normal output is: M src/file1.txt M src/file2.txt cvs server: conflicts found ... C src/file3.txt M src/file4.txt M src/file5.txt I want my new output to be: M src/file1.txt M src/file2.txt cvs server: conflicts found ... C src/file3.txt M src/file4.txt M src/file5.txt Conflict Summary: C src/file3.txt I want this to be a single command (possibly a short script or alias) that outputs the normal cvs output as it happens followed by a summary of conflicts. A: Given the specification, it seems that you need a minor adaptation of Martin York's solution (because that only shows the conflicts and not the normal log information). Something like this - which might be called 'cvsupd': tmp=${TMPDIR:-/tmp}/cvsupd.$$ trap "rm -f $tmp; exit 1" 0 1 2 3 13 15 cvs update "$@" | tee $tmp if grep -s '^C' $tmp then echo echo Conflict Summary: grep '^C' $tmp fi rm -f $tmp trap 0 exit 0 The trap commands ensure that the log file is not left around. It catches the normal signals - HUP, INT, QUIT, PIPE and TERM (respectively) and 0 traps any other exit from the shell. A: I don't have cvs handy; what is the exact format of the output of cvs update? I seem to remember: C <filename> If so, you could use: cvs update | tee log | grep "^C" The full output of cvs is saved into log for use at another point. Then we grep for all lines beginning with 'C'. Hope that helps.
{ "language": "en", "url": "https://stackoverflow.com/questions/122301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: As System in Sqlplus, how do I query another user's table? According to select name from system_privilege_map System has been granted: SELECT ANY TABLE ...and lots of other * ANY TABLES. Plainly running select * from the_table; ...nets the given response: ERROR at line 1: ORA-00942: table or view does not exist I can log in as that user and run the same command just fine. I'm running under the assumption I should be able to run queries (select in this case) against a general user's DB table. Is my assumption correct, and if so, how do I do it? A: As the previous responses have said, you can prefix the object name with the schema name: SELECT * FROM schema_name.the_table; Or you can use a synonym (private or public): CREATE (PUBLIC) SYNONYM the_table FOR schema_name.the_table; Or you can issue an alter session command to set the default schema to the one you want: ALTER SESSION SET current_schema=schema_name; Note that this just sets the default schema, and is the equivalent of prefixing all (unqualified) object names with schema_name. You can still prefix objects with a different schema name to access an object from another schema. Using SET current_schema does not affect your privileges: you still have the privileges of the user you logged in as, not the schema you have set. A: If the_table is owned by user "some_user" then: select * from some_user.the_table; A: You need to do: SELECT * FROM schema_name.the_table; Or use SYNONYMs... CREATE SYNONYM the_table FOR schema_name.the_table;
{ "language": "en", "url": "https://stackoverflow.com/questions/122302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do you include/exclude a certain type of files under Subversion? I'm getting confused with the include/exclude jargon, and my actual SVN client doesn't seem to have (or I've been unable to find it easily) a simple option to add or remove a certain type of files for version control. Let's say for example I've added the entire Visual Studio folder, with its solutions, projects, debug files, etc., but I only want to version the actual source files. What would be the simplest way to do that? A: You're probably safest excluding particular filetypes, rather than picking those you want to include, as you could then add a new type and not realize it wasn't versioned. On a per-directory basis, you can edit the svn:ignore property. Run svn propedit svn:ignore . for each relevant directory to bring up an editor with a list of patterns to ignore. Then put a pattern on each line corresponding to the filetype you'd like to ignore: *.user *.exe *.dll and what have you. Alternatively, as has been suggested, you can add those patterns to the global-ignores property in your ~/.subversion/config file (or "%APPDATA%\Subversion\config" on Windows - see Configuration Area Layout in the red bean book for more information). In that case, separate the patterns with spaces. Here's mine. # at the beginning of the line introduces a comment. I've ignored Ankh .Load files and all *.resharper.user files: ### Set global-ignores to a set of whitespace-delimited globs ### which Subversion will ignore in its 'status' output, and ### while importing or adding files and directories. # global-ignores = *.o *.lo *.la #*# .*.rej *.rej .*~ *~ .#* .DS_Store global-ignores = Ankh.Load *.resharper.user A: This can be achieved using the svn:ignore property, or the global-ignores property in your ~/.subversion/config file. (Scroll to the top of that first link to see instructions on editing properties.) By using svn propset or svn propedit on a directory, you will be able to make Subversion ignore all files matching that pattern within the specific directory. If you change global-ignores in ~/.subversion/config's [miscellany] section, however, Subversion will ignore such files no matter where they are located. A: See blog post svn:ignore. I know using TortoiseSVN that I can click on a root folder where I have something checked out and can add arbitrary properties by selecting the "Properties" menu item. In this case you would just specify file patterns to exclude. The blog post is for command line stuff, but I'm sure it will work fine with whatever client you're using. A: At the lowest level, SVN allows you to ignore certain files or patterns with the svn:ignore attribute. VS add-ins for SVN such as VisualSVN will automatically ignore those files on your behalf. If you're using TortoiseSVN, you can right-click files and folders in Explorer and choose Add to Ignore List. A: Using the svn:ignore property, you can use wildcards. A: Another way, when using TortoiseSVN, you can select "Commit..." and then right click on a file and move to changelist "ignore-on-commit". A: If you use Eclipse (I use Spring Tool Suite): Preferences > Team > Ignored Resources click on Add Pattern, write .DS_Store (or whatever) and Save. This acts globally in your workspace.
{ "language": "en", "url": "https://stackoverflow.com/questions/122313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: Template Constraints C++ In C# we can define a generic type that imposes constraints on the types that can be used as the generic parameter. The following example illustrates the usage of generic constraints: interface IFoo { } class Foo<T> where T : IFoo { } class Bar : IFoo { } class Simpson { } class Program { static void Main(string[] args) { Foo<Bar> a = new Foo<Bar>(); Foo<Simpson> b = new Foo<Simpson>(); // error CS0309 } } Is there a way we can impose constraints for template parameters in C++? C++0x has native support for this but I am talking about current standard C++. A: Check out Boost The Boost Concept Check Library (BCCL) The Concept Check library allows one to add explicit statement and checking of concepts in the style of the proposed C++ language extension. A: If you use C++11, you can use static_assert with std::is_base_of for this purpose. For example, #include <type_traits> template<typename T> class YourClass { YourClass() { // Compile-time check static_assert(std::is_base_of<BaseClass, T>::value, "type parameter of this class must derive from BaseClass"); // ... } }; A: Using C++20, yes there is: Constraints and concepts Perhaps you want to guarantee a template is derived from a specific class: #include <concepts> template<class T, class U> concept Derived = std::is_base_of<U, T>::value; class ABase { }; class ADerived : ABase { }; template<Derived<ABase> T> class AClass { T aMemberDerivedFromABase; }; The following then compiles like normal: int main () { AClass<ADerived> aClass; return 0; } But now when you do something against the constraint: class AnotherClass { }; int main () { AClass<AnotherClass> aClass; return 0; } AnotherClass is not derived from ABase, therefore my compiler (GCC) gives roughly the following error: In function 'int main()': note: constraints not satisfied note: the expression 'std::is_base_of<U, T>::value [with U = ABase; T = AnotherClass]' evaluated to 'false' 9 | concept Derived = std::is_base_of<U, T>::value; As you can imagine this feature is very useful and can do much more than constraining a class to have a specific base. A: "Implicitly" is the correct answer. Templates effectively create a "duck typing" scenario, due to the way in which they are compiled. You can call any functions you want upon a template-typed value, and the only instantiations that will be accepted are those for which that method is defined. For example: template <class T> int compute_length(T *value) { return value->length(); } We can call this method on a pointer to any type which declares the length() method to return an int. Thusly: string s = "test"; vector<int> vec; int i = 0; compute_length(&s); compute_length(&vec); ...but not on a pointer to a type which does not declare length(): compute_length(&i); This third example will not compile. This works because C++ compiles a new version of the templatized function (or class) for each instantiation. As it performs that compilation, it makes a direct, almost macro-like substitution of the template instantiation into the code prior to type-checking. If everything still works with that template, then compilation proceeds and we eventually arrive at a result. If anything fails (like int* not declaring length()), then we get the dreaded six-page template compile-time error. A: As someone else has mentioned, C++0x is getting this built into the language. Until then, I'd recommend Bjarne Stroustrup's suggestions for template constraints. Edit: Boost also has an alternative of its own. 
Edit2: Looks like concepts have been removed from C++0x. A: Sort of. If you static_cast to an IFoo*, then it will be impossible to instantiate the template unless the caller passes a class that can be assigned to an IFoo *. A: You can do it. Create the base template. Make it have only private constructors. Then create specializations for each case you want to allow (or make the opposite if the disallowed list is much smaller than the allowed list). The compiler will not allow you to instantiate the templates that use the version with private constructors. This example only allows instantiation with int and float. template<class t> class FOO { private: FOO(){}}; template<> class FOO<int>{public: FOO(){}}; template<> class FOO<float>{public: FOO(){}}; It's not a short and elegant way of doing it, but it's possible. A: You can put a guard type on IFoo that does nothing, and make sure it's there on T in Foo: class IFoo { public: typedef int IsDerivedFromIFoo; }; template <typename T> class Foo { typedef typename T::IsDerivedFromIFoo IFooGuard; }; A: Only implicitly. Any method you call on the template parameter in code that actually gets instantiated imposes a requirement on that parameter. A: Look at the CRTP pattern (Curiously Recurring Template Pattern). It is designed to help support static inheritance.
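If C++11 is available, another option in the same spirit is SFINAE via std::enable_if; a minimal sketch, reusing the question's IFoo/Bar/Simpson names:

#include <type_traits>

class IFoo { public: virtual ~IFoo() {} };
class Bar : public IFoo { };
class Simpson { };

// SFINAE removes the instantiation unless T derives from IFoo.
template <typename T,
          typename = typename std::enable_if<
              std::is_base_of<IFoo, T>::value>::type>
class Foo { };

int main() {
    Foo<Bar> a;         // compiles
    // Foo<Simpson> b;  // error: enable_if has no 'type' member here
    return 0;
}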
{ "language": "en", "url": "https://stackoverflow.com/questions/122316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "83" }
Q: Why doesn't VB.NET 9 have Automatic Properties like C# 3? Wouldn't having a nice little feature that makes it quicker to write code, like Automatic Properties, fit very nicely with the mantra of VB.NET? Something like this would work perfectly: Public Property FirstName() As String Get Set End Property UPDATE: VB.NET 10 (coming with Visual Studio 2010 and .NET 4.0) will have Automatic Properties. Here's a link that shows a little info about the feature: http://geekswithblogs.net/DarrenFieldhouse/archive/2008/12/01/new-features-in-vb.net-10-.net-4.0.aspx In VB.NET 10 Automatic Properties will be defined like this: Public Property CustomerID As Integer A: One reason many features get delayed in VB is that the development structure is much different than in C# and additionally, that often more thought goes into details. The same seems to be true in this case, as suggested by Paul Vick's post on the matter. This is unfortunate because it means a delay in many cases (automatic properties, iterator methods, multiline lambdas, to name but a few) but on the other hand, the VB developers usually get a much more mature feature in the long run (looking at the discussion, this will be especially true for iterator methods). So, long story short: VB 10 will (hopefully!) see automatic properties. A: It also wasn't as big of a pain point in VB.NET, since Visual Studio will automatically create 90% of the skeleton code of a property for you, whereas with C# you used to have to type it all out. A: If you want to do properties a little quicker, try code snippets. Type: Property and just after typing the "y", press the Tab key :-). I realize this doesn't answer the particular question, but it does give you what the VB team provided... A: I know this post is old so you may already know, but VB is getting Auto Properties in the next version of VS. Based on responses to feedback and Channel9. A: C# and VB.NET don't exactly line up on new features in their first versions. Usually, by the next version C# catches up with some VB.NET features and vice versa. I kind of like literal XML from VB.NET, and I'm hoping they add that to C#. A: There's no particular reason really. It's always been the case that even when VB.NET and C# are touted to be equally powerful (and to be fair, they are) their syntaxes and some of the structures sometimes differ. You have two different development teams working on the languages, so it's something you can expect to happen. A: Automatic properties are not necessary in VB. The concession one makes by using an automatic property is that you cannot modify the Get and Set. If you don't require those, just make a public data field. VB has had automatic properties for years. They just called them something else.
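For concreteness, a sketch of the expanded pattern that the VB.NET 10 auto property replaces (the field and property names here are illustrative):

' Pre-VB10: an explicit backing field plus Get/Set blocks
Private _customerID As Integer

Public Property CustomerID() As Integer
    Get
        Return _customerID
    End Get
    Set(ByVal value As Integer)
        _customerID = value
    End Set
End Property

' VB.NET 10: one line; the compiler generates a _CustomerID backing field
Public Property CustomerID As Integer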
{ "language": "en", "url": "https://stackoverflow.com/questions/122324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How do I find the location of my Python site-packages directory? How do I find the location of my site-packages directory? A: There are two types of site-packages directories, global and per user. * *Global site-packages ("dist-packages") directories are listed in sys.path when you run: python -m site For a more concise list run getsitepackages from the site module in Python code: python -c 'import site; print(site.getsitepackages())' Caution: In virtual environments getsitepackages is not available with older versions of virtualenv, sys.path from above will list the virtualenv's site-packages directory correctly, though. In Python 3, you may use the sysconfig module instead: python3 -c 'import sysconfig; print(sysconfig.get_paths()["purelib"])' *The per user site-packages directory (PEP 370) is where Python installs your local packages: python -m site --user-site If this points to a non-existing directory check the exit status of Python and see python -m site --help for explanations. Hint: Running pip list --user or pip freeze --user gives you a list of all installed per user site-packages. Practical Tips * *<package>.__path__ lets you identify the location(s) of a specific package: (details) $ python -c "import setuptools as _; print(_.__path__)" ['/usr/lib/python2.7/dist-packages/setuptools'] *<module>.__file__ lets you identify the location of a specific module: (difference) $ python3 -c "import os as _; print(_.__file__)" /usr/lib/python3.6/os.py *Run pip show <package> to show Debian-style package information: $ pip show pytest Name: pytest Version: 3.8.2 Summary: pytest: simple powerful testing with Python Home-page: https://docs.pytest.org/en/latest/ Author: Holger Krekel, Bruno Oliveira, Ronny Pfannschmidt, Floris Bruynooghe, Brianna Laugher, Florian Bruhin and others Author-email: None License: MIT license Location: /home/peter/.local/lib/python3.4/site-packages Requires: more-itertools, atomicwrites, setuptools, attrs, pathlib2, six, py, pluggy A: This is what worked for me: python -m site --user-site A: You should try this command to determine pip's install location Python 2 pip show six | grep "Location:" | cut -d " " -f2 Python 3 pip3 show six | grep "Location:" | cut -d " " -f2 A: >>> import site; site.getsitepackages() ['/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages'] (or just first item with site.getsitepackages()[0]) A: A modern stdlib way is using sysconfig module, available in version 2.7 and 3.2+. Unlike the current accepted answer, this method still works regardless of whether or not you have a virtual environment active. Note: sysconfig (source) is not to be confused with the distutils.sysconfig submodule (source) mentioned in several other answers here. The latter is an entirely different module and it's lacking the get_paths function discussed below. Additionally, distutils is deprecated in 3.10 and will be unavailable soon. Python currently uses eight paths (docs): * *stdlib: directory containing the standard Python library files that are not platform-specific. *platstdlib: directory containing the standard Python library files that are platform-specific. *platlib: directory for site-specific, platform-specific files. *purelib: directory for site-specific, non-platform-specific files. *include: directory for non-platform-specific header files. *platinclude: directory for platform-specific header files. *scripts: directory for script files. *data: directory for data files. 
In most cases, users finding this question would be interested in the 'purelib' path (in some cases, you might be interested in 'platlib' too). The purelib path is where ordinary Python packages will be installed by tools like pip. At system level, you'll see something like this: # Linux $ python3 -c "import sysconfig; print(sysconfig.get_path('purelib'))" /usr/local/lib/python3.8/site-packages # macOS (brew installed python3.8) $ python3 -c "import sysconfig; print(sysconfig.get_path('purelib'))" /usr/local/Cellar/python@3.8/3.8.3/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages # Windows C:\> py -c "import sysconfig; print(sysconfig.get_path('purelib'))" C:\Users\wim\AppData\Local\Programs\Python\Python38\Lib\site-packages With a venv, you'll get something like this # Linux /tmp/.venv/lib/python3.8/site-packages # macOS /private/tmp/.venv/lib/python3.8/site-packages # Windows C:\Users\wim\AppData\Local\Temp\.venv\Lib\site-packages The function sysconfig.get_paths() returns a dict of all of the relevant installation paths, example on Linux: >>> import sysconfig >>> sysconfig.get_paths() {'stdlib': '/usr/local/lib/python3.8', 'platstdlib': '/usr/local/lib/python3.8', 'purelib': '/usr/local/lib/python3.8/site-packages', 'platlib': '/usr/local/lib/python3.8/site-packages', 'include': '/usr/local/include/python3.8', 'platinclude': '/usr/local/include/python3.8', 'scripts': '/usr/local/bin', 'data': '/usr/local'} A shell script is also available to display these details, which you can invoke by executing sysconfig as a module: python -m sysconfig Addendum: What about Debian / Ubuntu? As some commenters point out, the sysconfig results for Debian systems (and Ubuntu, as a derivative) are not accurate. When a user pip installs a package it will go into dist-packages not site-packages, as per Debian policies on Python packaging. The root cause of the discrepancy is because Debian patch the distutils install layout, to correctly reflect their changes to the site, but they fail to patch the sysconfig module. For example, on Ubuntu 20.04.4 LTS (Focal Fossa): root@cb5e85f17c7f:/# python3 -m sysconfig | grep packages platlib = "/usr/lib/python3.8/site-packages" purelib = "/usr/lib/python3.8/site-packages" root@cb5e85f17c7f:/# python3 -m site | grep packages '/usr/local/lib/python3.8/dist-packages', '/usr/lib/python3/dist-packages', USER_SITE: '/root/.local/lib/python3.8/site-packages' (doesn't exist) It looks like the patched Python installation that Debian/Ubuntu are distributing is a bit hacked up, and they will need to figure out a new plan for 3.12+ when distutils is completely unavailable. Probably, they will have to start patching sysconfig as well, since this is what pip will be using for install locations. A: Answer to old question. But use ipython for this. pip install ipython ipython import imaplib imaplib? This will give the following output about imaplib package - Type: module String form: <module 'imaplib' from '/usr/lib/python2.7/imaplib.py'> File: /usr/lib/python2.7/imaplib.py Docstring: IMAP4 client. Based on RFC 2060. 
Public class: IMAP4 Public variable: Debug Public functions: Internaldate2tuple Int2AP ParseFlags Time2Internaldate A: A solution that: * *outside of virtualenv - provides the path of global site-packages, *insidue a virtualenv - provides the virtualenv's site-packages ...is this one-liner: python -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())" Formatted for readability (rather than use as a one-liner), that looks like the following: from distutils.sysconfig import get_python_lib print(get_python_lib()) Source: an very old version of "How to Install Django" documentation (though this is useful to more than just Django installation) A: Let's say you have installed the package 'django'. import it and type in dir(django). It will show you, all the functions and attributes with that module. Type in the python interpreter - >>> import django >>> dir(django) ['VERSION', '__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', 'get_version'] >>> print django.__path__ ['/Library/Python/2.6/site-packages/django'] You can do the same thing if you have installed mercurial. This is for Snow Leopard. But I think it should work in general as well. A: For those who are using poetry, you can find your virtual environment path with poetry debug: $ poetry debug Poetry Version: 1.1.4 Python: 3.8.2 Virtualenv Python: 3.8.2 Implementation: CPython Path: /Users/cglacet/.pyenv/versions/3.8.2/envs/my-virtualenv Valid: True System Platform: darwin OS: posix Python: /Users/cglacet/.pyenv/versions/3.8.2 Using this information you can list site packages: ls /Users/cglacet/.pyenv/versions/3.8.2/envs/my-virtualenv/lib/python3.8/site-packages/ A: I made a really simple function that gets the job done import site def get_site_packages_dir(): return [p for p in site.getsitepackages() if p.endswith(("site-packages", "dist-packages"))][0] get_site_packages_dir() # '/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages' If you want to retrieve the results using the terminal: python3 -c "import site;print([p for p in site.getsitepackages() if p.endswith(('site-packages', 'dist-packages')) ][0])" /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages A: As others have noted, distutils.sysconfig has the relevant settings: import distutils.sysconfig print distutils.sysconfig.get_python_lib() ...though the default site.py does something a bit more crude, paraphrased below: import sys, os print os.sep.join([sys.prefix, 'lib', 'python' + sys.version[:3], 'site-packages']) (it also adds ${sys.prefix}/lib/site-python and adds both paths for sys.exec_prefix as well, should that constant be different). That said, what's the context? You shouldn't be messing with your site-packages directly; setuptools/distutils will work for installation, and your program may be running in a virtualenv where your pythonpath is completely user-local, so it shouldn't assume use of the system site-packages directly either. A: The native system packages installed with python installation in Debian based systems can be found at : /usr/lib/python2.7/dist-packages/ In OSX - /Library/Python/2.7/site-packages by using this small code : from distutils.sysconfig import get_python_lib print get_python_lib() However, the list of packages installed via pip can be found at : /usr/local/bin/ Or one can simply write the following command to list all paths where python packages are. 
>>> import site; site.getsitepackages() ['/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages'] Note: the location might vary based on your OS, like in OSX >>> import site; site.getsitepackages() ['/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/site-python', '/Library/Python/2.7/site-packages'] A: pip show will give all the details about a package: https://pip.pypa.io/en/stable/reference/pip_show/ To get the location: pip show <package_name> | grep Location In Linux, you can go to the site-packages folder by: cd $(python -c "import site; print(site.getsitepackages()[0])") A: I had to do something slightly different for a project I was working on: find the site-packages directory relative to the base install prefix. If the site-packages folder was in /usr/lib/python2.7/site-packages, I wanted the /lib/python2.7/site-packages part. I have, in fact, encountered systems where site-packages was in /usr/lib64, and the accepted answer did NOT work on those systems. Similar to cheater's answer, my solution peeks deep into the guts of Distutils, to find the path that actually gets passed around inside setup.py. It was such a pain to figure out that I don't want anyone to ever have to figure this out again. import sys import os from distutils.command.install import INSTALL_SCHEMES if os.name == 'nt': scheme_key = 'nt' else: scheme_key = 'unix_prefix' print(INSTALL_SCHEMES[scheme_key]['purelib'].replace('$py_version_short', (str.split(sys.version))[0][0:3]).replace('$base', '')) That should print something like /Lib/site-packages or /lib/python3.6/site-packages. A: Something that has not been mentioned which I believe is useful: if you have two versions of Python installed, e.g. both 3.8 and 3.5, there might be two folders called site-packages on your machine. In that case you can specify the Python version by using the following: py -3.5 -c "import site; print(site.getsitepackages()[1])" A: All the answers (or: the same answer repeated over and over) are inadequate. What you want to do is this: from setuptools.command.easy_install import easy_install class easy_install_default(easy_install): """ class easy_install had problems with the first parameter not being an instance of Distribution, even though it was. This is due to some import-related mess. """ def __init__(self): from distutils.dist import Distribution dist = Distribution() self.distribution = dist self.initialize_options() self._dry_run = None self.verbose = dist.verbose self.force = None self.help = 0 self.finalized = 0 e = easy_install_default() import distutils.errors try: e.finalize_options() except distutils.errors.DistutilsError: pass print e.install_dir The final line shows you the installation dir. Works on Ubuntu, whereas the above ones don't. Don't ask me about Windows or other dists, but since it's the exact same dir that easy_install uses by default, it's probably correct everywhere where easy_install works (so, everywhere, even Macs). Have fun. Note: original code has many swearwords in it. A: An additional note to the get_python_lib function mentioned already: on some platforms different directories are used for platform-specific modules (eg: modules that require compilation). If you pass plat_specific=True to the function you get the site packages for platform-specific packages. 
A: A side-note: The proposed solution (distutils.sysconfig.get_python_lib()) does not work when there is more than one site-packages directory (as recommended by this article). It will only return the main site-packages directory. Alas, I have no better solution either. Python doesn't seem to keep track of site-packages directories, just the packages within them. A: This works for me. It will get you both dist-packages and site-packages folders. If the folder is not on Python's path, it won't be doing you much good anyway. import sys; print [f for f in sys.path if f.endswith('packages')] Output (Ubuntu installation): ['/home/username/.local/lib/python2.7/site-packages', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages'] A: This should work on all distributions in and out of virtual environment due to it's "low-tech" nature. The os module always resides in the parent directory of 'site-packages' import os; print(os.path.dirname(os.__file__) + '/site-packages') To change dir to the site-packages dir I use the following alias (on *nix systems): alias cdsp='cd $(python -c "import os; print(os.path.dirname(os.__file__))"); cd site-packages' A: For Ubuntu, python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()" ...is not correct. It will point you to /usr/lib/pythonX.X/dist-packages This folder only contains packages your operating system has automatically installed for programs to run. On ubuntu, the site-packages folder that contains packages installed via setup_tools\easy_install\pip will be in /usr/local/lib/pythonX.X/dist-packages The second folder is probably the more useful one if the use case is related to installation or reading source code. If you do not use Ubuntu, you are probably safe copy-pasting the first code box into the terminal. A: from distutils.sysconfig import get_python_lib print get_python_lib()
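Pulling the reliable pieces above together, a small sketch that prefers sysconfig and falls back to site (the path in the comment is typical, not guaranteed):

import site
import sysconfig

def get_site_packages():
    """Return the active environment's purelib (site-packages) path."""
    # sysconfig reflects the running interpreter, including venvs
    purelib = sysconfig.get_paths().get("purelib")
    if purelib:
        return purelib
    # fallback for older interpreters / patched distributions
    return site.getsitepackages()[0]

print(get_site_packages())  # e.g. /usr/local/lib/python3.8/site-packages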
{ "language": "en", "url": "https://stackoverflow.com/questions/122327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1332" }
Q: Scale image down, but not up in latex I have a command which includes an includegraphics command - I can pass an image to my command, and it will do some standard formatting for me before actually including the image. Some of the images that I'm including via this command are smaller than \textwidth, while some are larger. I'd like to scale the larger images down to \textwidth, while not scaling the smaller images up - this means I can't just do \includegraphics[width=\textwidth]{img} Is there a way to specify a maxwidth? Or, can I get the width of the image somehow so I can do something like \ifthenelse{\imagewidth > \textwidth}{% \includegraphics[width=\textwidth]{img}}{% \includegraphics{img}} A: To get the width of the image you can use this code: \newlength{\imgwidth} \settowidth{\imgwidth}{\includegraphics{img}} You could use this in the document preamble to create a new command to automatically set the width: \usepackage{graphicx} \usepackage{calc} \newlength{\imgwidth} \newcommand\scalegraphics[1]{% \settowidth{\imgwidth}{\includegraphics{#1}}% \setlength{\imgwidth}{\minof{\imgwidth}{\textwidth}}% \includegraphics[width=\imgwidth]{#1}% } and then, in your document: \scalegraphics{img} I hope this helps! A: I like an additional parameter for optionally scaling the image down or up a bit, so my version of \scalegraphics looks like this: \newcommand\scalegraphics[2][]{% \settowidth{\imgwidth}{\includegraphics{#2}}% \setlength{\imgwidth}{\minof{#1\imgwidth}{\textwidth}}% \includegraphics[width=\imgwidth]{#2}% } A: The adjustbox package is usefull for this. Below you will find a short example. It allows the following (besides triming, clipping, adding margins and relative scaling: \documentclass[a4paper]{article} \usepackage[demo]{graphicx} \usepackage[export]{adjustbox} \begin{document} \adjustbox{max width=\linewidth}{\includegraphics[width=.5\linewidth,height=3cm]{}} \adjustbox{max width=\linewidth}{\includegraphics[width=2\linewidth,height=3cm]{}} \includegraphics[width=2\linewidth,height=3cm,max width=\linewidth]{} \end{document} If you use the export package option most of its keys can be used directly with \includegraphics. FOr instance the key relevant to you, max width. A: If what you want to constrain is not an image but a standalone .tex file, you can slightly modify ChrisN's \scalegraphics to \newlength{\inputwidth} \newcommand\maxwidthinput[2][\linewidth]{% \settowidth{\inputwidth}{#2}% \setlength{\inputwidth}{\minof{\inputwidth}{#1}}% \resizebox{\inputwidth}{!}{#2} } and then use it like so \maxwidthinput{\input{standalone}} \maxwidthinput[0.5\textwidth]{\input{standalone}} And of course, adjustbox as suggested by ted will work as well: \usepackage{adjustbox} ... \adjustbox{max width=\linewidth}{\input{standalone}} A: After a few minutes of searching through CTAN manuals and Google results, I think I can safely say that what you want to do is either impossible or very hard. My only recommendation is that you have two commands, one for small images and one for large, or one command with an option. There may be a way, but I leave it to other S.O. LaTeX wizards to provide a better answer. Edit: I am wrong, see above.
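As a sketch, the pieces above assembled into a compilable document (the graphics file name example-image is a placeholder; that file happens to ship with the mwe package):

\documentclass{article}
\usepackage{graphicx}
\usepackage{calc}

\newlength{\imgwidth}
\newcommand\scalegraphics[1]{%
  \settowidth{\imgwidth}{\includegraphics{#1}}%
  \setlength{\imgwidth}{\minof{\imgwidth}{\textwidth}}%
  \includegraphics[width=\imgwidth]{#1}%
}

\begin{document}
% scaled down if wider than \textwidth, left at natural size otherwise
\scalegraphics{example-image}
\end{document}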
{ "language": "en", "url": "https://stackoverflow.com/questions/122348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: how do I create an array or list of ASP.NET checkboxlists I know how to use the checkboxlist in ASP.NET to display options retrieved from a database. What I don't know how to do is to make this 2-dimensional. That is, I need a list of checkboxlists where I don't know how long the list is; both dimensions of the checkboxlist will be determined by a list of people (pulled from the database) and a list of tasks (pulled from the database), and the user of the web page will click in a column/row to specify which people will be assigned which tasks. Right now I'm thinking that my only option is to brute-force it by creating a table and populating each cell with its own checkbox. (yuck) Is there a more elegant way to create a 2-dimensional array of checkboxes with labels for both rows and columns? A: I would use a repeater along with a checkboxlist. Depending on how your database is set up, you could have each checkboxlist databound. A: I've done this before and resorted to the brute-force method you suggest. It's not as nasty as you'd think. Other solutions that were declarative and databound would likely be just as convoluted and confusing. A: You can programmatically use a GridView control. It's inherently two-dimensional and you can use databound CheckBoxFields for it. A: I use the ASPxGridView from DevExpress. It has a control column type of Selected (or something like that) which will display a checkbox in the column with the other column populated from your bound datasource. The user can select any rows desired by checking the checkbox on the row, and you can get all the selected rows easily into a collection to process. DevExpress components really do get rid of a lot of brute-force programming. A: If you're looking for a quick and dirty way, you can use the AJAX Control Toolkit with the two controls and can populate one based on the other. If that's not what you're looking for, I'd do it the brute force way.
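A minimal sketch of the brute-force approach; the people/tasks collections and the taskTable PlaceHolder are hypothetical stand-ins for your own data access and markup:

// In Page_Init: build a people-by-tasks grid of checkboxes.
var table = new Table();
foreach (var person in people)          // e.g. loaded from the database
{
    var row = new TableRow();
    row.Cells.Add(new TableCell { Text = person.Name });
    foreach (var task in tasks)
    {
        var cell = new TableCell();
        cell.Controls.Add(new CheckBox
        {
            // encode both dimensions in the ID for postback handling
            ID = "chk_" + person.Id + "_" + task.Id
        });
        row.Cells.Add(cell);
    }
    table.Rows.Add(row);
}
taskTable.Controls.Add(table);          // PlaceHolder on the page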
{ "language": "en", "url": "https://stackoverflow.com/questions/122359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to empty/flush Windows READ disk cache in C#? If I am trying to determine the read speed of a drive, I can code a routine to write files to a filesystem and then read those files back. Unfortunately, this doesn't give an accurate read speed because Windows does disk read caching. Is there a way to flush the disk read cache of a drive in C# / .Net (or perhaps with Win32 API calls) so that I can read the files directly from the drive without them being cached? A: Why DIY? If you only need to determine drive speed and are not really interested in learning how to flush I/O buffers from .NET, you may just use the DiskSpd utility from http://research.microsoft.com/barc/Sequential_IO/. It has random/sequential modes with and without buffer flushing. The page also has some I/O related research reports you might find useful. A: const int FILE_FLAG_NO_BUFFERING = 0x20000000; return new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read, 64 * 1024, (FileOptions)FILE_FLAG_NO_BUFFERING | FileOptions.Asynchronous | FileOptions.SequentialScan); A: Constantin: Thanks! That link has a command-line EXE which does the testing I was looking for. I also found a link off that page to a more interesting article (in Word and PDF) on this page: Sequential File Programming Patterns and Performance with .NET In this article, it talks about Un-buffered File Performance (iow, no read/write caching -- just raw disk performance.) Quoted directly from the article: There is no simple way to disable FileStream buffering in the V2 .NET framework. One must invoke the Windows file system directly to obtain an un-buffered file handle and then ‘wrap’ the result in a FileStream as follows in C#: [DllImport("kernel32", SetLastError=true)] static extern unsafe SafeFileHandle CreateFile( string FileName, // file name uint DesiredAccess, // access mode uint ShareMode, // share mode IntPtr SecurityAttributes, // Security Attr uint CreationDisposition, // how to create uint FlagsAndAttributes, // file attributes SafeFileHandle hTemplate // template file ); SafeFileHandle handle = CreateFile(FileName, FileAccess.Read, FileShare.None, IntPtr.Zero, FileMode.Open, FILE_FLAG_NO_BUFFERING, null); FileStream stream = new FileStream(handle, FileAccess.Read, true, 4096); Calling CreateFile() with the FILE_FLAG_NO_BUFFERING flag tells the file system to bypass all software memory caching for the file. The ‘true’ value passed as the third argument to the FileStream constructor indicates that the stream should take ownership of the file handle, meaning that the file handle will automatically be closed when the stream is closed. After this hocus-pocus, the un-buffered file stream is read and written in the same way as any other. A: The response from Fix was almost right and better than PInvoke, but it has bugs and doesn't work as written... To open a file without caching, one needs to do the following: const FileOptions FileFlagNoBuffering = (FileOptions)0x20000000; FileStream file = new FileStream(fileName, fileMode, fileAccess, fileShare, blockSize, FileFlagNoBuffering | FileOptions.WriteThrough | fileOptions); A few rules: * *blockSize must be aligned to the hard drive's cluster size (4096 most of the time) *file position changes must be cluster-size aligned *you can't read/write less than blockSize or a block not aligned to its size And don't forget - there is also the HDD cache (which is slower and smaller than the OS cache) which you can't turn off by that (but sometimes FileOptions.WriteThrough helps for not caching writes). 
With those options you have no reason for flushing, but make sure you've properly tested that this approach won't slow things down, in case your own implementation of caching is slower. A: I found this article, and it seems that this is a complicated problem because you also have to flush other caches.
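To tie the pieces above into a crude read-speed test, a hedged C# sketch (the 4096-multiple block size is an assumption - query the volume if yours differs - and the alignment rules quoted above still apply):

using System;
using System.Diagnostics;
using System.IO;

class ReadSpeedTest
{
    const FileOptions FileFlagNoBuffering = (FileOptions)0x20000000;

    static void Main(string[] args)
    {
        const int blockSize = 4096 * 256;   // multiple of the cluster size
        var buffer = new byte[blockSize];
        long total = 0;
        var sw = Stopwatch.StartNew();

        using (var file = new FileStream(args[0], FileMode.Open,
            FileAccess.Read, FileShare.Read, blockSize, FileFlagNoBuffering))
        {
            int read;
            while ((read = file.Read(buffer, 0, blockSize)) > 0)
                total += read;              // unbuffered sequential read
        }

        sw.Stop();
        Console.WriteLine("{0:F1} MB/s",
            total / (1024.0 * 1024.0) / sw.Elapsed.TotalSeconds);
    }
}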
{ "language": "en", "url": "https://stackoverflow.com/questions/122362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Simulate Control-Alt-Delete key sequence in Vista and XP Can I simulate the Control+Alt+Delete sequence in C#/C++ code in Vista? With UAC enabled or disabled? How is it done in XP? Can you provide a code sample that works in Vista? A: Existing code to simulate the Secure Attention Sequence (SAS), which most people refer to as control alt delete or ctrl-alt-del, no longer works in Windows Vista. It seems that Microsoft offers a library that exports a function called SimulateSAS(). It is not public and one is supposed to request it by sending a mail to saslib@microsoft.com. There is a similar library available with the following features: * *Works both with and without User Account Control (UAC) *Supports current, console and any Terminal Server session *Does not need a driver *The calling application does not need to be signed or have a special manifest *Supports multiple programming languages Please note that this library is not free. Meanwhile you can contact info@simulatesas.com if you are interested in it. A: Please use the information below; "saslib@microsoft.com" is deprecated and unlikely to get any responses. The information below is sufficient. Beginning with the public availability of the Windows 7 Operating System and accompanying Software Development Kit (SDK), SAS functionality for Vista applications will only be available through the Windows SDK. The release support through email of the SASLIB package, and the saslib, will be discontinued. Information on how to download the platform SDK can be found on the Microsoft Download Center page for the “Windows SDK for Windows 7 and .Net Framework 3.5 SP1” at the following link: http://www.microsoft.com/downloads/details.aspx?FamilyID=c17ba869-9671-4330-a63e-1fd44e0e2505&displaylang=en. After you install this SDK you will find the redistributable sas.dll in the redist directory: \Program Files\Microsoft SDKs\Windows\v7.0\redist\x86\sas.dll \Program Files\Microsoft SDKs\Windows\v7.0\redist\amd64\sas.dll \Program Files\Microsoft SDKs\Windows\v7.0\redist\ia64\sas.dll A: I had bookmarked this URL, hope it helps. http://softltd.wordpress.com/simulate-ctrl-alt-del-in-windows-vista-7-and-server-2008/ A: PostMessage(HWND_BROADCAST, WM_HOTKEY, 0, MAKELONG(MOD_ALT | MOD_CONTROL, VK_DELETE)); You get PostMessage from the user32 dll edit: CodeProject article that has code for it edit: There is some discussion from VNC on why that won't work in Vista and how to set up UAC to allow it. A: You have to call the following code from a service process only: HDESK desktop = OpenDesktopW(L"Winlogon", 0, TRUE, DESKTOP_CREATEMENU | DESKTOP_CREATEWINDOW | DESKTOP_ENUMERATE | DESKTOP_HOOKCONTROL | DESKTOP_WRITEOBJECTS | DESKTOP_READOBJECTS | DESKTOP_SWITCHDESKTOP | GENERIC_WRITE); int result = SetThreadDesktop(desktop); if (result) { HMODULE sasdll = LoadLibraryA("sas.dll"); if (sasdll) { typedef void(__stdcall * SendSAS_t)(BOOL); SendSAS_t sendSAS = (SendSAS_t)GetProcAddress(sasdll, "SendSAS"); if (sendSAS) sendSAS(FALSE); } } CloseDesktop(desktop);
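For the C# side of the question, a hedged sketch of calling SendSAS from the redistributable sas.dll mentioned above (it assumes the caller is a service that has been allowed to generate the SAS, e.g. via the "software Secure Attention Sequence" group policy):

using System.Runtime.InteropServices;

static class SasInterop
{
    // SendSAS ships in sas.dll from the Windows 7 SDK redist directory
    [DllImport("sas.dll", SetLastError = true)]
    public static extern void SendSAS(bool asUser);
}

// From the service: simulate Ctrl+Alt+Del for the console session
// SasInterop.SendSAS(false);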
{ "language": "en", "url": "https://stackoverflow.com/questions/122367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How would you implement MVC in a Windows Forms application? I don't develop too many desktop / Windows Forms applications, but it had occurred to me that there may be some benefit to using the MVC (Model View Controller) pattern for Windows Forms .NET development. Has anyone implemented MVC in Windows Forms? If so, do you have any tips on the design? A: What I've done in the past is use something similar, Model-View-Presenter. [NOTE: This article used to be available on the web. To see it now, you'll need to download the CHM, and then view the file properties and click Unblock. Then you can open the CHM and find the article. Thanks a million, Microsoft! sigh] The form is the view, and I have an IView interface for it. All the processing happens in the presenter, which is just a class. The form creates a new presenter, and passes itself as the presenter's IView. This way for testing you can pass in a fake IView instead, and then send commands to it from the presenter and detect the results. If I were to use a full-fledged Model-View-Controller, I guess I'd do it this way: * *The form is the view. It sends commands to the model, raises events which the controller can subscribe to, and subscribes to events from the model. *The controller is a class that subscribes to the view's events and sends commands to the view and to the model. *The model raises events that the view subscribes to. This would fit with the classic MVC diagram. The biggest disadvantage is that with events, it can be hard to tell who's subscribing to what. The MVP pattern uses methods instead of events (at least the way I've implemented it). When the form/view raises an event (e.g. someButton.Click), the form simply calls a method on the presenter to run the logic for it. The view and model don't have any direct connection at all; they both have to go through the presenter. A: According to Microsoft, the UIP Application Block mentioned by @jasonbunting is "archived." Instead, look at the Smart Client Application Block or the even newer Smart Client Software Factory, which supports both WinForms and WPF SmartParts. A: Well, actually Windows Forms implements a "free-style" version of MVC, much like some movies implement some crappy "free-style" interpretation of some classic books (Romeo & Juliet come to mind). I'm not saying Windows Forms' implementation is bad, it's just... different. If you use Windows Forms and proper OOP techniques, and maybe an ORM like EntitySpaces for your database access, then you could say that: * *The ORM/OOP infrastructure is the Model *The Forms are the Views *The event handlers are the Controller Although having both View and Controller represented by the same object make separating code from representation way more difficult (there's no easy way to plug-in a "GTK+ view" in a class derived from Microsoft.Windows.Forms.Form). What you can do, if you are careful enough. Is keep your form code completely separate from your controller/model code by only writing GUI related stuff in the event handlers, and all other business logic in a separate class. In that case, if you ever wanted to use GTK+ to write another View layer, you would only need to rewrite the GUI code. A: Check into the User Interface Process (UIP) Application Block. I don't know much about it but looked at it a few years ago. There may be newer versions, check around. "The UIP Application Block is based on the model-view-controller (MVC) pattern." A: Windows Forms isn't designed from the ground up to use MVC. You have two options. 
First, you can roll your own implementation of MVC. Second, you can use an MVC framework designed for Windows Forms. The first is simple to start doing, but the further in you get, the more complex it is. I'd suggest looking for a good, preexisting, and well-tested MVC framework designed to work with Windows Forms. I believe this blog post is a decent starting point. For anybody starting out, I'd suggest skipping Windows Forms and developing against WPF, if you have the option. It's a much better framework for creating the UI. There are many MVC frameworks being developed for WPF, including this one and that one. A: Take a look at the MS Patterns and Practices Smart Client application block, which has some guidance and classes which walk you through implementing a model-view-presenter pattern in Windows Forms - take a look at the reference application included. For WPF this is being superseded by the Prism project. The software factories approach is a great way to learn best practices.
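A minimal sketch of the MVP shape described above (the interface and class names are illustrative):

// The view exposes only what the presenter needs.
public interface ICustomerView
{
    string CustomerName { get; set; }
    event EventHandler SaveClicked;
}

// The presenter holds the logic and is testable with a fake view.
public class CustomerPresenter
{
    private readonly ICustomerView view;

    public CustomerPresenter(ICustomerView view)
    {
        this.view = view;
        view.SaveClicked += (s, e) => Save();
    }

    private void Save()
    {
        // validate and push view.CustomerName into the model here
    }
}

// The form implements ICustomerView and creates its own presenter:
//   public partial class CustomerForm : Form, ICustomerView { ... }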
{ "language": "en", "url": "https://stackoverflow.com/questions/122388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "68" }
Q: Router to handle multiple public IP addresses I am presently running several websites and a mail server from my home network. I have a business DSL account with 8 public IP addresses (1 by itself, and 7 in a block). To handle routing/firewall/gateway, I am presently using RRAS, DNS, & DHCP from Windows 2003 running on an ancient (circa 2001) PC -- which I suspect is going to fail any time now. What I would like to do is replace that with a simple router. I have a consumer-model Linksys WiFi router, which I'm presently just using as an access point (don't have the model number handy, but it's one of their standard models). It seems to be able to handle all the NAT/firewall/DHCP tasks -- except for handling routing the multiple public addresses. (e.g., I need x.x.x.123, port 21 getting to one machine, but port 80 of x.x.x.123 & x.x.x.124 going to another, and x.x.x.123, port 5000 to still another, etc). So my questions are: *Can this be done with a standard Linksys router, which they just don't explain in the consumer manual? *Can this be done ... if I replace the firmware with a community/OS version (and if so, which one?) *If neither of the above, can someone recommend a professional router (preferably with wifi) that does do this, which is close to a consumer-level price point. *Alternately, is there a reliable OS/3rd-party replacement to RRAS which handles this (since RRAS is the part causing the most trouble) *Alternate-Alternately, can someone point to a VERY simple HOWTO for doing this (i.e., follow these steps and forget about it) by installing a LINUX system to do this (since I assume I can run Linux longer on the old machine)? A: This can't be done on a Linksys router with stock firmware. It can be done if you load a third-party firmware, but there's no GUI (afaik) to accomplish it, so you'll be hacking system shell scripts, which is pretty hairy. I would recommend getting a low-power or older PC and installing PFSense. PFSense is an open-source router appliance OS distribution with a very easy to use web front end. A: Install DD-wrt on your Linksys box. I believe this will have everything you need.
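For the Linux route, a hedged sketch of the kind of per-IP, per-port forwarding described (the internal 192.168.1.x addresses are placeholders, and x.x.x.123/124 follow the question's notation):

# on the Linux gateway: forward by public IP and port to internal hosts
iptables -t nat -A PREROUTING -d x.x.x.123 -p tcp --dport 21 \
         -j DNAT --to-destination 192.168.1.10:21
iptables -t nat -A PREROUTING -d x.x.x.123 -p tcp --dport 80 \
         -j DNAT --to-destination 192.168.1.11:80
iptables -t nat -A PREROUTING -d x.x.x.124 -p tcp --dport 80 \
         -j DNAT --to-destination 192.168.1.11:80
iptables -t nat -A PREROUTING -d x.x.x.123 -p tcp --dport 5000 \
         -j DNAT --to-destination 192.168.1.12:5000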
{ "language": "en", "url": "https://stackoverflow.com/questions/122394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: What are reserved filenames for various platforms? I'm not asking about general syntactic rules for file names. I mean gotchas that jump out of nowhere and bite you. For example, trying to name a file "COM<n>" on Windows? A: Full description of legal and illegal filenames on Windows: http://msdn.microsoft.com/en-us/library/aa365247.aspx A: A tricky Unix gotcha when you don't know: Files which start with - or -- are legal but a pain in the butt to work with, as many command line tools think you are providing options to them. Many of those tools have a special marker "--" to signal the end of the options: gzip -9vf -- -mydashedfilename A: As others have said, device names like COM1 are not possible as filenames under Windows because they are reserved devices. However, there is an escape method to create and access files with these reserved names, for example, this command will redirect the output of the ver command into a file called COM1: ver > "\\?\C:\Users\username\COM1" Now you will have a file called COM1 that 99% of programs won't be able to open, and will probably freeze if you try to access it. Here's the Microsoft article that explains how this "file namespace" works. Basically it tells Windows not to do any string processing on the text and to pass it straight through to the filesystem. This trick can also be used to work with paths longer than 260 characters. A: The boost::filesystem Portability Guide has a lot of good info. A: Well, for MSDOS/Windows, NUL, PRN, LPT<n> and CON. They even cause problems if used with an extension: "NUL.TXT" A: From: http://www.grouplogic.com/knowledge/index.cfm/fuseaction/view_Info/docID/111. The following characters are invalid as file or folder names on Windows using NTFS: / ? < > \ : * | " and any character you can type with the Ctrl key. In addition to the above illegal characters, the caret ^ is also not permitted under Windows Operating Systems using the FAT file system. Under Windows using the FAT file system, file and folder names may be up to 255 characters long. Under Windows using the NTFS file system, file and folder names may be up to 256 characters long. Under Windows, the length of a full path under both systems is 260 characters. In addition to these characters, the following conventions are also illegal: * *Placing a space at the end of the name *Placing a period at the end of the name The following file names are also reserved under Windows: aux, com1 ... com9, lpt1 ... lpt9, con, nul, prn A: Unless you're touching special directories, the only illegal names on Linux are '.' and '..'. Any other name is possible, although accessing some of them from the shell requires using escape sequences. 
{ "language": "en", "url": "https://stackoverflow.com/questions/122400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Is there an Application to Create Regular Expression Out of Text by Selecting Wanted Area? I hope this is programmer-related question. I'm in the hobby business of C# programming. For my own purposes I need to parse html files and the best idea is..regular expression. As many found out, it's quite time consuming to learn them and thus I'm quite interested if you know about some application that would be able to take input (piece of any code), understand what i need (by Me selecting a piece of the code I need to "cut out"), and give me the proper regular expression for it or more options. As I've heard, Regex is a little science of itself, so it might not be as easy as I'd imagine. A: Yes there is Roy Osherove wrote exactly what you're looking for - regulazy A: Not real answer to your question, as it has nothing to do with regex, but HtmlAgilityPack may help you with your parsing. A: You might also want to try txt2re : http://txt2re.com/, which tries to identify patterns in a user-supplied string and allows to build a regex out of them. A: I gotta agree with Sunny on this one: if you're parsing html, you're better off converting it to XML (using the HTML Agility pack it's trivially easy) and then you can using XPATH expressions rather than regular expressions, it's far better suited to the job.
{ "language": "en", "url": "https://stackoverflow.com/questions/122402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How to copy and paste code without rich text formatting? I used to often find myself coping a piece of code from a website/Word document etc only to discover that when doing Paste I would end up with the desired code plus some extra HTML tags/text, basically extra formatting information. To get rid of that extra ballast I would paste the text to the Notepad and then copy it again and then paste to the desired destination. Pretty recently I discovered a simple and free tool for Windows called PureText that allows me to cut the Notepad step. It basically adds an extra keyboard shortcut e.g WinKey + V that will do the Paste action without formatting; just pure text. I find it very handy. I was wondering what approach would you use yourselves? Are they any better/easier to use tools around? A: From websites, using Firefox, I use the CopyPlainText extension. A: Just to summarize the available options: Tools * *PureText - free tool for Windows *Use AutoHotkey and write your own macro as suggested by Dean Browsers * *To copy plain text from a browser: Copy As Plain Text or CopyPlainText (suggested by cori) - Firefox extensions *To paste without formatting to a browser (Firefox/Chrome at least): CTRL+⇧ Shift+V on Windows/Linux, see below for Mac OS X. Other * *Under Mac OS X, you can ⇧ Shift+⌥ Alt+⌘ Command+V to paste with the "current" format (Edit -> Paste and Match Style); or ⌘ Command+⇧ Shift+V to paste without formatting (by Kamafeather) *Paste to Notepad (or other text editor), and then copy from Notepad and paste again *For single-line text: paste to any non-rich text field (browser URL, textarea, search/find inputs, etc.) Please feel free to edit/add new items A: If you're pasting into Word you can use the Paste Special command. A: Just for reference, under Mac OS X, you can use ⌘ Command+⇧ Shift+V to paste without formatting or with the "current" format. Note: in some apps it's ⌘ Command+⇧ Shift+⌥ Alt+V (see "Edit" Menu → "Paste and Match Style") A: I'm a big fan of Autohotkey. I defined a 'paste plain text' macro that works in any application. It runs when I press Ctrl+Shift+V and pastes a plain version of whatever is on the clipboard. The nice thing about Autohotkey: you can code things to work the way you want them to work across all applications. ^+v:: ; Convert any copied files, HTML, or other formatted text to plain text Clipboard = %Clipboard% ; Paste by pressing Ctrl+V SendInput, ^v return A: I have Far.exe as the first item in the start menu. Richtext in the clipboard -> ctrl-escape,arrdown,enter,shift-f4,$,enter shift-insert,ctrl-insert,alt-backspace, f10,enter -> plaintext in the clipboard Pros: no mouse, just blind typing, ends exactly where i was before Cons: ANSI encoding - international symbols are lost Luckily, I do not have to do that too often :) A: Nice find with your PureText. I had build, before I change keyboard, a key that was running a macro that was copying-pasting-copying in notepad for this task! I'll give a try to your software since I do not have any macro key now :( A: I wrote an unpublished java app to monitor the clipboard, replacing items that offered text along with other richer formats, with items only offering the plain text format. A: Look for a little clipboard icon that pops up at the end of the material you pasted. Click on this and choose "keep text only". A: I use OpenOffice.org and that offers a paste special option, where you can omit the formatting altogether. 
If you are not bound to MS Word, it's IMO worth a try and it's free :-) A: Whenever these plugins and options aren't available I just use my good ol friend notepad. Paste content into notepad where it won't accept the extra formatting and then copy it right back out. Sort of hacky but oh well. It works! A: I usually work with Notepad2, all the text I copy from the web are pasted there and then reused, that allows me to clean it (from format and make modifications). You can download Notepad2 here A: If you are using MS Word then try ALT+E, S, U, Enter (Uses the Paste Special)
{ "language": "en", "url": "https://stackoverflow.com/questions/122404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "76" }
Q: What's the nearest substitute for a function pointer in Java? I have a method that's about ten lines of code. I want to create more methods that do exactly the same thing, except for a small calculation that's going to change one line of code. This is a perfect application for passing in a function pointer to replace that one line, but Java doesn't have function pointers. What's my best alternative? A: @sblundy's answer is great, but anonymous inner classes have two small flaws, the primary being that they tend not to be reusable and the secondary is a bulky syntax. The nice thing is that his pattern expands into full classes without any change in the main class (the one performing the calculations). When you instantiate a new class you can pass parameters into that class which can act as constants in your equation--so if one of your inner classes look like this: f(x,y)=x*y but sometimes you need one that is: f(x,y)=x*y*2 and maybe a third that is: f(x,y)=x*y/2 rather than making two anonymous inner classes or adding a "passthrough" parameter, you can make a single ACTUAL class that you instantiate as: InnerFunc f=new InnerFunc(1.0);// for the first calculateUsing(f); f=new InnerFunc(2.0);// for the second calculateUsing(f); f=new InnerFunc(0.5);// for the third calculateUsing(f); It would simply store the constant in the class and use it in the method specified in the interface. In fact, if KNOW that your function won't be stored/reused, you could do this: InnerFunc f=new InnerFunc(1.0);// for the first calculateUsing(f); f.setConstant(2.0); calculateUsing(f); f.setConstant(0.5); calculateUsing(f); But immutable classes are safer--I can't come up with a justification to make a class like this mutable. I really only post this because I cringe whenever I hear anonymous inner class--I've seen a lot of redundant code that was "Required" because the first thing the programmer did was go anonymous when he should have used an actual class and never rethought his decision. A: The Google Guava libraries, which are becoming very popular, have a generic Function and Predicate object that they have worked into many parts of their API. A: One of the things I really miss when programming in Java is function callbacks. One situation where the need for these kept presenting itself was in recursively processing hierarchies where you want to perform some specific action for each item. Like walking a directory tree, or processing a data structure. The minimalist inside me hates having to define an interface and then an implementation for each specific case. One day I found myself wondering why not? We have method pointers - the Method object. With optimizing JIT compilers, reflective invocation really doesn't carry a huge performance penalty anymore. And besides next to, say, copying a file from one location to another, the cost of the reflected method invocation pales into insignificance. As I thought more about it, I realized that a callback in the OOP paradigm requires binding an object and a method together - enter the Callback object. Check out my reflection based solution for Callbacks in Java. Free for any use. A: Sounds like a strategy pattern to me. Check out fluffycat.com Java patterns. 
A: To do the same thing without interfaces for an array of functions: class NameFuncPair { public String name; // name each func void f(String x) {} // stub gets overridden public NameFuncPair(String myName) { this.name = myName; } } public class ArrayOfFunctions { public static void main(String[] args) { final A a = new A(); final B b = new B(); NameFuncPair[] fArray = new NameFuncPair[] { new NameFuncPair("A") { @Override void f(String x) { a.g(x); } }, new NameFuncPair("B") { @Override void f(String x) { b.h(x); } }, }; // Go through the whole func list and run the func named "B" for (NameFuncPair fInstance : fArray) { if (fInstance.name.equals("B")) { fInstance.f(fInstance.name + "(some args)"); } } } } class A { void g(String args) { System.out.println(args); } } class B { void h(String args) { System.out.println(args); } } A: oK, this thread is already old enough, so very probably my answer is not helpful for the question. But since this thread helped me to find my solution, I'll put it out here anyway. I needed to use a variable static method with known input and known output (both double). So then, knowing the method package and name, I could work as follows: java.lang.reflect.Method Function = Class.forName(String classPath).getMethod(String method, Class[] params); for a function that accepts one double as a parameter. So, in my concrete situation I initialized it with java.lang.reflect.Method Function = Class.forName("be.qan.NN.ActivationFunctions").getMethod("sigmoid", double.class); and invoked it later in a more complex situation with return (java.lang.Double)this.Function.invoke(null, args); java.lang.Object[] args = new java.lang.Object[] {activity}; someOtherFunction() + 234 + (java.lang.Double)Function.invoke(null, args); where activity is an arbitrary double value. I am thinking of maybe doing this a bit more abstract and generalizing it, as SoftwareMonkey has done, but currently I am happy enough with the way it is. Three lines of code, no classes and interfaces necessary, that's not too bad. A: For each "function pointer", I'd create a small functor class that implements your calculation. Define an interface that all the classes will implement, and pass instances of those objects into your larger function. This is a combination of the "command pattern", and "strategy pattern". @sblundy's example is good. A: Check out lambdaj http://code.google.com/p/lambdaj/ and in particular its new closure feature http://code.google.com/p/lambdaj/wiki/Closures and you will find a very readable way to define closure or function pointer without creating meaningless interface or use ugly inner classes A: Wow, why not just create a Delegate class which is not all that hard given that I already did for java and use it to pass in parameter where T is return type. I am sorry but as a C++/C# programmer in general just learning java, I need function pointers because they are very handy. If you are familiar with any class which deals with Method Information you can do it. In java libraries that would be java.lang.reflect.method. If you always use an interface, you always have to implement it. In eventhandling there really isn't a better way around registering/unregistering from the list of handlers but for delegates where you need to pass in functions and not the value type, making a delegate class to handle it for outclasses an interface. A: None of the Java 8 answers have given a full, cohesive example, so here it comes. 
Declare the method that accepts the "function pointer" as follows: void doCalculation(Function<Integer, String> calculation, int parameter) { final String result = calculation.apply(parameter); } Call it by providing the function with a lambda expression: doCalculation((i) -> i.toString(), 2); A: When there is a predefined number of different calculations you can do in that one line, using an enum is a quick, yet clear way to implement a strategy pattern. public enum Operation { PLUS { public double calc(double a, double b) { return a + b; } }, TIMES { public double calc(double a, double b) { return a * b; } } ... public abstract double calc(double a, double b); } Obviously, the strategy method declaration, as well as exactly one instance of each implementation, are all defined in a single class/file. A: Anonymous inner class Say you want to have a function passed in with a String param that returns an int. First you have to define an interface with the function as its only member, if you can't reuse an existing one. interface StringFunction { int func(String param); } A method that takes the pointer would just accept a StringFunction instance like so: public void takingMethod(StringFunction sf) { int i = sf.func("my string"); // do whatever ... } And would be called like so: ref.takingMethod(new StringFunction() { public int func(String param) { // body } }); EDIT: In Java 8, you could call it with a lambda expression: ref.takingMethod(param -> bodyExpression); A: You need to create an interface that provides the function(s) that you want to pass around. eg: /** * A simple interface to wrap up a function of one argument. * * @author rcreswick * */ public interface Function1<S, T> { /** * Evaluates this function on its arguments. * * @param a The first argument. * @return The result. */ public S eval(T a); } Then, when you need to pass a function, you can implement that interface: List<Integer> result = CollectionUtilities.map(list, new Function1<Integer, Integer>() { @Override public Integer eval(Integer a) { return a * a; } }); Finally, a related zipWith function uses a passed-in Function2 (the two-argument analogue of Function1) as follows: public static <K,R,S,T> Map<K, R> zipWith(Function2<R,S,T> fn, Map<K, S> m1, Map<K, T> m2, Map<K, R> results){ Set<K> keySet = new HashSet<K>(); keySet.addAll(m1.keySet()); keySet.addAll(m2.keySet()); results.clear(); for (K key : keySet) { results.put(key, fn.eval(m1.get(key), m2.get(key))); } return results; } You can often use Runnable instead of your own interface if you don't need to pass in parameters, or you can use various other techniques to make the param count less "fixed" but it's usually a trade-off with type safety. (Or you can override the constructor for your function object to pass in the params that way.. there are lots of approaches, and some work better in certain circumstances.) A: Method references using the :: operator You can use method references in method arguments where the method accepts a functional interface. A functional interface is any interface that contains only one abstract method. (A functional interface may contain one or more default methods or static methods.) IntBinaryOperator is a functional interface. Its abstract method, applyAsInt, accepts two ints as its parameters and returns an int. Math.max also accepts two ints and returns an int. In this example, A.method(Math::max); makes parameter.applyAsInt send its two input values to Math.max and return the result of that Math.max.
import java.util.function.IntBinaryOperator; class A { static void method(IntBinaryOperator parameter) { int i = parameter.applyAsInt(7315, 89163); System.out.println(i); } } import java.lang.Math; class B { public static void main(String[] args) { A.method(Math::max); } } In general, you can use: method1(Class1::method2); instead of: method1((arg1, arg2) -> Class1.method2(arg1, arg2)); which is short for: method1(new Interface1() { int method1(int arg1, int arg2) { return Class1.method2(arg1, arg2); } }); For more information, see :: (double colon) operator in Java 8 and Java Language Specification §15.13. A: You can also do this (which in some RARE occasions makes sense). The issue (and it is a big issue) is that you lose all the typesafety of using a class/interface and you have to deal with the case where the method does not exist. It does have the "benefit" that you can ignore access restrictions and call private methods (not shown in the example, but you can call methods that the compiler would normally not let you call). Again, it is a rare case that this makes sense, but on those occasions it is a nice tool to have. import java.lang.reflect.InvocationTargetException; import java.lang.reflect.Method; class Main { public static void main(final String[] argv) throws NoSuchMethodException, IllegalAccessException, IllegalArgumentException, InvocationTargetException { final String methodName; final Method method; final Main main; main = new Main(); if(argv.length == 0) { methodName = "foo"; } else { methodName = "bar"; } method = Main.class.getDeclaredMethod(methodName, int.class); main.car(method, 42); } private void foo(final int x) { System.out.println("foo: " + x); } private void bar(final int x) { System.out.println("bar: " + x); } private void car(final Method method, final int val) throws IllegalAccessException, IllegalArgumentException, InvocationTargetException { method.invoke(this, val); } } A: If you have just one line which is different you could add a parameter such as a flag and an if(flag) statement which calls one line or the other. A: You may also be interested to hear about work going on for Java 7 involving closures: What’s the current state of closures in Java? http://gafter.blogspot.com/2006/08/closures-for-java.html http://tech.puredanger.com/java7/#closures A: New Java 8 Functional Interfaces and Method References using the :: operator. Java 8 is able to store method references (such as MyClass::new) in variables whose type is a @FunctionalInterface. There is no need for the method names to match; only the method signature is required to match. Example: @FunctionalInterface interface CallbackHandler{ public void onClick(); } public class MyClass{ public void doClick1(){System.out.println("doClick1");} public void doClick2(){System.out.println("doClick2");} public CallbackHandler mClickListener = this::doClick1; public static void main(String[] args) { MyClass myObjectInstance = new MyClass(); CallbackHandler pointer = myObjectInstance::doClick1; Runnable pointer2 = myObjectInstance::doClick2; pointer.onClick(); pointer2.run(); } } So, what do we have here? * *Functional Interface - this is an interface, annotated or not with @FunctionalInterface, which contains only one method declaration. *Method References - this is just special syntax that looks like objectInstance::methodName, nothing more, nothing less. *Usage example - just an assignment operator and then an interface method call. YOU SHOULD USE FUNCTIONAL INTERFACES FOR LISTENERS ONLY AND ONLY FOR THAT!
Because all other such function pointers are really bad for code readability and comprehensibility. However, direct method references sometimes come in handy, with forEach for example. There are several predefined Functional Interfaces: Runnable -> void run( ); Supplier<T> -> T get( ); Consumer<T> -> void accept(T); Predicate<T> -> boolean test(T); UnaryOperator<T> -> T apply(T); BinaryOperator<T> -> T apply(T, T); Function<T,R> -> R apply(T); BiFunction<T,U,R> -> R apply(T, U); //... and some more of it ... Callable<V> -> V call() throws Exception; Readable -> int read(CharBuffer) throws IOException; AutoCloseable -> void close() throws Exception; Iterable<T> -> Iterator<T> iterator(); Comparable<T> -> int compareTo(T); Comparator<T> -> int compare(T,T); For earlier Java versions you should try the Guava Libraries, which have similar functionality and syntax, as Adrian Petrescu has mentioned above. For additional research look at Java 8 Cheatsheet and thanks to The Guy with The Hat for the Java Language Specification §15.13 link. A: If anyone is struggling to pass a function that takes one set of parameters to define its behavior but another set of parameters on which to execute, like Scheme's: (define (function scalar1 scalar2) (lambda (x) (* x scalar1 scalar2))) see Pass Function with Parameter-Defined Behavior in Java A: Since Java 8, you can use lambdas, which also have libraries in the official SE 8 API. Usage: You need to use an interface with only one abstract method. Make an instance of it (you may want to use one Java SE 8 already provides) like this: Function<InputType, OutputType> functionname = (inputvariablename) -> { ... return outputinstance; }; For more information check out the documentation: https://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html A: Prior to Java 8, the nearest substitute for function-pointer-like functionality was an anonymous class. For example: Collections.sort(list, new Comparator<CustomClass>(){ public int compare(CustomClass a, CustomClass b) { // Logic to compare objects of class CustomClass which returns int as per contract. } }); But now in Java 8 we have a very neat alternative known as the lambda expression, which can be used as: list.sort((a, b) -> a.isBiggerThan(b)); where isBiggerThan is a method in CustomClass that returns an int comparison result, as the Comparator contract requires. We can also use method references here: list.sort(CustomClass::isBiggerThan);
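Pulling that last answer together into one compilable sketch (CustomClass and isBiggerThan are the names used above; the field and the comparison logic are assumptions added for illustration):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class CustomClass {
    final int size;
    CustomClass(int size) { this.size = size; }
    // Returns an int so the method reference below satisfies Comparator.
    int isBiggerThan(CustomClass other) { return Integer.compare(this.size, other.size); }
}

public class SortDemo {
    public static void main(String[] args) {
        List<CustomClass> list = new ArrayList<>();
        list.add(new CustomClass(3));
        list.add(new CustomClass(1));
        // Pre-Java 8: anonymous inner class.
        list.sort(new Comparator<CustomClass>() {
            public int compare(CustomClass a, CustomClass b) { return a.isBiggerThan(b); }
        });
        // Java 8: lambda, then an equivalent unbound method reference
        // (the receiver becomes the first Comparator argument).
        list.sort((a, b) -> a.isBiggerThan(b));
        list.sort(CustomClass::isBiggerThan);
    }
}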
{ "language": "en", "url": "https://stackoverflow.com/questions/122407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "307" }
Q: Change datatype when importing Excel file into Access Is there any way to change the default datatype when importing an Excel file into Access? (I'm using Access 2003, by the way). I know that I sometimes have the freedom to assign any datatype to each column that is being imported, but that could only be when I'm importing non-Excel files. EDIT: To be clear, I understand that there is a step in the import process where you are allowed to change the datatype of the imported column. In fact, that's what I'm asking about. For some reason - maybe it's always Excel files, maybe there's something else - I am sometimes not allowed to change the datatype: the dropdown box is grayed out and I just have to live with whatever datatype Access assumes is correct. For example, I just tried importing a large-ish Excel file (12000+ rows, ~200 columns) in Access where column #105 (or something similar) was filled with mostly numbers (codes: 1=foo, 2=bar, etc), though there are a handful of alpha codes in there too (A=boo, B=far, etc). Access assumed it was a Number datatype (even after I changed the Format value in the Excel file itself) and so gave me errors on those alpha codes. If I had been allowed to change the datatype on import, it would have saved me some trouble. Am I asking for something that Access just won't do, or am I missing something? Thanks. EDIT: There are two answers below that give useful advice. Saving the Excel file as a CSV and then importing that works well and is straightforward like Chris OC says. The advice for saving an import specification is very helpful too. However, I chose the registry setting answer by DK as the "Accepted Answer". I liked it as an answer because it's a one-time-only step that can be used to solve my major problem (having Access incorrectly assign a datatype). In short, this solution doesn't allow me to change the datatype myself, but it makes Access accurately guess the datatype so that there are fewer issues. A: There are a couple of ways to do this. The most straightforward way is to convert the .xls file to a .csv file in Excel, so you can import into Access using the Import Text Wizard, which allows you to choose the data types of every column during the import. The other benefit to doing this is that the import of a csv (or text) file is so much faster than the import of an xls file. If you're going to do this import more than once, save the import setup settings as an import specification. (When in the Import Text Wizard, click on the "Advanced..." button on the bottom left, then click on "Save As" and give a specification name to save the changes you just made.) A: This may be caused by the Excel Jet driver's default settings. Check out the following registry key and change its value from the default 8 to 0, meaning "guess column data type based on all values, not just the first 8 rows." [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel] "TypeGuessRows"=dword:00000000 Please, tell if this works. A: Open your Excel file. On the Home tab, change the format from General to Text. Then import into Access. A: Access can do what you need, but there is no straightforward way for that. You'd have to manage some recordsets, one being your Excel data, the other one being your final Access table. Once both recordsets are open, you can transfer data from one recordset to the other by browsing your Excel data and adding it to your Access table. At this stage, it will be possible to change datatype as requested.
A: When importing from CSV files you can also have a look at schema.ini; you will find that with this you can control every aspect of the import process. A: Access will let you specify the data type during the import process. The problem is with the "Append" process on subsequent imports: it will not ask about the data type again, and it forgets that you changed it. I think it is a bug in MS Access. A: This is an old post but the issue persists! I agree with Deepak. To continue that thought: Access determines the field types when linking to or importing Excel files based on the field types in Excel. If they are all default, it looks at the first X rows of data. A few ways to fix this: * *Open the Excel file and add about 6 rows (under the field headers if any) that emulate the type you want. If you prefer all text, enter 'abcdef' in each cell of those first six rows. *Open the Excel file, highlight all cells, right click, and change format to 'Text' or whatever format you like. Save, then link/import into Access. *Use a macro (VBA script) to do step 2 for you each time: Public Function fcn_ChangeExcelFormat() On Error GoTo ErrorExit Dim strExcelFile As String Dim XL_App As Excel.Application Dim XL_WB As Excel.Workbook Dim XL_WS As Excel.Worksheet Dim strMsg As String strExcelFile = "C:\My Files\MyExcelFile.xlsx" Set XL_App = New Excel.Application Set XL_WB = XL_App.Workbooks.Open(strExcelFile, , False) Set XL_WS = XL_WB.Sheets(1) ' 1 can be changed to "Your Worksheet Name" With XL_WS .Cells.NumberFormat = "@" 'Equiv to Right Click...Format Cells...Text End With XL_WB.Close True XL_App.Quit NormalExit: Set XL_WB = Nothing Set XL_App = Nothing Exit Function ErrorExit: strMsg = "There was an error updating the Excel file! " & vbCr & vbCr & _ "Error " & Err.Number & ": " & Err.Description MsgBox strMsg, vbExclamation, "Error" Resume NormalExit End Function A: Access will do this. In your import process you can define what the data type of each column is.
{ "language": "en", "url": "https://stackoverflow.com/questions/122422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Build sequencing when using distributed version control Right now, we are using Perforce for version control. It has the handy feature of a strictly increasing change number that we can use to refer to builds, eg "you'll get the bugfix if your build is at least 44902". I'd like to switch over to using a distributed system (probably git) to make it easier to branch and to work from home. (Both of which are perfectly possible with Perforce, but the git workflow has some advantages.) So although "tributary development" would be distributed and not refer to a common revision sequencing, we'd still maintain a master git repo that all changes would need to feed into before a build was created. What's the best way to preserve strictly increasing build ids? The most straightforward way I can think of is to have some sort of post-commit hook that fires off whenever the master repo gets updated, and it registers (the hash of) the new tree object (or commit object? I'm new to git) with a centralized database that hands out ids. (I say "database", but I'd probably do it with git tags, and just look for the next available tag number or something. So the "database" would really be .git/refs/tags/build-id/.) This is workable, but I'm wondering if there is an easier, or already-implemented, or standard/"best practice" way of accomplishing this. A: git rev-list BRANCHNAME --count This is much less resource intensive than git log --pretty=oneline | wc -l A: git tag may be enough for what you need. Pick a tag format that everyone will agree not to use otherwise. Note: when you tag locally, a git push will not update the tags on the server. Use git push --tags for that. A: A monotonically increasing number corresponding to the current commit can be generated with git log --pretty=oneline | wc -l which returns a single number. You can also append the current sha1 to that number, to add uniqueness. This approach is better than git describe, because it does not require you to add any tags, and it automatically handles merges. It could have problems with rebasing, but rebasing is a "dangerous" operation anyway. A: I second the suggestion of using git describe. Provided that you have a sane versioning policy, and you don't do any crazy stuff with your repository, git describe will always be monotonic (at least as monotonic as you can be, when your revision history is a DAG instead of a tree) and unique. A little demonstration: git init git commit --allow-empty -m'Commit One.' git tag -a -m'Tag One.' 1.2.3 git describe # => 1.2.3 git commit --allow-empty -m'Commit Two.' git describe # => 1.2.3-1-gaac161d git commit --allow-empty -m'Commit Three.' git describe # => 1.2.3-2-g462715d git tag -a -m'Tag Two.' 2.0.0 git describe # => 2.0.0 The output of git describe consists of the following components: * *The newest tag reachable from the commit you are asking to describe *The number of commits between the commit and the tag (if non-zero) *The (abbreviated) id of the commit (if #2 is non-zero) #2 is what makes the output monotonic, #3 is what makes it unique. #2 and #3 are omitted when the commit is the tag, making git describe also suitable for production releases. A: You should investigate git describe. It gives a unique string that describes the current branch (or any passed commit id) in terms of the latest annotated tag, the number of commits since that tag and an abbreviated commit id of the head of the branch. Presumably you have a single branch that you perform controlled build releases off.
In this case I would tag an early commit with a known tag format and then use git describe with the --match option to describe the current HEAD relative to the known tag. You can then use the result of git describe as is, or, if you really want just a single number, you can use a regex to chop the number out of the tag. Assuming that you never rewind the branch, the number of following commits will always identify a unique point in the branch's history. e.g. (using bash or similar) # make an annotated tag to an early build in the repository: git tag -a build-origin "$some_old_commitid" # describe the current HEAD against this tag and pull out a build number expr "$(git describe --match build-origin)" : 'build-origin-\([0-9]*\)-g' A: I'd use "Labels". Create a label whenever you have a successful (or even unsuccessful) build, and you'll be able to identify that build forever. It's not quite the same, but it does provide those build numbers, while still providing the benefits of distributed development. A: As you probably know, git computes a hash (a number) that uniquely identifies a node of the history. Using these, although they are not strictly increasing, seems like it would be good enough. (Even better, they always correspond to the source, so if you have the hash, you have the same code.) They're big numbers, but mostly you can get by with 6 or so of the leading digits. For example, That bug was fixed at 064f2ea... A: With Mercurial you can use the following command: # get the parent's id, the local revision number, the branch, and the tags [yjost@myhost:~/my-repo]$ hg id -nibt 03b6399bc32b+ 23716+ default tip See hg identify
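For JVM-based build scripts, the same scheme (a monotonic commit count plus an abbreviated hash for uniqueness) can be computed without shelling out to git, for example with the JGit library. JGit is an assumed dependency here; none of the answers above mention it:

import java.io.File;
import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.lib.ObjectId;
import org.eclipse.jgit.revwalk.RevCommit;

public class BuildId {
    public static void main(String[] args) throws Exception {
        try (Git git = Git.open(new File("."))) {
            // Count the commits reachable from HEAD
            // (the equivalent of git log --pretty=oneline | wc -l).
            int count = 0;
            for (RevCommit commit : git.log().call()) {
                count++;
            }
            // Append a short hash so the id stays unique even after rebases.
            ObjectId head = git.getRepository().resolve("HEAD");
            System.out.println(count + "-" + head.abbreviate(7).name());
        }
    }
}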
{ "language": "en", "url": "https://stackoverflow.com/questions/122424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: How does theming for ziya charts work? I'm implementing charts using The Ziya Charts Gem. Unfortunately, the documentation isn't really helpful or I haven't had enough coffee to figure out theming. I know I can set a theme using chart.add(:theme, 'whatever') Problem: I haven't found any predefined themes, nor have I found a reference to the required format. A: If you install the ZiYa plug-in into your Rails application there should be a themes directory at the location configured in the initializer (shown in another answer below). Just copy one of the existing themes, change its name to whatever you want, and then modify it however you like. Another option for nice Flash charts is Open Flash Chart. I moved from Ziya/SWF Charts to Open Flash Chart for the Flash charts in a Rails app I was working on. There is also a Rails plug-in for Open Flash Chart. Besides the fact that it is easier to work with, Open Flash Chart is open source, so if you can hack Flash you can customize it. A: As I understand it, the themes are used by initializing the theme directory in your ziya.rb file like so: Ziya.initialize(:themes_dir => File.join( File.dirname(__FILE__), %w[.. .. public charts themes]) ) And you'll need to set up the proper directory, in this case public/charts/themes. It doesn't come with any themes in there to start with, as I recall. Are you having problems past this? A: To partly answer my own question, there are some themes in the website sources which can be checked out with svn co svn://rubyforge.org/var/svn/liquidrail/samples/charting (then go to /public/charts/themes/)
{ "language": "en", "url": "https://stackoverflow.com/questions/122445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Specifying a VC++ Redistributable version for ClickOnce prerequisite My ClickOnce application uses a third party tool that requires the Visual C++ 2005 redistributable. The third party tool will not work if only the VC++ 2008 redistributable is installed. However, in Visual Studio 2008, the ClickOnce prerequisites do not allow a version to be specified for the VC++ redistributable; it will add a VC++ 2008 prerequisite, which makes sense most of the time. However, in this situation, an earlier version is required. ClickOnce is required, so merge modules are out of the question. Any ideas of how to specify the version? A: If you can find a machine with VS 2005 installed, the solution shouldn't be too hard. You have the ability to customize what appears in the Prerequisites dialog on the Publish tab of your project. * *On a machine with VS 2005 installed, go to \Program Files\Microsoft Visual Studio 8\SDK\v2.0\BootStrapper\Packages and copy the vsredist_x86 folder to the machine you are publishing from. *Rename the folder, call it vsredist_x86_2005 or something similar. *Inside the folder, edit the \en\package.xml file. Change the <String Name="DisplayName"> tag to something that makes sense (Visual C++ 2005 Runtime Libraries (x86)) to differentiate it from the existing 2008 package. *Copy the folder to C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bootstrapper\Packages. *Restart Visual Studio if it is open. Now, when you open the Prerequisites dialog you should see a new entry for the 2005 package. I didn't completely test this solution so I may have missed a few details but hopefully this gets you started. A: I believe you can open the manifest file for your app and modify the versions of the redists your app should be linking against. The listings in the manifest should match what you have in your C:\Windows\WinSxS dirs. There is a CodeProject page that gives a good description of using different redistributables. A: I just installed Visual Studio 2005. Here is an original bootstrapper: C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\BootStrapper\Packages\vcredist_x86\ \en\package.xml <?xml version="1.0" encoding="utf-8" ?> <Package xmlns="http://schemas.microsoft.com/developer/2004/01/bootstrapper" Name="DisplayName" Culture="Culture" > <!-- Defines a localizable string table for error messages--> <Strings> <String Name="DisplayName">Visual C++ Runtime Libraries (x86)</String> <String Name="Culture">en</String> <String Name="AdminRequired">You do not have the permissions required to install Visual C++ Runtime Libraries (x86). Please contact your administrator.</String> <String Name="InvalidPlatformWin9x">Installation of Visual C++ Runtime Libraries (x86) is not supported on Windows 95. Contact your application vendor.</String> <String Name="InvalidPlatformWinNT">Installation of Visual C++ Runtime Libraries (x86) is not supported on Windows NT 4.0. 
Contact your application vendor.</String> <String Name="GeneralFailure">A failure occurred attempting to install Visual C++ Runtime Libraries (x86).</String> </Strings> </Package> \product.xml <?xml version="1.0" encoding="utf-8" ?> <Product xmlns="http://schemas.microsoft.com/developer/2004/01/bootstrapper" ProductCode="Microsoft.Visual.C++.8.0.x86" > <!-- Defines list of files to be copied on build --> <PackageFiles> <PackageFile Name="vcredist_x86.exe"/> </PackageFiles> <InstallChecks> <MsiProductCheck Property="VCRedistInstalled" Product="{A49F249F-0C91-497F-86DF-B2585E8E76B7}"/> </InstallChecks> <!-- Defines how to invoke the setup for the Visual C++ 8.0 redist --> <!-- TODO: Needs EstrimatedTempSpace, LogFile, and an update of EstimatedDiskSpace --> <Commands Reboot="Defer"> <Command PackageFile="vcredist_x86.exe" Arguments=' /q:a ' > <!-- These checks determine whether the package is to be installed --> <InstallConditions> <BypassIf Property="VCRedistInstalled" Compare="ValueGreaterThanOrEqualTo" Value="3"/> <!-- Block install if user does not have admin privileges --> <FailIf Property="AdminUser" Compare="ValueEqualTo" Value="false" String="AdminRequired"/> <!-- Block install on Win95 --> <FailIf Property="Version9X" Compare="VersionLessThan" Value="4.10" String="InvalidPlatformWin9x"/> <!-- Block install on NT 4 or less --> <FailIf Property="VersionNT" Compare="VersionLessThan" Value="5.00" String="InvalidPlatformWinNT"/> </InstallConditions> <ExitCodes> <ExitCode Value="0" Result="Success"/> <ExitCode Value="3010" Result="SuccessReboot"/> <DefaultExitCode Result="Fail" FormatMessageFromSystem="true" String="GeneralFailure" /> </ExitCodes> </Command> </Commands> </Product> \vcredist_x86.exe SHA1: 95040f80b0d203e1abaec4e06e0ec0e01c507d03
{ "language": "en", "url": "https://stackoverflow.com/questions/122451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What is the meaning and implication of a one-element-too-large array/queue? Just wondering what it is. Edit: I know it's not a type of array but just a feature. So what is meant by "one-element-too-large"? A: Misunderstanding of language-specific indexing conventions? A: Intent to use an "end of data" marker? A: Maybe the classic fence-post/fence-panel counting problem?
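None of the answers above settles it, but one common reading, offered here only as a possibility rather than a confirmed interpretation of the question, is the circular-queue trick of allocating one slot more than the capacity so that "full" and "empty" can be distinguished without a separate counter. A minimal sketch:

// Buffer sized capacity + 1 ("one element too large"):
// head == tail              => empty
// (tail + 1) % len == head  => full
class RingQueue {
    private final int[] buf;
    private int head = 0, tail = 0;
    RingQueue(int capacity) { buf = new int[capacity + 1]; }
    boolean isEmpty() { return head == tail; }
    boolean isFull()  { return (tail + 1) % buf.length == head; }
    boolean offer(int v) {
        if (isFull()) return false;
        buf[tail] = v;
        tail = (tail + 1) % buf.length;
        return true;
    }
    Integer poll() {
        if (isEmpty()) return null;
        int v = buf[head];
        head = (head + 1) % buf.length;
        return v;
    }
}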
{ "language": "en", "url": "https://stackoverflow.com/questions/122453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Handling file paths cross platform Do any C++ GNU standalone classes exist which handle paths cross platform? My applications build on Windows and Linux. Our configuration files refer to another file in a separate directory. I'd like to be able to read the path for the other configuration file into a class which would work on both Linux and Windows. Which class would offer the smallest footprint to translate paths to use on either system? Thanks A: The Filesystem library in Boost will probably help you. A: Unless you're using absolute paths, there's no need to translate at all - the Windows API accepts forward slashes as path separators, so if you use relative paths with forward slash separators, you'll be golden. You should really avoid absolute paths if at all possible. A: try boost::filesystem A: There are many ways; IMHO the correct answer is to redesign your program to avoid manipulating paths. I posted an answer here: https://stackoverflow.com/a/40980510/2345997 which is relevant. Ways: * *Add a command line option which allows a user to specify the path in question instead of reading it from a config file. *Add a command line option so that the user can specify a base path. Paths in the config file will be interpreted as located under this base path. *Split your config file into three. One file will have cross-platform configuration, another file will have Windows-only configuration and a final file will have Linux-only configuration. Then the user can specify the correct path for both Windows and Linux. On Windows your program will read the cross-platform config file and the Windows-only config file. On Linux it will read the cross-platform file and the Linux-only config file. *Add preprocessing to your config file parsing. This will allow you to have one config file where the user can make your program ignore some of the lines in the file depending on which OS the program is running on. Therefore, the user will be able to specify the path to the file twice. Once for Linux, and once for Windows. *Change the design so that the files are always located in the same directory as your executable - then the user only specifies file names in the config file rather than paths to files. *Use a simple function that switches "/" to "\". Then document to the user that they must specify paths as Linux paths and this transformation will be applied for Windows. *Create your own path mini-language for this and document it to the user. E.g.: "/" - specifies a directory separator, {root} - expands to the root of the filesystem, {cwd} - expands to the current directory, {app} - expands to the path to your application etc... Then the user can specify file paths like: {root}/myfiles/bob.txt on both platforms. *Some paths will work on both platforms. E.g.: relative paths like ../my files/bill.txt. Restrict your application to only work with these paths. Document this limitation and how your application handles paths to the user.
{ "language": "en", "url": "https://stackoverflow.com/questions/122455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How to retrieve namespaces in XML files using Xpath I have an XML file that starts like this: <Elements name="Entities" xmlns="XS-GenerationToolElements"> I'll have to open a lot of these files. Each of these has a different namespace, but will only have one namespace at a time (I'll never find two namespaces defined in one XML file). Using XPath I'd like to have an automatic way to add the given namespace to the namespace manager. So far, I could only get the namespace by parsing the XML file, but I have an XPathNavigator instance and it should have a nice and clean way to get the namespaces, right? -- OR -- Given that I only have one namespace, somehow make XPath use the only one that is present in the XML, thus avoiding cluttering the code by always appending the namespace. A: There are a few techniques that you might try; which you use will depend on exactly what information you need to get out of the document, how rigorous you want to be, and how conformant the XPath implementation you're using is. One way to get the namespace URI associated with a particular prefix is using the namespace:: axis. This will give you a namespace node whose name is the prefix and whose value is the namespace URI. For example, you could get the default namespace URI on the document element using the path: /*/namespace::*[name()=''] You might be able to use that to set up the namespace associations for your XPathNavigator. Be warned, though, that the namespace:: axis is one of those corners of XPath 1.0 that isn't always implemented. A second way of getting that namespace URI is to use the namespace-uri() function on the document element (which you've said will always be in that namespace). The expression: namespace-uri(/*) will give you that namespace. An alternative would be to forget about associating a prefix with that namespace, and just make your path namespace-free. You can do this by using the local-name() function whenever you need to refer to an element whose namespace you don't know. For example: //*[local-name() = 'Element'] You could go one step further and test the namespace URI of the element against the one of the document element, if you really wanted: //*[local-name() = 'Element' and namespace-uri() = namespace-uri(/*)] A final option, given that the namespace seems to mean nothing to you, would be to run your XML through a filter that strips out the namespaces. Then you won't have to worry about them in your XPath at all. The easiest way to do that would be simply to remove the xmlns attribute with a regular expression, but you could do something more complex if you needed to do other tidying at the same time. A: Unfortunately, XPath doesn't have any concept of "default namespace". You need to register namespaces with prefixes with the XPath context, and then use those prefixes in your XPath expressions. It makes for very verbose XPath, but it's a basic shortcoming of XPath 1. Apparently XPath 2 will address this, but that's no use to you right now. I suggest that you programmatically examine your XML document for the namespace, associate that namespace with a prefix in the XPath context, then use the prefix in the XPath expressions.
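The "examine the document, then associate a prefix" suggestion from the last answer looks roughly like this in Java's javax.xml.xpath API. This is a sketch, not code from any answer: the file name entities.xml and the prefix d are arbitrary choices, and the document is assumed to use a single default namespace as described in the question:

import java.io.File;
import java.util.Collections;
import java.util.Iterator;
import javax.xml.XMLConstants;
import javax.xml.namespace.NamespaceContext;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class SingleNamespaceXPath {
    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().parse(new File("entities.xml"));

        XPath xpath = XPathFactory.newInstance().newXPath();
        // Discover the document element's namespace automatically.
        final String ns = xpath.evaluate("namespace-uri(/*)", doc);

        // Bind it to an arbitrary prefix for use in later expressions.
        xpath.setNamespaceContext(new NamespaceContext() {
            public String getNamespaceURI(String prefix) {
                return "d".equals(prefix) ? ns : XMLConstants.NULL_NS_URI;
            }
            public String getPrefix(String uri) { return ns.equals(uri) ? "d" : null; }
            public Iterator<String> getPrefixes(String uri) {
                return Collections.singleton("d").iterator();
            }
        });

        System.out.println(xpath.evaluate("/d:Elements/@name", doc)); // prints Entities
    }
}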
A: This 40-line xslt transformation provides all the useful information about the namespaces in a given XML document: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ext="http://exslt.org/common" exclude-result-prefixes="ext" > <xsl:output omit-xml-declaration="yes" indent="yes"/> <xsl:strip-space elements="*"/> <xsl:key name="kNsByNsUri" match="ns" use="@uri"/> <xsl:variable name="vXmlNS" select="'http://www.w3.org/XML/1998/namespace'"/> <xsl:template match="/"> <xsl:variable name="vrtfNamespaces"> <xsl:for-each select= "//namespace::* [not(. = $vXmlNS) and . = namespace-uri(..) ]"> <ns element="{name(..)}" prefix="{name()}" uri="{.}"/> </xsl:for-each> </xsl:variable> <xsl:variable name="vNamespaces" select="ext:node-set($vrtfNamespaces)/*"/> <namespaces> <xsl:for-each select= "$vNamespaces[generate-id() = generate-id(key('kNsByNsUri',@uri)[1]) ]"> <namespace uri="{@uri}"> <xsl:for-each select="key('kNsByNsUri',@uri)/@element"> <element name="{.}" prefix="{../@prefix}"/> </xsl:for-each> </namespace> </xsl:for-each> </namespaces> </xsl:template> </xsl:stylesheet> When applied on the following XML document: <a xmlns="my:def1" xmlns:n1="my:n1" xmlns:n2="my:n2" xmlns:n3="my:n3"> <b> <n1:d/> </b> <n1:c> <n2:e> <f/> </n2:e> </n1:c> <n2:g/> </a> the wanted result is produced: <namespaces> <namespace uri="my:def1"> <element name="a" prefix=""/> <element name="b" prefix=""/> <element name="f" prefix=""/> </namespace> <namespace uri="my:n1"> <element name="n1:d" prefix="n1"/> <element name="n1:c" prefix="n1"/> </namespace> <namespace uri="my:n2"> <element name="n2:e" prefix="n2"/> <element name="n2:g" prefix="n2"/> </namespace> </namespaces>
{ "language": "en", "url": "https://stackoverflow.com/questions/122463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }