Q: What is the closest thing to Slime for Scheme? I do most of my development in Common Lisp, but there are some moments when I want to switch to Scheme (while reading Lisp in Small Pieces, when I want to play with continuations, or when I want to do some scripting in Gauche, for example). In such situations, my main source of discomfort is that I don't have Slime (yes, you may call me an addict).
What is Scheme's closest counterpart to Slime? Specifically, I am most interested in:
* Emacs integration (this point is obvious ;))
* Decent tab completion (ideally, c-w-c-c TAB should expand to call-with-current-continuation). It may even be symbol-table based (i.e. it doesn't have to notice a function I defined in a let at once).
* Function argument hints in the minibuffer (if I have typed (map |), where | marks the cursor position, I'd like to see (map predicate . lists) in the minibuffer)
* Sending forms to the interpreter
* Integration with a debugger.
I have ordered the features by descending importance.
My Scheme implementations of choice are:
* MzScheme
* Ikarus
* Gauche
* Bigloo
* Chicken
It would be great if it worked at least with them.
A: You also might consider Scheme Complete:
http://www.emacswiki.org/cgi-bin/wiki/SchemeComplete
It basically provides tab-completion.
A: A commentator has said: "DrScheme IDE has emacs key bindings" and it is a highly regarded IDE with many of the features you explicitly listed.
Additionally, scheme-mode for Emacs provides some of the features from SLIME: an integrated REPL and the ability to send forms to that REPL and to load entire files. As far as I know, for the Schemes you've listed there is no general equivalent for things like connecting to a running image remotely (versus a Scheme REPL in an Emacs buffer) or for debugger integration.
A: For my work with mzscheme I usually use cmuscheme + quack, which provide almost everything I need during development.
Bigloo comes with the very powerful bee-mode.
And for Gauche you can use the GCA package, which provides name completion, display of function descriptions, and insertion of code templates.
Update: I published an article about Scheme + Emacs integration on my site.
A: Well... I would say Slime for scheme is the closest thing to Slime for Scheme ;)
A: You can use Chicken Scheme with slime by using swank-chicken.
I'd suggest taking a look at geiser mode, but it only supports Racket and Guile right now which I don't see on your list.
A: I haven't used it, but you might try Quack with mzscheme.
SLIME is pretty hard to beat though. There's a lot of niceness going on in the SWANK end of it.
A: SLIME's contrib directory seems to have SWANK implementations for MIT Scheme and Kawa.
A: Geiser provides an excellent environment for Scheme. The latest version can now also interact with Chez Scheme, Chibi Scheme, and Chicken Scheme, as well as that old standby MIT Scheme, in addition to Guile and Racket. I would suggest installing it via MELPA, especially in order to get the latest version with the much wider selection of REPLs.
A: There's now a SLIME backend for various Schemes called r7rs-swank.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/110911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
}
|
Q: Changing Hostname / IP Address of Windows System Mounted as an Image I'm looking for a way to change the hostname and IP address of a Windows XP system that is mounted via a loop-back image on a Linux system. So basically I have access to the Windows XP system on a file level, but I cannot execute any programs on it. A way similar to editing the /etc/hostname and whatever network configuration file under Linux.
The only ways I've found so far would include running a tool after boot, e.g. MS sysprep or use a solution like Acronis Snap Deploy.
A: You can use the chntpw tool to edit the Windows registry offline. Here's an example of how to use it.
The keys you're looking for are these:
HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\ComputerName\ComputerName
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{<Interface GUID>}
Under your interface's GUID you'll find many keys, the ones you need are:
IPAddress (REG_MULTI_SZ) = x.x.x.x
SubnetMask (REG_MULTI_SZ) = x.x.x.x
DefaultGateway (REG_MULTI_SZ) = x.x.x.x
Do take a look at the rest of the keys in there; you might find some interesting information.
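If you'd rather script the change than use chntpw interactively, the hivex library (from the libguestfs project) can also edit hive files offline. The sketch below uses its Python bindings; the hive path and the new hostname are placeholders you'd need to adapt, and note that REG_SZ values must be encoded as UTF-16LE with a NUL terminator:
import hivex
# Assumed path to the SYSTEM hive inside the mounted image (adjust to your mount point).
h = hivex.Hivex("/mnt/winxp/WINDOWS/system32/config/system", write=True)
# Walk down to ControlSet001\Control\ComputerName\ComputerName.
node = h.root()
for name in ["ControlSet001", "Control", "ComputerName", "ComputerName"]:
    node = h.node_get_child(node, name)
# REG_SZ is value type 1; strings are stored as UTF-16LE with a trailing NUL.
h.node_set_value(node, {"key": "ComputerName", "t": 1, "value": "NEWHOST\0".encode("utf-16-le")})
h.commit(None) # write the modified hive back to disk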
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/110920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How do I close a tkinter window? How do I end a Tkinter program? Let's say I have this code:
from Tkinter import *
def quit():
    # code to exit
root = Tk()
Button(root, text="Quit", command=quit).pack()
root.mainloop()
How should I define the quit function to exit my application?
A: I think you have misunderstood Tkinter's quit function. It is not something you need to define yourself.
First, you should modify your function as follows:
from Tkinter import *
root = Tk()
Button(root, text="Quit", command=root.quit).pack()
root.mainloop()
Then, save the file with a '.pyw' suffix and double-click the '.pyw' file to run your GUI. This time, you can end the GUI with a click of the Button, and you will also find that there is no unpleasant DOS window. (If you run the '.py' file, the quit function will fail.)
A: Illumination in case of confusion...
def quit(self):
    self.destroy()
    exit()
A) destroy() stops the mainloop and kills the window, but leaves python running
B) exit() stops the whole process
Just to clarify in case someone missed what destroy() was doing, and the OP also asked how to "end" a tkinter program.
A: def quit():
    root.quit()
or
def quit():
    root.destroy()
A: The usual method to exit a Python program:
sys.exit()
(to which you can also pass an exit status) or
raise SystemExit
will work fine in a Tkinter program.
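For instance, here is a minimal self-contained sketch wiring sys.exit to a button (the button label is arbitrary):
import sys
from Tkinter import * # tkinter (lowercase) on Python 3
root = Tk()
Button(root, text="Quit", command=sys.exit).pack()
root.mainloop()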
A: In case anyone wants to bind their Escape button to closing the entire GUI:
import sys
from Tkinter import *
master = Tk()
master.title("Python")
def close(event):
    sys.exit()
master.bind('<Escape>', close)
master.mainloop()
A: The easiest way would be to click the red button (leftmost on macOS and rightmost on Windows).
If you want to bind a specific function to a button widget, you can do this:
import Tkinter
class App:
    def __init__(self, master):
        frame = Tkinter.Frame(master)
        frame.pack()
        self.quit_button = Tkinter.Button(frame, text='Quit', command=frame.quit)
        self.quit_button.pack()
Or, to make things a little more complex, use protocol handlers and the destroy() method.
from Tkinter import *
import tkMessageBox
def confirmExit():
    if tkMessageBox.askokcancel('Quit', 'Are you sure you want to exit?'):
        root.destroy()
root = Tk()
root.protocol('WM_DELETE_WINDOW', confirmExit)
root.mainloop()
A: You only need to type this:
root.destroy()
You don't even need a quit() function, because when you set that as the command it will quit the entire program.
A: You don't have to define a function to close your window, unless you're doing something more complicated:
from Tkinter import *
root = Tk()
Button(root, text="Quit", command=root.destroy).pack()
root.mainloop()
A: import tkinter as tk
def quit(root):
    root.destroy()
root = tk.Tk()
tk.Button(root, text="Quit", command=lambda root=root: quit(root)).pack()
root.mainloop()
A: You should use destroy() to close a Tkinter window.
from Tkinter import *
#use tkinter instead of Tkinter (small, not capital T) if it doesn't work
#as it was changed to tkinter in newer Python versions
root = Tk()
Button(root, text="Quit", command=root.destroy).pack() #button to close the window
root.mainloop()
Explanation:
root.quit()
The above line just bypasses root.mainloop(), i.e., root.mainloop() will still be running in the background if the quit() command is executed.
root.destroy()
The destroy() command, on the other hand, stops root.mainloop(), i.e., root.mainloop() terminates. <window>.destroy() completely destroys and closes the window.
So, if you want to exit and close the program completely, you should use root.destroy(), as it stops the mainloop() and destroys the window and all its widgets.
But if you want to run some infinite loop and don't want to destroy your Tkinter window and want to execute some code after the root.mainloop() line, you should use root.quit(). Example:
from Tkinter import *
def quit():
    global root
    root.quit()
root = Tk()
while True:
    Button(root, text="Quit", command=quit).pack()
    root.mainloop()
    #do something
See What is the difference between root.destroy() and root.quit()?.
A: In the idlelib.PyShell module, the root variable of type Tk is defined as global.
At the end of the PyShell.main() function it calls root.mainloop(), an infinite loop that runs until it is interrupted by root.quit(). Hence, root.quit() only interrupts the execution of mainloop.
In order to destroy all widgets pertaining to that idlelib window, root.destroy() needs to be called, which is the last line of the idlelib.PyShell.main() function.
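To make the quit-versus-destroy distinction concrete, here is a minimal sketch (Python 3 naming) where code after mainloop() still runs, and the window object still exists, until destroy() is called:
import tkinter as tk
root = tk.Tk()
tk.Button(root, text="Done", command=root.quit).pack()
root.mainloop() # returns once root.quit() is called
print("mainloop returned; the window object still exists")
root.destroy() # now the window and its widgets are gone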
A: I normally use the default tkinter quit function, but you can define your own, like this:
from tkinter import *
from tkinter import messagebox # needed: 'from tkinter import *' does not import messagebox
from tkinter.ttk import *
window = Tk()
window.geometry('700x700') # 700x700 window
def quit(self):
    proceed = messagebox.askyesno('Quit', 'Quit?')
    if proceed:
        window.quit()
    else:
        # You don't really need to do this
        pass
btn1 = Button(window, text='Quit', command=lambda: quit(None))
btn1.pack() # the button must be packed (or gridded) to show up
window.mainloop()
A: For menu bars:
def quit():
    root.destroy()
menubar = Menu(root)
filemenu = Menu(menubar, tearoff=0)
filemenu.add_separator()
filemenu.add_command(label="Exit", command=quit)
menubar.add_cascade(label="menubarname", menu=filemenu)
root.config(menu=menubar)
root.mainloop()
A: I use the code below to exit a Tkinter window:
from tkinter import *
root = Tk()
root.bind("<Escape>", lambda q: root.destroy())
root.mainloop()
or
from tkinter import *
root = Tk()
Button(root, text="exit", command=root.destroy).pack()
root.mainloop()
or
from tkinter import *
root = Tk()
Button(root, text="quit", command=quit).pack()
root.mainloop()
or
from tkinter import *
root = Tk()
Button(root, text="exit", command=exit).pack()
root.mainloop()
A: Code snippet below. I'm providing a small scenario.
import tkinter as tk
from tkinter import *
from tkinter.messagebox import askokcancel # askokcancel is not pulled in by 'from tkinter import *'
root = Tk()
def exit():
    if askokcancel("Quit", "Do you really want to quit?"):
        root.destroy()
menubar = Menu(root, background='#000099', foreground='white',
activebackground='#004c99', activeforeground='white')
fileMenu = Menu(menubar, tearoff=0, background="grey", foreground='black',
activebackground='#004c99', activeforeground='white')
menubar.add_cascade(label='File', menu=fileMenu)
fileMenu.add_command(label='Exit', command=exit)
root.config(bg='#2A2C2B',menu=menubar)
if __name__ == '__main__':
    root.mainloop()
I have created a blank window here and added a File menu option to the same (root) window, with only one entry, Exit.
Then I simply run mainloop for root.
Try it out.
A: Of course you can assign the command to the button as follows, however, if you are making a UI, it is recommended to assign the same command to the "X" button:
def quit(self): # Your exit routine
    self.root.destroy()
self.root.protocol("WM_DELETE_WINDOW", self.quit) # Sets the command for the "X" button
Button(text="Quit", command=self.quit) # No (): pass the method itself
A: There is a simple one-line answer:
Pass exit as the button's command.
That's it!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/110923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "122"
}
|
Q: XNA Game Studio 3D model editor suggestions I want to create basic low-poly 3D models to use in XNA Game Studio games. What cheap/free tools do people recommend?
And does anyone have any links or snippets of code that show how to load 3D models of some sort into XNA and then draw them on the screen?
A: Take a look at trueSpace as well; it has just become free. But as Evil Activity stated, Blender is also a good suggestion; I just never really got used to its interface and how to do things. trueSpace is a little easier in that way, but I think Blender is more powerful.
I guess you know http://creators.xna.com/, there is a 3D tutorial you can look at here:
http://creators.xna.com/en-US/education/gettingstarted
A: Blender is a free 3D modeling tool. Here is an article covering everything from installing Blender and exporting, to importing a model made in Blender into the XNA environment:
Getting started with Blender 3D and XNA
A: I'll second Blender.
You can find some handy tutorials linking XNA and Blender here:
http://www.stromcode.com/
A: Yes, trueSpace is a good choice. It's the first 3D authoring program I ever used. It's pretty simple, and it's free. You can find tutorials on the official website http://www.caligari.com/
If you don't plan on selling your game, you can download a 3D Studio MAX student license. It's the full program for 3 years for free. The learning curve is pretty easy too, especially for modeling. You can find good tutorials on Digital Tutors; some are free, some are not. There's also 3DBuzz you can check out for tutorials.
A: Having worked as a modeler and texture artist for over 3 years, I have tried the majority of modeling tools and keep coming back to Nevercenter's Silo. It's the most elegant, well designed sub-d polygonal modeling application I have ever had the pleasure of using. Once you've used Silo, you'll wonder why anyone uses anything else.
A: Houdini. It also has a free version, Apprentice.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/110924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Read divert sockets in java? If I was to create an ipfw divert rule to forward all FTP traffic to a specific socket, is it possible to use Java to connect to the socket and read the packet information? If so, how would I go about reading/writing to the socket?
A: Not sure what you mean. If you're using a divert rule, then all you have to do is listen on that ip:port combination in your Java app and you're all set. If you want to read the actual destination endpoint information, you'll need to use JNI for that.
A: Yes, it's like a normal socket; you can read from and write to it, but on Mac OS X, if you want to modify the packet and insert it back, you need to recalculate the TCP checksum first.
http://blog.loudhush.ro/2006/08/using-divert-sockets-on-mac-os-x.html
This is a good post that introduces the basic usage of divert sockets on Mac OS X. You can actually create the rule in your C code.
For your case, just scan the packet for the TCP or IP header and parse out whatever you want.
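To sketch the idea in code (in Python rather than Java, since pure Java cannot open a divert socket without JNI): a divert socket is read like a datagram socket, and each read returns one whole raw IP packet. This assumes a BSD system where Python's socket module exposes IPPROTO_DIVERT, plus an ipfw rule diverting FTP traffic to the chosen port; the port number is an arbitrary example:
import socket
DIVERT_PORT = 2000 # must match the rule, e.g. "ipfw add divert 2000 tcp from any to any 21"
s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_DIVERT)
s.bind(("0.0.0.0", DIVERT_PORT))
while True:
    packet, addr = s.recvfrom(65535) # one full IP packet per read
    ihl = (packet[0] & 0x0F) * 4 # IP header length in bytes
    src_port = int.from_bytes(packet[ihl:ihl + 2], "big") # TCP source port
    dst_port = int.from_bytes(packet[ihl + 2:ihl + 4], "big") # TCP destination port
    print("TCP %d -> %d, %d bytes" % (src_port, dst_port, len(packet)))
    s.sendto(packet, addr) # reinject the packet so the traffic keeps flowing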
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/110926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Is there a valid reason for enforcing a maximum width of 80 characters in a code file, this day and age? Seriously. On a 22" monitor, it only covers maybe a quarter of the screen. I need some ammo to axe down this rule.
I'm not saying that there shouldn't be a limit; I'm just saying, 80 characters is very small.
A: The only thing I enforce to stay within 80 chars is my commenting.
Personally...I'm devoting all my brain power (what little there is) to coding right, it's a pain to have to go back and break everything up at the 80 char limit when I could be spending my time on the next function. Yes, Resharper could do it for me I suppose but then it freaks me out a little that a 3rd party product is making decisions on my code layout and changes it ("Please don't break my code into two lines HAL. HAL?").
That said, I do work on a fairly small team and all of our monitors are fairly large so worrying about what bothers my fellow programmers isn't a huge concern as far as that goes.
It seems, though, that some languages encourage longer lines of code for the sake of more bang for the buck (shorthand if-then statements).
A: The origin of 80-column text formatting is earlier than 80-column terminals: the IBM punch card dates back to 1928, and its legacy goes back to paper tapes in 1725. This is reminiscent of the (apocryphal) story that the US railway gauge was determined by the width of chariot wheels in Roman Britain.
I sometimes find it a bit constricting, but it makes sense to have some standard limit, so 80 columns it is.
Here's the same topic covered by Slashdot.
And here's an old-school Fortran statement (shown as an image of a punched card in the original post).
A: In the Linux coding standard, not only do they keep the 80 character limit, but they also use 8 space indentation.
Part of the reasoning is that if you ever reach the right margin, you should consider moving an indentation level into a separate function.
This will make clearer code because regardless of indentation lengths, it is harder to read code with many nested control structures.
A: The other answers already summed things up nicely, but it is also worth considering when you might want to copy & paste some code into an email, or if not code then a diff.
That's a time when having a "max width" is useful.
A: I have two 20" 1600x1200 monitors and I stick to 80 columns because it lets me display multiple text editor windows side-by-side. Using the '6x13' font (the trad. xterm font) 80 columns take up 480 pixels plus the scrollbar and window borders. This allows one to have three windows of this type on a 1600x1200 monitor. On windows the Lucida Console font won't quite do this (the minimun usable size is 7 pixels wide) but a 1280x1024 monitor will display two columns and a 1920x1200 monitor such as an HP LP2465 will display 3. It will also leave a bit of room at the side for the various explorer, properties and other windows from Visual Studio.
Additionally very long lines of text are hard to read. For text the optimum is 66 characters. There is a point where excessively long identifiers start to be counterproductive because they make it hard to lay out code coherently. Good layout and indentation provides visual cues as to the code structure and some languages (Python comes to mind) use indentation explicitly for this.
However, the standard class libraries for Java and .NET tend to have a preponderance of very long identifiers, so one cannot necessarily guarantee being able to do this. In that case, laying out code with line breaks still helps to make the structure explicit.
Note that you can get Windows versions of the '6x13' fonts here.
A: 80 characters is a ridiculous limit these days. Split your code lines where it makes sense, not according to any arbitrary character limit.
A: People say long lines of code tend to be complex. Consider a simple Java class:
public class PlaintiffServiceImpl extends RemoteServiceServlet implements PlaintiffService {
This is 94 characters long and the class name is quite short (by GWT standards). It would be difficult to read on 2 lines and it is very readable on one line.
Being pragmatic about it and thus allowing "backwards compatibility", I'd say 100 characters is the right width.
A: You are not the only person who is going to maintain your code.
The next person who does might have a 17" screen or might need large fonts to read the text. The limit has to be somewhere, and 80 characters is the convention due to previous screen limitations. Can you think of any new standard (120?) and why it is a good idea to use that, other than "that's what fits on my monitor at X pt font"?
Remember, there are always exceptions to every rule, so if you have a particular line or block of code that makes sense at more than 80 characters, then be a rebel.
But take the time first to think "is this code really that bad that it can not live within 80 chars?"
A: I've widened my code out to 100 characters which fits comfortably in less than half my screen on my Macbook. 120 characters is probably the limit before lines start to get too long and complex. You don't want to get too wide else you encourage compound statements and deeply nested control structures.
The right margin is nature's way of telling you to perform an extra method refactoring.
A: I wonder if this might cause more problems in this day and age. Remember that in C (and possibly other languages) there are rules for how long a function name can be. Therefore, you often see very hard-to-understand names in C code. The good thing is that they don't use a lot of space. But every time I look at code in a language like C# or Java, the method names are often very long, which makes it close to impossible to keep your code within an 80-character limit. I don't think 80 characters is valid today, unless you need to be able to print the code, etc.
A: As the author of coding guidelines for my employer I have upped the line length from 80 to 132. Why this value? Well, like others pointed out, 80 is the length of many old hardware terminals. And 132 is as well! It's the line width when terminals are in wide mode. Any printer could also make hardcopies in wide mode with a condensed font.
The reason for not staying at 80 is that I'd rather
* prefer longer names with a meaning for identifiers
* not bother with typedefs for structs and enums in C (they are BAD, they HIDE useful information! Ask Peter van der Linden in "Deep C Secrets" if you don't believe it), so the code has more struct FOO func(struct BAR *aWhatever, ...) than the code of typedef fanatics.
and under these rules just 80 chars/line causes ugly line wraps more often than my eyes deem acceptable (mostly in prototypes and function definitions).
A: As others have said, I think it's best for (1) printing and (2) displaying multiple files side by side vertically.
A: I like to limit my width to 100 chars or so to allow two SxS editors on a widescreen monitor. I don't think that there is any good reason for a limit of exactly 80 chars anymore.
A: You should just do it for the sake of everyone who doesn't have a 22 inch widescreen monitor. Personally, I work on a 17 inch 4:3 monitor, and I find that more than sufficiently wide. However, I also have 3 of those monitors, so I still have lots of usable screen space.
Not only that, but the human eye actually has problems reading text if the lines are too long. It's too easy to get lost in which line you are on. Newspapers are 17 inches across (or something like that), but you don't see them writing all the way across the page; the same goes for magazines and other printed items. It's actually easier to read if you keep the columns narrow.
A: There's already a lot of good answers to this, but it's worth mentioning that in your IDE you might have a list of files on the left, and a list of functions on the right (or any other configuration).
Your code is just one part of the environment.
A: Use proportional fonts.
I'm serious. I can usually get the equivalence of 100-120 characters in a line without sacrificing readability or printability. In fact it's even easier to read with a good font (e.g., Verdana) and syntax coloring. It looks a little strange for a few days, but you quickly get used to it.
A: I think the practice of keeping code to 80 (or 79) columns was originally created to support people editing code on 80-column dumb terminals or on 80-column printouts. Those requirements have mostly gone away now, but there are still valid reasons to keep the 80-column rule:
* To avoid wrapping when copying code into email, web pages, and books.
* To view multiple source windows side-by-side or using a side-by-side diff viewer.
* To improve readability. Narrow code can be read quickly without having to scan your eyes from side to side.
I think the last point is the most important. Though displays have grown in size and resolution in the last few years, eyes haven't.
A: When you have a sequence of statements that repeat with minor variations, it can be easier to see the similarities and differences if they are grouped into lines so that the differences align vertically.
I'd argue that the following is much more readable than it would have been if I'd split it over multiple lines:
switch(Type) {
case External_BL: mpstrd["X"] = ptDig1.x - RadialClrX; mpstrd["Y"] = ptDig1.y - RadialClrY; break;
case External_BR: mpstrd["X"] = ptDig1.x + RadialClrX; mpstrd["Y"] = ptDig1.y - RadialClrY; break;
case External_TR: mpstrd["X"] = ptDig1.x + RadialClrX; mpstrd["Y"] = ptDig1.y + RadialClrY; break;
case External_TL: mpstrd["X"] = ptDig1.x - RadialClrX; mpstrd["Y"] = ptDig1.y + RadialClrY; break;
case Internal_BL: mpstrd["X"] = ptDig1.x + RadialClrX; mpstrd["Y"] = ptDig1.y + RadialClrY; break;
case Internal_BR: mpstrd["X"] = ptDig1.x - RadialClrX; mpstrd["Y"] = ptDig1.y + RadialClrY; break;
case Internal_TR: mpstrd["X"] = ptDig1.x - RadialClrX; mpstrd["Y"] = ptDig1.y - RadialClrY; break;
case Internal_TL: mpstrd["X"] = ptDig1.x + RadialClrX; mpstrd["Y"] = ptDig1.y - RadialClrY; break;
}
Update: In the comments it has been suggested that this would be a more succinct way of doing the above:
switch(Type) {
case External_BL: dxDir = - 1; dyDir = - 1; break;
case External_BR: dxDir = + 1; dyDir = - 1; break;
case External_TR: dxDir = + 1; dyDir = + 1; break;
case External_TL: dxDir = - 1; dyDir = + 1; break;
case Internal_BL: dxDir = + 1; dyDir = + 1; break;
case Internal_BR: dxDir = - 1; dyDir = + 1; break;
case Internal_TR: dxDir = - 1; dyDir = - 1; break;
case Internal_TL: dxDir = + 1; dyDir = - 1; break;
}
mpstrd["X"] = pt1.x + dxDir * RadialClrX;
mpstrd["Y"] = pt1.y + dyDir * RadialClrY;
although it now fits in 80 columns I think my point still stands and I just picked a bad example. It does still demonstrate that placing multiple statements on a line can improve readability.
A: Printing a monospaced font at default sizes is (on A4 paper) 80 columns by 66 lines.
A: I think not enforcing 80 characters eventually means word wrapping.
IMO, any length chosen for a max-width line is not always appropriate, and word wrapping should be a possible answer.
And that is not as easy as it sounds.
It is implemented in jEdit, which offers word wrap.
But it has been sorely missed in Eclipse for a looong time (since 2003, in fact), mainly because word wrap for a text editor involves:
* wrapped line information for the text viewer, code navigation, and vertical rulers;
* unwrapped line information, required for functionality like goto line, the line-numbering ruler column, current-line highlighting, and saving the file.
A: I try to keep things down near 80 characters for a simple reason: too much more than that means my code is becoming too complicated. Overly verbose property/method names, class names, etc. cause as much harm as terse ones.
I'm primarily a Python coder, so this produces two sets of limitations:
* Don't write long lines of code
* Don't indent too much
When you start to reach two or three levels of indentation, your logic gets confusing. If you can't keep a single block on the same page, your code is getting too complicated and tricky to remember. If you can't keep a single line within 80 characters, your line is getting overly complicated.
It's easy in Python to write relatively concise code (see codegolf) at the expense of readability, but it's even easier to write verbose code at the expense of readability. Helper methods are not a bad thing, nor are helper classes. Excessive abstraction can be a problem, but that's another challenge of programming.
When in doubt in a language like C write helper functions and inline them if you don't want the overhead of calling out to another function and jumping back. In most cases, the compiler will handle things intelligently for you.
A: I'm diffing side-by-side all day long and I don't have a freakin' 22 inch monitor. I don't know if I ever will. This, of course, is of little interest to write-only programmers enjoying arrow-coding and 300-char lines.
A: I use the advantage of bigger screens to have multiple pieces of code next to each other.
You won't get any ammo from me. In fact, I'd hate to see it changed since in emergencies I still see rare cases where I need to change code from a text-console.
A: Super-long lines are harder to read. Just because you can get 300 characters across on your monitor doesn't mean you should make the lines that long. 300 characters is also way too complex for a statement unless you have no choice (a call that needs a whole bunch of parameters.)
I use 80 characters as a general rule but I'll go beyond that if enforcing it would mean putting a line break in an undesirable location.
A: I actually follow a similar rule for my own code but only because of printing code to an A4 page - 80 columns is about the right width for my desired font size.
But that's personal preference and probably not what you were after (since you want ammo to go the other way).
Why don't you question the reasoning behind the limit? Seriously, if no one can come up with a good reason why it's so, you have a good case for having it removed from your coding standards.
A: Yes, because even in this day and age, some of us are coding on terminals (ok, mostly terminal emulators), where the display can only display 80 chars. So, at least for the coding I do, I really appreciate the 80 char rule.
A: I force my students to squeeze into 80 columns so I can print out their code and mark it up.
And about 17 years ago I let my own code expand to 88 columns, because I started doing everything using Noweb and 88 columns is what fits in a nicely printed document using TeX.
I indent by only two spaces, but the extra room is wonderful.
A: I still think that the limit isn't just about the visual part. Sure, monitors and resolutions are big enough to show even more characters per line nowadays, but does it increase readability?
If the limit is really enforced, it's also a good reason to rethink the code and not put everything on one line. It's the same with indentation: if you need too many levels, your code needs to be rethought.
A: Breaking at 80 characters is something you do while coding, not afterwards. Same with comments, of course. Most editors can assist you in seeing where the 80-characters limit is.
(This may be a little OT, but in Eclipse there is an option which formats the code when you save it (according to whatever rules you want). This is a little freaky at first, but after a while you start to accept that the formatting is no more in your hands than the generated code is.)
A: If we had one of these, we wouldn't be having this discussion! ;-)
But seriously the issues that people have raised in their answers are quite legitimate. However the original poster was not arguing against a limit, merely that 80 columns is too few.
The issue of emailing code snippets has some merit. But considering the evil things that most email clients do to pre-formatted text I think that line wrapping is only one of your problems.
As for printing I usually find that 100 character lines will very comfortably fit onto a printed page.
A: I try and keep my lines below 80 columns. The strongest reason is that I often find myself using grep and less to browse my code when working at the command line. I really don't like how terminals break long source lines (they aren't made for that job, after all). Another reason is that I find it looks better if everything fits into the line and isn't broken by the editor. For example, having parameters of long function calls nicely aligned below each other, and similar things.
A: We did a survey recently. Almost everyone uses vim inside a gnome-terminal, and if we do a vertical split, the column count is 78 at the standard font size and a screen resolution of 1280x1024.
So we all agreed to a coding standard with a column count of (around) 75 characters. It's ok.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/110928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "179"
}
|
Q: interval rails caching I need to cache a single page. I've used ActionController's caches_page for this. But now, I'd like to expire AND regenerate it once in every 10 minutes. What are my options?
later: I'd like to not use any external tools for this, like cron. The important point is interval-based expiry of the cache.
A: You can also use this if you want to have fragments time out.
A: AFAIK Rails page caching compares the cache time on request and regenerates if necessary. If you need to forcibly flush that cache, check out Sweepers.
http://www.railsenvy.com/2007/2/28/rails-caching-tutorial#sweepers
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/110930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How should I choose between GET and POST methods in HTML forms? I wish to know all the pros and cons about using these two methods. In particular the implications on web security.
Thanks.
A: To choose between them I use this simple rule:
GET for reads (reading data and displaying it).
POST for anything that writes (i.e. updating a database table, deleting an entry, etc.).
The other consideration is that GET is subject to the maximum URI length and of course can't handle file uploads.
This page has a good summary.
A: In addition to the fine answers from e.g. Micke, I want to point out an important difference in how browser interfaces handle pages requested with GET vs. POST.
If you reload a GET-requested page, the browser will just fetch the URL again (from the server or from cache). However, if you reload a POST, the browser will show a slightly confusing warning popup about reposting data, which the user may then cancel (leading to an even more confusing "expired" page). The same thing happens if you use back or history to return to a page which is the result of a POST.
This is of course based on the different semantics: GET requests are supposed to be idempotent, i.e. you can do them several times without changing anything. POSTs, on the other hand, are for actions with side effects, like signing up for something, buying something, or posting a comment on a forum. Typically the user doesn't expect to repeat this action when reloading, so the warning is sensible. However, avoid using POST if the action is safely repeatable (like a search), since the warning is not necessary and would just be confusing to the user.
A point regarding security: if you have a password field in a GET form, the password will be masked from prying eyes when you type it in; however, it will be plainly visible in the address bar when you hit submit! But apart from that, there is no real security in either GET or POST, so use SSL if that is a concern.
A: Take a look at RFC 2616: Section 9 "HTTP/1.1 Method definitions"
A: Both GET and POST have their place. You should not rely on any of them for security.
GET requests
* are easily cacheable
* are easily bookmarkable
* are subject to URI length limitations
* may show parameters in access logs
POST requests
* allow file uploading
* allow large data
* do not show parameters in the browser address bar
Do you want the result of the form submission to be bookmarkable (think Google search)? Use GET.
Would you like the result of the form submission to be cachable? Use GET.
Are your requests not idempotent (safely repeatable)? Use POST, and then always redirect to a page that is suitable to get via HTTP GET (see the sketch after this list).
Do you need file uploads? Use POST.
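As a rough sketch of the Post/Redirect/Get advice above, here is what it might look like in a small Flask app; the route, the form field name, and the comments_db list are made up for illustration:
from flask import Flask, request, redirect, url_for
app = Flask(__name__)
comments_db = [] # stand-in for a real datastore
@app.route("/comments", methods=["GET"])
def list_comments():
    # GET is safe and repeatable: fine to bookmark, cache, or reload
    return "<br>".join(comments_db)
@app.route("/comments", methods=["POST"])
def add_comment():
    # POST has a side effect, so never answer it with the page itself
    comments_db.append(request.form["text"])
    # redirect so a browser reload re-issues a harmless GET, not the POST
    return redirect(url_for("list_comments"))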
A: GET passes data in the URL, POST passes the same data in the HTTP content, both are exactly the same from a security standpoint (that is, completely insecure unless you do something about it yourself, like using HTTPS).
GET is limited by the maximum URL length supported by the browser and web server, so it can only be used in short forms.
From an HTTP standards viewpoint, GET requests should not change the site, and browsers/spiders are much more likely to make GET requests on their own (without the user actually clicking something) than POST requests.
A: GET should not have side-effects: http://www.w3.org/DesignIssues/Axioms.html#state
POST forms should be used when a submission has side effects.
Neither method has any real implication on security, use SSL if you're concerned about security.
A: If you are passing things like passwords or other sensitive information, always use POST and make sure you are using SSL so that data doesn't travel between the client and server in clear-text.
Security-wise, the downside of using GET is that all the submitted data will be in the URL, and therefore stored locally on the client in the browser history.
A: GET and POST are the two most popular methods used to transfer data from client to server using the HTTP (Hyper Text Transfer Protocol) protocol. Both GET and POST can be used to send a request and receive a response, but there are significant differences between them.
What is GET HTTP Request?
HTTP supports several request methods you can use while sending a request over HTTP or HTTPS. GET is one of them. As the name suggests, the GET method is used to retrieve a page from an HTTP server. One important property of a GET request is that any request or query parameter is passed as a URL-encoded string, appended after a "?" character, which makes it non-secure because whatever information you pass in the URL string is visible to everybody.
When to use HTTP GET request
As I said, the GET method is not secure and hence not a suitable choice for transferring confidential data, but it is extremely useful for retrieving static content from a web server. Here are some examples where using the GET method makes sense:
There is no side effect of repeated requests, for example clicking a link which points to another page. It doesn't matter if you click the link twice or thrice. This also gives the browser or server a chance to cache the response for faster retrieval.
You are not passing any sensitive or confidential information, just some configuration data or a session id.
You want the URL pointed to by the HTTP GET request to be bookmarkable.
The data to be sent to the server is not large and can safely be accommodated within the maximum URL length supported by all browsers. In general, different browsers have different character limits for URL length, but staying under the limit is a good choice.
What is POST HTTP method
A POST HTTP request is denoted by method: POST in the HTTP request. In the POST method, data is not sent to the server as part of the URL string; instead, it is sent as part of the message body. Almost all authentication requests in the HTTP world are sent via the POST method. POST is more discreet because the data is not visible in the URL string, and it can be encrypted using HTTPS for further security. All sensitive and confidential information sent to the server should go in a POST request over HTTPS (HTTP with SSL). The POST method is also used for submitting information to the server, i.e. any information which can alter the state of the application, like adding an item to a shopping cart or making a payment. Here are some examples where you should consider using the POST method in an HTTP request:
Use POST if you are sending large data which cannot fit into a URL, as would be the case with GET.
Use POST if you are passing sensitive or confidential information to the server, e.g. user_id, password, account number, etc.
Use POST if you are submitting data which can alter the state of the application, e.g. adding items to a cart and passing that cart on for payment processing.
Use POST if you are writing a secure application and don't want to show query parameters in the URL.
Difference between GET and POST method in HTTP Protocol
Most of the differences between GET and POST have already been discussed in their respective sections. It all depends upon the requirements; knowledge of these differences helps you choose between GET and POST.
The GET method passes request parameters in the URL string, while the POST method passes them in the request body.
A GET request can only pass a limited amount of data, while the POST method can pass a large amount of data to the server.
GET requests can be bookmarked and cached, unlike POST requests.
GET is mostly used for viewing purposes (e.g. SQL SELECT), while POST is mainly used for updates (e.g. SQL INSERT or UPDATE).
Referenced from here
A: Use GET if you want the result to be bookmarkable.
A: GET might be easier to debug because you can monitor all sent values in the address bar without any additional tools, but there is a limitation on the maximum length, so with a few variables you may exceed it.
POST isn't much more secure these days, because with free tools like Fiddler & co. you can grab the values very easily. But there is no real limitation on the length or number of values you can submit this way, and your URLs look more user-friendly.
So my all-time suggestion would be to use POST instead of GET.
A: David M's answer gets my vote.
I just wanted to add one item that I heard about; maybe it was an urban legend?
Someone had a site with links that were only for internal use, to delete files on their website. All was well until a web spider (I think it was Google) somehow found these links and merrily followed each one, causing all the files on his site to be deleted. The links used GET and should have used POST, as spiders don't follow POST links.
A: The Google search engine is an example of a GET form, because you should be able to search twice in a row and not affect the results by doing this. It also has the nice effect that you can link to a search results page, because it is a normal GET request, like any other address.
As said previously, use POST for deleting or updating data, but I'd like to add that you should immediately redirect your user to a GET page.
http://en.wikipedia.org/wiki/Post/Redirect/Get
A: It depends on the type and size of the data you want to transfer. With GET you can pass a maximum of around 255 characters to the action page (in practice the limit varies by browser and server). With the POST method, you don't have such limitations. POST also gives more privacy to the data, as it is not displayed anywhere; anything you send using the GET method is displayed in the address bar of the browser.
Many search sites use the GET method, as this gives you the ability to bookmark your search queries. Hope this helps.
A: One security issue with GET that is often overlooked is that the web server log contains the full URL of every page access. For GET requests, this includes all the query parameters, saved to the server log in plain text even if you access the site securely.
The server logs are often used by site statistics apps, so it's not just the server admin who might see them.
The same caveat applies to third-party tracking software such as Google Analytics: it records the full URL of the page, again including the GET query parameters, and reports it to the analytics user.
Therefore, if you are submitting sensitive data (passwords, card numbers, etc etc), even if it's via AJAX and never appears in the browser's actual URL bar, you should always use POST.
A: Both sets of values are easily monitored by hackers and the like, but GET is less secure in the sense that the values are plainly visible (right in the address bar).
Use SSL for security if that is needed.
A good piece of advice: always use POST for forms; use query strings (?value=products) when you are not posting things but are trying to GET a specific page, like a product page. Hence the names POST and GET :)
A: Generally best to use POST because it's a bit better hidden for snooping, better handling of spaces/encoding in the fields with some browsers, and especially because of limitations in the overall length of GET fields.
A: One gotcha I noticed the other day and it was a real "DUH!" moment for me.
We have a third party search engine on our site and they use the GET method to submit the search query to their code. In addition, I had some code that looked for possible SQL injection attacks in the query string. My code was screwing everything up because it was looking for words like "EXEC", "UPDATE", "DELETE", etc. Well, it turns out the user was looking for "EXECUTIVE MBA", and my code found "EXEC" in "EXECUTIVE" and banned their IP.
Believe me, I'm not bragging about my code, just saying that choosing between GET and POST has semi-far reaching implications other than "do I want my passwords showing up in the querystring".
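For anyone wondering how that false positive happens, here is a minimal sketch of the naive substring check described above (the blocked-word list is invented for illustration):
BLOCKED_WORDS = ("EXEC", "UPDATE", "DELETE")
def is_suspicious(query_string):
    # Naive filter: flags any query containing a blocked word as a substring
    upper = query_string.upper()
    return any(word in upper for word in BLOCKED_WORDS)
print(is_suspicious("q=EXECUTIVE+MBA")) # True: "EXEC" matches inside "EXECUTIVE"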
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/110933",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
}
|
Q: ZX Spectrum AY-3-8912 playback in XNA Game Studio Are there any libraries, pieces of code or suchlike that'll let me play ZX Spectrum .ay files in my XNA Game Studio games?
A: You should convert the .ay files to wav first. There is a program here
to do that. It also comes with source code, so someone who has some free time might help by creating a content importer & processor from it? :-)
A: If you want to code it yourself you need:
* A Zilog Z80A CPU emulator. Not that easy to do, but there are some free C/C++ sources for it. I use my own.
* A 3-channel AY 8910/8912 PSG chip emulator. This is much simpler than the CPU; it is just a tone generator, but the documentation for it is not very good, so you need to experiment a lot.
* 1-bit digital speaker emulation. *.AY files do not always use the AY chip for sound output; some games combine the AY and the built-in speaker. I am not sure now if AY files also have support for Covox or not, but if yes, then you also need to include an 8/16-bit mono/stereo Covox, usually on some i8255 chip.
* Keyboard emulation.
* Variable hardware architecture support. *.AY files store music for several architectures; there are differences between Sharp, Amstrad CPC, ZX48, ZX128, clones, etc. Some have different crystal frequencies, some have different channel mixing to the speakers, and the IO addresses can also differ, not to mention memory paging issues.
* Sound output. This is target-platform dependent, and as I do not use XNA I will not touch this subject (as it is already answered/accepted).
The AY player looks like this:
* First load the AY header.
* Detect the target platform and configure your emulator to match it.
* Load the AY binary into the target memory zones.
* Set the registers and start the emulation.
So if you want to code an AY player, you will end up writing a Z80 emulator, as *.AY files are programs, not sound recordings. As many AY files use the speaker, you will need to properly emulate the contention model, otherwise timing issues will occur which can be heard, especially on the speaker.
To improve quality you can apply FIR filters to simulate the PWM-like control of the speaker that many effects use.
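To give a feel for the PSG part, here is a minimal sketch of a single AY tone channel in Python. On the real chip each channel's frequency is clock / (16 * period), where the 12-bit period comes from a coarse/fine register pair; the 1.7734 MHz clock is the ZX Spectrum 128's AY clock, and the register values below are made-up examples:
import wave, struct
AY_CLOCK = 1773400 # ZX Spectrum 128 AY clock in Hz
SAMPLE_RATE = 44100
fine, coarse = 0xFD, 0x01 # example tone registers for channel A
period = ((coarse & 0x0F) << 8) | fine # 12-bit tone period (here 509)
freq = AY_CLOCK / (16.0 * period) # about 218 Hz with these values
frames = []
phase = 0.0
for _ in range(SAMPLE_RATE): # one second of audio
    phase += freq / SAMPLE_RATE
    level = 0.5 if (phase % 1.0) < 0.5 else -0.5 # square wave, as the PSG outputs
    frames.append(struct.pack('<h', int(level * 32767)))
w = wave.open('channel_a.wav', 'wb')
w.setnchannels(1)
w.setsampwidth(2)
w.setframerate(SAMPLE_RATE)
w.writeframes(b''.join(frames))
w.close()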
A: If you need sound in XNA and want Xbox 360 support, you need to use the supported file formats. You are probably better off trying to convert the .ay files to a format that XNA natively supports.
If you only want to support Windows, then search for a .NET library that can play them; it will work in XNA on Windows if it works in .NET.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/110973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Pros and cons of Localisation of technical words? This question is directed to the non-english speaking people here.
It is somewhat biased because SO is an "English-speaking" web forum, so... On the other hand, most developers would know English anyway...
In your local culture, are technical words translated into local words? For example, how are "Design Pattern", "Factory", or whatever written/said in German, Spanish, etc. when used in IT? Are the English words preferred? The local translation? Are the two versions (English/local) used evenly?
Edit
Could you include in your answer the local translation of "Design Pattern"?
In French, according to Wikipedia.fr, it is "Patron de conception", which translates back as "Model of Conceptualization" (I guess).
A: We have an odd mix here in Brazil. Many books are translated into Portuguese, but the originals are commonly available too. Mix that with the Internet and basically everybody has to know the terms in both languages, because you never know how the next person will reference them. And until recently most translations were done by folks not linked with IT in any way, so some terms are simply badly translated.
Design Pattern is a good example. The GoF book is called "Padrões de Projeto". But projeto means project too, so most people call it Design Patterns while calling the patterns themselves by translated names (Fábrica Abstrata instead of Abstract Factory, Fachada instead of Facade). And I have seen people call Design Patterns "Padrões de Desenho", as some think desenho (which means design, but also drawing) better reflects the design phase of software development.
While I do see value in translating some terms to make conversation more fluent (many, many Brazilians have trouble with the 'th' words; the phoneme just doesn't exist in Portuguese), this commonly causes misunderstandings when somebody just hasn't been exposed to some obscure translation. It's obviously better to stick with the original terms. And when the need to use a translation does exist, be very strict not to choose an obscure one.
A: Coming from Switzerland and speaking German, I vote for keeping them in English. I attended an IBM congress some time back (the OS/2 0.9 developers conference; I'm giving away my age here). At that time most people were not as familiar with the names of interface components (combobox, listbox, button) as they are today, especially not the many mainframe programmers attending.
So everything was translated simultaneously into different languages. And I mean everything. This led to the following effects:
* A wrong standard set of names was put in place.
* Programmers from different nations were not able to talk to each other.
* It was really hard to follow the talks, especially if you had some previous knowledge.
The only way to get by was to have one ear covered with the headphones while listening to the original English speech, trying to put the English names of things in the right spots in the German translation. It was so tiring.
A: You can translate words into another language, but you are usually not translating the mindset they belong to (that's why good translations cost a lot of money). Technical terms that form an obvious group in one language may lose their linguistic connection in other languages.
One of the worst examples I stumble upon regularly is the word 'experience', which is rampant in U.S. marketing lingo. Everything is an experience nowadays. Now some folks translate it into the German "Erfahrung", and it just sounds terrible because it does not fit with anything in the German mindset. We don't think of using a tool or software as an Erfahrung. The word may be translated correctly, but it is completely off the mark considering the mindset.
Edit answer: German for "design pattern" is Entwurfsmuster. It's sometimes used in lectures and presentations. My almost daily fun event is the translation of the English "to cast" as the Germanized verb "casten". As casting in C/C++ has been considered evil for quite some time and causes lots of bugs, "casten" is usually a problem. Now "casten" sounds phonetically identical to the German first name "Karsten". So whenever casting is the cause of an error, I can remark that it was all caused by Karsten ;)
A: I am from Austria, and hence learned German as my first language.
Design Pattern becomes 'Entwurfsmuster', which is a pretty decent translation that doesn't lose much, yet we all use the English 'original' here in Austria; even people speaking awful English use the English words. It makes things easier....
For completeness: it is not completely right that only the rest of the world is using English terms; 'you' also use some 'foreign' words:
* (my favorite:) Gedankenexperiment seems to be well known
* eigenvalues and eigenvectors in math
* Kindergarden is related to the German Kindergarten
And recently I stumbled upon a site called übernote, from the German word über.
A: No thanks. Leave the English technical terms be. Translation is awkward, generally ugly and confusing.
Sometimes I have the opposite problem here in Italy. You try not to mix English and Italian... so methods and classes are named "findUserBySocialSecurityNumber", "delete", and so on.
But business terms are often impossible to translate (the Italian unique identifying code is "Codice Fiscale", which is not a social security number or anything like it), so it's not unusual to meet methods named "findUserByCodiceFiscale", which I admit is pretty silly. :)
EDIT: design pattern in Italian might be translated as schemi di progettazione (or struttura di progettazione, according to Wikipedia), but I've never heard it in conversation.
A: Most IT staff in Poland use English terms; Polish is used only when communicating with business people or users.
Personally, I tend to set the locale on all my computers to English: I know which word is used in the English manual, but I'm not sure how it was translated in the Polish version.
At my university all the lectures were in Polish; sometimes we had no idea what the tutor meant when using some translated terminology (like "kompilator skrośny" [cross compiler] or "krotka" [record in a database]).
"Design Patterns" translated into Polish is "wzorce projektowe".
A: On my technical blog everything is translated into French. Using English words for computing just because most of the underlying concepts were invented in the USA would be as stupid as using only German words for printing (because of Gutenberg) or Greek words for politics.
Of course, like any rule, there are exceptions. It is sometimes difficult to find a good translation (I use "bit" and "pizza", not French translations). And it is better to have no translation than a bad one (such as "toile" for "web", a serious translation error).
A: What I normally do is Danish-ify (I'm from Denmark), i.e. use the Danish words for IT terms, when talking to business people or normal non-IT people. With IT people I just use the English terms.
A: I am a non-English speaker from Germany.
In my opinion it makes no sense to translate programming-related words from English into another language. At university (although it is located in Germany) nearly all lectures were held in English, all good programming books are written in English, and the most important web sites (like this one ;-) ) use English. So you get used to terms like "design pattern" and "factory" from the very beginning. Sometimes I don't even notice that I'm using English technical words when I talk to my colleagues in German.
And it's also very helpful when you are a member of a team with people from different nations, because you have a common "language" and everyone understands what is meant when talking about a "factory".
A: In addition to the other good answers, I would add that writing code that was transliterated from another language is just bad form. It makes the code clunky and non-straightforward, everybody will translate it differently (even speakers of the same language), and when it comes to multinational teams it becomes impossible to understand. This includes variables, class and function names, even comments...
I've performed code reviews for Dutch teams, Turkish teams, teams all over the world; we've actually had to hire translators to explain the code to us. This gets quite ridiculous, as the translator doesn't understand the code either... (Eventually we gave up and had the local team give us some key words, but still...)
If the coding conventions are localized, it becomes impossible to understand, even when localized to a language I speak!
A: Oh please, keep them English. I'm a non-native English speaker from Germany, and I always start to giggle when I come across German texts with technical terms translated into German.
My personal favorite is the German translation for stack:
Stack translates to Keller Datenspeicher. Translating this back into English gives something like "basement data storage". (Uh, where's the stack, and what has that to do with a basement?) No young programmer here understands the German anachronisms anymore.
Stackoverflow translates to Keller Datenspeicher Überlauf, btw. (Basement data storage overflow.)
A: I think technical terms should be kept in their original language, which is English in pretty much every case. I remember lectures about operating systems, virtual memory specifically, where they talked of "tilings". I thought "what the hell? There is tiling in my kitchen and my bathroom, but not in my memory!" and ended up deeply confused. Later, I understood they had translated memory pages in a ridiculous way. Similar "interesting" translations involve stockpile for heap, cellar for stack, and other things I forgot.
Generally, I think the translation might be friendly towards users who don't know what you are talking about. Anyone else will know the "real" term much better; even if he does not, he will find more information more easily with the "real" term.
I do not think there is a real disadvantage to this. Doctors have their own technical language, and even construction workers have their own language, so why shouldn't programmers have a special language of their own? You just can't talk to a surgeon without knowing the term "cut" (I cannot find a good example, because I do not know their language), and thus you cannot talk to a programmer about what he does without knowing the terms "design pattern", "factory", or "minimized binary decision diagram". If you don't, accept that you are unable to talk to the programmer, or ask him about the basics and learn more about the terms yourself.
A:
I think technical terms should be kept in their original language, which is English in pretty much any case
Let's see: Stack was first proposed in 1955 and then patented in 1957 by early German computer scientist Friedrich L. Bauer. So, everyone all over the world: from now on use "Stapelspeicher" instead of "stack" and "Stapelspeicherüberlauf" instead of "stackoverflow";) (Note to myself: check if stapelspeicherüberlauf.de is available...)
But I agree, it's better to use (and I do) the English words, since they are known everywhere.
A: I'm Italian, and I think that technical words sound totally funny when translated!
It's better to stick to the English version, also for clarity: everybody knows (or should know) what design patterns are, but "schemi di progettazione" is somehow more obscure.
A: While I am an English speaker, I work for a Swiss company based in France. In my experience most people use English technical terms. The French government tries to impose French terms, but no one pays any attention.
A: Design pattern in Swedish - designmönster. So it works quite well, as do many other translations, but many just come out sounding silly. Usually Swedes have a pretty good understanding of English, so in conversation it tends to be a sort of mix between Swedish, English terms, and sometimes Swenglish terms - usually English words with Swedish grammar :)
"Did you commit the branch?" becomes
"Har du committat branchen?"
Code should be all english though. Localized variable names are EVIL.
A: My native language is Arabic, and I speak French/English also.
Arabic is one of the hardest languages for tech terms... I strongly prefer to stick to the English version of the tech word.
I find users more familiar with the tech terms in English than in Arabic (at least the ones I've dealt with).
However, the Arabic terms sometimes have to be used for legal reasons, so somehow we have to put up with that in some software.
We used to play a little game in the tech department called Guess the Arabic Synonym, and we would laugh so hard at the unexpected word that a bunch of geeky guys didn't know - so imagine how it would be for users.
A: Design patterns in Danish would be something like "designmønstre", but I have never seen anyone use it.
With less than 6 million native speakers worldwide most computer science lingo is not translated to Danish. Instead the English terms are used. This obviously makes it easier for the reader to find other international sources on the subject, but it also leaves the original language somewhat mangled by the foreign terms.
A: Italian here. Usually, Italian programmers prefer using English names for things, although they often mispronounce them (as they do most English words). The main reasons for this are:
*
*Some terms are only used in the English form, and it's hard or impossible to translate them: stack, for example.
*Some words sound better in the English form: matrice (matrix) is the translation for array, but it's quite uncommon because it sounds bad.
Design pattern translates to schema di progettazione.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/110987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: flex webservice client Has anybody any useful links that can be used to consume a web service using flex?
The easier the solution the better
Thanks
A: Try http://livedocs.adobe.com/flex/3/langref/mx/rpc/soap/mxml/WebService.html for SOAP services. You just have to specify the WSDL location and the event handlers and call the service.
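A minimal sketch of what that looks like (the WSDL URL and the GetQuote operation here are placeholders, not a real service):
<mx:WebService id="service"
    wsdl="http://example.com/MyService?wsdl"
    result="onResult(event)"
    fault="onFault(event)"/>

<mx:Script>
    <![CDATA[
        import mx.rpc.events.ResultEvent;
        import mx.rpc.events.FaultEvent;

        private function callService():void {
            service.GetQuote("ADBE"); // any operation defined in the WSDL
        }

        private function onResult(event:ResultEvent):void {
            trace(event.result); // the decoded SOAP response
        }

        private function onFault(event:FaultEvent):void {
            trace(event.fault.faultString);
        }
    ]]>
</mx:Script>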
Flex Builder 3 also contains code generation capabilities for creating proxies for web services.
http://livedocs.adobe.com/flex/3/html/help.html?content=data_4.html
A: I found this tutorial pretty helpful - it gives clear examples of consuming a basic webservice, with code on both sides (server and Flex).
One thing to remember when accessing remote webservices on a remote server from Flex is the need for a crossdomain.xml - the security model in Flex needs to be explicitly told to allow access to a service from a remote domain.
A: I recommend AMF for consuming your own services (Java Remote Object is standard but there are others like pyAMF, RubyAMF).
This worked well for me to consume a REST web service:
http://code.google.com/p/as3httpclient/wiki/Links
Example
BlazeDS supports accessing external domains without a Crossdomain.xml:
http://www.adobe.com/cfusion/communityengine/index.cfm?event=showdetails&postId=10284&productId=2
A: Flex Builder 3 comes with code generation tools that let you build the ActionScript objects that correspond to the server-side transfer objects exposed by the web service WSDL. It can make your life easier when working with web services. Here is a good overview by Zee Yang.
Brian Riley and Clint Modien have written an open source tool called VOFactory which lets you cast wsdl objects to actionscript objects on the fly.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How to get a full call stack in Visual Studio 2005? How can I get a full call stack for a c++ application developed with Visual Studio 2005? I would like to have a full call stack including the code in the system libraries.
Do I have to change some settings in Visual Studio, or do I have to install additional software?
A: *
*Get debug information for all project dependencies. This is specified under the "Configuration Properties -> C/C++ -> General" section of the project properties.
*On the menu, go to "Tools -> Options" then select "Debugging -> Symbols".
*Add a new symbol location (the folder icon) that points to Microsoft's free symbol server: symsrv*symsrv.dll*c:\symbols*http://msdl.microsoft.com/download/symbols
*Fill out the "cache symbols" field with some place locally so you don't go to the internet all the time.
A: Agree with Clay, but for Symbols Server you should get the latest symsrv.DLL from "Debugging Tools For Windows", a free Microsoft download.
(Since you explicitly asked what you need to download, I presume you don't have it yet)
A: Or, optionally (assuming that Visual Studio is not installed), grab a copy of Windows Debugging Tools, install and either run your app from within the debugger (windbg.exe) or have it attach to an already running app:
windbg[.exe] -pn program.exe
or
windbg[.exe] -p process_id
Break in the debugger at the point you want to observe for stack trace (Ctrl+Break). Switch to the thread of interest (most probably the main thread of execution):
~0s
Fix up symbols for system modules (and probably for the app as well if available):
* fix up symbols for app
.sympath path_to_app_symbols
* configure where debugger will download and store system symbols
.symfix+ path_where_system_symbols_will_be_stored
* force debugger to reload symbols
.reload
Issue a call stack command:
kb
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Large array arithmetics in C# Which is the best way to store a 2D array in C# in order to optimize performance when performing lots of arithmetic on the elements in the array?
We have large (approx 1.5G) arrays, which for example we want to multiply with each other element by element. Performance is critical. The context in which this is done is C#. Is there any smart way of storing the arrays and iterating over them? Could we write these parts in unmanaged C++, and will this really increase performance? The arrays need to be accessible to the rest of the C# program.
Currently (in C) the array is stored as a single long vector. We perform calculations on each element in the array and overwrite the old value. The calculations are usually unique for each element in the vector.
Timing experiments show that storing and iterating over the data as a 1D array in C# is slower than storing it as a 2D array. I would like to know if there is an even better way of handling the data. The specific arithmetics performed are not relevant for the question.
A: Anna,
Here is a great page that discusses the performance difference between traditional scientific programming languages (Fortran, C++) and C#.
http://msdn.microsoft.com/en-us/magazine/cc163995.aspx
According to the article, C#, when using rectangular (2D) arrays, can be a very good performer. Here is a graph that shows the difference in performance between jagged arrays (an array of arrays) and rectangular (multi-dimensional) arrays.
(Figure: jagged vs. rectangular array performance - http://i.msdn.microsoft.com/cc163995.fig08.gif)
I would suggest experimenting yourself, and use the Performance Analysis in VS 2008 to compare.
If using C# is "fast enough" then your application will be that much easier to maintain.
Good Luck!
A: For best array performance, make sure you're using a single dimension array with lower index of 0.
To access the elements of the array as fast as possible, you can use unsafe pointers like so:
int[] array = Enumerable.Range(0, 1000).ToArray();
int count = 0;
unsafe {
fixed (int* pArray = array) {
for (int i = 0; i < array.Length; i++) {
count += *(pArray + i);
}
}
}
EDIT Drat! Didn't notice you said 2D array. This trick won't work as-is with a multi-dimensional array, so I'm not sure how much help it will be. You could, however, turn any array into a single-dimension array by doing some arithmetic on the array index. It just depends on whether you care more about the performance hit in indexing the array or in iterating over it.
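That said, a variation of the trick can be made to work on a rectangular array, because its elements are laid out contiguously in row-major order and you can pin its first element. A hedged sketch (the array name and sizes are just for illustration):
int[,] grid = new int[1000, 1000];
long sum = 0;
unsafe {
    // Pinning the first element yields a pointer to the whole
    // contiguous row-major block of the rectangular array.
    fixed (int* pGrid = &grid[0, 0]) {
        int length = grid.Length; // total element count (rows * cols)
        for (int i = 0; i < length; i++) {
            sum += *(pGrid + i);
        }
    }
}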
A: If you download F#, and reference one of the runtime libraries (I think it's FSharp.PowerPack), and use Microsoft.FSharp.Maths.Matrix. It optimises itself based on whether you are using a dense or sparse matrix.
A: Do you iterate the matrix by row, by column, or both? Do you always access nearby elements, or do you do random accesses on the matrix?
If there is some locality in your accesses but you're not accessing the matrix sequentially (typical in matrix multiplication, for example) then you can get a huge performance difference by storing your matrix in a more cache-friendly way.
A pretty easy way to do that is to write a little access function to turn your row/column indices into an index and work on a one-dimensional matrix, the cache-friendly way.
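A minimal sketch of such an access function, using a plain row-major mapping (the class and member names are illustrative):
class Matrix
{
    private readonly double[] data;
    private readonly int cols;

    public Matrix(int rows, int cols)
    {
        this.cols = cols;
        data = new double[rows * cols];
    }

    // Row-major mapping: elements of the same row sit next to each
    // other in memory, which is cache-friendly when scanning rows.
    public double this[int row, int col]
    {
        get { return data[row * cols + col]; }
        set { data[row * cols + col] = value; }
    }
}
For the cache-friendlier variant described next, only the row * cols + col computation would change.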
The function should group nearby coordinates into nearby indices. The Morton order can be used if you work on power-of-two sizes. For non-power-of-two sizes you can often bring just the lowest 4 bits into Morton order and use normal index arithmetic for the upper bits. You'll still get a significant speed-up, even if the coordinate-to-index conversion looks like a costly operation.
http://en.wikipedia.org/wiki/Z-order_(curve) <-- sorry, can't link that, SO does not like URLs with a dash in them. You have to cut'n'paste.
A speed-up of a factor of 10 or more is realistic, btw. It depends on the algorithm you run over your matrices though.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Comparison between Centralized and Distributed Version Control Systems What are the benefits and drawbacks with using Centralized versus Distributed Version Control Systems (DVCS)? Have you run into any problems in DVCS and how did you safeguard against these problems? Keep the discussion tool agnostic and flaming to minimum.
For those wondering what DVCS tools are available, here is a list of the best known free/open source DVCSs:
*
*Git, (written in C) used by the Linux Kernel and Ruby on Rails.
*Mercurial, (written in Python) used by Mozilla and OpenJDK.
*Bazaar, (written in Python) used by Ubuntu developers.
*Darcs, (written in Haskell).
A: To some extent, the two schemes are equivalent:
*
*A distributed VCS can trivially emulate a centralised one if you just always push your changes to some designated upstream repository after every local commit.
*A centralised VCS won't usually be able to emulate a distributed one quite as naturally, but you can get something very similar if you use something like quilt on top of it. Quilt, if you're not familiar with it, is a tool for managing large sets of patches on top of some upstream project. The idea here is that the DVCS commit command is implemented by creating a new patch, and the push command is implemented by committing every outstanding patch to the centralised VCS and then discarding the patch files. This sounds a bit awkward, but in practice it actually works rather nicely.
Having said that, there are a couple of things which DVCSes traditionally do very well and which most centralised VCSes make a bit of a hash of. The most important of these is probably branching: a DVCS will make it very easy to branch the repository or to merge branches which are no longer needed, and will keep track of history while you do so. There's no particular reason why a centralised scheme would have trouble with this, but historically nobody seems to have quite gotten it right yet. Whether that's actually a problem for you depends on how you're going to organise development, but for many people it's a significant consideration.
The other posited advantage of DVCSes is that they work offline. I've never really had much use for that; I mostly do development either at the office (so the repository's on the local network) or at home (so there's ADSL). If you do a lot of development on laptops while traveling then this might be more of a consideration for you.
There aren't actually very many gotchas which are specific to DVCSes. There's a slightly greater tendency for people to go quiet, because you can commit without pushing and it's easy to end up polishing things in private, but apart from that we haven't had very many problems. This may be because we have a significant number of open source developers, who are usually familiar with the patch-trading model of development, but incoming closed source developers also seem to pick things up reasonably quickly.
A: From my answer to a different question:
Distributed version control systems
(DVCSs) solve different problems than
Centralized VCSs. Comparing them is
like comparing hammers and
screwdrivers.
Centralized VCS systems are
designed with the intent that there is
One True Source that is Blessed, and
therefore Good. All developers work
(checkout) from that source, and then
add (commit) their changes, which then
become similarly Blessed. The only
real difference between CVS,
Subversion, ClearCase, Perforce,
VisualSourceSafe and all the other
CVCSes is in the workflow,
performance, and integration that each
product offers.
Distributed VCS systems are
designed with the intent that one
repository is as good as any other,
and that merges from one repository to
another are just another form of
communication. Any semantic value as
to which repository should be trusted
is imposed from the outside by
process, not by the software itself.
The real choice between using one type
or the other is organizational -- if
your project or organization wants
centralized control, then a DVCS is a
non-starter. If your developers are
expected to work all over the
country/world, without secure
broadband connections to a central
repository, then DVCS is probably your
salvation. If you need both, you're
fsck'd.
A: I have been using Subversion for many years now and I was really happy with it.
Then the Git buzz started and I just had to test it. And for me, the main selling point was branching. Oh boy. Now I no longer need to clean my repository, go back a few versions, or do any of the silly things I did when using Subversion. Everything is cheap in a DVCS. I have only tried Fossil and Git, though; I have also used Perforce, CVS and Subversion, and it looks like DVCSs all have really cheap branching and tagging. There is no longer a need to copy all the code to one side, and therefore merging is just a breeze.
Any DVCS can be set up with a central server, but what you get is, among other things:
You can check in any small change you like; as Linus says, if you need more than one sentence to describe what you just did, you are doing too much.
You can have your way with the code - branch, merge, clone and test, all locally, without causing anyone to download huge amounts of data.
And you only need to push the final changes to the central server.
And you can work with no network.
So in short, using version control is always a good thing. Using a DVCS is cheaper (in KB and bandwidth), and I think it is more fun to use.
To check out Git: http://git-scm.com/
To check out Fossil: http://www.fossil-scm.org
To check out Mercurial: https://www.mercurial-scm.org
Now, I can only recommend DVCS systems, and you can easily use a central server.
A:
To those who think distributed systems don't allow authoritative
copies please note that there are plenty of places where distributed
systems have authoritative copies, the perfect example is probably
Linus' kernel tree. Sure lots of people have their own trees but
almost all of them flow toward Linus' tree.
That said, I used to think that distributed SCMs were only useful for
lots of developers doing different things but recently have decided
that anything a centralized repository can do a distributed one can do
better.
For example, say you are a solo developer working on your own personal
project. A centralized repository might be an obvious choice but
consider this scenario. You are away from network access (on a plane,
at a park, etc) and want to work on your project. You have your local
copy so you can do work fine but you really want to commit because you
have finished one feature and want to move on to another, or you found
a bug to fix or whatever. The point is that with a centralized repo
you end up either mashing all the changes together and committing them
in a non-logical changeset or you manually split them out later.
With a distributed repo you go on business as usual, commit, move on,
when you have net access again you push to your "one true repo" and
nothing changed.
Not to mention the other nice thing about distributed repos: full
history available always. You need to look at the revision logs when
away from the net? You need to annotate the source to see how a bug
was introduced? All possible with distributed repos.
Please please don't believe that distributed vs centralized is about
ownership or authoritative copies or anything like that. The reality
is that distributed is the next step in the evolution of SCMs.
A: Distributed VCSs are appealing in many ways, but one disadvantage that will be important to my company is the issue of managing non-mergeable files (typically binary, e.g. Excel documents). Subversion deals with this by supporting the "svn:needs-lock" property, which means you must get a lock on the non-mergeable file before you edit it. It works well. But that work-flow requires a centralised repository model, which is contrary to the DVCS concept.
So if you want to use a DVCS, it is not really appropriate for managing files that are non-mergeable.
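For reference, the Subversion side of that lock-based workflow looks roughly like this (the file name is illustrative):
svn propset svn:needs-lock yes Budget.xls    # checkouts make the file read-only
svn commit -m "Require a lock for Budget.xls"
svn lock Budget.xls -m "Editing the budget"  # makes your working copy writable
# ... edit the file ...
svn commit -m "Update budget"                # the commit releases the lock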
A: The main problem (aside from the obvious bandwidth issue) is ownership.
That is, making sure that different (geographic) sites are not working on the same element as each other.
Ideally, the tool is able to assign ownership to a file, a branch or even a repository.
To answer the comments on this answer: you really want the tool to tell you who owns what, and then communicate (through phone, IM or mail) with the distant site.
If you have no ownership mechanism... you will "communicate", but often too late ;) (i.e.: after having done concurrent development on an identical set of files in the same branch. The commit can get messy.)
A: For me this is another discussion about personal taste and it's rather difficult to be really objective. I personally prefer Mercurial over the other DVCSs. I like to write hooks in the same language as Mercurial is written in, and the smaller network overhead - to name some of my own reasons.
A: Not really a comparison, but here are what big projects are using:
Centralized VCSes
*
*Subversion
Apache, GCC, Ruby, MPlayer, Zope, Plone, Xiph, FreeBSD, WebKit, ...
*CVS
CVS
Distributed VCSes
*
*git
Linux kernel, KDE, Perl, Ruby on Rails, Android, Wine, Fedora, X.org, Mediawiki, Django, VLC, Mono, Gnome, Samba, CUPS, GnuPG, Emacs ELPA...
*mercurial (hg)
Mozilla and Mozdev, OpenJDK (Java), OpenSolaris, ALSA, NTFS-3G, Dovecot, MoinMoin, mutt, PETSc, Octave, FEniCS, Aptitude, Python, XEmacs, Xen, Vim, Xine...
*bzr
Emacs, Apt, Mailman, MySQL, Squid, ... also promoted within Ubuntu.
*darcs
ghc, ion, xmonad, ... popular within Haskell community.
*fossil
SQLite
A: W. Craig Trader said this about DVCS and CVCS:
If you need both, you're fsck'd.
I wouldn't say you're fsck'd when using both. In practice, developers who use DVCS tools usually try to merge their changes (or send pull requests) against a central location (usually a release branch in a release repository). There is some irony in developers who use a DVCS but in the end stick with a centralized workflow; you can start to wonder if the distributed approach really is better than the centralized one.
There are some advantages with DVCS over a CVCS:
*
*The notion of uniquely recognizable commits makes sending patches between peers painless (see the Git sketch after this list). I.e. you make the patch as a commit, and share it with other developers who need it. Later, when everyone wants to merge together, that particular commit is recognized and can be compared between branches, leaving less chance of a merge conflict. Developers tend to send patches to each other by USB stick or e-mail regardless of the versioning tool you use. Unfortunately in the CVCS case, version control will register the commits as separate, failing to recognize that the changes are the same, leading to a higher chance of merge conflicts.
*You can have local experimental branches (cloned repositories can also be considered branches) that you don't need to show to others. That means breaking changes don't need to affect other developers if you haven't pushed anything upstream. In a CVCS, when you have a breaking change, you may have to work without committing until you've fixed it. This approach effectively defeats the purpose of using versioning as a safety net, but it is a necessary evil in a CVCS.
*In today's world, companies usually work with off-shore developers (or if even better they want to work from home). Having a DVCS helps these kind of projects out because it eliminates the need of a reliable network connection since everyone has their own repo.
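As a sketch of the patch-sharing point above, using Git (the patch file name is illustrative):
git format-patch -1 HEAD          # export the latest commit as 0001-....patch
# hand the file over by e-mail or USB stick, then on the receiving side:
git am 0001-fix-null-check.patch  # apply it as a commit, author info preserved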
…and some disadvantages that usually have workarounds:
*
*Who has the latest revision? In a CVCS, the trunk usually has the latest revision, but in a DVCS it may not be plainly obvious. The workaround is rules of conduct: the developers in a project have to come to an agreement on which repo to merge their work against.
*Pessimistic locks, i.e. locking a file on check-out, are usually not possible because of the concurrency that may happen between repositories in a DVCS. The reason file locking exists in version control is that developers want to avoid merge conflicts. However, locking has the disadvantage of slowing development down, as two developers can't work on the same piece of code simultaneously, as with a long transaction model, and it isn't a foolproof guarantee against merge conflicts. The only sane way, regardless of version control, to combat big merge conflicts is good code architecture (like low coupling, high cohesion) and dividing up your work tasks so that they have low impact on the code (which is easier said than done).
*In proprietary projects it would be disastrous if the whole repository became publicly available - even more so if a disgruntled or malicious programmer gets hold of a cloned repository. Source code leakage is a severe pain for proprietary businesses. DVCSs make this plain simple, as you only need to clone the repository, while some CM systems (such as ClearCase) try to restrict that access. However, in my opinion, if you have enough dysfunction in your company culture then no version control in the world will help you against source code leakage.
A: Everybody these days is on the bandwagon about how DVCSs are superior, but Craig's comment is important. In a DVCS, each person has the entire history of the branch. If you are working with a lot of binary files (for example, image files or FLAs), this requires a huge amount of space and you can't do diffs.
A: During my search for the right SCM, I found the following links to be of great help:
*
*Better SCM Initiative : Comparison. Comparison of about 26 version control systems.
*Comparison of revision control software. Wikipedia article comparing about 38 version control systems covering topics like technical differences, features, user interfaces, and more.
*Distributed version control systems. Another comparison, but focussed mainly on distributed systems.
A: I have a feeling that Mercurial (and other DVCS) are more sophisticated than the centralised ones. For instance, merging a branch in Mercurial keeps the complete history of the branch whereas in SVN you have to go to the branch directory to see the history.
A: Another plus for distributed SCM, even in a solo developer scenario, is if you, like many of us out there, have more than one machine you work on.
Let's say you have a set of common scripts. If each machine you work on has a clone, you can update and change your scripts on demand. It gives you:
*
*a time saver, especially with ssh keys
*a way to branch differences between different systems (e.g. Red Hat vs Debian, BSD vs Linux, etc.)
A: W. Craig Trader's answer sums up most of it; however, I find that personal work style makes a huge difference as well. Where I currently work we use Subversion as our One True Source; however, many developers use git-svn on their personal machines to compensate for workflow issues we have (failure of management, but that's another story). In any case, it's really about balancing the feature set that makes you most productive with what the organization needs (centralized authentication, for example).
A: A centralised system doesn't necessarily prevent you from using separate branches to do development on. There doesn't need to be a single true copy of the code base; rather, different developers or teams can have different branches, legacy branches can exist, etc.
What it does generally mean is that the repository is centrally managed - but that's generally an advantage in a company with a competent IT department, because it means there's only one place to back up and only one place to manage storage in.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "78"
}
|
Q: Supplementary development tools for Java What are your favorite supplementary tools for Java development?
Mine are:
1) Total Commander (due to the ability to search inside JARs).
2) JAD + Jadclipse (to understand and debug libraries)
And of course, Google. (can't really live without it)
A: PMD
PMD scans Java source code and looks for potential problems like:
* Possible bugs - empty try/catch/finally/switch statements
* Dead code - unused local variables, parameters and private methods
* Suboptimal code - wasteful String/StringBuffer usage
* Overcomplicated expressions - unnecessary if statements, for loops that could be while loops
* Duplicate code - copied/pasted code means copied/pasted bugs
A: Jython for interactive testing and exploration of all sorts of things.
A: *
*Eclipse Classic (with WebTools, Subclipse and Eclipse Checkstyle plugins)
*Maven
*Oracle SQL Developer
A: *
*Eclipse
*TextMate
*Ant
*Maven
*JUnit and friends
*Checkstyle (plugins for Eclipse and Maven)
*JAD
*DBVisualizer
A: *
*Maven for organizing and building your project
*Hudson to do this automatically ;-)
*Emma (and the EclEmma plugin for Eclipse) to get some insight in your code coverage
A: *
*Ultra Edit
*Agent Ransack
*DJ Java Decompiler
A: JavaRebel speeds up development by automatically hot deploying code changes to the running program.
A: I pretty much spend most of my time in Eclipse and at the command line.
With Eclipse I usually modify the keyboard bindings so I have features such as Open Type/Resource, Quick Outline, Show Refactor Menu and so on at my fingertips. I also install Q for Eclipse to enable good Maven integration, allowing me access to the source of my dependencies when coding.
At the command line it's tools such as Maven, Ant and Subversion that are used the most. I have a few commands to switch between JDKs to test that projects compile and run on all their intended targets.
I used to keep a copy of JAD around, but thanks to Maven and Q for Eclipse I hardly ever use it anymore. Decompiled code is not nearly as usable as the original.
I almost forgot: JConsole helps with monitoring your application; also, I use YourKit for more advanced profiling.
A: *
*Eclipse with:
*
*Subclipse
*JBoss Tools
*Ant
*Junit
*Ultraedit (for column editing)
*JAD
*Jarbrowser
*SQLYog (for MySQL), TOAD (for Oracle), Management Studio (for SQL Server)
Eclipse has already a lot to offer, thanks to the countless plugins (which support other languages and environments, too).
A: *
*Ant/Maven
*TextMate
*Google of course ;-)
A: *
*Groovy: my pseudo Java scratchpad
*Eclipse or Netbeans: whichever I am feeling like for an IDE
*Subversion: always need a good version control
A: FindBugs, Proguard, JProfiler, Cobertura.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Developing MS Word add-in Does anyone know of a good tool for developing add-ins for Word in .NET?
Hopefully something that supports both Office 2003 and 2007.
Thanks.
A: There are lots of options for development tools for Office. The most obvious one is of course Office itself. It has rich support for macros and VBA. You could also use SharePoint to extend document sharing and management functionality. But if your add-in is more complex than can be handled inside of Office, I suggest you use Visual Studio 2008 or the Tools For Office add-on for Visual Studio 2005.
One thing to keep in mind is that Office is mostly a collection of COM objects. So while tools like Visual Studio, with its deep support of the .NET Framework and Office classes make it very simple to develop solutions for Office applications, with some time, energy, and a high tolerance for pain, you could develop an Office add-in with Notepad.
Microsoft has a very nice resource site for Office developers here.
A: Several tools can be used to develop extensions for Office and there are quite a number of books on the subject. Some of the more popular approaches are:
*
*VBA comes with Office and can be used in two modes. In the first, macros can be written within the document or a template. This has the advantage that the code follows the document, and the disadvantage that you cannot easily propagate updates to existing documents. It can also be used to develop extensions by placing a document with the macros in the right folder and registering it with Office.
*Visual Studio Tools for Office allows you to do VBA-like projects, but with .NET. The assemblies can be bundled with the documents or installed as extensions. Note that VSTO is not necessary for doing non-bundled extensions - you can do this with any .NET development tool if you install the Primary Interop Assemblies for Office. These are shims that wrap the COM API with a native .NET one.
*Any language that supports COM (Component Object Model) can be used to develop Office extensions. Examples of such languages are C++, Delphi and Python.
A: Any version of Visual Studio will do the job. Remember to think about deployment and if you'd want to require the user to have this or that version of the .net framework installed.
A: Daniel Moth have made some very good VSTO primer webcasts, take a look at those.
A: Visual Studio 2008. VB.NET.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How do I convert legacy ASP applications to ASP.NET? We have a large ASP (classic ASP) application and we would like to convert it to .NET in order to work on further releases. It makes no sense continuing to use ASP as it is obsolete, and we don't want to rewrite it from scratch (Joel Spolsky tells you why).
Is there a way to convert it from ASP to ASP.NET automatically?
A: Even if there are tools to convert between classic ASP and ASP.NET, they're not going to generate very good results: the two environments are just too fundamentally different. A quick Google turns up a few results, mostly of the "we'll have our guys in India do it" variety.
My advice would be not to touch your existing ASP code for now. The runtime environment will be supported by Microsoft for the foreseeable future, so there's no urgent need to migrate. Instead, start working on new functionality in ASP.NET: this way, you won't be held back by legacy concepts, and can use the new coolness afforded by the Framework (including stuff like ASP.NET MVC) in any way you see fit.
Of course, your new code will need to work with the existing ASP environment. Sharing session state between ASP and ASP.NET will most likely be one of your first requirements, but you'll soon identify more issues like that.
The 'right' solution for such issues will depend entirely on your current code and requirements: sometimes, you'll be able to wrap .NET code in a COM object for use by your ASP code, sometimes partial porting/migration may be the solution.
However, on average, the 'two worlds' approach should be entirely feasible, and allow you to develop exciting new features without having to worry about your legacy code.
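For instance, the COM-wrapping route mentioned above might look roughly like this - a minimal sketch with made-up names; the assembly would be registered with regasm /codebase and then created from classic ASP with Server.CreateObject:
using System.Runtime.InteropServices;

namespace MyCompany.Interop
{
    [ComVisible(true)]
    [ProgId("MyCompany.OrderService")]
    public class OrderService
    {
        // Called from classic ASP:
        //   Set svc = Server.CreateObject("MyCompany.OrderService")
        //   Response.Write svc.GetOrderStatus(42)
        public string GetOrderStatus(int orderId)
        {
            // ... new .NET implementation here ...
            return "Shipped";
        }
    }
}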
December 2009 addition to original answer: Just came across the ASP Classic Compiler, which is an actively maintained VBscript compiler that converts classic ASP pages into code that runs natively on ASP.NET. It has several cool features, such as the ability to use it as a ASP.NET MVC custom ViewEngine, so despite its beta status, it would definitely seem worth keeping an eye on...
A: gmStudio is a comprehensive VB6/ASP/COM to .NET upgrade tool. It can read, analyze, interpret, rewrite and restructure (as C# or VB.NET) individual pages+includes or entire sites.
The technology has been in active development since 2007 and we have used it to help us rewrite sites ranging from a few hundred pages to 1000s of pages.
The tool is endorsed on MSDN here.
An (old) demo video is on ScreenCast here. (I really need to update this! Until then, please let me know if you want a live demo to see the latest.)
There is a lot more to tell; please contact us if you are interested.
Disclaimer: I work for Great Migrations.
A: Well,
I used to work for a company where all web apps were classic ASP.
When the decision was made to move to .NET, we had to find a way to transform 168(!) web apps onto this new framework.
I tried all the tools available at the time to do this and all failed.
The best way is to build a new web server and start there from scratch; this way you can be sure that the upgrade will happen fast and will work without any hiccups caused by old-new integration. You will be able to choose which functionality and visual appearance to keep and which to change.
Do not waste your time on automatic tools to upgrade your old ASP files/sites to the .NET platform. None so far have ever worked properly.
And on top of that, if you have a database on the back end, you will run into problems connecting to it from the web apps.
A: Microsoft has an article up on MSDN that talks about Migrating ASP Pages to ASP.NET. They basically tell you to install .net on your computer/server and the transform one page at a time. ASP and ASP.NET can co-exist so can can rename each page to "aspx" as you go. You should note, however, that session state and application state are not shared between ASP and ASP.NET pages (See @mdb's answer for a workaround on that problem.)
There is also The ASP to ASP.NET Migration Assistant, but I'm not sure that project/program is still active. You can try it by downloading from this page:
http://www.asp.net/downloads/archived/migration-assistants/asp-to-aspnet/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: Shut-down script on Windows to delete a registry key? EDIT: This was formerly more explicitly titled "Best solution to stop Kontiki's KHOST.EXE from loading automatically at start-up on Windows XP?"
Essentially, whenever the 4oD application is run it sets up khost.exe to automatically start up with Windows. This is annoying as it increases my boot-up time by a couple of minutes, and I don't even use the P2P aspect of 4oD anyway.
The registry keys that are set are:
Command: C:\Program Files\Kontiki\KHost.exe -all
Description: kdx
Location: HKU\S-1-5-21-1757981266-1960408961-839522115-1003\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
Name: kdx
Setting ID:
User: LAPTOP\Me
Command: "C:\Program Files\Kontiki\KHost.exe" -all
Description: 4oD
Location: HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
Name: 4oD
Setting ID:
User: All Users
I'm assuming some kind of start-up or shut-down script to delete these registry keys would be the best solution, but I'm not that up with .vbs or .bat scripting or where I'd put them to automatically run at an appropriate time.
I know there is a TV On-Demand Monitor application, but I don't really need to be running yet another process, I just need to delete the registry keys as I describe above.
A: What I ended up doing in the end:
1) Stopped 40D from the task tray with a right-click > exit which terminated the Khost.exe process.
2) Opened Start > All Programs > Administrative Tools > Services and stopped KService then set the Startup Type to 'Manual'.
3) Created a ShutdownScript.vbs with the following content:
Set SH = CreateObject("WScript.Shell")
RemoveRegKey "HKU\S-1-5-21-1757981266-1960408961-839522115-1003\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\kdx"
RemoveRegKey "HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\kdx"
RemoveRegKey "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\4oD"
Shutdown
Set SH = Nothing
WScript.Quit
Sub RemoveRegKey(sKey)
On Error Resume Next
SH.RegDelete sKey
End Sub
Sub Shutdown()
SH.Run "shutdown -s -t 1", 0, TRUE
End Sub
4) Put a shortcut to the script in my Start Menu and now use that to shut the PC down.
Now 4oD will work when I need it, and all I have to do is quit it and shut down with the script to stop it auto-starting every time I boot up the PC.
THANKS FOR ALL YOUR HELP WITH THIS! :)
A: Why not just copy the executable to some other name, and put a do-nothing exe in its place? Then change your shortcuts to the copied and renamed EXE. If the program is sensitive to its name, then point your shortcuts to a VBS file that temporarily renames the EXE file (see the sketch below).
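A rough sketch of that rename-and-run VBS (the paths and the _real suffix are made up for illustration):
Set fso = CreateObject("Scripting.FileSystemObject")
Set sh = CreateObject("WScript.Shell")
' Put the real EXE back under its expected name
fso.MoveFile "C:\Program Files\Kontiki\KHost_real.exe", "C:\Program Files\Kontiki\KHost.exe"
' Run it and wait for it to exit
sh.Run """C:\Program Files\Kontiki\KHost.exe""", 1, True
' Hide it again so it cannot auto-start
fso.MoveFile "C:\Program Files\Kontiki\KHost.exe", "C:\Program Files\Kontiki\KHost_real.exe"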
A: For the VBScript you would use something like this:
Dim WSHShell
Set WSHShell = WScript.CreateObject("WScript.Shell")
'repeat the line below for each key to delete
WSHShell.RegDelete "[Location of Key]"
Just drop the code into a text file and rename it something like shutdown.vbs.
As for when to run it, if you are in a corporate environment you could use a group policy and set it as a machine shutdown script. Alternatively, see this page here about adding it manually
A: Another method:
Create a VBS file that runs the program and then deletes the registry keys.
Set objShell = CreateObject("WScript.Shell")
objShell.Exec("""C:\Program Files\Kontiki\KHost.exe""")
strRoot = "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\4oD"
objShell.RegDelete strRoot ' RegDelete is a Sub, so there is no return value to assign
...
And point your shortcuts at that.
A: May I suggest you give AutoIt (http://www.autoitscript.com/autoit3/) a try - a freeware scripting language designed for automating the Windows GUI and general scripting.
If you choose to use it, the AutoIt code for your need would be a 2-liner:
RegDelete("YourKey", "YourValue")
ShutDown(1)
And you can compile it into a standalone exe that can run on any computer (no runtime library needed)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How do JavaScript closures work? How would you explain JavaScript closures to someone with a knowledge of the concepts they consist of (for example functions, variables and the like), but does not understand closures themselves?
I have seen the Scheme example given on Wikipedia, but unfortunately it did not help.
A: A closure is where an inner function has access to variables in its outer function. That's probably the simplest one-line explanation you can get for closures.
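A tiny illustration:
function outer() {
  var x = 10;
  function inner() {
    console.log(x); // inner "closes over" outer's variable x
  }
  inner(); // 10
}
outer();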
A: Example for the first point by dlaliberte:
A closure is not only created when you return an inner function. In fact, the enclosing function does not need to return at all. You might instead assign your inner function to a variable in an outer scope, or pass it as an argument to another function where it could be used immediately. Therefore, the closure of the enclosing function already exists at the time the enclosing function is called, since any inner function has access to it as soon as it is called.
var i;
function foo(x) {
var tmp = 3;
i = function (y) {
console.log(x + y + (++tmp));
}
}
foo(2);
i(3);
A: I like Kyle Simpson's definition of a closure:
Closure is when a function is able to remember and access its lexical
scope even when that function is executing outside its lexical scope.
Lexical scope is when an inner scope can access its outer scope.
Here is a modified example he provides in his book series 'You Don't Know JS: Scopes & Closures'.
function foo() {
var a = 2;
function bar() {
console.log( a );
}
return bar;
}
function test() {
var bz = foo();
bz();
}
// prints 2. Here function bar referred by var bz is outside
// its lexical scope but it can still access it
test();
A: MDN explains it best I think:
Closures are functions that refer to independent (free) variables. In other words, the function defined in the closure 'remembers' the environment in which it was created.
A closure always has an outer function and an inner function. The inner function is where all the work happens, and the outer function is just the environment that preserves the scope where the inner function was created. In this way, the inner function of a closure 'remembers' the environment/scope in which it was created. The most classic example is a counter function:
var closure = function() {
var count = 0;
return function() {
count++;
console.log(count);
};
};
var counter = closure();
counter() // logs 1
counter() // logs 2
counter() // logs 3
In the above code, count is preserved by the outer function (environment function), so that every time you call counter(), the inner function (work function) can increment it.
A: I know there are plenty of solutions already, but I guess that this small and simple script can be useful to demonstrate the concept:
// makeSequencer will return a "sequencer" function
var makeSequencer = function() {
var _count = 0; // not accessible outside this function
var sequencer = function () {
return _count++;
}
return sequencer;
}
var fnext = makeSequencer();
var v0 = fnext(); // v0 = 0;
var v1 = fnext(); // v1 = 1;
var vz = fnext._count // vz = undefined
A: You're having a sleep over and you invite Dan.
You tell Dan to bring one XBox controller.
Dan invites Paul.
Dan asks Paul to bring one controller. How many controllers were brought to the party?
function sleepOver(howManyControllersToBring) {
var numberOfDansControllers = howManyControllersToBring;
return function danInvitedPaul(numberOfPaulsControllers) {
var totalControllers = numberOfDansControllers + numberOfPaulsControllers;
return totalControllers;
}
}
var howManyControllersToBring = 1;
var inviteDan = sleepOver(howManyControllersToBring);
// The only reason Paul was invited is because Dan was invited.
// So we set Paul's invitation = Dan's invitation.
var danInvitedPaul = inviteDan(howManyControllersToBring);
alert("There were " + danInvitedPaul + " controllers brought to the party.");
A: The author of Closures has explained closures pretty well, explaining the reason why we need them and also explaining LexicalEnvironment, which is necessary for understanding closures.
Here is the summary:
What if a variable is accessed, but it isn’t local? Like here:
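For example, something like this (an illustrative reconstruction of the snippet):
function f() {
  alert(a); // 'a' is not local to f
}
var a = 5;
f(); // 5 - found in the outer (window) LexicalEnvironment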
In this case, the interpreter finds the variable in the
outer LexicalEnvironment object.
The process consists of two steps:
*
*First, when a function f is created, it is not created in an empty space. There is a current LexicalEnvironment object. In the case above, it's window (a is undefined at the time of function creation). When a function is created, it gets a hidden property, named [[Scope]], which references the current LexicalEnvironment.
*Second, when a variable is read inside the function, it is first searched for in the function's own LexicalEnvironment, then in the outer one referenced by [[Scope]], and so on up the chain. If a variable is read, but cannot be found anywhere, an error is generated.
Nested functions
Functions can be nested one inside another, forming a chain of LexicalEnvironments which can also be called a scope chain.
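For instance (an illustrative sketch):
function f() {
  var a = 1;
  function g() {
    alert(a); // g can see its own scope, then f's (a, g), then window (f)
  }
  g();
}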
So, function g has access to g, a and f.
Closures
A nested function may continue to live after the outer function has finished:
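For instance (a sketch along the lines of the original example):
function User(name) {
  this.say = function(phrase) {
    alert(name + ' says: ' + phrase);
  };
}
var user = new User('John');
user.say('Hello'); // User() has finished, but say still sees 'name'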
(The original article includes a diagram marking up the LexicalEnvironments here.)
As we see, this.say is a property in the user object, so it continues to live after User completed.
And if you remember, when this.say is created, it (like every function) gets an internal reference this.say.[[Scope]] to the current LexicalEnvironment. So, the LexicalEnvironment of the current User execution stays in memory. All variables of User are also its properties, so they are also carefully kept, not junked as they usually would be.
The whole point is to ensure that if the inner function wants to access an outer variable in the future, it is able to do so.
To summarize:
*
*The inner function keeps a reference to the outer
LexicalEnvironment.
*The inner function may access variables from it
any time even if the outer function is finished.
*The browser keeps the LexicalEnvironment and all its properties (variables) in memory until there is an inner function which references it.
This is called a closure.
A: A closure is a pairing of:
*
*A function and
*A reference to that function's outer scope (lexical environment)
A lexical environment is part of every execution context (stack frame) and is a map between identifiers (i.e. local variable names) and values.
Every function in JavaScript maintains a reference to its outer lexical environment. This reference is used to configure the execution context created when a function is invoked. This reference enables code inside the function to "see" variables declared outside the function, regardless of when and where the function is called.
If a function was called by a function, which in turn was called by another function, then a chain of references to outer lexical environments is created. This chain is called the scope chain.
In the following code, inner forms a closure with the lexical environment of the execution context created when foo is invoked, closing over variable secret:
function foo() {
const secret = Math.trunc(Math.random() * 100)
return function inner() {
console.log(`The secret number is ${secret}.`)
}
}
const f = foo() // `secret` is not directly accessible from outside `foo`
f() // The only way to retrieve `secret`, is to invoke `f`
In other words: in JavaScript, functions carry a reference to a private "box of state", to which only they (and any other functions declared within the same lexical environment) have access. This box of state is invisible to the caller of the function, delivering an excellent mechanism for data-hiding and encapsulation.
And remember: functions in JavaScript can be passed around like variables (first-class functions), meaning these pairings of functionality and state can be passed around your program: similar to how you might pass an instance of a class around in C++.
If JavaScript did not have closures, then more state would have to be passed between functions explicitly, making parameter lists longer and code noisier.
So, if you want a function to always have access to a private piece of state, you can use a closure.
...and frequently we do want to associate the state with a function. For example, in Java or C++, when you add a private instance variable and a method to a class, you are associating the state with functionality.
In C and most other common languages, after a function returns, all the local variables are no longer accessible because the stack-frame is destroyed. In JavaScript, if you declare a function within another function, then the local variables of the outer function can remain accessible after returning from it. In this way, in the code above, secret remains available to the function object inner, after it has been returned from foo.
Uses of Closures
Closures are useful whenever you need a private state associated with a function. This is a very common scenario - and remember: JavaScript did not have a class syntax until 2015, and it still does not have a private field syntax. Closures meet this need.
Private Instance Variables
In the following code, the function toString closes over the details of the car.
function Car(manufacturer, model, year, color) {
return {
toString() {
return `${manufacturer} ${model} (${year}, ${color})`
}
}
}
const car = new Car('Aston Martin', 'V8 Vantage', '2012', 'Quantum Silver')
console.log(car.toString())
Functional Programming
In the following code, the function inner closes over both fn and args.
function curry(fn) {
const args = []
return function inner(arg) {
if(args.length === fn.length) return fn(...args)
args.push(arg)
return inner
}
}
function add(a, b) {
return a + b
}
const curriedAdd = curry(add)
console.log(curriedAdd(2)(3)()) // 5
Event-Oriented Programming
In the following code, function onClick closes over variable BACKGROUND_COLOR.
const $ = document.querySelector.bind(document)
const BACKGROUND_COLOR = 'rgba(200, 200, 242, 1)'
function onClick() {
$('body').style.background = BACKGROUND_COLOR
}
$('button').addEventListener('click', onClick)
<button>Set background color</button>
Modularization
In the following example, all the implementation details are hidden inside an immediately executed function expression. The functions tick and toString close over the private state and functions they need to complete their work. Closures have enabled us to modularize and encapsulate our code.
let namespace = {};
(function foo(n) {
let numbers = []
function format(n) {
return Math.trunc(n)
}
function tick() {
numbers.push(Math.random() * 100)
}
function toString() {
return numbers.map(format)
}
n.counter = {
tick,
toString
}
}(namespace))
const counter = namespace.counter
counter.tick()
counter.tick()
console.log(counter.toString())
Examples
Example 1
This example shows that the local variables are not copied in the closure: the closure maintains a reference to the original variables themselves. It is as though the stack-frame stays alive in memory even after the outer function exits.
function foo() {
let x = 42
let inner = () => console.log(x)
x = x + 1
return inner
}
foo()() // logs 43
Example 2
In the following code, three methods log, increment, and update all close over the same lexical environment.
And every time createObject is called, a new execution context (stack frame) is created and a completely new variable x, and a new set of functions (log etc.) are created, that close over this new variable.
function createObject() {
let x = 42;
return {
log() { console.log(x) },
increment() { x++ },
update(value) { x = value }
}
}
const o = createObject()
o.increment()
o.log() // 43
o.update(5)
o.log() // 5
const p = createObject()
p.log() // 42
Example 3
If you are using variables declared using var, be careful you understand which variable you are closing over. Variables declared using var are hoisted. This is much less of a problem in modern JavaScript due to the introduction of let and const.
In the following code, each time around the loop, a new function inner is created, which closes over i. But because var i is hoisted outside the loop, all of these inner functions close over the same variable, meaning that the final value of i (3) is printed, three times.
function foo() {
var result = []
for (var i = 0; i < 3; i++) {
result.push(function inner() { console.log(i) } )
}
return result
}
const result = foo()
// The following will print `3`, three times...
for (var i = 0; i < 3; i++) {
result[i]()
}
Final points:
*
*Whenever a function is declared in JavaScript, a closure is created.
*Returning a function from inside another function is the classic example of closure, because the state inside the outer function is implicitly available to the returned inner function, even after the outer function has completed execution.
*Whenever you use eval() inside a function, a closure is used. The text you eval can reference local variables of the function, and in the non-strict mode, you can even create new local variables by using eval('var foo = …').
*When you use new Function(…) (the Function constructor) inside a function, it does not close over its lexical environment: it closes over the global context instead. The new function cannot reference the local variables of the outer function.
*A closure in JavaScript is like keeping a reference (NOT a copy) to the scope at the point of function declaration, which in turn keeps a reference to its outer scope, and so on, all the way to the global object at the top of the scope chain.
*A closure is created when a function is declared; this closure is used to configure the execution context when the function is invoked.
*A new set of local variables is created every time a function is called.
Links
*
*Douglas Crockford's simulated private attributes and private methods for an object, using closures.
*A great explanation of how closures can cause memory leaks in IE if you are not careful.
*MDN documentation on JavaScript Closures.
A: Taking the question seriously, we should find out what a typical 6-year-old is capable of cognitively, though admittedly, one who is interested in JavaScript is not so typical.
On Childhood Development: 5 to 7 Years it says:
Your child will be able to follow two-step directions. For example, if you say to your child, "Go to the kitchen and get me a trash bag" they will be able to remember that direction.
We can use this example to explain closures, as follows:
The kitchen is a closure that has a local variable, called trashBags. There is a function inside the kitchen called getTrashBag that gets one trash bag and returns it.
We can code this in JavaScript like this:
function makeKitchen() {
var trashBags = ['A', 'B', 'C']; // only 3 at first
return {
getTrashBag: function() {
return trashBags.pop();
}
};
}
var kitchen = makeKitchen();
console.log(kitchen.getTrashBag()); // returns trash bag C
console.log(kitchen.getTrashBag()); // returns trash bag B
console.log(kitchen.getTrashBag()); // returns trash bag A
Further points that explain why closures are interesting:
*
*Each time makeKitchen() is called, a new closure is created with its own separate trashBags.
*The trashBags variable is local to the inside of each kitchen and is not accessible outside, but the inner function on the getTrashBag property does have access to it.
*Every function call creates a closure, but there would be no need to keep the closure around unless an inner function, which has access to the inside of the closure, can be called from outside the closure. Returning the object with the getTrashBag function does that here.
A: Closure is when a function is closed over the namespace in which it was defined - a namespace which can no longer be entered from the outside by the time the function is called.
In JavaScript, it happens when you:
*
*Define one function inside the other function
*The inner function is called after the outer function returned
// 'name' is resolved in the namespace created for one invocation of bindMessage
// the processor cannot enter this namespace by the time displayMessage is called
function bindMessage(name, div) {
function displayMessage() {
alert('This is ' + name);
}
$(div).click(displayMessage);
}
A: For a six-year-old ...
Do you know what objects are?
Objects are things that have properties and do stuff.
One of the most important things about closures is that they let you make objects in JavaScript. Objects in JavaScript are built from functions, and closures let JavaScript keep the values of an object's properties once it has been created.
Objects are very useful and keep everything nice and organised. Different objects can do different jobs and working together objects can do complicated things.
It's lucky that JavaScript has closures for making objects, otherwise everything would become a messy nightmare.
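To keep it concrete, here is a minimal sketch of an "object" built from a function and a closure (the names are made up for illustration):
function makeDog(name) {
    var tricksLearned = 0; // a property whose value the closure keeps stored
    return {
        learnTrick: function() { tricksLearned += 1; },
        showOff: function() { console.log(name + ' knows ' + tricksLearned + ' trick(s)'); }
    };
}
var rex = makeDog('Rex');
rex.learnTrick();
rex.showOff(); // Rex knows 1 trick(s)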
A: This is how a beginner wrapped their head around closures: a function wrapped inside another function's body, also known as a closure.
Definition from the book Speaking JavaScript: "A closure is a function plus the connection to the scope in which the function was created." - Dr. Axel Rauschmayer
So what could that look like? Here is an example
function newCounter() {
    var counter = 0;
    return function increment() {
        counter += 1;
        console.log('Number of events: ' + counter);
    }
}
var counter1 = newCounter();
var counter2 = newCounter();
counter1(); // Number of events: 1
counter1(); // Number of events: 2
counter2(); // Number of events: 1
counter1(); // Number of events: 3
increment closes over counter: the counter variable can be referenced and updated by increment.
counter1 and counter2 will keep track of their own value.
Simple but hopefully a clear perspective of what a closure is around all these great and advanced answers.
A: Closures are not difficult to understand. It only depends on your point of view.
I personally like to explain them with cases from daily life.
function createCar()
{
var rawMaterial = [/* lots of object */];
    function transformation(rawMaterials)
    {
        /* lots of changes here */
        var transformedMaterial = rawMaterials; // placeholder for the real transformation
        return transformedMaterial;
    }
var transformedMaterial = transformation(rawMaterial);
    function assemblage(transformedMaterial)
    {
        /* assembly of parts */
        var car = transformedMaterial; // placeholder for the real assembly
        return car;
    }
return assemblage(transformedMaterial);
}
We only need to go through certain steps in particular cases; the transformation of raw materials, for instance, is only useful once you have the parts.
A: JavaScript functions can access their:
*
*Arguments
*Locals (that is, their local variables and local functions)
*Environment, which includes:
*
*globals, including the DOM
*anything in outer functions
If a function accesses its environment, then the function is a closure.
Note that outer functions are not required, though they do offer benefits I don't discuss here. By accessing data in its environment, a closure keeps that data alive. In the subcase of outer/inner functions, an outer function can create local data and eventually exit, and yet, if any inner function(s) survive after the outer function exits, then the inner function(s) keep the outer function's local data alive.
Example of a closure that uses the global environment:
Imagine that the Stack Overflow Vote-Up and Vote-Down button events are implemented as closures, voteUp_click and voteDown_click, that have access to external variables isVotedUp and isVotedDown, which are defined globally. (For simplicity's sake, I am referring to StackOverflow's Question Vote buttons, not the array of Answer Vote buttons.)
When the user clicks the VoteUp button, the voteUp_click function checks whether isVotedDown == true to determine whether to vote up or merely cancel a down vote. Function voteUp_click is a closure because it is accessing its environment.
var isVotedUp = false;
var isVotedDown = false;
function voteUp_click() {
if (isVotedUp)
return;
else if (isVotedDown)
SetDownVote(false);
else
SetUpVote(true);
}
function voteDown_click() {
if (isVotedDown)
return;
else if (isVotedUp)
SetUpVote(false);
else
SetDownVote(true);
}
function SetUpVote(status) {
isVotedUp = status;
// Do some CSS stuff to Vote-Up button
}
function SetDownVote(status) {
isVotedDown = status;
// Do some CSS stuff to Vote-Down button
}
All four of these functions are closures as they all access their environment.
A: There once was a caveman
function caveman() {
who had a very special rock,
var rock = "diamond";
You could not get the rock yourself because it was in the caveman's private cave. Only the caveman knew how to find and get the rock.
return {
getRock: function() {
return rock;
}
};
}
Luckily, he was a friendly caveman, and if you were willing to wait for his return, he would gladly get it for you.
var friend = caveman();
var rock = friend.getRock();
Pretty smart caveman.
A: My perspective of Closures:
Closures can be compared to a book, with a bookmark, on a bookshelf.
Suppose you have read a book, and you like some page in the book. You put in a bookmark at that page to track it.
Now once you finish reading the book, you do not need the book anymore, except that you still want access to that page. You could have just cut out the page, but then you would lose the context of the story. So you put the book back on your bookshelf with the bookmark.
This is similar to a closure. The book is the outer function, and the page is your inner function, which gets returned, from the outer function. The bookmark is the reference to your page, and the context of the story is the lexical scope, which you need to retain. The bookshelf is the function stack, which cannot be cleaned up of the old books, till you hold onto the page.
Code Example:
function book() {
var pages = [....]; //array of pages in your book
var bookMarkedPage = 20; //bookmarked page number
function getPage(){
return pages[bookMarkedPage];
}
return getPage;
}
var myBook = book(),
    myPage = myBook(); // book() returns the getPage function itself
When you run the book() function, you are allocating memory in the stack for the function to run in. But since it returns a function, the memory cannot be released, as the inner function has access to the variables from the context outside it, in this case 'pages' and 'bookMarkedPage'.
So effectively, calling book() returns a reference to a closure: not only a function, but a reference to the book and its context, i.e. a reference to the function getPage and the state of the pages and bookMarkedPage variables.
Some points to consider:
Point 1:
The bookshelf, just like the function stack has limited space, so use it wisely.
Point 2:
Think about the fact, whether you need to hold onto the entire book when you just want to track a single page. You can release part of the memory, by not storing all the pages in the book when the closure is returned.
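A minimal sketch of Point 2, varying the code above (my own suggestion): capture just the page you need, so the closure no longer has to hold the whole book.
function book() {
    var pages = [/* array of pages in your book */];
    var bookMarkedPage = 20;
    var page = pages[bookMarkedPage]; // copy out only the page we care about
    return function getPage() {
        return page; // the closure keeps just `page` alive, not the whole `pages` array
    };
}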
This is my perspective of Closures. Hope it helps, and if anyone thinks that this is not correct, please do let me know, as I am very interested to understand even more about scopes and closures!
A: Let's start from here, As defined on MDN: Closures are functions that refer to independent (free) variables (variables that are used locally, but defined in an enclosing scope). In other words, these functions 'remember' the environment in which they were created.
Lexical scoping
Consider the following:
function init() {
var name = 'Mozilla'; // name is a local variable created by init
function displayName() { // displayName() is the inner function, a closure
alert(name); // use variable declared in the parent function
}
displayName();
}
init();
init() creates a local variable called name and a function called displayName(). The displayName() function is an inner function that is defined inside init() and is only available within the body of the init() function. The displayName() function has no local variables of its own. However, because inner functions have access to the variables of outer functions, displayName() can access the variable name declared in the parent function, init().
Run the code and notice that the alert() statement within the displayName() function successfully displays the value of the name variable, which is declared in its parent function. This is an example of lexical scoping, which describes how a parser resolves variable names when functions are nested. The word "lexical" refers to the fact that lexical scoping uses the location where a variable is declared within the source code to determine where that variable is available. Nested functions have access to variables declared in their outer scope.
Closure
Now consider the following example:
function makeFunc() {
var name = 'Mozilla';
function displayName() {
alert(name);
}
return displayName;
}
var myFunc = makeFunc();
myFunc();
Running this code has exactly the same effect as the previous example of the init() function above: this time, the string "Mozilla" will be displayed in a JavaScript alert box. What's different — and interesting — is that the displayName() inner function is returned from the outer function before being executed.
At first glance, it may seem unintuitive that this code still works. In some programming languages, the local variables within a function exist only for the duration of that function's execution. Once makeFunc() has finished executing, you might expect that the name variable would no longer be accessible. However, because the code still works as expected, this is obviously not the case in JavaScript.
The reason is that functions in JavaScript form closures. A closure is the combination of a function and the lexical environment within which that function was declared. This environment consists of any local variables that were in-scope at the time that the closure was created. In this case, myFunc is a reference to the instance of the function displayName created when makeFunc is run. The instance of displayName maintains a reference to its lexical environment, within which the variable name exists. For this reason, when myFunc is invoked, the variable name remains available for use and "Mozilla" is passed to alert.
Here's a slightly more interesting example — a makeAdder function:
function makeAdder(x) {
return function(y) {
return x + y;
};
}
var add5 = makeAdder(5);
var add10 = makeAdder(10);
console.log(add5(2)); // 7
console.log(add10(2)); // 12
In this example, we have defined a function makeAdder(x), which takes a single argument, x, and returns a new function. The function it returns takes a single argument, y, and returns the sum of x and y.
In essence, makeAdder is a function factory — it creates functions which can add a specific value to their argument. In the above example we use our function factory to create two new functions — one that adds 5 to its argument, and one that adds 10.
add5 and add10 are both closures. They share the same function body definition, but store different lexical environments. In add5's lexical environment, x is 5, while in the lexical environment for add10, x is 10.
Practical closures
Closures are useful because they let you associate some data (the lexical environment) with a function that operates on that data. This has obvious parallels to object oriented programming, where objects allow us to associate some data (the object's properties) with one or more methods.
Consequently, you can use a closure anywhere that you might normally use an object with only a single method.
Situations where you might want to do this are particularly common on the web. Much of the code we write in front-end JavaScript is event-based — we define some behavior, then attach it to an event that is triggered by the user (such as a click or a keypress). Our code is generally attached as a callback: a single function which is executed in response to the event.
For instance, suppose we wish to add some buttons to a page that adjust the text size. One way of doing this is to specify the font-size of the body element in pixels, then set the size of the other elements on the page (such as headers) using the relative em unit:
body {
font-family: Helvetica, Arial, sans-serif;
font-size: 12px;
}
h1 {
font-size: 1.5em;
}
h2 {
font-size: 1.2em;
}
Our interactive text size buttons can change the font-size property of the body element, and the adjustments will be picked up by other elements on the page thanks to the relative units.
Here's the JavaScript:
function makeSizer(size) {
return function() {
document.body.style.fontSize = size + 'px';
};
}
var size12 = makeSizer(12);
var size14 = makeSizer(14);
var size16 = makeSizer(16);
size12, size14, and size16 are now functions which will resize the body text to 12, 14, and 16 pixels, respectively. We can attach them to buttons (in this case links) as follows:
document.getElementById('size-12').onclick = size12;
document.getElementById('size-14').onclick = size14;
document.getElementById('size-16').onclick = size16;
<a href="#" id="size-12">12</a>
<a href="#" id="size-14">14</a>
<a href="#" id="size-16">16</a>
To read more about closures, visit the article on MDN.
A: The Straw Man
I need to know how many times a button has been clicked and do something on every third click...
Fairly Obvious Solution
// Declare counter outside event handler's scope
var counter = 0;
var element = document.getElementById('button');
element.addEventListener("click", function() {
// Increment outside counter
counter++;
if (counter === 3) {
// Do something every third time
console.log("Third time's the charm!");
// Reset counter
counter = 0;
}
});
<button id="button">Click Me!</button>
Now this will work, but it does encroach into the outer scope by adding a variable, whose sole purpose is to keep track of the count. In some situations, this would be preferable as your outer application might need access to this information. But in this case, we are only changing every third click's behavior, so it is preferable to enclose this functionality inside the event handler.
Consider this option
var element = document.getElementById('button');
element.addEventListener("click", (function() {
// init the count to 0
var count = 0;
return function(e) { // <- This function becomes the click handler
count++; // and will retain access to the above `count`
if (count === 3) {
// Do something every third time
console.log("Third time's the charm!");
//Reset counter
count = 0;
}
};
})());
<button id="button">Click Me!</button>
Notice a few things here.
In the above example, I am using the closure behavior of JavaScript. This behavior allows any function to have access to the scope in which it was created, indefinitely. To practically apply this, I immediately invoke a function that returns another function, and because the function I'm returning has access to the internal count variable (because of the closure behavior explained above) this results in a private scope for use by the resulting function... Not so simple? Let's break it down...
A simple one-line closure
// _______________________Immediately invoked______________________
// | |
// | Scope retained for use ___Returned as the____ |
// | only by returned function | value of func | |
// | | | | | |
// v v v v v v
var func = (function() { var a = 'val'; return function() { alert(a); }; })();
All variables outside the returned function are available to the returned function, but they are not directly available to the returned function object...
func(); // Alerts "val"
func.a; // Undefined
Get it? So in our primary example, the count variable is contained within the closure and always available to the event handler, so it retains its state from click to click.
Also, this private variable state is fully accessible, both for reading and for assigning to its privately scoped variables.
There you go; you're now fully encapsulating this behavior.
Full Blog Post (including jQuery considerations)
A: As a father of a 6-year-old, currently teaching young children (and a relative novice to coding with no formal education so corrections will be required), I think the lesson would stick best through hands-on play. If the 6-year-old is ready to understand what a closure is, then they are old enough to have a go themselves. I'd suggest pasting the code into jsfiddle.net, explaining a bit, and leaving them alone to concoct a unique song. The explanatory text below is probably more appropriate for a 10 year old.
function sing(person) {
var firstPart = "There was " + person + " who swallowed ";
var fly = function() {
var creature = "a fly";
var result = "Perhaps she'll die";
alert(firstPart + creature + "\n" + result);
};
var spider = function() {
var creature = "a spider";
var result = "that wiggled and jiggled and tickled inside her";
alert(firstPart + creature + "\n" + result);
};
var bird = function() {
var creature = "a bird";
var result = "How absurd!";
alert(firstPart + creature + "\n" + result);
};
var cat = function() {
var creature = "a cat";
var result = "Imagine That!";
alert(firstPart + creature + "\n" + result);
};
fly();
spider();
bird();
cat();
}
var person="an old lady";
sing(person);
INSTRUCTIONS
DATA: Data is a collection of facts. It can be numbers, words, measurements, observations or even just descriptions of things. You can't touch it, smell it or taste it. You can write it down, speak it and hear it. You could use it to create touch, smell and taste using a computer. It can be made useful by a computer using code.
CODE: All the writing above is called code. It is written in JavaScript.
JAVASCRIPT: JavaScript is a language. Like English or French or Chinese are languages. There are lots of languages that are understood by computers and other electronic processors. For JavaScript to be understood by a computer it needs an interpreter. Imagine if a teacher who only speaks Russian comes to teach your class at school. When the teacher says "все садятся", the class would not understand. But luckily you have a Russian pupil in your class who tells everyone this means "everybody sit down" - so you all do. The class is like a computer and the Russian pupil is the interpreter. For JavaScript the most common interpreter is called a browser.
BROWSER: When you connect to the Internet on a computer, tablet or phone to visit a website, you use a browser. Examples you may know are Internet Explorer, Chrome, Firefox and Safari. The browser can understand JavaScript and tell the computer what it needs to do. The JavaScript instructions are called functions.
FUNCTION: A function in JavaScript is like a factory. It might be a little factory with only one machine inside. Or it might contain many other little factories, each with many machines doing different jobs. In a real life clothes factory you might have reams of cloth and bobbins of thread going in and T-shirts and jeans coming out. Our JavaScript factory only processes data, it can't sew, drill a hole or melt metal. In our JavaScript factory data goes in and data comes out.
All this data stuff sounds a bit boring, but it is really very cool; we might have a function that tells a robot what to make for dinner. Let's say I invite you and your friend to my house. You like chicken legs best, I like sausages, your friend always wants what you want and my friend does not eat meat.
I haven't got time to go shopping, so the function needs to know what we have in the fridge to make decisions. Each ingredient has a different cooking time and we want everything to be served hot by the robot at the same time. We need to provide the function with the data about what we like, the function could 'talk' to the fridge, and the function could control the robot.
A function normally has a name, parentheses and braces. Like this:
function cookMeal() { /* STUFF INSIDE THE FUNCTION */ }
Note that /*...*/ and // stop code being read by the browser.
NAME: You can call a function just about whatever word you want. The example "cookMeal" is typical in joining two words together and giving the second one a capital letter at the beginning - but this is not necessary. It can't have a space in it, and it can't be a number on its own.
PARENTHESES: "Parentheses" or () are the letter box on the JavaScript function factory's door or a post box in the street for sending packets of information to the factory. Sometimes the postbox might be marked for example cookMeal(you, me, yourFriend, myFriend, fridge, dinnerTime), in which case you know what data you have to give it.
BRACES: "Braces" which look like this {} are the tinted windows of our factory. From inside the factory you can see out, but from the outside you can't see in.
THE LONG CODE EXAMPLE ABOVE
Our code begins with the word function, so we know that it is one! Then the name of the function sing - that's my own description of what the function is about. Then parentheses (). The parentheses are always there for a function. Sometimes they are empty, and sometimes they have something in. This one has a word in: (person). After this there is a brace like this { . This marks the start of the function sing(). It has a partner which marks the end of sing() like this }
function sing(person) { /* STUFF INSIDE THE FUNCTION */ }
So this function might have something to do with singing, and might need some data about a person. It has instructions inside to do something with that data.
Now, after the function sing(), near the end of the code is the line
var person="an old lady";
VARIABLE: The letters var stand for "variable". A variable is like an envelope. On the outside this envelope is marked "person". On the inside it contains a slip of paper with the information our function needs, some letters and spaces joined together like a piece of string (it's called a string) that make a phrase reading "an old lady". Our envelope could contain other kinds of things like numbers (called integers), instructions (called functions), lists (called arrays). Because this variable is written outside of all the braces {}, and because you can see out through the tinted windows when you are inside the braces, this variable can be seen from anywhere in the code. We call this a 'global variable'.
GLOBAL VARIABLE: person is a global variable, meaning that if you change its value from "an old lady" to "a young man", the person will keep being a young man until you decide to change it again and that any other function in the code can see that it's a young man. Press the F12 button or look at the Options settings to open the developer console of a browser and type "person" to see what this value is. Type person="a young man" to change it and then type "person" again to see that it has changed.
After this we have the line
sing(person);
This line is calling the function, as if it were calling a dog
"Come on sing, Come and get person!"
When the browser has loaded the JavaScript code and reached this line, it will start the function. I put the line at the end to make sure that the browser has all the information it needs to run it.
Functions define actions - the main function is about singing. It contains a variable called firstPart which applies to the singing about the person that applies to each of the verses of the song: "There was " + person + " who swallowed". If you type firstPart into the console, you won't get an answer because the variable is locked up in a function - the browser can't see inside the tinted windows of the braces.
CLOSURES: The closures are the smaller functions that are inside the big sing() function. The little factories inside the big factory. They each have their own braces which mean that the variables inside them can't be seen from the outside. That's why the names of the variables (creature and result) can be repeated in the closures but with different values. If you type these variable names in the console window, you won't get its value because it's hidden by two layers of tinted windows.
The closures all know what the sing() function's variable called firstPart is, because they can see out from their tinted windows.
After the closures come the lines
fly();
spider();
bird();
cat();
The sing() function will call each of these functions in the order they are given. Then the sing() function's work will be done.
A: A closure basically creates two things:
- a function
- a private scope that only that function can access
It is like putting some coating around a function.
So to a 6-year-old, it could be explained by giving an analogy. Let's say I build a robot. That robot can do many things. Among those things, I programmed it to count the number of birds he sees in the sky. Each time he has seen 25 birds, he should tell me how many birds he has seen since the beginning.
I don't know how many birds he has seen unless he has told me. Only he knows. That's the private scope. That's basically the robot's memory. Let's say I gave him 4 GB.
Telling me how many birds he has seen is the returned function. I also created that.
That analogy is a bit sucky, but someone could improve it I guess.
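Here is one attempt to improve it: a hedged sketch of that robot in code (all names are made up for illustration).
function buildRobot() {
    var birdsSeen = 0; // the robot's private memory; only he knows this number
    return function seeBird() {
        birdsSeen += 1;
        if (birdsSeen % 25 === 0) {
            console.log('I have seen ' + birdsSeen + ' birds since the beginning.');
        }
    };
}
var robotSeesBird = buildRobot();
// Call robotSeesBird() each time the robot spots a bird;
// every 25th call, he reports the running total.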
A: The word closure simply refers to being able to access objects (six-year-old: things) that are closed (six-year-old: private) within a function (six-year-old: box). Even if the function (six-year-old: box) is out of scope (six-year-old: sent far away).
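A minimal sketch of that sentence in code (hypothetical names, for illustration only):
function box() {                       // the box (function)
    var privateThing = 'teddy bear';   // a thing closed (private) inside the box
    return function open() {           // sent far away (out of scope)...
        return privateThing;           // ...yet it can still reach the private thing
    };
}
var farAway = box();
farAway(); // 'teddy bear'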
A: I have read all of these in the past, and they are all very informative. Some come very close to giving the simple explanation and then get complex or remain abstract, defeating the purpose and failing to show a very simple real-world use.
Though combing through all the examples and explanations you get a good idea of what closures are and aren't via comments and code, I was still unsatisfied with a very simple illustration that helped me get a closure's usefulness without getting so complex. My wife wants to learn coding and I figured I needed to be able to show her not only what, but why and how.
I am not sure a six-year-old will get this, but I think it might be a little closer to demonstrating a simple case in a real-world way that might actually be useful and that is easily understandable.
One of the best (or closest to simplest) is the retelling of Morris' Closures for Dummies example.
Taking the "SayHi2Bob" concept just one step further demonstrates the two basic things you can glean from reading all the answers:
*
*Closures have access to the containing function's variables.
*Closures persist in their own memory space (and thus are useful for all kinds of oop-y instantiation stuff)
Proving and demonstrating this to myself, I made a little fiddle:
http://jsfiddle.net/9ZMyr/2/
function sayHello(name) {
var text = 'Hello ' + name; // Local variable
console.log(text);
var sayAlert = function () {
alert(text);
}
return sayAlert;
}
sayHello();
/* This will write 'Hello undefined' to the console (in Chrome anyway),
but will not alert though since it returns a function handle to nothing).
Since no handle or reference is created, I imagine a good js engine would
destroy/dispose of the internal sayAlert function once it completes. */
// Create a handle/reference/instance of sayHello() using the name 'Bob'
sayHelloBob = sayHello('Bob');
sayHelloBob();
// Create another handle or reference to sayHello with a different name
sayHelloGerry = sayHello('Gerry');
sayHelloGerry();
/* Now calling them again demonstrates that each handle or reference contains its own
unique local variable memory space. They remain in memory 'forever'
(or until your computer/browser explode) */
sayHelloBob();
sayHelloGerry();
This demonstrates both of the basic concepts you should get about closures.
In simple terms to explain the why this is useful, I have a base function to which I can make references or handles that contain unique data which persists within that memory reference. I don't have to rewrite the function for each time I want to say someone's name. I have encapsulated that routine and made it reusable.
To me this leads to at least the basic concepts of constructors, oop practices, singletons vs instantiated instances with their own data, etc. etc.
If you start a neophyte with this, then you can move on to more complex object property/member based calls, and hopefully the concepts carry.
A: I think it is valuable to take a step back, and examine a more general notion of a "closure" -- the so-called "join operator".
In mathematics, a "join" operator is a function on a partially ordered set which returns the smallest object greater than or equal to its arguments. In symbols, join [a,b] = d such that d >= a and d >= b, but there does not exist an e such that d > e >= a or d > e >= b.
So the join gives you the smallest thing "bigger" than the parts.
Now, note that JavaScript scopes are a partially ordered structure. So that there is a sensible notion of a join. In particular, a join of scopes is the smallest scope bigger than the original scopes. That scope is called the closure.
So a closure for the variables a, b, c is the smallest scope (in the lattice of scopes for your program!) that brings a, b, and c into scope.
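A rough sketch of this idea in code (my own illustration): the closure of inner is the smallest scope that brings a, b and c into scope.
var a = 1;                    // global scope
function outer() {
    var b = 2;                // outer's scope
    return function inner() {
        var c = 3;            // inner's own scope
        return a + b + c;     // inner's closure "joins" all three scopes
    };
}
console.log(outer()()); // 6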
A: The easiest use case I can think of to explain JavaScript closures is the Module Pattern. In the Module Pattern you define a function and call it immediately afterwards in what is called an Immediately Invoked Function Expression (IIFE). Everything that you write inside that function has private scope because it's defined inside the closure, thus allowing you to "simulate" privacy in JavaScript. Like so:
var Closure = (function () {
// This is a closure
// Any methods, variables and properties you define here are "private"
// and can't be accessed from outside the function.
//This is a private variable
var foo = "";
//This is a private method
var method = function(){
}
})();
If, on the other hand, you'd like to make one or multiple variables or methods visible outside the closure, you can return them inside an object literal. Like so:
var Closure = (function () {
// This is a closure
// Any methods, variables and properties you define here are "private"
// and can't be accessed from outside the function.
//This is a private variable
var foo = "";
//This is a private method
var method = function(){
}
//The method will be accessible from outside the closure
return {
method: method
}
})();
Closure.method();
Hope it helps.
Regards,
A: The best way is to explain these concepts incrementally:
Variables
console.log(x);
// undefined
Here, undefined is JavaScript's way of saying "I have no idea what x means." (Strictly speaking, reading a name that was never declared throws a ReferenceError; you see undefined when x has been declared, for example hoisted with var, but not yet given a value.)
Variables are like tags.
You can say, tag x points to value 42:
var x = 42;
console.log(x);
// 42
Now JavaScript knows what x means.
You can also re-assign a variable.
Make tag x point to a different value:
x = 43;
console.log(x);
// 43
Now x means something else.
Scope
When you make a function, the function has its own "box" for variables.
function A() {
var x = 42;
}
console.log(x);
// undefined
From outside the box, you cannot see what's inside the box.
But from inside the box, you can see what's outside that box:
var x = 42;
function A() {
console.log(x);
}
// 42
Inside function A, you have "scope access" to x.
Now if you have two boxes side-by-side:
function A() {
var x = 42;
}
function B() {
console.log(x);
}
// undefined
Inside function B, you have no access to variables inside function A.
But if you define function B inside function A:
function A() {
var x = 42;
function B() {
console.log(x);
}
}
// 42
You now have "scope access".
Functions
In JavaScript, you run a function by calling it:
function A() {
console.log(42);
}
Like this:
A();
// 42
Functions as Values
In JavaScript, you can point a tag to a function, just like pointing to a number:
var a = function() {
console.log(42);
};
Variable a now means a function, you can run it.
a();
// 42
You can also pass this variable around:
setTimeout(a, 1000);
In a second (1000 milliseconds), the function a points to is called:
// 42
Closure Scope
Now when you define functions, those functions have access to their outer scopes.
When you pass functions around as values, it would be troublesome if that access is lost.
In JavaScript, functions keep their access to outer scope variables.
Even when they are passed around to be run somewhere else.
var a = function() {
var text = 'Hello!'
var b = function() {
console.log(text);
// inside function `b`, you have access to `text`
};
// but you want to run `b` later, rather than right away
setTimeout(b, 1000);
}
What happens now?
// 'Hello!'
Or consider this:
var c;
var a = function() {
var text = 'Hello!'
var b = function() {
console.log(text);
// inside function `b`, you have access to `text`
};
c = b;
}
// now we are out side of function `a`
// call `a` so the code inside `a` runs
a();
// now `c` has a value that is a function
// because what happened when `a` ran
// when you run `c`
c();
// 'Hello!'
You can still access variables in the closure scope.
Even though a has finished running, and now you are running c outside of a.
What just happened here is called 'closure' in JavaScript.
A: Closures in JavaScript are associated with concept of scopes.
Prior to ES6, there was no block-level scope; there was only function-level scope in JS.
That means whenever there is a need for block-level scope, we need to wrap it inside a function.
Check this simple and interesting example of how a closure solves this issue in ES5:
// let's say we can only use a traditional for loop, not forEach
for (var i = 0; i < 10; i++) {
setTimeout(function() {
console.log('without closure the visited index - '+ i)
})
}
// this will print 'without closure the visited index - 10' ten times, which is not correct
/**
Expected output is
visited index - 0
visited index - 1
.
.
.
visited index - 9
**/
// we can solve it by using closure concept
//by using an IIFE (Immediately Invoked Function Expression)
// --- updated code ---
for (var i = 0; i < 10; i++) {
(function (i) {
setTimeout(function() {
console.log('with closure the visited index - '+ i)
})
})(i);
}
NB: this can easily be solved by using ES6 let instead of var, as let creates a lexical (block-level) scope; a sketch follows below.
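For completeness, a minimal sketch of the let variant mentioned above:
for (let i = 0; i < 10; i++) {
    setTimeout(function() {
        console.log('with let the visited index - ' + i);
    });
}
// prints the indexes 0 through 9, one per line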
In simple words, a closure in JS is nothing but a function's access to its enclosing scope.
A: Okay, talking with a 6-year old child, I would possibly use following associations.
Imagine you are playing with your little brothers and sisters all over the house, moving around with your toys, and you have brought some of them into your older brother's room. After a while your brother returns from school, goes to his room, and locks himself inside, so now you cannot access the toys left there in a direct way anymore. But you can knock on the door and ask your brother for those toys. This is called the toys' closure; your brother made it up for you, and he is now in the outer scope.
Compare this with a situation where the door was locked by a draft and nobody was inside (general function execution), then a local fire occurred and burned down the room (garbage collector :D), and then a new room was built where you may now leave other toys (new function instance), but you can never get back the same toys that were left in the first room instance.
For an advanced child I would put something like the following. It is not perfect, but it makes you feel about what it is:
function playingInBrothersRoom (withToys) {
// We closure toys which we played in the brother's room. When he come back and lock the door
// your brother is supposed to be into the outer [[scope]] object now. Thanks god you could communicate with him.
var closureToys = withToys || [],
returnToy, countIt, toy; // Just another closure helpers, for brother's inner use.
var brotherGivesToyBack = function (toy) {
// New request. There is not yet closureToys on brother's hand yet. Give him a time.
returnToy = null;
if (toy && closureToys.length > 0) { // If we ask for a specific toy, the brother is going to search for it.
for ( countIt = closureToys.length; countIt; countIt--) {
if (closureToys[countIt - 1] == toy) {
returnToy = 'Take your ' + closureToys.splice(countIt - 1, 1) + ', little boy!';
break;
}
}
returnToy = returnToy || 'Hey, I could not find any ' + toy + ' here. Look for it in another room.';
}
else if (closureToys.length > 0) { // Otherwise, just give back everything he has in the room.
returnToy = 'Behold! ' + closureToys.join(', ') + '.';
closureToys = [];
}
else {
returnToy = 'Hey, lil shrimp, I gave you everything!';
}
console.log(returnToy);
}
return brotherGivesToyBack;
}
// You are playing in the house, including the brother's room.
var toys = ['teddybear', 'car', 'jumpingrope'],
askBrotherForClosuredToy = playingInBrothersRoom(toys);
// The door is locked, and the brother came from the school. You could not cheat and take it out directly.
console.log(askBrotherForClosuredToy.closureToys); // Undefined
// But you could ask your brother politely, to give it back.
askBrotherForClosuredToy('teddybear'); // Hooray, here it is, teddybear
askBrotherForClosuredToy('ball'); // The brother would not be able to find it.
askBrotherForClosuredToy(); // The brother gives you all the rest
askBrotherForClosuredToy(); // Nothing left in there
As you can see, the toys left in the room are still accessible via the brother and no matter if the room is locked. Here is a jsbin to play around with it.
A: Closures are hard to explain because they are used to make some behaviour work that everybody intuitively expects to work anyway. I find the best way to explain them (and the way that I learned what they do) is to imagine the situation without them:
const makePlus = function(x) {
return function(y) { return x + y; };
}
const plus5 = makePlus(5);
console.log(plus5(3));
What would happen here if JavaScript didn't know closures? Just replace the call in the last line by its method body (which is basically what function calls do) and you get:
console.log(x + 3);
Now, where's the definition of x? We didn't define it in the current scope. The only solution is to let plus5 carry its scope (or rather, its parent's scope) around. This way, x is well-defined and it is bound to the value 5.
A: A function in JavaScript is not just a reference to a set of instructions (as in C language), but it also includes a hidden data structure which is composed of references to all nonlocal variables it uses (captured variables). Such two-piece functions are called closures. Every function in JavaScript can be considered a closure.
Closures are functions with a state. It is somewhat similar to "this" in the sense that "this" also provides state for a function but function and "this" are separate objects ("this" is just a fancy parameter, and the only way to bind it permanently to a function is to create a closure). While "this" and function always live separately, a function cannot be separated from its closure and the language provides no means to access captured variables.
Because all these external variables referenced by a lexically nested function are actually local variables in the chain of its lexically enclosing functions (global variables can be assumed to be local variables of some root function), and every single execution of a function creates new instances of its local variables, it follows that every execution of a function returning (or otherwise transferring it out, such as registering it as a callback) a nested function creates a new closure (with its own potentially unique set of referenced nonlocal variables which represent its execution context).
Also, it must be understood that local variables in JavaScript are created not on the stack frame, but on the heap and destroyed only when no one is referencing them. When a function returns, references to its local variables are decremented, but they can still be non-null if during the current execution they became part of a closure and are still referenced by its lexically nested functions (which can happen only if the references to these nested functions were returned or otherwise transferred to some external code).
An example:
function foo (initValue) {
//This variable is not destroyed when the foo function exits.
//It is 'captured' by the two nested functions returned below.
var value = initValue;
//Note that the two returned functions are created right now.
//If the foo function is called again, it will return
//new functions referencing a different 'value' variable.
return {
getValue: function () { return value; },
setValue: function (newValue) { value = newValue; }
}
}
function bar () {
//foo sets its local variable 'value' to 5 and returns an object with
//two functions still referencing that local variable
var obj = foo(5);
//Extracting functions just to show that no 'this' is involved here
var getValue = obj.getValue;
var setValue = obj.setValue;
alert(getValue()); //Displays 5
setValue(10);
alert(getValue()); //Displays 10
//At this point getValue and setValue functions are destroyed
//(in reality they are destroyed at the next iteration of the garbage collector).
    //The local variable 'value' in foo is no longer referenced by
//anything and is destroyed too.
}
bar();
A: An answer for a six-year-old (assuming he knows what a function is and what a variable is, and what data is):
Functions can return data. One kind of data you can return from a function is another function. When that new function gets returned, all the variables and arguments used in the function that created it don't go away. Instead, that parent function "closes." In other words, nothing can look inside of it and see the variables it used except for the function it returned. That new function has a special ability to look back inside the function that created it and see the data inside of it.
function the_closure() {
var x = 4;
return function () {
return x; // Here, we look back inside the_closure for the value of x
}
}
var myFn = the_closure();
myFn(); //=> 4
Another really simple way to explain it is in terms of scope:
Any time you create a smaller scope inside of a larger scope, the smaller scope will always be able to see what is in the larger scope.
A: Perhaps a little beyond all but the most precocious of six-year-olds, but a few examples that helped make the concept of closure in JavaScript click for me.
A closure is a function that has access to another function's scope (its variables and functions). The easiest way to create a closure is with a function within a function; the reason being that in JavaScript a function always has access to its containing function’s scope.
function outerFunction() {
var outerVar = "monkey";
function innerFunction() {
alert(outerVar);
}
innerFunction();
}
outerFunction();
ALERT: monkey
In the above example, outerFunction is called which in turn calls innerFunction. Note how outerVar is available to innerFunction, evidenced by its correctly alerting the value of outerVar.
Now consider the following:
function outerFunction() {
var outerVar = "monkey";
function innerFunction() {
return outerVar;
}
return innerFunction;
}
var referenceToInnerFunction = outerFunction();
alert(referenceToInnerFunction());
ALERT: monkey
referenceToInnerFunction is set to outerFunction(), which simply returns a reference to innerFunction. When referenceToInnerFunction is called, it returns outerVar. Again, as above, this demonstrates that innerFunction has access to outerVar, a variable of outerFunction. Furthermore, it is interesting to note that it retains this access even after outerFunction has finished executing.
And here is where things get really interesting. If we were to get rid of outerFunction, say set it to null, you might think that referenceToInnerFunction would lose its access to the value of outerVar. But this is not the case.
function outerFunction() {
var outerVar = "monkey";
function innerFunction() {
return outerVar;
}
return innerFunction;
}
var referenceToInnerFunction = outerFunction();
alert(referenceToInnerFunction());
outerFunction = null;
alert(referenceToInnerFunction());
ALERT: monkey
ALERT: monkey
But how is this so? How can referenceToInnerFunction still know the value of outerVar now that outerFunction has been set to null?
The reason that referenceToInnerFunction can still access the value of outerVar is because when the closure was first created by placing innerFunction inside of outerFunction, innerFunction added a reference to outerFunction’s scope (its variables and functions) to its scope chain. What this means is that innerFunction has a pointer or reference to all of outerFunction’s variables, including outerVar. So even when outerFunction has finished executing, or even if it is deleted or set to null, the variables in its scope, like outerVar, stick around in memory because of the outstanding reference to them on the part of the innerFunction that has been returned to referenceToInnerFunction. To truly release outerVar and the rest of outerFunction’s variables from memory you would have to get rid of this outstanding reference to them, say by setting referenceToInnerFunction to null as well.
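In code, the release step described above is just one more line (continuing the same example):
referenceToInnerFunction = null;
// With the last reference to innerFunction gone, the garbage collector
// is free to reclaim outerVar and the rest of outerFunction's variables.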
//////////
Two other things about closures to note. First, the closure will always have access to the latest values of its containing function's variables.
function outerFunction() {
var outerVar = "monkey";
function innerFunction() {
alert(outerVar);
}
outerVar = "gorilla";
innerFunction();
}
outerFunction();
ALERT: gorilla
Second, when a closure is created, it retains a reference to all of its enclosing function’s variables and functions; it doesn’t get to pick and choose. So closures should be used sparingly, or at least carefully, as they can be memory intensive; a lot of variables can be kept in memory long after a containing function has finished executing.
A: I'd simply point them to the Mozilla Closures page. It's the best, most concise and simple explanation of closure basics and practical usage that I've found. It is highly recommended to anyone learning JavaScript.
And yes, I'd even recommend it to a 6-year old -- if the 6-year old is learning about closures, then it's logical they're ready to comprehend the concise and simple explanation provided in the article.
A: I believe in shorter explanations. (The original answer included an image: function f1() drawn as a light red box, with function f2() as a smaller red box nested inside it.)
Here we have two functions, f1() and f2(). f2() is inner to f1().
f1() has a variable, var x = 10.
When invoking the function f1(), f2() can access the value of var x = 10.
Here is the code:
function f1() {
var x=10;
function f2() {
console.log(x)
}
return f2
}
f1()() // invoking f1() returns f2; calling the returned f2 logs 10
A: TLDR
A closure is a link between a function and its outer lexical (ie. as-written) environment, such that the identifiers (variables, parameters, function declarations etc) defined within that environment are visible from within the function, regardless of when or from where the function is invoked.
Details
In the terminology of the ECMAScript specification, a closure can be said to be implemented by the [[Environment]] reference of every function-object, which points to the lexical environment within which the function is defined.
When a function is invoked via the internal [[Call]] method, the [[Environment]] reference on the function-object is copied into the outer environment reference of the environment record of the newly-created execution context (stack frame).
In the following example, function f closes over the lexical environment of the global execution context:
function f() {}
In the following example, function h closes over the lexical environment of function g, which, in turn, closes over the lexical environment of the global execution context.
function g() {
function h() {}
}
If an inner function is returned by an outer, then the outer lexical environment will persist after the outer function has returned. This is because the outer lexical environment needs to be available if the inner function is eventually invoked.
In the following example, function j closes over the lexical environment of function i, meaning that variable x is visible from inside function j, long after function i has completed execution:
function i() {
var x = 'mochacchino'
return function j() {
console.log('Printing the value of x, from within function j: ', x)
}
}
const k = i()
setTimeout(k, 500) // invoke k (which is j) after 500ms
In a closure, the variables in the outer lexical environment themselves are available, not copies.
function l() {
var y = 'vanilla';
return {
setY: function(value) {
y = value;
},
logY: function(value) {
console.log('The value of y is: ', y);
}
}
}
const o = l()
o.logY() // The value of y is: vanilla
o.setY('chocolate')
o.logY() // The value of y is: chocolate
The chain of lexical environments, linked between execution contexts via outer environment references, forms a scope chain and defines the identifiers visible from any given function.
Please note that in an attempt to improve clarity and accuracy, this answer has been substantially changed from the original.
A: Every function in JavaScript maintains a link to its outer lexical environment. A lexical environment is a map of all the names (eg. variables, parameters) within a scope, with their values.
So, whenever you see the function keyword, code inside that function has access to variables declared outside the function.
function foo(x) {
var tmp = 3;
function bar(y) {
console.log(x + y + (++tmp)); // will log 16
}
bar(10);
}
foo(2);
This will log 16 because function bar closes over the parameter x and the variable tmp, both of which exist in the lexical environment of outer function foo.
Function bar, together with its link with the lexical environment of function foo is a closure.
A function doesn't have to return in order to create a closure. Simply by virtue of its declaration, every function closes over its enclosing lexical environment, forming a closure.
function foo(x) {
var tmp = 3;
return function (y) {
console.log(x + y + (++tmp)); // will also log 16
}
}
var bar = foo(2);
bar(10); // 16
bar(10); // 17
The above function will also log 16, because the code inside bar can still refer to argument x and variable tmp, even though they are no longer directly in scope.
However, since tmp is still hanging around inside bar's closure, it is available to be incremented. It will be incremented each time you call bar.
The simplest example of a closure is this:
var a = 10;
function test() {
console.log(a); // will output 10
console.log(b); // will output 6
}
var b = 6;
test();
When a JavaScript function is invoked, a new execution context ec is created. Together with the function arguments and the target object, this execution context also receives a link to the lexical environment of the calling execution context, meaning the variables declared in the outer lexical environment (in the above example, both a and b) are available from ec.
Every function creates a closure because every function has a link to its outer lexical environment.
Note that variables themselves are visible from within a closure, not copies.
A: OK, 6-year-old closures fan. Do you want to hear the simplest example of closure?
Let's imagine the following situation: a driver is sitting in a car. That car is inside a plane. The plane is at the airport. The ability of the driver to access things outside his car, but inside the plane, even if that plane leaves the airport, is a closure. That's it. When you turn 27, look at a more detailed explanation or at the example below.
Here is how I can convert my plane story into the code.
var plane = function(defaultAirport) {
var lastAirportLeft = defaultAirport;
var car = {
driver: {
startAccessPlaneInfo: function() {
setInterval(function() {
console.log("Last airport was " + lastAirportLeft);
}, 2000);
}
}
};
car.driver.startAccessPlaneInfo();
return {
leaveTheAirport: function(airPortName) {
lastAirportLeft = airPortName;
}
}
}("Boryspil International Airport");
plane.leaveTheAirport("John F. Kennedy");
A: The simplest, shortest, most-easy-to-understand answer:
A closure is a block of code where each line can reference the same set of variables with the same variable names.
If "this" means something different than it does somewhere else, then you know it is two different closures.
A: Also... Perhaps we should cut your 27-year-old friend a little slack, because the entire concept of "closures" really is(!) ... voodoo!
By that I mean: (a) you do not, intuitively, expect it ...AND... (b) when someone takes the time to explain it to you, you certainly do not expect it to work!
Intuition tells you that "this must be nonsense... surely it must result in some kind of syntax-error or something!" How on earth(!) could you, in effect, "pull a function from 'the middle of' wherever-it's-at," such that you could [still!] actually have read/write access to the context of "wherever-it-was-at?!"
When you finally realize that such a thing is possible, then ... sure ... anyone's after-the-fact reaction would be: "whoa-a-a-a(!)... kew-el-l-l-l...(!!!)"
But there will be a "big counter-intuitive hurdle" to overcome, first. Intuition gives you plenty of utterly-plausible expectations that such a thing would be "of course, absolutely nonsensical and therefore quite impossible."
Like I said: "it's voodoo."
A: A closure is simply when a function has access to its outer scope even after the outer function has finished executing.
Example:
function multiplier(n) {
    function multiply(x) {
        return n * x;
    }
    return multiply;
}
var multiplyBy10 = multiplier(10);
var x = multiplyBy10(5); // x = 50
We can see that even after multiplier has finished executing, the inner function multiply still has access to the value of n, which is 10 in this example.
A very common use of closures is currying (similar to the example above), where we supply a function's parameters progressively instead of supplying all of the arguments at once.
We can achieve this because JavaScript (in addition to its prototypal OOP) allows us to program in a functional fashion, where higher-order functions can take other functions as arguments (first-class functions).
Functional programming on Wikipedia
I highly recommend reading the book series by Kyle Simpson; one part of the series is dedicated to closures and it is called Scope & Closures.
You Don't Know JS: free reading on GitHub
A: This is an attempt to clear up several (possible) misunderstandings about closures that appear in some of the other answers.
*
*A closure is not only created when you return an inner function. In fact, the enclosing function does not need to return at all in order for its closure to be created. You might instead assign your inner function to a variable in an outer scope, or pass it as an argument to another function where it could be called immediately or any time later. Therefore, the closure of the enclosing function is probably created as soon as the enclosing function is called since any inner function has access to that closure whenever the inner function is called, before or after the enclosing function returns.
*A closure does not reference a copy of the old values of variables in its scope. The variables themselves are part of the closure, and so the value seen when accessing one of those variables is the latest value at the time it is accessed. This is why inner functions created inside of loops can be tricky, since each one has access to the same outer variables rather than grabbing a copy of the variables at the time the function is created or called (see the sketch after this list).
*The "variables" in a closure include any named functions declared within the function. They also include arguments of the function. A closure also has access to its containing closure's variables, all the way up to the global scope.
*Closures use memory, but they don't cause memory leaks since JavaScript by itself cleans up its own circular structures that are not referenced. Internet Explorer memory leaks involving closures are created when it fails to disconnect DOM attribute values that reference closures, thus maintaining references to possibly circular structures.
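A minimal sketch of the second point above (my own illustration), showing that a closure sees the latest value, not a copy:
var x = 1;
function show() {
    console.log(x); // closes over the variable x itself, not its value
}
x = 2;
show(); // 2, the latest value at the time of the call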
A: In JavaScript, closures are awesome and unique: variables and arguments are available to inner functions, and they stay alive even after the outer function has returned. Closures are used in most of the design patterns in JS.
function getFullName(a, b) {
return a + b;
}
function makeFullName(fn) {
return function(firstName) {
return function(secondName) {
return fn(firstName, secondName);
}
}
}
makeFullName(getFullName)("Stack")("overflow"); // Stackoverflow
A:
A closure is a function having access to the parent scope, even after the parent function has closed.
So basically a closure is a function inside another function; you could call it a child function.
A closure is an inner function that has access to the outer
(enclosing) function’s variables—scope chain. The closure has three
scope chains: it has access to its own scope (variables defined
between its curly brackets), it has access to the outer function’s
variables, and it has access to the global variables.
The inner function has access not only to the outer function’s
variables but also to the outer function’s parameters. Note that the
inner function cannot call the outer function’s arguments object,
however, even though it can call the outer function’s parameters
directly.
You create a closure by adding a function inside another function.
Also, it's a very useful technique, used in many famous frameworks including Angular, Node.js and jQuery:
Closures are used extensively in Node.js; they are workhorses in
Node.js’ asynchronous, non-blocking architecture. Closures are also
frequently used in jQuery and just about every piece of JavaScript
code you read.
But what do closures look like in real-life code?
Look at this simple sample code:
function showName(firstName, lastName) {
var nameIntro = "Your name is ";
// this inner function has access to the outer function's variables, including the parameter
function makeFullName() {
return nameIntro + firstName + " " + lastName;
}
return makeFullName();
}
console.log(showName("Michael", "Jackson")); // Your name is Michael Jackson
Also, this is the classic closure pattern in jQuery which every JavaScript and jQuery developer has used a lot:
$(function() {
var selections = [];
$(".niners").click(function() { // this closure has access to the selections variable
selections.push(this.prop("name")); // update the selections variable in the outer function's scope
});
});
But why do we use closures? When do we use them in actual programming? What are the practical uses of closures? Below is a good explanation and example from MDN:
Practical closures
Closures are useful because they let you associate some data (the
lexical environment) with a function that operates on that data. This
has obvious parallels to object oriented programming, where objects
allow us to associate some data (the object's properties) with one or
more methods.
Consequently, you can use a closure anywhere that you might normally
use an object with only a single method.
Situations where you might want to do this are particularly common on
the web. Much of the code we write in front-end JavaScript is
event-based — we define some behavior, then attach it to an event that
is triggered by the user (such as a click or a keypress). Our code is
generally attached as a callback: a single function which is executed
in response to the event.
For instance, suppose we wish to add some buttons to a page that
adjust the text size. One way of doing this is to specify the
font-size of the body element in pixels, then set the size of the
other elements on the page (such as headers) using the relative em
unit:
Read and run the code below to see how closures help us here to easily make a separate function for each size button:
//javascript
function makeSizer(size) {
return function() {
document.body.style.fontSize = size + 'px';
};
}
var size12 = makeSizer(12);
var size14 = makeSizer(14);
var size16 = makeSizer(16);
document.getElementById('size-12').onclick = size12;
document.getElementById('size-14').onclick = size14;
document.getElementById('size-16').onclick = size16;
/*css*/
body {
font-family: Helvetica, Arial, sans-serif;
font-size: 12px;
}
h1 {
font-size: 1.5em;
}
h2 {
font-size: 1.2em;
}
<!-- html -->
<p>Some paragraph text</p>
<h1>some heading 1 text</h1>
<h2>some heading 2 text</h2>
<a href="#" id="size-12">12</a>
<a href="#" id="size-14">14</a>
<a href="#" id="size-16">16</a>
For further study about closures, I recommend you to visit this page by MDN:
https://developer.mozilla.org/en/docs/Web/JavaScript/Closures
A: For a six-year-old?
You and your family live in the mythical town of Ann Ville. You have a friend who lives next door, so you call them and ask them to come out and play. You dial:
000001 (jamiesHouse)
After a month, you and your family move out of Ann Ville to the next town, but you and your friend still keep in touch, so now you have to dial the area code for the town that your friend lives in, before dialling their 'proper' number:
001 000001 (annVille.jamiesHouse)
A year after that, your parents move to a whole new country, but you and your friend still keep in touch, so after bugging your parents to let you make international rate calls, you now dial:
01 001 000001 (myOldCountry.annVille.jamiesHouse)
Strangely though, after moving to your new country, you and your family just so happen to move to a new town called Ann Ville... and you just so happen to make friends with some new person called Jamie... You give them a call...
000001 (jamiesHouse)
Spooky...
So spooky in fact, that you tell Jamie from your old country about it... You have a good laugh about it. So one day, you and your family take a holiday back to the old country. You visit your old town (Ann Ville), and go to visit Jamie...
*
*"Really? Another Jamie? In Ann Ville? In your new country!!?"
*"Yeah... Let's call them..."
02 001 000001 (myNewCountry.annVille.jamiesHouse)
Opinions?
What's more, I have a load of questions about the patience of a modern six-year-old...
A: Here is a simple real-time scenario. Just read it through, and you will understand how we have used closure here (see how seat number is changing).
All other examples explained previously are also very good to understand the concept.
function movieBooking(movieName) {
var bookedSeatCount = 0;
return function(name) {
++bookedSeatCount;
alert( name + " - " + movieName + ", Seat - " + bookedSeatCount )
};
};
var MI1 = movieBooking("Mission Impossible 1");
var MI2 = movieBooking("Mission Impossible 2");
MI1("Mayur");
// alert
// Mayur - Mission Impossible 1, Seat - 1
MI1("Raju");
// alert
// Raju - Mission Impossible 1, Seat - 2
MI2("Priyanka");
// alert
// Priyanka - Mission Impossible 2, Seat - 1
A: Closures allow JavaScript programmers to write better code. Creative, expressive, and concise. We frequently use closures in JavaScript, and, no matter our JavaScript experience, we undoubtedly encounter them time and again. Closures might appear complex but hopefully, after you read this, closures will be much more easily understood and thus more appealing for your everyday JavaScript programming tasks.
You should be familiar with JavaScript variable scope before you read further because to understand closures you must understand JavaScript’s variable scope.
What is a closure?
A closure is an inner function that has access to the outer (enclosing) function’s variables—scope chain. The closure has three scope chains: it has access to its own scope (variables defined between its curly brackets), it has access to the outer function’s variables, and it has access to the global variables.
The inner function has access not only to the outer function’s variables, but also to the outer function’s parameters. Note that the inner function cannot call the outer function’s arguments object, however, even though it can call the outer function’s parameters directly.
You create a closure by adding a function inside another function.
A Basic Example of Closures in JavaScript:
function showName (firstName, lastName) {
var nameIntro = "Your name is ";
// this inner function has access to the outer function's variables, including the parameter
function makeFullName () {
return nameIntro + firstName + " " + lastName;
}
return makeFullName ();
}
showName ("Michael", "Jackson"); // Your name is Michael Jackson
Closures are used extensively in Node.js; they are workhorses in Node.js’ asynchronous, non-blocking architecture. Closures are also frequently used in jQuery and just about every piece of JavaScript code you read.
A Classic jQuery Example of Closures:
$(function() {
var selections = [];
$(".niners").click(function() { // this closure has access to the selections variable
selections.push (this.prop("name")); // update the selections variable in the outer function's scope
});
});
Closures’ Rules and Side Effects
1. Closures have access to the outer function’s variable even after the outer function returns:
One of the most important and ticklish features with closures is that the inner function still has access to the outer function’s variables even after the outer function has returned. Yep, you read that correctly. When functions in JavaScript execute, they use the same scope chain that was in effect when they were created. This means that even after the outer function has returned, the inner function still has access to the outer function’s variables. Therefore, you can call the inner function later in your program. This example demonstrates:
function celebrityName (firstName) {
var nameIntro = "This celebrity is ";
// this inner function has access to the outer function's variables, including the parameter
function lastName (theLastName) {
return nameIntro + firstName + " " + theLastName;
}
return lastName;
}
var mjName = celebrityName ("Michael"); // At this juncture, the celebrityName outer function has returned.
// The closure (lastName) is called here after the outer function has returned above
// Yet, the closure still has access to the outer function's variables and parameter
mjName ("Jackson"); // This celebrity is Michael Jackson
2. Closures store references to the outer function’s variables:
They do not store the actual value.
Closures get more interesting when the value of the outer function’s variable changes before the closure is called. And this powerful feature can be harnessed in creative ways, such as this private variables example first demonstrated by Douglas Crockford:
function celebrityID () {
var celebrityID = 999;
// We are returning an object with some inner functions
// All the inner functions have access to the outer function's variables
return {
getID: function () {
// This inner function will return the UPDATED celebrityID variable
// It will return the current value of celebrityID, even after the changeTheID function changes it
return celebrityID;
},
setID: function (theNewID) {
// This inner function will change the outer function's variable anytime
celebrityID = theNewID;
}
}
}
var mjID = celebrityID (); // At this juncture, the celebrityID outer function has returned.
mjID.getID(); // 999
mjID.setID(567); // Changes the outer function's variable
mjID.getID(); // 567: It returns the updated celebrityID variable
3. Closures Gone Awry
Because closures have access to the updated values of the outer function’s variables, they can also lead to bugs when the outer function’s variable changes with a for loop. Thus:
// This example is explained just after this code box.
function celebrityIDCreator (theCelebrities) {
var i;
var uniqueID = 100;
for (i = 0; i < theCelebrities.length; i++) {
theCelebrities[i]["id"] = function () {
return uniqueID + i;
}
}
return theCelebrities;
}
var actionCelebs = [{name:"Stallone", id:0}, {name:"Cruise", id:0}, {name:"Willis", id:0}];
var createIdForActionCelebs = celebrityIDCreator (actionCelebs);
var stalloneID = createIdForActionCelebs [0];
console.log(stalloneID.id()); // 103
By the time stalloneID.id() is called, the for loop has already finished and i is 3, so every id function returns uniqueID + 3 = 103 instead of the unique ID that was intended.
More can be found here:
*
*http://javascript.info/tutorial/closures
*http://www.javascriptkit.com/javatutors/closures.shtml
A: FOREWORD: this answer was written when the question was:
Like the old Albert said: "If you can't explain it to a six-year-old, you really don't understand it yourself." Well, I tried to explain JS closures to a 27-year-old friend and completely failed.
Can anybody consider that I am 6 and strangely interested in that subject ?
I'm pretty sure I was one of the only people that attempted to take the initial question literally. Since then, the question has mutated several times, so my answer may now seem incredibly silly & out of place. Hopefully the general idea of the story remains fun for some.
I'm a big fan of analogy and metaphor when explaining difficult concepts, so let me try my hand with a story.
Once upon a time:
There was a princess...
function princess() {
She lived in a wonderful world full of adventures. She met her Prince Charming, rode around her world on a unicorn, battled dragons, encountered talking animals, and many other fantastical things.
var adventures = [];
function princeCharming() { /* ... */ }
var unicorn = { /* ... */ },
dragons = [ /* ... */ ],
squirrel = "Hello!";
/* ... */
But she would always have to return back to her dull world of chores and grown-ups.
return {
And she would often tell them of her latest amazing adventure as a princess.
story: function() {
return adventures[adventures.length - 1];
}
};
}
But all they would see is a little girl...
var littleGirl = princess();
...telling stories about magic and fantasy.
littleGirl.story();
And even though the grown-ups knew of real princesses, they would never believe in the unicorns or dragons because they could never see them. The grown-ups said that they only existed inside the little girl's imagination.
But we know the real truth; that the little girl with the princess inside...
...is really a princess with a little girl inside.
A: I wrote a blog post a while back explaining closures. Here's what I said about closures in terms of why you'd want one.
Closures are a way to let a function
have persistent, private variables -
that is, variables that only one
function knows about, where it can
keep track of info from previous times
that it was run.
In that sense, they let a function act a bit like an object with private attributes.
Full post:
So what are these closure thingys?
A: Here's the most Zen answer I can give:
What would you expect this code to do? Tell me in a comment before you run it. I'm curious!
function foo() {
var i = 1;
return function() {
console.log(i++);
}
}
var bar = foo();
bar();
bar();
bar();
var baz = foo();
baz();
baz();
baz();
Now open the console in your browser (Ctrl + Shift + I or F12, hopefully) and paste the code in and hit Enter.
If this code printed what you expect (JavaScript newbies - ignore the "undefined" at the end), then you already have wordless understanding. In words, the variable i is part of the inner function instance's closure.
I put it this way because, once I understood that this code is putting instances of foo()'s inner function in bar and baz and then calling them via those variables, nothing else surprised me.
But if I'm wrong and the console output surprised you, let me know!
A: The original question had a quote:
If you can't explain it to a six-year old, you really don't understand it yourself.
This is how I'd try to explain it to an actual six-year-old:
You know how grown-ups can own a house, and they call it home? When a mom has a child, the child doesn't really own anything, right? But its parents own a house, so whenever someone asks "Where's your home?", the child can answer "that house!", and point to the house of its parents.
A "Closure" is the ability of the child to always (even if abroad) be able to refer to its home, even though it's really the parent's who own the house.
A: (I am not taking the 6-years-old thing into account.)
In a language like JavaScript, where you can pass functions as parameters to other functions (languages where functions are first class citizens), you will often find yourself doing something like:
var name = 'Rafael';
var sayName = function() {
console.log(name);
};
You see, sayName doesn't have the definition for the name variable, but it does use the value of name that was defined outside of sayName (in a parent scope).
Let's say you pass sayName as a parameter to another function, that will call sayName as a callback:
functionThatTakesACallback(sayName);
Note that:
*
*sayName will be called from inside functionThatTakesACallback (assume that, since I haven't implemented functionThatTakesACallback in this example).
*When sayName is called, it will log the value of the name variable.
*functionThatTakesACallback doesn't define a name variable (well, it could, but it wouldn't matter, so assume it doesn't).
So we have sayName being called inside functionThatTakesACallback and referring to a name variable that is not defined inside functionThatTakesACallback.
What happens then? A ReferenceError: name is not defined?
No! The value of name is captured inside a closure. You can think of this closure as context associated to a function, that holds the values that were available where that function was defined.
So: Even though name is not in scope where the function sayName will be called (inside functionThatTakesACallback), sayName can access the value for name that is captured in the closure associated with sayName.
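Here is a minimal sketch of the scenario above; the body of functionThatTakesACallback is an assumption, since it was deliberately left unimplemented:

var name = 'Rafael';

var sayName = function() {
    console.log(name);
};

function functionThatTakesACallback(callback) {
    // Note that no 'name' variable is defined in this scope.
    callback(); // Logs "Rafael" anyway: sayName carries its closure with it.
}

functionThatTakesACallback(sayName);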
--
From the book Eloquent JavaScript:
A good mental model is to think of function values as containing both the code in their body and the environment in which they are created. When called, the function body sees its original environment, not the environment in which the call is made.
A: Closures are simple:
The following simple example covers all the main points of JavaScript closures.*
Here is a factory that produces calculators that can add and multiply:
function make_calculator() {
var n = 0; // this calculator stores a single number n
return {
add: function(a) {
n += a;
return n;
},
multiply: function(a) {
n *= a;
return n;
}
};
}
var first_calculator = make_calculator();
var second_calculator = make_calculator();
first_calculator.add(3); // returns 3
second_calculator.add(400); // returns 400
first_calculator.multiply(11); // returns 33
second_calculator.multiply(10); // returns 4000
The key point: Each call to make_calculator creates a new local variable n, which continues to be usable by that calculator's add and multiply functions long after make_calculator returns.
If you are familiar with stack frames, these calculators seem strange: How can they keep accessing n after make_calculator returns? The answer is to imagine that JavaScript doesn't use "stack frames", but instead uses "heap frames", which can persist after the function call that made them returns.
Inner functions like add and multiply, which access variables declared in an outer function**, are called closures.
That is pretty much all there is to closures.
* For example, it covers all the points in the "Closures for Dummies" article given in another answer, except example 6, which simply shows that variables can be used before they are declared, a nice fact to know but completely unrelated to closures. It also covers all the points in the accepted answer, except for the points (1) that functions copy their arguments into local variables (the named function arguments), and (2) that copying numbers creates a new number, but copying an object reference gives you another reference to the same object. These are also good to know but again completely unrelated to closures. It is also very similar to the example in this answer but a bit shorter and less abstract. It does not cover the point of this answer or this comment, which is that JavaScript makes it difficult to plug the current value of a loop variable into your inner function: The "plugging in" step can only be done with a helper function that encloses your inner function and is invoked on each loop iteration. (Strictly speaking, the inner function accesses the helper function's copy of the variable, rather than having anything plugged in.) Again, very useful when creating closures, but not part of what a closure is or how it works. There is additional confusion due to closures working differently in functional languages like ML, where variables are bound to values rather than to storage space, providing a constant stream of people who understand closures in a way (namely the "plugging in" way) that is simply incorrect for JavaScript, where variables are always bound to storage space, and never to values.
** Any outer function, if several are nested, or even in the global context, as this answer points out clearly.
A: Given the following function
function person(name, age){
var name = name; // redundant: the parameter is already a local variable
var age = age;   // kept only to emphasize that these behave like private members
function introduce(){
alert("My name is "+name+", and I'm "+age);
}
return introduce;
}
var a = person("Jack",12);
var b = person("Matt",14);
Every time the function person is called, a new closure is created. While a and b reference the same introduce function body, each is linked to a different closure. And that closure will still exist even after the function person finishes execution.
a(); //My name is Jack, and I'm 12
b(); //My name is Matt, and I'm 14
Abstractly, the closures could be represented like this:
closure a = {
name: "Jack",
age: 12,
call: function introduce(){
alert("My name is "+name+", and I'm "+age);
}
}
closure b = {
name: "Matt",
age: 14,
call: function introduce(){
alert("My name is "+name+", and I'm "+age);
}
}
Assuming you know how a class in another language work, I will make an analogy.
Think like
*
*JavaScript function as a constructor
*local variables as instance properties
*these properties are private
*inner functions as instance methods
Every time a function is called:
*
*A new object containing all local variables will be created.
*Methods of this object have access to "properties" of that instance object.
A: The more I think about closure the more I see it as a 2-step process: init - action
init: pass first what's needed...
action: in order to achieve something for later execution.
To a six-year-old, I'd emphasize the practical aspect of closure:
Daddy: Listen. Could you bring mum some milk (2).
Tom: No problem.
Daddy: Take a look at the map that Daddy has just made: mum is there and daddy is here.
Daddy: But get ready first. And bring the map with you (1), it may come in handy
Daddy: Then off you go (3). Ok?
Tom: A piece of cake!
Example: Bring some milk to mum (=action). First get ready and bring the map (=init).
function getReady(map) {
var cleverBoy = 'I examine the ' + map;
return function(what, who) {
return 'I bring ' + what + ' to ' + who + ' because ' + cleverBoy; // I can access the map
}
}
var offYouGo = getReady('daddy-map');
offYouGo('milk', 'mum');
Because if you bring with you a very important piece of information (the map), you're knowledgeable enough to execute other similar actions:
offYouGo('potatoes', 'great mum');
To a developer I'd make a parallel between closures and OOP.
The init phase is similar to passing arguments to a constructor in a traditional OO language; the action phase is ultimately the method you call to achieve what you want. And the method has access to these init arguments through a mechanism called closure.
See my another answer illustrating the parallelism between OO and closures:
How to "properly" create a custom object in JavaScript?
A: Can you explain closures to a 5-year-old?*
I still think Google's explanation works very well and is concise:
/*
* When a function is defined in another function and it
* has access to the outer function's context even after
* the outer function returns.
*
* An important concept to learn in JavaScript.
*/
function outerFunction(someNum) {
var someString = 'Hey!';
var content = document.getElementById('content');
function innerFunction() {
content.innerHTML = someNum + ': ' + someString;
content = null; // Internet Explorer memory leak for DOM reference
}
innerFunction();
}
outerFunction(1);
*A C# question
A: Even though many beautiful definitions of JavaScript closures exist on the Internet, I'll try to explain it to my six-year-old friend with my favourite definitions of closure, which helped me understand closures much better.
What is a Closure?
A closure is an inner function that has access to the outer (enclosing) function’s variables—scope chain. The closure has three scope chains: it has access to its own scope (variables defined between its curly brackets), it has access to the outer function’s variables, and it has access to the global variables.
A closure is the local variables for a function - kept alive after the function has returned.
Closures are functions that refer to independent (free) variables. In other words, the function defined in the closure 'remembers' the environment in which it was created in.
Closures are an extension of the concept of scope. With closures, functions have access to variables that were available in the scope where the function was created.
A closure is a stack-frame which is not deallocated when the function returns. (As if a 'stack-frame' were malloc'ed instead of being on the stack!)
Languages such as Java provide the ability to declare methods private, meaning that they can only be called by other methods in the same class. JavaScript does not provide a native way of doing this, but it is possible to emulate private methods using closures.
A "closure" is an expression (typically a function) that can have free variables together with an environment that binds those variables (that "closes" the expression).
Closures are an abstraction mechanism that allow you to separate concerns very cleanly.
Uses of Closures:
Closures are useful in hiding the implementation of functionality while still revealing the interface.
You can emulate the encapsulation concept in JavaScript using closures.
Closures are used extensively in jQuery and Node.js.
While object literals are certainly easy to create and convenient for storing data, closures are often a better choice for creating static singleton namespaces in a large web application (a sketch follows).
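A minimal sketch of such a singleton namespace (often called the module pattern); the name myApp and its members are illustrative only:

var myApp = (function () {
    var privateCounter = 0; // hidden from the outside world

    return {
        increment: function () { privateCounter += 1; },
        report: function () { return privateCounter; }
    };
})();

myApp.increment();
myApp.increment();
console.log(myApp.report());       // 2
console.log(myApp.privateCounter); // undefined: truly private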
Example of Closures:
Assuming my six-year-old friend got to know addition very recently in his primary school, I felt this example of adding two numbers would be the simplest and most apt way for him to learn about closures.
Example 1: Closure is achieved here by returning a function.
function makeAdder(x) {
return function(y) {
return x + y;
};
}
var add5 = makeAdder(5);
var add10 = makeAdder(10);
console.log(add5(2)); // 7
console.log(add10(2)); // 12
Example 2: Closure is achieved here by returning an object literal.
function makeAdder(x) {
return {
add: function(y){
return x + y;
}
}
}
var add5 = makeAdder(5);
console.log(add5.add(2));//7
var add10 = makeAdder(10);
console.log(add10.add(2));//12
Example 3: Closures in jQuery
$(function(){
var name="Closure is easy";
$('div').click(function(){
$('p').text(name);
});
});
Useful Links:
*
*Closures (Mozilla Developer Network)
*Understand JavaScript Closures With Ease
Thanks to the above links, which helped me to understand and explain closures better.
A: A closure is a function within a function that has access to its "parent" function's variables and parameters.
Example:
function showPostCard(Sender, Receiver) {
var PostCardMessage = " Happy Spring!!! Love, ";
function PreparePostCard() {
return "Dear " + Receiver + PostCardMessage + Sender;
}
return PreparePostCard();
}
showPostCard("Granny", "Olivia");
A: Meet the illustrated explanation: How do JavaScript closures work behind the scenes.
The article explains how the scope objects (or LexicalEnvironments) are allocated and used in an intuitive way. Like, for this simple script:
"use strict";
var foo = 1;
var bar = 2;
function myFunc() {
//-- Define local-to-function variables
var a = 1;
var b = 2;
var foo = 3;
}
//-- And then, call it:
myFunc();
When executing the top-level code, a global scope object holding foo, bar and myFunc is created, and when myFunc() is called, a new scope object for a, b and the local foo is chained onto it (both arrangements are illustrated with figures in the linked article).
Understanding how scope objects are created, used and deleted is key to seeing the big picture and to understanding how closures work under the hood.
See the aforementioned article for all the details.
A: To understand closures you have to get down into the program and literally execute it as if you were the runtime. Let's look at a simple piece of code (sketched below):
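A minimal sketch of the kind of code this walkthrough describes; the names a, f, g and myG match the prose, while the literal values are assumptions:

var a = 1;

function f(arg) {
    var a = 2; // a local 'a' that overrides the global one inside f

    function g() {
        console.log(a, arg); // resolved through the scope chain: f's scope
    }

    return g;
}

var myG = f(10);
myG(); // 2 10: f's scope stays alive as g's closure even after f has returned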
JavaScript runs the code in two phases:
*
*Compilation Phase (JavaScript is not a purely interpreted language)
*Execution Phase
When JavaScript goes through the compilation phase, it extracts the declarations of variables and functions. This is called hoisting. Functions encountered in this phase are saved as text blobs in memory, also known as lambdas. After compilation, JavaScript enters the execution phase, where it assigns all the values and runs the functions. To run a function it prepares the execution context by assigning memory from the heap and repeating the compilation and execution phases for that function. This memory area is called the scope of the function. There is a global scope when execution starts. Scopes are the key to understanding closures.
In this example, variable a is defined first and then f is defined during the compilation phase. All undeclared variables are saved in the global scope. In the execution phase f is called with an argument. f's scope is assigned and the compilation and execution phases are repeated for it.
Arguments are also saved in this local scope for f. Whenever a local execution context or scope is created, it contains a reference pointer to its parent scope. All variable access follows this lexical scope chain to find a value. If a variable is not found in the local scope, the lookup follows the chain and finds it in a parent scope. This is also why a local variable overrides variables of the same name in the parent scope. The parent scope is called the "closure" for a local scope or function.
Here, when g's scope is being set up, it gets a lexical pointer to its parent scope, the scope of f. The scope of f is the closure for g. In JavaScript, if there is some reference to functions, objects or scopes, and you can reach them somehow, they will not get garbage collected. So when myG is running, it has a pointer to the scope of f, which is its closure. This area of memory will not get garbage collected even after f has returned. This is a closure as far as the runtime is concerned.
SO WHAT IS A CLOSURE?
*
*It is an implicit, permanent link between a function and its scope chain...
*A function definition's (lambda) hidden [[scope]] reference.
*Holds the scope chain (preventing garbage collection).
*It is used and copied as the "outer environment reference" anytime the function is run.
IMPLICIT CLOSURE
var data = "My Data!";
setTimeout(function() {
console.log(data); // Prints "My Data!"
}, 3000);
EXPLICIT CLOSURES
function makeAdder(n) {
var inc = n;
var sum = 0;
return function add() {
sum = sum + inc;
return sum;
};
}
var adder3 = makeAdder(3);
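A possible usage of the adder above, showing the retained state (the printed values follow directly from the code as written):

console.log(adder3()); // 3
console.log(adder3()); // 6: 'sum' lives on in the closure between calls

var adder5 = makeAdder(5);
console.log(adder5()); // 5: a separate closure with its own 'sum' and 'inc'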
A very interesting talk on closures and more is Arindam Paul - JavaScript VM internals, EventLoop, Async and ScopeChains.
A: Just forget about scope and everything else, and remember: when a variable is needed somewhere, JavaScript will not destroy it. The variable always points to the newest value. (This answer originally illustrated the point with three example pictures.)
A: This answer is a summary of this youtube video Javascript Closures. So full credits to that video.
Closures are nothing but stateful functions which maintain the state of their private variables.
Normally when you make a call to a function, its variables are created on a stack (in working RAM), used, and then deallocated when the call returns.
But there are situations where we want to maintain this state of the function, and that's where JavaScript closures come in. A closure is a function inside a function with a return, as shown in the code below for the counter function.
function Counter() {
var counter = 0;
var Increment = function () {
counter++;
alert(counter);
}
return {
Increment
}
}
So now if you make a call, the counter will increment; in other words, the function call maintains state.
var x = Counter(); // get the reference of the closure
x.Increment(); // Displays 1
x.Increment(); // Display 2 ( Maintains the private variables)
But now the biggest question: what's the use of such a stateful function? Stateful functions are building blocks for implementing OOP concepts like abstraction and encapsulation, and for creating self-contained modules.
So whatever you want encapsulated you can keep private, and whatever should be exposed to the public goes in the return statement. Also, these components are self-contained, isolated objects, so they do not pollute global variables.
An object which follows OOP principles is self-contained, abstracted, and encapsulated. Without closures, this would be difficult to implement in JavaScript.
A: From a personal blog post:
By default, JavaScript knows two types of scopes: global and local.
var a = 1;
function b(x) {
var c = 2;
return x * c;
}
In the above code, variable a and function b are available from anywhere in the code (that is, globally). Variable c is only available within the b function scope (that is, local). Most software developers won't be happy with this lack of scope flexibility, especially in large programs.
JavaScript closures help solving that issue by tying a function with a context:
function a(x) {
return function b(y) {
return x + y;
}
}
Here, function a returns a function called b. Since b is defined within a, it automatically has access to whatever is defined in a, that is, x in this example. This is why b can return x + y without declaring x.
var c = a(3);
Variable c is assigned the result of a call to a with parameter 3. That is, an instance of function b where x = 3. In other words, c is now a function equivalent to:
var c = function b(y) {
return 3 + y;
}
Function b remembers that x = 3 in its context. Therefore:
var d = c(4);
will assign the value 3 + 4 to d, that is 7.
Remark: If someone modifies the value of x (say x = 22) after the instance of function b has been created, this will be reflected in b too. Hence a later call to c(4) would return 22 + 4, that is 26.
Closures can also be used to limit the scope of variables and methods declared globally:
(function () {
var f = "Some message";
alert(f);
})();
The above is a closure where the function has no name, no arguments, and is called immediately. The code declaring the variable f limits the scope of f to the closure; without the wrapping function, f would have been global.
Now, there is a common JavaScript caveat where closures can help:
var a = new Array();
for (var i=0; i<2; i++) {
a[i]= function(x) { return x + i ; }
}
From the above, most would assume that array a would be initialized as follows:
a[0] = function (x) { return x + 0 ; }
a[1] = function (x) { return x + 1 ; }
In reality, this is how a is initialized, since the last value of i in the context is 2:
a[0] = function (x) { return x + 2 ; }
a[1] = function (x) { return x + 2 ; }
The solution is:
var a = new Array();
for (var i=0; i<2; i++) {
a[i]= function(tmp) {
return function (x) { return x + tmp ; }
} (i);
}
The argument/variable tmp holds a local copy of the changing value of i when creating function instances.
A: A function is executed in the scope of the object/function in which it is defined. The said function can access the variables defined in the object/function where it has been defined while it is executing.
And just take it literally.... as the code is written :P
A: A closure is a function that has access to information from the environment it was defined in.
For some, the information is the value in the environment at the time of creation. For others, the information is the variables in the environment at the time of creation.
If the lexical environment that the closure refers to belongs to a function that has exited, then (in the case of a closure referring to the variables in the environment) those lexical variables will continue to exist for reference by the closure.
A closure can be thought of as a special case of global variables, with a private copy created just for the function.
Or it can be thought of as a method where the environment is a specific instance of an object whose properties are the variables in the environment.
The former (closure as environment) is similar to the latter: in the explicit-context alternative, the environment copy is a context variable passed to each function; in the method alternative, the instance variables form that context.
So a closure is a way to call a function without having to specify the context explicitly as a parameter or as the object in a method invocation.
var closure = createclosure(varForClosure);
closure(param1); // closure has access to whatever createclosure gave it access to,
// including the parameter storing varForClosure.
vs
var contextvar = varForClosure; // use a struct for storing more than one..
contextclosure(contextvar, param1);
vs
var contextobj = new contextclass(varForClosure);
contextobj.objclosure(param1);
For maintainable code, I recommend the object-oriented way. However, for a quick and easy set of tasks (for example, creating a callback), a closure can be natural and clearer, especially in the context of lambda or anonymous functions.
A: I tend to learn better by GOOD/BAD comparisons. I like to see working code followed by non-working code that someone is likely to encounter. I put together a jsFiddle that does a comparison and tries to boil down the differences to the simplest explanations I could come up with.
Closures done right:
console.log('CLOSURES DONE RIGHT');
var arr = [];
function createClosure(n) {
return function () {
return 'n = ' + n;
}
}
for (var index = 0; index < 10; index++) {
arr[index] = createClosure(index);
}
for (var fn of arr) {
    console.log(fn());
}
*
*In the above code createClosure(n) is invoked in every iteration of the loop. Note that I named the variable n to highlight that it is a new variable created in a new function scope and is not the same variable as index which is bound to the outer scope.
*This creates a new scope and n is bound to that scope; this means we have 10 separate scopes, one for each iteration.
*createClosure(n) returns a function that returns the n within that scope.
*Within each scope n is bound to whatever value it had when createClosure(n) was invoked so the nested function that gets returned will always return the value of n that it had when createClosure(n) was invoked.
Closures done wrong:
console.log('CLOSURES DONE WRONG');
function createClosureArray() {
var badArr = [];
for (var index = 0; index < 10; index++) {
badArr[index] = function () {
return 'n = ' + index;
};
}
return badArr;
}
var badArr = createClosureArray();
for (var fn of badArr) {
    console.log(fn());
}
*
*In the above code the loop was moved within the createClosureArray() function and the function now just returns the completed array, which at first glance seems more intuitive.
*What might not be obvious is that since createClosureArray() is only invoked once only one scope is created for this function instead of one for every iteration of the loop.
*Within this function a variable named index is defined. The loop runs and adds functions to the array that return index. Note that index is defined within the createClosureArray function which only ever gets invoked one time.
*Because there was only one scope within the createClosureArray() function, index is only bound to a value within that scope. In other words, each time the loop changes the value of index, it changes it for everything that references it within that scope.
*All of the functions added to the array return the SAME index variable from the parent scope where it was defined instead of 10 different ones from 10 different scopes like the first example. The end result is that all 10 functions return the same variable from the same scope.
*After the loop finished and index was done being modified the end value was 10, therefore every function added to the array returns the value of the single index variable which is now set to 10.
Result
CLOSURES DONE RIGHT
n = 0
n = 1
n = 2
n = 3
n = 4
n = 5
n = 6
n = 7
n = 8
n = 9
CLOSURES DONE WRONG
n = 10
n = 10
n = 10
n = 10
n = 10
n = 10
n = 10
n = 10
n = 10
n = 10
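As a side note, and assuming an ES6 environment, declaring the loop variable with let gives each iteration its own binding, which makes the "done wrong" shape work as intended:

function createClosureArrayFixed() {
    var arr = [];
    for (let index = 0; index < 10; index++) { // 'let' instead of 'var'
        arr[index] = function () {
            return 'n = ' + index;
        };
    }
    return arr;
}

var fixedArr = createClosureArrayFixed();
console.log(fixedArr[0]()); // n = 0
console.log(fixedArr[9]()); // n = 9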
A: The following example is a simple illustration of a JavaScript closure.
This is the closure function, which returns a function, with access to its local variable x,
function outer(x){
return function inner(y){
return x+y;
}
}
Invoke the function like this:
var add10 = outer(10);
add10(20); // The result will be 30
add10(40); // The result will be 50
var add20 = outer(20);
add20(20); // The result will be 40
add20(40); // The result will be 60
A: Wikipedia on closures:
In computer science, a closure is a function together with a referencing environment for the nonlocal names (free variables) of that function.
Technically, in JavaScript, every function is a closure. It always has an access to variables defined in the surrounding scope.
Since the scope-defining construct in JavaScript is the function, not the code block as in many other languages, what we usually mean by closure in JavaScript is a function working with nonlocal variables defined in an already executed surrounding function.
Closures are often used for creating functions with some hidden private data (but it's not always the case).
var db = (function() {
// Create a hidden object, which will hold the data
// it's inaccessible from the outside.
var data = {};
// Make a function, which will provide some access to the data.
return function(key, val) {
if (val === undefined) { return data[key] } // Get
else { return data[key] = val } // Set
}
// We are calling the anonymous surrounding function,
// returning the above inner function, which is a closure.
})();
db('x') // -> undefined
db('x', 1) // Set x to 1
db('x') // -> 1
// It's impossible to access the data object itself.
// We are only able to get or set individual items.
The example above is using an anonymous function, which was executed once. But it does not have to be. It can be named (e.g. mkdb) and executed later, generating a database function each time it is invoked. Every generated function will have its own hidden database object. Another usage example of closures is when we don't return a function, but an object containing multiple functions for different purposes, each of those function having access to the same data.
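A minimal sketch of that named variant, using the mkdb name suggested above; each invocation produces an independent database function:

function mkdb() {
    var data = {}; // each call to mkdb creates its own hidden data object
    return function(key, val) {
        if (val === undefined) { return data[key] } // Get
        else { return data[key] = val }             // Set
    }
}

var db1 = mkdb();
var db2 = mkdb();
db1('x', 1);
db2('x', 2);
db1('x') // -> 1: the two databases do not share state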
A: I found very clear chapter 8 section 6, "Closures," of JavaScript: The Definitive Guide by David Flanagan, 6th edition, O'Reilly, 2011. I'll try to paraphrase.
*
*When a function is invoked, a new object is created to hold the local variables for that invocation.
*A function's scope depends on its declaration location, not its execution location.
Now, assume an inner function declared within an outer function and referring to variables of that outer function. Further assume the outer function returns the inner function, as a function. Now there is an external reference to whatever values were in the inner function's scope (which, by our assumptions, includes values from the outer function).
JavaScript will preserve those values, as they have remained in scope of the current execution thanks to being passed out of the completed outer function. All functions are closures, but the closures of interest are the inner functions which, in our assumed scenario, preserve outer function values within their "enclosure" (I hope I'm using language correctly here) when they (the inner functions) are returned from outer functions. I know this doesn't meet the six-year-old requirement, but hopefully it is still helpful.
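A minimal sketch of the scenario just described, with illustrative names (outer, inner, preserved):

function outer() {
    var preserved = "still here"; // lives in outer's invocation object

    function inner() {
        return preserved;
    }

    return inner; // the reference to inner escapes outer
}

var f = outer();  // outer has completed...
console.log(f()); // ...yet "still here" remains reachable through the closure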
A: Maybe you should consider an object-oriented structure instead of inner functions. For example:
var calculate = {
number: 0,
init: function (num) {
this.number = num;
},
add: function (val) {
this.number += val;
},
rem: function (val) {
this.number -= val;
}
};
And read the result from the calculate.number variable; who needs "return" anyway?
// Addition
First, think about scope, which defines what variables you have access to (in JavaScript).
// There are two kinds of scope
Global scope, which includes variables declared outside any function or curly braces:
let globalVariable = "foo";
One thing to keep in mind: once you've declared a global variable, you can use it anywhere in your code, even inside functions.
Local scope, which includes variables that are usable only in a specific part of your code.
Function scope: when you declare a variable in a function, you can access that variable only within the function:
function User(){
let name = "foo";
alert(name);
}
alert(name);//error
// Block scope: when you declare a variable within a block, you can access that variable only within that block:
{
let user = "foo";
alert(user);
}
alert(user);
//Uncaught ReferenceError: user is not defined at.....
//A Closure
function User(fname){
return function(lname){
return fname + " " + lname;
}
}
let names = User("foo");
alert(names("bar"));
// When you create a function within a function, you've created a closure. In our example above, since the outer function is returned, the inner function keeps access to the outer function's scope.
A: A closure is something many JavaScript developers use all the time, but we take it for granted. How it works is not that complicated. Understanding how to use it purposefully is complex.
At its simplest definition (as other answers have pointed out), a closure is basically a function defined inside another function. And that inner function has access to variables defined in the scope of the outer function. The most common practice that you'll see using closures is defining variables and functions in the global scope, and having access to those variables in the function scope of that function.
var x = 1;
function myFN() {
alert(x); //1, as opposed to undefined.
}
// Or
function a() {
var x = 1;
function b() {
alert(x); //1, as opposed to undefined.
}
b();
}
So what?
A closure isn't that special to a JavaScript user until you think about what life would be like without them. In other languages, variables used in a function get cleaned up when that function returns. In the above, x would have been a "null pointer", and you'd need to establish a getter and setter and start passing references. Doesn't sound like JavaScript right? Thank the mighty closure.
Why should I care?
You don't really have to be aware of closures to use them. But as others have also pointed out, they can be leveraged to create faux private variables. Until you get to needing private variables, just use them like you always have.
A: If you want to explain it to a six-year old child then you must find something very much simpler and NO code.
Just tell the child that he is "open", which says that he is able to have relations with some others, his friends. At some point in time, he has determined friends (we can know the names of his friends), that is a closure. If you take a picture of him and his friends then he is "closed" relatively to his friendship ability. But in general, he is "open". During his whole life he will have many different sets of friends. One of these sets is a closure.
A: I'm sure Einstein didn't say it expecting us to pick any esoteric brainstormer topic and run over six-year-olds with futile attempts to get those 'crazy' (and, even worse for them, boring) things into their childish minds :) If I were six years old, I wouldn't want such parents, nor would I make friends with such boring philanthropists, sorry :)
Anyway, for babies, closure is simply a hug, I guess, whatever way you try to explain :) And when you hug a friend of yours then you both kind of share anything you guys have at the moment. It's a rite of passage, once you've hugged somebody you're showing her trust and willingness to let her do with you a lot of things you don't allow and would hide from others. It's an act of friendship :).
I really don't know how to explain it to 5-6 year old babies, nor do I think they would appreciate JavaScript code snippets like:
function Baby(name){
    this.name = name;
    this.iTrustYou = true;
}
Baby.prototype.hug = function (baby) {
var smiles = 0;
if (baby.iTrustYou) {
return function() {
smiles++;
alert(smiles);
};
}
};
var
arman = new Baby("Arman"),
morgan = new Baby("Morgana");
var hug = arman.hug(morgan);
hug();
hug();
For children only:
Closure is hug
Bug is fly
KISS is smooch! :)
A:
The children will never forget the secrets they have shared with their parents, even after their parents are
gone. This is what closures are for functions.
The secrets for JavaScript functions are the private variables
var parent = function() {
var name = "Mary"; // secret
}
Every time you call it, the local variable "name" is created and given the value "Mary". And every time the function exits, the variable is lost and the name is forgotten.
As you may guess, because the variables are re-created every time the function is called, and nobody else will know them, there must be a secret place where they are stored. It could be called Chamber of Secrets or stack or local scope but it doesn't matter. We know they are there, somewhere, hidden in the memory.
But in JavaScript there is this very special thing: functions which are created inside other functions can also know the local variables of their parents and keep them as long as they live.
var parent = function() {
var name = "Mary";
var child = function(childName) {
// I can also see that "name" is "Mary"
}
}
So, as long as we are in the parent function, it can create one or more child functions, which do share the secret variables from the secret place.
But the sad thing is, if the child is also a private variable of its parent function, it would also die when the parent ends, and the secrets would die with them.
So to live, the child has to leave before it's too late
var parent = function() {
var name = "Mary";
var child = function(childName) {
return "My name is " + childName +", child of " + name;
}
return child; // child leaves the parent ->
}
var child = parent(); // < - and here it is outside
And now, even though Mary is "no longer running", the memory of her is not lost and her child will always remember her name and other secrets they shared during their time together.
So, if you call the child "Alice", she will respond
child("Alice") => "My name is Alice, child of Mary"
That's all there is to tell.
A: After a function is invoked and returns, its local scope would ordinarily be gone. If that function set up something like a callback function, that callback is still alive. If the callback references some local variable from the parent function's immediate environment, you might naturally expect that variable to be inaccessible and to come back undefined.
Closures ensure that any property that is referenced by the callback function is available for use by that function, even when its parent function may have gone out of scope.
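A small sketch of that situation; the names parent and local, and the timeout, are assumptions for illustration:

function parent() {
    var local = "I outlived my parent";
    setTimeout(function callback() {
        console.log(local); // still accessible here, not undefined
    }, 1000);
}

parent(); // parent returns immediately; the callback fires later and still sees 'local'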
A: Closures are a means through which inner functions can refer to the variables present in their outer enclosing function after their parent functions have already terminated.
// A function that generates a new function for adding numbers.
function addGenerator( num ) {
// Return a simple function for adding two numbers
// with the first number borrowed from the generator
return function( toAdd ) {
return num + toAdd
};
}
// addFive now contains a function that takes one argument,
// adds five to it, and returns the resulting number.
var addFive = addGenerator( 5 );
// We can see here that the result of the addFive function is 9,
// when passed an argument of 4.
alert( addFive( 4 ) == 9 );
A: I put together an interactive JavaScript tutorial to explain how closures work.
What's a Closure?
Here's one of the examples:
var create = function (x) {
var f = function () {
return x; // We can refer to x here!
};
return f;
};
// 'create' takes one argument, creates a function
var g = create(42);
// g is a function that takes no arguments now
var y = g();
// y is 42 here
A: If you understand it well, you can explain it simply. And the simplest way is abstracting it from the context. Code aside, even programming aside, a metaphor will do it better.
Let's imagine that a function is a room whose walls are made of glass, but a special glass, like the one in an interrogation room. From outside they are opaque; from inside they are transparent. There can be rooms inside other rooms, and the only way of contact is a phone.
If you call from the outside, you don't know what is in the room, but you know that the people inside will do a task if you give them certain information. They can see outside, so they can ask you for stuff that is outside and make changes to that stuff, but you can't change what is inside from the outside; you can't even see (know) what is inside. The people inside the room you are calling can see what is outside, but not what is inside the rooms within that room, so they interact with those the same way you do from outside. The people inside the innermost rooms can see many things, but the people in the outermost room don't even know about the innermost rooms' existence.
For each call to an inner room, the people in that room keep a record of the information about that specific call, and they are so good at doing that that they never mix up one call's stuff with another call's stuff.
Rooms are functions, visibility is scope, people doing tasks are statements, stuff is objects, phone calls are function calls, phone call information is arguments, call records are scope instances, and the outermost room is the global object.
A: Imagine there is a very large park in your town where you see a magician called Mr. Coder starting baseball games in different corners of the park using his magic wand, called JavaScript.
Naturally each baseball game has the exact same rules and each game has its own score board.
Naturally, the scores of one baseball game are completely separate from the other games.
A closure is the special way Mr. Coder keeps the scoring of all his magical baseball games separate; translated to code, it might look like the sketch below.
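A hedged translation of the analogy into code; startGame and its scoreboard are illustrative names, not part of the original answer:

function startGame(corner) {
    var score = 0; // each game gets its own scoreboard

    return {
        scoreRun: function () { return ++score; },
        showScore: function () { return corner + ': ' + score; }
    };
}

var northGame = startGame('north corner');
var southGame = startGame('south corner');

northGame.scoreRun();
northGame.scoreRun();
southGame.scoreRun();

console.log(northGame.showScore()); // north corner: 2
console.log(southGame.showScore()); // south corner: 1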
A: Pinocchio: Closures in 1883 (over a century before JavaScript)
I think it can best be explained to a 6-year-old with a nice adventure... The part of the Adventures of Pinocchio where Pinocchio is being swallowed by an oversized dogfish...
var tellStoryOfPinocchio = function(original) {
// Prepare for exciting things to happen
var pinocchioFindsMisterGeppetto;
var happyEnding;
// The story starts where Pinocchio searches for his 'father'
var pinocchio = {
name: 'Pinocchio',
location: 'in the sea',
noseLength: 2
};
// Is it a dog... is it a fish...
// The dogfish appears, however there is no such concept as the belly
// of the monster, there is just a monster...
var terribleDogfish = {
swallowWhole: function(snack) {
// The swallowing of Pinocchio introduces a new environment (for the
// things happening inside it)...
// The BELLY closure... with all of its guts and attributes
var mysteriousLightLocation = 'at Gepetto\'s ship';
// Yes: in my version of the story the monsters mouth is directly
// connected to its belly... This might explain the low ratings
// I had for biology...
var mouthLocation = 'in the monsters mouth and then outside';
var puppet = snack;
puppet.location = 'inside the belly';
alert(snack.name + ' is swallowed by the terrible dogfish...');
// Being inside the belly, Pinocchio can now experience new adventures inside it
pinocchioFindsMisterGeppetto = function() {
// The event of Pinocchio finding Mister Geppetto happens inside the
// belly and so it makes sence that it refers to the things inside
// the belly (closure) like the mysterious light and of course the
// hero Pinocchio himself!
alert(puppet.name + ' sees a mysterious light (also in the belly of the dogfish) in the distance and swims to it to find Mister Geppetto! He survived on ship supplies for two years after being swallowed himself. ');
puppet.location = mysteriousLightLocation;
alert(puppet.name + ' tells Mister Geppetto he missed him every single day! ');
puppet.noseLength++;
}
happyEnding = function() {
// The escape of Pinocchio and Mister Geppetto happens inside the belly:
// it refers to Pinocchio and the mouth of the beast.
alert('After finding Mister Gepetto, ' + puppet.name + ' and Mister Gepetto travel to the mouth of the monster.');
alert('The monster sleeps with its mouth open above the surface of the water. They escape through its mouth. ');
puppet.location = mouthLocation;
if (original) {
alert(puppet.name + ' is eventually hanged for his innumerable faults. ');
} else {
alert(puppet.name + ' is eventually turned into a real boy and they all lived happily ever after...');
}
}
}
}
alert('Once upon a time...');
alert('Fast forward to the moment that Pinocchio is searching for his \'father\'...');
alert('Pinocchio is ' + pinocchio.location + '.');
terribleDogfish.swallowWhole(pinocchio);
alert('Pinocchio is ' + pinocchio.location + '.');
pinocchioFindsMisterGeppetto();
alert('Pinocchio is ' + pinocchio.location + '.');
happyEnding();
alert('Pinocchio is ' + pinocchio.location + '.');
if (pinocchio.noseLength > 2)
console.log('Hmmm... apparently a little white lie was told. ');
}
tellStoryOfPinocchio(false);
A:
A closure is a function having access to the parent scope, even after the parent function has closed.
var add = (function() {
var counter = 0;
return function() {
return counter += 1;
}
})();
add();
add();
add();
// The counter is now 3
Example explained:
*
*The variable add is assigned the return value of a self-invoking function.
*The self-invoking function only runs once. It sets the counter to zero (0), and returns a function expression.
*This way add becomes a function. The "wonderful" part is that it can access the counter in the parent scope.
*This is called a JavaScript closure. It makes it possible for a function to have "private" variables.
*The counter is protected by the scope of the anonymous function, and can only be changed using the add function.
Source
A: Closures are a somewhat advanced, and often misunderstood feature of the JavaScript language. Simply put, closures are objects that contain a function and a reference to the environment in which the function was created. However, in order to fully understand closures, there are two other features of the JavaScript language that must first be understood―first-class functions and inner functions.
First-Class Functions
In programming languages, functions are considered to be first-class citizens if they can be manipulated like any other data type. For example, first-class functions can be constructed at runtime and assigned to variables. They can also be passed to, and returned by other functions. In addition to meeting the previously mentioned criteria, JavaScript functions also have their own properties and methods. The following example shows some of the capabilities of first-class functions. In the example, two functions are created and assigned to the variables “foo” and “bar”. The function stored in “foo” displays a dialog box, while “bar” simply returns whatever argument is passed to it. The last line of the example does several things. First, the function stored in “bar” is called with “foo” as its argument. “bar” then returns the “foo” function reference. Finally, the returned “foo” reference is called, causing “Hello World!” to be displayed.
var foo = function() {
alert("Hello World!");
};
var bar = function(arg) {
return arg;
};
bar(foo)();
Inner Functions
Inner functions, also referred to as nested functions, are functions that are defined inside of another function (referred to as the outer function). Each time the outer function is called, an instance of the inner function is created. The following example shows how inner functions are used. In this case, add() is the outer function. Inside of add(), the doAdd() inner function is defined and called.
function add(value1, value2) {
function doAdd(operand1, operand2) {
return operand1 + operand2;
}
return doAdd(value1, value2);
}
var foo = add(1, 2);
// foo equals 3
One important characteristic of inner functions is that they have implicit access to the outer function’s scope. This means that the inner function can use the variables, arguments, etc. of the outer function. In the previous example, the “value1” and “value2” arguments of add() were passed to doAdd() as the “operand1” and “operand2” arguments. However, this is unnecessary because doAdd() has direct access to “value1” and “value2”. The previous example has been rewritten below to show how doAdd() can use “value1” and “value2”.
function add(value1, value2) {
function doAdd() {
return value1 + value2;
}
return doAdd();
}
var foo = add(1, 2);
// foo equals 3
Creating Closures
A closure is created when an inner function is made accessible from
outside of the function that created it. This typically occurs when an
outer function returns an inner function. When this happens, the
inner function maintains a reference to the environment in which it
was created. This means that it remembers all of the variables (and
their values) that were in scope at the time. The following example
shows how a closure is created and used.
function add(value1) {
return function doAdd(value2) {
return value1 + value2;
};
}
var increment = add(1);
var foo = increment(2);
// foo equals 3
There are a number of things to note about this example.
The add() function returns its inner function doAdd(). By returning a reference to an inner function, a closure is created.
“value1” is a local variable of add(), and a non-local variable of doAdd(). Non-local variables refer to variables that are neither in the local nor the global scope. “value2” is a local variable of doAdd().
When add(1) is called, a closure is created and stored in “increment”. In the closure’s referencing environment, “value1” is bound to the value one. Variables that are bound are also said to be closed over. This is where the name closure comes from.
When increment(2) is called, the closure is entered. This means that doAdd() is called, with the “value1” variable holding the value one. The closure can essentially be thought of as creating the following function.
function increment(value2) {
return 1 + value2;
}
When to Use Closures
Closures can be used to accomplish many things. They are very useful
for things like configuring callback functions with parameters. This
section covers two scenarios where closures can make your life as a
developer much simpler.
Working With Timers
Closures are useful when used in conjunction with the setTimeout() and setInterval() functions. To be more specific, closures allow you to pass arguments to the callback functions of setTimeout() and setInterval(). For example, the following code prints the string “some message” once per second by calling showMessage().
<!DOCTYPE html>
<html lang="en">
<head>
<title>Closures</title>
<meta charset="UTF-8" />
<script>
window.addEventListener("load", function() {
window.setInterval(showMessage, 1000, "some message<br />");
});
function showMessage(message) {
document.getElementById("message").innerHTML += message;
}
</script>
</head>
<body>
<span id="message"></span>
</body>
</html>
Unfortunately, Internet Explorer does not support passing callback arguments via setInterval(). Instead of displaying “some message”, Internet Explorer displays “undefined” (since no value is actually passed to showMessage()). To work around this issue, a closure can be created which binds the “message” argument to the desired value. The closure can then be used as the callback function for setInterval(). To illustrate this concept, the JavaScript code from the previous example has been rewritten below to use a closure.
window.addEventListener("load", function() {
var showMessage = getClosure("some message<br />");
window.setInterval(showMessage, 1000);
});
function getClosure(message) {
function showMessage() {
document.getElementById("message").innerHTML += message;
}
return showMessage;
}
Emulating Private Data
Many object-oriented languages support the concept of private member data. However, JavaScript is not a pure object-oriented language and does not support private data. But, it is possible to emulate private data using closures. Recall that a closure contains a reference to the environment in which it was originally created―which is now out of scope. Since the variables in the referencing environment are only accessible from the closure function, they are essentially private data.
The following example shows a constructor for a simple Person class. When each Person is created, it is given a name via the “name” argument. Internally, the Person stores its name in the “_name” variable. Following good object-oriented programming practices, the method getName() is also provided for retrieving the name.
function Person(name) {
this._name = name;
this.getName = function() {
return this._name;
};
}
There is still one major problem with the Person class. Because JavaScript does not support private data, there is nothing stopping somebody else from coming along and changing the name. For example, the following code creates a Person named Colin, and then changes its name to Tom.
var person = new Person("Colin");
person._name = "Tom";
// person.getName() now returns "Tom"
Personally, I wouldn’t like it if just anyone could come along and legally change my name. In order to stop this from happening, a closure can be used to make the “_name” variable private. The Person constructor has been rewritten below using a closure. Note that “_name” is now a local variable of the Person constructor instead of an object property. A closure is formed because the outer function, Person() exposes an inner function by creating the public getName() method.
function Person(name) {
var _name = name;
this.getName = function() {
return _name;
};
}
Now, when getName() is called, it is guaranteed to return the value that was originally passed to the constructor. It is still possible for someone to add a new “_name” property to the object, but the internal workings of the object will not be affected as long as they refer to the variable bound by the closure. The following code shows that the “_name” variable is, indeed, private.
var person = new Person("Colin");
person._name = "Tom";
// person._name is "Tom" but person.getName() returns "Colin"
When Not to Use Closures
It is important to understand how closures work and when to use them.
It is equally important to understand when they are not the right tool
for the job at hand. Overusing closures can cause scripts to execute
slowly and consume unnecessary memory. And because closures are so
simple to create, it is possible to misuse them without even knowing
it. This section covers several scenarios where closures should be
used with caution.
In Loops
Creating closures within loops can have misleading results. An example of this is shown below. In this example, three buttons are created. When “button1” is clicked, an alert should be displayed that says “Clicked button 1”. Similar messages should be shown for “button2” and “button3”. However, when this code is run, all of the buttons show “Clicked button 4”. This is because, by the time one of the buttons is clicked, the loop has finished executing, and the loop variable has reached its final value of four.
<!DOCTYPE html>
<html lang="en">
<head>
<title>Closures</title>
<meta charset="UTF-8" />
<script>
window.addEventListener("load", function() {
for (var i = 1; i < 4; i++) {
var button = document.getElementById("button" + i);
button.addEventListener("click", function() {
alert("Clicked button " + i);
});
}
});
</script>
</head>
<body>
<input type="button" id="button1" value="One" />
<input type="button" id="button2" value="Two" />
<input type="button" id="button3" value="Three" />
</body>
</html>
To solve this problem, the closure must be decoupled from the actual loop variable. This can be done by calling a new function, which in turn creates a new referencing environment. The following example shows how this is done. The loop variable is passed to the getHandler() function. getHandler() then returns a closure that is independent of the original “for” loop.
function getHandler(i) {
return function handler() {
alert("Clicked button " + i);
};
}
window.addEventListener("load", function() {
for (var i = 1; i < 4; i++) {
var button = document.getElementById("button" + i);
button.addEventListener("click", getHandler(i));
}
});
Unnecessary Use in Constructors
Constructor functions are another common source of closure misuse.
We’ve seen how closures can be used to emulate private data. However,
it is overkill to implement methods as closures if they don’t actually
access the private data. The following example revisits the Person
class, but this time adds a sayHello() method which doesn’t use the
private data.
function Person(name) {
var _name = name;
this.getName = function() {
return _name;
};
this.sayHello = function() {
alert("Hello!");
};
}
Each time a Person is instantiated, time is spent creating the
sayHello() method. If many Person objects are created, this becomes a
waste of time. A better approach would be to add sayHello() to the
Person prototype. By adding to the prototype, all Person objects can
share the same method. This saves time in the constructor by not
having to create a closure for each instance. The previous example is
rewritten below with the extraneous closure moved into the prototype.
function Person(name) {
var _name = name;
this.getName = function() {
return _name;
};
}
Person.prototype.sayHello = function() {
alert("Hello!");
};
Things to Remember
*
*Closures contain a function and a reference to the environment in
which the function was created.
*A closure is formed when an outer function exposes an inner function.
*Closures can be used to easily pass parameters to callback functions.
*Private data can be emulated by using closures. This is common in
object-oriented programming and namespace design.
*Closures should not be overused in constructors. Adding to the
prototype is a better idea.
Link
A: I do not understand why the answers are so complex here.
Here is a closure:
var a = 42;
function b() { return a; }
Yes. You probably use that many times a day.
There is no reason to believe closures are a complex design hack to address specific problems. No, closures are just about using a variable that comes from a higher scope from the perspective of where the function was declared (not run).
Now what it allows you to do can be more spectacular, see other answers.
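To make "declared (not run)" concrete, here is a tiny sketch (the names are arbitrary):
var x = 10;
function f() { return x; } // x resolves in the scope where f was declared...
function g() {
    var x = 20;
    return f();
}
console.log(g()); // 10, not 20 -- ...not in the scope where f happens to be called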
A: A closure is created when the inner function is somehow made available to any scope outside the outer function.
Example:
var outer = function(params){ //Outer function defines a variable called params
var inner = function(){ // Inner function has access to the params variable of the outer function
return params;
}
return inner; //Return inner function exposing it to outer scope
},
myFunc = outer("myParams");
myFunc(); //Returns "myParams"
A: A closure is a block of code which meets three criteria:
*
*It can be passed around as a value and
*executed on demand by anyone who has that value, at which time
*it can refer to variables from the context in which it was created
(that is, it is closed with respect to variable access, in the
mathematical sense of the word "closed").
(The word "closure" actually has an imprecise meaning, and some people don't think that criterion #1 is part of the definition. I think it is.)
Closures are a mainstay of functional languages, but they are present in many other languages as well (for example, Java's anonymous inner classes). You can do cool stuff with them: they allow deferred execution and some elegant tricks of style.
By: Paul Cantrell, @ http://innig.net/software/ruby/closures-in-ruby
A: Closures are simple
You probably shouldn't tell a six-year old about closures, but if you do, you might say that closure gives an ability to gain access to a variable declared in some other function scope.
function getA() {
var a = [];
// this action happens later,
// after the function returned
// the `a` value
setTimeout(function() {
a.splice(0, 0, 1, 2, 3, 4, 5);
});
return a;
}
var a = getA();
out('What is `a` length?');
out('`a` length is ' + a.length);
setTimeout(function() {
out('No wait...');
out('`a` length is ' + a.length);
out('OK :|')
});
<pre id="output"></pre>
<script>
function out(k) {
document.getElementById('output').innerHTML += '> ' + k + '\n';
}
</script>
A: Considering the question is about explaining it simply as if to a 6-year-old, my answer would be:
"When you declare a function in JavaScript it has forever access to all the variables and functions that were available in the line before that function declaration. The function and all the outer variables and functions that it has access to is what we call a closure."
A: Functions containing no free variables are called pure functions.
Functions containing one or more free variables are called closures.
var pure = function pure(x){
return x
// only own environment is used
}
var foo = "bar"
var closure = function closure(){
return foo
// foo is free variable from the outer environment
}
src: https://leanpub.com/javascriptallongesix/read#leanpub-auto-if-functions-without-free-variables-are-pure-are-closures-impure
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7621"
}
|
Q: How do you integrate a TDD approach with VisualStudio? I am interested in hearing about experiences using TDD and unit testing for C++ in general with Visual Studio 2005 (Professional)
First some background. We have a fairly large project and much of it has been developed on Linux using CppUnit for the unit tests. The project is divided into several libraries, each with their own set of tests. I have a simple script which compiles the library, compiles the test suite and then runs the tests. So after making changes in the code I just run "test" from a command line and the tests run.
Now, most of the developers are using Visual Studio 2005 on Windows for the development of this product. Of course they can still run the tests from a command line using nmake but involves extra steps and I would prefer to have a much more integrated solution.
So my question has two parts.
Firstly, what is the best way of laying out the code for tests on a large code base? Is it normal to create several test projects in a solution, one for each library?
Secondly are there any tools for integrating CppUnit tests into Visual Studio? With dependencies set up corectly running the test project should run the tests but currently the results still appear in a command window.
A: One of the projects at my company does exactly this. We use a unit test framework called CXXTest (http://cxxtest.sourceforge.net/guide.html). We really like this framework for C++ because it only requires that you write a header file that contains your unit tests. The .CPP files are created by a script (both Python and Perl scripts are provided).
We integrate with Visual Studio by providing a post-build step that builds the unit tests (if they need building) and then executes them. The output (showing what passed and what failed) is displayed in the Output window -- you never need to leave the IDE.
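For readers who haven't seen CxxTest, a suite is just a header, and the cxxtestgen script generates the runner .cpp that the post-build step then executes. A minimal sketch (the suite and test names are made up):
#include <cxxtest/TestSuite.h>

class MyTestSuite : public CxxTest::TestSuite
{
public:
    void testAddition() // CxxTest picks up methods whose names start with "test"
    {
        TS_ASSERT_EQUALS(1 + 1, 2); // failures are printed to the Output window
    }
};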
A: I use the Boost Test framework. I tend to split my code into .lib files and will have a separate console-mode EXE test project for each. When the test project is built it makes use of the 'Post build stage' to launch itself, thus running the tests. You could make each test project a dependency of your main application so that each time it builds, all tests are run first, but this can be time-consuming. Instead I tend to run the test projects by hand as needed, but my automated nightly build system will run all test projects as a matter of course (I script this and if any tests fail, the build fails and I get an email notification).
More details here.
A: *
*I find the following folder hierarchy useful. Create code and tests as the subfolders of ProjectFolder. Create 2 solutions code\Project.sln and tests\Tests.sln. Now for every class library or executable created, e.g. Customers.dll have a corresponding test dll. So code\Customers\Customers.csproj will have tests\Customers\TestCustomers.csproj, which references the former.
*Integrating CppUnit into Visual Studio would be along the lines of choosing the right application in the Project Properties 'Debug' settings. I think this page has what you need for single-keypress test execution + reporting within the IDE.
A: Here is what I do:
*
*Create a test executable project, in your main solution, which uses source from only the unit, the unit's tests and the test framework.
*Make the test runner generate a text file on successful run of the tests, so visual studio can track dependencies.
*Add a project to launch your test runner and generate the test file. This means you now have two projects per test.
*Make the test runner a dependency of the library that incorporates the unit.
Personally, I don't think the test framework (Google Test, Boost test, CppUnit, etc) matters that much. Most are pretty much functionally equivalent.
I'm not entirely happy with the number of projects generated, but I consider this a Visual Studio GUI issue, in the sense that its actually quite useful to have these projects included like this for purposes such as debugging.
I tried using post-build steps to run the tests, but this unfortunately meant that the build was not interrupted when the first failure occurred.
A: My team is currently using a system where we have an automated nightly build (that can also be run from the project build dashboard by anyone) that includes a VS2k5 "test" solution. The test solution holds all the unit test projects; one unit test project for every "unit" of code in the main project.
When the automated build runs, it builds the main solution, then the test solution, and finally runs all the executables produced by the test solution (a Perl script glues this together). The results of the compile as well as the test execution (EXIT_SUCCESS, EXIT_FAILURE) are used to update the project build dashboard.
That EXIT_FAILURE trick can also be applied to a custom build step of the main project: if the unit test custom build step returns EXIT_FAILURE, then the build itself fails.
A: You can also use managed C++ to write unit tests in Visual Studio, using the unit testing framework that's built in.
A: Have a look at CUnitWin32. There's an example included as well.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: Is querying the MySQL information_schema database a good way to find related tables? I have a table which is referenced by foreign keys on many other tables. In my program if I want to delete one of these rows I need to first search for dependencies and present them to the user - "This object depends on x from table y, z from table q, etc". I also expect the number of tables which have foreign keys to this table to grow considerably over time.
Is the information_schema database a good way to do a search for all dependencies? I tried to query it to retrieve a list of all tables which have foreign keys to my table, then iterate over the result and select all entries from each table where the foreign key value matches the value the user is trying to delete. The query I have is as follows:
SELECT * FROM `KEY_COLUMN_USAGE` kcu
LEFT JOIN TABLE_CONSTRAINTS tc
ON tc.CONSTRAINT_NAME = kcu.CONSTRAINT_NAME
WHERE tc.CONSTRAINT_TYPE='FOREIGN KEY'
AND (kcu.REFERENCED_TABLE_SCHEMA='db')
AND (kcu.REFERENCED_TABLE_NAME = 'testtable')
which works perfectly for determining the tables which I need to search, however it is very slow. The query takes around 1 to 2 seconds at best to execute on my development machine, which will reduce a lot when I run it on my production server, but will still be quite slow.
I need to know if it's a bad idea to use information_schema in this way. If not, how I can extract better performance from the query. Is the query I'm using solid or is there a better way to do it? If so, how best should I tackle this problem from a maintainability perspective.
A: Dvorak is right, INFORMATION_SCHEMA is intended for that.
Concerning your performance concerns, there are several ways you can improve the performance
*
*Easy way, but not much improvement will come from it:
Store the info in a static variable. At least the query will occur only once per page
*Use persistent caching : The alternative PHP cache can help you (see http://fr3.php.net/manual/en/book.apc.php).
The info you'll get from the information schema is a good candidate to store in a persistent cache.
*Use a ORM library, such as doctrine (http://www.doctrine-project.org/)
A look at the file lib/Doctrine/Import/Mysql.php will show that it does exactly what you need, and much more.
A: I think this is exactly the sort of thing that INFORMATION_SCHEMA is intended for.
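As an aside, the join in the question can usually be dropped: in MySQL's KEY_COLUMN_USAGE, only foreign-key rows carry a non-NULL REFERENCED_TABLE_NAME, so a sketch like the following should return the same referencing tables without touching TABLE_CONSTRAINTS (worth benchmarking on your own schema):
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, CONSTRAINT_NAME
FROM information_schema.KEY_COLUMN_USAGE
WHERE REFERENCED_TABLE_SCHEMA = 'db'
  AND REFERENCED_TABLE_NAME = 'testtable';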
A: I was looking into this as well. I want to use the KEY_COLUMN_USAGE for some CRUD. And I noticed that there aren't any keys or indexes available on these tables. That could be the reason for poor performance.
A: Using INFORMATION_SCHEMA for this is OK on static or administrative systems but is not recommended for a transactional application function as INFORMATION_SCHEMA is probably implemented as views on top of the native system data dictionary.
This would be a fairly inefficient way to do a generic 'D' operation for a CRUD library. Also, on many systems (Oracle comes to mind) the system data dictionary is actually implemented as views on a lower level data structure. This means that the native system data dictionary may also not be suitable for this either. The system data dictionary may also change from version to version.
There should be relatively few instances where a straight 'delete' of a record and all of its children is the right way to go. Doing this as a generic function may get you little practical benefit. Also, if the foreign keys are not present in the database you will get orphaned children lying about, as this approach is dependent on the FKs being present to know which children to delete.
A: It slows my applications to a crawl, but I need the foreign key constraint data to get everything hooked together properly.
The delays are huge when querying information schema, and make a page that used to load instantly, load in 3-4 seconds.
Well, at least foreign key constraints are available in MySQL 5, that makes for more robust application development, but obviously at a cost.
People have been complaining about this issue since 2006 based on my Google searches, and the problem remains -- must not be an easy fix ;--(
A: In case this ever gets found on google, it's also worth noting that the information_schema occasionally differs from what is returned by show create table.
There's a good example of this in this DBA Stack Exchange thread. After executing this command:
create table rolando (num int not null, primary key (num) using hash);
Check the results:
mysql> show create table rolando\G
(...)
PRIMARY KEY (`num`) USING HASH
mysql> show indexes from rolando;
(...) | Index_type | (...)
(...) | BTREE | (...)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: What is the coolest thing you have done with threads? Just wondering
A: I'd love to say that I've cleverly parallelized an algorithm using lock-free data structures in order to get n-fold performance increase on an n-core processor. But I've never had a practical need, especially since most of my professional code has been for single-core systems.
Almost every time I've used more than one thread, in any language, it has been one of two reasons:
*
*the system (or a third party) offers a blocking API and I need an asynchronous one (or at least to let several ops run at once).
*to take advantage of pre-emptive priority-based scheduling to keep everything nice and responsive without having to chop all my slow operations into tiny pieces by hand.
Necessary, but not what you'd call glamorous.
A: I'm so useless with threads, I have to get my girlfriend to sew my buttons back on.
A: I made them work as I wanted once ! That was cool!
A: We once wrote a multithreaded application that essentially read a file line by line and did a lookup in an internal database to see if there was a match, appended some data and moved to the next line. The complexity though was that there could be multiple files processed at the same time, and multiple records per file could be searched. There was a manager class that knew how many threads were available and was responsible for dividing out available worker threads to each file (if there was only one file to be processed it would receive all 40 threads; if there were 5 files, depending on priority, each would receive a fraction of those 40). We used async delegates at first but noticed it would be hard to catch any exceptions that may occur in the async threads, so we used the traditional thread start in .NET.
The key to this was having a collection of ManualResetEvents in the manager class, comprised of ManualResetEvents that were public properties in the worker classes (threads). When a worker thread would finish, it would signal its ManualResetEvent, which would be picked up by the .WaitAny() on the manager class. The manager would then know that one of the threads was finished and would start a new one. In reality it was a little more complex than this, but this was the core of what it did.
The hard part was unit testing this to ensure that at any given times the correct number of threads were running. We had tests that would act as if there was only one file in queue (gets all 40 threads), then another file was introduced and the allocation of threads would have to cycle down to 20 a piece for the two files. We had "Mock Objects" that essentially had a thread sleep parameter that we would pass a value for (in ms) to control how long each thread would take to process so we would have a good idea of when to do our assertion or interrogate the file processor to see how many threads, records, it was currently processing. There were also tests that would have two files running with 20 threads a piece, then one file would finish and as all the record threads would finish on the first file they would be reallocated to the second file to help it finish faster.
I'm sure this isn't the clearest explanation of what we actually did, I really need to write a blog post about it. If anyone needs more information on it please contact me, I'll try to answer as best I can.
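For the curious, a stripped-down sketch of that signalling pattern might look like the following; DoWork, threadCount, and the slot bookkeeping are illustrative placeholders, not the production code described above:
using System.Threading;

class Manager
{
    static void DoWork(int slot) { /* placeholder for the real per-file work */ }

    static void Main()
    {
        int threadCount = 4; // illustrative
        ManualResetEvent[] done = new ManualResetEvent[threadCount];
        for (int i = 0; i < threadCount; i++)
        {
            done[i] = new ManualResetEvent(false);
            int slot = i; // copy, so each thread sees its own index
            new Thread(delegate() { DoWork(slot); done[slot].Set(); }).Start();
        }
        // The manager wakes as soon as ANY worker signals completion and
        // can then reallocate that slot to another file or record.
        int finished = WaitHandle.WaitAny(done);
        done[finished].Reset();
    }
}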
A: I have recently been hired to help with a quite large and complicated multithreaded application running on Microsoft Windows systems, with reader/writer locking objects. That made it difficult to search for deadlocks, so I wrote a deadlock detection object executing in its own thread that was sent information messages (with PostThreadMessage) from the locking objects whenever they were attempting to lock, succeeded or failed to lock, and unlocked.
By looking up the different threads and the shared locks and their state in a truth table it was then possible to without any doubt pinpoint the cause and location of the deadlocks.
A: I figured out that I can create a FIFO, with ONLY ONE WRITER AND ONLY ONE READER, without using any synchronization primitives.
(So a master-slave with 2*n FIFOs... without any mutex/semaphore!!)
If you have a long linked list, you don't need to synchronize for inserting at one end and removing at the other end.
The trick is to always keep one element in the list (-;
The code is really small.
My pride was 'dented' when a hardware guy told me... that's obvious (-:
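A sketch of the structure being described, assuming exactly one writer thread and one reader thread (this shows the shape of the trick only; production code additionally needs volatile/memory barriers for cross-core visibility):
class SpscFifo<T>
{
    class Node { public T Value; public Node Next; }

    Node head; // touched only by the reader
    Node tail; // touched only by the writer

    public SpscFifo() { head = tail = new Node(); } // the permanent dummy element

    public void Enqueue(T value) // writer thread only
    {
        Node n = new Node { Value = value };
        tail.Next = n; // publish the node...
        tail = n;      // ...then advance the tail
    }

    public bool TryDequeue(out T value) // reader thread only
    {
        if (head.Next == null) { value = default(T); return false; }
        head = head.Next; // the old head node is discarded
        value = head.Value;
        return true;
    }
}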
A: I don't know if this counts, but for me a working multithreaded software is fascinating in itself, not so much the purpose they achieve. You have like 10, 20, 100 workers working in your program with the same infrastructure (Singletons, files etc.). Having everything work in harmony with mutexes, semaphores, context switches etc. is wonderful to observe, like being a manager and your team is working perfectly together. You read the application log, see the threads cooperate for a common goal, and it's just great. Can anybody relate to this feeling?
A: One of the most interesting things I have done with threads was write a multi-threaded application to solve a maze.
While it's nothing ground breaking, it was definitely interesting.
A: I created a Windows service to poll a bunch of RSS feeds and store the retrieved information in a database. Since the application can contain a lot of RSS feeds, a pool of threads queries the feeds in batches every n intervals. Like Thorsten79 commented, the most exciting part is watching your threads cooperate and work together as a team.
A: Distributed link checking system for a web crawler. Since web crawling is a very easily threaded solution I don't know if that counts...
I did write an algorithm to crack DES when I was in college that ran on a custom 256 CPU machine at the University. That was pretty neat, but was really just a divide and conquer type of problem.
A: Ian P, mind elaborating? AFAIK you don't need threading to solve a maze, unless of course the maze is so complicated that the wait becomes unbearable and you have to add a status bar so that users don't get bored and think your program hangs.
A: Oh -- they were not complicated mazes.
The mazes were defined in an array, similar to this:
String[] MazeArray = new String[5];
MazeArray[0] = "---X---X-------XF";
MazeArray[0] = "-X-X-X---XXXXXXX-";
MazeArray[0] = "-X-X-X-X-X---X---";
MazeArray[0] = "-X-X-X-XXX-X-X-X-";
MazeArray[0] = "SX---X-----X---X-";
I'd spawn a new worker thread when there is a fork in the path, and have that thread investigate that path. Then, through some basic logic, I could determine the shortest path, longest path, etc.
The example listed is obviously over simplified, but it should illustrate the point. It's a fun exercise, you should try it if you have a few minutes to spare.
Ian
A: Well this is really vanilla, but it's a nice stepping stone toward more creative threading:
In upload-new-data-and-process-it-for-me type requests, the request thread accepts the data and throws it in a queue, and the user goes on their merry way. One or more background threads continually dequeue items and process them in some way.
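A minimal sketch of that hand-off in pre-.NET 4 style (the string payload and the Process body are placeholders):
using System.Collections.Generic;
using System.Threading;

class WorkQueue
{
    readonly Queue<string> queue = new Queue<string>();
    readonly object sync = new object();

    public void Enqueue(string item) // called by the request thread
    {
        lock (sync)
        {
            queue.Enqueue(item);
            Monitor.Pulse(sync); // wake one sleeping worker
        }
    }

    public void WorkerLoop() // runs on a background thread
    {
        while (true)
        {
            string item;
            lock (sync)
            {
                while (queue.Count == 0)
                    Monitor.Wait(sync); // releases the lock while waiting
                item = queue.Dequeue();
            }
            Process(item); // hypothetical processing step
        }
    }

    void Process(string item) { /* ... */ }
}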
A: I wrote a small fcgi servlet model in C++ which allocated a new thread instantly for each new request.
If you don't think that's cool, you should see what happened when I pumped 3K req/s through it. I accidentally forgot to clean them up, and even though they all self-terminated and stopped actually using memory, they still consumed address space, and I had the app quickly reserve more memory than I had and cease to create threads.
(I was on 32-bit at the time and it literally stopped after creating 2^32 threads. Goodness knows what it would do with 64 bits.)
Also, I created a multi-threaded (well, forking) breadth-first fork-on-directory replacement for the famous command rm -rf. Mainly I was frustrated with rm -rf seeming to wait for IO to respond with a yay/nay response, which made it slow on some directory structures (such as squid caches). The only real caveat of this code is that it only had entertainment value, and if ever used on a whole filesystem, it would be a race between 2 scenarios:
*
*Disk being wiped making it unusable, and possibly erasing the commands that permitted you to tell it to die.
*The system "Fork Bombing" due to the massive fork rate and making it so highly operation intensive it no longer even responds to commands.
And in the case of "fork bombing" the massive spawn rate could result in the recursive rm stopping itself ( or hit the Ulimit if any )
A: A simple client server with asynchronous input/output.
A: I guess implementing multi-threading for DOS was the coolest thing I've done with threads.
A: I created a threading library with lock-free intertask communication to simplify multithreading programming. Delphi only: OmniThreadLibrary.
(To give credit where it is due - I didn't write the lock-free structures, GJ did.)
A: I wrote an image filtering framework in Java that uses thread pools.
I was surprised how much faster the filters run in multiple parallel threads, even on a single-processor, single-core machine. When I find some free time, I want to actually figure out why that is; all I'm doing is accessing memory and mathematical computation.
Threads rock (as long as they don't lock.)
Kudos to Java thread pools, too.
A: I wrote a multi-threading library for HDOS. HDOS was an 8-bit OS that ran on HeathKit systems. I intercepted the system clock tick (every 55 milliseconds, if I remember correctly) and had a scheduler that would decide which thread to run next in a round-robin fashion. Of course, since the OS itself was not multi-threading, only one thread was allowed to be in the OS at any given time.
I never did anything useful with it. It was just a fun project I decided to tackle to see if I could do it.
A: I made an implementation for quick sort where the left and right sides of the partition are sorted concurrently. It was almost twice as fast as the same code using only 1 thread.
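A rough sketch of the idea (not the poster's actual code): sort the two partitions on separate threads, capping the recursion depth so small sub-arrays stay sequential. Calling Sort(data, 0, data.Length - 1, 2) allows up to four concurrent sorts.
using System.Threading;

static class ParallelQuickSort
{
    public static void Sort(int[] a, int lo, int hi, int depth)
    {
        if (lo >= hi) return;
        int p = Partition(a, lo, hi);
        if (depth > 0)
        {
            Thread left = new Thread(() => Sort(a, lo, p - 1, depth - 1));
            left.Start();
            Sort(a, p + 1, hi, depth - 1); // right side on the current thread
            left.Join();
        }
        else
        {
            Sort(a, lo, p - 1, 0);
            Sort(a, p + 1, hi, 0);
        }
    }

    static int Partition(int[] a, int lo, int hi) // Lomuto scheme
    {
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++)
            if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
        int t2 = a[i]; a[i] = a[hi]; a[hi] = t2;
        return i;
    }
}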
A: I wrote a lock-free cache that's 10x faster than the most common lock-based caching library. It was really interesting to figure out how to do complex CASing logic and come up with an alternative to LRU that didn't suffer locking overhead.
I also wrote a distributed master-worker framework, which I later prototyped abstractions to support fork-join and map-reduce. I need to redo those at some point, as they weren't production quality, but it was quite a fun diversion.
I'd really love to write a SkipList data structure, just to learn how. There's already a core implementation, but its such a cool and simple idea I'd love to dig into it. It would be purely throw away code, but educational.
A: This isn't an application of multi-threading, but a small snippet that shows off C# 3.0 features including lambdas and object initializers. Not what you had in mind, I'm sure, but the "coolest thing [I've] done with threads" nonetheless.
new Thread(() =>
{
// do stuff in a new thread's context
})
{
Name = "Thread " + GetHashCode().ToString(),
Priority = this.threadPriority
}
.Start();
A: I wrote an implementation of actors/message passing for multithreading in C#; ever since I wrote it, multithreading has been so easy! Writing it was itself pretty fun: lock-free data structures for the message queues. There isn't a thread for each actor, but instead a pool of threads which loops through the actors and runs each of them.
Since multithreading is so easy now that I have this system to play with I've written several cool things:
A simple socket server/client demo app
A multithreaded webcrawler (I'm going to go back to that one)
A procedural content generator for games
and some other things, but they're all small and boring
A: Multi-threaded calls to a third-party application which does not support simultaneous calls from different apps or threads. Although multiple instances could be executed (and this is how I eventually implemented it), certain application operations could not be executed simultaneously on different app instances.
A: See this parallel N-puzzle solver. It solves the N-puzzle problem by iterative-deepening search, forking grains to implement the search. This is done in a parallel programming language I designed which makes it easy to fork grains; almost all the magic in the program is hiding in the "fork parallel" operator (|| ... ). You have to look for it in the code.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Can .Net 3.5 apps run on machines that have .Net 2.0 runtime installed? I write my app in VS 2008 and so use all the fanciful stuffs such as LINQ, object initializers etc. Now can my app run on machines that have only .Net 2.0 runtime, but no .Net 3.5 runtime? .Net 3.5 runtime is a huge download, as all of you might know.
A: What you can use are, for example, the var keyword, automatic properties (auto-getters and auto-setters), and object initializers, i.e. syntactic sugar that is compiled down to 2.0-compatible code.
What you can't use is functionality that resides in the .NET Framework 3.0 and 3.5 libraries, for example LINQ.
You can try for yourself what you can and can't use by setting target platform in Visual Studio to .Net Framework 2.0. The compiler will complain when you use things from Framework 3.0 and 3.5.
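For instance, the following compiles fine with the target framework set to 2.0, because it is all compiler sugar (a small illustrative sketch; the Person class is made up):
using System.Collections.Generic;

class Person { public string Name { get; set; } } // automatic property: compiler sugar

class Demo
{
    static void Main()
    {
        var names = new List<string>();      // 'var' is resolved at compile time
        var p = new Person { Name = "Ann" }; // object initializer: also sugar
        names.Add(p.Name);
        // By contrast, names.Where(n => n.Length > 0) needs the System.Core
        // assembly from 3.5 and will not compile when targeting 2.0.
    }
}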
You can use Extension Methods with a little trick: add this class to your project
namespace System.Runtime.CompilerServices
{
public class ExtensionAttribute : Attribute { }
}
Extension Methods are actually also compiled to 2.0 code, but the compiler needs this class to be defined. Read about it here
A: You can run it on .NET 2.0 if you don't use .NET 3.5 libraries. See Visual Studio multi-targeting support You can use LinqBridge to use Linq queries on .NET 2.0
For details, see MSVS multi-targeting screencast by Daniel Moth on Channel9.
A: I would recommend looking at Smallest DotNet to find a smaller version of the framework when deploying application for Framework 3.0 and 3.5.
A: In most cases, probably not. While .NET 3.5 still executes on the .NET 2.0 CLR, there is a lot of new library functionality that you are very likely to use, such as the assemblies that define the extension methods, which will not be available to clients that don't have .NET 3.5 installed.
You can use VS2008 to target .Net 2.0. I think it is a property on the Solution element.
http://en.wikipedia.org/wiki/Microsoft_.NET#Microsoft_.NET has a lot of information.
A: You need to install .NET 3.5 if you want to use its features.
A: If cost is not a factor, you might consider runtime virtualization software such as VMWare ThinApp or Xenocode Postbuild, both of which allow .NET applications to run without having to install the .NET runtime.
A: This article from Jean-Baptiste Evain explains how you can use C# 3.0 and LINQ and targeting machines on which there is only .NET 2.0 runtime installed.
The idea is to use the Mono implementation of System.Core, which is licensed under the MIT/X11 license.
Note : This answer was first provided to a duplicated question.
A: No, they can not.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How does NUnit work? Can someone explain to me how it works, starting from when you select to run a test?
A: When you select to run a test,
*
*it will create an instance of the parent class of that test method.
*It then proceeds to run the method marked with the TestFixtureSetUp attribute, if one exists (once for the test class).
*Next, the method marked with the SetUp attribute is called, if one exists (once before every test in that class).
*Next, your selected method (with the Test attribute) is executed. All assertions are checked. If all assertions are valid, the test is marked as Pass (green in the GUI), else Fail (red). If any exceptions pop up that were not specified with the ExpectedException attribute, the test fails.
*Then the method marked with the TearDown attribute is called, if one exists (cleanup code, called once after every test in the class).
*Finally, the method marked with the TestFixtureTearDown attribute is executed (once after all tests in the test class).
That's it in a nutshell. The power of xUnit is its simplicity. Is that what you were looking for ?
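Put together as code, the lifecycle looks roughly like this (a minimal sketch using NUnit 2.x attribute names; the fixture and method names are made up):
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    [TestFixtureSetUp] // once, before any test in this class
    public void FixtureSetUp() { /* open expensive shared resources */ }

    [SetUp] // before every test
    public void SetUp() { /* create a fresh object under test */ }

    [Test]
    public void AddingOneAndOneGivesTwo()
    {
        Assert.AreEqual(2, 1 + 1); // every assertion must pass
    }

    [TearDown] // after every test
    public void TearDown() { /* per-test cleanup */ }

    [TestFixtureTearDown] // once, after all tests in this class
    public void FixtureTearDown() { /* release shared resources */ }
}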
A: I use it at work, but I'm not an expert. Here's a link to the NUnit documentation: http://www.nunit.org/index.php?p=getStarted&r=2.4.8
A: 1) Have a class you want to test in a .NET project (MyClass is the class name, MyProject is the project name, for example)
2) Add another project to your solution called MyProject.Tests
3) Add a reference from MyProject.Tests to MyProject so that you can access the class you want to test from the testing code
4) In this new project add a new class file called MyClass (the same as the class in MyProject)
5) In that class, add your testing code like this page explains -- http://www.nunit.org/index.php?p=quickStart&r=2.4.8
6) When you've written your tests, build the solution. In the MyProject.Tests project folder a new folder will appear -- 'MyProject.Tests\bin\Debug'. That's assuming you built in Debug mode. If you built in Release mode it'll be MyProject.Tests\bin\Release. Either will work. In this folder, you'll find a dll file called MyProject.Tests.dll
7) Open the nUnit testing utility, File > Open, then navigate to the folder in #6 to find that MyProject.Tests.dll. Open it.
8) The tests from the dll should be listed in the nUnit utility window, and you can now select which tests to run, and run them.
Note: The naming convention isn't necessary, it's just the way I do it. If you have a project called 'MyProject' and you want your testing project to be called 'ArbitraryName' instead of 'MyProject.Test', then it'll still work... the naming convention just helps keep track of what exactly is being tested.
A: What do you mean how does it work?
You define your test classes with [TestFixture] and your tests with [Test]
It's nothing more than a testing framework, you still have to write the tests and all of that jazz :)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How do I handle the window close event in Tkinter? How do I handle the window close event (user clicking the 'X' button) in a Python Tkinter program?
A: I'd like to thank the answer by Apostolos for bringing this to my attention. Here's a much more detailed example for Python 3 in the year 2019, with a clearer description and example code.
Beware of the fact that destroy() (or not having a custom window closing handler at all) will destroy the window and all of its running callbacks instantly when the user closes it.
This can be bad for you, depending on your current Tkinter activity, and especially when using tkinter.after (periodic callbacks). You might be using a callback which processes some data and writes to disk... in that case, you obviously want the data writing to finish without being abruptly killed.
The best solution for that is to use a flag. So when the user requests window closing, you mark that as a flag, and then react to it.
(Note: I normally design GUIs as nicely encapsulated classes and separate worker threads, and I definitely don't use "global" (I use class instance variables instead), but this is meant to be a simple, stripped-down example to demonstrate how Tk abruptly kills your periodic callbacks when the user closes the window...)
from tkinter import *
import time
# Try setting this to False and look at the printed numbers (1 to 10)
# during the work-loop, if you close the window while the periodic_call
# worker is busy working (printing). It will abruptly end the numbers,
# and kill the periodic callback! That's why you should design most
# applications with a safe closing callback as described in this demo.
safe_closing = True
# ---------
busy_processing = False
close_requested = False
def close_window():
global close_requested
close_requested = True
print("User requested close at:", time.time(), "Was busy processing:", busy_processing)
root = Tk()
if safe_closing:
root.protocol("WM_DELETE_WINDOW", close_window)
lbl = Label(root)
lbl.pack()
def periodic_call():
global busy_processing
if not close_requested:
busy_processing = True
for i in range(10):
print((i+1), "of 10")
time.sleep(0.2)
lbl["text"] = str(time.time()) # Will error if force-closed.
root.update() # Force redrawing since we change label multiple times in a row.
busy_processing = False
root.after(500, periodic_call)
else:
print("Destroying GUI at:", time.time())
try: # "destroy()" can throw, so you should wrap it like this.
root.destroy()
except:
# NOTE: In most code, you'll wanna force a close here via
# "exit" if the window failed to destroy. Just ensure that
# you have no code after your `mainloop()` call (at the
# bottom of this file), since the exit call will cause the
# process to terminate immediately without running any more
# code. Of course, you should NEVER have code after your
# `mainloop()` call in well-designed code anyway...
# exit(0)
pass
root.after_idle(periodic_call)
root.mainloop()
This code will show you that the WM_DELETE_WINDOW handler runs even while our custom periodic_call() is busy in the middle of work/loops!
We use some pretty exaggerated .after() values: 500 milliseconds. This is just meant to make it very easy for you to see the difference between closing while the periodic call is busy, or not... If you close while the numbers are updating, you will see that the WM_DELETE_WINDOW happened while your periodic call "was busy processing: True". If you close while the numbers are paused (meaning that the periodic callback isn't processing at that moment), you see that the close happened while it's "not busy".
In real-world usage, your .after() would use something like 30-100 milliseconds, to have a responsive GUI. This is just a demonstration to help you understand how to protect yourself against Tk's default "instantly interrupt all work when closing" behavior.
In summary: Make the WM_DELETE_WINDOW handler set a flag, and then check that flag periodically and manually .destroy() the window when it's safe (when your app is done with all work).
PS: You can also use WM_DELETE_WINDOW to ask the user if they REALLY want to close the window; and if they answer no, you don't set the flag. It's very simple. You just show a messagebox in your WM_DELETE_WINDOW and set the flag based on the user's answer.
A: Matt has shown one classic modification of the close button.
The other is to have the close button minimize the window.
You can reproduce this behavior by passing the iconify method as the protocol method's second argument.
Here's a working example, tested on Windows 7 & 10:
# Python 3
import tkinter
import tkinter.scrolledtext as scrolledtext
root = tkinter.Tk()
# make the top right close button minimize (iconify) the main window
root.protocol("WM_DELETE_WINDOW", root.iconify)
# make Esc exit the program
root.bind('<Escape>', lambda e: root.destroy())
# create a menu bar with an Exit command
menubar = tkinter.Menu(root)
filemenu = tkinter.Menu(menubar, tearoff=0)
filemenu.add_command(label="Exit", command=root.destroy)
menubar.add_cascade(label="File", menu=filemenu)
root.config(menu=menubar)
# create a Text widget with a Scrollbar attached
txt = scrolledtext.ScrolledText(root, undo=True)
txt['font'] = ('consolas', '12')
txt.pack(expand=True, fill='both')
root.mainloop()
In this example we give the user two new exit options:
the classic File → Exit, and also the Esc button.
A: Tkinter supports a mechanism called protocol handlers. Here, the term protocol refers to the interaction between the application and the window manager. The most commonly used protocol is called WM_DELETE_WINDOW, and is used to define what happens when the user explicitly closes a window using the window manager.
You can use the protocol method to install a handler for this protocol (the widget must be a Tk or Toplevel widget):
Here you have a concrete example:
import tkinter as tk
from tkinter import messagebox
root = tk.Tk()
def on_closing():
if messagebox.askokcancel("Quit", "Do you want to quit?"):
root.destroy()
root.protocol("WM_DELETE_WINDOW", on_closing)
root.mainloop()
A: You should use destroy() to close a tkinter window.
from Tkinter import *
root = Tk()
Button(root, text="Quit", command=root.destroy).pack()
root.mainloop()
Explanation:
root.quit()
The above line just bypasses root.mainloop(), i.e. root.mainloop() will still be running in the background if the quit() command is executed.
root.destroy()
The destroy() command terminates root.mainloop(), i.e. root.mainloop() stops.
So if you just want to quit the program, you should use root.destroy(), as it stops the mainloop().
But if you want to run some infinite loop, don't want to destroy your Tk window, and want to execute some code after the root.mainloop() line, then you should use root.quit().
Ex:
from Tkinter import *
def quit():
global root
root.quit()
root = Tk()
while True:
Button(root, text="Quit", command=quit).pack()
root.mainloop()
# do something here: quit() has exited mainloop, but the window still exists
A: Depending on the Tkinter activity, and especially when using Tkinter.after, stopping this activity with destroy() -- even by using protocol(), a button, etc. -- will disturb this activity ("while executing" error) rather than just terminate it. The best solution in almost every case is to use a flag. Here is a simple, silly example of how to use it (although I am certain that most of you don't need it! :)
from Tkinter import *
def close_window():
global running
running = False # turn off while loop
print( "Window closed")
root = Tk()
root.protocol("WM_DELETE_WINDOW", close_window)
cv = Canvas(root, width=200, height=200)
cv.pack()
running = True;
# This is an endless loop stopped only by setting 'running' to 'False'
while running:
for i in range(200):
if not running:
break
cv.create_oval(i, i, i+1, i+1)
root.update()
This terminates graphics activity nicely. You only need to check running at the right place(s).
A: If you want to change what the x button does or make it so that you cannot close it at all try this.
yourwindow.protocol("WM_DELETE_WINDOW", whatever)
then define what "whatever" means
def whatever():
# Replace this with your own event for example:
print("oi don't press that button")
You can also make it so that when you close that window you can call it back like this
yourwindow.withdraw()
This hides the window but does not close it
yourwindow.deiconify()
This makes the window visible again
A: The easiest code is:
from tkinter import *
window = Tk()
For hiding the window: window.withdraw()
For showing the window again: window.deiconify()
For exiting the program: exit()
For exiting the program (if you've made a .exe file):
from tkinter import *
import sys
window = Tk()
sys.exit()
And of course you have to place a button and wrap the code above in a function, so you can pass the function's name as the command option of the button.
A: Try The Simple Version:
import tkinter
window = tkinter.Tk()
closebutton = tkinter.Button(window, text='X', command=window.destroy)
closebutton.pack()
window.mainloop()
Or If You Want To Add More Commands:
import tkinter
window = tkinter.Tk()

def close():
    window.destroy()
    # more functions here

closebutton = tkinter.Button(window, text='X', command=close)
closebutton.pack()
window.mainloop()
A: You can use:
from tkinter import *

root = Tk()

def func():
    print('not closed')

root.protocol('WM_DELETE_WINDOW', func)
root.mainloop()
A: def on_closing():
if messagebox.askokcancel("Quit", "would you like to quit"):
window.destroy()
window.protocol("WM_DELETE_WINDOW", on_closing)
You can handle a window close event like this; if you want to do something else, just change what happens in the on_closing() function.
A: A note of caution: break is only valid inside a loop, so def exit(): break is a SyntaxError, and Tk must be called with parentheses. A working version of the sys.exit() idea looks like this:
import tkinter as tk
import sys

win = tk.Tk()

def close():
    sys.exit()

btn = tk.Button(win, text="press to exit", command=close)
btn.pack()
win.mainloop()
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "188"
}
|
Q: Is there a tool to monitor the SQL statements being executed by an .EXE? I'd like to be able to hook into a 3rd party application to see what SQL Statements are being executed. Specifically, it is a VB6 application running on SQL Server 2005.
For example, when the application fills out a grid, I'd like to be able to see exactly what query produced that data.
A: If you have the appropriate rights (sysadmin or ALTER TRACE permission) on the DB you could watch using SQL Profiler.
A: If the application does not write a log or something, the only way to watch them is on the database side. SQL Profiler is the proper tool for the task on MSSQL 2005.
A: You can view it server side by connecting to the SQL server with the SQL Server Profiler included in the tools. Here's a usage run down of it from Microsoft
A: Reviewing it on the server as other answers indicate is most likely the best way to go. However, if that's not available, you can also turn on ODBC logging which may be helpful.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How do I determine when there is no more data to read in a NetworkStream? I have a web app which connects to a server using a TCP connection and reads a binary document which it then writes to its response object. In other words it's transferring a file from a backend server using a custom protocol and returning that file to its client through HTTP.
The server sends a status code and a mime type, which I read successfully and then writes the contents of the file and closes the socket. This seems to work fine.
The client (a C# web app), reads the data:
private NetworkStream stream_;

public void WriteDocument(HttpResponse response)
{
    while (stream_.DataAvailable)
    {
        const int bufsize = 4 * 1024;
        byte[] buffer = new byte[bufsize];
        int nbytes = stream_.Read(buffer, 0, bufsize);
        if (nbytes > 0)
        {
            if (nbytes < bufsize)
                Array.Resize<byte>(ref buffer, nbytes);
            response.BinaryWrite(buffer);
        }
    }
    response.End();
}
This seems to always exit the read loop before all the data has arrived. What am I doing wrong?
A: I would use the OutputStream directly with a general-purpose function. With the Stream, you can control Flush.
public void WriteDocument(HttpResponse response) {
    StreamCopy(response.OutputStream, stream_);
    response.End();
}

public static void StreamCopy(Stream dest, Stream src) {
    byte[] buffer = new byte[4 * 1024];
    int n = 1;
    while (n > 0) {
        n = src.Read(buffer, 0, buffer.Length);
        dest.Write(buffer, 0, n);
    }
    dest.Flush();
}
A: Here's what I do. Usually you want the content length so you know when to end the data-storing loop. If your protocol does not send the amount of data to expect as a header, then it should send some marker to signal the end of transmission.
The DataAvailable property just signals if there's data to read from the socket NOW, it doesn't (and cannot) know if there's more data to be sent or not. To check that the socket is still open you can test for stream_.Socket.Connected && stream_.Socket.Readable
public static byte[] doFetchBinaryUrl(string url)
{
    BinaryReader rdr;
    HttpWebResponse res;

    try
    {
        res = fetch(url);
        rdr = new BinaryReader(res.GetResponseStream());
    }
    catch (NullReferenceException nre)
    {
        return new byte[] { };
    }

    int len = int.Parse(res.GetResponseHeader("Content-Length"));
    byte[] rv = new byte[len];

    for (int i = 0; i < len; i++)
    {
        rv[i] = rdr.ReadByte();
    }

    res.Close();
    return rv;
}
A: Not sure how things work in .Net, but in most environments I've worked in Read() returns 0 bytes when the connection is closed. So you'd do something like:
char buffer[4096];
int num_read;

while ( (num_read = src.Read(buffer, sizeof(buffer))) > 0 )
{
    dst.Write(buffer, num_read);
}
A: The root of your problem is this line:
while (stream_.DataAvailable)
DataAvailable simply means there's data in the stream buffer ready to be read and processed. It makes no guarantee about the 'end' of the stream having been reached. In particular, DataAvailable can be false if there's any pause in transmission, or if your sender is slower than your reader.
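A loop keyed off Read() returning 0 (which signals that the sender has closed the connection) avoids that problem; a minimal sketch, assuming the server closes the socket once the document has been sent:
byte[] buffer = new byte[4 * 1024];
int nbytes;
// Read() blocks until data arrives and returns 0 only at end of stream,
// i.e. when the remote side has closed its half of the connection.
while ((nbytes = stream_.Read(buffer, 0, buffer.Length)) > 0)
{
    response.OutputStream.Write(buffer, 0, nbytes);
}
response.End();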
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Local Database with Silverlight What would be a good local database for a Silverlight application? The database's main purpose is for local data caching and synchronization services. I do not believe that SQL anywhere or SQLite will work since they use unmanaged code which will not run under the silverlight sandbox
A: Why not use a new feature in Silverlight 2 called "Isolated Storage"? It fully supports local storage (like Google Gears), but of course it is not a database. You can use the XML file format to keep the data.
*
*pros: the user just needs to install the Silverlight runtime.
*cons: it's not exactly a database
Find two references:
Moth said in his blog about it http://www.danielmoth.com/Blog/2008/04/isolatedstorage-in-siverlight-2-beta-1.html
Dino made very good summary at http://www.ddj.com/windows/208300036?pgno=2
A: @Aaron Fischer,
I'm very interested in this question too. I'm looking DB for XBAP (WPF in browser) apps. Here is my question "What embedded database with Isolated Storage support can you recommend?"
SQLite & MSSQL CE (aka SQL anywhere) wouldn't work.
VistaDB is implemented in .NET and can work under constraints (it has support for Isolated Storage) but I'm looking for alternatives.
Another option is Sybase iAnywhere - but I'm not sure how to deploy it on end-user machine.
I'm going to try DB4objects for Silverlight. If it would work, I'll update the post.
A: The answer is siaqodb. Siaqodb is real Silverlight client side object database, you can store an object with just one line of code and retrieve back objects via LINQ.For more info take a look at http://siaqodb.com
A: Based on this example it looks possible to use Google Gears and thus Sqlite. The major down side is the amount of integration work and the need to install yet one more platform on the client's computer.
A: If your caching needs are basic enough and you don't have so much data that you're doing it to minimize RAM usage, perhaps you don't even need a full-blown database. You could create an object database of sorts using a structure such as a dictionary and put into it the objects that would otherwise be your table rows. You could then serialize this data to a file in your local storage and deserialize it next time the app runs. If your data structures are done well, you could even use Linq to query your object database.
If your primary goal is to minimize the number of times you have to pull the same data from your server, this could be something to consider.
On the other hand, this isn't the way to go if you have too much data or if you make frequent writes to the database (as it would then have to serialize the whole structure to disk every time).
If you do have too much data but still want to try this, you could see if there is a logical way to partition your data into multiple files that aren't likely to be needed at the same time. Then you could push your unused data out to disk and load it back in next time the program needed it. Of course, if you take this approach too far, you'll end up essentially writing your own database system anyways.
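A rough sketch of that serialize-to-isolated-storage idea, assuming a Dictionary<int, string> stands in for a table (the class name and file name are made up for illustration):
// Hypothetical helper: persists a dictionary to isolated storage and loads it back.
using System.Collections.Generic;
using System.IO;
using System.IO.IsolatedStorage;
using System.Runtime.Serialization;

public static class LocalCache
{
    public static void Save(Dictionary<int, string> cache)
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var stream = new IsolatedStorageFileStream("cache.xml", FileMode.Create, store))
        {
            new DataContractSerializer(typeof(Dictionary<int, string>))
                .WriteObject(stream, cache);
        }
    }

    public static Dictionary<int, string> Load()
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var stream = new IsolatedStorageFileStream("cache.xml", FileMode.Open, store))
        {
            return (Dictionary<int, string>)new DataContractSerializer(
                typeof(Dictionary<int, string>)).ReadObject(stream);
        }
    }
}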
A: I would love to see two things. 1.) Some kind of persisted local database support or 2.) Some kind of actual database server support without the hassle of web services.
Personally, I'd take Access and OleDb. :)
A: One last thing... the database type of functionality is something Flash/Flex doesn't offer... this would be a great way for Microsoft to differentiate Silverlight and really give it a leg up.
A: There is now a sqlite port to c# called csharp-sqlite
This has promise once they find an acceptable name.
A: Yes, I think a LINQ provider is the optimal solution. As the storage space is limited, you don't really need tables and indexes, it would be convenient to have a simple way to store and query objects on the client via LINQ without having to deal with low-level file streams.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: How can I make a Enso style application in C# The background should be transparent, but the text should not.
A: By making an "Enso style" application you mean the Enso launcher?
Here is a screenshot of it:
(screenshot: http://enscreenshots.softonic.com/s2en/68000/68880/3_ensolauncher03.jpg)
I would suggest at looking at the open-source C# Cropper application. He does a similar looking GUI with transparent background. You can open up his project and see how he implements it.
(screenshot: http://img352.imageshack.us/img352/726/cropperuijt3.png)
A: You can set the background color, and transparency key properties to the same color and that will make the background transparent. The rest of the control items will stay non-transparent, as long as they are different colors.
A: Enso is open source too.
You can peek directly at their source code.
http://code.google.com/p/enso/
Let us know how they do it when you find out!
UPDATE
It looks like they use something called Cairo. If you can link it from C# you're done.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: What is a "callable"? Now that it's clear what a metaclass is, there is an associated concept that I use all the time without knowing what it really means.
I suppose everybody has at some point made a mistake with parentheses, resulting in an "object is not callable" exception. What's more, using __init__ and __new__ leads one to wonder what this bloody __call__ can be used for.
Could you give me some explanations, including examples with the magic method?
A: From Python's sources object.c:
/* Test whether an object can be called */
int
PyCallable_Check(PyObject *x)
{
    if (x == NULL)
        return 0;
    if (PyInstance_Check(x)) {
        PyObject *call = PyObject_GetAttrString(x, "__call__");
        if (call == NULL) {
            PyErr_Clear();
            return 0;
        }
        /* Could test recursively but don't, for fear of endless
           recursion if some joker sets self.__call__ = self */
        Py_DECREF(call);
        return 1;
    }
    else {
        return x->ob_type->tp_call != NULL;
    }
}
It says:
*
*If an object is an instance of some class then it is callable iff it has __call__ attribute.
*Else the object x is callable iff x->ob_type->tp_call != NULL
Description of the tp_call field:
ternaryfunc tp_call: An optional pointer to a function that implements calling the object. This should be NULL if the object is not callable. The signature is the same as for PyObject_Call(). This field is inherited by subtypes.
You can always use built-in callable function to determine whether given object is callable or not; or better yet just call it and catch TypeError later. callable is removed in Python 3.0 and 3.1, use callable = lambda o: hasattr(o, '__call__') or isinstance(o, collections.Callable).
Example, a simplistic cache implementation:
class Cached:
    def __init__(self, function):
        self.function = function
        self.cache = {}

    def __call__(self, *args):
        try:
            return self.cache[args]
        except KeyError:
            ret = self.cache[args] = self.function(*args)
            return ret
Usage:
@Cached
def ack(x, y):
    return ack(x-1, ack(x, y-1)) if x*y else (x + y + 1)
Example from standard library, file site.py, definition of built-in exit() and quit() functions:
class Quitter(object):
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return 'Use %s() or %s to exit' % (self.name, eof)

    def __call__(self, code=None):
        # Shells like IDLE catch the SystemExit, but listen when their
        # stdin wrapper is closed.
        try:
            sys.stdin.close()
        except:
            pass
        raise SystemExit(code)

__builtin__.quit = Quitter('quit')
__builtin__.exit = Quitter('exit')
A: Quite simply, a "callable" is something that can be called like a method. The built-in function callable() will tell you whether something appears to be callable, as will checking for a __call__ property. Functions are callable, as are classes; class instances can be callable. See more about this here and here.
A: In Python a callable is an object whose type has a __call__ method:
>>> class Foo:
...     pass
...
>>> class Bar(object):
...     pass
...
>>> type(Foo).__call__(Foo)
<__main__.Foo instance at 0x711440>
>>> type(Bar).__call__(Bar)
<__main__.Bar object at 0x712110>
>>> def foo(bar):
...     return bar
...
>>> type(foo).__call__(foo, 42)
42
As simple as that :)
This of course can be overloaded:
>>> class Foo(object):
...     def __call__(self):
...         return 42
...
>>> f = Foo()
>>> f()
42
A: A callable is an object that lets you use round parentheses ( ) and optionally pass some parameters, just like a function.
Every time you define a function, Python creates a callable object.
For example, you could define the function func in either of these ways (they're equivalent):
class a(object):
    def __call__(self, *args):
        print 'Hello'

func = a()

# or ...

def func(*args):
    print 'Hello'
You could use this method instead of methods like doit or run; I think it's just clearer to see obj() than obj.doit().
A: It's something you can put "(args)" after and expect it to work. A callable is usually a method or a class. Methods get called, classes get instantiated.
A: To check whether a function or method of a class is callable (that is, whether we can call it), use callable():
class A:
    def __init__(self, val):
        self.val = val
    def bar(self):
        print "bar"

obj = A(5)
callable(obj.bar)    # True
callable(obj.val)    # False, a plain data attribute is not callable

def foo(): return "s"
callable(foo)        # True
callable(foo())      # False, the returned string is not callable
A: Let me explain backwards:
Consider this...
foo()
... as syntactic sugar for:
foo.__call__()
Where foo can be any object that responds to __call__. When I say any object, I mean it: built-in types, your own classes and their instances.
In the case of built-in types, when you write:
int('10')
unicode(10)
You're essentially doing:
int.__call__('10')
unicode.__call__(10)
That's also why you don't have foo = new int in Python: you just make the class object return an instance of it on __call__. The way Python solves this is very elegant in my opinion.
A: A callable is anything that can be called.
The built-in callable (PyCallable_Check in objects.c) checks if the argument is either:
*
*an instance of a class with a __call__ method or
*is of a type that has a non null tp_call (c struct) member which indicates callability otherwise (such as in functions, methods etc.)
The method named __call__ is (according to the documentation)
Called when the instance is ''called'' as a function
Example
class Foo:
    def __call__(self):
        print 'called'

foo_instance = Foo()
foo_instance()  # this is calling the __call__ method
A: callables implement the __call__ special method so any object with such a method is callable.
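A minimal illustration:
class Greeter:
    def __call__(self, name):
        return 'hello, %s' % name

greet = Greeter()
print(greet('world'))   # the instance is callable via __call__
print(callable(greet))  # True
print(callable(42))     # False, int has no __call__ method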
A: A Callable is an object that has the __call__ method. This means you can fake callable functions or do neat things like Partial Function Application where you take a function and add something that enhances it or fills in some of the parameters, returning something that can be called in turn (known as Currying in functional programming circles).
Certain typographic errors will have the interpreter attempting to call something you did not intend, such as (for example) a string. This can produce errors where the interpreter attempts to execute a non-callable object. You can see this happening in a Python interpreter session like the transcript below.
[nigel@k9 ~]$ python
Python 2.5 (r25:51908, Nov 6 2007, 15:55:44)
[GCC 4.1.2 20070925 (Red Hat 4.1.2-27)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 'aaa'() # <== Here we attempt to call a string.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'str' object is not callable
>>>
A: __call__ makes any object be callable as a function.
This example will output 8:
class Adder(object):
    def __init__(self, val):
        self.val = val

    def __call__(self, val):
        return self.val + val

func = Adder(5)
print func(3)
A: A callable is an object whose type, such as "builtin_function_or_method", provides a __call__ method:
>>> type(callable)
<class 'builtin_function_or_method'>
>>>
Example:
print is a callable object, a built-in function. When you invoke print(10), Python invokes the function's __call__ method, passing the parameters, if any:
>>> type(print)
<class 'builtin_function_or_method'>
>>> print.__call__(10)
10
>>> print(10)
10
>>>
A: Classes, functions, methods, and objects that have __call__() are callable.
You can check whether something is callable with callable(), which returns True if it is callable and False if not, as shown below:
class Class1:
    def __call__(self):
        print("__call__")

class Class2:
    pass

def func():
    pass

print(callable(Class1))    # Class1
print(callable(Class2))    # Class2
print(callable(Class1()))  # Class1 object
print(callable(Class2()))  # Class2 object
print(callable(func))      # func
Then, only the Class2 object, which doesn't have __call__(), is not callable and returns False, as shown below:
True # Class1
True # Class2
True # Class1 object
False # Class2 object
True # func
In addition, none of the objects below are callable, so callable() returns False for all of them:
print(callable("Hello")) # "str" type
print(callable(100)) # "int" type
print(callable(100.23)) # "float" type
print(callable(100 + 2j)) # "complex" type
print(callable(True)) # "bool" type
print(callable(None)) # "NoneType"
print(callable([])) # "list" type
print(callable(())) # "tuple" type
print(callable({})) # "dict" type
print(callable({""})) # "set" type
Output:
False # "str" type
False # "int" type
False # "float" type
False # "complex" type
False # "bool" type
False # "NoneType"
False # "list" type
False # "tuple" type
False # "dict" type
False # "set" type
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "359"
}
|
Q: How can I stay up-to-date on computer (especially software) security? I recently bought and read a box set of books on security (Building Secure Software: How to Avoid Security Problems the Right Way, Exploiting Software: How to Break Code, and Software Security: Building Security In). Although I think that the contents of these books will be useful for years to come, the authors do acknowledge that the world of computer and software security changes very quickly. What are some ways that I could stay on top of the latest happenings in these areas?
A: I follow Schneier on Security in my RSS reader.
A: Listen to the Security Now podcast on TWiT. Then, depending on the OSes you are using, you should subscribe to their security mailing lists or RSS feeds.
A: The Register's Security section. RSS available. (I am a big fan of El Reg.)
Also, and it might be a little lightweight for a coder, but the Security Now! podcast with Steve Gibson and Leo Laporte is decent.
A: If you can afford it (or convince your employer to pay), go to at least one conference a year. As a last resort, there's always Defcon, which takes place on a weekend and is only $100. It's not as professional as, say, Black Hat, but it's better than nothing.
A: RISKS is not security-specific, but some interesting security-related topics are discussed there.
BUGTRAQ is a full-disclosure security mailing list that is worth skimming. (Every time a vulnerability is disclosed in a piece of software that ships with most Linux distributions, there is a barrage of disclosures from all of the various distributions. This negatively affects the signal-to-noise ratio unless you're using one of those distributions.)
Some security-related blogs that may be interesting (in addition to Schneier on Security which has already been linked): …And You Will Know me by the Trail of Bits, DoxPara Research (Dan Kaminsky), Matasano Chargen, Microsoft's Security Development Lifecycle, ZDNet's "Zero Day".
A: OWASP (http://www.owasp.org) provides a very nice RSS feed, mostly aggregated from many different sources.
A: Oh, don't forget the incredibly interesting hackers' conferences by the CCC. The conferences' names have a fixed pattern. The last one was 24c3, the next one will be 25c3. They are held in Berlin, Germany, and are one of the biggest convergence points in hacker and security culture on this planet.
You will find videos and mp3 transcripts of the last conferences at Chaos Radio.
Just in case you can't make the trip, the talks are usually broadcasted via live streams. Recordings get published weeks after the event.
A: For web security I subscribe the the following Feeds: Some are updated regularly, some aren't.
DanchoDanchevOnSecurity
Internet Storm Center
The Register (enterprise security)
US-CERT Cyber Security Bulletins
Zero Day
ha.ckers.org
and one of my newest adds
Stack Overflow: tagged Security
or you can just add them all to your iGoogle home page:
My iGoogle Security Page
I'm sure there are more interesting feeds out there if you're more application centric.
Regardless, feeds or visiting sites is the only way to really stay completely on top of things. Conferences are great, and fun to go to, but you'll get the same information an hour later via the web; usually with the added bonus of having several points of view to help you understand the topics.
A: Security Now! is not bad (I listen each week).
It often contains good explanations of underlying technologies (e.g. how does a router know where to send an IP packet?), although I do think it does go on a bit.
If you want a more hardcore podcast, then try Paul "dot com"'s Security Weekly.
It's really for penetration testers, but I can't help thinking that if a penetration tester knows about it then so should I.
A: Then there is the ACM's SIGSAC and the ACM's Transactions on Information and System Security. Being a member of the ACM is generally recommended by the authors of the Practical Programmer.
A: A blog I enjoy (apart from Schneier on Security) is Light Blue Touchpaper - a collective blog by the computer security research department at Cambridge University (led by the wonderful Ross Anderson.
A: IEEE has "Security and Privacy" as a magazine - it is pretty good.
A: I use many of the other mentions mentioned above (Schneier as mentioned), however I've found Slashdot honestly gives me the best "heads up" as to the new attack vectors coming in. It's not always timely, and mostly just a general overview, but it's good at posting vectors I never thought of.
A: Consider attending a local OWASP chapter meeting.
A: For software security and especially web application security OWASP Moderated AppSec News is a great RSS feed. Good signal / noise ratio. It should be enough to be up to date.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Where can I find an .xsd file to provide intellisense for Castle Windsor? I'm looking for an .xsd schema file to drop into my Visual Studio directory to provide intellisense for the xml configuration file for the Castle Windsor IoC container. I've looked in the downloaded code for Windsor, as well as googled several different ways. I see many people asking the same question, but no answers. Anyone know if there is such a document?
A: I have rehosted the project-distributor schema zip on google code here. The zip contains schema, example usage and a readme to install it.
A: Perhaps this is what you are looking for:
http://jimblogdog.blogspot.com/2008/05/castlewindsor-schema-enables-visual.html
Here is the link to download the castle windsor schema:
http://www.projectdistributor.net/Releases/Release.aspx?releaseId=427
Good Luck!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: VMware server 1.0.7 modules incompatible with kernel 2.6.26 -- patched, where to submit?
*
*VMware server 1.0.7 installed with vmware-package
*Debian GNU/Linux testing (lenny)
*Kernel 2.6.26-1-686
There were several compile problems when trying to build the binary kernel modules from the vmware-server-kernel-source package made by vmware-package from the VMware server tarball. Recently VMware has updated their kernel module sources so as to make them compatible with kernel 2.6.25, but they broke again with 2.6.26.
vmmon-only/linux/driver.c:146: error: unknown field 'nopage' specified in initializer
vmmon-only/linux/driver.c:147: warning: initialization from incompatible pointer type
vmmon-only/linux/driver.c:150: error: unknown field 'nopage' specified in initializer
vmmon-only/linux/driver.c:151: warning: initialization from incompatible pointer type
That's only the first error, but there are other compile problems (in vmnet-only).
Many advice on forums are to use vmware-any-any instead, but that has its own problems (see my other question).
As you can see from my own answer below, I've solved the problem by fixing the incompatiblities, and came up with a patch. Now I'd like VMware to include it in future releases, to save me and others trouble of applying it by hand after every VMware or kernel upgrade. Question: where/how do I submit such fixes to VMware?
A: I've bludgeoned the kernel module into working with the 2.6.26 kernel. Here is my patch.
A: Did you try searching the VMware support website? This has been asked in the VMware forums.
A: Perhaps http://open-vm-tools.sourceforge.net/contribute.php ?
A: I wrote a support request to VMware, and they assured me that my patch will reach the VMware server team.
A: Thanks for this great effort..
I've used it to get VMWare Server 1.08 running on OpenFiler. The vmware-any-any patch was also suggested but I couldn't start a guest VM because of the 'not enough physical memory' error.
Now my vm's are running happily again :)
A: Thanks a lot Alexey!
This sorted stuff out for myself and a colleague of mine.
Had the same issue as Bruce with the any-any patch.
One thing, I noticed that the patch was missing the @@'s at the beginning.. I've done a new pastebin that has them in it (curse their highlighting thing!)
It's here: http://pastebin.com/f2ea13d45
Thanks,
Chris
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: php: output[] w/ join vs $output .= I'm modifying some code in which the original author built a web page by using an array thusly:
$output[]=$stuff_from_database;
$output[]='more stuff';
// etc
echo join('',$output);
Can anyone think of a reason why this would be preferable (or vice versa) to:
$output =$stuff_from_database;
$output .='more stuff';
// etc
echo $output;
A: This is a little off topic, but
$output =$stuff_from_database;
$output .='more stuff';
// etc
echo $output;
Is far slower than:
echo $stuff_from_database;
echo 'more stuff';
In fact, the fastest way to build a string in PHP is:
ob_start();
echo $stuff_from_database;
echo 'more stuff';
$output = ob_get_contents();
ob_end_clean();
Due to the way that output buffers work, this is the fastest way to build a string. Obviously you would only do this if you really need to optimize string building, as it is ugly and doesn't lead to easy reading of the code. And everyone knows that "premature optimization is the root of all evil".
A: Again, a little bit off topic (not very far), but if you were aiming to put something between the array items being output, then if there was only a few lines to concatenate, join(', ', $output) would do it easily and quickly enough. This would be easier to write, and avoids having to check for the end of the list (where you would not want a trailing ',').
For programmer time, as it is on the order of 1000's of times more expensive than a cpu cycle, I'd usually just throw it into a join if it wasn't going to be run 10,0000+ times per second.
Post-coding micro-optimisation like this is very rarely worth it in terms of time taken vs cpu time saved.
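For instance, a small sketch of the two styles side by side:
<?php
$output = array('apples', 'pears', 'plums');

// join()/implode() inserts the separators for you:
echo join(', ', $output);      // apples, pears, plums

// The manual loop has to special-case the separator itself:
$s = '';
foreach ($output as $i => $item) {
    $s .= ($i > 0 ? ', ' : '') . $item;
}
echo $s;                       // apples, pears, plums
?>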
A: It was probably written by someone who comes from a language where strings are immutable and thus concatenation is expensive. PHP is not one of them, as the following tests show. So the second approach is better performance-wise. The only other reason I can think of to use the first approach is to be able to replace some part of the array with another later, but that means keeping track of the indexes, which is not specified.
~$ cat join.php
<?php
for ($i=0;$i<50000;$i++) {
$output[] = "HI $i\n";
}
echo join('',$output);
?>
~$ time for i in `seq 100`; do php join.php >> outjoin ; done
real 0m19.145s
user 0m12.045s
sys 0m3.216s
~$ cat dot.php
<?php
for ($i=0;$i<50000;$i++) {
$output.= "HI $i\n";
}
echo $output;
?>
~$ time for i in `seq 100`; do php dot.php >> outdot ; done
real 0m15.530s
user 0m8.985s
sys 0m2.260s
A: <!-- Redacted Previous Comment -->
It would appear I had an error in my code, so I was doing a no-op and forgot to check.
It would appear that, contrary to previous testing and to blog posts I have read on the topic,
the following conclusions are actually (tested) untrue, at least for all the variants of the above code I can permute:
*
*UNTRUE: String interpolation is slower than string concatenation. (!)
*UNTRUE: sprintf is fastest.
In actual tests, sprintf was the slowest and interpolation was the fastest.
PHP 5.2.6-pl7-gentoo (cli) (built: Sep 21 2008 13:43:03)
Copyright (c) 1997-2008 The PHP Group
Zend Engine v2.2.0, Copyright (c) 1998-2008 Zend Technologies
with Xdebug v2.0.3, Copyright (c) 2002-2007, by Derick Rethans
It may be down to my setup, but it's still odd. :/
A: I think the fastest way to do it is with echoes. It's not as pretty and probably not enough faster to be worth the cost in readability.
echo $stuff_from_database
, 'more stuff'
, 'yet more'
// etc
, 'last stuff';
A: The bottom will reallocate the $output string repeatedly, whereas I believe the top will just store each piece in an array, and then join them all at the end. The original example may end up being faster as a result. If this isn't performance sensitive, then I would probably append, not join.
A: If joined with implode() or join(), PHP is able to optimize better: it goes through all the strings in your list, calculates the total length, allocates the required space once, and fills it. With ".=", by comparison, it has to repeatedly free() and malloc() space for the growing string.
A: This part of the code is not performance sensitive; it is used to format and display a user's shopping cart info on a low-traffic website. Also, the array (or string) is built from scratch each time and there is no need to address or search for specific elements. It sounds like the array is somewhat more efficient, but I guess it doesn't matter either way for this use. Thanks for all of the info!
A: If you're generating a list (e.g. a shopping cart), you should have some code that generates the HTML from each entry fetched from the database.
foreach ($cart as $prod)
{
    $prod->printHTML();
}
Or something like that. That way, the code becomes a lot cleaner. Of course, this assumes that you've got nice code with objects, rather than a whole moronic mess like my company does (and is replacing).
A: It has already been said, but I think a distinct answer will help.
The original programmer wrote it that way because he thought it was faster. In fact, under PHP 4, it really was faster, especially for large strings.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Using SQLITE with VB6 I am currently using an MSAccess mdb file for a redistributable app.
A while ago I found out about SQLite as an alternative to my solution, but the binaries they provide do not offer the possibility of using them as an object in VB6 (or at least I couldn't figure out how).
Does anyone has a link, or could write a little about connecting to a SQLite DB from VB6, and its differences with using ADO?
A: I've been working on a VB6 app with SQLite for a while and I've tried a couple of methods of connecting.
So let me summarize and give, what in my opinion is, the best answer.
Methods mentioned by Ben Hoffstein, gobansaor and David W. Fenton are good, but they rely on proprietary interfaces to sqlite.
OLEDB provider by CherryCity is good because it's using a standard interface, but they have a per installation royalty system, which makes it really, really expensive. And their website does not state upfront that the product has royalties. You only find out when you actually bought the product for development and want to distribute it.
Finally there is the absolutely free as in both beer and speech, SQLite ODBC driver at http://www.ch-werner.de/sqliteodbc/ . It works pretty well and I haven't encountered any major issues just yet. The only minor issue I've encountered is that it won't allow multiple statements in one call, so you just have to separate it. In addition, the driver allows the DSN-less approach, which makes everything so much easier.
So, imo, the ODBC driver is really the best solution.
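For example, here is a minimal DSN-less connection sketch using ADO; "SQLite3 ODBC Driver" is the name the driver registers on my machine, and the database path is hypothetical:
' Requires a project reference to Microsoft ActiveX Data Objects (ADO)
' and the SQLite ODBC driver installed.
Dim cn As New ADODB.Connection
Dim rs As New ADODB.Recordset

cn.Open "DRIVER=SQLite3 ODBC Driver;Database=c:\data\test.db;"

rs.Open "SELECT * FROM users", cn
Do While Not rs.EOF
    Debug.Print rs.Fields("name").Value
    rs.MoveNext
Loop

rs.Close
cn.Close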
A: Or try DHSqlite http://www.thecommon.net/2.html from Datenhaus..
"...developed as a fast alternative
to ADO, encapsulating the super-fast SQLite-engine..."
"...With only two Dlls you get a complete Replacement to the whole ADO/JET-environment - no dependency-hazzle anymore..."
..it's free (but not opensource).
A: Here is a link with code examples:
http://www.freevbcode.com/ShowCode.asp?ID=6893
A: Just an FYI on this topic/question ...
The FreeVB code link posted uses AGS_SQLite.dll which only supports SQLite 2.x (limited functionality)
The DHSqlite link provided supports SQLite 3.x as well and is a better recommendation for anyone doing SQLite development with VB6 (Classic) ... There are code examples for this SQLite engine at http://www.thecommon.net/3.html
Hope that helps!
A: The vbRichClient-Framework (currently at Version 5), is a free available Set of 3 Dlls:
vbRichClient5.dll
vb_cairo_sqlite.dll
DirectCOM.dll
The vbRichClient5.dll is written in VB6 - and a later Open-Sourcing under LGPL is planned.
Its main-purpose is, to decouple from as many MS-COM-dependencies as possible, with the goal in mind,
to achieve a self-hosting state easier later on, when the accompanying (VB6-compatible) Compiler will lift off.
And if easier to achieve platform-portability (for the Compiler and the new Class-based Runtime) is the goal,
then we need to start working with such a decoupling-framework already in the transition- and planning-phase.
So, the lib offers a modern GUI-Framework which works Vector-based, using the cairo-library under the
hood (no GDI/GDI+ or DirectX here ... and also nothing of the MS-CommonControls.dll is touched).
The other larger part, which is often needed and used within "typical VB-Applications" is easy DB-Access
(usually done over an accompanying Desktop-DB-File in *.mdb-Format). So what the framework also offers,
is an easy to use (and nearly ADO-compatible) replacement for the MS-JET-Engine. This is, what makes
up the other larger part of the accompanying satellite-binary: vb_cairo_sqlite.dll ... the SQLite-engine.
http://www.vbrichclient.com/#/en/Downloads.htm
A: The COM Wrappers / Visual Basic DLLs section at the middle of this page lists some solution usable with VB6.
And yes, I'm still stuck developing with VB6 :(
A: It appears to be possible to directly access the SQLite functions in sqlite.dll using VB Declare Sub or Declare Function syntax.
An example which does this is shown here:
https://github.com/RobbiNespu/VB6-Sqlite3
Key extract:
Public Declare Sub sqlite3_open Lib "sqlite.dll" (ByVal FileName As String, ByRef handle As Long)
Public Declare Sub sqlite3_close Lib "sqlite.dll" (ByVal DB_Handle As Long)
Public Declare Function sqlite3_last_insert_rowid Lib "sqlite.dll" (ByVal DB_Handle As Long) As Long
Public Declare Function sqlite3_changes Lib "sqlite.dll" (ByVal DB_Handle As Long) As Long
Public Declare Function sqlite_get_table Lib "sqlite.dll" (ByVal DB_Handle As Long, ByVal SQLString As String, ByRef ErrStr As String) As Variant()
Public Declare Function sqlite_libversion Lib "sqlite.dll" () As String
Public Declare Function number_of_rows_from_last_call Lib "sqlite.dll" () As Long
...
query = "SELECT * FROM users"
row = sqlite_get_table(DBz, query, minfo)
(I do not know if that example is really ready for production code).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: Multiple Tables in a TClientDataset? Is it possible to put the results from more than one query on more than one table into a TClientDataset?
Just something like
SELECT * from t1;
SELECT * from t2;
SELECT * from t3;
I can't seem to figure out a way to get a data provider (SetProvider) to pull in results from more than one table at a time.
A: The only way would be to join the tables. But then you have to provide the criteria of the join through joined foreign keys.
select * from t1, t2, t3 where t1.key = t2.key and t2.key = t3.key;
Now suppose you came up with a key (like LineNr) that would allow for such a join. You then could use a full outer join to include all records (important if not all tables have the same number of rows). But this would somehow be a hack. Be sure not to take auto_number for the key, as it does not reuse keys and therefore tends to leave holes in the numbering, resulting in many lines that are only partially filled with values.
If you want to populate a clientdataset from multiple tables that have the same set of fields, you can use the UNION operator to do so. This will just use the same columns and combine all rows into one table.
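For example, a sketch with a discriminator column so you can still tell the source rows apart (this assumes the three tables share the same set of columns):
SELECT 't1' AS source_table, t1.* FROM t1
UNION ALL
SELECT 't2', t2.* FROM t2
UNION ALL
SELECT 't3', t3.* FROM t3;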
A: ClientDatasets can contain fields that are themselves other datasets. So if you want to create three tables in a single dataset, create three ClientDatasets holding the three result sets that you want, and then you can put them into a single ClientDataSet.
This article:
http://dn.codegear.com/article/29001
shows you how to do it both at runtime and at designtime. Pay particular attention to the section entitled:
"Creating a ClientDataSet's Structure at Runtime using TFields"
A: There is not a way to have multiple table data in the same TClientDataSet like you referenced. The TClientDataSet holds a single cursor for a single dataset.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Marketing Software Online What are some resources in use for marketing downloadable desktop software online? AdWords, certainly, and "organic" search engine results, but is anyone having any luck making sales through sites like Tucows and/or Download.com anymore?
A: Joel on Software forums
A: There are many ways to market your software online. Some cost money, many do not. Consider the following:
*
*Start a blog and update it regularly with items of interest to your potential customers
*Create forums where users can report bugs, ask for help, and suggest features. Build a community around your product that potential customers can find. Many will judge you by what those in your community say about your products.
*Submit press releases when you have announcements. There are many free, and some paid, sites that accept press releases. Don't forget announcements forums, newsgroups, and mailing lists.
*Create screencasts and post them to your site, as well as to community video sites like YouTube.
*Purchase advertising targeted as narrowly as possible to your prospective customers. If your target is Microsoft developers, for instance, consider Lake Quincy Media and CodeProject as good places to start.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
}
|
Q: How does Jan Willem Klop's "(L L L...)" Y combinator work? I understand what a Y Combinator is, but I don't understand this example of a "novel" combinator, from the Wikipedia page:
Yk = (L L L L L L L L L L L L L L L L L L L L L L L L L L)
Where:
L = λabcdefghijklmnopqstuvwxyzr. (r (t h i s i s a f i x e d p o i n t c o m b i n a t o r))
How does this work?
A: The essence of a fixed-point combinator C is that C f reduces to f (C f). It doesn't matter what you take for C as long as it does this. So instead of
(\y f. f (y y f)) (\y f. f (y y f))
you can just as well take
(\y z f. f (y y y f)) (\y z f. f (y y y f)) (\y z f. f (y y y f))
Basically you need something of the form
C t1 t2 ... tN
where ti = C for some i and
C = \x1 x2 .. xN f. f (xi u1 u2 ... xi ... u(N-1) f)
The other terms tj and uj are not actually "used". You can see that Klop's L has this form (although he uses the fact that all ti are L such that the second xi can also be any other xj).
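To see the C f -> f (C f) behaviour concretely, here is a sketch in Python. It uses the Z combinator, the eta-expanded call-by-value variant of Y, so that Python's eager evaluation doesn't recurse forever:
# Z f reduces to f applied to a delayed copy of (Z f).
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# factorial written without self-reference; Z ties the knot.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120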
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: Best content type to serve JSONP? I have a webservice that when called without specifying a callback will return a JSON string using application/json as the content type.
When a callback is specified it will wrap the JSON string in a callback function, so it's not really valid JSON anymore. My question is, should I serve it as application/javascript in this case or still use application/json?
A: Use application/javascript. In that way, clients can rely on the content-type without having to manually check whether a response has padding or not.
A: Use application/json as per rfc4627.txt if what you return is plain JSON.
If you return JavaScript (which is really what JSONP is), then use application/javascript as per rfc4329.txt
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111302",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "121"
}
|
Q: What's "P=NP?", and why is it such a famous question? The question of whether P=NP is perhaps the most famous in all of Computer Science. What does it mean? And why is it so interesting?
Oh, and for extra credit, please post a proof of the statement's truth or falsehood. :)
A: A short summary from my humble knowledge:
There are some easy computational problems (like finding the shortest path between two points in a graph), which can be calculated pretty fast, in O(n^k), where n is the size of the input and k is a constant (in the case of graphs, n is the number of vertexes or edges).
Other problems, like finding a path that crosses every vertex in a graph or getting the RSA private key from the public key is harder (O(e^n)).
But in CS terms the catch is that we do not know how to 'convert' a non-deterministic Turing machine into a deterministic one without an exponential slowdown. (We can transform non-deterministic finite automatons, like a regex engine's, into deterministic ones, though the resulting machine can be exponentially larger.) That is, we may have to try every possible path (usually smart CS professors can exclude a few of them).
It's interesting because nobody even has any idea of the solution. Some say it's true, some say it's false, but there is no consensus. Another interesting thing is that a solution would be harmful for public/private key encryptions (like RSA). You could break them as easily as generating an RSA key is now.
And it's a pretty inspiring problem.
A: There is not much I can add to the what and why of the P=?NP part of the question, but in regards to the proof. Not only would a proof be worth some extra credit, but it would solve one of the Millennium Problems. An interesting poll was recently conducted and the published results (PDF) are definitely worth reading in regards to the subject of a proof.
A: First, some definitions:
*
*A particular problem is in P if you can compute a solution in time less than n^k for some k, where n is the size of the input. For instance, sorting can be done in n log n which is less than n^2, so sorting is polynomial time.
*A problem is in NP if there exists a k such that there exists a solution of size at most n^k which you can verify in time at most n^k. Take 3-coloring of graphs: given a graph, a 3-coloring is a list of (vertex, color) pairs which has size O(n) and you can verify in time O(m) (or O(n^2)) whether all neighbors have different colors. So a graph is 3-colorable only if there is a short and readily verifiable solution.
An equivalent definition of NP is "problems solvable by a Nondeterministic Turing machine in Polynomial time". While that tells you where the name comes from, it doesn't give you the same intuitive feel of what NP problems are like.
Note that P is a subset of NP: if you can find a solution in polynomial time, there is a solution which can be verified in polynomial time--just check that the given solution is equal to the one you can find.
Why is the question P =? NP interesting? To answer that, one first needs to see what NP-complete problems are. Put simply,
*
*A problem L is NP-complete if (1) L is in NP, and (2) an algorithm which solves L can be used to solve any problem L' in NP; that is, given an instance of L' you can create an instance of L that has a solution if and only if the instance of L' has a solution. Formally speaking, every problem L' in NP is reducible to L.
Note that the instance of L must be polynomial-time computable and have polynomial size, in the size of L'; that way, solving an NP-complete problem in polynomial time gives us a polynomial time solution to all NP problems.
Here's an example: suppose we know that 3-coloring of graphs is an NP-hard problem. We want to prove that deciding the satisfiability of boolean formulas is an NP-hard problem as well.
For each vertex v, have two boolean variables v_h and v_l, and the requirement (v_h or v_l): each pair can only have the values {01, 10, 11}, which we can think of as color 1, 2 and 3.
For each edge (u, v), have the requirement that (u_h, u_l) != (v_h, v_l). That is,
not ((u_h and not u_l) and (v_h and not v_l) or ...)
enumerating all the equal configurations and stipulation that neither of them are the case.
AND'ing together all these constraints gives a boolean formula which has polynomial size (O(n+m)). You can check that it takes polynomial time to compute as well: you're doing straightforward O(1) stuff per vertex and per edge.
If you can solve the boolean formula I've made, then you can also solve graph coloring: for each pair of variables v_h and v_l, let the color of v be the one matching the values of those variables. By construction of the formula, neighbors won't have equal colors.
Hence, if 3-coloring of graphs is NP-complete, so is boolean-formula-satisfiability.
We know that 3-coloring of graphs is NP-complete; however, historically we have come to know that by first showing the NP-completeness of boolean-circuit-satisfiability, and then reducing that to 3-colorability (instead of the other way around).
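The "easy to verify" half of this is worth seeing in code; here is a sketch of a polynomial-time checker for a proposed 3-coloring:
def verify_3_coloring(edges, coloring):
    # edges: iterable of (u, v) vertex pairs
    # coloring: dict mapping every vertex to a color in {1, 2, 3}
    if any(c not in (1, 2, 3) for c in coloring.values()):
        return False
    # Every edge must join differently colored vertices: O(m) work.
    return all(coloring[u] != coloring[v] for u, v in edges)

# Finding a valid coloring is the (apparently) hard part; checking one is easy:
edges = [(0, 1), (1, 2), (0, 2)]                      # a triangle
print(verify_3_coloring(edges, {0: 1, 1: 2, 2: 3}))   # True
print(verify_3_coloring(edges, {0: 1, 1: 1, 2: 3}))   # False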
A: P stands for polynomial time. NP stands for non-deterministic polynomial time.
Definitions:
*
*Polynomial time means that the complexity of the algorithm is O(n^k), where n is the size of your data (e. g. number of elements in a list to be sorted), and k is a constant.
*Complexity is time measured in the number of operations it would take, as a function of the number of data items.
*Operation is whatever makes sense as a basic operation for a particular task. For sorting, the basic operation is a comparison. For matrix multiplication, the basic operation is multiplication of two numbers.
Now the question is, what does deterministic vs. non-deterministic mean? There is an abstract computational model, an imaginary computer called a Turing machine (TM). This machine has a finite number of states, and an infinite tape, which has discrete cells into which a finite set of symbols can be written and read. At any given time, the TM is in one of its states, and it is looking at a particular cell on the tape. Depending on what it reads from that cell, it can write a new symbol into that cell, move the tape one cell forward or backward, and go into a different state. This is called a state transition. Amazingly enough, by carefully constructing states and transitions, you can design a TM, which is equivalent to any computer program that can be written. This is why it is used as a theoretical model for proving things about what computers can and cannot do.
There are two kinds of TM's that concern us here: deterministic and non-deterministic. A deterministic TM only has one transition from each state for each symbol that it is reading off the tape. A non-deterministic TM may have several such transition, i. e. it is able to check several possibilities simultaneously. This is sort of like spawning multiple threads. The difference is that a non-deterministic TM can spawn as many such "threads" as it wants, while on a real computer only a specific number of threads can be executed at a time (equal to the number of CPUs). In reality, computers are basically deterministic TMs with finite tapes. On the other hand, a non-deterministic TM cannot be physically realized, except maybe with a quantum computer.
It has been proven that any problem that can be solved by a non-deterministic TM can be solved by a deterministic TM. However, it is not clear how much time it will take. The statement P=NP means that if a problem takes polynomial time on a non-deterministic TM, then one can build a deterministic TM which would solve the same problem also in polynomial time. So far nobody has been able to show that it can be done, but nobody has been able to prove that it cannot be done, either.
NP-complete problem means an NP problem X, such that any NP problem Y can be reduced to X by a polynomial reduction. That implies that if anyone ever comes up with a polynomial-time solution to an NP-complete problem, that will also give a polynomial-time solution to any NP problem. Thus that would prove that P=NP. Conversely, if anyone were to prove that P!=NP, then we would be certain that there is no way to solve an NP problem in polynomial time on a conventional computer.
An example of an NP-complete problem is the problem of finding a truth assignment that would make a boolean expression containing n variables true.
For the moment, in practice, any problem that takes polynomial time on a non-deterministic TM can only be solved in exponential time on a deterministic TM or on a conventional computer.
For example, the only known general way to solve the truth assignment problem is to try all 2^n possibilities.
A: To give the simplest answer I can think of:
Suppose we have a problem that takes a certain number of inputs, and has various potential solutions, which may or may not solve the problem for given inputs. A logic puzzle in a puzzle magazine would be a good example: the inputs are the conditions ("George doesn't live in the blue or green house"), and the potential solution is a list of statements ("George lives in the yellow house, grows peas, and owns the dog"). A famous example is the Traveling Salesman problem: given a list of cities, and the times to get from any city to any other, and a time limit, a potential solution would be a list of cities in the order the salesman visits them, and it would work if the sum of the travel times was less than the time limit.
Such a problem is in NP if we can efficiently check a potential solution to see if it works. For example, given a list of cities for the salesman to visit in order, we can add up the times for each trip between cities, and easily see if it's under the time limit. A problem is in P if we can efficiently find a solution if one exists.
(Efficiently, here, has a precise mathematical meaning. Practically, it means that large problems aren't unreasonably difficult to solve. When searching for a possible solution, an inefficient way would be to list all possible potential solutions, or something close to that, while an efficient way would require searching a much more limited set.)
Therefore, the P=NP problem can be expressed this way: If you can verify a solution for a problem of the sort described above efficiently, can you find a solution (or prove there is none) efficiently? The obvious answer is "Why should you be able to?", and that's pretty much where the matter stands today. Nobody has been able to prove it one way or another, and that bothers a lot of mathematicians and computer scientists. That's why anybody who can prove it either way is up for a million dollars from the Clay Mathematics Institute.
We generally assume that P does not equal NP, that there is no general way to find solutions. If it turned out that P=NP, a lot of things would change. For example, cryptography would become impossible, and with it any sort of privacy or verifiability on the Internet. After all, we can efficiently take the encrypted text and the key and produce the original text, so if P=NP we could efficiently find the key without knowing it beforehand. Password cracking would become trivial. On the other hand, there's whole classes of planning problems and resource allocation problems that we could solve effectively.
You may have heard the description NP-complete. An NP-complete problem is one that is NP (of course), and has this interesting property: if it is in P, every NP problem is, and so P=NP. If you could find a way to efficiently solve the Traveling Salesman problem, or logic puzzles from puzzle magazines, you could efficiently solve anything in NP. An NP-complete problem is, in a way, the hardest sort of NP problem.
So, if you can find an efficient general solution technique for any NP-complete problem, or prove that no such exists, fame and fortune are yours.
A: *
*A yes-or-no problem is in P (Polynomial time) if the answer can be computed in polynomial time.
*A yes-or-no problem is in NP (Non-deterministic Polynomial time) if a yes answer can be verified in polynomial time.
Intuitively, we can see that if a problem is in P, then it is in NP. Given a potential answer for a problem in P, we can verify the answer by simply recalculating the answer.
Less obvious, and much more difficult to answer, is whether all problems in NP are in P. Does the fact that we can verify an answer in polynomial time mean that we can compute that answer in polynomial time?
There are a large number of important problems that are known to be NP-complete (basically, if any of these problems is proven to be in P, then all NP problems are proven to be in P). If P = NP, then all of these problems will be proven to have an efficient (polynomial time) solution.
Most scientists believe that P!=NP. However, no proof has yet been established for either P = NP or P!=NP. If anyone provides a proof for either conjecture, they will win US $1 million.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "268"
}
|
Q: How to convert legacy Interbase DB to SQL Server? I have an Interbase DB. How can I convert it to SQL Server?
A: You could use SQL Server built in Data Transformation Services (DTS) in SQL Server 2000 or SQL Server Integration Services (SSIS) in SQL Server 2005.
Try setting up an ODBC DSN for Interbase. Then in DTS / SSIS use the Other (ODBC Data Source) and the DSN.
If that does not work then see if Interbase has a utility to export to text files and then use DTS / SSIS to import the text files.
A: If you want to spend some money, this will do it:
http://www.spectralcore.com/fullconvert/tutorials/convert-interbase-firebird-to-mssql-sql-server.php
A: The Interbase DB Wikipedia page says that it supports OBDC and ADO.NET, so I would think that SQL Server can probably import this database on its own. I don't have access to an Interbase DB installation to try, but you might find these pages helpful.
MSDN on import data wizard
MSDN on bulk import command (if Interbase DB can dump a text file)
Article on bulk importing from an ADO.NET supporting source
Hopefully somebody will have direct experience with this database and can help. Good luck!
A: If you only need to convert tables and data, that's rather simple. Just use ODBC driver for InterBase, connect to it and pump the data.
However, if you need business logic as well, you cannot covert it just like that. You can convert regular tables and views without too much problems. Domain info would be lost but you don't need it in MSSQL anyway. The only problem with tables can be array fields, which you need to convert to separate tables, but that isn't too hard either.
The problem is the conversion of triggers and stored procedures, since InterBase uses its own, custom PSQL language. It has some concepts that are different from MSSQL. For example, you have procedures that can return resultsets, and you would need to convert those to MSSQL functions.
In any case, it shouldn't be too hard, since you're going from low to high complexity, but there are no tools to do it automatically.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: What are some ADFS alternatives for doing single sign on for an ASP.NET app with users in active directory? Needs to be secure and able to handle thousands of users.
A: Check out ADAM and AzMan.
ADAM is Active Directory Application Mode. There is a how-to guide at: http://msdn.microsoft.com/en-us/library/ms998331.aspx
AzMan is Authorization Manager. There is a how-to guide at: http://msdn.microsoft.com/en-us/library/ms998336.aspx
A: Three alternatives to ADFS are
*
*Symplified - Various product options
*Ping - SaaS product
*Forgerock - This is an open platform
All integrate with AD in some way
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Getting Arduino LilyPad to switch BlueSmirf v2.11 to/from command mode A battery powered (2 x AA) Arduino LilyPad should switch a BlueSmirf v2.11 Bluetooth modem to/from command mode (see source code below). The BlueSmirf has been set to 9600 baud.
If the PC connects via Bluetooth (see source code below), the Arduino program runs fine at the beginning (sending multiple "ping\n"). After some time it (LilyPad/BlueSmirf) starts to also send "$$$" and "---\n" over the Bluetooth connection instead of switching to/from command mode.
Any ideas?
Regards,
tamberg
// Arduino source code:
void setup () {
Serial.begin(9600);
}
void loop () {
Serial.print("$$$");
delay(2000); // TODO: Inquiry, etc.
Serial.print("---\n");
delay(100);
Serial.print("ping\n");
delay(2000);
}
// C# source code (runs on PC)
using System;
using System.IO.Ports;
class Program {
static void Main () {
SerialPort p = new SerialPort(
"COM20", 9600, Parity.None, 8, StopBits.One);
using (p) {
p.Open();
while (p.IsOpen) {
Console.Write((char) p.ReadChar());
}
}
}
}
A: From the datasheet, page 6:
NOTE1: You can enter command mode locally over the serial port at any time when not connected. Once a connection is made, you can only enter command mode if the config timer has not expired. To enable continuous configuration, set the config timer to 255. Also, if the device is in Auto Master mode 3, you will NOT be able to enter command mode when connected over Bluetooth.
My guess would be that the config timer is expiring.
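If so, a one-time configuration sketch along these lines should help (a rough sketch only; it assumes your firmware uses the Roving Networks command set, where ST,255 enables continuous configuration, and it should be run once while the modem is not connected):
void setup () {
    Serial.begin(9600);
    delay(1000);
    Serial.print("$$$");      // enter command mode
    delay(1000);
    Serial.print("ST,255\r"); // set config timer to 255 (continuous configuration)
    delay(1000);
    Serial.print("---\r");    // leave command mode
}

void loop () {
}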
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Changing another Process Locale From my own "key logger like" process I figured out that another process's locale is wrong (i.e. by sniffing a few keys, I figured out that the foreground process's locale should be one thing while it is set to another). What's the best way to change it?
A: I'd use setLocale from within that process to change it, and notify the process about this with some form of IPC like:
*
*signals
*sockets
*pipes
from the process who knows
A: You didn't specify an operating system, but on Linux this is quite hard unless the target process is willing to help (i.e. there's some IPC mechanism available where you can ask the process to do it for you).
What you can do is attach to the process, like a debugger or strace does, and then call the appropriate library function (like setlocale()) in its context.
The result on the target process is of course undetermined since it probably doesn't expect to get its locale changed under its feet :)
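For example, with gdb attached you can call it directly from the shell (a sketch only; 6 is the value of LC_ALL on glibc, and the locale string is just an example):
gdb -p <pid> --batch -ex 'call (char *) setlocale(6, "en_US.UTF-8")'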
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Combine multiple results in a subquery into a single comma-separated value I've got two tables:
TableA
------
ID,
Name
TableB
------
ID,
SomeColumn,
TableA_ID (FK for TableA)
The relationship is one row of TableA - many of TableB.
Now, I want to see a result like this:
ID Name SomeColumn
1. ABC X, Y, Z (these are three different rows)
2. MNO R, S
This won't work (multiple results in a subquery):
SELECT ID,
Name,
(SELECT SomeColumn FROM TableB WHERE F_ID=TableA.ID)
FROM TableA
This is a trivial problem if I do the processing on the client side. But this will mean I will have to run X queries on every page, where X is the number of results of TableA.
Note that I can't simply do a GROUP BY or something similar, as it will return multiple results for rows of TableA.
I'm not sure if a UDF, utilizing COALESCE or something similar might work?
A: 1. Create the UDF:
CREATE FUNCTION CombineValues
(
@FK_ID INT -- The foreign key from TableA which is used
-- to fetch corresponding records
)
RETURNS VARCHAR(8000)
AS
BEGIN
DECLARE @SomeColumnList VARCHAR(8000);
SELECT @SomeColumnList =
COALESCE(@SomeColumnList + ', ', '') + CAST(SomeColumn AS varchar(20))
FROM TableB C
WHERE C.FK_ID = @FK_ID;
RETURN
(
SELECT @SomeColumnList
)
END
2. Use in subquery:
SELECT ID, Name, dbo.CombineValues(ID) FROM TableA
3. If you are using stored procedure you can do like this:
CREATE PROCEDURE GetCombinedValues
@FK_ID int
As
BEGIN
DECLARE @SomeColumnList VARCHAR(8000)
SELECT @SomeColumnList =
COALESCE(@SomeColumnList + ', ', '') + CAST(SomeColumn AS varchar(20))
FROM TableB
WHERE FK_ID = @FK_ID
Select *, @SomeColumnList as SelectedIds
FROM
TableA
WHERE
ID = @FK_ID
END
A: Even this will serve the purpose
Sample data
declare @t table(id int, name varchar(20),somecolumn varchar(MAX))
insert into @t
select 1,'ABC','X' union all
select 1,'ABC','Y' union all
select 1,'ABC','Z' union all
select 2,'MNO','R' union all
select 2,'MNO','S'
Query:
SELECT ID,Name,
STUFF((SELECT ',' + CAST(T2.SomeColumn AS VARCHAR(MAX))
FROM @T T2 WHERE T1.id = T2.id AND T1.name = T2.name
FOR XML PATH('')),1,1,'') SOMECOLUMN
FROM @T T1
GROUP BY id,Name
Output:
ID Name SomeColumn
1 ABC X,Y,Z
2 MNO R,S
A: In MySQL there is a group_concat function that will return what you're asking for.
SELECT TableA.ID, TableA.Name, group_concat(TableB.SomeColumn)
as SomeColumnGroup FROM TableA LEFT JOIN TableB ON
TableB.TableA_ID = TableA.ID
A: I think you are on the right track with COALESCE. See here for an example of building a comma-delimited string:
http://www.sqlteam.com/article/using-coalesce-to-build-comma-delimited-string
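The core trick from that article looks roughly like this, using the table names from the question (a sketch that builds the list for a single TableA row; the WHERE value is just an example):
DECLARE @List VARCHAR(8000);

SELECT @List = COALESCE(@List + ', ', '') + SomeColumn
FROM TableB
WHERE TableA_ID = 1;

SELECT @List; -- e.g. 'X, Y, Z'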
A: You may need to provide some more details for a more precise response.
Since your dataset seems kind of narrow, you might consider just using a row per result and performing the post-processing at the client.
So if you are really looking to make the server do the work return a result set like
ID Name SomeColumn
1 ABC X
1 ABC Y
1 ABC Z
2 MNO R
2 MNO S
which of course is a simple INNER JOIN on ID
Once you have the resultset back at the client, maintain a variable called CurrentName and use that as a trigger when to stop collecting SomeColumn into the useful thing you want it to do.
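A rough sketch of that client-side pass in C# (it assumes reader is an open data reader over the joined result set with columns ID, Name, SomeColumn in that order):
var grouped = new Dictionary<string, List<string>>();
while (reader.Read())
{
    string name = reader.GetString(1);      // Name column
    if (!grouped.ContainsKey(name))
        grouped[name] = new List<string>();
    grouped[name].Add(reader.GetString(2)); // SomeColumn column
}
foreach (var pair in grouped)
    Console.WriteLine(pair.Key + ": " + string.Join(", ", pair.Value.ToArray()));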
A: Assuming you only have WHERE clauses on table A, create a stored procedure thus:
SELECT Id, Name From tableA WHERE ...
SELECT tableA.Id AS ParentId, Somecolumn
FROM tableA INNER JOIN tableB on TableA.Id = TableB.F_Id
WHERE ...
Then fill a DataSet ds with it. Then
ds.Relations.Add("foo", ds.Tables[0].Columns["Id"], ds.Tables[1].Columns["ParentId"]);
Finally you can add a repeater in the page that puts the commas for every line
<asp:DataList ID="Subcategories" DataKeyField="ParentCatId"
DataSource='<%# Container.DataItem.CreateChildView("foo") %>' RepeatColumns="1"
RepeatDirection="Horizontal" ItemStyle-HorizontalAlign="left" ItemStyle-VerticalAlign="top"
runat="server" >
In this way you will do it client side but with only one query, passing minimal data between database and frontend
A: I tried the solution priyanka.sarkar mentioned and didn't quite get it working as the OP asked. Here's the solution I ended up with:
SELECT ID,
SUBSTRING((
SELECT ',' + T2.SomeColumn
FROM @T T2
WHERE T1.id = T2.id
FOR XML PATH('')), 2, 1000000)
FROM @T T1
GROUP BY ID
A: Solution below:
SELECT GROUP_CONCAT(field_attr_best_weekday_value)as RAVI
FROM content_field_attr_best_weekday LEFT JOIN content_type_attraction
on content_field_attr_best_weekday.nid = content_type_attraction.nid
GROUP BY content_field_attr_best_weekday.nid
Use this; you can also change the joins as needed.
A: I have reviewed all the answers. I think the data could instead be stored in the database like this:
ID Name SomeColumn
1. ABC ,X,Y,Z (these were three different rows)
2. MNO ,R,S
The comma goes at the start of each value, so you can then search with LIKE '%,X,%'.
A: SELECT t.ID,
t.NAME,
(SELECT t1.SOMECOLUMN
FROM TABLEB t1
WHERE t1.F_ID = t.ID)
FROM TABLEA t;
This will work for selecting from a different table using a subquery.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "85"
}
|
Q: Getting image dimensions without reading the entire file Is there a cheap way to get the dimensions of an image (jpg, png, ...)? Preferably, I would like to achieve this using only the standard class library (because of hosting restrictions). I know that it should be relatively easy to read the image header and parse it myself, but it seems that something like this should be already there. Also, I’ve verified that the following piece of code reads the entire image (which I don’t want):
using System;
using System.Drawing;
namespace Test
{
class Program
{
static void Main(string[] args)
{
Image img = new Bitmap("test.png");
System.Console.WriteLine(img.Width + " x " + img.Height);
}
}
}
A: Based on the answers so far and some additional searching, it seems that in the .NET 2 class library there is no functionality for it. So I decided to write my own. Here is a very rough version of it. At the moment, I needed it only for JPG’s. So it completes the answer posted by Abbas.
There is no error checking or any other verification, but I currently need it for a limited task, and it can easily be added later. I tested it on a number of images, and it usually does not read more than 6K from an image. I guess it depends on the amount of EXIF data.
using System;
using System.IO;
namespace Test
{
class Program
{
static bool GetJpegDimension(
string fileName,
out int width,
out int height)
{
width = height = 0;
bool found = false;
bool eof = false;
FileStream stream = new FileStream(
fileName,
FileMode.Open,
FileAccess.Read);
BinaryReader reader = new BinaryReader(stream);
while (!found && !eof)
{
// read 0xFF and the type
reader.ReadByte();
byte type = reader.ReadByte();
// get length
int len = 0;
switch (type)
{
// start and end of the image
case 0xD8:
case 0xD9:
len = 0;
break;
// restart interval
case 0xDD:
len = 2;
break;
// the next two bytes is the length
default:
int lenHi = reader.ReadByte();
int lenLo = reader.ReadByte();
len = (lenHi << 8 | lenLo) - 2;
break;
}
// EOF?
if (type == 0xD9)
eof = true;
// process the data
if (len > 0)
{
// read the data
byte[] data = reader.ReadBytes(len);
// this is what we are looking for
if (type == 0xC0)
{
width = data[1] << 8 | data[2];
height = data[3] << 8 | data[4];
found = true;
}
}
}
reader.Close();
stream.Close();
return found;
}
static void Main(string[] args)
{
foreach (string file in Directory.GetFiles(args[0]))
{
int w, h;
GetJpegDimension(file, out w, out h);
System.Console.WriteLine(file + ": " + w + " x " + h);
}
}
}
}
A: Updated ICR's answer to support progressive JPEGs & WebP as well :)
using System;
using System.Collections.Generic;
using System.Drawing;
using System.IO;
using System.Linq;
internal static class ImageHelper
{
const string errorMessage = "Could not recognise image format.";
private static Dictionary<byte[], Func<BinaryReader, Size>> imageFormatDecoders = new Dictionary<byte[], Func<BinaryReader, Size>>()
{
{ new byte[] { 0x42, 0x4D }, DecodeBitmap },
{ new byte[] { 0x47, 0x49, 0x46, 0x38, 0x37, 0x61 }, DecodeGif },
{ new byte[] { 0x47, 0x49, 0x46, 0x38, 0x39, 0x61 }, DecodeGif },
{ new byte[] { 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A }, DecodePng },
{ new byte[] { 0xff, 0xd8 }, DecodeJfif },
{ new byte[] { 0x52, 0x49, 0x46, 0x46 }, DecodeWebP },
};
/// <summary>
/// Gets the dimensions of an image.
/// </summary>
/// <param name="path">The path of the image to get the dimensions of.</param>
/// <returns>The dimensions of the specified image.</returns>
/// <exception cref="ArgumentException">The image was of an unrecognised format.</exception>
public static Size GetDimensions(BinaryReader binaryReader)
{
int maxMagicBytesLength = imageFormatDecoders.Keys.OrderByDescending(x => x.Length).First().Length;
byte[] magicBytes = new byte[maxMagicBytesLength];
for(int i = 0; i < maxMagicBytesLength; i += 1)
{
magicBytes[i] = binaryReader.ReadByte();
foreach(var kvPair in imageFormatDecoders)
{
if(StartsWith(magicBytes, kvPair.Key))
{
return kvPair.Value(binaryReader);
}
}
}
throw new ArgumentException(errorMessage, "binaryReader");
}
private static bool StartsWith(byte[] thisBytes, byte[] thatBytes)
{
for(int i = 0; i < thatBytes.Length; i += 1)
{
if(thisBytes[i] != thatBytes[i])
{
return false;
}
}
return true;
}
private static short ReadLittleEndianInt16(BinaryReader binaryReader)
{
byte[] bytes = new byte[sizeof(short)];
for(int i = 0; i < sizeof(short); i += 1)
{
bytes[sizeof(short) - 1 - i] = binaryReader.ReadByte();
}
return BitConverter.ToInt16(bytes, 0);
}
private static int ReadLittleEndianInt32(BinaryReader binaryReader)
{
byte[] bytes = new byte[sizeof(int)];
for(int i = 0; i < sizeof(int); i += 1)
{
bytes[sizeof(int) - 1 - i] = binaryReader.ReadByte();
}
return BitConverter.ToInt32(bytes, 0);
}
private static Size DecodeBitmap(BinaryReader binaryReader)
{
binaryReader.ReadBytes(16);
int width = binaryReader.ReadInt32();
int height = binaryReader.ReadInt32();
return new Size(width, height);
}
private static Size DecodeGif(BinaryReader binaryReader)
{
int width = binaryReader.ReadInt16();
int height = binaryReader.ReadInt16();
return new Size(width, height);
}
private static Size DecodePng(BinaryReader binaryReader)
{
binaryReader.ReadBytes(8);
int width = ReadLittleEndianInt32(binaryReader);
int height = ReadLittleEndianInt32(binaryReader);
return new Size(width, height);
}
private static Size DecodeJfif(BinaryReader binaryReader)
{
while(binaryReader.ReadByte() == 0xff)
{
byte marker = binaryReader.ReadByte();
short chunkLength = ReadLittleEndianInt16(binaryReader);
if(marker == 0xc0 || marker == 0xc2) // c2: progressive
{
binaryReader.ReadByte();
int height = ReadLittleEndianInt16(binaryReader);
int width = ReadLittleEndianInt16(binaryReader);
return new Size(width, height);
}
if(chunkLength < 0)
{
ushort uchunkLength = (ushort)chunkLength;
binaryReader.ReadBytes(uchunkLength - 2);
}
else
{
binaryReader.ReadBytes(chunkLength - 2);
}
}
throw new ArgumentException(errorMessage);
}
private static Size DecodeWebP(BinaryReader binaryReader)
{
binaryReader.ReadUInt32(); // Size
binaryReader.ReadBytes(15); // WEBP, VP8 + more
binaryReader.ReadBytes(3); // SYNC
var width = binaryReader.ReadUInt16() & 0b00_11111111111111; // 14 bits width
var height = binaryReader.ReadUInt16() & 0b00_11111111111111; // 14 bits height
return new Size(width, height);
}
}
A: I did this for PNG file
var buff = new byte[32];
using (var d = File.OpenRead(file))
{
d.Read(buff, 0, 32);
}
const int wOff = 16;
const int hOff = 20;
var Width = BitConverter.ToInt32(new[] {buff[wOff + 3], buff[wOff + 2], buff[wOff + 1], buff[wOff + 0],}, 0);
var Height = BitConverter.ToInt32(new[] {buff[hOff + 3], buff[hOff + 2], buff[hOff + 1], buff[hOff + 0],}, 0);
A: using (FileStream file = new FileStream(this.ImageFileName, FileMode.Open, FileAccess.Read))
{
using (Image tif = Image.FromStream(stream: file,
useEmbeddedColorManagement: false,
validateImageData: false))
{
float width = tif.PhysicalDimension.Width;
float height = tif.PhysicalDimension.Height;
float hresolution = tif.HorizontalResolution;
float vresolution = tif.VerticalResolution;
}
}
The validateImageData parameter set to false prevents GDI+ from performing a costly analysis of the image data, thus severely decreasing load time. This question sheds more light on the subject.
A: Have you tried using the WPF Imaging classes? System.Windows.Media.Imaging.BitmapDecoder, etc.?
I believe some effort was put into making sure those codecs only read a subset of the file in order to determine header information. It's worth a check.
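A minimal sketch of that approach (it assumes references to PresentationCore and WindowsBase; the DelayCreation/None combination is what keeps the decoder from loading the full pixel data):
using System;
using System.Windows.Media.Imaging;

var frame = BitmapFrame.Create(
    new Uri(@"c:\test.png"),
    BitmapCreateOptions.DelayCreation,
    BitmapCacheOption.None);
Console.WriteLine(frame.PixelWidth + " x " + frame.PixelHeight);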
A: I was looking for something similar a few months ago. I wanted to read the type, version, height and width of a GIF image but couldn't find anything useful online.
Fortunately in case of GIF, all the required information was in the first 10 bytes:
Type: Bytes 0-2
Version: Bytes 3-5
Height: Bytes 6-7
Width: Bytes 8-9
PNG are slightly more complex (width and height are 4-bytes each):
Width: Bytes 16-19
Height: Bytes 20-23
As mentioned above, wotsit is a good site for detailed specs on image and data formats though the PNG specs at pnglib are much more detailed. However, I think the Wikipedia entry on PNG and GIF formats is the best place to start.
Here’s my original code for checking GIFs, I have also slapped together something for PNGs:
using System;
using System.IO;
using System.Text;
public class ImageSizeTest
{
public static void Main()
{
byte[] bytes = new byte[10];
string gifFile = @"D:\Personal\Images&Pics\iProduct.gif";
using (FileStream fs = File.OpenRead(gifFile))
{
fs.Read(bytes, 0, 10); // type (3 bytes), version (3 bytes), width (2 bytes), height (2 bytes)
}
displayGifInfo(bytes);
string pngFile = @"D:\Personal\Images&Pics\WaveletsGamma.png";
using (FileStream fs = File.OpenRead(pngFile))
{
fs.Seek(16, SeekOrigin.Begin); // jump to the 16th byte where width and height information is stored
fs.Read(bytes, 0, 8); // width (4 bytes), height (4 bytes)
}
displayPngInfo(bytes);
}
public static void displayGifInfo(byte[] bytes)
{
string type = Encoding.ASCII.GetString(bytes, 0, 3);
string version = Encoding.ASCII.GetString(bytes, 3, 3);
int width = bytes[6] | bytes[7] << 8; // bytes 6 and 7 contain the width in little-endian order, so byte 7 has to be left-shifted 8 places and OR-ed with byte 6
int height = bytes[8] | bytes[9] << 8; // same for height
Console.WriteLine("GIF\nType: {0}\nVersion: {1}\nWidth: {2}\nHeight: {3}\n", type, version, width, height);
}
public static void displayPngInfo(byte[] bytes)
{
int width = 0, height = 0;
for (int i = 0; i <= 3; i++)
{
width = bytes[i] | width << 8;
height = bytes[i + 4] | height << 8;
}
Console.WriteLine("PNG\nWidth: {0}\nHeight: {1}\n", width, height);
}
}
A: Your best bet as always is to find a well tested library. However, you said that is difficult, so here is some dodgy largely untested code that should work for a fair number of cases:
using System;
using System.Collections.Generic;
using System.Drawing;
using System.IO;
using System.Linq;
namespace ImageDimensions
{
public static class ImageHelper
{
const string errorMessage = "Could not recognize image format.";
private static Dictionary<byte[], Func<BinaryReader, Size>> imageFormatDecoders = new Dictionary<byte[], Func<BinaryReader, Size>>()
{
{ new byte[]{ 0x42, 0x4D }, DecodeBitmap},
{ new byte[]{ 0x47, 0x49, 0x46, 0x38, 0x37, 0x61 }, DecodeGif },
{ new byte[]{ 0x47, 0x49, 0x46, 0x38, 0x39, 0x61 }, DecodeGif },
{ new byte[]{ 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A }, DecodePng },
{ new byte[]{ 0xff, 0xd8 }, DecodeJfif },
};
/// <summary>
/// Gets the dimensions of an image.
/// </summary>
/// <param name="path">The path of the image to get the dimensions of.</param>
/// <returns>The dimensions of the specified image.</returns>
/// <exception cref="ArgumentException">The image was of an unrecognized format.</exception>
public static Size GetDimensions(string path)
{
using (BinaryReader binaryReader = new BinaryReader(File.OpenRead(path)))
{
try
{
return GetDimensions(binaryReader);
}
catch (ArgumentException e)
{
if (e.Message.StartsWith(errorMessage))
{
throw new ArgumentException(errorMessage, "path", e);
}
else
{
throw;
}
}
}
}
/// <summary>
/// Gets the dimensions of an image.
/// </summary>
/// <param name="path">The path of the image to get the dimensions of.</param>
/// <returns>The dimensions of the specified image.</returns>
/// <exception cref="ArgumentException">The image was of an unrecognized format.</exception>
public static Size GetDimensions(BinaryReader binaryReader)
{
int maxMagicBytesLength = imageFormatDecoders.Keys.OrderByDescending(x => x.Length).First().Length;
byte[] magicBytes = new byte[maxMagicBytesLength];
for (int i = 0; i < maxMagicBytesLength; i += 1)
{
magicBytes[i] = binaryReader.ReadByte();
foreach(var kvPair in imageFormatDecoders)
{
if (magicBytes.StartsWith(kvPair.Key))
{
return kvPair.Value(binaryReader);
}
}
}
throw new ArgumentException(errorMessage, "binaryReader");
}
private static bool StartsWith(this byte[] thisBytes, byte[] thatBytes)
{
for(int i = 0; i < thatBytes.Length; i+= 1)
{
if (thisBytes[i] != thatBytes[i])
{
return false;
}
}
return true;
}
private static short ReadLittleEndianInt16(this BinaryReader binaryReader)
{
byte[] bytes = new byte[sizeof(short)];
for (int i = 0; i < sizeof(short); i += 1)
{
bytes[sizeof(short) - 1 - i] = binaryReader.ReadByte();
}
return BitConverter.ToInt16(bytes, 0);
}
private static int ReadLittleEndianInt32(this BinaryReader binaryReader)
{
byte[] bytes = new byte[sizeof(int)];
for (int i = 0; i < sizeof(int); i += 1)
{
bytes[sizeof(int) - 1 - i] = binaryReader.ReadByte();
}
return BitConverter.ToInt32(bytes, 0);
}
private static Size DecodeBitmap(BinaryReader binaryReader)
{
binaryReader.ReadBytes(16);
int width = binaryReader.ReadInt32();
int height = binaryReader.ReadInt32();
return new Size(width, height);
}
private static Size DecodeGif(BinaryReader binaryReader)
{
int width = binaryReader.ReadInt16();
int height = binaryReader.ReadInt16();
return new Size(width, height);
}
private static Size DecodePng(BinaryReader binaryReader)
{
binaryReader.ReadBytes(8);
int width = binaryReader.ReadLittleEndianInt32();
int height = binaryReader.ReadLittleEndianInt32();
return new Size(width, height);
}
private static Size DecodeJfif(BinaryReader binaryReader)
{
while (binaryReader.ReadByte() == 0xff)
{
byte marker = binaryReader.ReadByte();
short chunkLength = binaryReader.ReadLittleEndianInt16();
if (marker == 0xc0)
{
binaryReader.ReadByte();
int height = binaryReader.ReadLittleEndianInt16();
int width = binaryReader.ReadLittleEndianInt16();
return new Size(width, height);
}
binaryReader.ReadBytes(chunkLength - 2);
}
throw new ArgumentException(errorMessage);
}
}
}
Hopefully the code is fairly obvious. To add a new file format you add it to imageFormatDecoders with the key being an array of the "magic bits" which appear at the beginning of every file of the given format and the value being a function which extracts the size from the stream. Most formats are simple enough, the only real stinker is jpeg.
A: Yes, you can absolutely do this and the code depends on the file format. I work for an imaging vendor (Atalasoft), and our product provides a GetImageInfo() for every codec that does the minimum to find out dimensions and some other easy to get data.
If you want to roll your own, I suggest starting with wotsit.org, which has detailed specs for pretty much all image formats and you will see how to identify the file and also where information in it can be found.
If you are comfortable working with C, then the free jpeglib can be used to get this information too. I would bet that you can do this with .NET libraries, but I don't know how.
A: It's going to depend on the file format. Usually they state it in the early bytes of the file. And, usually, a good image-reading implementation will take that into account. I can't point you to one for .NET though.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "116"
}
|
Q: What XNA based 3D terrain and physics libraries exist? I'm planning on creating a game that contains a landscape with objects on it. The landscape will be defined using a heightfield, and the objects will move about on top of, and fly over the terrain. If you've ever played the old games Marble Madness and Virus/Zarch, that's the kind of complexity and style I'm trying to create.
I've seen various physics engines on the Internet, and a few tutorials about how to render heightfields as terrain, but they either lack documentation or seem overly complex for what I need.
All I need is a way to draw a heightfield, place 3D objects on it and then make them roll down the hills, or fly around in the sky. I don't mind making my own code to do this, so any relevant tutorials would be good too.
A: Here is a more complete list, Xbox, Zune and Windows...
*
*Farseer - 2d only.
*JigLibX
*Bullet
*
* BulletX
* XBAP
*Oops! 3D Physics Framework
*Bepu physics
*Jello Physics
*Physics2D.Net
Windows Only...
*
*PhysX
*
*MS Robotics Studio wrapper
*PhysXdotNet Wrapper
*ODE (Open Dynamics Engine)
*
*XPA (XNA Physics lib)
*Newton Game Dynamics
*
*Newton Physics Port to XNA
A: If you're looking for more of a tutorial rather than a full-blown solution, have you checked the collision series at the XNA creators site?
Specifically, Collision Series 5: Heightmap Collision with Normals sounds like exactly what you're looking for.
A: Check out Newton Game Dynamics, there is a port of their physics engine for XNA. The only caveat is that it only works under Windows.
A: Check out Matali Physics Engine. Matali Physics is a physics engine for XNA.
home page
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Redirect to different controller I have some code in an IAuthorizationFilter which redirects the user to a login page but I'm having trouble changing the controller which is used. So I might do
public void OnAuthorization(AuthorizationContext context)
{
UserController u = new UserController();
context.Result = u.Login();
context.Cancel = true;
}
But this results in
The view 'Login' or its master could not be found. The following locations were searched:
~/Views/Product/Login.aspx
~/Views/Product/Login.ascx
~/Views/Shared/Login.aspx
~/Views/Shared/Login.ascx
I am running this from a product controller. How do I get the view engine to use the user controller rather than the product controller?
Edit: I got it working with
RedirectResult r = new RedirectResult("../User.aspx/Login");
context.Result = r;
context.Cancel = true;
But this is a kludge; I'm sure there is a better way. There is frustratingly little exposed in the ActionFilterAttribute. It seems like if the controller exposed in AuthorizationContext had RedirectToAction exposed, this would be easy.
A: Agree with ddc0660, you should be redirecting. Don't run u.Login(), but rather set context.Result to a RedirectResult.
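For example, something along these lines (a sketch; it assumes your MVC build exposes RedirectToRouteResult from System.Web.Routing, which avoids hard-coding the URL the way the RedirectResult kludge does):
public void OnAuthorization(AuthorizationContext context)
{
    // Route values name the target controller and action explicitly.
    context.Result = new RedirectToRouteResult(
        new RouteValueDictionary(new { controller = "User", action = "Login" }));
    context.Cancel = true;
}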
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How do you performance test JavaScript code? CPU Cycles, Memory Usage, Execution Time, etc.?
Added: Is there a quantitative way of testing performance in JavaScript besides just perception of how fast the code runs?
A: We can always measure the time taken by any function with a simple Date object.
var start = +new Date(); // log start timestamp
function1();
var end = +new Date(); // log end timestamp
var diff = end - start;
A: You could use this: http://getfirebug.com/js.html. It has a profiler for JavaScript.
A: Try jsPerf. It's an online javascript performance tool for benchmarking and comparing snippets of code. I use it all the time.
A: I was looking something similar but found this.
https://jsbench.me/
It allows a side to side comparison and you can then also share the results.
A: performance.mark (Chrome 87+)
performance.mark('initSelect - start');
initSelect();
performance.mark('initSelect - end');
performance.measure('initSelect', 'initSelect - start', 'initSelect - end');
A: Most browsers are now implementing high resolution timing in performance.now(). It's superior to new Date() for performance testing because it operates independently from the system clock.
Usage
var start = performance.now();
// code being timed...
var duration = performance.now() - start;
References
*
*https://developer.mozilla.org/en-US/docs/Web/API/Performance.now()
*http://www.w3.org/TR/hr-time/#dom-performance-now
A: Quick answer
On jQuery (more specifically on Sizzle), we use this (check out master and open speed/index.html in your browser), which in turn uses benchmark.js. This is used to performance test the library.
Long answer
If the reader doesn't know the difference between benchmark, workload and profilers, first read some performance testing foundations on the "readme 1st" section of spec.org. This is for system testing, but understanding this foundations will help JS perf testing as well. Some highlights:
What is a benchmark?
A benchmark is "a standard of measurement or evaluation" (Webster’s II Dictionary). A computer benchmark is typically a computer program that performs a strictly defined set of operations - a workload - and returns some form of result - a metric - describing how the tested computer performed. Computer benchmark metrics usually measure speed: how fast was the workload completed; or throughput: how many workload units per unit time were completed. Running the same computer benchmark on multiple computers allows a comparison to be made.
Should I benchmark my own application?
Ideally, the best comparison test for systems would be your own application with your own workload. Unfortunately, it is often impractical to get a wide base of reliable, repeatable and comparable measurements for different systems using your own application with your own workload. Problems might include generation of a good test case, confidentiality concerns, difficulty ensuring comparable conditions, time, money, or other constraints.
If not my own application, then what?
You may wish to consider using standardized benchmarks as a reference point. Ideally, a standardized benchmark will be portable, and may already have been run on the platforms that you are interested in. However, before you consider the results you need to be sure that you understand the correlation between your application/computing needs and what the benchmark is measuring. Are the benchmarks similar to the kinds of applications you run? Do the workloads have similar characteristics? Based on your answers to these questions, you can begin to see how the benchmark may approximate your reality.
Note: A standardized benchmark can serve as reference point. Nevertheless, when you are doing vendor or product selection, SPEC does not claim that any standardized benchmark can replace benchmarking your own actual application.
Performance testing JS
Ideally, the best perf test would be using your own application with your own workload switching what you need to test: different libraries, machines, etc.
If this is not feasible (and usually it is not). The first important step: define your workload. It should reflect your application's workload. In this talk, Vyacheslav Egorov talks about shitty workloads you should avoid.
Then, you could use tools like benchmark.js to assist you collect metrics, usually speed or throughput. On Sizzle, we're interested in comparing how fixes or changes affect the systemic performance of the library.
If something is performing really bad, your next step is to look for bottlenecks.
How do I find bottlenecks? Profilers
What is the best way to profile javascript execution?
A: Profilers are definitely a good way to get numbers, but in my experience, perceived performance is all that matters to the user/client. For example, we had a project with an Ext accordion that expanded to show some data and then a few nested Ext grids. Everything was actually rendering pretty fast, no single operation took a long time, there was just a lot of information being rendered all at once, so it felt slow to the user.
We 'fixed' this, not by switching to a faster component, or optimizing some method, but by rendering the data first, then rendering the grids with a setTimeout. So, the information appeared first, then the grids would pop into place a second later. Overall, it took slightly more processing time to do it that way, but to the user, the perceived performance was improved.
These days, the Chrome profiler and other tools are universally available and easy to use, as are
console.time() (mozilla-docs, chrome-docs)
console.profile() (mozilla-docs, chrome-docs)
performance.now() (mozilla-docs)
Chrome also gives you a timeline view which can show you what is killing your frame rate, where the user might be waiting, etc.
Finding documentation for all these tools is really easy, you don't need an SO answer for that. 7 years later, I'll still repeat the advice of my original answer and point out that you can have slow code run forever where a user won't notice it, and pretty fast code running where they do, and they will complain about the pretty fast code not being fast enough. Or that your request to your server API took 220ms. Or something else like that. The point remains that if you take a profiler out and go looking for work to do, you will find it, but it may not be the work your users need.
A: JSLitmus is a lightweight tool for creating ad-hoc JavaScript benchmark tests
Let's examine the performance difference between a function expression and the Function constructor:
<script src="JSLitmus.js"></script>
<script>
JSLitmus.test("new Function ... ", function() {
return new Function("for(var i=0; i<100; i++) {}");
});
JSLitmus.test("function() ...", function() {
return (function() { for(var i=0; i<100; i++) {} });
});
</script>
What I did above is create a function expression and function constructor performing same operation. The result is as follows:
FireFox Performance Result
IE Performance Result
A: I find execution time to be the best measure.
A: You could use console.profile in firebug
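Roughly like this (the function name is just a placeholder):
console.profile('myFunction');
myFunction();
console.profileEnd('myFunction');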
A: I do agree that perceived performance is really all that matters. But sometimes I just want to find out which method of doing something is faster. Sometimes the difference is HUGE and worth knowing.
You could just use javascript timers. But I typically get much more consistent results using the native Chrome (now also in Firefox and Safari) devTool methods console.time() & console.timeEnd()
Example of how I use it:
var iterations = 1000000;
console.time('Function #1');
for(var i = 0; i < iterations; i++ ){
functionOne();
};
console.timeEnd('Function #1')
console.time('Function #2');
for(var i = 0; i < iterations; i++ ){
functionTwo();
};
console.timeEnd('Function #2')
Update (4/4/2016):
Chrome Canary recently added line-level profiling in the dev tools Sources tab, which lets you see exactly how long each line took to execute!
A: I usually just test JavaScript performance, i.e. how long the script runs. jQuery Lover gave a good article link for testing JavaScript code performance, but the article only shows how to test how long your JavaScript code runs. I would also recommend reading the article called "5 tips on improving your jQuery code while working with huge data sets".
A: Here is a reusable class for time performance. Example is included in code:
/*
Help track time lapse - tells you the time difference between each "check()" and since the "start()"
*/
var TimeCapture = function () {
var start = new Date().getTime();
var last = start;
var now = start;
this.start = function () {
start = new Date().getTime();
};
this.check = function (message) {
now = (new Date().getTime());
console.log(message, 'START:', now - start, 'LAST:', now - last);
last = now;
};
};
//Example:
var time = new TimeCapture();
//begin tracking time
time.start();
//...do stuff
time.check('say something here')//look at your console for output
//..do more stuff
time.check('say something else')//look at your console for output
//..do more stuff
time.check('say something else one more time')//look at your console for output
A: Some people are suggesting specific plug-ins and/or browsers. I would not because they're only really useful for that one platform; a test run on Firefox will not translate accurately to IE7. Considering 99.999999% of sites have more than one browser visit them, you need to check performance on all the popular platforms.
My suggestion would be to keep this in the JS. Create a benchmarking page with all your JS test on and time the execution. You could even have it AJAX-post the results back to you to keep it fully automated.
Then just rinse and repeat over different platforms.
A: Here is a simple function that displays the execution time of a passed in function:
var perf = function(testName, fn) {
var startTime = new Date().getTime();
fn();
var endTime = new Date().getTime();
console.log(testName + ": " + (endTime - startTime) + "ms");
}
A: I have a small tool where I can quickly run small test-cases in the browser and immediately get the results:
JavaScript Speed Test
You can play with code and find out which technique is better in the tested browser.
A: I think JavaScript performance (time) testing is quite sufficient on its own. I found a very handy article about JavaScript performance testing here.
A: UX Profiler approaches this problem from the user's perspective. It groups all the browser events, network activity, etc. caused by some user action (click) and takes into consideration all the aspects like latency, timeouts, etc.
A: Performance testing has become something of a buzzword as of late, but that's not to say that performance testing is not an important process in QA, or even after the product has shipped. While I develop an app I use many different tools, some of them mentioned above, like the Chrome profiler, but I usually look for a SaaS or something open source that I can set up and forget about until I get that alert saying that something went belly up.
There are lots of awesome tools that will help you keep an eye on performance without having you jump through hoops just to get some basic alerts set up. Here are a few that I think are worth checking out for yourself.
*
*Sematext.com
*Datadog.com
*Uptime.com
*Smartbear.com
*Solarwinds.com
To try and paint a clearer picture, here is a little tutorial on how to set up monitoring for a react application.
A: You could use https://github.com/anywhichway/benchtest which wraps existing Mocha unit tests with performance tests.
A: The golden rule is to NOT, under ANY circumstances, lock your user's browser. After that, I usually look at execution time, followed by memory usage (unless you're doing something crazy, in which case it could be a higher priority).
A: This is a very old question, but I think we can contribute with a simple solution based on ES6 for quickly testing your code.
This is a basic bench for execution time. We use performance.now() to improve the accuracy:
/**
* Figure out how long it takes for a method to execute.
*
* @param {Function} method to test
* @param {Array} list of sets of args to pass in.
* @param {number} iterations number of executions.
* @param {T} context the context to call the method in.
* @return {number} the time it took, in milliseconds, to execute.
*/
const bench = (method, list, iterations, context) => {
let start = 0
const timer = action => {
const time = performance.now()
switch (action) {
case 'start':
start = time
return 0
case 'stop':
const elapsed = time - start
start = 0
return elapsed
default:
return time - start
}
};
const result = []
timer('start')
list = [...list]
for (let i = 0; i < iterations; i++) {
for (const args of list) {
result.push(method.apply(context, args))
}
}
const elapsed = timer('stop')
console.log(`Called method [${method.name}]
Mean: ${elapsed / iterations}
Exec. time: ${elapsed}`)
return elapsed
}
const fnc = () => {}
const isFunction = (f) => f && f instanceof Function
const isFunctionFaster = (f) => f && 'function' === typeof f
class A {}
function basicFnc(){}
async function asyncFnc(){}
const arrowFnc = ()=> {}
const arrowRFnc = ()=> 1
// Not functions
const obj = {}
const arr = []
const str = 'function'
const bol = true
const num = 1
const a = new A()
const list = [
[isFunction],
[basicFnc],
[arrowFnc],
[arrowRFnc],
[asyncFnc],
[Array],
[Date],
[Object],
[Number],
[String],
[Symbol],
[A],
[obj],
[arr],
[str],
[bol],
[num],
[a],
[null],
[undefined],
]
const e1 = bench(isFunction, list, 10000)
const e2 = bench(isFunctionFaster, list, 10000)
const rate = e2/e1
const percent = Math.abs(1 - rate)*100
console.log(`[isFunctionFaster] is ${(percent).toFixed(2)}% ${rate < 1 ? 'faster' : 'slower'} than [isFunction]`)
A: This is a good way of collecting performance information for the specific operation.
start = new Date().getTime();
for (var n = 0; n < maxCount; n++) {
/* perform the operation to be measured *//
}
elapsed = new Date().getTime() - start;
assert(true,"Measured time: " + elapsed);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "392"
}
|
Q: Difference between SSL and Kerberos authentication? I am trying to understand what's the actual difference between SSL and Kerberos authentications, and why sometimes I have both SSL traffic and Kerberos.
Or does Kerberos use SSL in any way?
Anyone could help?
Thank you!
A: SSL uses public key cryptography:
*
*You (or your browser) has a public/private keypair
*The server has a public/private key as well
*You generate a symmetric session key
*You encrypt with the server's public key and send this encrypted session key to the server.
*The server decrypts the encrypted session key with its private key.
*You and the server begin communicating using the symmetric session key (basically because symmetric keys are faster).
Kerberos does not use public key cryptography. It uses a trusted 3rd party. Here's a sketch:
*
*You both (server and client) prove your identity to a trusted 3rd party (via a secret).
*When you want to use the server, you check and see that the server is trustworthy. Meanwhile, the server checks to see that you are trustworthy. Now, mutually assured of each others' identity. You can communicate with the server.
A: In short:
Kerberos usually does not encrypt the data being transferred, but SSL and TLS do.
"there are no standard APIs for accessing these messages. As of
Windows Vista, Microsoft does not provide a mechanism for user
applications to produce KRB_PRIV or KRB_SAFE messages." - from
http://www.kerberos.org/software/appskerberos.pdf
In contrast, SSL and TLS usually do not transfer and prove your Windows domain login name to the server, but Kerberos does.
A: While Kerberos and SSL are both protocols, Kerberos is an authentication protocol, but SSL is an encryption protocol. Kerberos usually uses UDP, SSL uses (most of the time) TCP. SSL authentication is usually done by checking the server's and the client's RSA or ECDSA keys embedded in something called X.509 certificates. You're authenticated by your certificate and the corresponding key. With Kerberos, you can be authenticated by your password, or some other way. Windows uses Kerberos, for example, when used in a domain.
Keep in mind: Recent versions of SSL are called TLS for Transport Layer Security.
A: To put it simply, Kerberos is a protocol for establishing mutual identity trust, or authentication, for a client and a server, via a trusted third-party, whereas SSL ensures authentication of the server alone, and only if its public key has already been established as trustworthy via another channel. Both provide secure communication between the server and client.
More formally (but without getting into mathematical proofs), given a client C, server S, and a third-party T which both C and S trust:
After Kerbeos authentication, it is established that:
*
*C believes S is who it intended to contact
*S believes C is who it claims to be
*C believes that it has a secure connection to S
*C believes that S believes it has a secure connection to C
*S believes that it has a secure connection to C
*S believes that C believes it has a secure connection to S
SSL, on the other hand, only establishes that:
*
*C believes S is who it intended to contact
*C believes it has a secure connection to S
*S believes it has a secure connection to C
Clearly, Kerberos establishes a stronger, more complete trust relationship.
Additionally, to establish the identity of S over SSL, C needs prior knowledge about S, or an external way to confirm this trust. For most people's everyday use, this comes in the form of Root Certificates, and caching of S's certificate for cross-referencing in the future.
Without this prior knowledge, SSL is susceptible to man-in-the-middle attack, where a third-party is able to pretend to be S to C by relaying communication between them using 2 separate secure channels to C and S. To compromise a Kerberos authentication, the eavesdropper must masquerade as T to both S and C. Note, however, that the set of trusts is still unbroken according to the goal of Kerberos, as the end-state is still correct according to the precondition "C and S trusts T".
Finally, as it has been pointed out in a comment, Kerberos can be and has been extended to use SSL-like mechanism for establishing the initial secure connection between C and T.
A: A short answer: SSL and Kerberos both use encryption, but SSL uses a key that is unchanged during a session, while Kerberos uses several keys for encrypting the communication between a client and a server.
In SSL, encryption is dealt with directly by the two ends of communication while in Kerberos, the encryption key is provided by a third party - some kind of intermediate - between the client and the server.
A: From http://web.mit.edu/kerberos/:
Kerberos was created by MIT as a solution to these network security problems. The Kerberos protocol uses strong cryptography so that a client can prove its identity to a server (and vice versa) across an insecure network connection. After a client and server has used Kerberos to prove their identity, they can also encrypt all of their communications to assure privacy and data integrity as they go about their business.
Meanwhile:
SSL is used for establishing server<-->server authentication via public key encryption.
A: From https://www.eldos.com/security/articles/7240.php?page=all,
Kerberos and TLS are not the things to compare. They have different objectives and different methods. In the beginning of our article we mentioned the frequently asked questions like "which is better" and "what to choose". The former is not a question at all: nothing is better and everything is good if you use it in the right way. The latter question is worth serious consideration: what to choose depends on what you have and what you want.
If you want to secure your communications in a sense that nobody can read it or tamper it, perhaps the right choice is to use TLS or some other protocols based on it. A good example of TLS usage for securing World Wide Web traffic carried by HTTP is to use HTTPS. For secure file transferring you may use FTPS, and take into account that SMTP (though it stands for a “simple” mail transfer protocol, not “secure”) is also may be protected with TLS.
On the other hand, if you need to manage user access to services, you may want to use Kerberos. Imagine, for example, that you have several servers like Web server, FTP, SMTP and SQL servers, and optionally something else, everything on one host. Some clients are allowed to use SMTP and HTTP, but not allowed to use FTP, others may use FTP but don’t have access to your databases. This is exactly the situation when Kerberos is coming to use, you just have to describe user rights and your administrative policy in Authentication Server.
A: SSL authentication uses certificates to verify yourself to the server, whereas Kerberos works entirely differently.
SSL certificates can be imported and configured manually on both the client and the host.
Kerberos, by contrast, is authentication where no passwords are transmitted over the network. The Kerberos KDC server doesn't need to communicate with any service or host to verify the client. The client uses a principal stored in Kerberos to communicate with the Kerberos server. In return, the Kerberos server provides a ticket using the keytab of the other server, stored beforehand. On the other server, the client presents the ticket, and the service matches the ticket against its own keytab to verify the client.
A: Simply put,
SSL encrypts the data so that it cannot be understood by someone trying to steal it on the network.
Kerberos is a network authentication protocol which lets a client authenticate to a server without sharing any password/token at the time of the request.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "56"
}
|
Q: How to stream binary data to standard output in .NET? I'm trying to stream binary data to the standard output in .NET. However, you can only write chars using the Console class. I want to use it with redirection. Is there a way to do this?
A: You can access the output stream using Console.OpenStandardOutput.
static void Main(string[] args) {
MemoryStream data = new MemoryStream(Encoding.UTF8.GetBytes("Some data"));
using (Stream console = Console.OpenStandardOutput()) {
data.CopyTo(console);
}
}
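Note that Stream.CopyTo only exists from .NET 4.0 onwards; on earlier framework versions you can copy manually with a small buffer, for example:
byte[] buffer = new byte[4096];
int read;
while ((read = data.Read(buffer, 0, buffer.Length)) > 0)
{
    console.Write(buffer, 0, read);
}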
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Recommendations for a google finance-like interactive chart control I need some sort of interactive chart control for my .NET-based web app. I have some wide XY charts, and the user should be able to interactively scroll and zoom into a specific window on the x axis. Something that acts similar to the google finance control would be nice, but without the need for the date labels or the news event annotations. Also, I'd prefer to avoid Flash, if that's even possible. Can someone please give some recommendations of something that might come close?
EDIT: the "real" google timeline visualization is for date-based data. I just have numeric data. I tried to use that control for non-date data, but it seems to always want to show a date and demands that the first data column actually be a date.
A: This is the one you are looking for. An almost exact match for the Google Flash graph.
http://www.humblesoftware.com/finance/index
A: Have a look at the the Google vizualisation API, I guess this is what Google uses on Google Finance. I had a look at a few other chart API's, but this one is free and beautiful.
A: You could try out Flotr, a nice javascript library. It has pretty decent mouse controls and is free to use.
A: How about using the "real" google finance tool from the Google visualizations project?
http://code.google.com/apis/visualization/documentation/gallery/annotatedtimeline.html
A: The Zoom Scrollbar sample on the SoftwareFX site looks like what you are looking for:
http://demo.softwarefx.com/chartfx/aspnet/ajaxsamples/
A: Check out amCharts. There's XY Chart and Stock charts. Sure these are Flash based charts but I don't think you can have anything this nice and interactive without Flash or Silverlight these days.
A: jqplot is impressive and improving every day
A: Why not use this clone:
http://code.google.com/p/time-series-graph/
A: I wanted to respond to knb's comment about the Google Finance chart, but it seems there's no reply button. Anyhow, according to this:
http://code.google.com/apis/visualization/documentation/gallery/annotatedtimeline.html#Data_Policy
No data is sent to any server so it doesn't seem like anything is fetched by Google. Anyone have any comment as to this being the case or not? Is it better to err on the side of safety and not use it if concerned about Google having your data?
A: D3 is a good library for plotting very rich UI charts. One can use D3 for plotting Google Finance-like interactive charts.
Find more examples here
A: I've recently used two generic libraries with my .NET work - they both have many different charttypes which include the zooming and scrolling you're after: one is free (ZedGraph) the other is not (Dundas).
I'd happily recommend them both. Dundas is better - but it isn't cheap. Zed is open source so can be quite informative to just read the code.
A: HighCharts (comercial licenses only) have a pure JS finance like chart the looks good. It is currently in Beta, though.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: Best way to add tests to an existing Rails project? I have a Rails project which I neglected to build tests for (for shame!) and the code base has gotten pretty large. A friend of mine said that RSpec was a pain to use unless you use it from the beginning. Is this true? What would make him say that?
So, considering the available tests suites and the fact that the code base is already there, what would be my best course of action for getting this thing testable? Is it really that much different than doing it from the beginning?
A: Maybe start with the models? They should be testable in isolation, which ought to make them the lowest-hanging fruit.
Then pick a model and start writing tests that say what it does. As you go along, think about other ways to test the code - are there edge cases that maybe you're not sure about? Write the tests and see how the model behaves. As you develop the tests, you may see areas in the code that aren't as clean and de-duplicated (DRY) as they might be. Now you have tests, you can refactor the code, since you know that you're not affecting behaviour. Try not to start improving design until you have tests in place - that way lies madness.
Once you have the models pinned down, move up.
That's one way. Alternatives might be starting with views or controllers, but you may find it easier to start with end-to-end transaction tests and work your way into smaller and smaller pieces as you go along.
A: This question came up recently on the RSpec mailing list, and the advice we generally gave was:
*
*Don't bother trying to retro-fit specs to existing, working, code unless you're going to change it - it's exhausting and, unless the code needs to be changed, rather pointless.
*Start writing specs for any changes you make from now on. Bug fixes are an especially good opportunity for this.
*Try to train yourself into the discipline that before you touch the code, first of all write a failing example (=spec) to drive out the change.
You may find that the design of code which wasn't driven out by code examples or unit tests makes it awkward to write tests or specs for. This is perhaps what your friend was alluding to. You will almost certainly need to learn a few key refactoring techniques to break up dependencies so that you can exercise each class in isolation from your specs. Michael Feathers' excellent book, Working Effectively With Legacy Code has some great material to help you learn this delicate skill.
I'd also encourage you to use the built-in spec:rcov rake task to generate code coverage stats. It's extremely rewarding to watch these numbers go up as you start to get your codebase under test.
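For instance, a first failing spec for a bug fix can be as small as this (model and attribute names here are hypothetical):
# spec/models/user_spec.rb
describe User do
  it "rejects a blank email" do
    user = User.new(:email => "")
    user.should_not be_valid
  end
end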
A: The accepted answer is good advice - although not practical in some instances. I recently was faced with this problem on a few apps of mine because I NEEDED tests for existing code. There simply was no other way around it.
I started off doing all unit tests, then moved onto functionals.
Get in the habit of writing failing tests for any new code, or whenever you're going to change a part of the system. I've found this has helped me gain more knowledge of testing as I go.
Use rcov to measure your progress.
Good luck!
A: Writing tests for existing code may reveal bugs in your code. These tests will force you to look at the existing code so you can see what test you need to write in order to get it to pass and you may see some code that could possibly be written better, or is now useless.
Another tip is to write a test when you encounter a bug so it should never re-occur, this is called regressional testing.
A: Retrofitting specs is not inevitably a bad idea. You go from working code to working code with known properties which allows you to understand whether any future change breaks anything. At the moment if you need to make a change how can you know what it will affect?
What people mean when they say that it is hard to add tests/specs to exisitng code is that code which is hard to test is often highly coupled which makes it hard to write low-level isolated tests.
One idea would be to start with full-stack tests using something like the RSpec story runner. You can then work from the 'outside in' isolating what you can in low-level isolated tests and gradually untangle the harder code bit by bit.
A: You can start writing "characterization tests". With this, you might want to try out the pretentious gem here:
It is still a work in progress though.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: Should domain objects and simple JavaBeans be unit tested? Should simple JavaBeans that have only simple getters and setters be unit tested??
What about Beans with some logic in getters and setters?
A: If it's not worth testing, it's not worth writing.
That doesn't always mean you should write tests. Sometimes it means you should delete code. Do you need these beans? Do they actually do anything important? If you need them, you should write tests. If not, delete the code and live a happier life knowing you have less to maintain.
A: I think that is one of those questions that everybody asks themselves.
But consider this: the inner logic of the accessors is dead simple now, but it may change in the future, and, far more importantly, you will feel free to change it to whatever you want if you have tests for the methods. So, you get freedom and confidence by means of a couple of test cases. Sounds like a good deal, huh?
A: Another rule of thumb (similar to what others have said) is "test anything that could possibly break". To me that excludes auto-generated getter and setters, but includes hand-written ones that contain some logic.
A: How can you safely refactor untested code? What will happen when your bean POJO changes if you don't have tests? Are you creating an Anemic Domain Model?
A: You should not write tests which:
*
*Test the language or the IDE (i.e. automatically generated getters and setters)
*Add no value to your test harness and kill your enthusiasm for Unit Testing
The same applies for .NET objects which only have properties (sometimes called 'Info' objects).
In an ideal world you would have 100% test coverage, but in practice this is not going to happen. So spend the client's money where it will add the most benefit i.e. writing tests for classes with complex state and behaviour.
If your JavaBean becomes more interesting you can of course add a test case later. One of the common problems associated with Unit Testing / TDD is the mistaken belief that everything has to be perfect first time.
A: You only have to test the stuff that you want to work correctly.
(Sorry to whoever I stole that quote from)
A: You should test things that have some meaning. Getters and setters commonly do not contain any logic and are just used in Java for the lack of properties. I think that testing them is just as stupid as checking that Java does return a value every time you evaluate "a.x".
If the accessor does have logic, it's up to you to decide the threshold. If your team is lazyish, it's best to test all logic. If it is more disciplined, it's better to find a ratio that doesn't make you write too many boilerplate tests.
A: If it's just a getter/setter without changing anything about the values, I'd say there's no need for testing. If it does do some logic, a few simple unit tests will provide some security.
A: Like Maxim already said: having tests will not add extra functionality to your application, but will enable you to make changes with more confidence. To determine which classes/methods should be unit tested, I always ask myself two questions:
*
*is this piece of code important in relation to the overall functionality?
*will this piece of code probably change over the lifetime of this application?
If both questions are answered with yes, a unit test is necessary.
A: In my opinion, the purpose of writing unit test is testing the business logic of the unit in test. Therefore, if there is no business logic (for example, when a getter simply returns a value or a setter sets it) then there is no point in writing a test. If, however, there is some logic (getter changes the data in some way before returning it) then yes, you should have a unit test.
As a general rule, I believe one should not write tests for beans that do not contain any business logic.
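For a concrete (hypothetical) example sketched with JUnit 4: a generated setter needs no test, but the getter below normalizes the stored value, so it earns one:
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PersonBeanTest {

    // Hypothetical bean: the getter contains logic, so it is worth testing.
    public static class Person {
        private String name;
        public void setName(String name) { this.name = name; }
        public String getName() {
            // Trims and capitalizes before returning -- this is real logic.
            String trimmed = name == null ? "" : name.trim();
            return trimmed.isEmpty()
                    ? trimmed
                    : Character.toUpperCase(trimmed.charAt(0)) + trimmed.substring(1);
        }
    }

    @Test
    public void getterNormalizesTheStoredValue() {
        Person p = new Person();
        p.setName("  alice ");
        assertEquals("Alice", p.getName());
    }
}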
A: If it has an external interface and contains code that a person wrote (as opposed to being auto-generated by the IDE or compiler), then it should definitely be tested.
If one or both of those conditions doesn't hold, then it's something of a grey area and comes down to more of a "belt and suspenders"-type question of just how careful you feel the need to be.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: Is it a problem if multiple different accepting sockets use the same OpenSSL context? Is it OK if the same OpenSSL context is used by several different accepting sockets?
In particular I'm using the same boost::asio::ssl::context with 2 different listening sockets.
A: Yep, SSL_CTX--which I believe is the underlying data structure--is just a global data structure used by your program. From ssl(3):
SSL_CTX (SSL Context)
That's the global context structure which is created by a server or client once per program life-time and which holds mainly default values for the SSL structures which are later created for the connections.
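As a minimal sketch of what that looks like in code (written against a current Asio API; older Boost versions pass an io_service to the context constructor, and the ports and certificate file below are placeholders):
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>

using boost::asio::ip::tcp;
namespace ssl = boost::asio::ssl;

int main() {
    boost::asio::io_context io;

    // One context, configured once for the life of the program...
    ssl::context ctx(ssl::context::sslv23);
    ctx.use_certificate_chain_file("server.pem");
    ctx.use_private_key_file("server.pem", ssl::context::pem);

    // ...shared by two independent listening sockets.
    tcp::acceptor a1(io, tcp::endpoint(tcp::v4(), 4433));
    tcp::acceptor a2(io, tcp::endpoint(tcp::v4(), 4434));

    // Each accepted connection gets its own SSL stream built from the shared context.
    ssl::stream<tcp::socket> s1(io, ctx);
    ssl::stream<tcp::socket> s2(io, ctx);
    a1.accept(s1.next_layer());
    a2.accept(s2.next_layer());
    s1.handshake(ssl::stream_base::server);
    s2.handshake(ssl::stream_base::server);
}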
A: It should be OK.
For example a typical RFC4217 FTPS server will use the same SSL context for the control socket and all data sockets within that session.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Miktex on Windows Vista I have some problems with MiKTeX installed on Windows Vista Business SP1/32 bit. I use MiKTeX 2.7, Ghostscript, and TeXnicCenter 1 beta 7.50. When I compile a document with the following profiles: Latex=>DVI, Latex=>PDF everything works fine; the system crashes when I compile with the profiles Latex=>PS and Latex=>PS=>PDF. The error is reported in a window that states: "Dvi-to-Postscript converter has stopped working". What can I do? I need Latex=>PS=>PDF to include my images in the final PDF.
Thanks in advance,
Yet another LaTeX user
A: If everything you need is images, you could still compile directly to PDF. You only need to have an image in PNG or JPG format, and use the following code:
%in the document preamble
\usepackage{graphicx}
%in the document, in the place where you want to put your image
\includegraphics{image_filename_without_extension}
When the image is a PNG or JPG file (there are some more, I don't remember which ones ATM), you can compile the file with pdfLaTeX, but not with the normal LaTeX (i.e. you can produce a PDF, but not DVI or PS).
Of course normally, if everything works fine, it's nice to have one copy of the image in EPS, and another in, say, PNG -- this way you can compile easily both to PDF, and to PS.
Hope that helps.
A: Thanks for the reply. I have solved the problem: the DVI converter crashed because I had installed MiKTeX with User Account Control enabled. I disabled it, reinstalled, and now it's working (with UAC still disabled).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do you fix the flickering that occurs when you use slideToggle in jQuery? I have a simple unordered list that I want to show and hide on click using the jQuery slideUp and slideDown effect. Everything seems to work fine, however in IE6 the list will slide up, flicker for a split second, and then disappear.
Does anyone know of a fix for this?
Thanks!
A: $(document).ready(function() {
// Fix background image caching problem
if (jQuery.browser.msie) {
try {
document.execCommand("BackgroundImageCache", false, true);
} catch(err) {}
}
});
Apparently.
A: Apologies for the extra comment (I can't upvote or comment on Pavel's answer), but adding a DOCTYPE fixed this issue for me, and the slideUp/Down/Toggle effects now work correctly in IE7.
See A List Apart for more information on DOCTYPES, or you can try specifying the fairly lenient 4/Transitional:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
A: Oli's fix only seems to apply to flickering backgrounds, which is not the case here.
Ryan McGeary's advice is solid, except for when the client/your boss absolutely demand that IE6 not act like it has fetal alcohol syndrome.
I found the solution here: Slide effect bugs in IE 6 and 7 since version 1.1.3
Added a doctype declaration to the top of the file (why wasn't it there before? who knows!) and the flicker vanished, never to be seen again.
A: From what I've heard and tried (including the other suggestions here) there are still situations where the flicker will continue to be noticeable, especially when you don't have the choice of easily leaving quirks mode.
In my case I had to stay on quirks mode for now and the other suggestions still didn't fix the problem for me. I ended up adding a little workaround until we can finally leave quirks mode:
//Start the slideUp effect lasting 500ms
$('#element').slideUp(500);
//Abort the effect just before it finishes and force hide()
//I had to play with the timeout interval until I found one that
// looked exactly right. 400ms worked for me.
setTimeout(function() {
$('#element').stop(true, true).hide();
}, 400);
A: This code does not depend on the browser (no browser detection), works great, and reproduces the behaviour of the .slideUp method:
$("#element").animate({
height: 1, // Avoiding sliding to 0px (flash on IE)
paddingTop: "hide",
paddingBottom: "hide"
})
// Then hide
.animate({display:"hide"},{queue:true});
A: Dunno if someone will read this answer, but here is a workaround for those who, like me, can't add a document type to the page (thank you Sharepoint 2007 default templates) without spending a few days on a complete template revision.
On a DOCTYPE-less document, the flickering occurs when an element height reaches 0. So the workaround I've found is to animate my elements to an height of 1px, rather than 0.
Like this:
$(".slider").click(function (e) {
$(this).animate({"height" : "1px"});
});
Hope it will help.
N.B: don't forget that in order to slideDown the element, you have to previously store its initial height somehow (node property, rel attribute hack, etc).
A: Just let IE6 flicker. I don't think it's worth it to invest time in a dying browser when your base functionality works well enough. If you're worried about flickering for accessibility reasons, just sniff for IE6 and replace the animation with a generic show() and hide() instead. I recommend avoiding complicated code for edge cases that don't matter.
A: I'm working with a carousel that has marked-up copy over some background slides. The slide transition is a fade. Everything's fine so far.
But some parts of the copy fade-in after the slide loads. And then fade-out right before the slide transition. This copy, an unordered list of links (UL > LI*2 > A), faded-in over the slide background. This, too, is fine in every browser except IE. IE had a flickering background on the UL.
What was happening is that there were two simultaneous fade-Ins running: the background image on the slide & the UL. I used sergio's prototyping setTimeout function to run the UL fadeIn() after the slide had completed loading. Then, I called another setTimeout to make the slide transition right after the UL fadeOut().
setTimeout is your friend when combating IE flicker.
A: We had the same problem today. Not only in IE6, but also in IE8! I've fixed it by hiding the div somewhat earlier, by using a timeout:
var pane = $('.ColorPane');
var speed = 500;
window.setTimeout(function() { pane.css('display', 'none'); }, speed - 100);
pane.slideUp(speed);
Hope it helps some of you out there.
A: I posted a quick fix solution over at http://blog.clintonbeattie.com/how-to-solve-the-jquery-flickering-content-problem/
In short, add overflow:hidden to the containing element that you are sliding in/out. Hope this helps!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: How to configure asp.net process to run under a domain account? I would like to configure asp.net process to run under an account with domain credentials.
My requirement is to access some files on a network share.
What are the steps for this? Is there any built-in account I can use?
A: Check this article from MSDN.
How To: Create a Service Account for an ASP.NET 2.0 Application
This How To shows you how to create and configure a custom least-privileged service account to run an ASP.NET Web application. By default, an ASP.NET application on Microsoft Windows Server 2003 and IIS 6.0 runs using the built-in Network Service account. In production environments, you usually run your application using a custom service account. By using a custom service account, you can audit and authorize your application separately from others, and your application is protected from any changes made to the privileges or permissions associated with the Network Service account. To use a custom service account, you must configure the account by running the Aspnet_regiis.exe utility with the -ga switch, and then configure your application to run in a custom application pool that uses the custom account's identity.
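Condensed, the key steps look roughly like this (the account name is a placeholder, and the framework directory varies by version):
REM Grant the custom domain account the permissions ASP.NET needs:
%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -ga MYDOMAIN\MyWebAppAccount

REM Then, in IIS, create a new application pool, set its identity to
REM MYDOMAIN\MyWebAppAccount, and assign your application to that pool.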
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111410",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: "Information Not Found" page in Visual Studio 2008, VB.NET Express Edition I'm experimenting with VS 2008 Express edition and when I hit f1 on a keyword or class name I seem to get the Information Not Found page more than 90% of the time.
Is this a configuration problem? Is it because this is the "free-as-in-beer" Express edition? Did Microsoft move their online documentation since the version I downloaded?
I'm kind of amazed that here's a demo of their flagship development product which seems to be missing almost any kind of integrated reference documentation. How is the integrated help in Visual Studio meant to work?
A: Keep in mind that the full MSDN Library for Visual Studio is massive - at the current time it is 2GB - so for this reason it's offered as a separate download.
There appears to be an abridged version which is 300MB, although I would suggest that you'd see the "Information Not Available" message every now and then with that version installed.
You can get them both for free from MSDN.
Personally, I have the full version installed, but if you don't want to download MSDN you can turn online help on by doing the following:
In your Visual Studio product, select Tools -> Options from the application menu. Then select Environment -> Help -> Online in the Options dialog. Under "When loading Help content" select "Try online first, then local" and click OK.
A: Did you install the MSDN Library with the Express edition? You need to :)
And you can also download the full version of the MSDN Documentation if you just want EVERYTHING.
The documentation is absolutely free.
Which version of Express do you have? Then I can point you to a link to the full documentation. For the Express one (smaller, but it probably has what you need), you need to run the installer again and add the MSDN Library.
A: Run the Microsoft Visual Studio 2008 Documentation with the 'Run as administrator' option (Visual Studio should be closed), then check all the settings. Close it and try again within Visual Studio; it should work now.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Strange call stack, could it be problem in asio's usage of openssl? I have this strange call stack and I am stumped to understand why.
It seems to me that Asio calls OpenSSL's read and then gets a negative return value (-37).
Asio seems to then try to use it inside the memcpy function.
The function that causes this call stack is used hunderds of thousands of times without this error.
It happens only rarely, about once a week.
ulRead = (boost::asio::read(spCon->socket(), boost::asio::buffer(_requestHeader, _requestHeader.size()), boost::asio::transfer_at_least(_requestHeader.size()), error_));
Note that the request header's size is always exactly 3 bytes.
Could anyone shed some light on possible reasons?
Note: I'm using boost asio 1.36
Here is the crashing call stack; the crash happens in memcpy because of the huge "count":
A: A quick look at evp_lib.c shows that it tries to pull a length from the cipher context, and in your case gets a Very Bad Value(tm). It then uses this value to copy a string (which does the memcpy). My guess is something is trashing your cipher, be it a thread safety problem, or a reading more bytes into a buffer than allowed.
Relevant source:
int EVP_CIPHER_set_asn1_iv(EVP_CIPHER_CTX *c, ASN1_TYPE *type)
{
int i=0,j;
if (type != NULL)
{
j=EVP_CIPHER_CTX_iv_length(c);
OPENSSL_assert(j <= sizeof c->iv);
i=ASN1_TYPE_set_octetstring(type,c->oiv,j);
}
return(i);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: What’s the difference between Response.Write() and Response.Output.Write()? What’s the difference between Response.Write() and Response.Output.Write()?
A: There is effectively no difference, although Response.Output.Write() provides more overloads which can allow you to pass different parameters. Scott Hansleman covers it in depth.
A: They both write to the output stream using a TextWriter (not directly to a Stream), however using HttpContext.Response.Output.Write offers more overloads (17 in Framework 2.0, including formatting options) than HttpContext.Response.Write (only 4 with no formatting options).
The HttpResponse type does not allow direct 'set' access to its output stream.
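For illustration (the variable name is made up, and this assumes you are inside a page or handler where Response is in scope):
// Response.Write: plain overloads only.
Response.Write("Hello, world");

// Response.Output is a TextWriter, so composite formatting comes for free:
Response.Output.Write("Hello {0}, it is {1:d}", userName, DateTime.Now);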
A: Nothing really.
But. Response.Write takes the stream in the Response.Output property. You could set another Output stream, and in that way instead of writing back to the client, maybe write to a file or something crazy. So thats there relation.
A: Response.Output.Write(): It is used to display any type of data (int, date, string, etc.), i.e. it can write formatted output.
Response.Write(): It is used to display string data only, i.e. it can't write formatted output directly.
To get formatted output from Response.Write() you have to format the string yourself, e.g.:
Response.Write(String.Format("{0:d}", DateTime.Now));
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: How do I set the thickness of a line in VB.NET In VB.NET I'm drawing an ellipse using some code like this.
aPen = New Pen(Color.Black)
g.DrawEllipse(aPen, n.boxLeft, n.boxTop, n.getWidth(), n.getHeight)
But I want to set the thickness of the line. How do I do it? Is it a property of the Pen or an argument to the DrawEllipse method?
(NB: For some reason, the help in Visual Studio is failing me, so I've got to hit the web anyway. Thought I'd try here first.)
A: Use the pen's Width property.
aPen.Width = 10.0F
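Alternatively, the Pen constructor has an overload that takes the width directly, so you could write:
' The Pen(Color, Single) overload sets the thickness up front
aPen = New Pen(Color.Black, 3.0F)
g.DrawEllipse(aPen, n.boxLeft, n.boxTop, n.getWidth(), n.getHeight)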
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Did you apply computational complexity theory in real life? I'm taking a course in computational complexity and have so far had an impression that it won't be of much help to a developer.
I might be wrong but if you have gone down this path before, could you please provide an example of how the complexity theory helped you in your work? Tons of thanks.
A: For most types of programming work the theory part and proofs may not be useful in themselves but what they're doing is try to give you the intuition of being able to immediately say "this algorithm is O(n^2) so we can't run it on these one million data points". Even in the most elementary processing of large amounts of data you'll be running into this.
Off the top of my head, complexity theory has been important to me in business data processing, GIS, graphics programming and understanding algorithms in general. It's one of the most useful lessons you can get from CS studies compared to what you'd generally self-study otherwise.
A: O(1): Plain code without loops. Just flows through. Lookups in a lookup table are O(1), too.
O(log(n)): efficiently optimized algorithms. Example: binary tree algorithms and binary search. Usually doesn't hurt. You're lucky if you have such an algorithm at hand.
O(n): a single loop over data. Hurts for very large n.
O(n*log(n)): an algorithm that does some sort of divide and conquer strategy. Hurts for large n. Typical example: merge sort
O(n*n): a nested loop of some sort. Hurts even with small n. Common with naive matrix calculations. You want to avoid this sort of algorithm if you can.
O(n^x) for x>2: a wicked construction with multiple nested loops. Hurts for very small n.
O(x^n), O(n!) and worse: freaky (and often recursive) algorithms you don't want to have in production code except in very controlled cases, for very small n and if there really is no better alternative. Computation time may explode with n=n+1.
Moving your algorithm down from a higher complexity class can make your algorithm fly. Think of Fourier transformation which has an O(n*n) algorithm that was unusable with 1960s hardware except in rare cases. Then Cooley and Tukey made some clever complexity reductions by re-using already calculated values. That led to the widespread introduction of FFT into signal processing. And in the end it's also why Steve Jobs made a fortune with the iPod.
Simple example: Naive C programmers write this sort of loop:
for (int cnt=0; cnt < strlen(s) ; cnt++) {
/* some code */
}
That's an O(n*n) algorithm because of the implementation of strlen(). Nesting loops leads to multiplication of complexities inside the big-O. O(n) inside O(n) gives O(n*n). O(n^3) inside O(n) gives O(n^4). In the example, precalculating the string length will immediately turn the loop into O(n). Joel has also written about this.
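For comparison, the fixed loop computes the length once, bringing the whole thing back to O(n):
size_t len = strlen(s); /* computed once, not on every iteration */
for (size_t cnt = 0; cnt < len; cnt++) {
    /* some code */
}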
Yet the complexity class is not everything. You have to keep an eye on the size of n. Reworking an O(n*log(n)) algorithm to O(n) won't help if the number of (now linear) instructions grows massively due to the reworking. And if n is small anyway, optimizing won't give much bang, either.
A: Computers are not smart; they will do whatever you instruct them to do. Compilers can optimize code a bit for you, but they can't optimize algorithms. The human brain works differently, and that is why you need to understand Big O. Consider calculating Fibonacci numbers. We all know F(n) = F(n-1) + F(n-2), and starting with 1,1 you can easily calculate the following numbers without much effort, in linear time. But if you tell the computer to calculate it with that formula (recursively), it wouldn't be linear (at least, in imperative languages). Somehow, our brain optimized the algorithm, but the compiler can't do this. So, you have to work on the algorithm to make it better.
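To make that concrete, here is a sketch in C of the naive translation of the formula next to the hand-optimized loop:
/* Exponential time: recomputes the same subproblems over and over. */
unsigned long fib_naive(unsigned n) {
    return n < 2 ? n : fib_naive(n - 1) + fib_naive(n - 2);
}

/* Linear time: the "brain-optimized" version, carrying the last two values. */
unsigned long fib_linear(unsigned n) {
    unsigned long a = 0, b = 1;
    for (unsigned i = 0; i < n; i++) {
        unsigned long next = a + b;
        a = b;
        b = next;
    }
    return a;
}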
And then, you need training, to spot brain optimizations which look so obvious, to see when code might be ineffective, to know patterns for bad and good algorithms (in terms of computational complexity) and so on. Basically, those courses serve several things:
*
*understand executional patterns and data structures and what effect they have on the time your program needs to finish;
*train your mind to spot potential problems in algorithm, when it could be inefficient on large data sets. Or understand the results of profiling;
*learn well-known ways to improve algorithms by reducing their computational complexity;
*prepare yourself to pass an interview in the cool company :)
A: It's extremely important. If you don't understand how to estimate and figure out how long your algorithms will take to run, then you will end up writing some pretty slow code. I think about compuational complexity all the time when writing algorithms. It's something that should always be on your mind when programming.
This is especially true in many cases because while your app may work fine on your desktop computer with a small test data set, it's important to understand how quickly your app will respond once you go live with it, and there are hundreds of thousands of people using it.
A: Yes, I frequently use Big-O notation, or rather, I use the thought processes behind it, not the notation itself. Largely because so few developers in the organization(s) I frequent understand it. I don't mean to be disrespectful to those people, but in my experience, knowledge of this stuff is one of those things that "sorts the men from the boys".
I wonder if this is one of those questions that can only receive "yes" answers? It strikes me that the set of people that understand computational complexity is roughly equivalent to the set of people that think it's important. So, anyone that might answer no perhaps doesn't understand the question and therefore would skip on to the next question rather than pause to respond. Just a thought ;-)
A: While it is true that one can get really far in software development without the slightest understanding of algorithmic complexity. I find I use my knowledge of complexity all the time; though, at this point it is often without realizing it. The two things that learning about complexity gives you as a software developer are a way to compare non-similar algorithms that do the same thing (sorting algorithms are the classic example, but most people don't actually write their own sorts). The more useful thing that it gives you is a way to quickly describe an algorithm.
For example, consider SQL. SQL is used every day by a very large number of programmers. If you were to see the following query, your understanding of the query is very different if you've studied complexity.
SELECT User.*, COUNT(Order.*) OrderCount FROM User Join Order ON User.UserId = Order.UserId
If you have studied complexity, then you would understand if someone said it was O(n^2) for a certain DBMS. Without complexity theory, the person would have to explain about table scans and such. If we add an index to the Order table
CREATE INDEX ORDER_USERID ON Order(UserId)
Then the above query might be O(n log n), which would make a huge difference for a large DB, but for a small one, it is nothing at all.
One might argue that complexity theory is not needed to understand how databases work, and they would be correct, but complexity theory gives a language for thinking about and talking about algorithms working on data.
A: There are points in time when you will face problems that require real thought. There are many real-world problems that require manipulation of large sets of data...
Examples are:
*
*Maps application... like Google Maps - how would you process the road line data worldwide and draw them? and you need to draw them fast!
*Logistics application... think traveling salesman on steroids
*Data mining... all big enterprises require one; how would you mine a database containing 100 tables and 10m+ rows and come up with useful results before the trends get outdated?
Taking a course in computational complexity will help you in analyzing and choosing/creating algorithms that are efficient for such scenarios.
Believe me, something as simple as reducing a coefficient, say from T(3n) down to T(2n), can make a HUGE difference when the "n" is measured in days if not months.
A: There's lots of good advice here, and I'm sure most programmers have used their complexity knowledge once in a while.
However I should say understanding computational complexity is of extreme importance in the field of Games! Yes you heard it, that "useless" stuff is the kind of stuff game programming lives on.
I'd bet very few professionals probably care about the Big-O as much as game programmers.
A: I use complexity calculations regularly, largely because I work in the geospatial domain with very large datasets, e.g. processes involving millions and occasionally billions of cartesian coordinates. Once you start hitting multi-dimensional problems, complexity can be a real issue, as greedy algorithms that would be O(n) in one dimension suddenly hop to O(n^3) in three dimensions and it doesn't take much data to create a serious bottleneck. As I mentioned in a similar post, you also see big O notation becoming cumbersome when you start dealing with groups of complex objects of varying size. The order of complexity can also be very data dependent, with typical cases performing much better than general cases for well designed ad hoc algorithms.
It is also worth testing your algorithms under a profiler to see if what you have designed is what you have achieved. I find most bottlenecks are resolved much better with algorithm tweaking than improved processor speed for all the obvious reasons.
For more reading on general algorithms and their complexities I found Sedgewick's work both informative and accessible. For spatial algorithms, O'Rourke's book on computational geometry is excellent.
A: In your normal life, not near a computer you should apply concepts of complexity and parallel processing. This will allow you to be more efficient. Cache coherency. That sort of thing.
A: A good example could be when your boss tells you to do some program and you can demonstrate by using the computational complexity theory that what your boss is asking you to do is not possible.
A: Yes, my knowledge of sorting algorithms came in handy one day when I had to sort a stack of student exams. I used merge sort (but not quicksort or heapsort). When programming, I just employ whatever sorting routine the library offers. (I haven't had to sort a really large amount of data yet.)
I do use complexity theory in programming all the time, mostly in deciding which data structures to use, but also in when deciding whether or when to sort things, and for many other decisions.
A: 'yes' and 'no'
yes) I frequently use big O-notation when developing and implementing algorithms.
E.g. when you have to handle 10^3 items and the complexity of the first algorithm is O(n log(n)) and of the second one O(n^3), you can simply say that the first algorithm is almost real-time while the second requires considerable computation.
Sometimes knowledge of NP complexity classes can be useful. It can help you to realize that you can stop trying to invent an efficient algorithm when some NP-complete problem can be reduced to the problem you are thinking about.
no) What I have described above is a small part of complexity theory. As a result it is difficult to say that I use it; I use only a minor part of it.
I should admit that there are many software development projects which don't touch algorithm development, or don't use algorithms in a sophisticated way. In such cases complexity theory is useless. Ordinary users of algorithms frequently operate using words like 'fast' and 'slow', 'x seconds', etc.
A: @Martin: Can you please elaborate on the thought processes behind it?
it might not be so explicit as sitting down and working out the Big-O notation for a solution, but it creates an awareness of the problem - and that steers you towards looking for a more efficient answer and away from problems in approaches you might take. e.g. O(n*n) versus something faster e.g. searching for words stored in a list versus stored in a trie (contrived example)
I find that it makes a difference with what data structures I'll choose to use, and how I'll work on large numbers of records.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
}
|
Q: Which gcc switch disables "left-hand operand of comma has no effect" warning? It's part of a larger code base, which forces -Werror on gcc. This warning is generated in third-party code that shouldn't be changed (and I actually know how to fix it), but I can disable specific warnings. This time man gcc failed me, so please, let some gcc master enlighten me. TIA.
A: It is the -Wno-unused-value option, see the documentation
A: If you use -fdiagnostics-show-option, GCC will tell you how to disable a warning (if possible).
A: Have you tried using a diagnostic pragma directive? These are available in gcc 4.2.1+, I believe.
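For this particular warning, that would be something like the following at the top of the offending file (supported in reasonably recent GCC versions):
/* Silence "left-hand operand of comma has no effect" for this translation unit. */
#pragma GCC diagnostic ignored "-Wunused-value"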
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How can I get the Subversion revision number in PHP? I want to have my PHP application labeled with the revision number which it uses, but I don't want to use CruiseControl or update a file and upload it every time. How should I do it?
A: This is how I got it to work.
If your server is setup to allow shell_exec AND you have SVN installed just run:
$revision = `svnversion`;
or
$revision = shell_exec('svnversion');
A: SVN keywords are not a good solution. As others pointed out, adding $Revision$ in a file only affects the specific file, which may not change for a long time.
Remembering to "edit" a file (by adding or removing a blank line) before every commit is pointless. You could as well just type the revision by hand.
One good way to do it (that I know of) is to have an automated deployment process (which is always a good thing) and using the command svnversion. Here is what I do:
Wherever I need the revision I do an include: <?php include 'version.php'; ?>. This "version.php" file only has the revision number. Moreover, it is not part of the repository (it is set to be ignored). Here is how I create it:
1) On projects where SVN is installed on the server, I also use it for deployment. Getting the latest version to the server I have a script that among other things does the following (it runs on the server):
cd /var/www/project
svn update
rm version.php
svnversion > version.php
2) On projects where SVN is not installed my deployment script is more complex: it creates the version.php file locally, zips the code, uploads and extracts it
A: From this answer:
You can do it by adding the following anywhere in your code:
$Id:$
So for example Jeff did:
<div id="svnrevision">svn revision: $Id:$</div>
and when checked in, the server replaced $Id:$ with the current revision number. I also found this reference.
There are also $Date:$, $Rev:$ and $Revision:$.
A: Bit late now, but use a Subversion post-commit hook. In your repository's hooks folder, create a shell script like this one:
#!/bin/bash
REPOS="$1"
REV="$2"
cd /web/root
rm -f /web/root/templates/base.html
/usr/bin/svn update
/bin/sed -i s/REVISION/$REV/ /web/root/templates/base.html
This particular example assumes your live site is in /web/root and the development code is held elsewhere. When you commit a dev change, the script deletes the prior live template (to avoid conflict messages), runs the update and replaces occurrences of REVISION in the template with the actual revision number.
More on hooks here
A: In most cases the code on the server would actually contain an "Export" of the code, not a checkout, and therefore not contain the .svn folders. At least that's the setup I see most often. Do others actually check out their code onto the web server?
A: The easiest way is to use the Subversion "Keyword Substitution". There is a guide here in the SVN book (Version Control with Subversion).
You'll basically just have to add the text $Rev$ somewhere in your file.
Then enable the keyword in your repository. On checkout SVN will substitute the revision number into the file.
A: You can get close with SVN Keywords. Add $Revision$ where you want the revision to show, but that will only show the last revision that particular file was changed, so you would have to make a change to the file each time. Getting the global revision number isn't possible without some sort of external script, or a post-commit hook.
A: You could also do it like this:
$status = @shell_exec('svnversion '.realpath(__FILE__));
if ( preg_match('/\d+/', $status, $match) ) {
echo 'Revision: '.$match[0];
}
A: See my response to the similar question "Mark" svn export with revision.
If you capture the revision number when you export you can use:
svn export /path/to/repository | grep ^Exported > revision.txt
To strip everything but the revision number, you can pipe it through this sed command:
svn export /path/to/repository | grep ^Exported | sed 's/^[^0-9]\+\([0-9]\+\).*/\1/' > revision.txt
A: Assuming your webroot is a checked-out copy of the subversion tree, you could parse the /.svn/entries file and hook out the revision number (4th line here)...
In PHP:
$svn = File('.svn/entries');
$svnrev = $svn[3];
unset($svn);
A: $svn_rev=file_get_contents('/path.to.repository/db/current');
A: Another possibility to do this is to run a cron that executes the steps described in the "Deploy Process" (assuming it is a *nix/FreeBSD server).
A: If performance is an issue, then you could do:
exec('svn info /path/to/repository', $output);
$svn_ver = (int) trim(substr($output[4], strpos($output[4], ':') + 1));
This of course depends on your having done a checkout, and the presence of the svn command.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
}
|
Q: How can I detect, using php, if the machine has oracle (oci8 and/or pdo_oci) installed? How can I detect, using php, if the machine has oracle (oci8 and/or pdo_oci) installed?
I'm working on a PHP project where some developers, such as myself, have it installed, but there's little need for the themers to have it. How can I write a quick function to use in the code so that my themers are able to work on the look of the site without having it crash on them?
A: If the OCI extension isn't installed, then you'll get a fatal error with farside.myopenid.com's answer. You can use function_exists('oci_connect') or extension_loaded('oci8') (or whatever the extension's actually called) instead.
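For example:
<?php
// True only when the corresponding extension is actually loaded.
$hasOci = extension_loaded('oci8') || function_exists('oci_connect');
$hasPdoOci = extension_loaded('pdo_oci');

if (!$hasOci && !$hasPdoOci) {
    // No Oracle support on this machine -- skip the database code paths.
}
?>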
A: The folks here have pieces of the solution, but let's roll it all into one solution.
For just a single instance of an oracle function, testing with function_exists() is good enough; but if the code is sprinkled throughout to OCI calls, it's going to be a huge pain in the ass to wrap every one in a function_exists() test.
Therefore, I think the simplest solution would be to create a file called nodatabase.php that might look something like this:
<?php
// nodatabase.php
// explicitly override database functions with empty stubs. Only include this file
// when you want to run the code without an actual database backend. Any database-
// related functions used in the codebase must be included below.
function oci_connect($user, $password, $db = '', $charset='UTF-8', $session_mode=null)
{
}
function oci_execute($statement, $mode=0)
{
}
// and so on...
Then, conditionally include this file if a global (say, THEME_TESTING) is defined just ahead of where the database code is called. Such an include might look like this:
// define("THEME_TESTING", true) // uncomment this line to disable database usage
if( defined(THEME_TESTING) )
include('nodatabase.php'); // override oracle API with stub functions for the artists.
Now, when you hand the project over to the artists, they simply need to make that one modification and they're good to go.
A: I don't know if I fully understand your question, but a simple way would be to do this:
<?php
$connection = oci_connect('username', 'password', 'table');
if (!$connection) {
// no OCI connection.
}
?>
A: As mentioned above by Greg, programmatically you can use the function_exists() method. Don't forget you can also use the following to see all the environment specifics with your PHP install using the following:
<?php
phpinfo();
?>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Kerberos and T125 protocol Why does Kerberos authentication use T125 protocol? I believe Kerberos authentication behaves this way:
*
*Client asks for a ticket to the Kerberos authority
*The Kerberos authority provides a Ticket to the client
*The Client tries to authenticate towards a Server and sends to the server this Ticket.
*The Server verifies the Ticket is OK with the Kerberos Authority, and authenticates the Client.
Now, where, in this process, is used T125 and why?
And does the Client send the Ticket any time it tries to access (e.g: for each HTTP GET page) to the Server and the Server checks this Ticket any time, or is it just once at the beginning of the "conversation"?
Thank you!
A: Not familiar with T125, but your Kerberos flow is off a little.
Roughly:
*
*User authenticates to KDC (Kerberos authority)
*KDC grants user a TGT (ticket granting ticket)
*user tries to access server
*Server demands server ticket, sends user some info (to identify the server)
*user asks KDC for ticket for server, sends TGT and server info
*KDC issues server ticket to user
*User submits server ticket to the server on every access.
I know I didn't directly answer your T125 question, but I hope this helped anyway.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: .Net 8-bit Encoding I'm working on a serial port, transmitting and receiving data to some hardware as 8-bit data. I would like to store it as a string to facilitate comparison, and preset data are stored as strings or in hex format in an XML file. I found out that only when using Encoding.Default, which is ANSI encoding, is the 8-bit data converted properly and easily reversible. ASCII encoding only works for 7-bit data, and UTF8 or UTF7 don't work well either, since I'm using some characters from 1-255. Encoding.Default would be just fine, but I read on MSDN that it's dependent on the OS codepage setting, which means it might behave differently with a different codepage configured. I use GetBytes() and GetString() extensively using the Encoding, but would like a failsafe and portable method that works all the time under any configuration. Any idea or better suggestion for this?
A: Why not just use an array of bytes instead? It would have none of the encoding problems you're likely to suffer with the text approach.
A: I think you should use a byte array instead. For comparison you can use some method like this:
static bool CompareRange(byte[] a, byte[] b, int index, int count)
{
bool res = true;
for(int i = index; i < index + count; i++)
{
res &= a[i] == b[i];
}
return res;
}
A: Latin-1 aka ISO-8859-1 aka codepage 28591 is a useful codepage for this scenario, as it maps values in the range 128-255 unchanged. The following are interchangeable:
Encoding.GetEncoding(28591)
Encoding.GetEncoding("Latin1")
Encoding.GetEncoding("iso-8859-1")
The following code illustrates the fact that for Latin1, unlike Encoding.Default, all characters in the range 0-255 are mapped unchanged:
static void Main(string[] args)
{
Console.WriteLine("Test Default Encoding returned {0}", TestEncoding(Encoding.Default));
Console.WriteLine("Test Latin1 Encoding returned {0}", TestEncoding(Encoding.GetEncoding("Latin1")));
Console.ReadLine();
return;
}
private static bool CompareBytes(char[] chars, byte[] bytes)
{
bool result = true;
if (chars.Length != bytes.Length)
{
Console.WriteLine("Length mismatch {0} bytes and {1} chars" + bytes.Length, chars.Length);
return false;
}
for (int i = 0; i < chars.Length; i++)
{
int charValue = (int)chars[i];
if (charValue != (int)bytes[i])
{
Console.WriteLine("Byte at index {0} value {1:X4} does not match char {2:X4}", i, (int) bytes[i], charValue);
result = false;
}
}
return result;
}
private static bool TestEncoding(Encoding encoding)
{
byte[] inputBytes = new byte[256];
for (int i = 0; i < 256; i++)
{
inputBytes[i] = (byte) i;
}
char[] outputChars = encoding.GetChars(inputBytes);
Console.WriteLine("Comparing input bytes and output chars");
if (!CompareBytes(outputChars, inputBytes)) return false;
byte[] outputBytes = encoding.GetBytes(outputChars);
Console.WriteLine("Comparing output bytes and output chars");
if (!CompareBytes(outputChars, outputBytes)) return false;
return true;
}
A: Use the Hebrew codepage for Windows-1255. It's 8-bit.
Encoding enc = Encoding.GetEncoding("windows-1255");
I misunderstood you when you wrote "1-255"; I thought you were referring to characters in codepage 1255.
A: You could use base64 encoding to convert from byte to string and back. No problems with code pages or weird characters that way, and it'll be more space-efficient than hex.
byte[] toEncode;
string encoded = System.Convert.ToBase64String(toEncode);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Auto-implemented getters and setters vs. public fields I see a lot of example code for C# classes that does this:
public class Point {
public int x { get; set; }
public int y { get; set; }
}
Or, in older code, the same with an explicit private backing value and without the new auto-implemented properties:
public class Point {
private int _x;
private int _y;
public int x {
get { return _x; }
set { _x = value; }
}
public int y {
get { return _y; }
set { _y = value; }
}
}
My question is why. Is there any functional difference between doing the above and just making these members public fields, like below?
public class Point {
public int x;
public int y;
}
To be clear, I understand the value of getters and setters when you need to do some translation of the underlying data. But in cases where you're just passing the values through, it seems needlessly verbose.
A: The idea is that even if the underlying data structure needs to change, the public interface to the class won't have to be changed.
C# can treat properties and variables differently at times. For example, you can't pass properties as ref or out parameters. So if you need to change the data structure for some reason and you were using public variables and now you need to use properties, your interface will have to change and now code that accesses property x may not longer compile like it did when it was variable x:
Point pt = new Point();
if(Int32.TryParse(userInput, out pt.x))
{
Console.WriteLine("x = {0}", pt.x);
Console.WriteLine("x must be a public variable! Otherwise, this won't compile.");
}
Using properties from the start avoids this, and you can feel free to tweak the underlying implementation as much as you need to without breaking client code.
A: I tend to agree (that it seems needlessly verbose), although this has been an issue our team hasn't yet resolved and so our coding standards still insist on verbose properties for all classes.
Jeff Atwood dealt with this a few years ago. The most important point he retrospectively noted is that changing from a field to a property is a breaking change in your code; anything that consumes it must be recompiled to work with the new class interface, so if anything outside of your control is consuming your class you might have problems.
A: Setters and getters enable you to add an additional abstraction layer, and in pure OOP you should always access objects via the interface they provide to the outside world...
Consider this code, which can save you in ASP.NET and which would not be possible without the level of abstraction provided by the setters and getters:
class SomeControl
{
private string _SomeProperty ;
    public string SomeProperty
    {
        get
        {
            if ( _SomeProperty == null )
                return (string)Session[ "SomeProperty" ];
            else
                return _SomeProperty;
        }
    }
}
A: Auto-implemented getters use the same name for the property as you would for the actual private storage variable, so how can you change it in the future? I think the point is to use auto-implemented properties instead of fields so that you can change them in the future if you need to add logic to the getter and setter.
For example:
public string x { get; set; }
Suppose, for example, that you already use x in many places and you do not want to break your code.
How do you change the auto-implemented getter/setter (say, so that the setter only allows values in a valid format) in a way that only the class itself has to change?
My idea is to add a new private variable and keep the same x getter and setter:
private string _x;
public string x {
    get { return _x; }
    set {
        DateTime parsed;
        if (DateTime.TryParse(value, out parsed)) {
            _x = value;
        }
    }
}
Is this what you mean by making it flexible?
A: It's also much simpler to change it to this later:
public int x { get; private set; }
A: Also to be considered is the effect of the change to public members when it comes to binding and serialization. Both of these often rely on public properties to retrieve and set values.
A: Also, you can put breakpoints on getters and setters, but you can't on fields.
A: It encapsulates setting and accessing of those members. If some time from now a developer for the code needs to change logic when a member is accessed or set it can be done without changing the contract of the class.
A: AFAIK the generated CIL interface is different. If you change a public member to a property you are changing it's public interface and need to rebuild every file that uses that class. This is not necessary if you only change the implementation of the getters and setters.
A: Maybe just making fields public you could leads you to a more Anemic Domain Model.
Kind Regards
A: It is also worth noting that you can't make Auto Properties Readonly and you cannot initialise them inline. Both of these are things I would like to see in a future release of .NET, but I believe you can do neither in .NET 4.0.
The only times I use a backing field with properties these days is when my class implements INotifyPropertyChanged and I need to fire the OnPropertyChanged event when a property is changed.
Also in these situations I set the backing fields directly when values are passed in from a constructor (no need to try to fire the OnPropertyChanged event, which would be NULL at this time anyway); anywhere else I use the property itself.
A: You never know whether you might need some translation of the data later. You are prepared for that if you hide away your members. Users of your class won't notice if you add the translation, since the interface remains the same.
A: The biggest difference is that, if you ever change your internal structure, you can still maintain the getters and setters as-is, changing their internal logic without hurting the users of your API.
A: If you have to change how you get x and y in this case, you could just add the properties later. This is what I find most confusing. If you use public member variables, you can easily change that to a property later on, and use private variables called _x and _y if you need to store the value internally.
A: Setters and getters are bad in principle (they are a bad OO smell--I'll stop short of saying they are an anti-pattern because they really are necessary sometimes).
No, there is technically no difference and when I really want to share access to an object these days, I occasionally make it public final instead of adding a getter.
The way setters and getters were "Sold" is that you might need to know that someone is getting a value or changing one--which only makes sense with primitives.
Property bag objects like DAOs, DTOs and display objects are excluded from this rule because these aren't objects in a real "OO Design" meaning of the word Object. (You don't think of "Passing Messages" to a DTO or bean--those are simply a pile of attribute/value pairs by design).
A:
Why don't we just use public fields instead of properties with (get, set) accessors when we don't need to make validations?
*
*A property is a member that provides a flexible mechanism to read or write a value, including read-only or write-only access
*Properties can be overridden but fields can't be.
A: Adding a getter and setter makes the variable a property, as when working in WPF/C#.
If it's just a public member variable, it's not accessible from XAML, because it's not a property (even though it is a public member variable).
If it has a setter and getter, then it's accessible from XAML, because now it's a property.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "80"
}
|
Q: Why is it wrong to use std::auto_ptr<> with standard containers? Why is it wrong to use std::auto_ptr<> with standard containers?
A: The copy semantics of auto_ptr are not compatible with the containers.
Specifically, copying one auto_ptr to another does not create two equal objects since one has lost its ownership of the pointer.
More specifically, copying an auto_ptr causes one of the copies to let go of the pointer. Which of these remains in the container is not defined. Therefore, you can randomly lose access to pointers if you store auto_ptrs in the containers.
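A two-line illustration of the underlying copy behavior (valid C++03; std::auto_ptr was deprecated later):
#include <memory>
#include <cassert>

int main() {
    std::auto_ptr<int> a(new int(42));
    std::auto_ptr<int> b = a; // the "copy" transfers ownership...
    assert(a.get() == 0);     // ...leaving 'a' holding a null pointer
}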
A: C++03 Standard (ISO-IEC 14882-2003) says in clause 20.4.5 paragraph 3:
[...]
[Note: [...]
auto_ptr does not meet the CopyConstructible and Assignable requirements for Standard Library container elements and thus instantiating a Standard Library container with an auto_ptr results in undefined behavior. — end note]
C++11 Standard (ISO-IEC 14882-2011) says in appendix D.10.1 paragraph 3:
[...]
Note: [...] Instances of auto_ptr meet the requirements of MoveConstructible and MoveAssignable, but do not meet the requirements of CopyConstructible and CopyAssignable. — end note ]
C++14 Standard (ISO-IEC 14882-2014) says in appendix C.4.2, Annex D: compatibility features:
Change: The class templates auto_ptr, unary_function, and binary_function, the function template random_shuffle, and the function templates (and their return types) ptr_fun, mem_fun, mem_fun_ref, bind1st, and bind2nd are not defined.
Rationale: Superseded by new features.
Effect on original feature: Valid C++ 2014 code that uses these class templates and function templates may fail to compile in this International Standard.
A: Two super excellent articles on the subject:
*
*Smart Pointers - What, Why, Which?
*Guru of the Week #25
A: The STL containers need to be able to copy the items you store in them, and are designed to expect the original and the copy to be equivalent. auto pointer objects have a completely different contract, whereby copying creates a transfer of ownership. This means that containers of auto_ptr will exhibit strange behaviour, depending on usage.
There is a detailed description of what can go wrong in Effective STL (Scott Meyers) item 8 and also a not-so-detailed description in Effective C++ (Scott Meyers) item 13.
A: The C++ Standard says that an STL element must be "copy-constructible" and "assignable." In other words, an element must be able to be assigned or copied and the two elements are logically independent. std::auto_ptr does not fulfill this requirement.
Take for example this code:
class X
{
};
std::vector<std::auto_ptr<X> > vecX;
vecX.push_back(new X);
std::auto_ptr<X> pX = vecX[0]; // vecX[0] is assigned NULL.
To overcome this limitation, you should use the std::unique_ptr, std::shared_ptr or std::weak_ptr smart pointers or the boost equivalents if you don't have C++11. Here is the boost library documentation for these smart pointers.
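With C++11 the same idea can be expressed safely, because transfers of ownership must be spelled out with std::move and can no longer happen behind your back. A minimal sketch:
#include <memory>
#include <utility>
#include <vector>

class X {};

int main() {
    std::vector<std::unique_ptr<X>> vecX;
    vecX.push_back(std::unique_ptr<X>(new X)); // ownership moves into the vector

    // std::unique_ptr<X> pX = vecX[0];        // error: copying is forbidden
    std::unique_ptr<X> pX = std::move(vecX[0]); // the transfer must be explicit
}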
A: STL containers store copies of contained items. When an auto_ptr is copied, it sets the old ptr to null. Many container methods are broken by this behavior.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "223"
}
|
Q: Error creating UDF in SQL Server 2005 I'm trying to create a UDF in SQL Server 2005 Express as below:
CREATE FUNCTION [CombineValues] ()
RETURNS VARCHAR(8000)
AS
BEGIN
DECLARE @CuisineList VARCHAR(8000);
RETURN
(
SELECT @CuisineList = COALESCE(@CuisineList + ', ', '') +
CAST(Cuisine AS varchar(20))
FROM Cuisines
)
END
Cuisines has the structure:
CuisineID INT PK,
Cuisine VARCHAR(20)
When I try to create the function as above, I get an error:
Msg 102, Level 15, State 1, Procedure CombineValues, Line 10 Incorrect
syntax near '='.
What am I doing wrong?
A: You need to be careful when using this method. This may not affect you now, for this query, but please keep this in mind for future applications of this method.
The problem occurs when you have a NULL value in your list. When this happens, you will get incorrect results.
For example, if your original table looks like this...
1 Blah
2 NULL
3 Foo
4 Cracker
Your function will return Foo, Cracker. The first value, Blah, will be missed by this function call. It is very easy to accommodate this, with a slight alteration to your function, like this...
CREATE FUNCTION [CombineValues] ()
RETURNS VARCHAR(8000)
AS
BEGIN
DECLARE @CuisineList VARCHAR(8000);
SELECT @CuisineList = COALESCE(@CuisineList + ', ', '') +
CAST(Cuisine AS varchar(20))
FROM Cuisines
WHERE Cuisine Is Not NULL
RETURN @CuisineList
END
By testing for NOT NULL, you will eliminate this potential problem.
A: Try changing SELECT to SET, and then end your function by SELECTing your @CuisineList.
A: Hojou, your suggestion didn't work, but something similar did:
CREATE FUNCTION [CombineValues] ()
RETURNS VARCHAR(8000)
AS
BEGIN
DECLARE @CuisineList VARCHAR(8000);
SELECT @CuisineList = COALESCE(@CuisineList + ', ', '') + CAST(Cuisine AS varchar(20)) FROM Cuisines;
RETURN
(
SELECT @CuisineList
)
END
I would like to mark this as the answer, but since I am the one who asked this question, I'm not sure this is appropriate? Any suggestions? Please feel free to comment.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How can I automatically add properties in Objective-C? When adding new properties to classes, I find myself typing the same things over and over in xcode:
*
*add TYPE *NAME; (in .h interface)
*add @property (nonatomic, retain) TYPE *NAME; (in .h)
*add @synthesize NAME; (in .m)
*add [NAME release]; (in .m dealloc)
(I'm in a non-garbage collected environment.)
How can I do this automatically?
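For reference, here is the boilerplate those four steps produce for a single hypothetical property (illustrative names only):
// MyClass.h
@interface MyClass : NSObject {
    NSString *name;                            // step 1: instance variable
}
@property (nonatomic, retain) NSString *name;  // step 2: property declaration
@end

// MyClass.m
@implementation MyClass
@synthesize name;                              // step 3: synthesize accessors
- (void)dealloc {
    [name release];                            // step 4: release in dealloc
    [super dealloc];
}
@end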
A: That sounds about right. IIRC, the Objective-C 2.0 doc says you might be able to leave out step #1, but otherwise I don't know of any shortcuts.
You could probably write a user script to do so within Xcode. See http://www.mactech.com/articles/mactech/Vol.23/23.01/2301XCode/index.html.
A: According to the Developer Documentation in 64bit runtimes you can leave out step 1.
A: You could look at Andrew Pang's RMModelObject - I haven't used it, but it acts as a object base class that simplifies model creation.
I haven't used it, but here's some of what's highlighted in the readme:
*
*no need to declare instance variables,
*no need to write accessor methods,
*free NSCopying protocol support (-copyWithZone:),
*free NSCoding protocol support (-initWithCoder:, -encodeWithCoder:),
*free -isEqual: and -hash implementation,
*no need to write -dealloc in most cases.
A: Here's another solution which I modified from
this article (also see the initial article)
The version in the blog was searching for variables outside of the variable declaration block and was matching method names too. I have done a crude fix to only search for variables before the first '}'. This will break if there are multiple interface declarations in the header file.
I set the output to "Replace Document Contents" and input as "Entire Document"
....
#!/usr/bin/python
thisfile = '''%%%{PBXFilePath}%%%'''
code = '''%%%{PBXAllText}%%%'''
selmark = '''%%%{PBXSelection}%%%'''
import re
if thisfile.endswith('.h'):
variableEnd = code.find('\n', code.find('}'))
properties = []
memre = re.compile('\s+(?:IBOutlet)?\s+([^\-+@].*? \*?.*?;)')
for match in memre.finditer(code[:variableEnd]):
member = match.group(1)
retain = member.find('*') != -1 and ', retain' or ''
property = '@property (nonatomic%s) %s' % (retain,member)
if code.find(property) == -1:
properties.append(property)
if properties:
print '%s\n\n%s%s%s%s' % (code[:variableEnd],selmark,'\n'.join(properties),selmark,code[variableEnd:])
elif thisfile.endswith('.m'):
headerfile = thisfile.replace('.m','.h')
properties = []
retains = []
propre = re.compile('@property\s\((.*?)\)\s.*?\s\*?(.*?);')
header = open(headerfile).read()
for match in propre.finditer(header):
if match.group(1).find('retain') != -1:
retains.append(match.group(2))
property = '@synthesize %s;' % match.group(2)
if code.find(property) == -1:
properties.append(property)
pindex = code.find('\n', code.find('@implementation'))
if properties and pindex != -1:
output = '%s\n\n%s%s%s' % (code[:pindex],selmark,'\n'.join(properties),selmark)
if retains:
dindex = code.find('\n', code.find('(void)dealloc'))
output += code[pindex:dindex]
retainsstr = '\n\t'.join(['[%s release];' % retain for retain in retains])
output += '\n\t%s' % retainsstr
pindex = dindex
output += code[pindex:]
print output
A: There is Kevin Callahan's Accessorizer. From the web page:
Accessorizer selects the appropriate
property specifiers based on ivar type
- and can also generate explicit accessors (1.0) automagically ... but
Accessorizer does much, much more ...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to create query parameters in Javascript? Is there any way to create the query parameters for doing a GET request in JavaScript?
Just like in Python you have urllib.urlencode(), which takes in a dictionary (or list of two tuples) and creates a string like 'var1=value1&var2=value2'.
A: This should do the job:
const createQueryParams = params =>
Object.keys(params)
    .map(k => `${encodeURIComponent(k)}=${encodeURIComponent(params[k])}`) // encodeURIComponent also escapes '&' and '=' inside keys and values
.join('&');
Example:
const params = { name : 'John', postcode: 'W1 2DL'}
const queryParams = createQueryParams(params)
Result:
name=John&postcode=W1%202DL
A: functional
function encodeData(data) {
return Object.keys(data).map(function(key) {
return [key, data[key]].map(encodeURIComponent).join("=");
}).join("&");
}
A: If you are using Prototype there is Form.serialize
If you are using jQuery there is Ajax/serialize
I do not know of any independent functions to accomplish this, but a Google search for it turned up some promising options if you aren't currently using a library. If you're not, though, you really should, because they are heaven.
A: The built-in URL class provides a convenient interface for creating and parsing URLs.
There are no networking methods that require exactly a URL object, strings are good enough. So technically we don’t have to use URL. But sometimes it can be really helpful.
Example
let url = new URL("https://google.com/search");
url.searchParams.set('var1', "value1");
url.searchParams.set('var2', "value2");
url.searchParams.set('var3', "value3");
url.searchParams.set('var4', "value4 has spaces");
console.log(url)
A: Zabba has provided in a comment on the currently accepted answer a suggestion that to me is the best solution: use jQuery.param().
If I use jQuery.param() on the data in the original question, then the code is simply:
const params = jQuery.param({
var1: 'value',
var2: 'value'
});
The variable params will be
"var1=value&var2=value"
For more complicated examples, inputs and outputs, see the jQuery.param() documentation.
A: A little modification for TypeScript:
public encodeData(data: any): string {
return Object.keys(data).map((key) => {
return [key, data[key]].map(encodeURIComponent).join("=");
}).join("&");
}
A: I'd just like to revisit this almost-10-year-old question. In this era of off-the-shelf programming, your best bet is to set your project up using a dependency manager (npm). There is an entire cottage industry of libraries out there that encode query strings and take care of all the edge cases. This is one of the more popular ones -
https://www.npmjs.com/package/query-string
A: Here you go:
function encodeQueryData(data) {
const ret = [];
for (let d in data)
ret.push(encodeURIComponent(d) + '=' + encodeURIComponent(data[d]));
return ret.join('&');
}
Usage:
const data = { 'first name': 'George', 'last name': 'Jetson', 'age': 110 };
const querystring = encodeQueryData(data);
A: URLSearchParams has increasing browser support.
const data = {
var1: 'value1',
var2: 'value2'
};
const searchParams = new URLSearchParams(data);
// searchParams.toString() === 'var1=value1&var2=value2'
Node.js offers the querystring module.
const querystring = require('querystring');
const data = {
var1: 'value1',
var2: 'value2'
};
const searchParams = querystring.stringify(data);
// searchParams === 'var1=value1&var2=value2'
A: ES2017 (ES8)
Making use of Object.entries(), which returns an array of an object's [key, value] pairs. For example, for {a: 1, b: 2} it would return [['a', 1], ['b', 2]]. IE is the only browser that does not (and won't) support it.
Code:
const buildURLQuery = obj =>
Object.entries(obj)
.map(pair => pair.map(encodeURIComponent).join('='))
.join('&');
Example:
buildURLQuery({name: 'John', gender: 'male'});
Result:
"name=John&gender=male"
A: We've just released arg.js, a project aimed at solving this problem once and for all. It's traditionally been so difficult but now you can do:
var querystring = Arg.url({name: "Mat", state: "CO"});
And reading works:
var name = Arg("name");
or getting the whole lot:
var params = Arg.all();
and if you care about the difference between ?query=true and #hash=true then you can use the Arg.query() and Arg.hash() methods.
A: Here is an example:
let my_url = new URL("https://stackoverflow.com")
my_url.pathname = "/questions"
const parameters = {
title: "just",
body: 'test'
}
Object.entries(parameters).forEach(([name, value]) => my_url.searchParams.set(name, value))
console.log(my_url.href)
A: I have improved shog9's function to handle array values
function encodeQueryData(data) {
    const ret = [];
    for (let d in data) {
        if (Array.isArray(data[d])) {
            // typeof never returns 'array', so test with Array.isArray instead
            for (let item of data[d]) {
                ret.push(`${encodeURIComponent(d)}[]=${encodeURIComponent(item)}`)
            }
        } else if (data[d] === null || data[d] === undefined) {
            // typeof null is 'object', so compare against the values directly
            ret.push(encodeURIComponent(d))
        } else {
            ret.push(`${encodeURIComponent(d)}=${encodeURIComponent(data[d])}`)
        }
    }
    return ret.join('&');
}
Example
let data = {
    user: 'Mark',
    fruits: ['apple', 'banana']
}
encodeQueryData(data) // user=Mark&fruits[]=apple&fruits[]=banana
A: By using queryencoder, you get some nice-to-have options, such as custom date formatters, nested objects, and deciding whether a val: true becomes just value or value=true.
const { encode } = require('queryencoder');
const object = {
date: new Date('1999-04-23')
};
// The result is 'date=1999-04-23'
const queryUrl = encode(object, {
dateParser: date => date.toISOString().slice(0, 10)
});
A: const base = "https://www.facebook.com"
const path = '/v15.0/dialog/oauth'
const params = new URLSearchParams({
client_id: clientID,
redirect_uri: redirectUri,
state: randomState,
})
const url = new URL(`${path}?${params.toString()}`, base)
Here's an example that creates query parameters and builds a URL from a base using only JavaScript's built-in constructors.
This is part of a manual implementation of Facebook Login.
According to the URLSearchParams docs' example, there's a line
const new_url = new URL(`${url.origin}${url.pathname}?${new_params}`);
and I've followed that practice.
This is by far the most standardized way to build a URL, I believe.
I was a bit surprised that JavaScript's URL constructor still doesn't support query or fragment arguments in 2023, despite it definitely being worth having.
A: This thread points to some code for escaping URLs in php. There's escape() and unescape() which will do most of the work, but the you need add a couple extra things.
function urlencode(str) {
    str = escape(str);
    // String.replace with a plain string only replaces the first match,
    // so use global regexes to replace every occurrence
    str = str.replace(/\+/g, '%2B');
    str = str.replace(/%20/g, '+');
    str = str.replace(/\*/g, '%2A');
    str = str.replace(/\//g, '%2F');
    str = str.replace(/@/g, '%40');
    return str;
}
function urldecode(str) {
    str = str.replace(/\+/g, ' ');
    str = unescape(str);
    return str;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "201"
}
|
Q: TortoiseSVN Error: "OPTIONS of 'https://...' could not connect to server (...)" I'm trying to setup a new computer to synchronize with my SVN repository that's hosted with cvsdude.com.
I get this error:
![SVN Error][1] - removed image shack image that had been replaced by an advert
Here's what I did (these have worked in the past):
*
*Downloaded and installed TortoiseSVN
*Created a new folder C:\aspwebsite
*Right-clicked, chose SVN Checkout...
*Entered the following information, clicked OK:
*
*URL of repository: https://<reponame>-svn.cvsdude.com/aspwebsite
*Checkout directory: C:\aspwebsite
*Checkout depth: Fully recursive
*Omit externals: Unchecked
*Revision: HEAD revision
*Got TortoiseSVN error:
*
*OPTIONS of 'https://<reponame>-svn.cvsdude.com/aspwebsite': could not connect to server (https://<reponame>-svn.cvsdude.com)
Rather than getting the error, TortoiseSVN should have asked for my username and password and then downloaded about 90MB.
Why can't I checkout from my Subversion repository?
Kent Fredric wrote:
Either their security certificate has
expired, or their hosting is
broken/down.
Contact CVSDude and ask them whats up.
It could also be a timeout, because
for me their site is exhaustively
slow..
It errors after only a couple seconds. I don't think it's a timeout.
Matt wrote:
Try visiting
https://[redacted]-svn.cvsdude.com/aspwebsite
and see what happens. If you can visit
it in your browser, you ought to be
able to get the files in your SVN
client and we can work from there. If
it fails, then there's your answer.
I can access the site in a web browser.
A: This was driving me nuts and I solved it today. I'm posting in this old thread because I arrived here several times while searching for a solution. I hope it helps someone.
For me, I checked svn-settings --> network --> Edit Subversion server file and found that there were some uncommented lines at the end:
http-proxy-host =
ssl-trust-default-ca = no
http-proxy-username =
http-proxy-password =
that differed from my co-workers'. Once I commented these out, it started working again.
A: It sounds like you are almost definitely behind a proxy server.
Where this does not work for me behind my proxy:
svn checkout http://v8.googlecode.com/svn/trunk/ v8-read-only
this does:
svn --config-option servers:global:http-proxy-host=MY_PROXY_HOST --config-option servers:global:http-proxy-port=MY_PROXY_PORT checkout http://v8.googlecode.com/svn/trunk/ v8-read-only
UPDATE I forgot to quote my source :-)
http://svnbook.red-bean.com/en/1.1/ch07.html#svn-ch-7-sect-1.3.1
A: I just had a similar issue, but it didn't error immediately, so it may have not been the same issue.
I'm behind a firewall and changed my proxy settings (TortoiseSVN->Settings->Network) to access an open source repo yesterday. I received the error this morning trying to checkout a repo in the local domain behind the firewall. I just had to remove the proxy settting in TortoiseSVN->Settings->Network to get it work locally again.
A: Check you proxy settings in TortoiseSVN->Settings->Network.
Maybe they are configured differently than in your web browser.
A: Late reaction, but I've struggled with this for a while so maybe I can save somebody some time by showing my solution.
My problem showed a bit different, but the cause might be the same.
In my situation, TortoiseSVN kept on trying to connect via a proxy server. I could access SVN via chrome, firefox and IE fine.
Turns out that there is a configuration file that has a different configuration than the GUI in TortoiseSVN shows.
Mine was located here:
C:\Documents and Settings\[username]\Application Data\Subversion\, but you can also open the file via the TortoiseSVN gui.
In my file, http-proxy-exceptions was empty. After I specified it, everything worked fine.
[global]
http-proxy-exceptions = 10.1.1.11
http-proxy-host = 197.132.0.223
http-proxy-port = 8080
http-proxy-username = defaultusername
http-proxy-password = defaultpassword
http-compression = no
A: It is a problem with your proxy settings in TortoiseSVN. Connect using a network which doesn't use a proxy, or configure your proxy settings properly.
A: I realize this is an old question, but the same issue happened to me, but for a completely different reason.
It could be that cvs-dude changed certificates, so it no longer matches the certificate you have cached.
You can go to TortoiseSVN->Settings->Saved Data and click the 'Clear' button next to 'Authentication data' and then try again.
A: Either their security certificate has expired, or their hosting is broken/down.
Contact CVSDude and ask them whats up.
It could also be a timeout, because for me their site is exhaustively slow..
A: I've had the same problem, but using my own server. Maybe Apache is allowing only a limited number of connections to the same server. I increased the max connection and KeepAlive settings. So far so good.
A: I had a similar issue; it turned out to be a case-sensitivity problem. So, make sure you use the proper case.
A: Try pasting in the SVN URL into your browser's Address bar. You'll likely see that you cannot connect because of some issue with the URL. I had this issue just today and the problem was that I had mistyped the port number, but as others have noted it could also be a case-sensitivity issue, proxy settings, or other connection-level issues.
A: I did not have network settings changed in any way and thus most of the stuff presented here did not apply to me. After messing around a lot the comment about the virus scanner got me on the right track: There are some virus scanners like McAfee, that protect certain areas of the system directories and make them read-only.
When you connect to a server for the first time, TortoiseSVN tries to write the certificate to one of these files, which fails due to the protection. Switch off the protection briefly, start the checkout, and after the certificate dialog you can switch it back on. This at least worked for me.
A: I got the same error today and discovered that the firewall was blocking the svn client
A: This can occur when you are trying to check out the repository via a proxy server without enabling the proxy in TortoiseSVN's settings. If you are using a proxy server, make sure you tick "Enable Proxy Server" in Settings->Network and enter your server address and port number in the relevant places. Then try to check out again.
A: Thank you to all the commenters on this page. When I first installed the latest TortoiseSVN I got this error.
I was using the latest version, so decided to downgrade to 1.5.9 (as the rest of my colleagues were using) and this got it to work. Then, once built, my machine was moved onto another subnet and the problem started again.
I went to TortoiseSVN->Settings->Saved Data and cleared the Authentication data. After this it worked fine.
A: make sure when you add your proxy entries to the server file, you add them under the [global] group. (That seemed to make the difference for me under ubuntu.)
A: I got this error too when I had my server as an exception for the proxy in the SVN config file like this: http-proxy-exceptions = *.repo.domain.com
The solution for me was to use the svn server IP instead of the name. For some reason the name was not getting properly resolved from Eclipse Juno - Subclipse and from TortoiseSVN.
So, what worked for me: http-proxy-exceptions = XXX.XX.X.X (the server IP)
A: For me this was the solution.
The problem was that the SVN server was behind a reverse-proxy (pound). And the reverse proxy had to be told to allow OPTIONS.
A: The remote VisualSVN server 2.5.8 is accessible from at least 3 computers.
However, on my local computer the URL of the repository was not accessible,
and svn ls https://server-ip:443/svn/project/trunk returned the error
OPTIONS of 'https://…' could not connect to server (…)
My local computer used to have access to the server. The only thing that had changed was switching to an http connection instead of https for Redmine reasons (a certificate issue).
I tried the different things listed above. What actually solved my problem was installing the new VisualSVN server 2.5.9 using the same repository. Redmine also recognized the new repository through https.
A: None of the answers resolved the issue for me, even after I had installed a new version of TortoiseSVN and ran CCleaner.
It seems there is a folder at AppData\Roaming\Subversion that contains all of TortoiseSVN's configuration. Delete it all and restart TortoiseSVN.
Hope this helps someone as the ultimate solution.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
}
|
Q: Listview icons show up blurry (C#) I'm attempting to display a "LargeIcon" view in a listview control, however the images I specify are blurry. This is what I have so far:
(screenshot of the blurry icons: http://img220.imageshack.us/img220/1005/blurryiconsql3.jpg)
The .png files are 48x48 and that's what I have it set to display at in the ImageList properties. There's one thing that I've noticed (which is probably the cause) but I don't know how to change it. Inside the "Images Collection Editor" where you choose what images you want for the ImageList control, it looks like it's setting the wrong size for each image.
(screenshot of the Images Collection Editor: http://img83.imageshack.us/img83/5218/imagepropertiesmf9.jpg)
As you can see, the "PhysicalDimension" and the "Size" are set to 16x16 and cannot be changed. Does anyone have any ideas? Many thanks!
A: Be sure to set ImageList.ImageSize to 48 x 48 too.
A: When adding a .PNG icon, the editor tends to pick the first entry size in that file, so it picks up the 16x16 entry and stretches it out. That's why you see the 16x16 in the properties there. As suggested, the support for PNG is poor; I've often found myself rolling over to another format as well to avoid this.
You can open the file in Paint.Net if you need a free editor or something more fully featured like Photoshop or Fireworks and extract the exact size you want.
A: I'm not sure if its the problem in this specific case, but Microsoft support for the PNG format is generally poor. Try adding the images in .bmp format and they should display fine.
A: Check also the ColorDepth setting on your ImageList. I had a similar issue with a TreeView control, but after reading the previous posting regarding the size I found this setting, played around with it a bit and found that it greatly affects the way images from an ImageList are rendered. The higher the depth the better the quality.
A: Be sure to set the ImageList size to 48x48 px BEFORE you add the images.
If the ImageList is set to 32x32 and you add a 48x48 image, the icon is resized to 32x32. When you change the ImageList to 48x48 afterwards, the image is just resized again, thus losing quality and going blurry.
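A minimal C# sketch of the order that works (the file name is illustrative; assumes System.Windows.Forms and System.Drawing):
var imageList = new ImageList();
imageList.ImageSize = new Size(48, 48);             // set the size first
imageList.ColorDepth = ColorDepth.Depth32Bit;       // higher depth renders better
imageList.Images.Add(Image.FromFile("icon48.png")); // only then add the images
listView1.LargeImageList = imageList;
listView1.View = View.LargeIcon;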
Also, Paint.NET (or Photoshop) can't open .ico files.
Visual Studio/.NET can handle 32-bit PNG images fine, the built-in image editor in VS is a bit lack-lustre though.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111551",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Lots of unnecessary frameworks load into my iPhone app - can I prevent this? There appear to be a lot of unnecessary frameworks loading into my iPhone app. I didn't link against them in Xcode, and I don't need them.
When I run "lsof -p" against them on the iPhone, I see these (and others) that I can't explain:
*
*CoreVideo
*AddressBookUI
*JavaScriptCore
*MobileSync
*EAP8021X
*BluetoothManager
*MusicLibrary
*CoreAudio
*MobileMusicPlayer
*AddressBook
*CoreTelephony
*MobileBluetooth
*Calendar
*TelephonyUI
*WebCore / WebKit
*MediaPlayer
*VideoToolbox
I wonder whether this is contributing to the slow startup times. My app is very simple. It is basically a Twitter-like posting client. The only multimedia function is to pick an image from the camera or library, and it uses simple NSURL / NSURLConnection functions to post data to a couple of web services.
This is a jailbroken 2.1 iPhone with a few apps installed from Cydia. Is this normal?
A: Before you go to all of the trouble of trying to stop the OS from loading these frameworks, you should rule out other causes of your slow launch time.
First, build a "Hello, World" app and use it as a baseline. A project template app with nothing added should serve well. If that is starting up faster than your own app, then it is something you are doing in your own code.
A: This is normal, but that doesn't mean it's ideal. It probably only has a small impact on app startup time, but it'll have a slightly greater impact than that on memory usage.
If you'd like this to be improved, the best thing to do is to head on over to Apple's bug reporter and file a bug about it. Attach a copy of your application (the binary, not the source) and they should be able to track things down from there. I'm sure they'd be interested in reports like this.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: YouTube embeds not working in WordPress after import from Blogger I imported a series of blogger posts (via xml) into WordPress, and the YouTube embed tags were removed.
The YouTube URLs in the posts are not recognized; just the text of the URL is left, as opposed to full embed tags.
I'm trying to restore the embed codes so the videos are shown.
Another notable fact is that in the XML import, [EMBLED CONTENT] appears instead of the URL, that is, the video...
A:
…by default, WordPress filters imported XML by removing possible troublesome tags…unfortunately, including things like <embed> and <iframe> and other instances where you’ve included content in your posts. WordPress does so via a file you can find in /wp-includes called kses.php. In kses.php, you’ll want to scroll down to line 1309 and comment out the three lines under //Post filtering so that they look like this:
// Post filtering
#add_filter('content_save_pre', 'wp_filter_post_kses');
#add_filter('excerpt_save_pre', 'wp_filter_post_kses');
#add_filter('content_filtered_save_pre', 'wp_filter_post_kses');
Source: http://jasongriffey.net/wp/2010/06/21/moving-to-wordpress-3-0/ and http://wordpress.org/support/topic/youtube-embeds-not-working-after-import
A: I think you have a couple of options here:
*
*You could undo the import and
re-import using another means, from
RSS for instance. The value of this
depends on how much effort you have
in the posts as they are in
WordPress now - are you willing to
dump the posts and try again?
*You go to the forums, post a bug in
trac, go to the IRC channel and try
to find some more information;
you're apparently not the only
person to have this problem
(unless, of course, that's you)
*if you have db access you could
update the posts table to add the
appropriate code back in.
*you could manually re-add the embed
codes (obviously).
How many posts are we talking about?
A: This all really depends on HOW you imported the blog posts. What was your method?
When you view the raw source of the posts in wordpress (plain text view) - what does the post look like (a copy of the HTML would be nice)
-- Note - edit your original post to give the answers, a reply doesn't really work if other people answer too.
A: When I moved from blogger to wordpress my YouTube videos moved over just fine. Martin is right, a view of the post source code is probably required to be helpful.
One thing to note on a side issue though. When you use the wordpress "blogger importer" the image links will not be updated. When you view your blog everything will look ok, but in fact the images will still be referencing the blogger site.
There is a plugin on wordpress.org that will help with this, but some manual updating may / will be required for a 100% perfect move.
I think this is the one I used.
http://wordpress.org/extend/plugins/blogger-image-import/
A: Here's the solution I found on a wordpress forums.
Find in /wp-includes a file called kses.php. In kses.php, you’ll want to scroll down to line 1309 and comment out the three lines under //Post filtering so that they look like this:
// Post filtering
#add_filter('content_save_pre', 'wp_filter_post_kses');
#add_filter('excerpt_save_pre', 'wp_filter_post_kses');
#add_filter('content_filtered_save_pre', 'wp_filter_post_kses');
This will prevent the filter from removing all your YouTube videos, SlideShare embed, Scribd documents, etc.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: When the bots attack! What are some popular spam prevention methods besides CAPTCHA?
A: A very simple method which puts no load on the user is just to disable the submit button for a second after the page has loaded. I used it on a public forum which had continuous spam posts, and it has stopped them since.
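A minimal sketch of that idea (element ID and delay are illustrative):
<script>
window.addEventListener('load', function () {
  var btn = document.getElementById('submit-btn');
  btn.disabled = true; // briefly disabled right after load
  setTimeout(function () { btn.disabled = false; }, 1000);
});
</script>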
A: Ned Batchelder wrote up a technique that combines hashes with honeypots for some wickedly effective bot-prevention. No captchas, just code.
It's up at Stopping spambots with hashes and honeypots:
Rather than stopping bots by having people identify themselves, we can stop the bots by making it difficult for them to make a successful post, or by having them inadvertently identify themselves as bots. This removes the burden from people, and leaves the comment form free of visible anti-spam measures.
This technique is how I prevent spambots on this site. It works. The method described here doesn't look at the content at all. It can be augmented with content-based prevention such as Akismet, but I find it works very well all by itself.
A: http://chongqed.org/ maintains blacklists of active spam sources and the URLs being advertised in the spams. I have found filtering posts for the latter to be very effective in forums.
A: The most common ones I've observed orient around user input to solve simple puzzles e.g. of the following is a picture of a cat. (displaying pictures of thumbnails of dogs surrounding a cat). Or simple math problems.
While interesting I'm sure the arms race will also overwhelm those systems too.
A: You can use Recaptcha to at least make a captcha useful. Then you can make questions with simple verbal math problems or similar. Microsoft's Asirra makes you find pics of cats and dogs. Requiring a valid email address to activate an account stops spammers when they wouldn't get enough benefit from the service, but might deter normal users as well.
A: The following is unfeasible with today's technology, but I don't think it's too far off. It's also probably overkill for dealing with forum spam, but could be useful for account sign-ups, or any situation where you wanted to be really sure you were dealing with humans and they would be prepared for it to take a few minutes to complete the process.
Have 2 users who are trying to prove themselves human connect to each other via their webcams and ask them if the person they are seeing is human and live (i.e. not a recording), by getting them to, for example, mirror each other's movements, or write something on a piece of paper. Get everyone to do this a few times with different users, and throw a few recordings into the mix which they also have to identify correctly as such.
A: A popular method on forums is to simply queue the threads of members with less than 10 posts in a moderation queue. Of course, this doesn't help if you don't have moderators, or it's not a forum. A more general method is the calculation of hyperlink to text ratios. Often, spam posts contain a ton of hyperlinks, and you can catch a lot this way. In the same vein is comparing the content of consecutive posts. Simply do not allow consecutive posts that are extremely similar.
Of course, anyone with knowledge of the measures you take is going to be able to get around them. To be honest, there is little you can do if you are the target of a specific attack. Rather, you should focus on preventing more general, unskilled attacks.
A: For human moderators it surely helps to be able to easily find and delete all posts from some IP, or all posts from some user if the bot is smart enough to use a registered account. Likewise the option to easily block IP addresses or accounts for some time, without further administration, will lessen the administrative burden for human moderators.
Using cookies to make bots and human spammers believe that their post is actually visible (while only they themselves see it) prevents them (or trolls) from changing techniques. Let the spammers and trolls see the other spam and troll messages.
A: I have tried doing 'honeypots' where you put a field and then hide it with CSS (marking it as 'leave blank' for anyone with stylesheets disabled) but I have found that a lot of bots are able to get past it very quickly. There are also techniques like setting fields to a certain value and changing them with JS, calculating times between load time and submit time, checking the referer URL, and a million other things. They all have their pitfalls and pretty much all you can hope for is to filter as much as you can with them while not alienating who you're here for: the users.
At the end of the day, though, if you really, really, don't want bots to be sending things through your form you're going to want to put a CAPTCHA on it - best one I've seen that takes care of mostly everything is reCAPTCHA - but thanks to India's CAPTCHA solving market and the ingenuity of spammers everywhere that's not even successful all of the time. I would beware using something that is 'ingenious' but kind of 'out there' as it would be more of a 'wtf' for users that are at least somewhat used to your usual CAPTCHAs.
A: Javascript evaluation techniques like this Invisible Captcha system require the browser to evaluate Javascript before the page submission will be accepted. It falls back nicely when the user doesn't have Javascript enabled by just displaying a conventional CAPTCHA test.
A: Honeypots are one effective method. Phil Haack gives one good honeypot method, that could be used in principle for any forum/blog/etc.
You could also write a crawler that follows spam links and analyzes their page to see if it's a genuine link or not. The most obvious would be pages with an exact copy of your content, but you could pick out other indicators.
Moderation and blacklisting, especially with plugins like these ones for WordPress (or whatever you're using, similar software is available for most platforms), will work in a low-volume environment. If your environment is a low volume one, don't underestimate the advantage this gives you. Personally deciding what is reasonable content and what isn't gives you ultimate flexibility in spam control, if you have the time.
Don't forget, as others have pointed out, that CAPTCHAs are not limited to text recognition from an image. Visual association, math problems, and other non-subjective questions relayed through an image also qualify.
A: Animated captchas - scrolling text - still easy for humans to recognize, as long as you make sure that none of the frames offers something complete to recognize on its own.
multiple choice question - All it takes is a ______ and a smile. idea here is that the user will have to choose/understand.
session variable - checking that a variable you put into a session is part of the request. will foil the dumb bots that simply generate requests but probably not the bots that are modeled like a browser.
math question - 2 + 5 = - this again is to ask a question that is easy to solve but prevents the bots ability to generate a response.
image grid - you create grid of images - select 1 or 2 of a particular type such as 3x3 grid picture of animals and you have to pick out all the birds on the grid.
Hope this gives you some ideas for your new solution.
A: A friend has the simplest anti-spam method, and it works.
He has a custom text box which says "please type in the number 4".
His blog is rather popular, but still not popular enough for bots to figure it out (yet).
A: Please remember to make your solution accessible to those not using conventional browsers. The iPhone crowd are not to be ignored, and those with vision and cognitive problems should not be excluded either.
A: Sblam is an interesting project.
A: Shocking, but almost every response here included some form of CAPTCHA. The OP wanted something different, I guess maybe he wanted something that actually works, and maybe even solves the real problem.
CAPTCHA doesn't work, and even if it did - it's the wrong problem - humans can still flood your system, and by definition CAPTCHA won't stop that (because it's designed only to tell if you're a human or not - not that it does that well...)
So, what other solutions are there? Well, it depends... on your system and your needs.
For instance, if all you're trying to do is limit how many times a user can fill out a "Contact Me" form, you can simply throttle how many requests each user can submit per hour/day/whatever. If your users are anonymous, maybe you need to throttle according to IP addresses, and occasionally blacklist an IP (though this too can be circumvented, and causes other problems).
If you're referring to a forum or blog comments (such as this one), well the more I use it the more I like the solution. A mix between authenticated users, authorization (based on reputation, not likely to be accumulated through flooding), throttling (how many you can do a day), the occasional CAPTCHA, and finally community moderation to cleanup the few that get through - all combine to provide a decent solution. (I wonder if Jeff can provide some info on how much spam and other malposts actually get through...?)
Another control to consider (dont know if they have it here), is some form of IDS/IPS - if you can detect and recognize spam, you can block THAT pattern. Moderation fills that need manually, here...
Note that any one of these does not prevent the spam, but incrementally lowers the probability, and thus the profitability. This changes the economic equation, and leaves CAPTCHA to actually provide enough value to be worth it - since it's no longer worth it for the spammers to bother breaking it or going around it (thanks to the other controls).
A: Give the user the possibility to calculate:
What is the sum of 3 and 8?
By the way: I just came across an interesting approach from Microsoft Research: Asirra.
http://research.microsoft.com/asirra/
It shows you several pictures and you have to identify the pictures with a given motif.
A: Try Akismet
Captchas or any form of human-only questions are horrible from a usability perspective. Sometimes they're necessary, but I prefer to kill spam using filters like Akismet.
Akismet was originally built to thwart spam comments on WordPress blogs, but the API is capabable of being adapted for other uses.
Update: We've started using the ruby library Rakismet on our Rails app, Yarp.com. So far, it's been working great to thwart the spam bots.
A: Invisible form fields. Make a form field that doesn't appear on the screen to the user, using display: none as a CSS style so that it doesn't show up. For accessibility's sake, you could even put hidden text so that people using screen readers would know not to fill it in. Bots almost always fill in all fields, so you could block any post that filled in the invisible field.
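A minimal sketch of such a honeypot (the field name is illustrative):
<!-- hidden from sighted users; bots tend to fill it in anyway -->
<div style="display: none" aria-hidden="true">
  <label for="website">Leave this field blank</label>
  <input type="text" id="website" name="website" value="">
</div>
<!-- server side: reject the submission if "website" is non-empty -->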
A: Block access based on a blacklist of spammers IP addresses.
A: Honeypot techniques put an invisible decoy form at the top of the page. Users don't see it and submit the correct form, bots submit the wrong form which does nothing or bans their IP.
A: I've seen a few neat ideas along the lines of Asirra which ask you to identify which pictures are cats. I believe the idea originated from KittenAuth a while ago.
A: Use something like the google image labeler with appropriately chosen images such that a computer wouldn't be able to recognise the dominant features of it that a human could.
The user would be shown an image and would have to type words associated with it. They would keep being shown images until they have typed enough words that agreed with what previous users had typed for the same image. Some images would be new ones that they weren't being tested against, but were included to record what words are associated with them. Depending on your audience you could also possibly choose images that only they would recognise.
A: Mollom is supposedly good at stopping spam. Both personal (free) and professional versions are available.
A: I know some people mentioned ASIRRA, but if you go to the adopt-me links for the images, the linked page will say whether it's a cat or dog. So it should be relatively easy for a bot to just follow all the adopt-me links, and it's just a matter of time for that project.
A: just verify the email address and let google/yahoo etc worry about it
A: You could get some device ID software. The41 has some fraud-prevention software that can detect the hardware being used to access your site. I believe they use it to catch fraudsters, but it could be used to stop bots. Once you have identified a device being used by a bot, you can just block that device. Last time I checked it could even trace your route through the phone network (not your Geo-IP!), so you can even block a post code if you want.
It's expensive though, so probably better to find a cheaper solution that is a little less Big Brother.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
}
|
Q: What are some GUI clients available for Mercurial? Also, where would I find them?
A: If you're on OSX then MacHg is IMHO quite nice. (I wrote it and maintain it...)
A: Tortoise HG. All the tortoise goodness, now for Mercurial.
UPDATE july 2020:
The original official website linked above is abandoned.
The project moved to: https://foss.heptapod.net/mercurial/tortoisehg/thg
A: If you're using eclipse:
http://bitbucket.org/mercurialeclipse/main
A: If you're an OS X user, Murky is pretty decent.
A: IMO best GUI Hg client for OSX is SoureTree - http://sourcetreeapp.com
I was using MacHg, which is ok (and free), but SourceTree has better support, ongoing development and better workflow.
A: GUI clients & Other tools: https://www.mercurial-scm.org/wiki/OtherTools
A: Visual Studio Code has a simple plugin for a few basic operations:
https://marketplace.visualstudio.com/items?itemName=mrcrowl.hg
The pending changes are highlighted in the editor and support for the basic commit/push/pull/update workflows.
A: If you want to use the console you can check out: https://bitbucket.org/lc2817/hgv
A: There is also hgview http://www.hgview.org/ which has an ncurses-based console UI somewhat like tig, as well as a GUI for viewing the logs. You can only view the logs from this program, but that is really all I want a user interface for.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
}
|
Q: How do I name a result group in a Regex? (.Net) How do I name a result group in a Regex? (.Net)
A: (?<NameOfGroup>[a-z]*)
Use the ? syntax like in the sample above.
A: (?<NAME>EXPRESSION) or (?'NAME'EXPRESSION)
A: (?<first>group)(?'second'group).
http://www.regular-expressions.info/named.html
A: (?<name>.*)
Substitute name with whatever you want to call your group. Whatever follows is the regular expression you expect to match; the result is stored under that group.
See my post this morning for an example of how to parse a file list for name and number index:
How to get files from 1 to n in a particular pattern?
A: (?<name>subpattern)
Where <name> is the name of the group, and subpattern is the expression to match.
For example:
(?<words>\w+)
the same thing as \w+, but named "words"
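To read a named group back out in C# (a minimal sketch):
using System.Text.RegularExpressions;

Match m = Regex.Match("John 42", @"(?<name>\w+) (?<age>\d+)");
if (m.Success)
{
    string name = m.Groups["name"].Value; // "John"
    string age = m.Groups["age"].Value;   // "42"
}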
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Conditions when finally does not execute in a .NET try-finally block Basically I’ve heard that certain conditions will cause .NET to blow past the finally block. Does anyone know what those conditions are?
A: You can get a situation where the code in the try block causes a SecurityException to be thrown before the try block entered (instead the exception is thrown when the containing method is called (see http://msdn.microsoft.com/en-us/library/fk6t46tz(VS.71).aspx)), in this situation you never even enter the try block so the code in the finally block is never called.
Other possibilities include StackOverflowException and ExecutionEngineException.
A: Two possibilities:
*
*StackOverflowException
*ExecutionEngineException
The finally block will not be executed when there's a StackOverflowException since there's no room on the stack to even execute any more code. It will also not be called when there's an ExecutionEngineException, which may arise from a call to Environment.FailFast().
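A minimal demonstration of the FailFast case (the process is killed immediately, so the finally block never runs):
try
{
    Console.WriteLine("in try");
    Environment.FailFast("terminating immediately");
}
finally
{
    // never reached: FailFast bypasses finally blocks entirely
    Console.WriteLine("in finally");
}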
A: A finally block on a background thread may not execute. It depends on the main foreground thread: when the main thread completes, it terminates background threads even before they have finished executing.
class Program
{
static void Main(string[] args)
{
Program prgm = new Program();
Thread backgroundThread = new Thread(prgm.CheckBgThread);
backgroundThread.IsBackground = true;
backgroundThread.Start();
Console.WriteLine("Closing the program....");
}
void CheckBgThread()
{
try
{
Console.WriteLine("Doing some work...");
Thread.Sleep(500);
}
finally
{
Console.WriteLine("This should be always executed");
}
}
}
A: There is also the Application.Exit method.
A: Unless the CLR blows up and goes down with an ExecutionEngineException (I've seen a few in the .net 1.1 days with just the right amount of COM Interop :) .. I think finally should always execute.
A: Since async/await, there is another way a finally might get ignored that I haven't seen mentioned in other answers:
static class Program
{
[STAThread]
static void Main()
{
async void ThreadExecutionAsync()
{
try
{
SynchronizationContext.SetSynchronizationContext(
new WindowsFormsSynchronizationContext());
await Task.Yield(); // Yield to the context
// The WindowsFormsSynchronizationContext will schedule the continuation
// on the main thread, so the current thread will die
// and we will never get here...
Debugger.Break();
}
finally
{
// Will never get here either...
Debugger.Break();
}
}
var thread = new Thread(ThreadExecutionAsync);
thread.Start();
Application.Run();
}
}
A: Neither code which follows a finally block, nor code in outer scopes, will execute without the finally block having been started first (an exception within the finally block may cause it to exit prematurely, in which case execution will jump out from the finalizer to an outer scope). If code prior to the finally block gets stuck in an endless loop or a method that never exits, or if the execution context is destroyed altogether, the finally block will not execute.
Note that it is proper to rely upon finally blocks, unlike "Finalize" methods (or C# "destructors") which should not properly be relied upon.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111597",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "51"
}
|
Q: How to capture PCM data from Wave Out How would it be possible to capture the audio programmatically? I am implementing an application that streams in real time the desktop on the network. The video part is finished. I need to implement the audio part. I need a way to get PCM data from the sound card to feed to my encoder (implemented using Windows Media Format).
I think the answer is related to the openMixer(), waveInOpen() functions in Win32 API, but I am not sure exactly what should I do.
How to open the necessary channel and how to read PCM data from it?
Thanks in advance.
A: The new Windows Vista Core Audio APIs have support for this explicitly (called Loopback Recording), so if you can live with a Vista only application this is the way to go.
See the Loopback Recording article on MSDN for instructions on how to do this.
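A heavily abbreviated C++ sketch of that loopback approach (all COM error handling omitted; Vista and later only):
#include <mmdeviceapi.h>
#include <audioclient.h>

void CaptureLoopback()
{
    CoInitialize(NULL);

    IMMDeviceEnumerator *pEnum = NULL;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&pEnum);

    // note: eRender, not eCapture -- we capture what is being played back
    IMMDevice *pDevice = NULL;
    pEnum->GetDefaultAudioEndpoint(eRender, eConsole, &pDevice);

    IAudioClient *pClient = NULL;
    pDevice->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL, (void**)&pClient);

    WAVEFORMATEX *pwfx = NULL;
    pClient->GetMixFormat(&pwfx);
    pClient->Initialize(AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_LOOPBACK,
                        10000000 /* 1 second buffer */, 0, pwfx, NULL);

    IAudioCaptureClient *pCapture = NULL;
    pClient->GetService(__uuidof(IAudioCaptureClient), (void**)&pCapture);
    pClient->Start();

    for (;;) // real code needs a stop condition and event/sleep pacing
    {
        UINT32 packetFrames = 0;
        pCapture->GetNextPacketSize(&packetFrames);
        while (packetFrames != 0)
        {
            BYTE *pData; UINT32 frames; DWORD flags;
            pCapture->GetBuffer(&pData, &frames, &flags, NULL, NULL);
            // pData holds 'frames' frames of PCM in the shared mix format;
            // hand them to the encoder here
            pCapture->ReleaseBuffer(frames);
            pCapture->GetNextPacketSize(&packetFrames);
        }
    }
}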
A: I don't think there is a direct way to do this using the OS - it's a feature that may (or may not) be present on the sound card. Some sound cards have a loopback interface - Creative calls it "What U Hear". You simply select this as the input rather than the microphone, and record from it using the normal waveInOpen() that you already know about.
If the sound card doesn't have this feature then I think you're out of luck other than by doing something crazy like making your own driver. Or you could convince your users to run a cable from the speaker output to the line input :)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: What kind of prefix do you use for member variables? No doubt, it's essential for understanding code to give member variables a prefix so that they can easily be distinguished from "normal" variables.
But what kind of prefix do you use?
I have been working on projects where we used m_ as prefix, on other projects we used an underscore only (which I personally don't like, because an underscore only is not demonstrative enough).
On another project we used a long prefix form, that also included the variable type. mul_ for example is the prefix of a member variable of type unsigned long.
Now let me know what kind of prefix you use (and please give a reason for it).
EDIT: Most of you seem to code without special prefixes for member variables! Does this depend on the language? From my experience, C++ code tends to use an underscore or m_ as a prefix for member variables. What about other languages?
A: We use m_ and then a slightly modified Simonyi notation, just like Rob says in a previous response. So, prefixing seems useful and m_ is not too intrusive and easily searched upon.
Why notation at all? And why not just follow (for .NET) the Microsoft notation recommendations which rely upon casing of names?
Latter question first: as pointed out, VB.NET is indifferent to casing. So are databases and (especially) DBAs. When I have to keep straight customerID and CustomerID (in, say, C#), it makes my brain hurt. So casing is a form of notation, but not a very effective one.
Prefix notation has value in several ways:
*
*Increases the human comprehension of code without using the IDE. As in code review -- which I still find easiest to do on paper initially.
*Ever write T-SQL or other RDBMS stored procs? Using prefix notation on database column names is REALLY helpful, especially for those of us who like using text editors for this sort of stuff.
Maybe in short, prefixing as a form of notation is useful because there are still development environments where smart IDEs are not available. Think about the IDE (a software tool) as allowing us some shortcuts (like intellisense typing), but not comprising the whole development environment.
An IDE is an Integrated Development Environment in the same way that a car is a Transportation Network: just one part of a larger system. I don't want to follow a "car" convention like staying on marked roads, when sometimes, its faster just to walk through a vacant lot. Relying on the IDE to track variable typing would be like needing the car's GPS to walk through the vacant lot. Better to have the knowledge (awkward though it may be to have "m_intCustomerID") in a portable form than to run back to the car for every small change of course.
That said, the m_ convention or the "this" convention are both readable. We like m_ because it is easily searched and still allows the variable typing to follow it. Agreed that a plain underscore is used by too many other framework code activities.
A: Using C#, I've moved from the 'm_' prefix to just an underscore, since 'm_' is a legacy of C++.
The official Microsoft Guidelines tells you not to use any prefixes, and to use camel-case on private members and pascal-case on public members. The problem is that this collides with another guideline from the same source, which states that you should make all code compatible with all languages used in .NET. For instance, VB.NET doesn't make a difference between casings.
So just an underscore for me. This also makes it easy to access through IntelliSense, and external code only calling public members don't have to see the visually messy underscores.
Update: I don't think the C# "this."-prefix helps out the "Me." in VB, which will still see "Me.age" the same as "Me.Age".
A:
No doubt, it's essential for understanding code to give member variables a prefix so that they can easily be distinguished from "normal" variables.
I dispute this claim. It's not the least bit necessary if you have half-decent syntax highlighting. A good IDE can let you write your code in readable English, and can show you the type and scope of a symbol other ways. Eclipse does a good job by highlighting declarations and uses of a symbol when the insertion point is on one of them.
Edit, thanks slim: A good syntax highlighter like Eclipse will also let you use bold or italic text, or change fonts altogether. For instance, I like italics for static things.
Another edit: Think of it this way; the type and scope of a variable are secondary information. It should be available and easy to find out, but not shouted at you. If you use prefixes like m_ or types like LPCSTR, that becomes noise, when you just want to read the primary information – the intent of the code.
Third edit: This applies regardless of language.
A: It depends on which framework I'm using! If I'm writing MFC code then I use m_ and Hungarian notation. For other stuff (which tends to be STL/Boost) then I add an underscore suffix to all member variables and I don't bother with Hungarian notation.
MFC Class
class CFoo
{
private:
int m_nAge;
CString m_strAddress;
public:
int GetAge() const { return m_nAge; }
void SetAge(int n) { m_nAge = n; }
CString GetAddress() const { return m_strAddress; }
void SetAddress(LPCTSTR lpsz) { m_strAddress = lpsz; }
};
STL Class
class foo
{
private:
int age_;
std::string address_;
public:
int age() const { return age_; }
void age(int a) { age_ = a; }
std::string address() const { return address_; }
void address(const std::string& str) { address_ = str; }
};
Now this may seem a bit odd - two different styles - but it works for me, and writing a lot of MFC code that doesn't use the same style as MFC itself just looks ugly.
A: I do not use any prefix at all. If I run into danger of mixing up local variables or method parameters with class members, then either the method or the class is too long and benefits from splitting up.
This (arguably) not only makes the code more readable and somewhat "fluent", but most importantly encourages well structured classes and methods. In the end, it thus boils down to a completely different issue than the prefix or no-prefix dillema.
UPDATE: well, taste and preferences change, don't they.. I now use underscore as the prefix for member variables as it has proven to be beneficial in recognizing local and member variables in the long run. Especially new team members sometimes have hard time when the two are not easily recognizable.
A: None. I used to use underscore, but was talked out of it on a project where the others didn't like it, and haven't missed it. A decent IDE or a decent memory will tell you what's a member variable and what isn't. One of the developers on our project insists on putting "this." in front of every member variable, and we humour him when we're working on areas of code that are nominally "his".
A: I prefix member variables with 'm' and parameters (in the function) with 'p'. So code will look like:
class SomeClass {
private int mCount;
...
private void SomeFunction(string pVarName) {...}
}
I find that this quickly tells you the basic scope of any variable - if no prefix, then it's a local. Also, when reading a function you don't need to think about what's being passed in and what's just a local variable.
A: It really depends on the language.
I'm a C++ guy, and prefixing everything with underscore is a bit tricky. The language reserves stuff that begins with underscore for the implementation in some instances (depending on scope). There's also special treatment for double underscore, or underscore following by a capital letter. So I say just avoid that mess and simply choose some other prefix. 'm' is ok IMO. 'm_' is a bit much, but not terrible either. A matter of taste really.
But watch out for those _leadingUnderscores. You'll be surprised how many compiler and library internals are so named, and there's definitely room for accidents and mixup if you're not extremely careful. Just say no.
A: Most of the time, I use python. Python requires you to use self.foo in order to access the attribute foo of the instance of the current class. That way, the problem of confusing local variables, parameters and attributes of the instance you work on is solved.
Generally, I like this approach, even though I dislike being forced to do it. Thus, my ideal way to do thos is to not do it and use some form of attribute access on this or self in order to fetch the member variables. That way, I don't have to clutter the names with meta-data.
A: I'm a weirdo and I prefix member variables with initials from the class name (which is camel-cased).
TGpHttpRequest = class(TOmniWorker)
strict private
hrHttpClient : THttpCli;
hrPageContents: string;
hrPassword : string;
hrPostData : string;
Most of the Delphi people just use F.
TGpHttpRequest = class(TOmniWorker)
strict private
FHttpClient : THttpCli;
FPageContents: string;
FPassword : string;
FPostData : string;
A: If the language supports the this or Me keyword, then use no prefix and instead use said keyword.
A: Underscore only.
In my case, I use it because that's what the coding standards document says at my workplace. However, I cannot see the point of adding m_ or some horrible Hungarian thing at the beginning of the variable. The minimalist 'underscore only' keeps it readable.
A: It's more important to be consistent than anything, so pick something you and your teammates can agree upon and stick with it. And if the language you're coding in has a convention, you should try to stick to it. Nothing's more confusing than a code base that follows a prefixing rule inconsistently.
For c++, there's another reason to prefer m_ over _ besides the fact that _ sometimes prefixes compiler keywords. The m stands for member variable. This also gives you the ability disambiguate between locals and the other classes of variables, s_ for static and g_ for global (but of course don't use globals).
As for the comments that the IDE will always take care of you, is the IDE really the only way that you're looking at your code? Does your diff tool have the same level of quality for syntax highlighting as your IDE? What about your source control revision history tool? Do you never even cat a source file to the command line? Modern IDEs are fantastic efficiency tools, but code should be easy to read regardless of the context you're reading it in.
A: another trick is naming convention:
All member variables are named as usual, without any prefix (or 'this.' if it is usual to do so in the project)
But they will be easily differentiated from local variable because in my project, those local variables are always named:
*
*aSomething: represents one object.
*someManyThings: list of objects.
*isAState or hasSomeThing: for boolean state.
Any variable which does not begin with 'a', 'some' or 'is/has' is a member variable.
A: Since VB.NET is not case-sensitive, I prefix my member variables with an underscore and camel case the rest of the name. I capitalize property names.
Dim _valueName As Integer
Public Property ValueName() As Integer
A: I'm with the people that don't use prefixes.
IDEs are so good nowadays, it's easy to find the information about a variable at a glance from syntax colouring, mouse-over tooltips and easy navigation to its definition.
This is on top of what you can get from the context of the variable and naming conventions (such as lowerCamelCase for local variables and private fields, UpperCamelCase for properties and methods etc) and things like "hasXXXX" and "isXX" for booleans.
I haven't used prefixes for years, but I did used to be a "this." prefix monster but I've gone off that unless absolutely necessary (thanks, Resharper).
A: A single _ used only as a visual indicator. (C#)
*
*helps to group members with intellisense.
*easier to spot the member variables when reading the code.
*harder to hide a member variable with a local definition.
A: _ instead of this.
I use _ too instead of this. because it's just shorter (4 characters less) and it's a good indicator of member variables. Besides, using this prefix you can avoid naming conflicts. Example:
public class Person {
private String _name;
public Person(String name) {
_name = name;
}
}
Compare it with this:
public class Person {
private String name;
public Person(String name) {
this.name = name;
}
}
I find the first example shorter and more clear.
A: I prefer using this keyword.
That means this.data or this->data instead of some community-dependent naming.
Because:
*
*with nowadays IDEs typing this. popups intellinsense
*its obvious to everyone without knowing defined naming
BTW prefixing variables with letters to denote their type is outdated with good IDEs and reminds me of this article by Joel
A: It kinda depends what language you're working in.
In C# you can reference any member using the 'this' prefix, e.g. 'this.val', which means no prefixes are needed. VB has a similar capability with 'Me'.
In languages where there is a built-in notation for indicating member access I don't see the point in using a prefix. In other languages, I guess it makes sense to use whatever the commonly accepted convention is for that language.
Note that one of the benefits of using a built-in notation is that you can also use it when accessing properties and methods on the class without compromising your naming conventions for those (which is particularly important when accessing non-private members). The main reason for using any kind of indicator is as a flag that you are causing possible side effects in the class, so it's a good idea to have it when using other members, irrespective of whether they are a field/property/method/etc.
A: I use camel case and underscore like many here. I use the underscore because I work with C# and I've gotten used to avoiding the 'this' keyword in my constructors. I camel case method-scoped variables so the underscore reminds me what scope I'm working with at the time. Otherwise I don't think it matters as long as you're not trying to add unnecessary information that is already evident in code.
A: I've used to use m_ perfix in C++ but in C# I prefer just using camel case for the field and pascal case for its property.
private int fooBar;
public int FooBar
{
get { return fooBar; }
set { fooBar = value; }
}
A: I like m_ but as long as convention is used in the code base is used I'm cool with it.
A: Your mul_ example is heading towards Charles Simonyi's Apps Hungarian notation.
I prefer keeping things simple and that's why I like using m_ as the prefix.
Doing this makes it much easier to see where you have to go to see the original declaration.
A: I tend to use m_ in C++, but wouldn't mind to leave it away in Java or C#. And it depends on the coding standard. For legacy code that has a mixture of underscore and m_ I would refactor the code to one standard (given a reasonable code size)
A: I use @.
:D j/k -- but if does kind of depend on the language. If it has getters/setters, I'll usually put a _ in front of the private member variable and the getter/setter will have the same name without the _. Otherwise, I usually don't use any.
A: For my own projects I use _ as a postfix (as Martin York noted above, _ as a prefix is reserved by the C/C++ standard for compiler implementations) and 'i' when working on Symbian projects.
A: In Java, one common convention is to preface member variables with "my" andUseCamelCaseForTheRestOfTheVariableName.
A: None if it's not necessary, single underscore otherwise. Applies for python.
A: If it is really necessary to prefix member variables, I would definitely prefer m_ to just an underscore. I find an underscore on its own reduces readability, and can be confused with C++ reserved words.
However, I do doubt that member variables need any special notation. Even ignoring IDE help, it isn't obvious why there would be confusion between what is a local and what is a member variable.
A: No prefix. And, for purely functional / stack based no variable name. But if I really have to use side effects which I may if I want to output anything then I use p-> where p is an external Pointer to Parameters Passed to my function.
I think using a prefix / prefixes gets silly.
__EXTERN_GLOBAL_hungariannotationvariabletypeVendorName-my_member_prefix-Category-VariableName
Mulling over the mul example, my member variable which is an unsigned long representing the opcode for a multiply instruction might be mulmul.
A: Symbian uses 'i' as a prefix for members and 'a' for parameters.
A: I only use a _ suffix (the prefix _ is reserved in c/c++ as many have noted above). I like it mainly because I hate parameter names like 'aCircle', and I don't like writing out this.circle either unless absolutely necessary. (I only do that for public access member variables, 'cos for these I don't use the underscore suffix).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
}
|
Q: How do you store your code and files for use across machines I am interested to know what strategies people have to keep their code AND work versioned across multiple machines. For example I have a desktop PC running XP, a macbook running OSX and VMWare running XP as well as a sales laptop for running product demos. I want to know how I can always have these in sync. Subversion is a possibility for this but I find it less useful for dealing with binary files - maybe I have overlooked something here. What do other people use, as they must have similar issues? Do they keep all files on a USB drive and never on the local file system? I am not always online so remote storage is not really an option.
A: Like others have said, subversion is your best bet for code. For binary files/non-code, I find DropBox to be very convenient. It stores revisions, has undelete, easy sharing, etc. basically an automagic, web-friendly SVN. Not having to think about it is the biggest plus for me.
A: I use Mercurial for keeping my work files in sync. It's not great for big binaries either, but it lets me commit without being online and makes it easy to branch/merge different versions.
A: Ah the old VCS Debate.
The simplest way to share/sync Source Code is to use some sort of VCS (Version Control System) - this gives you plenty of benefits over being able to keep things synced. There are many VCSs out there, I personally use Bazaar-NG and Subversion - though I'd suggest you trial a few and see how you feel using them.
For syncing general files, especially if it's only for yourself, I'd recommend using "DropBox" (http://www.getdropbox.com/) - I've been using this for the last week or so, and it makes syncing up my multiple machines with a certain set of files so much easier.
It also has some extra features that'd probably be useful for collaboration too, but I haven't tried those out yet.
A: Subversion works just great in our office for sales, project management, design and code files.
A: I store my dotfiles (.zshrc, etc) in a Git repository that is checked out into my homedir. I also do the same for the LaTeX files comprising my classwork.
A: I put important builds in Source Control -- it's fine for binary files.
A: For most files including source code we do use Subversion. It's really great.
If there are larger files or project-management-related documents which are used by people who have no access to the source control system, we use Microsoft SharePoint.
This is especially useful if you are working with people outside your company.
A: I keep all my work encrypted on a USB stick. It also has a bootable Linux partition so I can get into a sensible working development environment from any machine, such as a borrowed work laptop with some software to carry to a conference that I can't move to my own machine.
When you have more people working on the same code, I'd put it in a central Subversion repository and set up scripts (in Windows you could use the autorun feature for the USB stick) to synchronize things between the repo and a USB stick always carried along.
A: FolderShare (http://foldershare.com) is also nice for syncing files. I use it to keep documents, etc. in sync between my laptop and my desktop, for example.
Of course, for code especially this doesn't obviate the need for source control.
A: The main point I see regarding using SVN as a central repository for binary files is that if those files are of any reasonable size, they will take some time to be synced over the net.
So if you don't want to spend time waiting for your files coming in over the net, here are the building blocks for another mirroring solution:
*
*MirrorFolder
No better tool to be found when it comes to syncing a data tank with
several other "local" copies.
*
*TrueCrypt
Use this to encrypt your USB-Tank just in case you drop it somewhere.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Using the Window API, how do I ensure controls retain a native appearance? Some of the controls I've created seem to default to the old Windows 95 theme, how do I prevent this? Here's an example of a button that does not retain the Operating System's native appearance (I'm using Vista as my development environment):
HWND button = CreateWindowEx(NULL, L"BUTTON", L"OK", WS_VISIBLE | WS_CHILD | BS_PUSHBUTTON,
170, 340, 80, 25, hwnd, NULL, GetModuleHandle(NULL), NULL);
I'm using native C++ with the Windows API, no managed code.
A: To add a manifest to the application you need to create a MyApp.manifest file and add it to the application resource file:
//-- This define is normally part of the SDK but define it if this
//-- is an older version of the SDK.
#ifndef RT_MANIFEST
#define RT_MANIFEST 24
#endif
//-- Add the MyApp XP Manifest file
CREATEPROCESS_MANIFEST_RESOURCE_ID RT_MANIFEST "MyApp.manifest"
With newer versions of Visual Studio there is a Manifest Tool tab found in the project settings and the Additional Manifest Files field found on this tab can also be used to define the manifest file.
Here is a simple MyApp.manifest file for a Win32 application:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
<assemblyIdentity
version="1.0.0.1"
processorArchitecture="X86"
name="Microsoft.Windows.MyApp"
type="win32"
/>
<description>MyApp</description>
</assembly>
If your application depends on other dlls these details can also be added to the manifest and Windows will use this information to make sure your application always uses the correct versions of these dependent dlls.
For example here are the manifest dependency details for the common control and version 8.0 C runtime libraries:
<dependentAssembly>
<assemblyIdentity
type="win32"
name="Microsoft.Windows.Common-Controls"
version="6.0.0.0"
processorArchitecture="X86"
publicKeyToken="6595b64144ccf1df"
language="*"
/>
</dependentAssembly>
<dependentAssembly>
<assemblyIdentity
type="win32"
name="Microsoft.VC80.CRT"
version="8.0.50608.0"
processorArchitecture="x86"
publicKeyToken="1fc8b3b9a1e18e3b" />
</dependentAssembly>
A: I believe it has got nothing to do with your code, but you need to set up a proper manifest file to get the themed controls.
Some info here: @msdn.com and here: @blogs.msdn.com
You can see a difference between application with and without manifest here: heaventools.com
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Visual Studio Solutions / Multiple project : How to effectively propagate project properties amongst several C++ projects I am working with a Visual Studio 2005 C++ solution that includes multiple projects (about 30).
Based upon my experience, it often becomes annoying to maintain all the properties of the projects (i.e. include path, lib path, linked libs, code generation options, ...), as you often have to click each and every project in order to modify them.
The situation becomes even worse when you have multiple configurations (Debug, Release, Release 64 bits, ...).
Real life examples:
*
*Assume you want to use a new library, and you need to add the include path to this library to all projects.
How do you avoid having to edit the properties of each and every project?
*Assume you want to test drive a new version of library (say version 2.1beta) so that you need to quickly change the include paths / library path / linked library for a set of projects?
Notes:
*
*I am aware that it is possible to select multiple projects at a time, then right-click and select "properties". However this method only works for properties that were already exactly identical for the different projects: you cannot use it to add an include path to a set of projects that were using different include paths.
*I also know that it is possible to modify the environment options globally (Tools/Options/Projects and Solutions/Directories), however it is not that satisfying since it cannot be integrated into an SCM
*I also know that one can add "Configurations" to a solution. It does not help, since it just creates another set of project properties to maintain
*I know that CodeGear C++ Builder 2009 offers a viable answer to this need through so-called "Option sets" which can be inherited by several projects (I use both Visual Studio and C++ Builder, and I still think C++ Builder rocks on certain aspects as compared to Visual Studio)
*I expect that someone will suggest an "autoconf"-style tool such as CMake, however is it possible to import vcproj files into such a tool?
A: As suggested, you should look at Property Sheets (aka .vsprops files).
I wrote a very short introduction to this feature here.
A: I think you need to investigate properties files, i.e. *.vsprops (older) or *.props (latest)
You do need to add the properties file manually to each project, but once that's done, you have multiple projects, but one .[vs]props file. If you change the properties, all projects inherit the new settings.
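For reference, a minimal VS2005-style property sheet might look something like the following sketch (the sheet name and paths are placeholders, not from the question):
<?xml version="1.0" encoding="Windows-1252"?>
<VisualStudioPropertySheet
    ProjectType="Visual C++"
    Version="8.00"
    Name="CommonSettings"
    >
    <Tool
        Name="VCCLCompilerTool"
        AdditionalIncludeDirectories="..\external\newlib\include"
    />
    <Tool
        Name="VCLinkerTool"
        AdditionalLibraryDirectories="..\external\newlib\lib"
    />
</VisualStudioPropertySheet>
Attach the sheet to each project once (e.g. via the Property Manager), and from then on a single edit to this one file changes the include path for all thirty projects.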
A: Yes, I'd definitely suggest using CMake. CMake is the best tool (I think I've actually tried them all) which can generate Studio project files.
I also had the issue of converting existing .vcproj files into CMakeLists.txt, and I wrote a Ruby-script which takes care of most of the conversion. The script doesn't handle things like post-build steps and such, so some tweaking is necessary, but it will save you the hassle of pulling all the source file names from the .vcproj files.
A: I often need to do something similar since I link to the static runtime libraries. I wrote a program to do it for me. It basically scans all of the subdirectories of whatever path you give it and identifies any .vcproj files it finds. Then one by one, it opens them, modifies them and saves them. Since I only use it rarely, the path is hard-coded, but I think you'll be able to adjust it how you like.
Another approach is to realize that Visual Studio Project files are simply XML files and can be manipulated with your favorite XML class. I've done something using C#'s XmlDocument for updating the include directories when there were A LOT of include directories that I didn't want to type in. :)
I'm including both examples. You will need to modify them to your own needs, but these should get you started.
This is the C++ version:
#include <stdio.h>
#include <tchar.h>
#include <iostream>
#include <fstream>
#include <string>
#include <sstream>
#include <vector>
#include <cassert>
#include <boost/filesystem/convenience.hpp>
#include <boost/filesystem/operations.hpp>
#include <boost/filesystem/path.hpp>
#include <boost/regex.hpp>
#include <boost/timer.hpp>
using boost::regex;
using boost::filesystem::path;
using namespace std;
vector<path> GetFileList(path dir, bool recursive, regex matchExp);
void FixProjectFile(path file);
string ReadFile( path &file );
void ReplaceRuntimeLibraries( string& contents );
void WriteFile(path file, string contents);
int _tmain(int argc, _TCHAR* argv[])
{
boost::timer stopwatch;
boost::filesystem::path::default_name_check(boost::filesystem::native);
regex projFileRegex("(.*)\\.vcproj");
path rootPath("D:\\Programming\\Projects\\IPP_Decoder");
vector<path> targetFiles = GetFileList(rootPath, true, projFileRegex);
double listTimeTaken = stopwatch.elapsed();
std::for_each(targetFiles.begin(), targetFiles.end(), FixProjectFile);
double totalTimeTaken = stopwatch.elapsed();
return 0;
}
void FixProjectFile(path file) {
string contents = ReadFile(file);
ReplaceRuntimeLibraries(contents);
WriteFile(file, contents);
}
vector<path> GetFileList(path dir, bool recursive, regex matchExp) {
vector<path> paths;
try {
boost::filesystem::directory_iterator di(dir);
boost::filesystem::directory_iterator end_iter;
while (di != end_iter) {
try {
if (is_directory(*di)) {
if (recursive) {
vector<path> tempPaths = GetFileList(*di, recursive, matchExp);
paths.insert(paths.end(), tempPaths.begin(), tempPaths.end());
}
} else {
if (regex_match(di->string(), matchExp)) {
paths.push_back(*di);
}
}
}
catch (std::exception& e) {
string str = e.what();
cout << str << endl;
int breakpoint = 0;
}
++di;
}
}
catch (std::exception& e) {
string str = e.what();
cout << str << endl;
int breakpoint = 0;
}
return paths;
}
string ReadFile( path &file ) {
// cout << "Reading file: " << file.native_file_string() << "\n";
ifstream infile (file.native_file_string().c_str(), ios::in | ios::ate);
assert (infile.is_open());
streampos sz = infile.tellg();
infile.seekg(0, ios::beg);
vector<char> v(sz);
infile.read(&v[0], sz);
string str (v.empty() ? string() : string (v.begin(), v.end()).c_str());
return str;
}
void ReplaceRuntimeLibraries( string& contents ) {
regex releaseRegex("RuntimeLibrary=\"2\"");
regex debugRegex("RuntimeLibrary=\"3\"");
string releaseReplacement("RuntimeLibrary=\"0\"");
string debugReplacement("RuntimeLibrary=\"1\"");
contents = boost::regex_replace(contents, releaseRegex, releaseReplacement);
contents = boost::regex_replace(contents, debugRegex, debugReplacement);
}
void WriteFile(path file, string contents) {
ofstream out(file.native_file_string().c_str() ,ios::out|ios::binary|ios::trunc);
out.write(contents.c_str(), contents.length());
}
This is the C# version. Enjoy...
using System;
using System.Collections.Generic;
using System.Text;
using System.Xml;
using System.IO;
namespace ProjectUpdater
{
class Program
{
static public String rootPath = "D:\\dev\\src\\co\\UMC6\\";
static void Main(string[] args)
{
String path = "D:/dev/src/co/UMC6/UMC.vcproj";
FileStream fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
XmlDocument xmldoc = new XmlDocument();
xmldoc.Load(fs);
XmlNodeList oldFiles = xmldoc.GetElementsByTagName("Files");
XmlNode rootNode = oldFiles[0].ParentNode;
rootNode.RemoveChild(oldFiles[0]);
XmlNodeList priorNode = xmldoc.GetElementsByTagName("References");
XmlElement filesNode = xmldoc.CreateElement("Files");
rootNode.InsertAfter(filesNode, priorNode[0]);
DirectoryInfo di = new DirectoryInfo(rootPath);
foreach (DirectoryInfo thisDir in di.GetDirectories())
{
AddAllFiles(xmldoc, filesNode, thisDir.FullName);
}
List<String> allDirectories = GetAllDirectories(rootPath);
for (int i = 0; i < allDirectories.Count; ++i)
{
allDirectories[i] = allDirectories[i].Replace(rootPath, "$(ProjectDir)");
}
String includeDirectories = "\"D:\\dev\\lib\\inc\\ipp\\\"";
foreach (String dir in allDirectories)
{
includeDirectories += ";\"" + dir + "\"";
}
XmlNodeList toolNodes = xmldoc.GetElementsByTagName("Tool");
foreach (XmlNode node in toolNodes)
{
if (node.Attributes["Name"].Value == "VCCLCompilerTool") {
try
{
node.Attributes["AdditionalIncludeDirectories"].Value = includeDirectories;
}
catch (System.Exception e)
{
XmlAttribute newAttr = xmldoc.CreateAttribute("AdditionalIncludeDirectories");
newAttr.Value = includeDirectories;
node.Attributes.InsertBefore(newAttr, node.Attributes["PreprocessorDefinitions"]);
}
}
}
String pathOut = "D:/dev/src/co/UMC6/UMC.xml";
FileStream fsOut = new FileStream(pathOut, FileMode.Create, FileAccess.Write, FileShare.ReadWrite);
xmldoc.Save(fsOut);
}
static void AddAllFiles(XmlDocument doc, XmlElement parent, String path) {
DirectoryInfo di = new DirectoryInfo(path);
XmlElement thisElement = doc.CreateElement("Filter");
thisElement.SetAttribute("Name", di.Name);
foreach (FileInfo fi in di.GetFiles())
{
XmlElement thisFile = doc.CreateElement("File");
String relPath = fi.FullName.Replace(rootPath, ".\\");
thisFile.SetAttribute("RelativePath", relPath);
thisElement.AppendChild(thisFile);
}
foreach (DirectoryInfo thisDir in di.GetDirectories())
{
AddAllFiles(doc, thisElement, thisDir.FullName);
}
parent.AppendChild(thisElement);
}
static List<String> GetAllDirectories(String dir)
{
DirectoryInfo di = new DirectoryInfo(dir);
Console.WriteLine(dir);
List<String> files = new List<String>();
foreach (DirectoryInfo subDir in di.GetDirectories())
{
List<String> newList = GetAllDirectories(subDir.FullName);
files.Add(subDir.FullName);
files.AddRange(newList);
}
return files;
}
static List<String> GetAllFiles(String dir)
{
DirectoryInfo di = new DirectoryInfo(dir);
Console.WriteLine(dir);
List<String> files = new List<String>();
foreach (DirectoryInfo subDir in di.GetDirectories())
{
List<String> newList = GetAllFiles(subDir.FullName);
files.AddRange(newList);
}
foreach (FileInfo fi in di.GetFiles())
{
files.Add(fi.FullName);
}
return files;
}
}
}
A: *.vcxproj files are msbuild files. So you just take a property you don't want in all your project files and delete it. Then put it in your property sheet. Then make sure all the projects files properly import that property sheet.
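A rough sketch of what that looks like in the MSBuild format (the file and path names here are invented):
<!-- Common.props: the shared property sheet -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemDefinitionGroup>
    <ClCompile>
      <AdditionalIncludeDirectories>..\external\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ClCompile>
  </ItemDefinitionGroup>
</Project>
<!-- And in each .vcxproj: -->
<ImportGroup Label="PropertySheets">
  <Import Project="..\Common.props" />
</ImportGroup>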
This can be incredibly tedious for hundreds of files. I wrote a tool to make this interactive:
https://github.com/chris1248/MsbuildRefactor
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
}
|
Q: What happens on deserialization with this? I'm currently converting my ASP.NET v2 application to serialize/deserialize its objects because I want to shift from inproc session state to stateserver. This is because my host, webhost4life, has a nasty tendency to recycle the worker process frequently, thus causing session timeouts. Anyway... the question...
I'm trying to not serialize things I don't need to, i.e. variables that are re-initialised each page, don't need to be serialised. Here's one of them:
Private RollbackQueue As New Queue(Of DataServer.Rollback)
On deserialisation, will RollbackQueue be a) nothing or b) an empty queue? My guess is that when .NET deserialises, it creates the parent object as normal and then fills in the fields one by one. Therefore, the NEW bit will fire.
But that is a guess.
Thanks, Rob.
A: It will be nothing. The CLR serialization logic will create the object uninitialized by way of FormatterServices.GetSafeUninitializedObject without running any construction logic. If you need to ensure the field has a value I would recommend moving such initialization into an Initialize() method that is called both from your constructor and from a method marked with the OnDeserialized attribute.
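A minimal C# sketch of that pattern (Rollback stands in for the questioner's DataServer.Rollback type):
using System;
using System.Collections.Generic;
using System.Runtime.Serialization;

[Serializable]
public class Foo
{
    [NonSerialized]
    private Queue<Rollback> log;

    public Foo()
    {
        Initialize();
    }

    // Runs after the formatter has populated the serialized fields,
    // giving the constructor logic it skipped a second chance to run.
    [OnDeserialized]
    private void OnDeserialized(StreamingContext context)
    {
        Initialize();
    }

    private void Initialize()
    {
        log = new Queue<Rollback>();
    }
}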
A: Why not write a simple test application to find out? Here's one I wrote (excuse the C# instead of VB, but I have the C# Express version of VS2008 open at the moment).
using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
class TestClass
{
[NonSerialized]
public Queue<string> queue = new Queue<string>();
}
class Program
{
static void Main(string[] args)
{
var obj = new TestClass();
Console.WriteLine("Original is null? {0}", obj.queue == null);
var stream = new MemoryStream();
var formatter = new BinaryFormatter();
formatter.Serialize(stream, obj);
stream.Position = 0L;
var copy = (TestClass)formatter.Deserialize(stream);
Console.WriteLine("Copy is null? {0}", copy.queue == null);
Console.ReadLine();
}
}
The output from this is
Original is null? False
Copy is null? True
Now you know for sure, that it will be null when deserialized. Kent has already explained in another post why this is the case, and what you can do about it, so I won't re-state it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Locking Row in SQL 2005-2008 Is there a way to lock a row in the SQL 2005-2008 database without starting a transaction, so other processes cannot update the row until it is unlocked?
A: You can use RowLock or other hints but you should be careful..
The HOLDLOCK hint will instruct SQL Server to hold the lock until you commit the transaction. The ROWLOCK hint will lock only this record and not issue a page or table lock.
The lock will also be released if you close your connection or it times out. I'd be VERY careful doing this since it will stop any SELECT statements that hit this row dead in their tracks. SQL Server has numerous locking hints that you can use. You can see them in Books Online when you search on either HOLDLOCK or ROWLOCK.
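For illustration, a minimal ADO.NET sketch of holding a lock on a single row for the life of a transaction (the table, column and connection string are made up):
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlTransaction tx = conn.BeginTransaction())
    {
        // UPDLOCK + ROWLOCK + HOLDLOCK: take an update lock on just this
        // row and keep it until the transaction commits or rolls back.
        SqlCommand cmd = new SqlCommand(
            "SELECT Amount FROM Accounts WITH (UPDLOCK, ROWLOCK, HOLDLOCK) " +
            "WHERE AccountId = @id", conn, tx);
        cmd.Parameters.AddWithValue("@id", 42);
        object amount = cmd.ExecuteScalar();

        // ... other writers and UPDLOCK readers block on this row here ...

        tx.Commit(); // the lock is released
    }
}
Note that UPDLOCK still lets plain shared reads through; use XLOCK instead if readers must block as well.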
A: Everything you execute in the server happens in a transaction, either implicit or explicit.
You can not simply lock a row with no transaction (make the row read only). You can make the database read only, but not just one row.
Explain your purpose and there might be a better solution: isolation levels, lock hints, row versioning.
A: Do you need to lock a row, or should Sql Server's Application locks do what you need?
An application lock is just a lock with a name that you can "lock", "unlock" and check if it is locked. See the above link for details. (They get unlocked if your connection gets closed etc, so they tend to clean themselves up.)
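A hedged ADO.NET sketch of taking such a lock (the resource name, timeout and connection string are arbitrary):
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();

    // Take a named, session-owned lock; no transaction required.
    SqlCommand acquire = new SqlCommand("sp_getapplock", conn);
    acquire.CommandType = CommandType.StoredProcedure;
    acquire.Parameters.AddWithValue("@Resource", "Customer:42");
    acquire.Parameters.AddWithValue("@LockMode", "Exclusive");
    acquire.Parameters.AddWithValue("@LockOwner", "Session");
    acquire.Parameters.AddWithValue("@LockTimeout", 5000);
    SqlParameter result = acquire.Parameters.Add("@ReturnValue", SqlDbType.Int);
    result.Direction = ParameterDirection.ReturnValue;
    acquire.ExecuteNonQuery();

    if ((int)result.Value >= 0) // 0 or 1 means the lock was granted
    {
        try
        {
            // ... the "row" is logically locked; do the work ...
        }
        finally
        {
            SqlCommand release = new SqlCommand("sp_releaseapplock", conn);
            release.CommandType = CommandType.StoredProcedure;
            release.Parameters.AddWithValue("@Resource", "Customer:42");
            release.Parameters.AddWithValue("@LockOwner", "Session");
            release.ExecuteNonQuery();
        }
    }
}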
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: "Cocoa Touch Application" Template from Xcode 3.1.1 Just installed the latest SDK for iPhone 2.1. When I go to File -> New Project in Xcode, under the iPhone OS Application selection, I don't see a template icon for "Cocoa Touch Application". Am I missing something? Anything else I need to download other than the iPhone SDK? If not, how do I add it to the "iPhone OS Application" templates?
A: OK, after some more digging, I found several posts which seem to indicate that the template names have been changed (from the apple support site). So the problem is not with our templates, it is with the video tutorials - they have not been updated. Here is the template mapping between old and new, best I can tell:
Cocoa Touch OpenGL Application -> OpenGL ES Application
Cocoa Touch Tab Bar -> Tab Bar Application
Cocoa Touch Utility -> Utility Application
Cocoa Touch Application -> Window-based Application / View based application
Cocoa Touch List -> Navigation based Application
A: All the templates (under iPhone) are Cocoa based.
The difference between them is basically how you set up the main View and the navigational controls that are installed by default.
A: You shouldn't need to add any templates, this is what happens by default.
The closest thing to a normal Cocoa Touch application would be the Window-Based application as it gives you a window and a delegate...
The others, like Martin said, have different styles already applied to them... OpenGL, Navigation Controllers, Views, etc.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How do "spikes" figure in the schedule / estimation game? Might be subjective and/or discussion.. but here goes.
I've been asked to estimate a feature for the next big thing at work. I break it down.. use story points, come up with an estimate. The feature however calls for interfacing with GoDiagrams, a third-party diagramming component, in addition to various other company initiatives.. (a whole set of 2008_Limited_Edition frameworks/services:). I've been tracking myself using a burn-up chart and I find that I'm unable to sustain my pace primarily due to "spikes".. Definition
I estimate for 2 points a week and then I find myself working weekends (well trying to.. end up neither here nor there) because I can't figure out where to hook in so that I can preview user-actions, show a context menu, etc. In the end I spend time making spikes that throw my schedule off-track... and decreases its value.. doesn't give the right picture.
Spikes are needed to drive nails through the planks of ignorance. But how are they factored into the estimation equation? Doing all required spikes before the feature seems wrong.. (might turn out to be YAGNI) Doing it in between disrupts my flow. Right now it's during pre-iteration planning.. but this is pushing the touchline out on a weekly basis.
A: I guess you are constantly underestimating
*
*what you do already know about the 3rd party component
*how long it takes you to create usable/helpful spikes for unknown areas
1. Get better at estimating those two things.
So, it's all about experience. No matter what methodology you use, they will help you to use your experience better, not replace it.
2. Try not to get lose track when working on those spikes.
They should be short, time boxed sessions. They are not about playing around with all the possible features listed on the marketing slides.
Give them focus, two or three options to explore. Expect them to deliver one concrete result.
Update(Gishu): To summarize
*
*Spikes need to be explicit tasks defined in the iteration planning step.
*If a spike exceeds the timebox period, stop working on it. Shelve the associated task. Complete the other tasks in the current iteration bucket. Return to the shelved task, or add a more elaborate/broken-down spike to the next iteration along with the associated task. Tag a more conservative estimate to the generation-1 spike the next time.
A: If you run out of time in your timeboxed spike, you should still stop and complete your other committed work. You should then add another spike to your next iteration to complete the necessary work you need to complete in order to accurately estimate the task resulting from the spike.
If there is a concern over spiking things for too long and this becoming a problem - this is one reason I like 1 week iterations. :-)
A: @pointernil..
It's more of no estimation coupled with an Indy-Jones head-first approach to tackling a story. I estimate stories by their content.. currently I don't take into account the time required to find the right incantation for the control library to play nice. That sometimes takes more time than my application logic.. So to rephrase the original question, should spikes be separate tasks in the iteration plan, added on a JIT basis before you start working on a particular story?
My Spikes are extremely focussed.. I just can't wait to get back to the "real" problems. e.g. 'How do I show a context menu from this control?' I may be guilty of not reading the entire 150+ page manual or code samples.. but then time is scarce. The first solution that solves the problem gets the nod and I move on. But when you're unable to find that elusive event or NIH pattern of notification used by the component, spikes can be time-consuming. How do I timebox something that is unknown? e.g. My timebox has elapsed and I still have no clue for plugging-in my custom context menu. How do I proceed? Keep hacking away?
Maybe this comes in the "Buffering Uncertainity" scheme of things.. I'll look if I find something useful in Mike Cohn's book.
A: I agree with pointernil. The only issue is that your estimates are incorrect. Which is no big drama, unless you've just blown out a 3 million dollar project of course :-)
If it happens once, its a learning experience. If it happens again and the result is better, then you've got another learning experience under your belt. If you are constantly underestimating and your percentages are getting worse, you need to wisen up a bit. No methodology will get you out of this.
Spikes just need to be given the time that they need. The one thing I've seen happen repeatedly in my experience is that people expect to be able to nail a technology within a couple of hours, or a day. That just doesn't happen in real life. The simplest issue, even a bug caused by a typo, can have a developer pulling their hair our for huge chunks of time. Be honest about how competent yourself or your staff really are, and put it in the budget.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111665",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Unit testing a multithreaded application? Does anyone have any advice for a consistent way to unit test a multithreaded application? I have done one application where our mock "worker threads" had a thread.sleep with a time that was specified by a public member variable. We would use this so we could set how long a particular thread would take to complete its work, then we could do our assertions. Any ideas of a better way to do this? Any good mock frameworks for .Net that can handle this?
A: TypeMock (commercial) has a unit testing framework that automatically tries to find deadlocks in multithreaded applications and I think can be set up to run threads with predictable context switching.
I saw a demo this week at a show -- apparently it's in Alpha (called Racer)
http://www.typemock.com/Typemock_software_development_tools.html
A: I've come across a research product, called Microsoft Chess. It's specifically designed for non-deterministic testing of multithreaded applications. Downside so far is that it is integrated into VS.
A: Step one is to realise that often much of the code you need to unit test is orthogonal to the threading. This means that you should try and break up the code that does the work from the code that does the threading. Once that's done, you can easily test the code that does the work using normal unit testing practices. But of course, you knew that.
The problem is then one of testing the threading side of the problem, but at least now you have a point where this threading interfaces with the code that does the work and hopefully you have an interface there that you can mock. Now that you have a mock for the code that the threading code calls into, I find the best thing to do is add some events to the mock (this may mean you need to hand roll your mock). The events will then be used to allow the test to synchronise with and block the threading code under test.
So, for example, let's say we have something really simple, a multi-threaded queue that processes work items. You'd mock the work item. The interface might include a 'Process()' method that the thread calls to do the work. You'd put two events in there. One that the mock sets when Process() is called and one that the mock waits on after it has set the first event. Now, in your test you can start up your queue, post a mock work item and then wait on the work item's "I'm being processed" event. If all you're testing is that process gets called, then you can set the other event and let the thread continue. If you're testing something more complex, like how the queue handles multiple dispatch or something, then you might do other things (like post and wait for other work items) before releasing the thread. Since you can wait with a timeout in the test, you can make sure that (say) only two work items get processed in parallel, etc, etc. The key thing is that you make the tests deterministic using events that the threaded code blocks on so that the test can control when they run.
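A bare-bones C# version of that mock might look like this (IWorkItem and the queue's Post method are invented names for the interface between the threading code and the work; the usual System.Threading and test framework usings are assumed):
public class MockWorkItem : IWorkItem
{
    public readonly ManualResetEvent ProcessCalled =
        new ManualResetEvent(false);
    public readonly ManualResetEvent AllowProcessToComplete =
        new ManualResetEvent(false);

    public void Process()
    {
        ProcessCalled.Set();              // tell the test we got here
        AllowProcessToComplete.WaitOne(); // park until the test says go
    }
}

// In the test:
MockWorkItem item = new MockWorkItem();
queue.Post(item); // the queue under test dispatches on its own thread

Assert.IsTrue(item.ProcessCalled.WaitOne(5000, false),
    "work item was never processed");
// ... assert on the queue's state while the worker is parked ...
item.AllowProcessToComplete.Set(); // release the worker thread
Because the test waits on ProcessCalled with a timeout, a broken queue fails the test instead of hanging it.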
I'm sure your situation is more complex, but this is the basic method that I use to test threaded code and it works pretty well. You can take a surprising amount of control over multi-threaded code if you mock out the right bits and put synchronisation points in.
Here is some more info on this kind of thing, though it's talking about a C++ codebase: http://www.lenholgate.com/blog/2004/05/practical-testing.html
A: My advice would be not to rely on unit tests to detect concurrency issues for several reasons:
*
*Lack of reproducibility: the tests will fail only once in a while, and won't be really helpful to pinpoint the problems.
*Erratic failing build will annoy everybody in the team - because the last commit will always be wrongly suspected for being the cause of the failing build.
*Deadlocks when encountered are likely to freeze the build until the execution timeout is encountered which can significantly slow down the build.
*The build environment is likely to be a single CPU environment (think build being run in a VM) where concurrency issues may never happen - no matter how much sleeping time is set.
*It somewhat defeats the idea of having simple, isolated units of validating code.
A: It's important to test multi-threaded code on a multi-processor machine. A dual-core machine may not be sufficient. I've seen deadlocks occur on a 4 processor machine that did not occur on a dual-core single processor. Then you need to create a stress test based on a client program that spawns many threads and makes multiple requests against the target application. It helps if the client machine is multi-processor as well so there is more load on the target application.
A: I don't think that unit tests are an effective way to find threading bugs, but they can be a good way to demonstrate a known threading bug, isolate it, and test your fix for it. I've also used them to test the basic features of some coordinating class in my application, like a blocking queue for example.
I ported the multithreadedTC library from Java to .NET and called it TickingTest. It lets you start up several threads from a unit test method and coordinate them. It doesn't have all the features of the original, but I've found it useful. The biggest thing it's missing is the ability to monitor threads that are started during the test.
A: If you have to test that a background thread does something, a simple technique I find handy is to to have a WaitUntilTrue method, which looks something like this:
bool WaitUntilTrue(Func<bool> func,
int timeoutInMillis,
int timeBetweenChecksMillis)
{
Stopwatch stopwatch = Stopwatch.StartNew();
while(stopwatch.ElapsedMilliseconds < timeoutInMillis)
{
if (func())
return true;
Thread.Sleep(timeBetweenChecksMillis);
}
return false;
}
Used like this:
volatile bool backgroundThreadHasFinished = false;
//run your multithreaded test and make sure the thread sets the above variable.
Assert.IsTrue(WaitUntilTrue(() => backgroundThreadHasFinished, 1000, 10));
This way you don't have to sleep your main testing thread for a long time to give the background thread time to finish. If the background doesn't finish in a reasonable amount of time, the test fails.
A: Not quite a unit test, but you could write some test code which repeatedly calls the code that will execute on different threads, trying to create maximal interleaving between threads with a periodic or final consistency check. Of course this approach has the downside of not being reproducible, so you would need to use extensive logging to figure out what went wrong. This approach would best be coupled with unit tests for each thread's individual tasks.
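For example, here is a crude C# interleaving test over a counter guarded by a lock; the invariant checked at the end is that no increments were lost (the thread and iteration counts are arbitrary, and Assert is whatever your test framework provides):
const int Threads = 8, Iterations = 100000;
int counter = 0;
object gate = new object();

Thread[] workers = new Thread[Threads];
for (int i = 0; i < Threads; i++)
{
    workers[i] = new Thread(delegate()
    {
        for (int j = 0; j < Iterations; j++)
        {
            lock (gate) { counter++; } // the code under test goes here
        }
    });
    workers[i].Start();
}
foreach (Thread t in workers) t.Join();

// Final consistency check: remove the lock above and this fails
// intermittently, which is exactly the kind of bug being hunted.
Assert.AreEqual(Threads * Iterations, counter);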
A: Like GUI Unit-testing, this is a bit of a Waterloo for Automated tests. True threads are by definition.. unpredictable.. they'll interfere in ways that can't be pre-determined. Hence writing true tests is difficult if not impossible.
However you have some company with this... I suggest searching through the archives of the testdrivendevelopment yahoo group. I remember some posts in the vicinity.. Here's one of the newer ones.
(If someone would be kind enough to strafe and paraphrase.. that would be great. I'm too sleepy.. Need to LO SO)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
}
|
Q: How do you automate some routine actions for improving productivity? Every morning, after logging into your machine, you do a variety of routine stuff.
The list can include stuff like opening/checking your email clients, rss readers, launching visual studio, running some business apps, typing some replies, getting the latest version from Source Control, compiling, connecting to a different domain etc. To a large extent, we can automate this using scripting solutions like AutoIt, nightly jobs etc.
I would love to hear from you geeks out there about the things you find yourself doing repeatedly and how you solved it by automating them. Any cool tips?
A: I use Linux. I have a bunch of scripts that do anything I want. Typically I write a script whenever a "block" of work can be reused in the future. For example, simple refactorings, deployments, etc...
Over time I started to combine these blocks, hence getting ever more efficient.
Regarding the "load stuff at startup", under Linux that comes out of the box (you can "save your current session" when you log out or turn off the computer).
On windows, my suggestion is to use programs that can be automated via command line.
A: A favorite way is to leave the computer on at night or better, if it's a laptop, put it to sleep. Running a web browsing virtual machine in VMware or similar works also, you can set the VM start along with the machine and save its state on shutdown, so your web pages and email client stay open. This works for development also if you're doing scripting or something similar where the performance hit of the VM on large compiles won't negate the benefits.
A: SlickRun is very handy for this, just a few keys to navigate to anything common and a very small footprint. With input variables and file path recognition all part of it I can quick remote desktop to any machine, search anything, pull up whatever's needed.
A: On OS X, I have an Applescript that I run at the beginning of the day. It sets an away message on IM, hides or quits programs that would distract me, gets new mail, and so forth. I also plug in my USB backup disk, so when I'm going home, another script ejects it and quits some programs. When the script is done, so am I.
I invoke these scripts with key combos using Quicksilver.
If you don't have a Mac, by the way, Quicksilver and Applescript are probably the #1 and #2 reasons to switch. Between the two of them, you can tell your computer to do practically anything you want in very short order.
A: Use a good app launcher such as Quicksilver or Launchy to cut down on the time it takes to perform simple tasks. They're usually not scriptable, but they do let you do each step faster.
A: Writing shell scripts (Applescript, Bash, PowerShell, etc..) is a great way to automate most mundane tasks, assuming your apps are scriptable, as well as pick up a new language. As you venture further into this practice, you'll find yourself more and more annoyed at the apps you use that aren't scriptable, to the point where it starts to affect your choice of apps ;-)
Also, consider a cron job, Windows scheduled task, or similar OS X analog to automatically run certain tasks at certain times of day/week/month/year. You can use this for anything from the "workday morning" scripts mentioned previously, to reminding you of your wife's birthday and anniversary every year. There's some more info here for *NIX systems, or here for Windows boxes.
Happy automation!
A: I have a hard time wrapping my head around Applescript, but since Apple runs BASH scripts just fine, I just use those instead. I've got a development server on my mac, so I've got a script that I can run to create a new site directory, create a new virtual host in apache, add a new domain to my /etc/hosts file, etc.
It's especially cool to integrate Bash (or probably applescript, although I don't know how) with Growl. That way, you can put a nice message up on the screen, complete with a png icon. This is more useful for things that your scripts do during the day though.
A: I do most of my programming work on a development server at work, so in the evening I simply detach my screen session and re-attach it in the morning, so it takes just a few seconds until I'm exactly where I left the day before.
I have some macros defined in mutt to clean up my inbox (archive commit mails etc.), I have a script that mounts some directories on the development server on my notebook via sshfs (works without interaction using public keys), and after that all I have to do is start up a browser and get a coffee. :)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: DropShadowBitmapEffect Doesn't work on TextBlock Does anyone know why the DropShadowBitmapEffect and the EmbossBitmapEffect won't work on a TextBlock (not textBOX) in WPF? OuterGlow, Blur and Bevel seem to work fine.
The transparent background brush is apparently not the answer because you can get a dropshadow with a null background brush. The default softness on a dropshadow is 50% and if you have a small font, the softness dissipates the shadow too much. There seems to be a steep drop-off around a softness of 39% (at which point the shadow more or less disappears). Try setting it to 0 and slowly moving your way up until you find a number that still shows the shadow.
Yet another note: the softness is definitely a factor, but be aware in Xaml the valid values are really only 0 to 1, but in Blend it shows it as a percentage up to 100. So if you set the value to 100 in Xaml, it will be completely dissipated.
The background brush = transparent solution still may work for the embossing effect
A: Bitmap effects work by looking at the post-rendered pixels and running standard image manipulation on them. It should only be dependent on the color of the pixels. I wonder if their algorithms don't work well on white. Try changing the color to see if that has an effect -- if it does, you might want to try putting a black panel underneath with drop shadow set on it.
Edit: The questioner found the answer
"Thanks for pointing me in the correct general direction. It wasn't the color of the text or the DropShadow that mattered, what is needed is to make the Background Brush on the TextBlock the Transparent Brush (Alpha = 0) instead of null."
A: Important Sidenote: you shouldn't really be using BitmapEffects any more. Use the Effect property based on ShaderModel effects introduced in .net 3.5 SP1, it uses hardware rendering and has far better performance.
More Information
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Ruby - Ensure Syslog Gets Closed Is it absolutely critical that I always close Syslog when I'm done using it? Is there a huge negative impact from not doing so?
If it turns out that I definitely need to, what's a good way to do it? I'm opening Syslog in my class constructor and I don't see a way to do class destructors in Ruby, and currently have something resembling this:
class Foo
def initialize
@@log = Syslog.open("foo")
end
end
I don't immediately see the place where the Syslog.close call should be, but what do you recommend?
A: The open method accepts a block. Do something like this:
class Foo
def do_something
Syslog.open("foo") do |log|
# work with the syslog here; it is closed
# automatically when the block exits
end
end
end
A: It looks like you're opening it as a class variable... so the proper way would be to do...
class Foo
def initialize
@@log = Syslog.open("foo")
end
def Foo.finalize(id)
@@log.close if @@log
end
end
Though this is not necessarily predictable or supported. It's the way to do it if you're going to keep the code the way you have it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: 6502 CPU Emulation It's the weekend, so I relax from spending all week programming by writing a hobby project.
I wrote the framework of a MOS 6502 CPU emulator yesterday, the registers, stack, memory and all the opcodes are implemented. (Link to source below)
I can manually run a series of operations in the debugger I wrote, but I'd like to load a NES rom and just point the program counter at its instructions, I figured that this would be the fastest way to find flawed opcodes.
I wrote a quick NES rom loader and loaded the ROM banks into the CPU memory.
The problem is that I don't know how the opcodes are encoded. I know that the opcodes themselves follow a pattern of one byte per opcode that uniquely identifies the opcode,
0 - BRK
1 - ORA (D,X)
2 - COP b
etc
However I'm not sure where I'm supposed to find the opcode argument. Is it the the byte directly following? In absolute memory, I suppose it might not be a byte but a short.
Is anyone familiar with this CPU's memory model?
EDIT: I realize that this is probably shot in the dark, but I was hoping there were some oldschool Apple and Commodore hackers lurking here.
EDIT: Thanks for your help everyone. After I implemented the proper changes to align each operation the CPU can load and run Mario Brothers. It doesn't do anything but loop waiting for Start, but its a good sign :)
I uploaded the source:
https://archive.codeplex.com/?p=cpu6502
If anyone has ever wondered how an emulator works, it's pretty easy to follow. Not optimized in the least, but then again, I'm emulating a CPU that runs at 2 MHz on a 2.4 GHz machine :)
A: If you look into references like http://www.atarimax.com/jindroush.atari.org/aopc.html, you will see that each opcode has an encoding specified as:
HEX LEN TIM
The HEX is your 1-byte opcode. Immediately following it are LEN bytes of its argument. Consult the reference to see what those arguments are. The TIM data is important for emulators - it is the number of clock cycles this instruction takes to execute. You will need this to get your timing correct.
These values (LEN, TIM) are not encoded in the opcode itself. You need to store this data in your program loader/executor. It's just a big lookup table. Or you can define a mini-language to encode the data and reader.
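As a rough C# sketch of that lookup table and the fetch/decode step (only a few entries shown; memory and pc are assumed to be the emulator's 64 KB address space and program counter, and the cycle counts come from the reference):
struct OpcodeInfo
{
    public string Mnemonic;
    public int OperandBytes; // 0, 1 or 2 bytes following the opcode
    public int Cycles;       // base cycles; page crossings may add more

    public OpcodeInfo(string mnemonic, int operandBytes, int cycles)
    {
        Mnemonic = mnemonic; OperandBytes = operandBytes; Cycles = cycles;
    }
}

OpcodeInfo[] table = new OpcodeInfo[256];
table[0x00] = new OpcodeInfo("BRK", 0, 7);
table[0x01] = new OpcodeInfo("ORA (zp,X)", 1, 6);
table[0xA9] = new OpcodeInfo("LDA #imm", 1, 2);
table[0xAD] = new OpcodeInfo("LDA abs", 2, 4);
// ... and so on for the rest of the documented opcodes

// Fetch/decode: the operand bytes sit directly after the opcode,
// and 16-bit operands are little-endian (low byte first).
byte opcode = memory[pc];
OpcodeInfo info = table[opcode];
ushort operand = 0;
if (info.OperandBytes == 1)
    operand = memory[pc + 1];
else if (info.OperandBytes == 2)
    operand = (ushort)(memory[pc + 1] | (memory[pc + 2] << 8));
pc += (ushort)(1 + info.OperandBytes);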
A: The opcode takes one byte, and the operands are in the following bytes. Check out the byte size column here, for instance.
A: This book might help: http://www.atariarchives.org/mlb/
Also, try examing any other 6502 aseembler/simulator/debugger out there to see how Assembly gets coded as Machine Language.
A: The 6502 manuals are on the Web, at various history sites. The KIM-1 shipped with them. Maybe more in them than you need to know.
A: This is better - 6502 Instruction Set matrix:
https://www.masswerk.at/6502/6502_instruction_set.html
A: The Apple II ROMs included a disassembler, I think that's what it was called, and it would show you in a nice format the hex opcodes and the 3-character mnemonic and the operands.
So given how little memory was available, they managed to shove the operand byte count (always 0, 1 or 2) and the 3-character mnemonic for the entire 6502 instruction set into a really small space, because there's really not that much of it.
If you can dig up an Apple II ROM, you can just cut and paste from there...
A: The 6502 has different addressing modes, the same instruction has several different opcodes depending on it's addressing mode. Take a look at the following links which describes the different ways a 6502 can retrieve data from memory, or directly out of ROM.
http://obelisk.me.uk/6502/addressing.html#IMM
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
}
|
Q: Zero SQL deadlock by design - any coding patterns? I am encountering very infrequent yet annoying SQL deadlocks on a .NET 2.0 webapp running on top of MS SQL Server 2005. In the past, we have been dealing with the SQL deadlocks in a very empirical way - basically tweaking the queries until they work.
Yet, I found this approach very unsatisfactory: time consuming and unreliable. I would highly prefer to follow deterministic query patterns that would ensure by design that no SQL deadlock will be encountered - ever.
For example, in C# multithreaded programming, a simple design rule such as the locks must be taken following their lexicographical order ensures that no deadlock will ever happen.
Are there any SQL coding patterns guaranteed to be deadlock-proof?
A: There is no magic general purpose solution to this problem that work in practice. You can push concurrency to the application but this can be very complex especially if you need to coordinate with other programs running in separate memory spaces.
General answers to reduce deadlock opportunities:
*
*Basic query optimization (proper index use), hotspot-avoidant design, holding transactions for the shortest possible time... etc.
*When possible set reasonable query timeouts so that if a deadlock should occur it is self-clearing after the timeout period expires.
*Deadlocks in MSSQL are often due to its default read concurrency model, so it's very important not to depend on it - assume Oracle-style MVCC in all designs. Use snapshot isolation or, if possible, the READ UNCOMMITTED isolation level.
A: Writing deadlock-proof code is really hard. Even when you access the tables in the same order you may still get deadlocks [1]. I wrote a post on my blog that elaborates through some approaches that will help you avoid and resolve deadlock situations.
If you want to ensure two statements/transactions will never deadlock you may be able to achieve it by observing which locks each statement consumes using the sp_lock system stored procedure. To do this you have to either be very fast or use an open transaction with a holdlock hint.
Notes:
*
*Any SELECT statement that needs more than one lock at once can deadlock against an intelligently designed transaction which grabs the locks in reverse order.
A: I believe the following useful read/write pattern is deadlock-proof given some constraints:
Constraints:
*
*One table
*An index or PK is used for reads/writes so the engine does not resort to table locks.
*A batch of records can be read using a single SQL where clause.
*Using SQL Server terminology.
Write Cycle:
*
*All writes within a single "Read Committed" transaction.
*The first update in the transaction is to a specific, always-present record
within each update group.
*Multiple records may then be written in any order. (They are "protected"
by the write to the first record).
Read Cycle:
*
*The default "Read Committed" transaction isolation level
*No explicit transaction
*Read records in a single SELECT statement.
Benefits:
*
*Secondary write cycles are blocked at the write of the first record until the first write transaction completes entirely.
*Reads are blocked/queued/executed atomically between the write commits.
*Achieves transaction-level consistency without resorting to "Serializable".
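A rough T-SQL sketch of the write cycle described above (table and column names are made up):

BEGIN TRANSACTION; -- default READ COMMITTED
-- Updating the group's always-present first record serializes competing writers:
UPDATE GroupHeader SET LastWrite = GETDATE() WHERE GroupId = @GroupId;
-- The remaining records can now be written in any order:
UPDATE GroupDetail SET Value = @Value WHERE GroupId = @GroupId AND ItemId = @ItemId;
COMMIT TRANSACTION;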
I need this to work too so please comment/correct!!
A: Zero deadlocks is basically an incredibly costly problem in the general case because you must know all the tables/objects that you're going to read and modify for every running transaction (this includes SELECTs). The general philosophy is called ordered strict two-phase locking (not to be confused with two-phase commit) (http://en.wikipedia.org/wiki/Two_phase_locking ; even 2PL does not guarantee freedom from deadlocks).
Very few DBMSs actually implement strict 2PL because of the massive performance hit such a thing causes (there are no free lunches), while all your transactions wait around for even simple SELECT statements to execute.
Anyway, if this is something you're really interested in, take a look at SET ISOLATION LEVEL in SQL Server. You can tweak that as necessary. http://en.wikipedia.org/wiki/Isolation_level
For more info, see wikipedia on Serializability: http://en.wikipedia.org/wiki/Serializability
That said -- a great analogy is like source code revisions: check in early and often. Keep your transactions small (in the number of SQL statements and rows modified) and quick (wall-clock time helps avoid collisions with others). It may be nice and tidy to do a LOT of things in a single transaction -- and in general I agree with that philosophy -- but if you're experiencing a lot of deadlocks, you may break the transaction up into smaller ones and then check their status in the application as you move along. TRAN 1 - OK Y/N? If Y, send TRAN 2 - OK Y/N? etc.
As an aside, in my many years of being a DBA and also a developer (of multiuser DB apps measuring thousands of concurrent users) I have never found deadlocks to be such a massive problem that I needed special cognizance of it (or to change isolation levels willy-nilly, etc).
A: As you said, always accessing tables in the same order is a very good way to avoid deadlocks. Furthermore, shorten your transactions as much as possible.
Another cool trick is to combine 2 SQL statements in one whenever you can. Single statements are always transactional. For example, use "UPDATE ... SELECT" or "INSERT ... SELECT", and use "@@ERROR" and "@@ROWCOUNT" instead of "SELECT COUNT" or "IF (EXISTS ...)".
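For example, a single-round-trip "upsert" using @@ROWCOUNT (table and column names are illustrative):

UPDATE Accounts SET Balance = Balance + @Amount WHERE AccountId = @Id;
IF @@ROWCOUNT = 0
    INSERT INTO Accounts (AccountId, Balance) VALUES (@Id, @Amount);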
Lastly, make sure that your calling code can handle deadlocks by reposting the query a configurable number of times. Sometimes it just happens, it's normal behaviour, and your application must be able to deal with it.
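A minimal C# sketch of such retry logic; error 1205 is SQL Server's deadlock-victim error, while the helper method and retry count are illustrative:

// Requires: using System.Data.SqlClient; using System.Threading;
int retries = 3;
while (true)
{
    try
    {
        ExecuteMyQuery(); // your ADO.NET call (hypothetical helper)
        break;
    }
    catch (SqlException ex)
    {
        if (ex.Number != 1205 || --retries == 0)
            throw; // not a deadlock victim, or out of retries
        Thread.Sleep(100); // brief back-off before reposting the query
    }
}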
A: If you have enough design control over your app, restrict your updates / inserts to specific stored procedures and remove update / insert privileges from the database roles used by the app (only explicitly allow updates through those stored procedures).
Isolate your database connections to a specific class in your app (every connection must come from this class) and specify that "query only" connections set the isolation level to "dirty read" ... the equivalent of a (NOLOCK) hint on every join.
That way you isolate the activities that can cause locks (to specific stored procedures) and take "simple reads" out of the "locking loop".
A: In addition to a consistent sequence of lock acquisition, another path is explicit use of locking and isolation hints to reduce time/resources wasted unintentionally acquiring locks such as shared-intent during reads.
A: Something that no one has mentioned (surprisingly) is that, where SQL Server is concerned, many locking problems can be eliminated with the right set of covering indexes for a DB's query workload. Why? Because covering indexes can greatly reduce the number of bookmark lookups into a table's clustered index (assuming it's not a heap), thus reducing contention and locking.
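For instance, an index that "covers" a query avoids the bookmark lookup entirely (names are illustrative):

CREATE NONCLUSTERED INDEX IX_Orders_Customer
ON Orders (CustomerId)
INCLUDE (OrderDate, Total); -- SQL Server 2005 included columns

-- This query can now be answered from the index alone:
SELECT OrderDate, Total FROM Orders WHERE CustomerId = @Id;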
A: The quick answer is no; there is no guaranteed technique.
I don't see how you can make any application deadlock-proof in general as a design principle if it has any non-trivial throughput. If you pre-emptively lock all the resources you could potentially need in a process in the same order, even if you don't end up needing them, you risk the more costly issue where the second process is waiting to acquire the first lock it needs, and your availability is impacted. And as the number of resources in your system grows, even trivial processes have to lock them all in the same order to prevent deadlocks.
The best way to solve SQL deadlock problems, like most performance and availability problems is to look at the workload in the profiler and understand the behavior.
A: Not a direct answer to your question, but food for thought:
http://en.wikipedia.org/wiki/Dining_philosophers_problem
The "Dining philosophers problem" is an old thought experiment for examining the deadlock problem. Reading about it might help you find a solution to your particular circumstance.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
}
|
Q: How do you "preview" user actions like resize or editing in GoDiagrams?
*
*The GoDiagram object model has a GoDocument.
*GoViews have a reference to a GoDocument.
*If the user does any modification on the diagramming surface, a GoDocument.Changed event is raised with the relevant information in the event arguments.
I would like to be notified when some user-actions happen, so that I can confer with my Controller (disallow/cancel it if need be) and then issue view-update orders from there that actually modify the Northwoods GoDiagram third party component.
The Changed event is a notification that something just happened (past tense) - doing all of the above in the event handler results in a .... (wait for it)... StackOverflowException. (GoDocument.Changed handler > updates GoDocument > fires new Changed events...)
So question, how do I get a BeforeEditing or BeforeResizing kind of notification model in GoDiagrams? Has anyone who's been there lived to tell the tale?
A: JFYI...
The component-vendor recommendation is to subclass and override the appropriate methods. Override the bool CanXXX() method and raise a cancelable custom event; if a subscriber cancels it, bail out of CanXXX() (return false to abort the user action).
No built-in mechanism for this in GoDiagrams.
For example, you could define a
CustomView.ObjectResizing cancelable
event. In your override of
GoToolResizing.CanStart, you can raise
that event. If the
CancelEventArgs.Cancel property
becomes true, you would have
CanStart() return false.
Source http://www.nwoods.com/forum/forum_posts.asp?TID=2745
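Following that recommendation, a rough sketch might look like the following. GoToolResizing.CanStart comes from the vendor's advice above; the event name, constructor signature, and namespace are assumptions on my part:

using System.ComponentModel;
using Northwoods.Go; // namespace assumed

public class CancelableResizingTool : GoToolResizing
{
    public CancelableResizingTool(GoView view) : base(view) { } // ctor signature assumed

    // Hypothetical cancelable event raised before a resize starts.
    public event CancelEventHandler ObjectResizing;

    public override bool CanStart()
    {
        if (!base.CanStart())
            return false;
        CancelEventArgs args = new CancelEventArgs(false);
        if (ObjectResizing != null)
            ObjectResizing(this, args);
        return !args.Cancel; // returning false aborts the user's resize
    }
}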
A: The event arguments (GoChangedEventArgs) for the change event have a property, IsBeforeChanging, which indicates whether the event was raised from the "RaiseChanging" method (true) or from "RaiseChanged" (false). That should tell you whether the change has occurred yet, but I know of no way to cancel it.
The best I can suggest is, instead of checking whether the change is allowed before performing it, check whether the change is disallowed after the fact, and if so call the "Undo" method on the arguments in the change event. So essentially:
// In a GoDocument subclass (the override signature is assumed; this could
// equally be a GoDocument.Changed handler):
protected override void OnChanged(GoChangedEventArgs e)
{
    base.OnChanged(e);
    if (NotAllowed) // your own rule deciding whether the change may stand
    {
        e.Undo(); // roll the disallowed change back
    }
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Web interface tool for debian repository? What is the web interface tool that Debian or Ubuntu use for publicizing their custom repositories on the web?
Like packages.debian.org
Is such a tool open source, so that it could be re-used for a custom repository?
A: The scripts that manage the archive are open source; they're in a Debian package called dak. I don't think this includes the web pages, but I'm not sure. I'd suggest emailing ftpmaster@debian.org or debian-www@lists.debian.org and asking.
Parsing the Packages file is indeed very straightforward, but there's still a lot of work to make a nice set of web pages from it, so it would be worth seeing if you can get hold of what Debian uses.
A: You really only need something to parse the Packages file, no? Example Packages file. I've never attempted to do this before, but I can't imagine it being a horrendous task.
Edit: Well it would technically be spidering the repo to process a series of Packages files, but that wouldn't make it too much tougher.
Edit 2: Unless you specify the Packages files manually. Then it would be simple again.
A: There are Perl modules to parse the Packages file if you want to get at that type of information; DPKG::Parse, for example, can do that. You could build a web page from that data, similar to the URL you provided.
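If you would rather not use Perl, the format (blank-line-separated stanzas of "Field: value" lines) is simple enough to parse by hand. A rough Python sketch, with the file path as an assumption:

def parse_packages(path):
    packages, stanza, last_field = [], {}, None
    for line in open(path):
        if not line.strip():              # a blank line ends a stanza
            if stanza:
                packages.append(stanza)
            stanza, last_field = {}, None
        elif line[0] in ' \t':            # continuation of the previous field
            if last_field:
                stanza[last_field] += '\n' + line.strip()
        else:
            field, _, value = line.partition(':')
            last_field = field.strip()
            stanza[last_field] = value.strip()
    if stanza:
        packages.append(stanza)
    return packages

for pkg in parse_packages('Packages'):
    print(pkg.get('Package'), pkg.get('Version'))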
There are also tools in Debian to create a "custom repository." Such a repository might contain your locally built packages, for example, or specific versions of things you want to have around. Tools that you might want to look at to do this are reprepro, apt-ftparchive, mini-dinstall, and debarchiver. I have used reprepro for personal packages and can recommend it; I have not used the others.
Debian uses a tool called dak, but it is designed for a repo with thousands of packages and is poorly documented, since it was designed to be used only by Debian. It is not recommended for personal packages.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Where can I learn about proven methods for sharing cryptographic keys? Suppose that a group wants to encrypt some information, then share the encryption key among the group members in a way that requires the consensus of the group to decrypt the information. I'm interested in a variety of scenarios where the breadth of consensus ranges from unanimity to an absolute majority. A useful technique can apply to symmetric keys, private keys, or both.
I could take a crack at rolling my own method, as I'm sure many SO members could. But for the purposes of this question, I am interested only in methods that have been widely published and have withstood scrutiny by expert cryptanalysts. Journal citations are good, but interpretation of academic sources are very useful too.
A: What you describe sounds a lot like "secret splitting" (Section 12.1, Introduction to Cryptography, Trappe & Washington, 2nd ed.) The basic idea is you can come up with a polynomial that includes your "secret" (a key) as a point on the line. You can give out "shares" by picking other points on this polynomial. Two points define a line of the form f(x) = ax + b, three points define a polynomial of the form f(x) = ax^2 + bx + c, and four points define something of the form f(x) = ax^3 + bx^2 + cx + d, and so on. You can choose a polynomial that includes your secret as a point, and a degree for the polynomial sufficient so that any N people can reconstruct it.
This is the basic idea that is known as the "Shamir threshold scheme."
See wikipedia on Secret Splitting and Shamir's Secret Sharing
The wikipedia page has some links to implementations of this idea, including GPL'd code for Windows and UNIX.
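To make the idea concrete, here is a toy Python sketch of a (k, n) threshold scheme over a prime field. It is illustrative only: a real implementation would need a cryptographically secure random source and a field sized to the actual key.

import random

PRIME = 2**127 - 1  # a Mersenne prime; the field must be larger than the secret

def make_shares(secret, k, n):
    # Degree-(k-1) polynomial with the secret as the constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    # Share i is the point (i, f(i)) on that polynomial.
    return [(i, sum(c * pow(i, e, PRIME) for e, c in enumerate(coeffs)) % PRIME)
            for i in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation of f(0) from any k distinct points.
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(123456789, k=3, n=5)
assert recover(shares[:3]) == 123456789  # any 3 of the 5 shares suffice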
A: I have always been fascinated by this secret sharing technique; I've seen code implementing it on the internet, but have never seen actual applications. The Wikipedia article on Shamir's secret sharing links to some actual code, as well as the original academic article.
A: This is easy to implement with error-correcting codes. You could use a command-line tool such as par2 (which is not exactly appropriate for this specific purpose, by the way, as it generates recovery blocks of varying size). Let's say you have (n+m) voters and want a quorum of n votes. You generate n key fragments K1, K2, ..., Kn, and m additional ECC blocks P1, ..., Pm of the same size. That way, any n blocks suffice to reconstitute the key K1∘K2∘...∘Kn.
A: Go here for a discussion of the mathematical basis of Shamir's secret sharing and a brief discussion of the type of practical applications it has. Scroll down the page to the lecture notes on Polynomials and Secret Sharing. It's probably a very basic overview of the area, but should be quite interesting for you.
Discrete Mathematics Notes
A: Lotus Notes provides a practical implementation of "silo passwords", whereby access to some resource (data/info/document) is locked to a shared ID. The ID (part of a certified PKI system, I think based on RSA) is set up with 2 or more (I think up to 16) individual user passwords. The certifier/administrator sets up a scheme whereby some number of the available passwords, or all of them, are necessary to "open" the ID for active use. This process is commonly used to lock down Org or OU certificates so that 2 of 5 or 3 of 5 administrators/corporate officers must grant access, ensuring that high-level certificate usage/access is controlled and absentee admin personnel are avoided.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: How to install a masked package in Gentoo 2008? I searched the net and the handbook, but I only managed to learn what a masked package is, not how to install one. I did find some commands, but they don't seem to work on 2008 (looking at them, it seems they are for earlier versions). I have something like this:
localhost ~ # emerge flamerobin
Calculating dependencies
!!! All ebuilds that could satisfy "dev-db/flamerobin" have been masked.
!!! One of the following masked packages is required to complete your request:
- dev-db/flamerobin-0.8.6 (masked by: ~x86 keyword)
- dev-db/flamerobin-0.8.3 (masked by: ~x86 keyword)
I would like to install version 0.8.6, but I don't know how. I found some instructions, but they tell me to edit or write to some files under /etc/portage. However, I don't have /etc/portage on my system:
localhost ~ # ls /etc/portage
ls: cannot access /etc/portage: No such file or directory
A: There are two different kinds of masks in Gentoo: keyword masks and package masks. A keyword mask means that the package is either not supported by (or untested on) your architecture, or still in testing. A package mask means that the package is masked for another reason (and for most users it is not smart to unmask). The solutions are:
*
*Add a line to /etc/portage/package.keywords (Check man portage in the package.keywords section). This is for the keyword problems.
*Add a line to /etc/portage/package.unmask for "package.mask" problems (you can also use package.mask for the converse). This is in the same man page, under the package.unmask section. I advise using versioned atoms here, to avoid shooting yourself in the foot with really broken future versions a couple of months down the line. (A concrete example for the question's case follows.)
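For the flamerobin case from the question, the keyword fix would look something like this (run as root):

mkdir -p /etc/portage
echo "=dev-db/flamerobin-0.8.6 ~x86" >> /etc/portage/package.keywords
emerge flamerobin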
A: These days there's also a more 'automated' solution, called "autounmask". No more file editing needed to unmask!
The great benefit of the package is that it also unmasks / handles keywords of dependencies if needed. It is provided in the package app-portage/autounmask.
/etc/portage/package.keywords and
/etc/portage/package.unmask
can be directories as well nowadays (autounmask handles single files too). In those directories, you can place multiple "autounmask" files, one file per "unmask"-package. If you use single files instead of directories, autounmask will add some kind of header / footer, which makes it easy to remove "unmasks" later if wanted.
A: Simply mkdir /etc/portage and edit as mentioned here: http://gentoo-wiki.com/TIP_Dealing_with_masked_packages#But_you_want_to_install_the_package_anyway...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/111769",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|