Q: Database schema for hierarchical groups I'm working on a database design for a groups hierarchy used as the foundation of a larger system. Each group can contain other groups, and also 'devices' as leaf objects (nothing goes below a device). The database being used is MS SQL 2005. (Though working in MS SQL 2000 would be a bonus; a solution requiring MS SQL 2008 is unfortunately not feasible at this time.) There are different types of groups, and these need to be dynamic and definable at run-time by users. For example, group types might be "customer", "account", "city", or "building", "floor", and each type is going to have a different set of attributes, definable by the user. There will also be business rules applied - e.g., a "floor" can only be contained underneath a "building" group - and again, these are definable at runtime. A lot of the application functionality comes from running reports based on these groups, so there needs to be a relatively fast way to get a list of all devices contained within a certain group (and all sub-groups). Storing groups using the modified pre-order tree traversal technique has the upside that it is fast, but the downside that it is fairly complex and fragile - if external users/applications modify the database, there is the potential for complete breakage. We're also implementing an ORM layer, and this method seems to complicate using relations in most ORM libraries. Using common table expressions with a "standard" id/parent-id groups relation seems to be a powerful way to avoid running multiple recursive queries. Is there any downside to this method? As far as attributes, what is the best way to store them? A long, narrow table that relates back to group? Should a common attribute, like "name", be stored in the groups table instead of the attributes table (a lot of the time, the name will be all that is required to display)? 
Are there going to be performance issues using this method (let's assume a high average of 2000 groups with an average of 6 attributes each, and an average of 10 concurrent users, on a reasonable piece of hardware, e.g., a quad-core Xeon 2 GHz with 4GB RAM, discounting any other processes)? Feel free to suggest a completely different schema than what I've outlined here. I was just trying to illustrate the issues I'm concerned about. A: I'd recommend you actually construct the easiest-to-maintain way (the "standard" parent/child setup) and run at least some basic benchmarks on it. You'd be surprised what a database engine can do with the proper indexing, especially if your dataset can fit into memory. Assuming 6 attributes per group, 2000 groups, and 30 bytes/attribute, you're talking 360KB * expected items/group -- figure 400KB. If you expect to have 1000 items/group, you're only looking at 400MB of data -- that'll fit in memory without a problem, and databases are fast at joins when all the data is in memory. A: Common table expressions will let you get out a list of groups with the parent-child relations. Here is an example of a sproc using CTEs for a different application. It's reasonably efficient, but beware the following caveats:
* If a part occurs more than once in the hierarchy, it will be reported at each location. You may need to post-process the results.
* CTEs are somewhat obtuse and offer limited scope to filter results within the query - the CTE may not appear more than once within the select statement.
Oracle's CONNECT BY is somewhat more flexible, as it doesn't impose nearly as many limitations on the query structure as CTEs do, but if you're using SQL Server this won't be an option. If you need to do anything clever with the intermediate results, then write a sproc that uses the CTE to get a raw query into a temporary table and work on it from there. SELECT INTO will minimise the traffic incurred in this. 
The resulting table will be in cache, so operations on it will be reasonably quick. Some possible physical optimisations that could help:
* Clustered indexes on the parent, so that getting out child nodes for a parent uses less I/O.
* Lots of RAM and (depending on the size of your BOM table) a 64-bit server with even more RAM, so that the main BOM table can be cached in core. On a 32-bit O/S the /3GB boot switch is your friend and has no real downside for a database server.
* DBCC PINTABLE can help force the database manager to hold the table in cache.
Parent-Attribute-Type-Attribute coding tables will not play nicely with CTEs, as you will wind up with a combinatorial explosion in your row counts if you include the attribute table. This would preclude any business logic in the query that filtered on attributes. You would be much better off storing the attributes directly on the BOM table entry. A: Pre-order tree traversal is very handy. You can make it robust by keeping the traversal numbers up to date with triggers. A similar technique which I have used is to keep a separate table of (ancestor_id, descendant_id) which lists all ancestors and descendants. This is nearly as good as pre-order traversal numbers. Using a separate table is handy because, even though it introduces an extra join, it moves the complexity into a separate table. A: The modified pre-order is, essentially, Joe Celko's Nested Sets method. His book, "Trees and Hierarchies...", covers both the adjacency list and NS, with descriptions of the advantages and disadvantages of each. With proper indexing, a CTE over an adjacency list gets the most balanced performance. If you're going for read-mostly, then NS will be faster. What you seem to be describing is a Bill of Materials processor. 
While not M$, Graeme Birchall has a free DB2 book, with a chapter on hierarchy processing using CTE (the syntax is virtually identical, IIRC, in that the ANSI syntax adopted DB2's, which M$ then adopted): http://mysite.verizon.net/Graeme_Birchall/cookbook/DB2V95CK.PDF
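As a self-contained illustration of the adjacency-list-plus-recursive-CTE approach discussed above, here is a sketch using SQLite (SQL Server 2005 spells it WITH ... AS, without the RECURSIVE keyword, but the shape of the query is the same); the groups table and its rows are invented for the example:

```python
import sqlite3

# Hypothetical groups table: plain id/parent_id adjacency list.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE groups (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT);
    INSERT INTO groups VALUES
        (1, NULL, 'customer'),
        (2, 1,    'building'),
        (3, 2,    'floor'),
        (4, 1,    'account');
""")

# One recursive query returns a group plus all of its descendants,
# avoiding a separate round-trip per level of the tree.
subtree = conn.execute("""
    WITH RECURSIVE subtree(id, name) AS (
        SELECT id, name FROM groups WHERE id = 1
        UNION ALL
        SELECT g.id, g.name
        FROM groups g JOIN subtree s ON g.parent_id = s.id
    )
    SELECT id, name FROM subtree
""").fetchall()
print(sorted(subtree))
# → [(1, 'customer'), (2, 'building'), (3, 'floor'), (4, 'account')]
```

The same pattern, seeded with a device-bearing group, is what the reporting queries in the question would run.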
{ "language": "en", "url": "https://stackoverflow.com/questions/112866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Calling an ASP Page through its Class Like in Windows Forms:
Dim myForm as New AForm(Constr-arg1, Constr-arg2)
myForm.Show
... is there a similar way to load a page in ASP.NET? I would like to overload the Page constructor and instantiate the correct Page constructor depending on the situation. A: Can you just link to the page, passing parameters in the query string (after the ? in the URL), and then use them in the constructor (more likely Page_Load)? A: I think the best approach here for ASP.NET is to write a User Control (*.ascx file) that represents the page content, and load different controls based on the current situation using the Page.LoadControl() method. This solution is flexible enough, because the only reference to a control is its name. And this approach is much more useful than page constructor overloading, since you don't rely on strong types, only on controls' names. A: This isn't really the "correct" way to redirect to a page in .NET web programming. Instead, you should call either Response.Redirect("~/newpage.aspx") or Server.Transfer("~/newpage.aspx"). You should then handle the request in the new page's Page_Load handler. You can pass state between the pages by adding to the query string of the redirected URL (i.e. ~/newpage.aspx?q1=test), or by assigning values to the Session store (i.e. Session["q1"] = value).
{ "language": "en", "url": "https://stackoverflow.com/questions/112870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Change Style/Look of Asp:CheckBox using CSS I want to change the standard "3D" look of the standard asp.net checkbox to, say, solid 1px. If I try to apply the styling to the Border, for example, it does just that - draws the standard checkbox with a border around it - which is valid, I guess. Anyway, is there a way to change how the actual checkbox is styled? A: I think the best way to make a CheckBox look really different is not to use the checkbox control at all. Better to use your own images for the checked/unchecked state on top of a hyperlink or image element. Cheers. A: Simplest way, using the ASP checkbox control with custom design:
chkOrder.InputAttributes["class"] = "fancyCssClass";
You can use something like that.. hope that helps A: None of the above work well when using ASP.NET Web Forms and Bootstrap. I ended up using Paul Sheriff's Simple Bootstrap CheckBox for Web Forms:
<style>
    .checkbox .btn, .checkbox-inline .btn {
        padding-left: 2em;
        min-width: 8em;
    }
    .checkbox label, .checkbox-inline label {
        text-align: left;
        padding-left: 0.5em;
    }
    .checkbox input[type="checkbox"] {
        float: none;
    }
</style>
<div class="form-group">
    <div class="checkbox">
        <label class="btn btn-default">
            <asp:CheckBox ID="chk1" runat="server" Text="Required" />
        </label>
    </div>
</div>
The result looks like this... A: Paste this code in your CSS and it will let you customize your checkbox style. However, it's not the best solution - it's pretty much displaying your style on top of the existing checkbox/radio button.
input[type='checkbox']:after {
    width: 9px;
    height: 9px;
    border-radius: 9px;
    top: -2px;
    left: -1px;
    position: relative;
    background-color: #3B8054;
    content: '';
    display: inline-block;
    visibility: visible;
    border: 3px solid #3B8054;
    transition: 0.5s ease;
    cursor: pointer;
}
input[type='checkbox']:checked:after {
    background-color: #9DFF00;
}
A: Why not use the ASP.NET CheckBox control with the ToggleButtonExtender available from the Ajax Control Toolkit? 
A: Rather than use some non-standard control, what you should be doing is using unobtrusive JavaScript to do it after the fact. See http://code.google.com/p/jquery-checkbox/ for an example. Using the standard ASP checkbox simplifies writing the code. You don't have to write your own user control, and all your existing code/pages don't have to be updated. More importantly, it is a standard HTML control that all browsers can recognize. It is accessible to all users, and works if they don't have JavaScript. For example, screen readers for the blind will be able to understand it as a checkbox control, and not just an image with a link. A: Not sure that it's really an asp.net related question.. Give this a shot, lots of good info here: http://www.456bereastreet.com/archive/200409/styling_form_controls/ A: Keep in mind that the asp:CheckBox control actually outputs more than just a single checkbox input. For example, my code outputs
<span class="CheckBoxStyle">
    <input id="ctl00_cphContent_cbCheckBox" name="ctl00$cphContent$cbCheckBox" type="checkbox">
</span>
where CheckBoxStyle is the value of the CssClass attribute applied to the control and cbCheckBox is the ID of the control. To style the input, you need to write CSS to target
span.CheckBoxStyle input { /* Styles here */ }
A: They're dependent on the browser, really. Maybe you could do something similar to the answer in this question about changing the file browse button. A: Well, I went through every solution I could find. The Ajax Control Toolkit works, but it creates weird HTML output with all kinds of spans and other styling that is hard to work with. Using CSS styling with ::before tags to hide the original control's box would work, but if you placed runat=server into the element to make it accessible to the code-behind, the checkbox would not change values unless you actually clicked in the original control. 
In some of the methods, the entire line for the label would end up underneath the checkbox if the text was too long for the viewing screen. In the end (on the advice of @dimarzionist's answer here on this page), I used an asp.net ImageButton and used the code-behind to change the image. With this solution I get nice control over the styles and can determine whether the box is checked from the code-behind.
<asp:ImageButton ID="mycheckbox" CssClass="checkbox" runat="server" OnClick="checkbox_Click" ImageUrl="unchecked.png" />
<span class="checkboxlabel">I have read and promise to fulfill the <a href="/index.aspx">rules and obligations</a></span>
And in the code-behind:
protected void checkbox_Click(object sender, ImageClickEventArgs e)
{
    if (mycheckbox.ImageUrl == "unchecked.png")
    {
        mycheckbox.ImageUrl = "checked.png";
        //Do something if the user checks the box
    }
    else
    {
        mycheckbox.ImageUrl = "unchecked.png";
        //Do something if the user unchecks the box
    }
}
What's more, with this method the <span> you use for the checkbox's text will wrap perfectly with the checkbox.
.checkboxlabel {
    vertical-align: middle;
    font-weight: bold;
}
.checkbox {
    height: 24px; /*height of the checkbox image*/
    vertical-align: middle;
}
{ "language": "en", "url": "https://stackoverflow.com/questions/112883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Monthly Birthday SQL Query How would you retrieve all customers' birthdays for a given month in SQL? What about MySQL? I was thinking of using the following with SQL Server:
select c.name
from cust c
where datename(m, c.birthdate) = datename(m, @suppliedDate)
order by c.name
A: Don't forget the 29th of February...
SELECT c.name
FROM cust c
WHERE
    (
        MONTH(c.birthdate) = MONTH(@suppliedDate)
        AND DAY(c.birthdate) = DAY(@suppliedDate)
    )
    OR
    (
        MONTH(c.birthdate) = 2
        AND DAY(c.birthdate) = 29
        AND MONTH(@suppliedDate) = 3
        AND DAY(@suppliedDate) = 1
        AND (YEAR(@suppliedDate) % 4 = 0)
        AND ((YEAR(@suppliedDate) % 100 != 0) OR (YEAR(@suppliedDate) % 400 = 0))
    )
A: Personally, I would use DATEPART instead of DATENAME, as DATENAME is open to interpretation depending on locale. A: If you're asking for all birthdays in a given month, then you should supply the month, not a date:
SELECT c.name
FROM cust c
WHERE datepart(m, c.birthdate) = @SuppliedMonth
A: I'd actually be tempted to add a birthmonth column if you expect the list of customers to get very large. So far, the queries I've seen (including the example) will require a full table scan, as you're passing the date column to a function and comparing that. If the table is of any size, this could take a fair amount of time, since no index is going to be able to help. So, I'd add the birthmonth column (indexed) and just do (with possible MySQLisms):
SELECT name FROM cust WHERE birthmonth = MONTH(NOW()) ORDER BY name;
Of course, it should be easy to set the birthmonth column either with a trigger or with your client code. A:
SELECT * FROM tbl_Employee
WHERE DATEADD(Year, DATEPART(Year, GETDATE()) - DATEPART(Year, DOB), DOB)
    BETWEEN CONVERT(DATE, GETDATE()) AND CONVERT(DATE, GETDATE() + 30)
This can be used to get upcoming birthdays over the next 30 days.
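For a quick, runnable sanity check of the month-only variant, here is a sketch using SQLite, where strftime('%m', ...) plays the role of SQL Server's DATEPART(m, ...) or MySQL's MONTH(); the cust table and rows are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cust (name TEXT, birthdate TEXT);
    INSERT INTO cust VALUES
        ('Alice', '1980-09-05'),
        ('Bob',   '1975-02-28'),
        ('Carol', '1990-09-17');
""")

# All birthdays in September, regardless of year.
rows = conn.execute("""
    SELECT name FROM cust
    WHERE strftime('%m', birthdate) = '09'
    ORDER BY name
""").fetchall()
print(rows)   # → [('Alice',), ('Carol',)]
```

Note this has the same full-table-scan property the indexed-birthmonth answer warns about, since the function is applied to the stored column.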
{ "language": "en", "url": "https://stackoverflow.com/questions/112892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Determining the size of a file larger than 4GB The code currently does this, and fgetpos does handle files larger than 4GB, but the seek returns an error - so any idea how to seek to the end of a file > 4GB?
fpos_t currentpos;
sok = fseek(fp, 0, SEEK_END);
assert(sok == 0, "Seek error!");
fgetpos(fp, &currentpos);
m_filesize = currentpos;
A: Ignore all the answers with "64" appearing in them. On Linux, you should add -D_FILE_OFFSET_BITS=64 to your CFLAGS and use the fseeko and ftello functions, which take/return off_t values instead of long. These are not part of C but POSIX. Other (non-Linux) POSIX systems may need different options to ensure that off_t is 64-bit; check your documentation. A: If you're on Windows, you want GetFileSizeEx (MSDN). The return value is a 64-bit int. On Linux, stat64 (manpage) is correct. Use fstat (on the descriptor from fileno) if you're working with a FILE*. A: This code works for me on Linux:
int64_t bigFileSize(const char *path)
{
    struct stat64 S;
    if (-1 == stat64(path, &S))
    {
        printf("Error!\r\n");
        return -1;
    }
    return S.st_size;
}
A: (stolen from the glibc manual)
int fgetpos64 (FILE *stream, fpos64_t *position)
This function is similar to fgetpos, but the file position is returned in a variable of type fpos64_t, to which position points. If the sources are compiled with _FILE_OFFSET_BITS == 64 on a 32-bit machine, this function is available under the name fgetpos and so transparently replaces the old interface. A: stat is always better than fseek for determining file size; it will fail safely on things that aren't a file. 64-bit file sizes are an operating-system-specific thing; with gcc you can put "64" on the end of the calls, or you can force it to make all the standard calls 64-bit by default. See your compiler manual for details. A: On Linux, at least, you could use lseek64 instead of fseek.
{ "language": "en", "url": "https://stackoverflow.com/questions/112897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How can you change the Visual Studio IDE profile? Can the Visual Studio IDE profile be changed without clearing all VS settings? A: Tools -> Import and Export Settings.. -> [X] Import Selected ... -> Save Current -> Choose options you wish to change
{ "language": "en", "url": "https://stackoverflow.com/questions/112926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Graphical DIFF programs for Linux I really like Araxis Merge as a graphical DIFF program for the PC. I have no idea what's available for Linux, though. We're running SUSE Linux on our z800 mainframe. I'd be most grateful if I could get a few pointers to what programs everyone else likes. A: xxdiff is lightweight, if that's what you're after. A: If you use Vim, you can use the built-in diff functionality. vim -d file1 file2 takes you right into the diff screen, where you can do all sorts of merges and deletes. A: Beyond Compare has also just been released in a Linux version. Not free, but the Windows version is worth every penny - I'm assuming the Linux version is the same. A: I have used Meld once, which seemed very nice, and I may try it more often. vimdiff works well if you know Vim. Lastly, I would mention I've found xxdiff does a reasonable job for a quick comparison. There are many diff programs out there which do a good job. A: Kompare is fine for diff, but I use dirdiff. Although it looks ugly, dirdiff can do 3-way merge - and you can get everything done inside the tool (both diff and merge). A: Diffuse is also very good. It even lets you easily adjust how lines are matched up, by defining match-points. A: I know of two graphical diff programs: Meld and KDiff3. I haven't used KDiff3, but Meld works well for me. It seems that both are in the standard package repositories for openSUSE 11.0. A: Emacs comes with Ediff. A: Meld and KDiff are two of the most popular. A: There is DiffMerge from SourceGear. It's pretty good. Araxis Merge is one of the programs I miss from Windows. I wonder if it works under Wine ;) Might have to give it a try. A: Subclipse for Eclipse has an excellent graphical diff plugin if you are using SVN (Subversion) source control. A: I use Guiffy and it works well. A: I generally need to diff code from Subversion repositories, and so far Eclipse has worked really nicely for me... 
I use KDiff3 for other work.
{ "language": "en", "url": "https://stackoverflow.com/questions/112932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "233" }
Q: What are the statistics of HTML vs. text email usage? What are the latest figures on people viewing their emails in text-only mode vs. HTML? Wikipedia and its source both seem to reference this research from 2006, which is an eternity ago in internet terms. An issue with supporting both HTML and text-based emails is taking a disproportionate amount of time to resolve, given the likely number of users it is affecting. A: As with web browser usage statistics, it depends entirely on the audience. I have access to a bit of data on this subject, and it seems that text-only email use is very low (for non-technical audiences, at least): <0.1% up to ~6% depending on demographic. It's not that much effort to do both (especially if you can find something to help you do the heavy lifting when creating multipart MIME containers), and you can always write a script to generate text from your HTML or something.
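A minimal sketch of the "do both" option using Python's standard-library email package - a multipart/alternative container lets text-only clients fall back to the plain part while HTML-capable clients render the rich one (the addresses and body text here are placeholders):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart("alternative")
msg["Subject"] = "Newsletter"
msg["From"] = "sender@example.com"
msg["To"] = "reader@example.com"

# Clients render the last alternative they support, so put the
# plain-text fallback first and the HTML version second.
msg.attach(MIMEText("Hello, plain-text readers!", "plain"))
msg.attach(MIMEText("<p>Hello, <b>HTML</b> readers!</p>", "html"))

print(msg.get_content_type())   # → multipart/alternative
```

The resulting message can be handed to smtplib as-is; generating the plain part from the HTML (as the answer suggests) is then just a preprocessing step before the two attach() calls.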
{ "language": "en", "url": "https://stackoverflow.com/questions/112940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Create program installer in Visual Studio 2005? I'm a web guy stuck in "application world" in VS 2005. I created my Windows Forms program and want to give my end users the ability to install it (and some of its resources) into a standard Program Files/App Directory location, along with a start menu/desktop launcher. The help files don't give any instructions (that I can find). This seems like such a trivial task to create an installer - but it's eluding me. Any hints would be greatly appreciated! A: I would suggest using something like WiX (Windows Installer XML). It's the toolkit most products from CodePlex or OOB code drops use, and it's pretty easy to get the hang of. There's also (in version 3) an IDE add-in called Votive to help make things 'easier'. Personally, I find using WiX more flexible than using the built-in Visual Studio installer template, though your mileage may vary. Take a look at http://wix.sourceforge.net/ and there's also a great tutorial at http://www.tramontana.co.hu/wix/. If it seems kind of hard to start off with, persevere - I did, and now I find it perfect for what I need. A: Another option is Inno Setup, a third-party installer which is free, easy to use and excellent: Inno Setup A: The exe file is actually just a bootstrap loader, which launches the MSI file. The MSI file is the actual installation file. A: You're looking for a "Setup Project", which should be under the "Other Project Types" -> "Setup and Deployment" category in the "New Project" dialog. A: Add a "Setup Project" project to your solution: New Project > Other Project Types > Setup and Deployment. You can then choose what is installed and where. A: The Setup Project is the way to go. If you're going to be deploying the installer from a web site, I recommend creating an MSI file as the project output (as opposed to a Setup.exe output). Most of my clients block the download of EXE files.
{ "language": "en", "url": "https://stackoverflow.com/questions/112941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Accessing files across the Windows network with near-MAX_PATH length I'm using C++ and accessing a UNC path across the network. This path is slightly greater than MAX_PATH, so I cannot obtain a file handle. But if I run the program on the computer in question, the path is not greater than MAX_PATH, so I can get a file handle. If I rename the file to have fewer characters (minus the length of the computer name), I can access the file. Can this file be accessed across the network even though the computer name in the UNC path puts it over the MAX_PATH limit? A: There is a feature for this: using \\?\ at the start of the path gets around the MAX_PATH limit. Here is a reference on MSDN: http://msdn.microsoft.com/en-us/library/aa365247(VS.85).aspx For remote machines, you would use a path name such as \\?\UNC\server\share\path\file. The \\?\UNC\ is the special prefix and is not used as part of the actual filename. A: You might be able to get a handle to the file if you try opening it after converting the file name to a short (8.3) file name. Failing that, can you map the directory the file is in as a drive and access the file that way?
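A small sketch of the prefix transformation described in the MSDN-linked answer - the helper name is made up, and this is purely string manipulation (actually opening the file still requires the Unicode (W) variants of the file APIs, per that MSDN page):

```python
def to_extended_length(path):
    """Convert a Windows path to extended-length (\\\\?\\) form."""
    if path.startswith("\\\\?\\"):
        return path                       # already extended-length
    if path.startswith("\\\\"):
        # UNC path: \\server\share\... becomes \\?\UNC\server\share\...
        return "\\\\?\\UNC" + path[1:]
    return "\\\\?\\" + path               # local path like C:\dir\file

print(to_extended_length(r"\\server\share\very\deep\file.txt"))
# → \\?\UNC\server\share\very\deep\file.txt
```

The same transformation is what you would apply before calling CreateFileW from C++ when the UNC path exceeds MAX_PATH.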
{ "language": "en", "url": "https://stackoverflow.com/questions/112946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I add another run level (level 7) in Ubuntu? Ubuntu has 8 run levels (0-6 and S); I want to add run level 7. I have done the following: 1.- Created the folder /etc/rc7.d/, which contains some symbolic links to /etc/init.d/ 2.- Created the file /etc/event.d/rc7. This is its content:
# rc7 - runlevel 7 compatibility
#
# This task runs the old sysv-rc runlevel 7 ("multi-user") scripts. It
# is usually started by the telinit compatibility wrapper.
start on runlevel 7
stop on runlevel [!7]
console output
script
    set $(runlevel --set 7 || true)
    if [ "$1" != "unknown" ]; then
        PREVLEVEL=$1
        RUNLEVEL=$2
        export PREVLEVEL RUNLEVEL
    fi
    exec /etc/init.d/rc 7
end script
I thought that would be enough, but telinit 7 still throws this error:
telinit: illegal runlevel: 7
A: You cannot; the runlevels are hardcoded into the utilities. But why do you need to? Runlevel 4 is essentially unused. And while it's not the best idea, you could repurpose either runlevel 3 or runlevel 5, depending on whether you always/never use X. Note that some *nix systems have support for more than 6 runlevels, but Linux is not one of them. A: I'm not sure how to add them (never needed to), but I'm pretty sure /etc/inittab is where you'd add runlevels. Although I'd have to agree with Zathrus that other runlevels are available but unused. On Debian, only 1 and 2 are really used. I'm not sure how Ubuntu has it set up, though. However, if you have a specific purpose, it should be possible to do. I've just never had to.
{ "language": "en", "url": "https://stackoverflow.com/questions/112964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What are some resources for learning to write specifications? At work I am responsible for writing specifications quite often, and I am also the person who insisted on getting specifications in the first place. The problem is I am unsure how specifications should look and what they should contain. A lot of the time when my boss is writing the specifications (we are both inexperienced at it), they put in table names and things that I don't think belong there. So what is a good way to learn to write a good spec? EDIT: Should a functional spec include things like - assuming I am specifying a web application - the input types (a textbox, dropdown list, etc.)? A: There's a great chapter in Steve McConnell's Code Complete that runs through specification documents and what they should contain. When I was tasked to build an Architecture and Business Analysis team at a company that had never had either, I used McConnell's spec chapter to create the outline for the Technical Specification document. It evolved over time, but by starting out with this framework I made sure we didn't miss anything, and it turned out to be surprisingly usable. When writing specs, a rule of thumb I follow is to have technical documents always start from the general and move to the specific - always restate the business problem(s) or goal(s) that the technical solution is being developed to solve, so the person reading the spec doesn't need to go to other documents to put it in any sort of context. A: See Painless Functional Specs by Joel Spolsky. Some of the things he says every spec should have:
* A disclaimer
* An author. One author
* Scenarios
* Nongoals
* An Overview
* Details, details, details
* Open Issues
* Side notes
A: The most important part of development documentation, in my opinion, is having the correct person do it.
* Requirements Docs - Users + Business Analyst
* Functional Spec - Business Analyst + developer
* Technical Spec (how the functionality will actually be implemented) - Sr. Developer / Architect
* Time estimates for scheduling purposes - The specific developer assigned to the task
Having anyone besides the Sr. Developer / Architect define table structures / interfaces etc. is an exercise in futility, as the more experienced developer will generally throw most of it out. Wikipedia is actually a good start for the Functional Spec, which seems similar to your spec - http://en.wikipedia.org/wiki/Functional_specification. A: The important thing is to get something written down rather than worry about the format. A: Buy books: Requirements Engineering by Ian Sommerville & Pete Sawyer, ISBN 0-471-97444-7, or Software Requirements by Karl Wiegers, ISBN 0-7356-0631-5.
{ "language": "en", "url": "https://stackoverflow.com/questions/112969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Python - When to use file vs open What's the difference between file and open in Python? When should I use which one? (Say I'm on 2.5.) A: Functionally, the two are the same; open will call file anyway, so currently the difference is a matter of style. The Python docs recommend using open. When opening a file, it's preferable to use open() instead of invoking the file constructor directly. The reason is that in future versions they are not guaranteed to be the same (open will become a factory function, which returns objects of different types depending on the path it's opening). A: Only ever use open() for opening files. file() is actually being removed in 3.0, and it's deprecated at the moment. They've had a sort of strange relationship, but file() is going now, so there's no need to worry anymore. The following is from the Python 2.6 docs ([bracketed text] added by me): When opening a file, it's preferable to use open() instead of invoking this [file()] constructor directly. file is more suited to type testing (for example, writing isinstance(f, file)). A: Two reasons: the Python philosophy of "there ought to be one way to do it", and file is going away. file is the actual type (using e.g. file('myfile.txt') is calling its constructor). open is a factory function that will return a file object. In Python 3.0, file is going to move from being a built-in to being implemented by multiple classes in the io library (somewhat similar to Java with buffered readers, etc.). A: According to Mr. Van Rossum, although open() is currently an alias for file(), you should use open() because this might change in the future. A: file() is a type, like an int or a list. open() is a function for opening files, and will return a file object. 
This is an example of when you should use open:
f = open(filename, 'r')
for line in f:
    process(line)
f.close()
This is an example of when you should use file:
class LoggingFile(file):
    def write(self, data):
        sys.stderr.write("Wrote %d bytes\n" % len(data))
        super(LoggingFile, self).write(data)
As you can see, there's a good reason for both to exist, and a clear use-case for both. A: You should always use open(). As the documentation states: When opening a file, it's preferable to use open() instead of invoking this constructor directly. file is more suited to type testing (for example, writing "isinstance(f, file)"). Also, file() has been removed in Python 3.0.
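Since file() is gone in Python 3, the type-testing use case mentioned above is served by the io class hierarchy instead. A small sketch (the temporary file is only there so open() has something to open):

```python
import io
import os
import tempfile

# Create a throwaway file to open.
fd, path = tempfile.mkstemp()
os.close(fd)

f = open(path)                  # Python 3's open(); file() no longer exists
ok = isinstance(f, io.IOBase)   # the Python 3 replacement for isinstance(f, file)
print(ok)                       # → True
f.close()
os.remove(path)
```

Subclassing for the LoggingFile-style use case would likewise target io classes (e.g. io.TextIOWrapper) rather than file in Python 3.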
{ "language": "en", "url": "https://stackoverflow.com/questions/112970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "141" }
Q: Relative path in web.config How can I have a relative path in the web.config file? This value is not in the connection string, so I cannot use |DataDirectory| (I think) - so what can I do? A: What is the relative path for? Are you talking about a physical directory path or a URL path? Edit: I needed to do something similar for one of my projects. I needed to locate a config file that was stored in a certain folder. While the web.config file itself does not provide anything special for this, you can take a path from the web.config file and convert it to an app-relative path. Request.ApplicationPath gets you the base directory of the web application. You can append the relative path to this and give it to whatever needs it. Also see this blog post by Rick Strahl for other interesting directories that may help you.
{ "language": "en", "url": "https://stackoverflow.com/questions/112975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to copy a DLL file from the PC to VS.NET's Pocket PC 2003 simulator? I want to copy a DLL file from the PC to VS.NET's Pocket PC 2003 simulator, so I use the shared folder of the simulator, but I cannot see the DLL file in the file list of the simulator. How do I do it, please? A: Add it to the project's primary output. A: Suggestion: if you need an external DLL to be copied along with your code, add a reference to the DLL, and in its properties pane in Visual Studio make sure that Copy Local is set to True. That might accomplish what you're trying to do.
{ "language": "en", "url": "https://stackoverflow.com/questions/112976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Generics on ASP.NET page Class I want to implement Generics in my Page Class like: Public Class MyClass(Of TheClass) Inherits System.Web.UI.Page But for this to work, I need to be able to instantiate the Class (with the correct Generic Class Type) and load the page, instead of a regular Response.Redirect. Is there a way to do this? A: I'm not sure I fully understand what you want to do. If you want something like a generic Page, you can use a generic BasePage and put your generic methods into that BasePage: Partial Public Class MyPage Inherits MyGenericBasePage(Of MyType) End Class Public Class MyGenericBasePage(Of T As New) Inherits System.Web.UI.Page Public Function MyGenericMethod() As T Return New T() End Function End Class Public Class MyType End Class A: The answer that says to derive a type from the generic type is a good one. However, if your solution involves grabbing a page based upon a type determined at runtime then you should be able to handle the PreRequestHandlerExecute event on the current HttpApplication. This event is called just before a Request is forwarded to a Handler, so I believe you can inject your page into the HttpContext.Current.Handler property. Then you can create the page however you wish.
{ "language": "en", "url": "https://stackoverflow.com/questions/112977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Using Divs to display table-like data I want to display data like the following: Title Subject Summary Date So my HTML looks like: <div class="title"></div> <div class="subject"></div> <div class="summary"></div> <div class="date"></div> The problem is, all the text doesn't appear on a single line. I tried adding display="block" but that doesn't seem to work. What am I doing wrong here? Important: In this instance I don't want to use a table element but stick with div tags. A: Is there a reason to not use tables? If you're displaying tabular data, it's best to use tables - that's what they're designed for. To answer your question, the best way is probably to assign a fixed width to each element, and set float:left. You'll need to have either a dummy element at the end that has clear:both, or you'll have to put clear:both on the first element in each row. This method is still not fool-proof: if the contents of one cell force the div to be wider, it will not resize the whole column, only that cell. You may be able to avoid the resizing by using overflow:auto or overflow:hidden, but this won't work like regular tables at all. A: or indeed this, which is very literally using tables for tabular data: https://stackoverflow.com/badges A: Just to illustrate the remarks of the previous answers urging you to use table instead of div for tabular data: CSS Table gallery is a great way to display beautiful tables in many many different visual styles. A: Sorry, but, I'm going to tell you to use tables. Because this is tabular data. Perhaps you could tell us why you don't want to use tables? It appears to me, and I'm sure to a lot of other people, that you're confused about the "don't use tables" idea. It's not "don't use tables", it's "don't use tables to do page layout". What you're doing here is laying out tabular data, so of course it should be in a table.
In case you're unclear about the idea "tabular data", I define it like this: bits of data whose meaning isn't clear from the data alone, it has to be determined by looking at a header. Say you have a train or bus timetable. It will be a huge block of times. What does any particular time mean? You can't tell from looking at the time itself, but refer to the row or column headings and you'll see it's the time it departs from a certain station. You've got strings of text. Are they the title, the summary, or the date? People will tell that from checking the column headings. So it's a table. A: It looks like you're wanting to display a table, right? So go ahead and use the <table> tag. A: I would use the following markup: <table> <tr> <th>Title</th> <th>Subject</th> <th>Summary</th> <th>Date</th> </tr> <!-- Data rows --> </table> One other thing to keep in mind with all of these div and list based layouts, even the ones that specify fixed widths, is that if you have a bit of text that is wider than the width (say, a url), the layout will break. The nice thing about tables for tabular data is that they actually have the notion of a column, which no other html construct has. Even if this site has some things, like the front page, that are implemented with divs, I would argue that tabular data (such as votes, responses, title, etc) SHOULD be in a table. People that push divs tend to do it for semantic markup. You are pursuing the opposite of this. A: I don't mean to sound patronizing; if I do, I've misunderstood you and I'm sorry. Most people frown upon tables because people use them for the wrong reason. Often, people use huge tables to position things in their website. This is what divs should be used for. Not tables! However, if you want to display tabular data, such as a list of football teams, wins, losses, and ties, you should most definitely use tables. It's almost unheard of (although not impossible) to use divs for this. 
Just remember, if it's for displaying data, you can definitely use a table! A: display:block guarantees that the elements will not appear on the same line. Floating for layout is abuse just like tables for layout is abuse (but for the time being, it's necessary abuse). The only way to guarantee that they all appear on the same line is to use a table tag. That, or display:inline, and use only &nbsp; (Non-Breaking Space) between your elements and words, instead of a normal space. The &nbsp; will help you prevent word wrapping. But yeah, if there's not a legitimate reason for avoiding tables, use tables for tabular data. That's what they're for. A: In the age of CSS frameworks, I really don't see a point of drifting away from the table tag completely. While it is now possible to do display: table-* for whatever element you like, table is still the preferred tag to format data in tabular form (not forgetting it is more semantically correct). Just pick one of the popular CSS frameworks to make tabular data look nice instead of hacking the presentation of <div> tags to achieve whatever they were not designed to do. display: block will certainly not work; try display: inline or float everything to the left then position them accordingly, but if you have tabular data, then it is best to mark it up in a <table> tag some reference: from sitepoint A: The CSS property float is what you're looking for, if you want to stack div's horizontally. Here's a good tutorial on floats: http://css.maxdesign.com.au/floatutorial/ A: If there's a legitimate reason to not use a table then you could give each div a width and then float it. i.e. div.title { width: 150px; float: left; } A: You'll need to make sure that all your "cells" float either left or right (depending on their internal ordering), and they also need a fixed width. Also, make sure that their "row" has a fixed width which is equal to the sum of the cell widths + margin + padding.
Lastly make sure there is a fixed width on the "table" level div, which is the sum of the row width + margin + padding. But if you want to show tabular data you really should use a table; some browsers (more commonly previous-generation ones) handle floats, padding and margin differently (remember the famous IE 6 bug which doubled the margin?). There's been plenty of other questions on here about when to use and when not to use tables which may help explain when and where to use divs and tables. A: Using this code: <div class="title">MyTitle</div><div class="subject">MySubject</div><div class="Summary">MySummary</div> You have 2 solutions (adapt the css selectors to your case): 1 - Use inline blocks div { display: inline; } This will result in putting the blocks on the same line but remove the control you can have over their sizes. 2 - Use float div { width: 15%; /* size of each column : adapt */ float: left; /* this makes the block float to the left of the next one */ } div.last_element /* last_element must be a class of the last div of your line */ { clear: right; /* prevents the next line from jumping onto the previous one */ } The float property is very useful for CSS positioning: http://www.w3schools.com/css/pr_class_float.asp A: The reason the questions page on stack overflow can use DIVs is because the vote/answers counter is a fixed width. A: Tabular data can also be represented as nested lists - i.e. lists of lists: <ul> <li> heading 1 <ul> <li>row 1 data</li> <li>row 2 data</li> </ul> </li> <li> heading 2 <ul> <li>row 1 data</li> <li>row 2 data</li> </ul> </li> </ul> Which you can lay out like a table and is also semantically correct(ish).
A: For the text to appear on a single line you would have to use display:inline Moreover, you should really use lists to achieve this effect <ul class="headers"> <li>Title</li> <li>Subject</li> <li>Summary</li> <li>Date</li> </ul> The style would look like this: .headers{padding:0; margin:0} .headers li{display:inline; padding:0 10px} /* The padding would control the space on the sides of the text in the header */ A: I asked a similar question a while ago, Calendar in HTML, and everyone told me to use tables too. If you have made an igoogle home page, just yoink their code. I made a system of columns and sections within the columns for a page. Notice with google you can't have an infinite number of columns and that offends our sensibilities as object people. Here are some of my findings: * *You need to know the width of the columns *You need to know the number of columns *You need to know the width of the space the columns inhabit. *You need to ensure whitespace doesn't overflow I made a calendar with DIV tags because it is impossible to get XSL to validate without hard-coding a maximum number of weeks in the month, which is very offensive. The biggest problem is every box has to be the same height; if you want any information to be associated with a field in your table with div tags you're going to have to make sure overflow:scroll or overflow:hidden is in your CSS. A: Preface: I'm a little confused by the responses so far, as doing columns using DIVs and CSS is pretty well documented, but it doesn't look like any of the responses so far covered the way it's normally done. What you need is four separate DIVs, each one with a greater "left:" attribute. You add your data for each column into the corresponding DIV (column). Here's a website that should help you. They have many examples of doing columns with CSS/DIV tags: http://www.dynamicdrive.com/style/layouts/ All you have to do is extrapolate from their 2-column examples to your 4-column needs.
A: You should use spans with: display:inline-block This will allow you to set a width for each of the elements while still keeping them on the same line. See here, specifically this section. Now, to appease the downvoters - of course tabular data should be in a table. But he very specifically does NOT WANT a table. The above is the answer to HIS QUESTION!!! A: First, display:block should be display:inline-block, although you might have figured it out already. Second, you can also use display:table, display:table-cell, display:table-row and other properties, although these are not as good as using a table.
{ "language": "en", "url": "https://stackoverflow.com/questions/112983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I access my memory mapped I/O Device (FPGA) from a RTP in VxWorks? When using VxWorks, we are trying to access a memory mapped I/O device from a Real-Time Process. Since RTPs have memory protection, how can I access my I/O device from one? A: There are two methods you can use to access your I/O mapped device from an RTP. I/O Subsystem (preferred) You essentially create a small device driver. This driver can be integrated into the I/O Subsystem of VxWorks. Once integrated, the driver is available to the RTP by simply using standard I/O operations: open, close, read, write, ioctl. Note that "creating a device driver" doesn't have to be complicated. It could be as simple as just defining a wrapper for the ioctl function. See ioLib for more details. Map Memory Directly (not recommended) You can create a shared memory region via the sdOpen call. When creating the shared memory, you can specify what the physical address should be. Specify the address to be your device's I/O mapped region, and you can access the device directly. The problem is that a shared memory region is a public object that is available to any space, and poking directly at hardware goes against the philosophy behind RTPs.
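The shared-memory idea above — a region of addresses that code pokes as if they were device registers — can be illustrated with an anonymous mmap in Python. This is only an analogy: the VxWorks sdOpen() call maps a region at the device's real physical address, whereas the mapping below is just ordinary memory standing in for a register window:

```python
import mmap
import struct

# Illustrative stand-in for a device's register window: an anonymous
# 4 KiB mapping. On VxWorks, sdOpen() would instead map a region at the
# device's physical I/O address; this only shows the access pattern.
PAGE = 4096
region = mmap.mmap(-1, PAGE)  # -1 => anonymous mapping, not a real device

def write_reg(offset, value):
    # Little-endian 32-bit store, as if poking a hardware register.
    struct.pack_into("<I", region, offset, value)

def read_reg(offset):
    # Little-endian 32-bit load from the mapped window.
    return struct.unpack_from("<I", region, offset)[0]

write_reg(0x10, 0xDEADBEEF)   # poke a "register"
print(hex(read_reg(0x10)))    # 0xdeadbeef
```

On real hardware you would also have to worry about volatile access, alignment, and the caching attributes of the mapping, none of which this sketch models.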
{ "language": "en", "url": "https://stackoverflow.com/questions/113001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Writing to the windows logs in Python Is it possible to write to the windows logs in python? A: Yes, just use Windows Python Extension, as stated here. import win32evtlogutil win32evtlogutil.ReportEvent(ApplicationName, EventID, EventCategory, EventType, Inserts, Data, SID)
{ "language": "en", "url": "https://stackoverflow.com/questions/113007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: IIS 6.0 Is Stubbornly Remembering Authentication Settings I have an .asmx in a folder in my application and I keep getting a 401 trying to access it. I have double and triple checked the settings, including the directory security settings. It allows anonymous. I turned off Windows Authentication. If I delete the application and the folder it's in, then redeploy it under the same application name it magically reapplies the old settings. If I deploy the exact same application to a different folder on the server and create another application under a new name and set up the directory security settings again it works!!! How do I get IIS to forget the settings under the original application name? A: After deleting the first application in IIS and its associated files on the disk, try restarting IIS (or your server if possible). Then come back and recreate the whole setup. A: Eventually I got it working again, by deploying to a different folder and recreating the virtual folder / application to it. I am not sure how that makes a difference but at least things are working again. A: I ran into a similar situation with asp.net pages. I had Anonymous on and Integrated off for a virtual directory, but one page was the opposite. Everything worked fine until I went to the one special page, then my post backs stopped working and I couldn't log out of the site until I deployed to a new virtual directory. My eventual solution was to enable anonymous and integrated for the entire site and just turn off anonymous on that one page.
{ "language": "en", "url": "https://stackoverflow.com/questions/113013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What's the best way to pass html embed code via rss feed to a rss parser in php? I'm trying to put an html embed code for a flash video into the rss feed, which will then be parsed by a parser (magpie) on my other site. How should I encode the embed code on one side, and then decode it on the other so I can insert clean html into the DB on the receiving server? A: Since RSS is XML, you might want to check out CDATA, which I believe is valid in the various RSS specs. <summary><![CDATA[Data Here]]></summary> Here's the w3schools entry on it: http://www.w3schools.com/XML/xml_cdata.asp A: htmlencode/htmldecode should do the trick. A: I've been using htmlentities/html_entity_decode but for some reason it doesn't work with the parser. In a normal test it works, but the parser always returns html code without < > " characters. A: RSS is XML. It has very specific rules for encoding HTML. If you're generating it, I'd recommend using an xml library to write the node containing HTML, to be sure you get the encoding right. HTMLencode will only perform the escaping necessary for embedding data within HTML; XML rules are more strict. A: Instead of writing your own RSS XML feed, consider using the Django syndication framework from django.contrib.syndication: https://docs.djangoproject.com/en/dev/ref/contrib/syndication/ It also supports enclosures, which is the RSS way for embedding images or video. For custom tags, there is also a low-level API which allows you to change the XML: https://docs.djangoproject.com/en/dev/ref/contrib/syndication/#the-low-level-framework
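The encode-on-one-side / decode-on-the-other round trip discussed above can be sketched in Python (the thread is PHP, but the principle is identical; the embed snippet below is made up for illustration):

```python
from xml.sax.saxutils import escape, unescape

# Hypothetical embed markup -- the tag and attribute values are
# illustrative only, not a real player's embed code.
embed = '<object width="480"><param name="movie" value="clip.swf"/></object>'

# Feed producer: escape &, < and > so the markup is legal XML text.
encoded = escape(embed)

# Feed consumer: the parser hands back the escaped text; undo the
# escaping before inserting clean HTML into the database.
decoded = unescape(encoded)

assert "<" not in encoded          # safe to drop into an XML element
assert decoded == embed            # round trip is lossless
```

The CDATA approach avoids the escaping entirely on the producer side, but the consumer still receives the raw markup either way.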
{ "language": "en", "url": "https://stackoverflow.com/questions/113024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Stable, efficient sort? I'm trying to create an unusual associative array implementation that is very space-efficient, and I need a sorting algorithm that meets all of the following: * *Stable (Does not change the relative ordering of elements with equal keys.) *In-place or almost in-place (O(log n) stack is fine, but no O(n) space usage or heap allocations.) *O(n log n) time complexity. Also note that the data structure to be sorted is an array. It's easy to see that there's a basic algorithm that matches any 2 of these three (insertion sort matches 1 and 2, merge sort matches 1 and 3, heap sort matches 2 and 3), but I cannot for the life of me find anything that matches all three of these criteria. A: Note: standard quicksort is not O(n log n)! In the worst case, it can take up to O(n^2) time. The problem is that you might pivot on an element which is far from the median, so that your recursive calls are highly unbalanced. There is a way to combat this, which is to carefully pick a pivot which is guaranteed, or at least very likely, to be close to the median. It is surprising that you can actually find the exact median in linear time, although in your case it sounds like you care about speed so I would not suggest this. I think the most practical approach is to implement a stable quicksort (it's easy to keep stable) but use the median of 5 random values as the pivot at each step. This makes it highly unlikely that you'll have a slow sort, and is stable. By the way, merge sort can be done in-place, although it's tricky to do both in-place and stable. A: What about quicksort? Exchange can do that too, might be more "stable" by your terms, but quicksort is faster. A: There's a list of sort algorithms on Wikipedia. It includes categorization by execution time, stability, and allocation. Your best bet is probably going to be modifying an efficient unstable sort to be stable, thereby making it less efficient.
A: There is a class of stable in-place merge algorithms, although they are complicated and linear with a rather high constant hidden in the O(n). To learn more, have a look at this article, and its bibliography. Edit: the merge phase is linear, thus the merge sort is O(n log n). A: Because your elements are in an array (rather than, say, a linked list) you have some information about their original order available to you in the array indices themselves. You can take advantage of this by writing your sort and comparison functions to be aware of the indices: function cmp( ar, idx1, idx2 ) { // first compare elements as usual rc = (ar[idx1]<ar[idx2]) ? -1 : ( (ar[idx1]>ar[idx2]) ? 1 : 0 ); // if the elements are identical, then compare their positions if( rc == 0 ) rc = (idx1<idx2) ? -1 : ((idx1>idx2) ? 1 : 0); return rc; } This technique can be used to make any sort stable, as long as the sort ONLY performs element swaps. The indices of elements will change, but the relative order of identical elements will stay the same, so the sort remains stable. It won't work out of the box for a sort like heapsort because the original heapification "throws away" the relative ordering, though you might be able to adapt the idea to other sorts. A: Quicksort can be made stable reasonably easily, simply by having a sequence field added to each record, initializing it to the index before sorting and using it as the least significant part of the sort key. This has a slightly adverse effect on the time taken but it doesn't affect the time complexity of the algorithm. It also has a minimal storage cost overhead for each record, but that rarely matters until you get very large numbers of records (and is minimized with larger record sizes). I've used this method with C's qsort() function to avoid writing my own. Each record has a 32-bit integer added and populated with the starting sequence number before calling qsort().
Then the comparison function checked the keys and the sequence (this guarantees there are no duplicate keys), turning the quicksort into a stable one. I recall that it still outperformed the inherently stable mergesort for the data sets I was using. Your mileage may vary, so always remember: Measure, don't guess! A: Quicksort can be made stable by doing it on a linked list. This costs n to pick random or median-of-3 pivots but with a very small constant (list traversal). By splitting the list and ensuring that the left list is sorted so same values go left and the right list is sorted so the same values go right, the sort will be implicitly stable for no real extra cost. Also, since this deals with assignment rather than swapping, I think the speed might actually be slightly better than a quicksort on an array since there's only a single write. So in conclusion, list up all your items and run quicksort on a list. A: Merge sort can be written to be in-place I believe. That may be the best route. A: Don't worry too much about O(n log n) until you can demonstrate that it matters. If you can find an O(n^2) algorithm with a drastically lower constant, go for it! The general worst-case scenario is not relevant if your data is highly constrained. In short: Run some tests. A: There's a nice list of sorting functions on wikipedia that can help you find whatever type of sorting function you're after. For example, to address your specific question, it looks like an in-place merge sort is what you want. However, you might also want to take a look at strand sort, it's got some very interesting properties. A: I have implemented a stable in-place quicksort and a stable in-place merge sort. The merge sort is a bit faster, and guaranteed to work in O(n*log(n)^2), but not the quicksort. Both use O(log(n)) space. A: Perhaps shell sort?
If I recall my data structures course correctly, it tended to be stable, but its worst-case time is O(n log^2 n), although it performs O(n) on almost-sorted data. It's based on insertion sort, so it sorts in place. A: Maybe I'm in a bit of a rut, but I like hand-coded merge sort. It's simple, stable, and well-behaved. The additional temporary storage it needs is only N*sizeof(int), which isn't too bad.
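The index-as-tiebreaker trick described in two of the answers above can be sketched in Python: heapsort (via heapq) normally loses the relative order of equal keys, but decorating each record with its original position restores stability. This is an illustration of the idea, not anyone's exact code:

```python
import heapq

def stable_heapsort(items, key=lambda x: x):
    # Decorate with the original index; ties on the real key are then
    # broken by position, so equal keys keep their input order. The
    # (key, index) pair is unique, so the items themselves are never
    # compared.
    heap = [(key(item), idx, item) for idx, item in enumerate(items)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

records = [("b", 1), ("a", 1), ("b", 2), ("a", 2)]
print(stable_heapsort(records, key=lambda r: r[0]))
# [('a', 1), ('a', 2), ('b', 1), ('b', 2)]
```

Note that this builds an O(n) decorated copy, so it trades away the in-place requirement — which is exactly the tension the question is about.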
{ "language": "en", "url": "https://stackoverflow.com/questions/113025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Internet Explorer 8 beta 2 and Standards Internet Explorer 8 breaks what must be every 3rd page I look at. The point of this early release was, I presume, to give website owners the chance to update their sites so it wouldn't be such a hassle for the final release. Has anyone actually done this? Is anyone even planning on doing this? I have yet to notice any of the big sites like ebay, myspace, facebook and so on bother, so why will smaller sites if they can just use the compatibility mode? I think I'll do it with mine, but how can you have your site compatible with IE7 and 8? A: I've developed a site with IE8 compatibility as a requirement, and it wasn't a problem as long as I tested in IE8 from the beginning. IE8's standards are very close to most other standards-compliant browsers at this point. If you can't (or won't) do that, you can usually get your page or site working in IE8 with a simple meta tag: <html> <head> <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7"/> <title>Works in IE8</title> </head> <body>Renders the same in IE8 as it did in IE7</body> </html> It doesn't always work smoothly - IE8 only notices that meta tag if it's the first tag in the head section, which conflicts with ASP.NET themes. In that case, you can fall back to using server-wide changes to write out the HTTP headers. I wrote about that here. MSDN has some more information on the ways to handle that: http://msdn.microsoft.com/en-us/library/cc817570(en-us).aspx A: You can also take a look at aggiorno express for IE8 Compat. It is a free tool that automates tagging your site with the meta tag Jon points to, and it will also remove the flag once you have got your pages to render correctly under standards mode. The tool supports both a GUI and a command line, so it is easy to script to tag multiple sites. Worth checking out. A: Historically with Microsoft, the betas have been stricter at rendering pages in standards mode than the browsers' final releases.
The idea is that in the preview stage developers are the only ones looking, and MS will try to make them fix their sites by making the stricter mode the default in the browser. But when it comes time to release a final version to the users, they generally enable a compatibility mode that makes all the pages that never got fixed still work. A: I think the safest bet at this point is to opt out of IE8 rendering; if you use the "IE7 mode" tag on your pages, even when IE8 comes out, it will still use the IE7 rendering, so you won't be left with surprises. A: According to Microsoft IE8 passes the Acid2 test. The problem may be that everyone has been so used to IE7's issues that they've created so many IE conditional workarounds that it breaks now that IE8 is getting closer to the standards. A: Well, standards are pretty much a myth. There is no one true standard. If you don't believe me, then just read Joel's article on the subject.
{ "language": "en", "url": "https://stackoverflow.com/questions/113028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to use one object's method to update another object's attribute? I have three (C++) classes: Player, Hand, and Card. Player has a member, hand, that holds a Hand. It also has a method, getHand(), that returns the contents of hand. Hand Player::getHand() { return hand; } Hand has a method, addCard(Card c), that adds a card to the hand. I want to do this: player1.getHand().addCard(c); but it doesn't work. It doesn't throw an error, so it's doing something. But if I examine the contents of player1's hand afterward, the card hasn't been added. How can I get this to work? A: If getHand() returns by-value you're modifying a copy of the hand and not the original. A: If getHand() is not returning a reference you will be in trouble. A: A Player.addCardToHand() method is not unreasonable, if you have no reason to otherwise expose a Hand. This is probably ideal in some ways, as you can still provide copies of the Hand for win-checking comparisons, and no-one can modify them. A: Your method needs to return a pointer or a reference to the player's Hand object. You could then call it like "player1.getHand()->addCard(c)". Note that that is the syntax you'd use if it were a pointer. A: Return a reference to the hand object eg. Hand &Player::getHand() { return hand; } Now your addCard() function is operating on the correct object. A: What is the declaration of getHand()? Is it returning a new Hand value, or is it returning a Hand& reference? A: As has been stated, you're probably modifying a copy instead of the original. To prevent this kind of mistake, you can explicitly declare the copy constructor and assignment operator as private. private: Hand(const Hand& rhs); Hand& operator=(const Hand& rhs); A: getX() is often the name of an accessor function for member x, similar to your own usage. However, a "getX" accessor is also very often a read-only function, so that it may be surprising in other situations of your code base to see a call to "getX" that modifies X.
So I would suggest, instead of just using a reference as a return value, to actually modify the code design a bit. Some alternatives: * *Expose a getMutableHand method that returns a pointer (or reference). By returning a pointer, you strongly suggest that the caller uses the pointer notation, so that anyone reading the code sees that this variable is changing values, and is not read-only. *Make Player a subclass of Hand, so that anything that manipulates a Hand also works directly on the Player. Intuitively, you could say that a Player is not a Hand, but functionally they have the right relationship - every Player has exactly one hand, and it seems that you do want to be able to have the same access to a Hand via Player as you would directly. *Directly implement an addCard method for your Player class.
{ "language": "en", "url": "https://stackoverflow.com/questions/113033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to return only the Date from a SQL Server DateTime datatype SELECT GETDATE() Returns: 2008-09-22 15:24:13.790 I want that date part without the time part: 2008-09-22 00:00:00.000 How can I get that? A: You can simply do it this way: SELECT CONVERT(date, getdate()) SELECT DATEADD(dd, 0, DATEDIFF(dd, 0, @your_date)) SELECT DATEADD(dd, 0, DATEDIFF(dd, 0, GETDATE())) Outputs as: 2008-09-22 00:00:00.000 Or simply do it like this: SELECT CONVERT (DATE, GETDATE()) 'Date Part Only' Result: Date Part Only -------------- 2013-07-14 A: In this case, for date only, we are going to run this query: SELECT CONVERT(VARCHAR(10), getdate(), 111); A: SQL Server 2008 now has a 'date' data type which contains only a date with no time component. Anyone using SQL Server 2008 and beyond can do the following: SELECT CONVERT(date, GETDATE()) A: DATEADD and DATEDIFF are better than CONVERTing to varchar. Both queries have the same execution plan, but execution plans are primarily about data access strategies and do not always reveal implicit costs involved in the CPU time taken to perform all the pieces. If both queries are run against a table with millions of rows, the CPU time using DateDiff can be close to 1/3rd of the Convert CPU time! To see execution plans for queries: set showplan_text on GO Both DATEADD and DATEDIFF will execute a CONVERT_IMPLICIT. Although the CONVERT solution is simpler and easier to read for some, it is slower. There is no need to cast back to DateTime (this is implicitly done by the server). There is also no real need in the DateDiff method for DateAdd afterward as the integer result will also be implicitly converted back to DateTime.
SELECT CONVERT(varchar, MyDate, 101) FROM DatesTable |--Compute Scalar(DEFINE:([Expr1004]=CONVERT(varchar(30),[TEST].[dbo].[DatesTable].[MyDate],101))) |--Table Scan(OBJECT:([TEST].[dbo].[DatesTable])) SELECT DATEADD(dd, 0, DATEDIFF(dd, 0, MyDate)) FROM DatesTable |--Compute Scalar(DEFINE:([Expr1004]=dateadd(day,(0),CONVERT_IMPLICIT(datetime,datediff(day,'1900-01-01 00:00:00.000',CONVERT_IMPLICIT(datetime,[TEST].[dbo].[DatesTable].[MyDate],0)),0)))) |--Table Scan(OBJECT:([TEST].[dbo].[DatesTable])) Using FLOOR() as @digi suggested has performance closer to DateDiff, but is not recommended as casting the DateTime data type to float and back does not always yield the original value. Remember guys: Don't believe anyone. Look at the performance statistics, and test it yourself! Be careful when you're testing your results. Selecting many rows to the client will hide the performance difference because it takes longer to send the rows over the network than it does to perform the calculations. So make sure that the work for all the rows is done by the server but there is no row set sent to the client. There seems to be confusion for some people about when cache optimization affects queries. Running two queries in the same batch or in separate batches has no effect on caching. So you can either expire the cache manually or simply run the queries back and forth multiple times. Any optimization for query #2 would also affect any subsequent queries, so throw out execution #1 if you like. Here is the full test script and performance results that prove DateDiff is substantially faster than converting to varchar. A: I think this would work in your case: CONVERT(VARCHAR(10), Person.DateOfBirth, 111) AS BirthDate -- here the date is obtained as 1990/09/25 A: DECLARE @yourdate DATETIME = '11/1/2014 12:25pm' SELECT CONVERT(DATE, @yourdate) A: Okay, though I'm a bit late :), here is another solution.
SELECT CAST(FLOOR(CAST(GETDATE() AS FLOAT)) as DATETIME) Result 2008-09-22 00:00:00.000 And if you are using SQL Server 2012 and higher, then you can use the FORMAT() function like this - SELECT FORMAT(GETDATE(), 'yyyy-MM-dd') A: Starting from SQL Server 2012, you can do this: SELECT FORMAT(GETDATE(), 'yyyy-MM-dd 00:00:00.000') A: Even using the ancient MSSQL Server 7.0, the code here (courtesy of this link) allowed me to get whatever date format I was looking for at the time: PRINT '1) Date/time in format MON DD YYYY HH:MI AM (OR PM): ' + CONVERT(CHAR(19),GETDATE()) PRINT '2) Date/time in format MM-DD-YY: ' + CONVERT(CHAR(8),GETDATE(),10) PRINT '3) Date/time in format MM-DD-YYYY: ' + CONVERT(CHAR(10),GETDATE(),110) PRINT '4) Date/time in format DD MON YYYY: ' + CONVERT(CHAR(11),GETDATE(),106) PRINT '5) Date/time in format DD MON YY: ' + CONVERT(CHAR(9),GETDATE(),6) PRINT '6) Date/time in format DD MON YYYY HH:MM:SS:MMM(24H): ' + CONVERT(CHAR(24),GETDATE(),113) It produced this output: 1) Date/time in format MON DD YYYY HH:MI AM (OR PM): Feb 27 2015 1:14PM 2) Date/time in format MM-DD-YY: 02-27-15 3) Date/time in format MM-DD-YYYY: 02-27-2015 4) Date/time in format DD MON YYYY: 27 Feb 2015 5) Date/time in format DD MON YY: 27 Feb 15 6) Date/time in format DD MON YYYY HH:MM:SS:MMM(24H): 27 Feb 2015 13:14:46:630 A: Try this: SELECT CONVERT(VARCHAR(10),GETDATE(),111) The above statement converts your current format to YYYY/MM/DD; please refer to this link to choose your preferable format. A: Why don't you use DATE_FORMAT( your_datetime_column, '%d-%m-%Y' )? (Note that DATE_FORMAT is MySQL syntax, not SQL Server.) EX: select DATE_FORMAT( some_datetime_column, '%d-%m-%Y' ) from table_name You can change the sequence of day, month and year by re-arranging the '%d-%m-%Y' part A: I favor the following, which wasn't mentioned: DATEFROMPARTS(DATEPART(yyyy, @mydatetime), DATEPART(mm, @mydatetime), DATEPART(dd, @mydatetime)) It also doesn't care about locale or do a double convert -- although each 'datepart' probably does math.
So it may be a little slower than the datediff method, but to me it is much clearer. Especially when I want to group by just the year and month (set the day to 1). A: I know this is old, but I do not see where anyone stated it this way. From what I can tell, this is ANSI standard. SELECT CAST(CURRENT_TIMESTAMP AS DATE) It would be good if Microsoft could also support the ANSI standard CURRENT_DATE variable. A: On SQL Server 2000 CAST( ( STR( YEAR( GETDATE() ) ) + '/' + STR( MONTH( GETDATE() ) ) + '/' + STR( DAY( GETDATE() ) ) ) AS DATETIME) A: SELECT CONVERT(datetime, CONVERT(varchar, GETDATE(), 101)) A: You can use the following for the date part and for formatting the date: DATENAME => Returns a character string that represents the specified datepart of the specified date DATEADD => Adds a specified number (as a signed integer) to a specified datepart of a date and returns the new date/time value DATEPART => Returns an integer that represents the specified datepart of the specified date. CONVERT() => The CONVERT() function is a general function that converts an expression of one data type to another. The CONVERT() function can be used to display date/time data in different formats. A: Date(date&time field) and DATE_FORMAT(date&time,'%Y-%m-%d') both return only the date from a date & time (note that these are MySQL functions, not SQL Server). A: SELECT * FROM tablename WHERE CAST ([my_date_time_var] AS DATE)= '8/5/2015' A: My common approach to get the date without the time part:
SELECT CONVERT(VARCHAR(MAX),GETDATE(),103) SELECT CAST(GETDATE() AS DATE) A: If you want the date to show 2008-09-22 00:00:00.000 then you can round it using SELECT CONVERT(datetime, (ROUND(convert(float, getdate()-.5),0))) This will show the date in the format in the question A: Starting from SQL Server 2022 (16.x), another option is the DATETRUNC() function using day as the value of the datepart parameter: SELECT DATETRUNC(day, GETDATE()); A: For a return in date format CAST(OrderDate AS date) The above code will work in SQL Server 2010 It will return like 12/12/2013 For SQL Server 2012 use the below code CONVERT(VARCHAR(10), OrderDate , 111) A: SELECT DATEADD(dd, 0, DATEDIFF(dd, 0, @your_date)) for example SELECT DATEADD(dd, 0, DATEDIFF(dd, 0, GETDATE())) gives me 2008-09-22 00:00:00.000 Pros: * *No varchar<->datetime conversions required *No need to think about locale A: Just do: SELECT CAST(date_variable AS date) or with PostgreSQL: SELECT date_variable::date This is called typecasting btw! A: You can use the CONVERT function to return only the date. See the link(s) below: Date and Time Manipulation in SQL Server 2000 CAST and CONVERT The syntax for using the convert function is: CONVERT ( data_type [ ( length ) ] , expression [ , style ] ) A: If you need the result as a varchar, you should go through SELECT CONVERT(DATE, GETDATE()) --2014-03-26 SELECT CONVERT(VARCHAR(10), GETDATE(), 111) --2014/03/26 which is already mentioned above.
If you need the result in date and time format, you should use any of the queries below * *SELECT CONVERT(DATETIME, CONVERT(VARCHAR(10), GETDATE(), 111)) AS OnlyDate 2014-03-26 00:00:00.000 *SELECT CONVERT(DATETIME, CONVERT(VARCHAR(10), GETDATE(), 112)) AS OnlyDate 2014-03-26 00:00:00.000 *DECLARE @OnlyDate DATETIME SET @OnlyDate = DATEDIFF(DD, 0, GETDATE()) SELECT @OnlyDate AS OnlyDate 2014-03-26 00:00:00.000 A: If using SQL 2008 and above: select cast(getdate() as date) A: If you are using SQL Server 2012 or above versions, use the FORMAT() function. There are already multiple answers and formatting types for SQL Server. But most of the methods are somewhat ambiguous and it would be difficult for you to remember the numbers for the format type or the functions for a specific date format. That's why in newer versions of SQL Server there is a better option. FORMAT ( value, format [, culture ] ) The culture option is very useful, as you can format the date for your viewers. You have to remember d (for short patterns) and D (for long patterns). 1."d" - Short date pattern. 2009-06-15T13:45:30 -> 6/15/2009 (en-US) 2009-06-15T13:45:30 -> 15/06/2009 (fr-FR) 2009-06-15T13:45:30 -> 2009/06/15 (ja-JP) 2."D" - Long date pattern. 2009-06-15T13:45:30 -> Monday, June 15, 2009 (en-US) 2009-06-15T13:45:30 -> 15 июня 2009 г. (ru-RU) 2009-06-15T13:45:30 -> Montag, 15. Juni 2009 (de-DE) More examples in the query below.
DECLARE @d DATETIME = '10/01/2011'; SELECT FORMAT ( @d, 'd', 'en-US' ) AS 'US English Result' ,FORMAT ( @d, 'd', 'en-gb' ) AS 'Great Britain English Result' ,FORMAT ( @d, 'd', 'de-de' ) AS 'German Result' ,FORMAT ( @d, 'd', 'zh-cn' ) AS 'Simplified Chinese (PRC) Result'; SELECT FORMAT ( @d, 'D', 'en-US' ) AS 'US English Result' ,FORMAT ( @d, 'D', 'en-gb' ) AS 'Great Britain English Result' ,FORMAT ( @d, 'D', 'de-de' ) AS 'German Result' ,FORMAT ( @d, 'D', 'zh-cn' ) AS 'Chinese (Simplified PRC) Result'; US English Result Great Britain English Result German Result Simplified Chinese (PRC) Result ---------------- ----------------------------- ------------- ------------------------------------- 10/1/2011 01/10/2011 01.10.2011 2011/10/1 US English Result Great Britain English Result German Result Chinese (Simplified PRC) Result ---------------------------- ----------------------------- ----------------------------- --------------------------------------- Saturday, October 01, 2011 01 October 2011 Samstag, 1. Oktober 2011 2011年10月1日 If you want more formats, you can go to: * *Standard Date and Time Format Strings *Custom Date and Time Format Strings A: My Style select Convert(smalldatetime,Convert(int,Convert(float,getdate()))) A: select cast(createddate as date) as derivedate from table createddate is your datetime column; this works for SQL Server A: SELECT CONVERT(VARCHAR,DATEADD(DAY,-1,GETDATE()),103) --21/09/2011 SELECT CONVERT(VARCHAR,DATEADD(DAY,-1,GETDATE()),101) --09/21/2011 SELECT CONVERT(VARCHAR,DATEADD(DAY,-1,GETDATE()),111) --2011/09/21 SELECT CONVERT(VARCHAR,DATEADD(DAY,-1,GETDATE()),107) --Sep 21, 2011 A: Using FLOOR() - it just cuts off the time part. SELECT CAST(FLOOR(CAST(GETDATE() AS FLOAT)) AS DATETIME) A: To obtain the result indicated, I use the following command. SELECT CONVERT(DATETIME,CONVERT(DATE,GETDATE())) I hope it is useful.
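The DATEADD/DATEDIFF approach discussed in several answers above is just integer day arithmetic: count whole days since a fixed epoch, then add them back. As a language-neutral illustration (Python here, with SQL Server's 1900-01-01 "day zero"), the truncation can be sketched as:

```python
from datetime import datetime, timedelta

EPOCH = datetime(1900, 1, 1)  # SQL Server's "day 0" in DATEDIFF(dd, 0, ...)

def strip_time(dt):
    days = (dt - EPOCH).days             # DATEDIFF(dd, 0, dt): whole days since the epoch
    return EPOCH + timedelta(days=days)  # DATEADD(dd, days, 0): back to midnight

print(strip_time(datetime(2008, 9, 22, 15, 24, 13)))
# 2008-09-22 00:00:00
```

No string conversion happens anywhere, which is exactly why the T-SQL version avoids the varchar round-trip cost measured above.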
A: SELECT DATEADD(DD, DATEDIFF(DD, 0, GETDATE()), 0) SELECT DATEADD(DAY, 0, DATEDIFF(DAY,0, GETDATE())) SELECT CONVERT(DATETIME, CONVERT(VARCHAR(10), GETDATE(), 101)) Edit: The first two methods are essentially the same, and outperform the convert-to-varchar method. A: If you want to use CONVERT and get the same output as in the original question posed, that is, yyyy-mm-dd then use CONVERT(varchar(10),[SourceDate as dateTime],121) same code as the previous couple of answers, but the code to convert to yyyy-mm-dd with dashes is 121. If I can get on my soapbox for a second, this kind of formatting doesn't belong in the data tier, and that's why it wasn't possible without silly high-overhead 'tricks' until SQL Server 2008, when actual datepart data types were introduced. Making such conversions in the data tier is a huge waste of overhead on your DBMS, but more importantly, the second you do something like this, you have basically created in-memory orphaned data that I assume you will then return to a program. You can't put it back into another 3NF+ column or compare it to anything typed without reverting, so all you've done is introduced points of failure and removed relational reference. You should ALWAYS go ahead and return your dateTime data type to the calling program and in the PRESENTATION tier, make whatever adjustments are necessary. As soon as you go converting things before returning them to the caller, you are removing all hope of referential integrity from the application. This would prevent an UPDATE or DELETE operation, again, unless you do some sort of manual reversion, which again is exposing your data to human/code/gremlin error when there is no need. A: If you are assigning the results to a column or variable, give it the DATE type, and the conversion is implicit.
DECLARE @Date DATE = GETDATE() SELECT @Date --> 2017-05-03 A: Convert(nvarchar(10), getdate(), 101) ---> 5/12/14 Convert(nvarchar(12), getdate(), 101) ---> 5/12/2014 A: Date: SELECT CONVERT(date, GETDATE()) SELECT CAST(GETDATE() as date) Time: SELECT CONVERT(time , GETDATE() , 114) SELECT CAST(GETDATE() as time) A: Syntax: SELECT CONVERT(data_type(length), Date, DateFormatCode) Ex: Select CONVERT(varchar,GETDATE(),1) as [MM/DD/YY] Select CONVERT(varchar,GETDATE(),2) as [YY.MM.DD] All date format codes for dates: DateFormatCode Format 1 [MM/DD/YY] 2 [YY.MM.DD] 3 [DD/MM/YY] 4 [DD.MM.YY] 5 [DD-MM-YY] 6 [DD MMM YY] 7 [MMM DD,YY] 10 [MM-DD-YY] 11 [YY/MM/DD] 12 [YYMMDD] 23 [yyyy-mm-dd] 101 [MM/DD/YYYY] 102 [YYYY.MM.DD] 103 [DD/MM/YYYY] 104 [DD.MM.YYYY] 105 [DD-MM-YYYY] 106 [DD MMM YYYY] 107 [MMM DD,YYYY] 110 [MM-DD-YYYY] 111 [YYYY/MM/DD] 112 [YYYYMMDD] A: You can use the examples below for different types of date-only output * *SELECT CONVERT(datetime, CONVERT(varchar, GETDATE(), 103)) -----dd/mm/yyyy *SELECT CONVERT(datetime, CONVERT(varchar, GETDATE(), 101))------mm/dd/yyyy *SELECT CONVERT(datetime, CONVERT(varchar, GETDATE(), 102)) A: Wow, let me count the ways you can do this. (no pun intended) In order to get the results you want in this format specifically: 2008-09-22 Here are a few options. SELECT CAST(GETDATE() AS DATE) AS 'Date1' SELECT Date2 = CONVERT(DATE, GETDATE()) SELECT CONVERT(DATE, GETDATE()) AS 'Date3' SELECT CONVERT(CHAR(10), GETDATE(), 121) AS 'Date4' SELECT CONVERT(CHAR(10), GETDATE(), 126) AS 'Date5' SELECT CONVERT(CHAR(10), GETDATE(), 127) AS 'Date6' So, I would suggest picking one you are comfortable with and using that method across the board in all your tables. All these options return the date in the exact same format. Why does SQL Server have such redundancy? I have no idea, but they do. Maybe somebody smarter than me can answer that question. Hope this helps someone.
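For readers porting queries off SQL Server, here is a rough, unofficial mapping of a few of the style codes from the table above onto strftime patterns (Python used only as a neutral way to show the shapes):

```python
from datetime import datetime

# Unofficial mapping of a few CONVERT style codes to strftime patterns.
STYLES = {
    23:  "%Y-%m-%d",   # yyyy-mm-dd
    101: "%m/%d/%Y",   # MM/DD/YYYY
    103: "%d/%m/%Y",   # DD/MM/YYYY
    111: "%Y/%m/%d",   # YYYY/MM/DD
    112: "%Y%m%d",     # YYYYMMDD
}

d = datetime(2008, 9, 22, 15, 24, 13)
for style in sorted(STYLES):
    print(style, d.strftime(STYLES[style]))
# 23 2008-09-22
# 101 09/22/2008
# 103 22/09/2008
# 111 2008/09/22
# 112 20080922
```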
A: SELECT CONVERT(varchar(100), GETDATE(), 102); --2023.02.15 SELECT CONVERT(varchar(100), GETDATE(), 23); --2023-02-15 You can follow this URL to find some other format examples: https://www.cnblogs.com/wintuzi/p/16164124.html Hope it helps. A: The easiest way would be to use: SELECT DATE(GETDATE()) (Note that DATE() is MySQL syntax; it does not exist in SQL Server.) A: You can simply use the code below to get only the date part and avoid the time part in SQL: SELECT SYSDATE TODAY FROM DUAL; (Note that SYSDATE and DUAL are Oracle syntax, not SQL Server.) A: select convert(date, getdate()) select CONVERT(datetime,CONVERT(date, getdate())) A: As there have been many changes since this question had answers, I wanted to provide a new way to get the requested result. There are two ways to parse DATETIME data. First, to get the date as this question asks: DATEVALUE([TableColumnName]) Second, to get the time from the value: TIMEVALUE([TableColumnName]) Example: Table: Customers Column: CreationDate as DateTime [Customers].[CreationDate]: 2/7/2020 09:50:00 DATEVALUE([Customers].[CreationDate]) '--> Output: 2/7/2020 TIMEVALUE([Customers].[CreationDate]) '--> Output: 09:50:00 I hope that this helps as I was searching for a while and found many answers as seen in this question and none of those worked (i.e., CAST and CONVERT). Happy Coding!
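The DATEVALUE/TIMEVALUE split described in the last answer is not a T-SQL built-in (treat that answer's environment as an assumption), but the underlying idea — one value for the calendar date, one for the clock time — maps directly onto, for example, Python's datetime accessors:

```python
from datetime import datetime

creation = datetime(2020, 2, 7, 9, 50, 0)  # the answer's example timestamp
print(creation.date())  # 2020-02-07  (the "DATEVALUE" part)
print(creation.time())  # 09:50:00    (the "TIMEVALUE" part)
```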
{ "language": "en", "url": "https://stackoverflow.com/questions/113045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2130" }
Q: How to write an RSS feed with Java? I'm using Java, and need to generate a simple, standards-compliant RSS feed. How can I go about this? A: I recommend using Rome (imports shown for the classic com.sun.syndication packages; newer Rome releases use com.rometools.rome): import java.io.StringWriter; import java.util.ArrayList; import java.util.List; import com.sun.syndication.feed.synd.*; import com.sun.syndication.io.SyndFeedOutput; // Feed header SyndFeed feed = new SyndFeedImpl(); feed.setFeedType("rss_2.0"); feed.setTitle("Sample Feed"); feed.setLink("http://example.com/"); // Feed entries List<SyndEntry> entries = new ArrayList<SyndEntry>(); feed.setEntries(entries); SyndEntry entry = new SyndEntryImpl(); entry.setTitle("Entry #1"); entry.setLink("http://example.com/post/1"); SyndContent description = new SyndContentImpl(); description.setType("text/plain"); description.setValue("There is text in here."); entry.setDescription(description); entries.add(entry); // Write the feed to XML StringWriter writer = new StringWriter(); new SyndFeedOutput().output(feed, writer); System.out.println(writer.toString());
{ "language": "en", "url": "https://stackoverflow.com/questions/113063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Keeping my PHP pretty I am creating a site in which different pages can look very different depending upon certain conditions (ie logged in or not, form filled out or not, etc). This makes it necessary to output different blocks of html at different times. Doing that, however, makes my php code look horrific... it really messes with the formatting and "shape" of the code. How should I get around this? Including custom "html dump" functions at the bottom of my scripts? The same thing, but with includes? Heredocs (don't look too good)? Thanks! A: Use an MVC approach. http://www.phpmvc.net/ This is not something that you will pick up in a couple of hours. You really need to practice it. The main thing is the controller will access your model (the db layer), do stuff to your data and then send it to the view for rendering. This is oversimplified but you just need to read and practice it to understand it. This is something I used to help me learn it. http://www.onlamp.com/pub/a/php/2005/09/15/mvc_intro.html A: Try to separate your content and layout from your code as much as possible. Any time you write any HTML in a .php file, stop and think "Does this really belong here?" One solution is to use templates. Look at the Smarty templating system for a pretty easy-to-use option. A: Don't panic, every fresh Web programmer faces this problem. You HAVE TO separate your program logic from your display. First, try to make your own solution using two files for each Web page : * *one with only PHP code (no HTML) that fills variables *another with HTML and very little PHP : this is your page design Then include where / when you need it.
E.g.: myPageLogic.php <?php // pure PHP code, no HTML $name = htmlspecialchars($_GET['name']); $age = date('Y') - (int)$_GET['age']; ?> myPageView.php // very little PHP code // just enough to print variables // and some if / else, or foreach to manage the data stream <h1>Hello, <?php echo $name; ?> !</h1> <p>So you are <?php echo $age; ?>, huh ?</p> (You may want to use the alternative PHP syntax for this one. But don't try too hard to make it perfect the first time, really.) myPage.php <?php require('myPageLogic.php'); require('myPageView.php'); ?> Don't bother about performance issues for now. This is not your priority as a newbie. This solution is imperfect, but will help you to solve the problem at your programming level and will teach you the basics. Then, once you are comfortable with this concept, buy a book about the MVC pattern (or look for Stack Overflow entries about it). That's what you want to do the NEXT TIME. Then you'll try some templating systems and frameworks, but LATER. For now, just code and learn from the beginning. You can perfectly code a project like that, as a rookie, it's fine. A: Doing that, however, makes my php code look horrific... it really messes with the formatting and "shape" of the code. How should I get around this? Treat your PHP and HTML as a single hierarchy, with a single, consistent indentation structure. So a PHP enclosing-structure such as an 'if' or 'for' introduces a new level of indentation, and its contents are always a balanced set of start and end-tags. Essentially you are making your PHP 'well-formed' in the XML sense of the term, whether or not you are actually using XHTML. Example: <div class="prettybox"> Hello <?php echo(htmlspecialchars($name)) ?>!
Your food: <?php foreach($foods as $food) { ?> <a href="/food.php?food=<?php echo(urlencode($food)) ?>"> <?php echo(htmlspecialchars($food)) ?> </a> <?php } ?> <?php if (count($foods)==0) { ?> (no food today) <?php } ?> </div> Be wary of the religious dogma around separating logic and markup rearing its head in the answers here again. Whilst certainly you want to keep your business-logic out of your page output code, this doesn't necessarily mean a load of overhead from using separate files, classes, templates and frameworks is really necessary for what you're doing. For a simple application, it is likely to be enough just to put the action/logic stuff at the top of the file and the page output below. (For example from one of the comments above, doing htmlspecialchars() is page-output functionality you definitely don't want to put in the action bit of your PHP, mixed up in all the business logic. Always keep text as plain, unescaped strings until the point where it leaves your application logic. If typing 'echo(htmlspecialchars(...))' all the time is too wordy, you can always make a function with a short name like 'h' that does the same.) A: From the sound of your problem, it seems you might not have much separation between logic and presentation in your code. When designing an application this is a very important consideration, for reasons exactly demonstrated by the situation you're currently facing. If you haven't already, I'd take a look at some PHP templating engines such as Smarty. A: In cases like that I write everything incrementally into a variable or sometimes an array and then echo the variable/array. It has the added advantage that the page always renders in one go, rather than progressively. A: What I end up doing in this situation is building 'template' files of HTML data that I include, and then parse through with regular expressions. It keeps the code clean, and makes it easier to hand pieces off to a designer, or other HTML person.
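The short escaping helper suggested above is a one-liner in most languages. As a sketch (Python's html.escape standing in for PHP's htmlspecialchars, with a hypothetical name):

```python
import html

def h(text):
    # One-letter escape helper, in the spirit of wrapping
    # echo(htmlspecialchars(...)) behind a short name.
    return html.escape(str(text), quote=True)

print(h('Bread & "butter" <script>'))
# Bread &amp; &quot;butter&quot; &lt;script&gt;
```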
Check out the ideals behind MVC programming practices at Wikipedia A: You need a framework. This will not only make your code pretty (because of the already-mentioned model-view-controller pattern, MVC) - it will save you time because of the components that are already written for it. I recommend QCodo, it's amazing; there are others - CakePHP, Symfony. A: If it's a big project a framework might be in order. If not, create some template files and at the top of the page decide which template file to include. For example: if ($_SESSION['logged_in']) include(TPL_DIR . 'main_logged_in.tpl'); else include(TPL_DIR . 'main.tpl'); just a simple example A: As mentioned above.. you are fixing the symptom, not the problem.. What you need is the separation of logic and presentation, which can be done with some sort of MVC framework. Even if you don't want to go all the way with an MVC framework, a templating engine is a must, at least if your logic is complicated enough that you have trouble with what you are explaining. A: I strongly suggest Savant instead of Smarty: * *you don't have to learn a new templating "language" in order to create templates, Savant templates are written in PHP *if you don't want to use PHP you can define your own template "language" with a compiler *it's easily extendable by your own PHP objects Separating logic from presentation does not mean that all your business logic has to be in PHP and your presentation logic in something else; the separation is a conceptual thing, you have to separate the logic that prepares data from the one that shows it. Obviously the business logic does not have to contain presentation elements.
A: Instead of implementing templating systems on top of a templating system (PHP itself), creating overhead by default, you can go for a more robust solution like XSL Transformations, which also complies with the MVC principles (provided you separate your data-retrieval; plus, I personally split the logic from displaying the XML with different files). Imagine having the following information in an array, which you want to display in a table. Array { [car] => green [bike] => red } You easily create a script that outputs this information in XML: echo "<VEHICLES>\n"; foreach(array_keys($aVehicles) as $sVehicle) echo "\t<VEHICLE><NAME>".$sVehicle."</NAME><COLOR>".$aVehicles[$sVehicle]."</COLOR></VEHICLE>\n"; echo "</VEHICLES>\n"; Resulting in the following XML: <VEHICLES> <VEHICLE> <NAME>car</NAME> <COLOR>green</COLOR> </VEHICLE> <VEHICLE> <NAME>bike</NAME> <COLOR>red</COLOR> </VEHICLE> </VEHICLES> Now this is all excellent, but that won't display in a nice format. This is where XSLT comes in. With some simple code, you can transform this into a table: <xsl:template match="VEHICLES"> <TABLE> <xsl:apply-templates select="VEHICLE"/> </TABLE> </xsl:template> <xsl:template match="VEHICLE"> <TR> <TD><xsl:value-of select="NAME"/></TD> <TD><xsl:value-of select="COLOR"/></TD> </TR> </xsl:template> Et voila, you have: <TABLE> <TR> <TD>car</TD> <TD>green</TD> </TR> <TR> <TD>bike</TD> <TD>red</TD> </TR> </TABLE> Now for this simple example, this is a bit of overkill; but for complex structures in big projects, this is an absolute way to keep your scripting logic away from your markup. A: Check this question about separating PHP and HTML, there are various ways to do it, including self-written templating systems, templating systems such as Smarty, PHP as a templating system on its own, etc etc etc... A: I think you have two options here: either use an MVC framework, or use the lazy templating way, which would put significant overhead on your code.
Obviously I'm with the former, I think learning MVC is one of the best web development tricks in the book.
{ "language": "en", "url": "https://stackoverflow.com/questions/113077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Selenium internals How does Selenium work? Can you explain its internal workings? A: First there's a layer of JavaScript code that is used to automate the browser and simulate events, run and verify tests. Next, you run a proxy server - which you point your browser to - that injects this JavaScript code. Then, you can talk to this proxy server through another port using a set of commands, which causes the proxy server to inject JavaScript code to be run on (or remote-control) the running browser. Using this framework you can write automated test scripts in a style very much like writing macros for the browser. A: How Selenium Works Even has some pretty images. :) A: Basically it works on the following principle. It first searches for the element which you specify in your locator in the HTML document shown in the driver-launched browser. After finding the element it gets the location of the object. After getting that location, Robot class methods like mouse click, mouse move etc. are used to perform actions at these locations. I hope this works :-) A: I) If it is Selenium RC then the process will be: * *Your script reaches the Selenium server (which you started on a specific port) *On the server, the script will be converted to JavaScript (which will be understandable by all browsers) *Then it reaches the browser and does the further actions based on the script (type, click etc.).. If the element is not found then it will throw an exception. :) II) If it is Selenium WebDriver then the process will be: * *Instead of the above process, the script will interact directly with the specified browser (using the browser's API) -> then do the further actions.
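To make the RC flow above concrete, here is a toy model of the command loop — a client issues (command, target, value) triples and the server applies them to the page. Everything here is illustrative; it is not the real Selenium API:

```python
# Stand-in for the DOM of the browser the server is driving.
page = {"username": "", "login": "idle"}

def run_command(cmd, target, value=""):
    """Apply one Selenese-style command, like the RC server would."""
    if target not in page:
        # analogous to Selenium throwing an exception for a missing element
        raise LookupError("Element not found: " + target)
    if cmd == "type":
        page[target] = value
    elif cmd == "click":
        page[target] = "clicked"
    return "OK"

run_command("type", "username", "alice")
run_command("click", "login")
print(page)
# {'username': 'alice', 'login': 'clicked'}
```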
{ "language": "en", "url": "https://stackoverflow.com/questions/113089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Dynamic IP-based blacklisting Folks, we all know that IP blacklisting doesn't work - spammers can come in through a proxy, plus, legitimate users might get affected... That said, blacklisting seems to me to be an efficient mechanism to stop a persistent attacker, given that the actual list of IP's is determined dynamically, based on application's feedback and user behavior. For example: - someone trying to brute-force your login screen - a poorly written bot issues very strange HTTP requests to your site - a script-kiddie uses a scanner to look for vulnerabilities in your app I'm wondering if the following mechanism would work, and if so, do you know if there are any tools that do it: * *In a web application, developer has a hook to report an "offense". An offense can be minor (invalid password) and it would take dozens of such offenses to get blacklisted; or it can be major, and a couple of such offenses in a 24-hour period kicks you out. *Some form of a web-server-level block kicks in on before every page is loaded, and determines if the user comes from a "bad" IP. *There's a "forgiveness" mechanism built-in: offenses no longer count against an IP after a while. Thanks! Extra note: it'd be awesome if the solution worked in PHP, but I'd love to hear your thoughts about the approach in general, for any language/platform A: Take a look at fail2ban. A python framework that allows you to raise IP tables blocks from tailing log files for patterns of errant behaviour. A: are you on a *nix machine? this sort of thing is probably better left to the OS level, using something like iptables edit: in response to the comment, yes (sort of). however, the idea is that iptables can work independently. you can set a certain threshold to throttle (for example, block requests on port 80 TCP that exceed x requests/minute), and that is all handled transparently (ie, your application really doesn't need to know anything about it, to have dynamic blocking take place). 
i would suggest the iptables method if you have full control of the box, and would prefer to let your firewall handle throttling (advantages are, you don't need to build this logic into your web app, and it can save resources as requests are dropped before they hit your webserver) otherwise, if you expect blocking won't be a huge component, (or your app is portable and can't guarantee access to iptables), then it would make more sense to build that logic into your app. A: I think it should be a combination of user-name plus IP block. Not just IP. A: you're looking at custom lockout code. There are applications in the open source world that contain various flavors of such code. Perhaps you should look at some of those, although your requirements are pretty trivial, so mark an IP/username combo, and utilize that for blocking an IP for x amount of time. (Note I said block the IP, not the user. The user may try to get online via a valid IP/username/pw combo.) Matter of fact, you could even keep traces of user logins, and when logging in from an unknown IP with a 3 strikes bad username/pw combo, lock that IP out for however long you like for that username. (Do note that a lot of ISPs share IPs, thus....) You might also want to place a delay in authentication, so that an IP cannot attempt a login more than once every 'y' seconds or so. A: I have developed a system for a client which kept track of hits against the web server and dynamically banned IP addresses at the operating system/firewall level for variable periods of time for certain offenses, so, yes, this is definitely possible. As Owen said, firewall rules are a much better place to do this sort of thing than in the web server. (Unfortunately, the client chose to hold a tight copyright on this code, so I am not at liberty to share it.) 
I generally work in Perl rather than PHP, but, so long as you have a command-line interface to your firewall rules engine (like, say, /sbin/iptables), you should be able to do this fairly easily from any language which has the ability to execute system commands. A: Err, this sort of system is easy and common; I can give you mine easily enough. It's simply and briefly explained here: http://www.alandoherty.net/info/webservers/ The scripts as written aren't downloadable {as no commentary is currently added}, but drop me an e-mail from the site above, and I'll fling the code at you and gladly help with debugging/tailoring it to your server
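A minimal sketch of the offense/forgiveness bookkeeping the question describes — weighted offenses, a ban threshold, and a 24-hour decay window. All weights and thresholds here are made-up illustrations, and a real deployment would persist this state and push the actual block into the firewall (iptables/fail2ban), as the answers suggest:

```python
import time

MINOR, MAJOR = 1, 10            # illustrative offense weights
BAN_THRESHOLD = 20              # score at which an IP gets blocked
FORGIVE_AFTER = 24 * 3600       # offenses older than 24h stop counting

offenses = {}  # ip -> list of (timestamp, weight)

def report(ip, weight, now=None):
    now = time.time() if now is None else now
    offenses.setdefault(ip, []).append((now, weight))

def is_banned(ip, now=None):
    now = time.time() if now is None else now
    recent = [(t, w) for t, w in offenses.get(ip, []) if now - t < FORGIVE_AFTER]
    offenses[ip] = recent  # forgiveness: drop expired offenses
    return sum(w for _, w in recent) >= BAN_THRESHOLD

report("10.0.0.1", MAJOR, now=0)
report("10.0.0.1", MAJOR, now=10)
print(is_banned("10.0.0.1", now=20))                   # True: two majors within a day
print(is_banned("10.0.0.1", now=FORGIVE_AFTER + 11))   # False: both have expired
```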
{ "language": "en", "url": "https://stackoverflow.com/questions/113090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Why are a DisplayObject's child's .name property and the results of getChildByName() sometimes different? Can anyone explain the difference between the "name" property of a display object and the value found by getChildByName("XXX") function? They're the same 90% of the time, until they aren't, and things fall apart. For example, in the code below, I find an object by instance name only by directly examining the child's name property; getChildByName() fails. var gfx:MovieClip = new a_Character(); //(a library object exported for Actionscript) var do1:DisplayObject = null; var do2:DisplayObject = null; for( var i:int = 0 ; i < gfx.amSword.numChildren ; i++ ) { var child:DisplayObject = gfx.amSword.getChildAt(i); if( child.name == "amWeaponExchange" ) //An instance name set in the IDE { do2 = child; } } trace("do2:", do2 ); var do1:DisplayObject = gfx.amSword.getChildByName("amWeaponExchange"); Generates the following output: do2: [object MovieClip] ReferenceError: Error #1069: Property amWeaponExchange not found on builtin.as$0.MethodClosure and there is no default value. Any ideas what Flash is thinking? A: It seems you fixed it yourself! With: var do1:DisplayObject = gfx.amSword.getChildByName["amWeaponExchange"]; You get the error: ReferenceError: Error #1069: Property amWeaponExchange not found on builtin.as$0.MethodClosure and there is no default value. Because the compiler is looking for the property "amWeaponExchange" on the actual getChildByName method. When you change it to: var do1:DisplayObject = gfx.amSword.getChildByName("amWeaponExchange"); As you did in your edit, it successfully finds the child and compiles. A: I haven't really managed to understand what it is you're doing. But one thing I've found is that accessing MovieClip's children in the very first frame is a bit unreliable. For instance you can't gotoAndStop() and then access whatever children is on that frame, you have to wait a frame before they will be available. 
A: In one place you are looping through gfx.amSword and in another e.gfx.amSword - are you missing the e. ? Also, it's not the cause of your problem, but class names should start with a capital letter and not include underscores. "a_Character" should just be "Character". A: Oops, you're right about the e, Iain, but that's not the problem; I removed the e from the code to focus on the problem, but didn't catch that one. I think I should post a clearer example of the failure. The funny class name is just my personal naming convention for classes auto-generated by the Flash IDE with "export for Actionscript", but it's confusing the issue. A: I misunderstood in my first answer. This may have something to do with the Flash IDE Publish setting: "Automatically declare stage instances" in the ActionScript 3.0 Settings dialog?
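For readers outside ActionScript, the mistake diagnosed in the first answer — square brackets doing a property lookup on the method object itself instead of calling it — can be reproduced in most dynamic languages. A made-up Python analogue:

```python
class Container:
    """Made-up stand-in for a DisplayObjectContainer; not real ActionScript."""
    def __init__(self):
        self._children = {"amWeaponExchange": "the movie clip"}

    def get_child_by_name(self, name):
        return self._children.get(name)

c = Container()
print(c.get_child_by_name("amWeaponExchange"))  # parentheses: a call -> "the movie clip"

try:
    c.get_child_by_name["amWeaponExchange"]     # brackets: indexing the method object
except TypeError as err:
    print("error:", err)  # comparable to AS3's error on the MethodClosure
```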
Q: How to create basic Adobe Illustrator files programmatically? I need to create a really basic Adobe Illustrator file on the clipboard that I can paste into Adobe Illustrator or Expression Design. I'm looking for code samples on how to programmatically generate Adobe Illustrator files, preferably from C# or some other .NET language (but at the moment any language goes). I have found the Adobe Illustrator 3 File Format documentation online, but it's a lot to digest for this simple scenario. I don't want to depend on the actual Adobe Illustrator program (COM interop for instance) to generate my documents. Must be pure code. The code is for an Expression Studio addin, and I need to be able to create something on the clipboard I can paste into Expression Design. After looking at the formats Expression Design puts on the clipboard when copying a basic shape, I've concluded that ADOBE AI3 is the best one to use (the others are either rendered images, or cfXaml that you cannot paste INTO Design). So based on this I can't use SVG, which would probably have been easier. Another idea might be to use a PDF component, as the AI and PDF formats are supposed to be compatible? I'm also finding some references to a format called "Adobe Illustrator Clipboard Format" (AICB), but can't find a lot of documentation about it. A: I know that Inkscape is free and open source and can edit .ai files. This might be a place to start. http://www.inkscape.org/ Also, I think Illustrator can handle standard svg files, so maybe generating those would be a lot easier. (They are XML based) http://www.w3.org/Graphics/SVG/ A: Try using ExtendScript, a javascript-like scripting environment provided by Adobe (an ExtendScript editor is included with CS), which allows you to manipulate various Adobe apps with scripts. I've been able to create, manipulate, and save Photoshop files with ES, so I'm sure you could do it with AI as well. A: SVG is probably the way to go.
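Since a couple of the answers point at SVG, here is a hedged sketch of generating a minimal SVG document with nothing but the standard library (the rectangle and its attributes are made up for illustration; this only shows the mechanics, not a full AI3/clipboard solution):

```python
import xml.etree.ElementTree as ET

def make_svg(width, height):
    # Root <svg> element with the standard SVG namespace declared.
    svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg",
                     width=str(width), height=str(height))
    # One basic filled rectangle as a placeholder shape.
    ET.SubElement(svg, "rect", x="10", y="10", width="100", height="50",
                  fill="#3366cc")
    return ET.tostring(svg, encoding="unicode")

doc = make_svg(200, 100)
print(doc)
```

Writing `doc` out to a `.svg` file gives something Inkscape can open directly; whether a given Illustrator or Expression Design version accepts it from the clipboard would need testing.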
Q: Where can I find some up-to-date information on OpenID authentication with Rails? The question says it all. I can't seem to find any recent Rails tutorials or whatever to set up an OpenID authentication system. I found RestfulOpenIDAuthentication, but it's so much older than the vanilla Restful Authentication, and the docs don't even mention Rails 2, so I'm pretty wary. Does anyone have any tips? I'd like to do what Stack Overflow does and only have OpenID support. Thanks! A: Check out the Railscast covering exactly this topic. It builds on the previous episode, which discusses Restful Authentication.
Q: Tool to calculate # of lines of code in code behind and aspx files? Looking for a tool to calculate the # of lines of code in an asp.net (vb.net) application. The tricky part is that it needs to figure out the inline code in aspx files also. So it will be lines of code in vb files (minus comments) plus the inline code in aspx files (not all the lines of aspx files, just the code between <% %> tags). A: SlickEdit has some feature for that. I am not sure if it counts inline code. Worth giving it a try. If it does not work, let me know so that I can update my post.
The SLOC Report
The SLOC Report tool provides an easy way to count the lines of code. The line count is divided into three categories: code, comments, and whitespace. Once the lines of code have been counted, the results are drawn as a pie graph. SLOC reports may be generated for solutions, projects or individual files. A: I've not tried it myself, but LineCounterAddin is a Visual Studio plugin that includes a step-by-step guide to its creation. It supports the formats you're asking about (VB and ASPX) as well as heaps more (e.g. XML, XSD, TXT, JS, SQL...). A: I've had great experience with CLOC. It has a wide variety of command line options. One counter-intuitive thing with it, though: the first command-line argument is the directory to begin counting in; usually you can just place cloc into the parent directory of your source and use "." (it goes through subdirectories of the specified directory). A: From a previous post, Source Monitor appears to be the answer, and NDepend for .NET.
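If none of the tools handles the aspx-specific part, it is small enough to script. A hedged sketch in Python (deliberately simplified: it only skips full-line ' comments and does not handle server-side comment blocks or strings containing %>):

```python
import re

def count_vb_lines(text):
    """Count non-blank VB lines, skipping full-line ' comments."""
    count = 0
    for line in text.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("'"):
            count += 1
    return count

def count_aspx_inline_lines(text):
    """Count non-blank lines of code inside <% ... %> blocks only."""
    count = 0
    for block in re.findall(r"<%(.*?)%>", text, flags=re.DOTALL):
        for line in block.splitlines():
            if line.strip():
                count += 1
    return count

# Tiny made-up inputs to show the behavior.
vb = "' header comment\nDim x As Integer\nx = 1\n\n"
aspx = "<html><% Dim y\ny = 2 %><p>static</p></html>"
print(count_vb_lines(vb), count_aspx_inline_lines(aspx))  # → 2 2
```

Walking a directory tree and summing the two counters per file gives the combined figure the question asks for.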
Q: Recommended WPF Calendar What WPF Calendar control would you recommend? I am looking for something that will let me display a variable number of weeks, potentially spanning multiple months. A: Microsoft has now released a WPF calendar control.
<c:Calendar>
    <c:Calendar.BlackoutDates>
        <c:CalendarDateRange Start="4/1/2008" End="4/6/2008"/>
        <c:CalendarDateRange Start="4/14/2008" End="4/17/2008"/>
    </c:Calendar.BlackoutDates>
</c:Calendar>
<c:DatePicker>
    <c:DatePicker.BlackoutDates>
        <c:CalendarDateRange Start="4/1/2008" End="4/6/2008"/>
        <c:CalendarDateRange Start="4/14/2008" End="4/17/2008"/>
    </c:DatePicker.BlackoutDates>
</c:DatePicker>
(source: windowsclient.net)
* http://www.codeplex.com/wpf
* http://windowsclient.net/wpf/wpf35/wpf-35sp1-toolkit-calendar-datepicker-walkthrough.aspx
Charles Petzold wrote a good article about customising the WPF calendar control too. A: I would take a look at Kevin's Bag-o-tricks
Q: Large Image resizing libraries Does anyone know of any good image resizing libraries that will handle resizing large images (~7573 x ~9485)? Something that is fast and doesn't chew too much memory would be great. At the moment I am using IrfanView and just shell-invoking it with arguments, but I would like to find something that integrates into .NET a bit more. Thanks. A: ImageMagick all the way. It's a codebase with nearly every image-related operation you could possibly want to do, implemented fairly efficiently in C. This includes various types of resizing, both interpolated (bilinear, trilinear, adaptive, etc.) and not (just decimating (sampling) or replicating pixels). There are a ton of APIs (language bindings) that you can use in your applications, including MagickNet. Also, not sure if it's at all relevant to what you're trying to do, but I thought this was a pretty darn cool SIGGRAPH paper, so here goes: ImageMagick also supports what they call "liquid rescaling", or seam carving, a technique shown in this cool demo here, and whose implementation and use in ImageMagick is discussed here. A: A couple years ago I used FreeImage in a program that needed to load some relatively big images (12-mega-pixel images). It performed really well (waaaay better than GDI+) and the API is quite simple to understand and start using. I even wrote a .NET wrapper and I think I still have it lying around somewhere, but I suppose there must be better wrappers/bindings for .NET by now. A: I've used ImageMagick in the past - note that you would have to invoke it from the command line, too. The good news is that it's a breeze to integrate into your project, and it's a very powerful utility. A: Yes, I'd go for ImageMagick definitely. I'd give http://midimick.com/magicknet/ a shot if I were you.
Q: What is the cost of using a pointer to member function vs. a switch? I have the following situation:
class A {
public:
    A(int whichFoo);
    int foo1();
    int foo2();
    int foo3();
    int callFoo(); // calls one of the foo's depending on the value of whichFoo
};
In my current implementation I save the value of whichFoo in a data member in the constructor and use a switch in callFoo() to decide which of the foo's to call. Alternatively, I can use a switch in the constructor to save a pointer to the right fooN() to be called in callFoo(). My question is which way is more efficient if an object of class A is only constructed once, while callFoo() is called a very large number of times. So in the first case we have multiple executions of a switch statement, while in the second there is only one switch, and multiple calls of a member function using the pointer to it. I know that calling a member function using a pointer is slower than just calling it directly. Does anybody know if this overhead is more or less than the cost of a switch? Clarification: I realize that you never really know which approach gives better performance until you try it and time it. However, in this case I already have approach 1 implemented, and I wanted to find out if approach 2 can be more efficient at least in principle. It appears that it can be, and now it makes sense for me to bother to implement it and try it. Oh, and I also like approach 2 better for aesthetic reasons. I guess I am looking for a justification to implement it.
:) A: You can write this:
class Foo; // forward declaration so the member-pointer typedef compiles
typedef void (Foo::*FooCall)(int);

class Foo {
public:
    Foo() {
        calls[0] = &Foo::call0;
        calls[1] = &Foo::call1;
        calls[2] = &Foo::call2;
        calls[3] = &Foo::call3;
    }
    void call(int number, int arg) {
        assert(number < 4);
        (this->*(calls[number]))(arg);
    }
    void call0(int arg) { cout<<"call0("<<arg<<")\n"; }
    void call1(int arg) { cout<<"call1("<<arg<<")\n"; }
    void call2(int arg) { cout<<"call2("<<arg<<")\n"; }
    void call3(int arg) { cout<<"call3("<<arg<<")\n"; }
private:
    FooCall calls[4];
};
The computation of the actual function pointer is linear and fast:
(this->*(calls[number]))(arg);
004142E7  mov  esi,esp
004142E9  mov  eax,dword ptr [arg]
004142EC  push eax
004142ED  mov  edx,dword ptr [number]
004142F0  mov  eax,dword ptr [this]
004142F3  mov  ecx,dword ptr [this]
004142F6  mov  edx,dword ptr [eax+edx*4]
004142F9  call edx
Note that you don't even have to fix the actual function number in the constructor. I've compared this code to the asm generated by a switch. The switch version doesn't provide any performance increase. A: To answer the asked question: at the finest-grained level, the pointer to the member function will perform better. To address the unasked question: what does "better" mean here? In most cases I would expect the difference to be negligible. Depending on what the class is doing, however, the difference may be significant. Performance testing before worrying about the difference is obviously the right first step. A: If you are going to keep using a switch, which is perfectly fine, then you probably should put the logic in a helper method and call it from the constructor. Alternatively, this is a classic case of the Strategy Pattern. You could create an interface (or abstract class) named IFoo which has one method with Foo's signature. You would have the constructor take in an instance of IFoo (constructor Dependency Injection) that implements the foo method that you want.
You would have a private IFoo that would be set with this constructor, and every time you wanted to call Foo you would call your IFoo's version. Note: I haven't worked with C++ since college, so my lingo might be off here, but the general ideas hold for most OO languages. A: If your example is real code, then I think you should revisit your class design. Passing in a value to the constructor, and using that to change behaviour, is really equivalent to creating a subclass. Consider refactoring to make it more explicit. The effect of doing so is that your code will end up using a function pointer (all virtual methods really are function pointers in jump tables). If, however, your code was just a simplified example to ask whether, in general, jump tables are faster than switch statements, then my intuition would say that jump tables are quicker, but you are dependent on the compiler's optimisation step. But if performance is really such a concern, never rely on intuition - knock up a test program and test it, or look at the generated assembler. One thing is certain: a switch statement will never be slower than a jump table. The reason being that the best a compiler's optimiser can do is to turn a series of conditional tests (i.e. a switch) into a jump table. So if you really want to be certain, take the compiler out of the decision process and use a jump table. A: How sure are you that calling a member function via a pointer is slower than just calling it directly? Can you measure the difference? In general, you should not rely on your intuition when making performance evaluations. Sit down with your compiler and a timing function, and actually measure the different choices. You may be surprised! More info: There is an excellent article, Member Function Pointers and the Fastest Possible C++ Delegates, which goes into very deep detail about the implementation of member function pointers.
A: Sounds like you should make callFoo a pure virtual function and create some subclasses of A. Unless you really need the speed, have done extensive profiling and instrumenting, and determined that the calls to callFoo are really the bottleneck. Have you? A: Function pointers are almost always better than chained-ifs. They make cleaner code, and are nearly always faster (except perhaps in a case where it's only a choice between two functions and is always correctly predicted). A: I should think that the pointer would be faster. Modern CPUs prefetch instructions; mis-predicted branches flush the cache, which means it stalls while it refills the cache. A pointer doesn't do that. Of course, you should measure both. A: Optimize only when needed First: Most of the time you most likely do not care; the difference will be very small. Make sure optimizing this call really makes sense first. Only if your measurements show there is really significant time spent in the call overhead, proceed to optimizing it (shameless plug - Cf. How to optimize an application to make it faster?) If the optimization is not significant, prefer the more readable code. Indirect call cost depends on target platform Once you have determined it is worth applying low-level optimization, then it is time to understand your target platform. The cost you can avoid here is the branch misprediction penalty. On a modern x86/x64 CPU this misprediction is likely to be very small (they can predict indirect calls quite well most of the time), but when targeting PowerPC or other RISC platforms, the indirect calls/jumps are often not predicted at all and avoiding them can cause significant performance gain. See also Virtual call cost depends on platform. Compiler can implement switch using jump table as well One gotcha: a switch can sometimes be implemented as an indirect call (using a table) as well, especially when switching between many possible values.
Such a switch exhibits the same misprediction as a virtual function. To make this optimization reliable, one would probably prefer using if instead of switch for the most common case. A: Use timers to see which is quicker. Although unless this code is going to be run over and over, it's unlikely that you'll notice any difference. Be sure that if you are running code from the constructor, you won't leak memory if the construction fails. This technique is used heavily with Symbian OS: http://www.titu.jyu.fi/modpa/Patterns/pattern-TwoPhaseConstruction.html A: If you are only calling callFoo() once, then most likely the function pointer will be slower by an insignificant amount. If you are calling it many times, then most likely the function pointer will be faster by an insignificant amount (because it doesn't need to keep going through the switch). Either way, look at the assembled code to find out for sure it is doing what you think it is doing. A: One often overlooked advantage to switch (even over sorting and indexing) is if you know that a particular value is used in the vast majority of cases. It's easy to order the switch so that the most common are checked first. ps. To reinforce greg's answer, if you care about speed - measure. Looking at assembler doesn't help when CPUs have prefetch / predictive branching and pipeline stalls etc.
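To make the repeated "measure it" advice concrete, here is how such a micro-benchmark can be set up. The sketch below is in Python with timeit purely for brevity; the pattern (dispatch resolved once in the constructor vs. re-dispatched on every call) carries over, but only a C++ benchmark would give numbers that matter for the question:

```python
import timeit

class A:
    def __init__(self, which):
        self.which = which
        # Approach 2: resolve the target once, up front.
        self.foo = {1: self.foo1, 2: self.foo2, 3: self.foo3}[which]

    def foo1(self): return 1
    def foo2(self): return 2
    def foo3(self): return 3

    # Approach 1: re-dispatch on every call.
    def call_foo_switch(self):
        if self.which == 1:
            return self.foo1()
        elif self.which == 2:
            return self.foo2()
        else:
            return self.foo3()

a = A(3)
t_switch = timeit.timeit(a.call_foo_switch, number=100_000)
t_bound = timeit.timeit(a.foo, number=100_000)
print(f"switch-style: {t_switch:.4f}s, pre-bound: {t_bound:.4f}s")
```

Both calls return the same value; only the dispatch cost differs, which is exactly what the timing isolates.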
Q: What are the Python equivalents of the sighold and sigrelse functions found in C? It appears the Python signal module doesn't have anything similar to the sighold and sigrelse functions found in C, using signal.h. Are there Python equivalents of any sort? Many thanks! A: There is no way to ``block'' signals temporarily from critical sections (since this is not supported by all Unix flavors). https://docs.python.org/library/signal.html A: There are no direct bindings for this in Python. Accessing them through ctypes is easy enough; here is an example.
import ctypes, signal
libc = ctypes.cdll.LoadLibrary("libc.so.6")
libc.sighold(signal.SIGKILL)
libc.sigrelse(signal.SIGKILL)
I'm not familiar with the use of these calls, but be aware that Python's signal handlers work differently than C. When Python code is attached to a signal callback, the signal is caught on the C side of the interpreter and queued. The interpreter is occasionally interrupted for internal housekeeping (and thread switching, etc). It is during that interrupt the Python handler for the signal will be called. All that to say, just be aware that Python's signal handling is a little less asynchronous than normal C signal handlers.
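For what it's worth, later Python versions (3.3+, Unix only) grew signal.pthread_sigmask, which covers the same block/unblock use case without ctypes. A hedged sketch using SIGUSR1 (chosen for illustration; SIGKILL cannot be blocked at all):

```python
import os
import signal

# Block SIGUSR1 around a critical section, then unblock and let the
# pending signal be delivered (roughly sighold/sigrelse semantics).
received = []
signal.signal(signal.SIGUSR1, lambda signum, frame: received.append(signum))

signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})
os.kill(os.getpid(), signal.SIGUSR1)   # delivery is deferred while blocked
assert received == []                  # handler has not run yet
signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGUSR1})
print(received)                        # the queued signal arrives on unblock
```

Note the caveat from the answer above still applies: the Python-level handler runs at the next interpreter checkpoint after the OS delivers the signal, not instantaneously.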
Q: Set operation in .NET C# I'm working on something related to rough sets right now. The project uses a lot of set operations and manipulation. I've been using string operations as a stopgap measure for set operations. That worked fine until we needed to process some ungodly amount of data (500,000 records with about 40+ columns each) through the algorithm. I know that there is no set data structure in .NET 2.0 (2.0 was the latest when I started the project). I want to know if there is any library that offers fast set operations in .NET C#, or if 3.5 has added a native set data structure. Thanks. A: LINQ supports some set operations. See the LINQ 101 page for examples. Also there is the class HashSet (.NET 3.5). Here are Microsoft's guidelines for set operations in .NET: HashSet and LINQ Set Operations. List of set operations supported by the HashSet class: HashSet Collection Type A: Update: This is for .NET 2.0. For .NET 3.5, refer to the posts by aku and Jon. This is a good reference for efficiently representing sets in .NET. A: It may be worth taking a look at C5; it's a generic collection library for .NET which includes sets. Note that I haven't looked into it much, but it seems to be a pretty fantastic collection library. A: .NET 3.5 already has a native set data type: HashSet. You might also want to look at HashSet and LINQ set operators for the operations. In .NET 1.0, there was a third-party Set data type: Iesi.Collections, which was extended with .NET 2.0 generics as Iesi.Collections.Generic. You might want to try and look at all of them to see which one would benefit you the most. :) A: Try HashSet in .NET 3.5. This page from a member of the .NET BCL team has some good information on the intent of HashSet. A: I have been abusing the Dictionary class in .NET 2.0 as a set:
private object dummy = "ok";
public void Add(object el)
{
    dict[el] = dummy;
}
public bool Contains(object el)
{
    return dict.ContainsKey(el);
}
A: You can use Linq to Objects in C# 3.0.
A: Have you ever thought about using F#? This seems like a job for a functional programming language. A: You should take a look at the C5 Generic Collection Library. This library is a systematic approach to fixing holes in the .NET class library by providing missing structures, as well as replacing existing ones with a set of well-designed interfaces and generic classes. Among others, there is HashSet<T>, a generic Set class based on linear hashing.
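For reference, the operations in question map one-to-one onto any hashed-set implementation. Here they are sketched with Python's built-in set type purely to illustrate the semantics; the comments name the corresponding in-place HashSet<T> methods in .NET 3.5:

```python
a = {1, 2, 3, 4}
b = {3, 4, 5}

print(a | b)  # union                 (HashSet<T>.UnionWith)
print(a & b)  # intersection          (HashSet<T>.IntersectWith)
print(a - b)  # difference            (HashSet<T>.ExceptWith)
print(a ^ b)  # symmetric difference  (HashSet<T>.SymmetricExceptWith)
```

All four run in roughly linear time on a hashed set, which is what makes the jump from string-based bookkeeping worthwhile at 500,000 records.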
Q: How do I use owfs to read an iButton temperature logger? I've installed owfs and am trying to read the data off an iButton temperature logger. owfs lets me mount the iButton as a fuse filesystem and I can see all the data. I'm having trouble figuring out what is the best way to access the data, though. I can get individual readings by catting the files, e.g. cat onewire/{deviceid}/log/temperature.1, but the onewire/{deviceid}/log/temperature.ALL file is "broken" (possibly too large, as histogram/temperature.ALL works fine). A python script to read all files seems to work but takes a very long time. Is there a better way to do it? Does anyone have any examples? I'm using Ubuntu 8.04 and couldn't get the java "one wire viewer" app to run. Update: Using owpython (installed with owfs), I can get the current temperature but can't figure out how to get access to the recorded logs:
>>> import ow
>>> ow.init("u") # initialize USB
>>> ow.Sensor("/").sensorList()
[Sensor("/81.7FD921000000"), Sensor("/21.C4B912000000")]
>>> x = ow.Sensor("/21.C4B912000000")
>>> print x.type, x.temperature
DS1921 22
x.log gives an AttributeError. A: I've also had problems with owfs. I found it to be an overengineered solution to what is a simple problem. Now I'm using the DigiTemp code without a problem. I found it to be flexible and reliable. For instance, I store the room's temperature in a log file every minute by running
/usr/local/bin/digitemp_DS9097U -c /usr/local/etc/digitemp.conf \
    -q -t0 -n0 -d60 -l/var/log/temperature
To reach that point I downloaded the source file, untarred it and then did the following.
# Compile the hardware-specific command
make ds9097u
# Initialize the configuration file
./digitemp_DS9097U -s/dev/ttyS0 -i
# Run command to obtain temperature, and verify your setup
./digitemp_DS9097U -a
# Copy the configuration file to an accessible place
cp .digitemprc /usr/local/etc/digitemp.conf
I also hand-edited my configuration file to adjust it to my setup.
This is how it ended up.
TTY /dev/ttyS0
READ_TIME 1000
LOG_TYPE 1
LOG_FORMAT "%b %d %H:%M:%S Sensor %s C: %.2C F: %.2F"
CNT_FORMAT "%b %d %H:%M:%S Sensor %s #%n %C"
HUM_FORMAT "%b %d %H:%M:%S Sensor %s C: %.2C F: %.2F H: %h%%"
SENSORS 1
ROM 0 0x10 0xD3 0x5B 0x07 0x00 0x00 0x00 0x05
In my case I also created a /etc/init.d/digitemp file and enabled it to run at startup.
#! /bin/sh
#
# System startup script for the temperature monitoring daemon
#
### BEGIN INIT INFO
# Provides: digitemp
# Required-Start:
# Should-Start:
# Required-Stop:
# Should-Stop:
# Default-Start: 2 3 5
# Default-Stop: 0 1 6
# Description: Start the temperature monitoring daemon
### END INIT INFO

DIGITEMP=/usr/local/bin/digitemp_DS9097U
test -x $DIGITEMP || exit 5

DIGITEMP_CONFIG=/root/digitemp.conf
test -f $DIGITEMP_CONFIG || exit 6

DIGITEMP_LOGFILE=/var/log/temperature

# Source SuSE config
. /etc/rc.status

rc_reset

case "$1" in
    start)
        echo -n "Starting temperature monitoring daemon"
        startproc $DIGITEMP -c $DIGITEMP_CONFIG -q -t0 -n0 -d60 -l$DIGITEMP_LOGFILE
        rc_status -v
        ;;
    stop)
        echo -n "Shutting down temperature monitoring daemon"
        killproc -TERM $DIGITEMP
        rc_status -v
        ;;
    try-restart)
        $0 status >/dev/null && $0 restart
        rc_status
        ;;
    restart)
        $0 stop
        $0 start
        rc_status
        ;;
    force-reload)
        $0 try-restart
        rc_status
        ;;
    reload)
        $0 try-restart
        rc_status
        ;;
    status)
        echo -n "Checking for temperature monitoring service"
        checkproc $DIGITEMP
        rc_status -v
        ;;
    *)
        echo "Usage: $0 {start|stop|status|try-restart|restart|force-reload|reload}"
        exit 1
        ;;
esac
rc_exit
A: I don't think there is a clever way. owpython doesn't seem to support it, judging from the API documentation. I guess /proc is your safest bet. Maybe have a look at the source of the owpython module and check if you can find out how it works. A: Well, I have just started to look at iButtons and want to use Python. This looks more promising: http://www.ohloh.net/p/pyonewire
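On the question's slow Python script over the fuse mount: the cost is per file opened, so one pass that collects every temperature.N file and skips the broken temperature.ALL is about the best a filesystem-level reader can do. A hedged sketch (the mount point is a placeholder; the demo runs against a fake directory standing in for onewire/{deviceid}/log):

```python
import os
import tempfile

def read_temperature_log(log_dir):
    """Read every temperature.N file under log_dir and return the
    readings in index order. It only touches numbered entries, so
    temperature.ALL is never opened."""
    readings = {}
    for name in os.listdir(log_dir):
        suffix = name.split(".")[-1]
        if name.startswith("temperature.") and suffix.isdigit():
            with open(os.path.join(log_dir, name)) as f:
                readings[int(suffix)] = float(f.read().strip())
    return [readings[i] for i in sorted(readings)]

# Self-contained demo against a fake log directory (stands in for an
# owfs mount like onewire/<deviceid>/log).
demo = tempfile.mkdtemp()
for i, t in enumerate([20.5, 21.0, 19.75], start=1):
    with open(os.path.join(demo, "temperature.%d" % i), "w") as f:
        f.write("%s\n" % t)
print(read_temperature_log(demo))  # → [20.5, 21.0, 19.75]
```

Against a real owfs mount each open still triggers a 1-wire transaction, so this tidies the script but cannot remove the bus latency itself.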
Q: Hidden Markov models implementation in .net? Does anyone know of any HMM implementation in .NET? I have some code that I wrote, basically reverse-engineered from what little knowledge I have of the subject and from some C++ code for a very domain-specific application that one of my professors wrote. I want to improve it somewhat, and having someone else's work in the same language to compare against would really help me see whether I've got it right. A: A quick Google search returned this C# implementation of what appears to be a Hidden Markov Model (they said it was an n-gram, but the implementation appears to be an HMM). A: I'm sure it's very late, but maybe this will serve as a reference for other people looking: the Accord.NET framework integrates a simple, efficient implementation of HMMs.
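For cross-checking any implementation against another, the forward algorithm is the usual first test case. A hedged sketch in Python with a toy two-state weather model (all probabilities made up; a correct implementation in any language should reproduce the same number):

```python
def forward(obs, start_p, trans_p, emit_p):
    """Forward algorithm: probability of an observation sequence
    under an HMM, summing over all hidden state paths."""
    states = list(start_p)
    # Initialization with the first observation.
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    # Induction over the remaining observations.
    for o in obs[1:]:
        alpha = {s: sum(alpha[p] * trans_p[p][s] for p in states) * emit_p[s][o]
                 for s in states}
    return sum(alpha.values())

start = {"hot": 0.6, "cold": 0.4}
trans = {"hot": {"hot": 0.7, "cold": 0.3}, "cold": {"hot": 0.4, "cold": 0.6}}
emit = {"hot": {"H": 0.8, "T": 0.2}, "cold": {"H": 0.3, "T": 0.7}}
p = forward("HT", start, trans, emit)
print(p)  # → 0.228 (by hand: 0.0768 via hot + 0.1512 via cold)
```

A handy sanity check on any forward implementation: the probabilities of all possible observation sequences of a fixed length must sum to 1.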
Q: Create a small 'window-form' application that runs anywhere I work in the embedded world, using mainly C and no GUI at all (because there is no display screen). Moving over to the non-embedded world, in which I have nearly no experience, what is the best programming environment (language/IDE/etc.) for me to build a simple window-form application that will run on all the common platforms: Windows/Linux/Mac OS? I do not want to build a web app. I have my eye on Python and one of its widget libraries, Qt or wxWidgets. Is that a good option? A: I like GTK+ personally, but that or any of the ones you mentioned should be OK. I don't know which is the best in terms of least RAM usage. A: Both wx and Qt have embedded/universal versions where the widgets are drawn directly. They can both be called from Python, but if you have a very small system, Python or py2exe might not be available. A: Unless you want to embed HtmlWindow I'd go with wxWindows... works everywhere without problems so far for me. A: Qt is a good choice to start with. In my opinion it has the best (easy to use, simple & informative) API documentation. The package also includes many examples - from very basic to complex. And, yep, it's truly cross-platform. Check the Qt Licensing page; the library is free only for GPL projects. I'm using QDevelop as a text editor, but there are many other alternatives - Eclipse, KDevelop, Code::Blocks, the VS plugin, etc. A: I have worked with both PyQt and wxPython extensively. PyQt is better designed and comes with a very good UI designer, so you can quickly assemble your UI. wxPython has a very good demo and can do pretty much anything PyQt can do. I would prefer PyQt any day, but it may not be free for commercial purposes, while wxPython is free and is a decent cross-platform library. A: Why not use Swing and Java? It is quite cross-platform, and looks reasonable for form apps.
If you squint a bit and ignore the Java, it's quite pleasant - or alternatively, use one of the dynamic languages on the JVM (Groovy is my recommended one). A: What kind of application is it going to be? Have you considered a web-based application instead? Web-based apps can be super flexible in that sense - you can run them on any platform that has a modern browser. A: By far the simplest choice for creating native cross-platform applications is REALbasic. Give it a try and you'll have a working app for Mac OS X, Windows and Linux in minutes. No run-times or other stuff to worry about. A: I think you should try an HTML Application. It is something like a web page (it can contain DHTML, JavaScript, and ActiveX), but it executes like an .exe. Edit: Sorry for recommending HTML Applications; I just learned that they run on Windows only.
Q: Removing blotchiness on transparent PNGs filtered with additional opacity in IE I made a rotating image fader using JavaScript and CSS to show images and unload them. I wanted to use transparent PNGs. I didn't expect or care if they looked good in IE 6, but IE 7 and 8 treated them with the same disrespect; Firefox and other modern browsers looked great. Every picture with image.filter = alpha(opacity=xxx) applied looks like some of the transparency has leftover noise, maybe from compression or something; no matter what I do to the picture, there's still something there. I've worked around it by placing JPGs on a white background and using GIFs. Also, can someone tell me if this is actually a bug in IE? Let me know if you need an example and I'll make one. A: You have to use 'finishopacity' with 'opacity' in order to get even opacity across the picture. By the way, transparency doesn't work all that great in IE 6 either. I use Bob Osola's JavaScript fix for this, works great! http://homepage.ntlworld.com/bobosola/ A: I had this same issue -- all the IEs would fail, but Firefox and all other browsers would not have problems. The way I fixed it was to open up the PNG in Gimp, choose the Fuzzy Select Tool, set the threshold to 150%, check Antialiasing, uncheck Feather Edges, and check Select Transparent Areas. Next, I clicked on the transparent areas -- all the ones I could find on the image -- and pressed the Delete key (meaning "Clear"). Then I resaved the image. This resolves the problem about 98% of the way for most images in all the Internet Exploders. I want to caveat that instruction a little, though. If you choose Fuzzy Select and it ends up selecting more than the previous transparent area, then set the threshold to 3%, fuzzy select, press Delete, then reselect with fuzzy select at 150% and press Delete again, and it should be resolved without deleting any of your image. If you don't have Gimp, it's cross-platform and free for Windows, Mac, and Linux.
Q: Add options to select box without Internet Explorer closing the box? I'm trying to build a web page with a number of drop-down select boxes that load their options asynchronously when the box is first opened. This works very well under Firefox, but not under Internet Explorer. Below is a small example of what I'm trying to achieve. Basically, there is a select box (with the id "selectBox"), which contains just one option ("Any"). Then there is an onmousedown handler that loads the other options when the box is clicked.
<html>
<head>
<script type="text/javascript">
function appendOption(select, option) {
    try {
        select.add(option, null); // Standards compliant.
    } catch (e) {
        select.add(option); // IE only version.
    }
}
function loadOptions() {
    // Simulate an AJAX request that will call the
    // loadOptionsCallback function after 500ms.
    setTimeout(loadOptionsCallback, 500);
}
function loadOptionsCallback() {
    var selectBox = document.getElementById('selectBox');
    var option = document.createElement('option');
    option.text = 'new option';
    appendOption(selectBox, option);
}
</script>
</head>
<body>
<select id="selectBox" onmousedown="loadOptions();">
<option>Any</option>
</select>
</body>
</html>
The desired behavior (which Firefox does) is:
1. the user sees a closed select box containing "Any".
2. the user clicks on the select box.
3. the select box opens to reveal the one and only option ("Any").
4. 500ms later (or when the AJAX call has returned) the dropped-down list expands to include the new options (hard coded to 'new option' in this example).
So that's exactly what Firefox does, which is great. However, in Internet Explorer, as soon as the new option is added in "4" the browser closes the select box. The select box does contain the correct options, but the box is closed, requiring the user to click to re-open it. So, does anyone have any suggestions for how I can load the select control's options asynchronously without IE closing the drop-down box?
I know that I can load the list before the box is even clicked, but the real form I'm developing contains many such select boxes, which are all interrelated, so it will be much better for both the client and server if I can load each set of options only when needed. Also, if the results are loaded synchronously, before the select box's onmousedown handler completes, then IE will show the full list as expected - however, synchronous loading is a bad idea here, since it will completely "lock" the browser while the network requests are taking place. Finally, I've also tried using IE's click() method to open the select box once the new options have been added, but that does not re-open the select box. Any ideas or suggestions would be really appreciated!! :) Thanks! Paul. A: Have you considered calling the loadOptions method in the onblur event of one of the other interrelated fields on the form? This would load the list into the drop-down box before it is clicked, but the behavior should still be similar. I think you are going to have to explore slightly different options to obtain what you want, as you are probably stuck with Internet Explorer closing that drop-down list if you use the onmousedown or onclick events. Another downside to using those events is that if the user uses the keyboard to select the fields, your method may never get called. A: I would suggest loading the contents of the selects that don't depend on any other select boxes on page load. Then, in the onchange event of those selects, load the contents of the rest of the selects that depend on them. Your idea is sound from a programming point of view, but you will get that lag between clicking on the select and it being populated with all the options, which from the user's point of view looks kind of sloppy. A: I found a solution to this; the problem seems to lie in IE's implementation of onclick, hover, mouseover, etc. After the items are added to the dropdown, the dropdown closes.
If, instead of providing the method in the select attribute, you let jQuery add the function at runtime, it works.

$(function() {
    jQuery("#selectBox").live("focus", function() {
        loadOptions();
    });
});

The whole page:

<html>
<head>
<script src="jquery-latest.js" type="text/javascript"></script>
</head>
<body>
<select id="selectBox">
    <option>Any</option>
</select>
<script type="text/javascript">
$(function() {
    jQuery("#selectBox").live("focus", function() {
        loadOptions();
    });
});
function appendOption(select, option) {
    try {
        select.add(option, null); // Standards compliant.
    } catch (e) {
        select.add(option); // IE only version.
    }
}
function loadOptions() {
    // Simulate an AJAX request that will call the
    // loadOptionsCallback function after 500ms.
    setTimeout(loadOptionsCallback, 500);
}
function loadOptionsCallback() {
    var selectBox = document.getElementById('selectBox');
    var option = document.createElement('option');
    option.text = 'new option';
    appendOption(selectBox, option);
}
</script>
</body>
</html>
{ "language": "en", "url": "https://stackoverflow.com/questions/113218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What is the largest TCP/IP network port number allowable for IPv4? What is the highest port number one can use? A: As I understand it, you should only use up to 49151, as 49152 up to 65535 are reserved for ephemeral ports. A: It should be 65535. A: The port number is an unsigned 16-bit integer, so 65535. A: Just a follow-up to smashery's answer. The ephemeral port range (on Linux at least, and I suspect other Unices as well) is not fixed. It can be controlled by writing to /proc/sys/net/ipv4/ip_local_port_range The only restriction (as far as IANA is concerned) is that ports below 1024 are designated as well-known ports. Ports above that are free for use. Often you'll find that ports below 1024 are restricted to superuser access, I believe for this very reason. A: According to RFC 793, the port is a 16-bit unsigned int. This means the range is 0 - 65535. However, within that range, ports 0 - 1023 are generally reserved for specific purposes. I say generally because, apart from port 0, there is usually no enforcement of the 0-1023 reservation. TCP/UDP implementations usually don't enforce reservations apart from 0. You can, if you want to, run a web server's TLS port on port 80, or 25, or 65535 instead of the standard 443. Likewise, even though it is the standard that SMTP servers listen on port 25, you can run one on 80, 443, or others. Most implementations reserve 0 for a specific purpose - random port assignment. So in most implementations, saying "listen on port 0" actually means "I don't care what port I use, just give me some random unassigned port to listen on". So any limitation on using a port in the 0-65535 range, including 0, the ephemeral reservation range, etc., is implementation (i.e. OS/driver) specific; however, all of them, including 0, are valid ports in RFC 793. A: Valid numbers for ports are 0 to 2^16-1 = 0 to 65535, because a port number is 16 bits long.
However, ports are divided into:

Well-known ports: 0 to 1023 (used for system services, e.g. HTTP, FTP, SSH, DHCP ...)

Registered/user ports: 1024 to 49151 (you can use these for your server, but be careful: some well-known applications, like the Microsoft SQL Server database management system (MSSQL) or Apache Derby Network Server, already take ports from this range; i.e. it is not recommended to assign MSSQL's port to your server, because if MSSQL is running, your server will most probably fail to start due to a port conflict)

Dynamic/private ports: 49152 to 65535 (not used for servers but rather for clients, e.g. in a NATing service).

In programming you can use any number from 0 to 65535 for your server; however, you should stick to the ranges mentioned above, otherwise some system services or applications may not run because of a port conflict. Check the list of most ports here: https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers A: The largest port number is an unsigned short 2^16-1: 65535 A registered port is one assigned by the Internet Corporation for Assigned Names and Numbers (ICANN) to a certain use. Each registered port is in the range 1024–49151. Since 21 March 2001 the registry agency is ICANN; before that time it was IANA. Ports with numbers lower than those of the registered ports are called well-known ports; ports with numbers greater than those of the registered ports are called dynamic and/or private ports. Wikipedia: Registered Ports A: It depends on which range you're talking about, but the dynamic range goes up to 65535 or 2^16-1 (16 bits). http://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers
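These range boundaries are easy to encode; here is a small sketch (the cutoffs follow the IANA convention quoted above):

```python
def classify_port(port):
    """Return the IANA range a TCP/UDP port number falls into."""
    if not 0 <= port <= 65535:  # port numbers are unsigned 16-bit
        raise ValueError("port must be in 0..65535")
    if port <= 1023:
        return "well-known"
    if port <= 49151:
        return "registered"
    return "dynamic/private"

print(classify_port(22))      # well-known
print(classify_port(1433))    # registered (MSSQL's default port)
print(classify_port(49152))   # dynamic/private
```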
{ "language": "en", "url": "https://stackoverflow.com/questions/113224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "453" }
Q: How to release .Net apps without bundling .Net framework? I have a strange requirement to ship an application without bundling the .Net framework (to save memory footprint and bandwidth). Is this possible? Customers may or may not have the .Net runtime installed on their systems. Will doing Ngen take care of this problem? I was looking for something like the good old ways of releasing C++ apps (using the linker to link only the binaries you need). A: One option without using Ngen may be to release using the .Net Framework 3.5 SP1 "Client Profile". This is a sub-set of the .Net Framework used for building client applications which can be downloaded as a separate, much smaller, package. See details from the BCL Team Blog here and Scott Guthrie here. A: The common solution in this situation, and the de facto standard, is that your customers should already have the proper version of the .Net framework, since it is part of Windows Update. Your installer should check the availability of the .NET version you use on the client's machine and propose downloading it from Microsoft. This prevents your company from transferring it through your channel and ensures your application has the correct infrastructure. A: Have you checked Salamander from Remotesoft? A: Just FYI, this topic has already been discussed. Unfortunately I can't find the link at the moment (SO search should be improved). OK, I found a similar question: .NET Framework dependency I recall that there was exactly the same question, but I can't find it :( A: If your software requires .NET then your end users will need the same version of .NET. You cannot "link" .NET into your executable to create a single .exe, like you can with MFC or Delphi. If your installer doesn't install the .NET runtime then you will need to ensure that the user is aware of this and point them to the .NET download from Microsoft. A: You can use the "Client Profile"; it is a subset of the .NET Framework for desktop applications.
The Client Profile is about 20 MB in size. A: You can also include the bootstrapper 'setup.exe' that is created in VS. It'll detect whether you have the necessary .NET version and, if so, launch the installer; if not, it'll prompt you to download the framework.
{ "language": "en", "url": "https://stackoverflow.com/questions/113233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Centered background image is off by 1px My web page sits in a DIV that is 960px wide. I center this DIV in the middle of the page by using the code:

html,body{background: url(images/INF_pageBg.gif) center top repeat-y #777777;text-align:center;}
#container{background-color:#ffffff;width:960px;text-align:left;margin:0 auto 0 auto;}

I need the background image of the html/body to tile down the middle of the page, which it does; however, if the viewable pane in the browser is an odd number of pixels wide then the centered background and centered DIV don't align together. This only happens in FF. Does anybody know of a workaround? A: Yeah, it's a known issue. Unfortunately you can only fix the div and image widths, or use a script to dynamically change the style.backgroundPosition property. Another trick is to put an expression in the CSS class definition. A: The most common problem is that your background image has an odd width while your container has an even one. I have written an article (in my best English) where I also explain how the browser positions your picture: check it out here. A: I found that by making the background image an odd number of pixels wide, the problem goes away for Firefox. Setting padding:0px 0px 0px 1px; fixes the problem for IE. Carlo Capocasa, Travian Games A: I was able to resolve this with jQuery:

$(document).ready(function(){
    $('body').css({
        'margin-left': $(document).width()%2
    });
});

A: I had the same problem. To get the background centered, you need to have a background image wider than the viewport. Try to use a background 2500px wide. It will force the browser to center the part of the image that is viewable. Let me know if it works for you. A: What about creating a wrapper div with the same background-image?
body{
    background: url(your-image.jpg) no-repeat center top;
}
#wrapper{
    background: url(your-image.jpg) no-repeat center top;
    margin: 0 auto;
    width: 984px;
}

The wrapper has an even width, so the background will keep the same position on any screen size.
{ "language": "en", "url": "https://stackoverflow.com/questions/113253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: VB.NET - Should a Finalize method be added when implementing IDisposable? In Visual Studio, when I type the line "Implements IDisposable", the IDE automatically adds:

* a disposedValue member variable
* a Sub Dispose() Implements IDisposable.Dispose
* a Sub Dispose(ByVal disposing As Boolean)

The Dispose() should be left alone, and the clean-up code should be put in Dispose(disposing). However the Dispose Finalize Pattern says you should also override Sub Finalize() to call Dispose(False). Why doesn't the IDE also add this? Must I add it myself, or is it somehow called implicitly? EDIT: Any idea why the IDE automatically adds 80% of the required stuff but leaves out the Finalize method? Isn't the whole point of this kind of feature to help you not forget these things? EDIT2: Thank you all for your excellent answers, this now makes perfect sense! A: No, you don't need to have Finalize unless you have unmanaged resources to clean up. In most cases the reason a class is disposable is that it keeps references to other managed IDisposable objects. In this case no Finalize method is necessary or desirable. A:

Implements IDisposable

Public Overloads Sub Dispose() Implements IDisposable.Dispose
    Dispose(True)
    GC.SuppressFinalize(Me)
End Sub

Protected Overloads Sub Dispose(ByVal disposing As Boolean)
    If disposing Then
        ' Free other state (managed objects).
    End If
    ' Free your own state (unmanaged objects).
    ' Set large fields to null.
End Sub

Protected Overrides Sub Finalize()
    Dispose(False)
    MyBase.Finalize()
End Sub

A: If you actually are holding non-managed resources that will not be automatically cleaned up by the garbage collector, and you are cleaning those up in your Dispose(), then yes, you should do the same in Finalize(). If you're implementing IDisposable for some other reason, implementing Finalize() isn't required. The basic question is this: If Dispose() wasn't called and your object garbage collected, would memory leak? If yes, implement Finalize.
If no, you don't need to. Also, avoid implementing Finalize "just because it's safer". Objects with custom finalizers can potentially need two GC passes to free them -- once to put them on the pending finalizers queue, and a second pass to actually free their memory. A: As others have said, you don't need to implement a finalizer unless you're directly holding unmanaged resources. Also, assuming you're working in .NET 2.0 or later, it's unlikely you'll ever need to implement a finalizer because typically SafeHandle can be used to wrap your unmanaged resources. I wrote a fairly long blog post covering the background and implementation of IDisposable and finalizers a while back, which may be worth a read if you're not totally clear about it.
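For readers coming from other languages, the same deterministic-cleanup-plus-safety-net shape exists elsewhere. Here is a rough Python analogy (not a translation of the VB pattern; the class name is invented for this sketch): close() plays the role of Dispose(), the weakref.finalize callback is the Finalize() safety net, and detach() corresponds to GC.SuppressFinalize().

```python
import weakref

class ManagedHandle:
    """Rough analogy of the Dispose/Finalize pattern: deterministic
    cleanup via close()/context manager, plus a GC-time safety net."""

    def __init__(self):
        self.closed = False
        # Safety net: would run at garbage collection if close()
        # were never called (here it does nothing visible).
        self._finalizer = weakref.finalize(self, lambda: None)

    def close(self):
        if not self.closed:
            self.closed = True          # free resources here
            self._finalizer.detach()    # ~ GC.SuppressFinalize(Me)

    # Context-manager support gives the deterministic cleanup that
    # a C# 'using' / VB 'Using' block provides.
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()

with ManagedHandle() as h:
    pass
print(h.closed)            # True
print(h._finalizer.alive)  # False: the safety net was suppressed
```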
{ "language": "en", "url": "https://stackoverflow.com/questions/113267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do I remove duplication in shoulda tests? Here is what I have:

context "Create ingredient from string" do
  context "1 cups butter" do
    setup do
      @ingredient = Ingredient.create(:ingredient_string => "1 cups butter")
    end

    should "return unit" do
      assert_equal @ingredient.unit, 'cups'
    end

    should "return amount" do
      assert_equal @ingredient.amount, 1.0
    end

    should "return name" do
      assert_equal @ingredient.name, 'butter'
    end
  end

  context "1 (18.25 ounce) package devil's food cake mix with pudding" do
    setup do
      @ingredient = Ingredient.create(:ingredient_string => "1 (18.25 ounce) package devil's food cake mix with pudding")
    end

    should "return unit" do
      assert_equal @ingredient.unit, '(18.25 ounce) package'
    end

    should "return amount" do
      assert_equal @ingredient.amount, 1.0
    end

    should "return name" do
      assert_equal @ingredient.name, 'devil\'s food cake mix with pudding'
    end
  end
end

Clearly there is a lot of duplication there. Any thoughts on how to remove it, at the very least the repeated context and string? A: Here's a solution to your specific problem. The idea is to create a class method (like Shoulda's context, setup and should).
Encapsulate the repetition in a class method accepting all the varying parts as arguments, like this:

def self.should_get_unit_amount_and_name_from_string(unit, amount, name, string_to_analyze)
  context string_to_analyze do
    setup do
      @ingredient = Ingredient.create(:ingredient_string => string_to_analyze)
    end

    should "return unit" do
      assert_equal @ingredient.unit, unit
    end

    should "return amount" do
      assert_equal @ingredient.amount, amount
    end

    should "return name" do
      assert_equal @ingredient.name, name
    end
  end
end

Now you can call all these encapsulated tests with one-liners (5-liners here for readability ;-)

context "Create ingredient from string" do
  should_get_unit_amount_and_name_from_string(
    'cups', 1.0, 'butter',
    "1 cups butter")
  should_get_unit_amount_and_name_from_string(
    '(18.25 ounce) package', 1.0, 'devil\'s food cake mix with pudding',
    "1 (18.25 ounce) package devil's food cake mix with pudding")
end

In some cases, you may want to accept a block which could serve as your Shoulda setup. A: Duplication in tests is not necessarily a Bad Thing(tm). I suggest you read the following articles from Jay Fields: http://blog.jayfields.com/2007/06/testing-one-assertion-per-test.html http://blog.jayfields.com/2008/05/testing-duplicate-code-in-your-tests.html They make a convincing case for code duplication in tests and for keeping one assertion per test. A: Tests/specs are not production code, so being DRY is not a priority. The principle is that the specs should be clear to read, even if it means there is duplication of text across tests. Don't be too concerned about specs being DRY. Overemphasis on DRY tests tends to make things more difficult, as you have to jump around to the definitions of things to understand what is happening. A: Personally, for this test, I wouldn't use Shoulda.
You can easily remove duplication by using dynamic method creation, as follows:

class DefineMethodTest < Test::Unit::TestCase
  [{:string => '1 cups butter',
    :unit => 'cups',
    :amount => 1.0,
    :name => 'butter'},
   {:string => "1 (18.25 ounce) package devil's food cake mix with pudding",
    :unit => '(18.25 ounce) package',
    :amount => 1.0,
    :name => "devil's food cake mix with pudding"}].each do |t|
    define_method "test_create_ingredient_from_string_#{t[:string].downcase.gsub(/[^a-z0-9]+/, '_')}" do
      @ingredient = Ingredient.create(:ingredient_string => t[:string])
      assert_equal @ingredient.unit, t[:unit], "Should return unit #{t[:unit]}"
      assert_equal @ingredient.amount, t[:amount], "Should return amount #{t[:amount]}"
      assert_equal @ingredient.name, t[:name], "Should return name #{t[:name]}"
    end
  end
end
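The same table-driven idea carries over to other languages. Here is a sketch in Python (the parse_ingredient function is a hypothetical stand-in for Ingredient.create, hard-wired to the two string shapes in the question):

```python
import re

def parse_ingredient(s):
    """Hypothetical stand-in for Ingredient.create: split an
    ingredient string into (amount, unit, name).  Only the two
    shapes from the question are handled."""
    m = re.match(r"(\d+(?:\.\d+)?)\s+(\(.*?\)\s*package|cups)\s+(.*)", s)
    amount, unit, name = m.groups()
    return float(amount), unit, name

# One data table, one loop, one assertion body -- mirroring the
# Ruby define_method version: adding a case is a one-line change.
CASES = [
    ("1 cups butter",
     (1.0, "cups", "butter")),
    ("1 (18.25 ounce) package devil's food cake mix with pudding",
     (1.0, "(18.25 ounce) package", "devil's food cake mix with pudding")),
]

for string, expected in CASES:
    assert parse_ingredient(string) == expected, string
print("all cases pass")
```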
{ "language": "en", "url": "https://stackoverflow.com/questions/113275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Maintaining Multiple Databases Across Several Platforms What's the best way to maintain multiple databases across several platforms (Windows, Linux, Mac OS X and Solaris) and keep them in sync with one another? I've tried several different programs and nothing seems to work! A: I think you should ask yourself why you have to go through the hassle of maintaining multiple databases across several platforms and have them in sync with one another. Sounds like there's a lot of redundancy there. Why not just have one instance of that database, since I'm sure it can be made accessible (e.g. via an SOA approach) to multiple apps on multiple platforms anyway? A: Why go through the hassle? Management claims it's more expensive? Here's how to prove them wrong. Pick one database, call it the "master" or "system of record". Write scripts to export data from the master and load it into your copies. If you have a nice database (MySQL, SQL/Server, Oracle or DB2) there are nice tools to do this replication for you. If you have a mixture of databases, you'll have to resort to exporting changed data and reloading changed data. The idea is that this is a 1-way copy: master to replicants. Fix each application, one at a time, to do updates in the master database only. Since each application has a JDBC (or ODBC or whatever) connection to a database, it can just as easily be a connection to the master database. Once you've fixed the applications to update only the master, the replicas are worthless. Management can insist that it's cheaper to have them. And there they are -- clones of the master database -- just what management says you must have. Your life is simpler because the apps are only updating the system of record. They're happy because you have all the cloned databases lying around.
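The export/load scripts described above can be sketched in a few lines. Here is a minimal one-way refresh using sqlite as a stand-in for both engines (real deployments would use each engine's native dump/load or replication tools, and would copy only changed rows):

```python
import sqlite3

def replicate(master, replica, table):
    """One-way copy: the replica is refreshed from the master.
    A sketch of 'export from the system of record, load into the
    copies'; the full-table refresh stands in for change capture."""
    cur = master.execute("SELECT * FROM %s" % table)
    cols = [d[0] for d in cur.description]
    rows = cur.fetchall()
    replica.execute("DELETE FROM %s" % table)  # full refresh
    replica.executemany(
        "INSERT INTO %s (%s) VALUES (%s)"
        % (table, ", ".join(cols), ", ".join("?" * len(cols))),
        rows)
    replica.commit()

master = sqlite3.connect(":memory:")
replica = sqlite3.connect(":memory:")
for db in (master, replica):
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
master.executemany("INSERT INTO users VALUES (?, ?)",
                   [(1, "alice"), (2, "bob")])
replicate(master, replica, "users")
print(replica.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # 2
```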
{ "language": "en", "url": "https://stackoverflow.com/questions/113277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Do you create your own code generators? The Pragmatic Programmer advocates the use of code generators. Do you create code generators on your projects? If yes, what do you use them for? A: Code generators, if used widely without good justification, make code less understandable and decrease maintainability (the same goes for dynamic SQL, by the way). Personally I use them with some ORM tools, because their usage there is mostly obvious, and sometimes for things like searcher-parser algorithms and grammar analyzers which are not designed to be maintained by hand. Cheers. A: In hardware design, it's fairly common practice to do this at several levels of the 'stack'. For instance, I wrote a code generator to emit Verilog for various widths, topologies, and structures of DMA engines and crossbar switches, because the constructs needed to express this parameterization weren't yet mature in the synthesis and simulation tool flows. It's also routine to emit logical models all the way down to layout data for very regular things that can be expressed and generated algorithmically, like SRAM, cache, and register file structures. I also spent a fair bit of time writing, essentially, a code generator that would take an XML description of all the registers on a System-on-Chip, and emit HTML (yes, yes, I know about XSLT, I just found emitting it programmatically to be more time-effective), Verilog, SystemVerilog, C, Assembly etc. "views" of that data for different teams (front-end and back-end ASIC design, firmware, documentation, etc.) to use (and keep them consistent by virtue of this single XML "codebase"). Does that count? People also like to write code generators for e.g. taking terse descriptions of very common things, like finite state machines, and mechanically outputting more verbose imperative language code to implement them efficiently (e.g. transition tables and traversal code).
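The finite-state-machine case just mentioned is small enough to sketch end to end. Here is a hypothetical generator that turns a terse transition table into imperative code (the spec format and names are invented for illustration):

```python
# Terse FSM description: (state, event) -> next state.
SPEC = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

def generate_fsm(spec, func_name="step"):
    """Emit Python source for a transition function from the table.
    An active generator: rerun it whenever the spec changes, and
    never hand-edit the output."""
    lines = [
        "# This code was automatically generated. DO NOT EDIT.",
        "def %s(state, event):" % func_name,
        "    transitions = {",
    ]
    for (state, event), nxt in sorted(spec.items()):
        lines.append("        (%r, %r): %r," % (state, event, nxt))
    lines += [
        "    }",
        "    # Unknown events leave the state unchanged.",
        "    return transitions.get((state, event), state)",
    ]
    return "\n".join(lines)

source = generate_fsm(SPEC)
namespace = {}
exec(source, namespace)          # compile the generated code
step = namespace["step"]
print(step("idle", "start"))     # running
print(step("running", "stop"))   # idle
print(step("idle", "bogus"))     # idle (unchanged)
```

Note the "DO NOT EDIT" banner in the generated output, matching the advice given elsewhere in this thread about marking generated files.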
A: We use code generators for generating data entity classes, database objects (like triggers and stored procs), service proxies, etc. Anywhere you see a lot of repetitive code following a pattern, with a lot of manual work involved, code generators can help. But you should not use them so much that maintainability becomes a pain. Some issues also arise if you want to regenerate the code. Tools like Visual Studio and CodeSmith have their own templates for most of the common tasks and make this process easier. But it is easy to roll out your own. A: It is often useful to create a code generator that generates code from a specification - usually one that has regular tabular rules. It reduces the chance of introducing an error via a typo or omission. A: Yes, I developed my own code generator for the AAA protocol Diameter (RFC 3588). It could generate structures and APIs for Diameter messages, reading from an XML file that described the Diameter application's grammar. That greatly reduced the time to develop a complete Diameter interface (such as SH/CX/RO etc.). A: In "Pragmatic Programmer" Hunt and Thomas distinguish between Passive and Active code generators. Passive generators are run once, after which you edit the result. Active generators are run as often as desired, and you should never edit the result because it will be replaced. IMO, the latter are much more valuable because they approach the DRY (don't-repeat-yourself) principle. If the input information to your program can be split into two parts, the part that changes seldom (A) (like metadata or a DSL), and the part that is different each time the program is run (B) (the live input), you can write a generator program that takes only A as input, and writes out an ad-hoc program that only takes B as input. (Another name for this is partial evaluation.) The generator program is simpler because it only has to wade through input A, not A and B.
Also, it does not have to be fast because it is not run often, and it doesn't have to care about memory leaks. The ad-hoc program is faster because it doesn't have to wade through input that is almost always the same (A). It is simpler because it only has to make decisions about input B, not A and B. It's a good idea for the generated ad-hoc program to be quite readable, so you can more easily find any errors in it. Once you get the errors removed from the generator, they are gone forever. In one project I worked on, a team designed a complex database application with a design spec two inches thick and a lengthy implementation schedule, fraught with concerns about performance. By writing a code generator, two people did the job in three months, and the source code listings (in C) were about a half-inch thick, and the generated code was so fast as to not be an issue. The ad-hoc program was regenerated weekly, at trivial cost. So active code generation, when you can use it, is a win-win. And, I think it's no accident that this is exactly what compilers do. A: In my opinion a good programming language would not need code generators, because introspection and runtime code generation would be part of the language, e.g. Python's metaclasses, the new module, etc. A: Code generators, used widely, tend to produce unmanageable code in long-term usage. However, if it is absolutely imperative to use a code generator (Eclipse VE for Swing development is what I use at times), then make sure you know what code is being generated. Believe me, you wouldn't want code in your application that you are not familiar with. A: Writing your own generator for a project is not efficient. Instead, use a generator such as T4, CodeSmith or Zontroy. T4 is more complex, and you need to know a .Net programming language. You have to write your template line by line, and you have to handle relational data operations on your own. You can use it from Visual Studio.
CodeSmith is a functional tool, and there are plenty of templates ready to use. It is based on T4, and writing your own template takes as much time as it does in T4. There is a trial and a commercial version. Zontroy is a new tool with a user-friendly interface. It has its own template language and is easy to learn. There is an online template market, and it is developing. You can even deliver templates and sell them online in the market. It has a free and a commercial version; even the free version is enough to complete a medium-scale project. A: There might be a lot of code generators out there; however, I always create my own to make the code more understandable and to suit the frameworks and guidelines we are using. A: We use a generator for all new code to help ensure that coding standards are followed. We recently replaced our in-house C++ generator with CodeSmith. We still have to create the templates for the tool, but it seems ideal to not have to maintain the tool ourselves. A: My most recent need for a generator was a project that read data from hardware and ultimately posted it to a 'dashboard' UI. In between were models, properties, presenters, events, interfaces, flags, etc. for several data points. I worked up the framework for a couple of data points until I was satisfied that I could live with the design. Then, with the help of some carefully placed comments, I put the "generation" in a Visual Studio macro, tweaked and cleaned the macro, added the data points to a function in the macro to call the generation - and saved several tedious hours (days?) in the end. Don't underestimate the power of macros :) I am also now trying to get my head around CodeRush customization capabilities to help me with some more local generation requirements. There is powerful stuff in there if you need on-the-fly decision making when generating a code block. A: I have my own code generator that I run against SQL tables.
It generates the SQL procedures to access the data, the data access layer and the business logic. It has done wonders in standardising my code and naming conventions. Because it expects certain fields in the database tables (such as an id column and an updated datetime column) it has also helped standardise my data design. A: How many are you looking for? I've created two major ones and numerous minor ones. The first of the major ones allowed me to generate 1500-line programs (give or take) that had a strong family resemblance but were attuned to the different tables in a database - and to do that fast, and reliably. The downside of a code generator is that if there's a bug in the generated code (because the template contains a bug), then there's a lot of fixing to do. However, for languages or systems where there is a lot of near-repetitious coding to be done, a good (enough) code generator is a boon (and more of a boon than a 'doggle'). A: In embedded systems, sometimes you need a big block of binary data in the flash. For example, I have one that takes a text file containing bitmap font glyphs and turns it into a .cc/.h file pair declaring interesting constants (such as first character, last character, character width and height) and then the actual data as a large static const uint8_t[]. Trying to do such a thing in C++ itself, so the font data would auto-generate on compilation without a first pass, would be a pain and most likely illegible. Writing a .o file by hand is out of the question. So is breaking out graph paper, hand encoding to binary, and typing all that in. IMHO, this kind of thing is what code generators are for. Never forget that the computer works for you, not the other way around. BTW, if you use a generator, always always always include some lines such as this at both the start and end of each generated file: // This code was automatically generated from Font_foo.txt. DO NOT EDIT THIS FILE.
// If there's a bug, fix the font text file or the generator program, not this file. A: Yes, I've had to maintain a few. CORBA or some other object-communication style of interface is probably the general thing that I think of first. You have object definitions that are provided to you by the interface you are going to talk over, but you still have to build those objects up in code. Building and running a code generator is a fairly routine way of doing that. This can become a fairly lengthy compile just to support some legacy communication channel, and since there is a large tendency to put wrappers around CORBA to make it simpler, things just get worse. In general, if you have a large number of structures, or just rapidly changing structures that you need to use, but you can't handle the performance hit of building objects through metadata, then you're into writing a code generator. A: I can't think of any projects where we needed to create our own code generators from scratch, but there are several where we used preexisting generators. (I have used both Antlr and the Eclipse Modeling Framework for building parsers and models in Java for enterprise software.) The beauty of using a code generator that someone else has written is that the authors tend to be experts in that area and have solved problems that I didn't even know existed yet. This saves me time and frustration. So even though I might be able to write code that solves the problem at hand, I can generate the code a lot faster and there is a good chance that it will be less buggy than anything I write. A: If you're not going to write the code, are you going to be comfortable with someone else's generated code? Is it cheaper in both time and $$$ in the long run to write your own code or a code generator? I wrote a code generator that would build hundreds of classes (Java) that would output XML data from a database in a DTD- or schema-compliant manner.
The code generation was generally a one-time thing; the code would then be smartened up with various business rules etc. The output was for a rather pedantic bank. A: Code generators are a workaround for programming language limitations. I personally prefer reflection instead of code generators, but I agree that code generators are more flexible, and the resulting code is obviously faster at runtime. I hope future versions of C# will include some kind of DSL environment. A: The only code generators that I use are web service parsers. I personally stay away from code generators because of the maintenance problems for new employees or a separate team after hand-off. A: I write my own code generators, mainly in T-SQL, which are called during the build process. Based on meta-model data, they generate triggers, logging, C# const declarations, INSERT/UPDATE statements, and data model information to check whether the app is running on the expected database schema. I still need to write a forms generator for increased productivity, more specs and less coding ;) A: I've created a few code generators. I had a passive code generator for SQL stored procedures which used templates. It generated 90% of our stored procedures. Since we made the switch to Entity Framework I've created an active code generator using T4 (Text Template Transformation Toolkit) inside Visual Studio. I've used it to create basic repository partial classes for our entities. It works very nicely and saves a bunch of coding. I also use T4 for decorating the entity classes with certain attributes. A: I use code generation features provided by EMF - Eclipse Modeling Framework. A: Code generators are really useful in many cases, especially when mapping from one format to another. I've done code generators for IDL to C++, database tables to OO types, and marshalling code, just to name a few.
I think the point the authors are trying to make is that if you're a developer you should be able to make the computer work for you. Generating code is just one obvious task to automate. I once worked with a guy who insisted that he would do our IDL-to-C++ mapping manually. In the beginning of the project he was able to keep up, because the rest of us were trying to figure out what to do, but eventually he became a bottleneck. I did a code generator in Perl and then we could pretty much do his "work" in a few minutes. A: See our "universal" code generator based on program transformations. I'm the architect and a key implementer. It is worth noting that a significant fraction of this generator is generated using this generator. A: We use the Telosys code generator in our projects: http://www.telosys.org/ We created it to reduce the development time spent on recurrent tasks like CRUD screens, documentation, etc... For us the most important thing is being able to customize the generator's templates, in order to create new generation targets if necessary and to customize existing templates. That's why we have also created a template editor (for Velocity .vm files). It works well as a Java/Spring/AngularJS code generator and can be adapted for other targets (PHP, C#, Python, etc.)
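Several of the answers above describe template-driven generation (the T-SQL generators, T4, Telosys's Velocity templates). As a minimal, hedged sketch of that idea in Python: the class name and field list below are made up for illustration; real generators would read them from metadata such as a database schema or IDL.

```python
# Minimal sketch of a passive, template-driven code generator.
# The "model" (class name and field list) is illustrative; real
# generators read it from metadata such as a database schema or IDL.

CLASS_TEMPLATE = """\
class {name}:
    def __init__(self, {args}):
{assignments}
"""

def generate_class(name, fields):
    """Render Python source for a simple data-holder class."""
    args = ", ".join(fields)
    assignments = "\n".join(
        "        self.{0} = {0}".format(f) for f in fields
    )
    return CLASS_TEMPLATE.format(name=name, args=args, assignments=assignments)

source = generate_class("Account", ["owner", "balance"])
namespace = {}
exec(source, namespace)          # compile and load the generated code
acct = namespace["Account"]("alice", 100)
print(acct.owner, acct.balance)  # -> alice 100
```

The same pattern scales to generating stored procedures or C# const declarations: only the template and the metadata source change.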
{ "language": "en", "url": "https://stackoverflow.com/questions/113286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: Multiple services from the same executable I've written a small service (plain Win32) and I'd like to know if it's possible to run multiple instances of it when multiple users are logged on. Basically, let's say we've got UserA and UserB; for UserA the service would log on as "domain\UserA" and for UserB the service would log on as "domain\UserB" - this is from the same executable, of course. I can change the logon dynamically using the ChangeServiceConfig() function, but it seems to change it system-wide, while I'd like each user to have his own copy of the service running only for him. Thank you in advance for any pointers. A: Win32 services are designed to be system-wide, and start running before any user is logged in. If you want something to run on a per-user basis, it's probably better to design it as a regular application and run it from the user's Startup group. A: The whole concept of a service is that it is started before any user is even logged on, so even if this were possible, you wouldn't be able to choose between UserA and UserB when the service starts, because neither of them is logged on yet. A possible direction would be for the service to run as SYSTEM and, every few minutes, check if there is a user logged in; if there is, impersonate that user and do its work. A: Is it possible to perhaps have the service create child processes which then adopt the user credentials (or are started with them)? This way you're still limited to a single instance of the service, but it is able to do its per-user jobs all the same. IIRC the Windows Task Scheduler service does this. A: Yes, that sounds close (I'm answering the comment from Greg, but comments are too short to fit my reply). I don't know the list of users beforehand, but there's a GUI control application that would be used to enter username/password pairs for each user. So, UserA would log on, run the application, enter his credentials, and the service would use that.
At the same time (after UserA has logged off, but the service is still running with UserA's credentials) UserB logs on, uses the app, and another copy of the service starts running as the logged-on UserB. Thus, at the same time both UserA's and UserB's services are running. Is that possible? A: You are probably looking to Impersonate the users. Check out some references I found with a quick Google search here: * *MSDN Article on WindowsIdentity.Impersonate *.Net Security Blog Article A: It sounds as if you actually have two different, conflicting requirements, as to timing and identity. * *Run as each logged in user *Run automatically even if no user is logged in. There is no way to do this trivially; instead, consider wrapping your program in a service: the program will run normally on startup for each user (either through the Startup folder or the Task Scheduler), and in addition create a service to run your app as a system user (or any other user you define). Since you also need (you mention this in the comments) the app to keep running as the end user even after he logs out, you can have the service manage this process for you. HOWEVER, this might not be the best idea, since the user is still effectively logged in. This can have numerous side effects, including security, performance (too many users logged in at once...), etc. A: You could create a service application and a non-service (normal) application and make them communicate through IPC (Mapped File, Pipes, MailSlots ... you name it). This way you solve all the troubles. NOTE: The same application can behave differently - when started as a service and when started by a user, but in the end it is the same thing: you still have two applications (no matter if you have only one executable). A: Running with different accounts is possible. In fact, this is common. See svchost.exe, which implements a bunch of OS services. I just don't get how you determine which accounts.
In a big company, many PCs are set up so that all 100,000+ employees could use them. You don't want to run your service as the logged-in users, nor do you want to run it for all 100,000 users. So for which accounts, I have to ask? A: A Windows process can only execute with the privileges of one single user at a time. This applies to services and other processes. With enough privileges it is possible to "switch" between different users by using impersonation. The most common pattern for what you are trying to do is to have one instance of a privileged service which registers for log in/log out events and creates child processes accordingly, each one of them impersonating the logged-in user. The pattern will also simplify UI, as each process runs on each separate user's Desktop, as if it were a regular application. If you keep the privileged service's code as simple as possible, this pattern has the added benefit that you are minimizing the attack surface of your code. If a user finds a security problem on the "running as user" side of your service, it is a non-issue, while security problems in the privileged service could lead to privilege escalation. In fact, before Vista, privileged services implementing a Windows message processing loop are vulnerable to a type of attack called Shatter attacks, which you should be aware of given what you are trying to do. A: You don't need multiple instances of your service. From the description of your problem it looks like what you need is one service that can impersonate users and execute jobs on their behalf. You can do this by implementing a COM object hosted in a service. Your client application (that the end user runs) will call CoCreateInstanceEx on your CLSID. This would cause a new instance of your COM object to be created in your service.
Then the application can use a method on one of your interfaces to pass the collected user credentials to the COM object (though I'd be wary of collecting credentials, and would instead see if I could pass the user token). The COM object, which is running in the context of the service, can then call LogonUser() to log on the user and impersonate it, so it can do whatever on her behalf (like finding the user's local appdata folder :-)). Other answers have good links to impersonating users using credentials or a token. If you feel comfortable with COM, I'd suggest you create your objects as multithreaded (living in the MTA), so that their execution is not serialized by COM. If not, the default single-threaded model would be good enough for you. The Visual Studio ATL wizard can generate the skeleton of a COM object living in a service. You can also read about implementing a Windows Service with ATL here: http://msdn.microsoft.com/en-us/library/74y2334x(VS.80).aspx If you don't know COM at all, you can use other communication channels to pass the credentials to your service. In any case, once your service gets the credentials, all the work on behalf of the user will have to be executed on a background thread, so as to not block the application running as the user. A: You want this running all the time, so you want a service. You want something tracking each user, so you want an application which runs in the user session and communicates with the service (using named pipes or DCOM or whatever fits your requirements).
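The architecture the last few answers converge on (one system-wide service plus one worker per user) can be sketched in plain Python objects. The class and method names below are hypothetical stand-ins; in the real Win32 design, registration would call LogonUser()/CreateProcessAsUser() instead of constructing an object:

```python
# Hypothetical sketch: a single privileged service managing one
# per-user worker, instead of N copies of the service itself.

class PerUserWorker:
    def __init__(self, username):
        self.username = username   # the worker impersonates this user
        self.running = True

    def stop(self):
        self.running = False

class Service:
    """One system-wide instance; workers come and go with users."""
    def __init__(self):
        self.workers = {}

    def on_user_registered(self, username):
        # Real implementation: LogonUser()/CreateProcessAsUser() here.
        if username not in self.workers:
            self.workers[username] = PerUserWorker(username)
        return self.workers[username]

    def on_user_done(self, username):
        worker = self.workers.pop(username, None)
        if worker is not None:
            worker.stop()

svc = Service()
svc.on_user_registered("domain\\UserA")
svc.on_user_registered("domain\\UserB")  # both users served concurrently
print(len(svc.workers))                  # -> 2
```

Keeping the privileged part this small is exactly the attack-surface argument made above: the service only tracks sessions and delegates everything else to per-user workers.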
{ "language": "en", "url": "https://stackoverflow.com/questions/113288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is the host localhost always available on the local system? Is it always possible to ping localhost and have it resolve to 127.0.0.1? I know Windows Vista, XP, Ubuntu and Debian do it, but does everyone do it? A: 127.0.0.1 is reserved in any IP stack for the local host. "localhost" as a host name is not guaranteed to be there. If the host/DNS settings are misconfigured, localhost will not resolve. Example on a Debian box:

topaz:/root# vi /etc/hosts
[comment out localhost entry]
topaz:/root# ping localhost
ping: unknown host localhost

A: No. For a start, localhost is a convention rather than a rule. Mostly it's set by default, but there's nothing to mandate it. Secondly, there's nothing to say that you can always ping 127.0.0.1. As an example (on a Unix system) try the following:

sudo ifconfig lo down
ping 127.0.0.1

As cruizer said, 127.0.0.1 (if it exists) is defined to be the local machine. But it doesn't have to exist. A: The pedantic answer (sorry, Greg :) is to read RFC 3330: 127.0.0.0/8 - This block is assigned for use as the Internet host loopback address. A datagram sent by a higher level protocol to an address anywhere within this block should loop back inside the host. This is ordinarily implemented using only 127.0.0.1/32 for loopback, but no addresses within this block should ever appear on any network anywhere [RFC1700, page 5]. (The "ordinarily" above should probably be read as "often" - most current operating systems support using all of 127.0.0.0/8 as loopback.) With regards to whether "localhost" always resolves to 127.0.0.1 - he is correct, it's generally the same, but technically implementation specific:

~> dig localhost.t...e.org
...
;; ANSWER SECTION:
localhost.t...e.org. 86400 IN A 127.0.0.2

A: Any correct implementation of TCP/IP will reserve the address 127.0.0.1 to refer to the local machine. However, the mapping of the name "localhost" to that address is generally dependent on the system hosts file.
If you were to remove the localhost entry from hosts, then the localhost name may no longer resolve properly at all. A: If the DNS servers your client is connected to are following RFC 1912 then yes, localhost should resolve to 127.0.0.1. RFC1912 4.1 ... Certain zones should **always be present** in nameserver configurations: primary localhost localhost primary 0.0.127.in-addr.arpa 127.0 ... The "localhost" address is a "special" address which always refers to the local host. It should contain the following line: localhost. IN A 127.0.0.1 The "127.0" file should contain the line: 1 PTR localhost. A: localhost pretty much resolves to 127.0.0.1 on most platforms, and all IPs that start with 127. are loopback addresses as well. Try pinging 127.255.255.254 and it'll still respond. A: In theory, there are cases where it might not exist. In practice, it's always there. A: Decent firewalls allow you to filter access on the loopback interfaces too. So, it's possible to set up a firewall rule that drops ICMP ping packets going to localhost (127.0.0.1). Also, as everyone else has already mentioned, even the existence of the localhost or 127.0.0.1 address and the loopback interface isn't guaranteed. A: The answer is: 127.0.0.1, often referred to as the "loopback" address, is required. Although your computer might let you do silly things, like disable it, or configure that range on a physical interface, these are all invalid. "localhost" is just a hostname, which by convention should be 127.0.0.1. As a system administrator or hostmaster, you should avoid configurations that allow localhost to point to other addresses. You should not edit your hosts file to change the address of "localhost". You should configure your domains to have a localhost. and localhost.domain.com entry that points to 127.0.0.1. You should not let your proxy servers respond to "localhost" or any FQDN that starts with localhost. A: Ok.
The reason it resolves is a record in the %WINDOWS_DIR%\System32\drivers\etc\hosts file like this: 127.0.0.1 localhost
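A quick way to check what "localhost" resolves to on the current machine (this sketch assumes a conventionally configured hosts file; as the answers above point out, the name is convention, not guarantee):

```python
import socket

# Resolve "localhost" the same way most applications would.
addr = socket.gethostbyname("localhost")
print(addr)  # typically 127.0.0.1, but only by convention

# Anything in 127.0.0.0/8 is loopback (RFC 3330).
print(addr.startswith("127."))
```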
{ "language": "en", "url": "https://stackoverflow.com/questions/113293", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Team size and project iteration length Do you think that project iteration length is related to project team size? If so, how? What other key factors do you use to recognize the correct iteration length for different projects? A: Iteration length is primarily related to the team's ability to communicate and complete a working version of the software. More team members equals more communication channels (Brooks's Law), which will likely increase your iteration time. I think that 2-week iterations, whether you deliver to the client or not, are a good goal, as they allow for very good health checks. Ultimately, the iteration length will depend on the features you wish to implement in the next iteration, and in the early phases your iterations may jump around from 1 week to 1 month as you become comfortable with the team and the technology stack. A: One of the main drivers for short iterations is easing integration between modules/features/programmers. Obviously, the bigger your team, the more integration you will have. It's a tradeoff: short iterations mean you're integrating often, which is good - BUT if it's a big team you'll be spending a LOT of time on integration overhead, even without new code. Longer iterations obviously mean integrating more each time but less often, which is a lot more risky. If your team is very large, you can try branched integration, i.e. integrating small subteams often, and integrating between the teams less often... BUT then you'll have inconsistencies between branches, and you lose much of that benefit right there. Another key factor to consider is complexity - obviously complex backend systems are riskier to integrate; simple Web-UI pages are less risky. (I realize I didn't give you a clear-cut answer; there is none. It's always tradeoffs. I hope I gave you some food for thought.)
A: My experience is that the length of the iterations is somewhat dependent on team size. External dependencies, like cases where we had to integrate with in-house systems that were not using an iteration-based development cycle (read: waterfall), were another factor we observed. Our team were real noobs when it came to iterative development, so in the beginning the iterations were really long (12 weeks). But later on we saw that there was no need to worry, and the iterations shrank considerably (4-6 weeks). So another factor in how long the iterations are is how familiar you are with the concept of iterative development. A: I think that 2 week iterations, whether you deliver to the client or not, are a good goal, as it allows for very good health checks. 2-week iterations are most comfortable for me and the kinds of projects I usually do, but I disagree that not delivering is a good outcome - the focus needs to stay on the "working software over process" side of things. I would consider making iterations longer if the product owner / user isn't available, even if only for a showcase every couple weeks, as the same health checks that fast iterations allow on the technical side need to happen on the side of the engagement with the business. A: Iteration length should be decided on many factors... and team size is really only part of the considerations made for the "Overhead of Iterating". This article explains many of them. The important ones IMO: * *Overall Release Length *How Long Priorities Can Remain Unchanged *The Overhead of Iterating A: There is a relation in terms of how much work can get done, but there are a couple of other key factors here, like what type of project they are working on, e.g.
Windows Application, Console Application, or Web Application, as well as how developed the codebase is in terms of size, complexity, and style compared to the current team's style, and what expertise the team has, both within the methodology and in the work that they are doing, as inexperience may be costly in terms of getting everyone proficient with the process.
{ "language": "en", "url": "https://stackoverflow.com/questions/113339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Python-passing variable between classes I'm trying to create a character generation wizard for a game. In one class I calculate the attributes of the character. In a different class, I'm displaying to the user which specialties are available based on the attributes of the character. However, I can't remember how to pass variables between different classes. Here is an example of what I have:

class BasicInfoPage(wx.wizard.WizardPageSimple):
    def __init__(self, parent, title):
        wiz.WizardPageSimple.__init__(self, parent)
        self.next = self.prev = None
        self.sizer = makePageTitle(self, title)
        <---snip--->
        self.intelligence = self.genAttribs()

class MOS(wx.wizard.WizardPageSimple):
    def __init__(self, parent, title):
        wiz.WizardPageSimple.__init__(self, parent)
        self.next = self.prev = None
        self.sizer = makePageTitle(self, title)

    def eligibleMOS(self, event):
        if self.intelligence >= 12:
            self.MOS_list.append("Analyst")

The problem is that I can't figure out how to use the "intelligence" variable from the BasicInfoPage class in the MOS class. I've tried several different things from around the Internet but nothing seems to work. What am I missing? Edit I realized after I posted this that I didn't explain it that well. I'm trying to create a computer version of the Twilight 2000 RPG from the 1980s. I'm using wxPython to create a wizard; the parent class of my classes is the Wizard from wxPython. That wizard will walk a user through the creation of a character, so the Basic Information page (class BasicInfoPage) lets the user give the character's name and "roll" for the character's attributes. That's where the "self.intelligence" comes from. I'm trying to use the attributes created here for a page further on in the wizard, where the user selects the specialty of the character. The specialties that are available depend on the attributes the character has, e.g. if the intelligence is high enough, the character can be an Intel Analyst.
It's been several years since I've programmed, especially with OOP ideas. That's why I'm confused on how to create what's essentially a global variable with classes and methods. A: You may have "Class" and "Instance" confused. It's not clear from your example, so I'll presume that you're using a lot of class definitions and don't have appropriate object instances of those classes. Classes don't really have usable attribute values. A class is just a common set of definitions for a collection of objects. You should think of classes as definitions, not actual things. Instances of classes, "objects", are actual things that have actual attribute values and execute method functions. You don't pass variables among classes. You pass variables among instances. As a practical matter, only instance variables matter. [Yes, there are class variables, but they're a fairly specialized and often confusing thing, best avoided.] When you create an object (an instance of a class)

b = BasicInfoPage(...)

then b.intelligence is the value of intelligence for the b instance of BasicInfoPage. A really common thing is

class MOS(wx.wizard.PageSimple):
    def __init__(self, parent, title, basicInfoPage):
        <snip>
        self.basicInfo = basicInfoPage

Now, within MOS methods, you can say self.basicInfo.intelligence because MOS has an object that's a BasicInfoPage available to it. When you build MOS, you provide it with the instance of BasicInfoPage that it's supposed to use.

someBasicInfoPage = BasicInfoPage(...)
m = MOS(..., someBasicInfoPage)

Now the object m can examine someBasicInfoPage.intelligence A: Each page of a Wizard -- by itself -- shouldn't actually be the container for the information you're gathering. Read up on the Model-View-Control design pattern. Your pages have the View and Control parts of the design. They aren't the data model, however. You'll be happier if you have a separate object that is "built" by the pages.
Each page will set some attributes of that underlying model object. Then the pages are independent of each other, since the pages all get and set values of this underlying model object. Since you're building a character, you'd have some class like this:

class Character(object):
    def __init__(self):
        self.intelligence = 10
        <default values for all attributes.>

Then your various Wizard instances just need to be given the underlying Character object as a place to put and get values. A: My problem was indeed the confusion of classes vs. instances. I was trying to do everything via classes without ever creating an actual instance. Plus, I was forcing the "BasicInfoPage" class to do too much work. Ultimately, I created a new class (BaseAttribs) to hold all the variables I need. I then created an instance of that class when I run the wizard and pass that instance as an argument to the classes that need it, as shown below:

#---Run the wizard
if __name__ == "__main__":
    app = wx.PySimpleApp()
    wizard = wiz.Wizard(None, -1, "TW2K Character Creation")
    attribs = BaseAttribs()

    #---Create each page
    page1 = IntroPage(wizard, "Introduction")
    page2 = BasicInfoPage(wizard, "Basic Info", attribs)
    page3 = Ethnicity(wizard, "Ethnicity")
    page4 = MOS(wizard, "Military Occupational Specialty", attribs)

I then used the information S.Lott provided and created individual instances (if that's what it's called) within each class; each class is accessing the same variables though. Everything works, as far as I can tell. Thanks.
It seems to me that the classes are designed rather oddly-- an information page, for one thing, should not generate anything, and if it does, it should give it back to whatever needs to know-- some sort of central place, which should have been the one generating it in the first place. Ordinarily, you'd set the variables there, and get them from there. Or at least, I would. If you want the basic answer of "how do I pass variables between different classes", then here you go, but I doubt it's exactly what you want, as you look to be using some sort of controlling framework:

class Foo(object):
    def __init__(self, var):
        self.var = var

class Bar(object):
    def do_something(self, var):
        print var*3

if __name__ == '__main__':
    f = Foo(3)
    b = Bar()
    # look, I'm using the variable from one instance in another!
    b.do_something(f.var)

A: If I understood you correctly, then the answer is: You can't. intelligence should be an attribute of WizardPageSimple if you'd want both classes to inherit it. Depending on your situation, you might try to extract intelligence and related attributes into another base class. Then you could inherit from both:

class MOS(wiz.WizardPageSimple, wiz.IntelligenceAttributes): # Or something like that.

In that case you must use the co-operative super. In fact, you should be using it already. Instead of calling wiz.WizardPageSimple.__init__(self, parent), call super(MOS, self).__init__(parent).
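Pulling the accepted advice together, the shared-model pattern can be distilled into a runnable sketch. Plain classes stand in for the wxPython pages here, so it runs without wx installed; the names mirror the question but are otherwise illustrative:

```python
# One shared model instance, handed to every page, instead of pages
# reaching into each other's attributes.

class Character(object):
    """Shared model object; pages read and write its attributes."""
    def __init__(self):
        self.intelligence = 10  # default value

class BasicInfoPage(object):
    def __init__(self, character):
        self.character = character  # same instance every page sees
    def roll_attributes(self):
        self.character.intelligence = 12  # would be random in the game

class MOSPage(object):
    def __init__(self, character):
        self.character = character
        self.mos_list = []
    def eligible_mos(self):
        if self.character.intelligence >= 12:
            self.mos_list.append("Analyst")
        return self.mos_list

attribs = Character()
basic = BasicInfoPage(attribs)
mos = MOSPage(attribs)
basic.roll_attributes()
print(mos.eligible_mos())  # -> ['Analyst']
```

Because both pages hold the same Character instance, whatever BasicInfoPage writes is immediately visible to MOSPage, with no page-to-page coupling.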
{ "language": "en", "url": "https://stackoverflow.com/questions/113341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Flex AdvancedDataGrid: How do I style the summary rows? I have an AdvancedDataGrid with a GroupingCollection and a SummaryRow. How do I display the summary row data in bold? Below is my code:

<mx:AdvancedDataGrid width="100%" height="100%" id="adg" defaultLeafIcon="{null}">
    <mx:dataProvider>
        <mx:GroupingCollection id="gc" source="{dataProvider}">
            <mx:Grouping>
                <mx:GroupingField name="bankType">
                    <mx:summaries>
                        <mx:SummaryRow summaryPlacement="group" id="summaryRow">
                            <mx:fields>
                                <mx:SummaryField dataField="t0" label="t0" operation="SUM" />
                            </mx:fields>
                        </mx:SummaryRow>
                    </mx:summaries>
                </mx:GroupingField>
            </mx:Grouping>
        </mx:GroupingCollection>
    </mx:dataProvider>
    <mx:columns>
        <mx:AdvancedDataGridColumn dataField="GroupLabel" headerText=""/>
        <mx:AdvancedDataGridColumn dataField="name" headerText="Bank" />
        <mx:AdvancedDataGridColumn dataField="t0" headerText="Amount" formatter="{formatter}"/>
    </mx:columns>
</mx:AdvancedDataGrid>

A: In the past when I have needed to do this, I had to put a condition in my style function to try to determine whether it is a summary row or not.

public function dataGrid_styleFunction(data:Object, column:AdvancedDataGridColumn):Object
{
    var output:Object;
    if ( data.children != null )
    {
        output = {color:0x081EA6, fontWeight:"bold", fontSize:14};
    }
    return output;
}

If it has children, it should be a summary row. I am not sure this is the quote/unquote right way of doing this, but it does work, at least in my uses. HTH A: If I've understood the documentation correctly, you should be able to do this by specifying the item renderer in the rendererProviders property, and linking the summary to the rendererProvider using a dummy dataField name. A: http://livedocs.adobe.com/flex/3/html/help.html?content=advdatagrid_04.html should help.
A:

private function styleCallback(data:Object, col:AdvancedDataGridColumn):Object
{
    if (data["city"] == citySel)
        return {color:0xFF0000, backgroundColor:0xFFF552, fontWeight:'bold', fontStyle:'italic'};
    // Return null if the Artist name does not match.
    return null;
}

A: I wanted to format ONLY my grouping field, so I set the styleFunction on my ADG, then in my styleCallback() method I checked for a piece of data that exists in my sub-rows but doesn't exist in my group heading. For example, I have Major headings as my groups and then rows of data with Minor headings, descriptions, etc. So in my function I check for:

if (data["MinorHeading"] == null)
    return {color:0xFF0000, backgroundColor:0xFFF552, fontWeight:'bold'};

That way only my group headings get formatted in red and bold. FYI, the backgroundColor style doesn't apply (I assume I would need a solid color Graphic Renderer to do that)
{ "language": "en", "url": "https://stackoverflow.com/questions/113349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: OpenGL: projecting mouse click onto geometry I have this view set:

glMatrixMode(GL_MODELVIEW); //Switch to the drawing perspective
glLoadIdentity(); //Reset the drawing perspective

and I get a screen position (sx, sy) from a mouse click. Given a value of z, how can I calculate x and y in 3d-space from sx and sy? A: You should use gluUnProject: First, compute the "unprojection" to the near plane:

GLdouble modelMatrix[16];
GLdouble projMatrix[16];
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
glGetDoublev(GL_MODELVIEW_MATRIX, modelMatrix);
glGetDoublev(GL_PROJECTION_MATRIX, projMatrix);

GLdouble x, y, z;
gluUnProject(sx, viewport[1] + viewport[3] - sy, 0, modelMatrix, projMatrix, viewport, &x, &y, &z);

and then to the far plane:

// replace the above gluUnProject call with
gluUnProject(sx, viewport[1] + viewport[3] - sy, 1, modelMatrix, projMatrix, viewport, &x, &y, &z);

Now you've got a line in world coordinates that traces out all possible points you could have been clicking on. So now you just need to interpolate: suppose you're given the z-coordinate:

GLfloat nearv[3], farv[3]; // already computed as above
if(nearv[2] == farv[2]) // this means we have no solutions
    return;

GLfloat t = (nearv[2] - z) / (nearv[2] - farv[2]);

// so here are the desired (x, y) coordinates
GLfloat x = nearv[0] + (farv[0] - nearv[0]) * t,
        y = nearv[1] + (farv[1] - nearv[1]) * t;
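The interpolation step at the end is plain arithmetic, so it can be checked outside of OpenGL. Here it is restated in Python (same formula as the GLfloat code; the near/far points are made up for illustration):

```python
def point_at_z(nearv, farv, z):
    """Return (x, y) where the near->far ray crosses the plane at z."""
    nx, ny, nz = nearv
    fx, fy, fz = farv
    if nz == fz:
        return None  # ray is parallel to the z-plane: no solution
    t = (nz - z) / (nz - fz)
    return (nx + (fx - nx) * t, ny + (fy - ny) * t)

# A ray from (0, 0, -1) to (10, 10, -101), sliced halfway at z = -51:
print(point_at_z((0.0, 0.0, -1.0), (10.0, 10.0, -101.0), -51.0))  # -> (5.0, 5.0)
```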
{ "language": "en", "url": "https://stackoverflow.com/questions/113352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How do you get the Eclipse Package Explorer to show files whose names begin with a . (period)? When a folder in the Eclipse Package Explorer (one which is linked to a directory somewhere in the filesystem) contains files whose names begin with a . (period), those files do not appear. Can Eclipse be configured to show these files, and if so, how? A: Click the down-arrow in the Package Explorer's view menu (next to the editor-link button), then change the filters: unmark the box that says '.* resources'.
{ "language": "en", "url": "https://stackoverflow.com/questions/113365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Character Limit in HTML How do you impose a character limit on a text input in HTML? A: Use the "maxlength" attribute, as others have said. If you need to put a max character length on a textarea, you need to turn to JavaScript. Take a look here: How to impose maxlength on textArea in HTML using JavaScript A: There's a maxlength attribute:

<input type="text" name="textboxname" maxlength="100" />

A: There are 2 main solutions: The pure HTML one:

<input type="text" id="Textbox" name="Textbox" maxlength="10" />

The JavaScript one (attach it to an onKey event):

function limitText(limitField, limitNum) {
    if (limitField.value.length > limitNum) {
        limitField.value = limitField.value.substring(0, limitNum);
    }
}

But anyway, there is no good solution. You cannot adapt to every client's bad HTML implementation; it's an impossible fight to win. That's why it's far better to check it on the server side, with a PHP / Python / whatever script. A: In addition to the above, I would like to point out that client-side validation (HTML code, JavaScript, etc.) is never enough. Also check the length server-side, or just don't check at all (if it's not so important that people can be allowed to get around it, then it's not important enough to really warrant any steps to prevent that, either). Also, fellows, he (or she) said HTML, not XHTML. ;) A: For the <input> element there's the maxlength attribute:

<input type="text" id="Textbox" name="Textbox" maxlength="10" />

(by the way, the type is "text", not "textbox" as others are writing); however, you have to use JavaScript with <textarea>s. Either way, the length should be checked on the server anyway. A: You can set maxlength with jQuery, which is very fast:

jQuery(document).ready(function($){
    //fire on DOM ready
    setformfieldsize(jQuery('#comment'), 50, 'charsremain')
})
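Since several answers stress that maxlength is client-side only, here is a minimal, framework-agnostic sketch of the server-side re-check they recommend (the field names and limits are illustrative):

```python
# Hypothetical server-side length validation; in a real app the form
# dict would come from the submitted request.

MAX_LENGTHS = {"username": 10, "comment": 500}

def validate_lengths(form):
    """Return a list of fields that exceed their allowed length."""
    errors = []
    for field, limit in MAX_LENGTHS.items():
        value = form.get(field, "")
        if len(value) > limit:
            errors.append(field)
    return errors

print(validate_lengths({"username": "a" * 11, "comment": "ok"}))  # -> ['username']
```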
{ "language": "en", "url": "https://stackoverflow.com/questions/113376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "96" }
Q: How do I determine the value of a generic parameter on my class instance I have a marker interface defined as

public interface IExtender<T> { }

I have a class that implements IExtender:

public class UserExtender : IExtender<User>

At runtime I receive the UserExtender type as a parameter to my evaluating method:

public Type Evaluate(Type type) // type == typeof(UserExtender)

How do I make my Evaluate method return typeof(User) based on the runtime evaluation? I am sure reflection is involved but I can't seem to crack it. (I was unsure how to word this question. I hope it is clear enough.) A: There is an example of doing what you describe in the MSDN documentation for the GetGenericTypeDefinition method. It uses the GetGenericArguments method.

Type[] typeArguments = t.GetGenericArguments();
Console.WriteLine("\tList type arguments ({0}):", typeArguments.Length);
foreach (Type tParam in typeArguments)
{
    Console.WriteLine("\t\t{0}", tParam);
}

In your example I think you would want to replace t with this. If that doesn't work directly, you may need to do something with the GetInterfaces method to enumerate the current interfaces on your type and then GetGenericArguments() from the interface type. A: I went this way based on some of the tidbits provided. It could be made more robust to handle multiple generic arguments on the interface.... but I didn't need it to ;)

private static Type SafeGetSingleGenericParameter(Type type, Type interfaceType)
{
    if (!interfaceType.IsGenericType || interfaceType.GetGenericArguments().Count() != 1)
        return type;

    foreach (Type baseInterface in type.GetInterfaces())
    {
        if (baseInterface.IsGenericType &&
            baseInterface.GetGenericTypeDefinition() == interfaceType.GetGenericTypeDefinition())
        {
            return baseInterface.GetGenericArguments().Single();
        }
    }

    return type;
}

A: I read your question completely differently than the other answers.
If the evaluate signature can be changed to: public Type Evaluate<T>(IExtender<T> it) { return typeof(T); } This doesn't require the calling code to change, but does require the parameter to be of type IExtender<T>, however you can easily get at the type T : // ** compiled and tested UserExtender ue = new UserExtender(); Type t = Evaluate(ue); Certainly it's not as generic as something just taking a Type parameter, but this is a different take on the problem. Also note that there are Security Considerations for Reflection [msdn]
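For comparison, the walk-the-interfaces approach in the answers above is not C#-specific; here is a hedged sketch of the same idea in Python, using `typing.get_origin`/`get_args` (the `Extender`/`User` names simply mirror the question and are not from any real library):

```python
from typing import Generic, TypeVar, get_args, get_origin

T = TypeVar("T")

class Extender(Generic[T]):
    """Marker base class, analogous to IExtender<T>."""

class User:
    pass

class UserExtender(Extender[User]):
    pass

def evaluate(cls: type) -> type:
    # Walk the declared generic bases, much like calling GetInterfaces()
    # and then GetGenericArguments() on each candidate in C#.
    for base in getattr(cls, "__orig_bases__", ()):
        if get_origin(base) is Extender:
            return get_args(base)[0]
    return cls

print(evaluate(UserExtender).__name__)  # → User
```

As in the C# version, a type that does not implement the marker falls through and is returned unchanged.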
{ "language": "en", "url": "https://stackoverflow.com/questions/113384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Declare an object even before that class is created Is there any way to declare an object of a class before the class is created in C++? I ask because I am trying to use two classes, the first needs to have an instance of the second class within it, but the second class also contains an instance of the first class. I realize that you may think I might get into an infinite loop, but I actually need to create an instance of the second class before the first class. A: You can't declare an instance of an undefined class but you can declare a pointer to one: class A; // Declare that we have a class A without defining it yet. class B { public: A *itemA; }; class A { public: B *itemB; }; A: You can't do something like this: class A { B b; }; class B { A a; }; The most obvious problem is that the compiler doesn't know how large it needs to make class A: the size of A depends on the size of B, which in turn depends on the size of A! You can, however, do this: class B; // this is a "forward declaration" class A { B *b; }; class B { A a; }; Declaring class B as a forward declaration allows you to use pointers (and references) to that class without yet having the whole class definition. A: There's an elegant solution using templates. template< int T > class BaseTemplate {}; typedef BaseTemplate< 0 > A; typedef BaseTemplate< 1 > B; // A template<> class BaseTemplate< 0 > { public: BaseTemplate() {} // A constructor B getB(); }; // B template<> class BaseTemplate< 1 > { public: BaseTemplate() {} // B constructor A getA(); }; inline B A::getB() { return B(); } inline A B::getA() { return A(); } This code will work! So, why does it work? The reason has to do with how templates are compiled. Templates delay the creation of function signatures until you actually use the template somewhere. This means that neither getA() nor getB() will have their signatures analyzed until after both classes A and B have already been fully declared. That's the magic of this method.
A: Is this close to what you want: the first class contains the second class, but the second class (the one to be created first) just has a reference to the first class? A: This is called a cross reference. See an example here.
{ "language": "en", "url": "https://stackoverflow.com/questions/113385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What's the relationship between margin, padding and width in different browsers? CSS width value = display width of inside? Or CSS width value = display width of inside + CSS margin-left + CSS margin-right? A: You have to make yourself familiar with the CSS Box Model. It explains where padding, margin and border as well as width work. However do note that different browsers implement this differently: most notably, Internet Explorer has a box model bug (this is infamously present in IE6 -- I am not aware if this has been fixed in IE7 or IE8) that caused the infamous "quirks mode" CSS hack. Briefly stated, Internet Explorer sets its box model to include padding in computing the width, as opposed to the official standard wherein width should only constitute the content. A: As mentioned by others, the rule of thumb is the CSS box model. This works generally as advertised by browsers such as Opera, Firefox & Safari. Internet Explorer is your exception, where the "width" includes the padding and borders. There are some great tools out there that visually depict how the browser has rendered the content. For Firefox check out Firebug and for Internet Explorer check out the Developer Toolbar. A: It depends not only on the browser and version you choose, but also on the doctype of your html document. Internet Explorer in "quirks mode" is for example completely different from Internet Explorer with doctype XHTML 1.0 Transitional. A: Here you can see an animated diagram which "explodes" the box. A: I think IE before version 6 incorrectly included borders and padding in width and height. See: Microsoft Admits IE 5 is Messed Up
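To make the difference between the two box models concrete, here is a small illustrative calculation (the numbers are arbitrary, and only horizontal padding and borders are considered): the W3C model grows the rendered box around the declared width, while IE's quirks model shrinks the content area inside it.

```python
# W3C box model: the declared CSS width covers the content only, so the
# rendered box adds padding and border on each side (margin stays outside).
def rendered_width_w3c(css_width, padding, border):
    return css_width + 2 * padding + 2 * border

# IE quirks-mode box model: the declared width already includes padding
# and border, so the content area is what is left over.
def content_width_quirks(css_width, padding, border):
    return css_width - 2 * padding - 2 * border

# width: 200px; padding: 10px; border: 2px
print(rendered_width_w3c(200, 10, 2))    # → 224
print(content_width_quirks(200, 10, 2))  # → 176
```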
{ "language": "en", "url": "https://stackoverflow.com/questions/113387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Dynamically added controls in Asp.Net I'm trying to wrap my head around asp.net. I have a background as a long time php developer, but I'm now facing the task of learning asp.net and I'm having some trouble with it. It might very well be because I'm trying to force the framework into something it is not intended for - so I'd like to learn how to do it "the right way". :-) My problem is how to add controls to a page programmatically at runtime. As far as I can figure out you need to create the controls at page_init as they otherwise disappear at the next PostBack. But many times I'm facing the problem that I don't know which controls to add in page_init as it is dependent on values from a previous PostBack. A simple scenario could be a form with a dropdown control added in the designer. The dropdown is set to AutoPostBack. When the PostBack occurs I need to render one or more controls depending on the selected value from the dropdown control and preferably have those controls act as if they had been added by the designer (as in "when posted back, behave "properly"). Am I going down the wrong path here? A: I agree with the other points made here "If you can get out of creating controls dynamically, then do so..." (by @Jesper Blad Jenson aka) but here is a trick I worked out with dynamically created controls in the past. The problem becomes a chicken-and-egg one. You need your ViewState to create the control tree and you need your control tree created to get at your ViewState. Well, that's almost correct. There is a way to get at your ViewState values just before the rest of the tree is populated. That is by overriding LoadViewState(...) and SaveViewState(...).
In SaveViewState store the control you wish to create: protected override object SaveViewState() { object[] myState = new object[2]; myState[0] = base.SaveViewState(); myState[1] = controlPickerDropDown.SelectedValue; return myState; } When the framework calls your "LoadViewState" override you'll get back the exact object you returned from "SaveViewState": protected override void LoadViewState(object savedState) { object[] myState = (object[])savedState; // Here is the trick, use the value you saved here to create your control tree. CreateControlBasedOnDropDownValue(myState[1]); // Call the base method to ensure everything works correctly. base.LoadViewState(myState[0]); } I've used this successfully to create ASP.Net pages where a DataSet was serialised to the ViewState to store changes to an entire grid of data allowing the user to make multiple edits with PostBacks and finally commit all their changes in a single "Save" operation. A: You must add your control inside the OnInit event and viewstate will be preserved. Don't use if(ispostback), because controls must be added every time, even in a postback! (De)Serialization of viewstate happens after OnInit and before OnLoad, so your viewstate persistence provider will see dynamically added controls if they are added in OnInit. But in the scenario you're describing, a MultiView or simple hide/show (Visible property) will probably be a better solution. That's because in the OnInit event, when you must read the dropdown and add new controls, the viewstate isn't read (deserialized) yet and you don't know what the user chose! (you can read Request.Form, but that feels kinda wrong) A: After having wrestled with this problem for a while I have come up with these ground rules, which seem to work, but YMMV.
* *Use declarative controls whenever possible *Use databinding where possible *Understand how ViewState works *The Visibility property can go a long way *If you must add controls in an event handler, use Aydsman's tip and recreate the controls in an overridden LoadViewState. TRULY Understanding ViewState is a must-read. Understanding Dynamic Controls By Example shows some techniques on how to use databinding instead of dynamic controls. TRULY Understanding Dynamic Controls also clarifies techniques which can be used to avoid dynamic controls. Hope this helps others with the same problems. A: If you truly need to use dynamic controls, the following should work: * *In OnInit, recreate the exact same control hierarchy that was on the page when the previous request was fulfilled. (If this isn't the initial request, of course) *After OnInit, the framework will load the viewstate from the previous request and all your controls should be in a stable state now. *In OnLoad, remove the controls that are not required and add the necessary ones. You will also have to somehow save the current control tree at this point, to be used in the first step during the following request. You could use a session variable that dictates how the dynamic control tree was created. I even stored the whole Controls collection in the session once (put aside your pitchforks, it was just for a demo). Re-adding the "stale" controls that you will not need and will be removed at OnLoad anyway seems a bit quirky, but Asp.Net was not really designed with dynamic control creation in mind. If the exact same control hierarchy is not preserved during viewstate loading, all kinds of hard-to-find bugs begin lurking in the page, because states of older controls are loaded into newly added ones. Read up on the Asp.Net page life cycle and especially on how the viewstate works and it will become clear.
Edit: This is a very good article about how viewstate behaves and what you should consider while dealing with dynamic controls: <Link> A: Well. If you can get out of creating controls dynamically, then do so - otherwise, what I would do is use Page_Load instead of Page_Init, but instead of placing stuff inside the If Not IsPostBack block, set it directly in the method. A: I think the answer here is in the MultiView control, so that, for example, the dropdown switches between different views in the multi-view. You can probably even data-bind the current view property of the multiview to the value of the dropdown! A: Ah, that's the problem with the leaky abstraction of ASP.NET web forms. Maybe you'll be interested in looking at ASP.NET MVC, which was used for the creation of this stackoverflow.com web site? That should be an easier fit for you, coming from a PHP (thus, pedal-to-the-metal when it comes to HTML and Javascript) background. A: The only correct answer was given by Aydsman. LoadViewState is the only place to add dynamic controls where their viewstate values will be restored when recreated and you can access the viewstate in order to determine which controls to add. A: I ran across this in the book "Pro ASP.NET 3.5 in C# 2008" under the section Dynamic Control Creation: If you need to re-create a control multiple times, you should perform the control creation in the Page.Load event handler. This has the additional benefit of allowing you to use view state with your dynamic control. Even though view state is normally restored before the Page.Load event, if you create a control in the handler for the Page.Load event, ASP.NET will apply any view state information that it has after the Page.Load event handler ends. This process is automatic. I have not tested this, but you might look into it.
{ "language": "en", "url": "https://stackoverflow.com/questions/113392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How can I test for an expected exception with a specific exception message from a resource file in Visual Studio Test? Visual Studio Test can check for expected exceptions using the ExpectedException attribute. You can pass in an exception like this: [TestMethod] [ExpectedException(typeof(CriticalException))] public void GetOrganisation_MultipleOrganisations_ThrowsException() You can also check for the message contained within the ExpectedException like this: [TestMethod] [ExpectedException(typeof(CriticalException), "An error occured")] public void GetOrganisation_MultipleOrganisations_ThrowsException() But when testing I18N applications I would use a resource file to get that error message (and may even decide to test the different localizations of the error message if I want to), but Visual Studio will not let me do this: [TestMethod] [ExpectedException(typeof(CriticalException), MyRes.MultipleOrganisationsNotAllowed)] public void GetOrganisation_MultipleOrganisations_ThrowsException() The compiler will give the following error: An attribute argument must be a constant expression, typeof expression or array creation expression of an attribute Does anybody know how to test for an exception that has a message from a resource file? One option I have considered is using custom exception classes, but based on often heard advice such as: "Do create and throw custom exceptions if you have an error condition that can be programmatically handled in a different way than any other existing exception. Otherwise, throw one of the existing exceptions." Source I'm not expecting to handle the exceptions differently in normal flow (it's a critical exception, so I'm going into panic mode anyway) and I don't think creating an exception for each test case is the right thing to do. Any opinions?
A: Just an opinion, but I would say the error text: * *is part of the test, in which case getting it from the resource would be 'wrong' (otherwise you could end up with a consistently mangled resource), so just update the test when you change the resource (or the test fails) *is not part of the test, and you should only care that it throws the exception. Note that the first option should let you test multiple languages, given the ability to run with a locale. As for multiple exceptions, I'm from C++ land, where creating loads and loads of exceptions (to the point of one per 'throw' statement!) in big hierarchies is acceptable (if not common), but .Net's metadata system probably doesn't like that, hence that advice. A: I would recommend using a helper method instead of an attribute. Something like this: public static class ExceptionAssert { public static T Throws<T>(Action action) where T : Exception { try { action(); } catch (T ex) { return ex; } Assert.Fail("Exception of type {0} should be thrown.", typeof(T)); // The compiler doesn't know that Assert.Fail // will always throw an exception return null; } } Then you can write your test something like this: [TestMethod] public void GetOrganisation_MultipleOrganisations_ThrowsException() { OrganizationList organizations = new Organizations(); organizations.Add(new Organization()); organizations.Add(new Organization()); var ex = ExceptionAssert.Throws<CriticalException>( () => organizations.GetOrganization()); Assert.AreEqual(MyRes.MultipleOrganisationsNotAllowed, ex.Message); } This also has the benefit that it verifies that the exception is thrown on the line you were expecting it to be thrown instead of anywhere in your test method. A: I think you can just do an explicit try-catch in your test code instead of relying on the ExpectedException attribute to do it for you.
Then you can come up with some helper method that will read the resource file and compare the error message to the one that comes with the exception that was caught. (Of course, if there wasn't an exception then the test case should be considered a failure.) A: If you switch over to using the very nice xUnit.Net testing library, you can replace [ExpectedException] with something like this: [Fact] public void TestException() { Exception ex = Record.Exception(() => myClass.DoSomethingExceptional()); // Assert whatever you like about the exception here. } A: The ExpectedException Message argument does not match against the message of the exception. Rather this is the message that is printed in the test results if the expected exception did not in fact occur. A: I wonder if NUnit is moving down the path away from simplicity... but here you go. New enhancements (2.4.3 and up?) to the ExpectedException attribute allow you more control over the checks to be performed on the expected Exception via a Handler method. More details on the official NUnit doc page, towards the end of the page. [ExpectedException( Handler="HandlerMethod" )] public void TestMethod() { ... } public void HandlerMethod( System.Exception ex ) { ... } Note: Something doesn't feel right here.. Why are your exception messages internationalized? Are you using exceptions for things that need to be handled or notified to the user? Unless you have a bunch of culturally diverse developers fixing bugs.. you shouldn't be needing this. Exceptions in English or a common accepted language would suffice. But in case you have to have this.. it's possible :) A: I came across this question while trying to resolve a similar issue on my own. (I'll detail the solution that I settled on below.) I have to agree with Gishu's comments about internationalizing the exception messages being a code smell.
I had done this initially in my own project so that I could have consistency between the error messages thrown by my application and in my unit tests. i.e., to only have to define my exception messages in one place, and at the time, the Resource file seemed like a sensible place to do this since I was already using it for various labels and strings (and since it made sense to add a reference to it in my test code to verify that those same labels showed in the appropriate places). At one point I had considered (and tested) using try/catch blocks to avoid the requirement of a constant by the ExpectedException attribute, but this seemed like it would lead to quite a lot of extra code if applied on a large scale. In the end, the solution that I settled on was to create a static class in my Resource library and store my exception messages in that. This way there's no need to internationalize them (which I'll agree doesn't make sense) and they're made accessible anytime that a resource string would be accessible since they're in the same namespace. (This fits with my desire not to make verifying the exception text a complex process.) My test code then simply boils down to (pardon the mangling...): [Test, ExpectedException(typeof(System.ArgumentException), ExpectedMessage=ProductExceptionMessages.DuplicateProductName)] public void TestCreateDuplicateProduct() { _repository.CreateProduct("TestCreateDuplicateProduct"); _repository.CreateProduct("TestCreateDuplicateProduct"); }
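The helper-method pattern recommended earlier in this thread is not tied to C# or MSTest; purely as an illustrative sketch (in Python, with invented names standing in for the real system under test), the same run-and-catch helper looks like this:

```python
def assert_throws(exc_type, action):
    """Run action and return the raised exception so the caller can
    assert on its message; fail if no exc_type exception is raised."""
    try:
        action()
    except exc_type as ex:
        return ex
    raise AssertionError(
        f"Exception of type {exc_type.__name__} should be thrown.")

# Hypothetical system under test, standing in for GetOrganization().
def get_organization():
    raise ValueError("Multiple organisations are not allowed")

ex = assert_throws(ValueError, get_organization)
assert str(ex) == "Multiple organisations are not allowed"
```

As with the C# helper, a mismatched exception type simply propagates, and the message check is an ordinary assertion rather than an attribute argument, so the expected text can come from a resource lookup.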
{ "language": "en", "url": "https://stackoverflow.com/questions/113395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39" }
Q: Good Secure Backups Developers at Home What is a good, secure method of doing backups for programmers who do research & development at home and cannot afford to lose any work? Conditions: * *The backups must ALWAYS be within reasonably easy reach. *Internet connection cannot be guaranteed to be always available. *The solution must be either FREE or priced within reason, and subject to 2 above. Status Report This is for now only considering free options. The following open-source projects are suggested in the answers (here & elsewhere): * *BackupPC is a high-performance, enterprise-grade system for backing up Linux, WinXX and MacOSX PCs and laptops to a server's disk. *Storebackup is a backup utility that stores files on other disks. *mybackware: These scripts were developed to create SQL dump files for basic disaster recovery of small MySQL installations. *Bacula is [...] to manage backup, recovery, and verification of computer data across a network of computers of different kinds. In technical terms, it is a network based backup program. *AutoDL 2 and Sec-Bk: AutoDL 2 is a scalable transport-independent automated file transfer system. It is suitable for uploading files from a staging server to every server on a production server farm [...] Sec-Bk is a set of simple utilities to securely back up files to a remote location, even a public storage location. *rsnapshot is a filesystem snapshot utility for making backups of local and remote systems. *rbme: Using rsync for backups [...] you get perpetual incremental backups that appear as full backups (for each day) and thus allow easy restore or further copying to tape etc. *Duplicity backs up directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. [...] uses librsync, [for] incremental archives *simplebup, to do real-time backup of files under active development, as they are modified. This tool can also be used for monitoring of other directories as well.
It is intended as on-the-fly automated backup, and not as a version control. It is very easy to use. Other Possibilities: Using a Distributed Version Control System (DVCS) such as Git(/Easy Git), Bazaar, Mercurial answers the need to have the backup available locally. Use free online storage space as a remote backup, e.g.: compress your work/backup directory and mail it to your gmail account. Strategies See crazyscot's answer A: Scott Hanselman recommends Windows Home Server in his aptly titled post The Case of the Failing Disk Drive or Windows Home Server Saved My Marriage. A: First of all: keeping backups off-site is as important for individuals as it is for businesses. If your house burns down, you don't want to lose everything. This is especially true because it is so easy to accomplish. Personally, I have an external USB harddisk I keep at my father's house. Normally, it is hooked up to his internet connection and I back up over the net (using rsync), but when I need to back up really big things, I collect it and copy things over USB. Ideally, I should get another disk, to spread the risk. Other options are free online storage facilities (use encryption!). For security, just use TrueCrypt. It has a good name in the IT world, and seems to work very well. A: Depends on which platform you are running on (Windows/Linux/Mac/...?) As a platform independent way, I use a personal subversion server. All the valuables are there, so if I lose one of the machines, a simple 'svn checkout' will take things back. This takes some initial work, though, and requires discipline. It might not be for you? As a second backup for the non-svn stuff, I use Time Machine, which is built in to OS X. Simply great. :) A: I highly recommend www.mozy.com. Their software is easy and works great, and since it's stored on their servers you implicitly get offsite backups. No worrying about running a backup server and making sure it's working.
Also, the company is backed by EMC (a leading data storage product company), which gives me enough confidence to trust them. A: I'm a big fan of Acronis TrueImage. Make sure you rotate through a few backup HDDs so you have a few generations to go back to, or in case one of the backups goes bang. If it's a major milestone I snail-mail a set of DVDs to Mum and she files em for me. She lives in a different state so it should cover most disasters of less-than-biblical proportions. EDIT: Acronis has encryption via a password. I also find the bandwidth of snailmail to be somewhat infinite - 10GB overnight = 115 kb/s, give or take. Never been throttled by Australia Post. A: My vote goes for cloud storage of some kind. The problem with nearly all 'home' backups is they stay in the home, that means any catastrophic damage to the system being backed up will probably damage the backups as well (fire, flood etc). My requirements would be 1) automated - manual backups get forgotten, usually just when most needed 2) off-site - see above 3) multiple versions - that is backup to more than one thing, in case that one thing fails. As a developer, usually data sizes for backup are relatively small so a couple of free cloud backup accounts might do. They also often fulfil part 1 as they can usually be automated. I've heard good things about www.getdropbox.com/. The other advantage of more than 1 account is you could have one on 'daily sync' and another on 'weekly sync' to give you some history. This is nowhere near as good as true incremental backups. Personally I prefer a scripted backup (to local hard-drives, which I rotate to work as 'offsites'). This is in large part due to my hobby (photography) and thus my relatively lame internet upstream bandwidth not coping with the data volume. Take home message - don't rely on one solution and don't assume that your data is not important enough to think about the issues as deeply as the 'Enterprise' does. A: Buy a fire-safe.
This is not just a good idea for storing backups, but a good idea period. Exactly what media you put in it is the subject of other answers here. But, from the perspective of recovering from a fire, having a washable medium is good. As long as the temperature doesn't get too high CDs and DVDs seem reasonably resilient, although I'd be concerned about smoke damage. Ditto for hard-drives. A flash drive does have the benefit that there are no moving parts to be damaged and you don't need to be concerned about the optical properties. A: mozy.com is king. I started using it just to backup code and then ponied up the 5 bux a month to backup my personal pictures and other stuff that I'd rather not lose if the house burns down. The initial backup can take a little while but after that you can pretty much forget about it until you need to restore something. A: Get an external hard drive with a network port so you can keep your backups in another room which provides a little security against fire in addition to being a simple solution you can do yourself at home. The next step is to get storage space in some remote location (there are very cheap monthly prices for servers for example) or to have several external hard drives and periodically switch between the one at home and a remote location. If you use encryption, this can be anywhere such as a friend's or parents' place or work. A: Bacula is a good software, it's open source, and shall give good performance, kind of commercial software, a bit difficult the first time to configure, but not so hard. It has good documentation A: I second the vote for JungleDisk. I use it to push my documents and project folders to S3. My average monthly bill from amazon is about 20c. All my projects are in Subversion on an external host. As well as this, I am on a Mac, so I use SuperDuper to take a nightly image of my drive. I am sure there are good options in the Windows/Linux world. 
I have two external drives that I rotate on a weekly basis, and I store one of the drives off-site during its week off. This means that I am only ever 24 hours away from an image in case of failure, and I am only 7 days from an image in case of catastrophic failure (fire, theft). The ability to plug the drive in to a machine and be running instantly from the image has saved me immensely. My boot partition was corrupted during a power failure (not a hardware failure, luckily). I plugged the backup in, restored and was working again in the time it took to transfer the files off the external drive. A: These are interesting times for "the personal backup question". There are several schools of thought now: * *Frequent Automated Local Backup + Periodic Local Manual Backup Automated: Scheduled Nightly backup to external drive. Manual: Copy to second external drive once per week / month / year / oops-forgot and drop it off at "Mom's house".   Lots of software in the field, but here are a few: There's RSync and TimeMachine on Mac, and DeltaCopy www.aboutmyip.com/AboutMyXApp/DeltaCopy.jsp for Windows. *Frequent Remote Backup There are a pile of services that enable you to back up across your internet connection to a remote data centre. Amazon's S3 service + JungleDisk's client software is a strong choice these days - not the cheapest option, but you pay for what you use and Amazon's track record suggests as a company it will be in business as long as or longer than any other storage providers who hang their shingle today.   Did I mention it should be encrypted? Props to JungleDisk for handling the "encryption issue" and future-proofing (open source library to interoperate with Jungle Disk) pretty well. *All of the above. Some people call it being paranoid ... others think to themselves "Ahhh, I can sleep at night now". Also, it's more fault-tolerance than backup, but you should check out Drobo - basically it's dead simple RAID that seems to work quite well.
A: Another vote for mozy.com You get 2GB for free, or $5/month gets you unlimited backup space. Backups can occur on a timed basis, or when your PC/Mac is not busy. It's encrypted during transit and storage. You can retrieve files via built in software, through the web or pay for a DVD to be burned and posted back. William Macdonald A: If you feel like syncing to the cloud and don't mind the initial, beta, 2GB cap, I've fallen in love with Dropbox. It has versions for Windows, OSX, and Linux, works effortlessly, keeps files versioned, and works entirely in the background based on when the files changed (not a daily schedule or manual activations). Ars Technica and Joel Spolsky have both fallen in love (though the love seems strong with Spolsky, but let's pretend!) with the service if the word of a random internet geek is not enough. A: Here are the features I'd look out for: * *As near to fully automatic as possible. If it relies on you to press a button or run a program regularly, you will get bored and eventually stop bothering. An hourly cron job takes care of this for me; I rsync to the 24x7 server I run on my home net. *Multiple removable backup media so you can keep some off site (and/or take one with you when you travel). I do this with a warm-pluggable SATA drive bay and a cron job which emails me every week to remind me to change drives. *Strongly encrypted media, in case you lose one. The linux encrypted device support (cryptsetup et al) does this for me. *Some sort of point-in-time recovery, but consider carefully what resolution you want. Daily might be enough - having multiple backup media probably gets you this - or you might want something more comprehensive like Apple's Time Machine. I've used some careful rsync options with my removable drives: every day creates a fresh snapshot directory, but files which are unchanged from the previous day are hard linked instead of copied, to save space. A: I prefer http://www.jungledisk.com/ .
It's based on Amazon S3, cheap, multiplatform, multiple machines with a single license. A: usb hard disk + rsync works for me (see here for a Win32 build) A: Or simply just set up a gmail account and mail it to yourself :) Unless you're a bit paranoid about google knowing about your stuff since you said research. It doesn't help you much with structure and stuff but it's free, big storage and off-site so quite safe. A: If you use OS X 10.5 or above then the cost of Time Machine is the cost of an external hard drive. Not only that, but the interface is dead simple to use. Open the folder you wish to recover, click on the time machine icon, and browse the directory as if it was 1999 all over again! I haven't tried to encrypt it, but I imagine you could use truecrypt. Yes, this answer was posted quite some time after the question was asked, however I believe it should help those who stumble across this posting in the future (like I did). A: Set up a Linux or xBSD server: -Set up a source control system of your choice on it. --Mirror Raid (raid 1) at min --Daily (or even hourly) backups to external drive[s]. From the server you could also set up an automatic offsite backup. If the internet is out, you'd still have your external drive and just have it auto sync once it comes back. Once it's set up it should be about 0 work. You don't need anything "fancy" for offsite backup. Get a webhost that allows storing non-web data. Sync via sftp or rsync over ssh. Store data on the other end in a TrueCrypt container if you're paranoid. If you work for an employer/contractor also ask them. Most places already have something in place or let you work with their IT. A: My vote goes to dirvish (for Linux). It uses rsync as a backend but is very easy to configure. It makes automatic, periodic and differential backups of directories. The big benefit is that it creates hard links to all files not changed since the last backup.
So restore is easy: just copy the last created directory back - instead of restoring all diffs one after another like other differential backup tools need to do. A: I have the following backup scenarios and use rsync scripts to store on USB and network shares. * *(weekly) Windows backup for "bare metal" recovery Content of system drive C:\ using Windows Backup for quick recovery after physical disk failure, as I don't want to reinstall Windows and applications from scratch. This is configured to run automatically using the Windows Backup schedule. *(daily and conditional) Active content backup using rsync Rsync takes care of all changed files from laptop, phone, other devices. I back up the laptop every night and after significant changes in content, like an import of recent photo RAWs from the SD card to the laptop. I've created a bash script that I run from Cygwin on Windows to start rsync: https://github.com/paravz/windows-rsync-backup A: If you're using deduplication STAY AWAY from JungleDisk. Their restore client makes a mess of the reparse point, and makes the file unusable. You hopefully can fix it in safe mode with: fsutil reparsepoint delete
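Not from the thread itself, but to make the hard-linked snapshot trick that dirvish and the rsync answers above rely on concrete, here is a rough sketch of the idea (Java purely for illustration; the class name and the size+mtime "unchanged" test are my own simplifications - real tools compare far more carefully):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.*;
import java.util.stream.Stream;

public class Snapshot {
    /** Copies 'source' into 'dest', hard-linking any file that is
     *  unchanged (same size and mtime) relative to 'previous'. */
    public static void take(Path source, Path previous, Path dest) throws IOException {
        try (Stream<Path> files = Files.walk(source)) {
            for (Path src : (Iterable<Path>) files::iterator) {
                Path rel = source.relativize(src);
                Path out = dest.resolve(rel);
                if (Files.isDirectory(src)) {
                    Files.createDirectories(out);
                    continue;
                }
                Path prev = previous == null ? null : previous.resolve(rel);
                if (prev != null && Files.isRegularFile(prev)
                        && Files.size(prev) == Files.size(src)
                        && Files.getLastModifiedTime(prev).equals(Files.getLastModifiedTime(src))) {
                    Files.createLink(out, prev);   // unchanged: link, no extra disk space
                } else {
                    Files.copy(src, out, StandardCopyOption.COPY_ATTRIBUTES);
                }
            }
        }
    }

    /** Tiny self-check: back up one file twice and verify that the
     *  second snapshot hard-links it to the first. */
    public static boolean demo() {
        try {
            Path src = Files.createTempDirectory("srcdata");
            Files.writeString(src.resolve("a.txt"), "hello");
            Path s1 = Files.createTempDirectory("snapA").resolve("0");
            Path s2 = Files.createTempDirectory("snapB").resolve("0");
            take(src, null, s1);
            take(src, s1, s2);
            return Files.readString(s2.resolve("a.txt")).equals("hello")
                && Files.isSameFile(s1.resolve("a.txt"), s2.resolve("a.txt"));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("snapshot demo ok: " + demo());
    }
}
```

Restore then really is just a recursive copy of the last snapshot directory, as the dirvish answer says.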
{ "language": "en", "url": "https://stackoverflow.com/questions/113423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: Converting Reverse Polish Notation Is there any way to interpret Reverse Polish Notation into "normal" mathematical notation when using either C++ or C#? I work for an engineering firm, so they use RPN occasionally and we need a way to convert it. Any suggestions? A: The Shunting Yard Algorithm is used to convert infix (i.e. algebraic) notation to RPN. This is the opposite of what you want. Can you give me an example of your RPN input? I am a veteran HP calculator user/programmer. I presume you have a stack containing all the inputs & operators. I would guess that you need to reconstruct the expression tree and then traverse the tree to generate the infix form. A: C# doesn't have built-in support for parsing Reverse Polish Notation (RPN). You'll need to write your own parser, or find one online. There are dozens of tutorials for converting postfix form (RPN) to infix (algebraic equations). Take a look at this, maybe you'll find it useful and you can try to 'reverse engineer' it to convert postfix expressions to infix form, keeping in mind that there can be multiple infix notations for a given postfix one. There are very few useful examples that actually discuss converting postfix to infix. Here's a 2-part entry that I found very useful. It also has some pseudo code: * *PostFix to Infix: converting RPN to algebraic expressions *Postfix to infix, part 2: adding the parentheses A: Yes. Think of how an RPN calculator works. Now, instead of calculating the value, you add the operation to the tree. So, for example, 2 3 4 + *, when you get to the +, then rather than putting 7 on the stack, you put (+ 3 4) on the stack. And similarly when you get to the * (your stack will look like 2 (+ 3 4) * at that stage), it becomes (* 2 (+ 3 4)). This is prefix notation, which you then have to convert to infix. Traverse the tree left-to-right, depth first. For each "inner level", if the precedence of the operator is lower, you will have to place the operation in brackets.
Here, then, you will say 2 * (3 + 4), because the + has lower precedence than *. Hope this helps! Edit: There's a subtlety (apart from not considering unary operations in the above): I assumed left-associative operators. For right-associative operators (e.g., **) you get different results for 2 3 4 ** ** ⇒ (** 2 (** 3 4)) versus 2 3 ** 4 ** ⇒ (** (** 2 3) 4). When reconstructing infix from the tree, both cases show that the precedence doesn't require bracketing, but in reality the latter case needs to be bracketed ((2 ** 3) ** 4). So, for right-associative operators, the left-hand branch needs to be higher-precedence (instead of higher-or-equal) to avoid bracketing. On further thought, you also need brackets for the right-hand branch of - and / operators. A: If you can read Ruby, you'll find some good solutions to this here A: One approach is to take the example from the second chapter of the dragon book, which explains how to write a parser to convert from infix to postfix notation, and reverse it. A: If you have some source text (string/s) that you're looking to convert from RPN (postfix notation) to "normal notation" (infix), this is certainly possible (and likely not too difficult). RPN was designed for stack-based machines, as the way the operation was represented ("2 + 3" -> "2 3 +") fit how it was actually executed on the hardware (push "2" onto the stack, push "3" onto the stack, pop the top two arguments off the stack and add them, push the result back onto the stack). Basically, you want to create a syntax tree out of your RPN by making the 2 expressions you want to operate on "leaf nodes" and the operation itself, which comes afterward, the "parent node". This will probably be done by recursively looking at your input string (you'll probably want to make sure that subexpressions are correctly parenthesized for extra clarity, if they aren't already).
Once you have that syntax tree, you can output prefix, infix, or postfix notation simply by doing a pre-order, in-order, or post-order traversal of that tree, respectively (again, parenthesizing your output for clarity if desired). Some more info can be found here. A: I just wrote a version in Java, it's here and one in Objective-C, over here. Possible algorithm: Given you have a stack with the input in RPN as the user would enter it, e.g. 8, 9, +, 2, *. You iterate over the array from first to last and you always remove the current element. This element you evaluate. If it is an operand, you push it onto a result stack. When it is an operator you pop the result stack twice (for binary operations) for the operands and push the resulting string back onto the result stack. With the example input of "8, 9, +, 2, *" you get these values on the result stack (square brackets to indicate single elements):
step 1: [8]
step 2: [8], [9]
step 3: [(8 + 9)]
step 4: [(8 + 9)], [2]
step 5: [(8 + 9) * 2]
When the input stack is empty, you are finished and the result stack's only element is your result. (Note however that the input could contain entries that don't make sense, like a leading operation: "+ 2 3 /".) The implementations in the links deliberately don't use any self-made types for e.g. operators or operands, nor do they apply e.g. the composite pattern. They're just clean and simple so they can be easily understood and ported to any other language. Porting it to C# is straightforward.
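The stack-of-strings algorithm described above is only a few lines in practice. Here is a minimal Java sketch (the class and method names are my own; it fully parenthesizes every subexpression rather than tracking precedence, so "8 9 + 2 *" comes out as ((8 + 9) * 2)):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class RpnToInfix {
    /** Converts a space-separated RPN expression to a fully
     *  parenthesized infix string, e.g. "8 9 + 2 *" -> "((8 + 9) * 2)". */
    public static String convert(String rpn) {
        Deque<String> stack = new ArrayDeque<>();
        for (String token : rpn.trim().split("\\s+")) {
            if (token.matches("[-+*/]")) {
                // Operator: pop the two operands (note the order!) and
                // push the combined subexpression back as a single element.
                String right = stack.pop();
                String left = stack.pop();
                stack.push("(" + left + " " + token + " " + right + ")");
            } else {
                // Operand: push it as-is.
                stack.push(token);
            }
        }
        if (stack.size() != 1) {
            throw new IllegalArgumentException("Malformed RPN: " + rpn);
        }
        return stack.pop();
    }

    public static void main(String[] args) {
        System.out.println(convert("8 9 + 2 *")); // ((8 + 9) * 2)
    }
}
```

A real converter would track operator precedence and associativity to drop the redundant brackets, exactly as the earlier tree-based answer sketches.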
{ "language": "en", "url": "https://stackoverflow.com/questions/113424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How do I connect to a USB webcam in .NET? I want to connect to a USB webcam in .NET, specifically using C#. Being new to .NET I don't know what kind of support there is in the standard libraries for doing so. I found one example on the web that copies bitmaps through the clipboard, but that seems very hacky (and probably slow). Is there a better way? A: Interesting side note: WIA isn't supported by Vista for doing captures from webcams anymore. They mainly targeted it towards scanners and pulling stills from cameras. Also, larger manufacturers like Logitech have abandoned WIA in favor of DirectShow. A: Here is a nice example of doing this. It's using DirectShow.Net (http://directshownet.sourceforge.net/), which is probably better than using the "clipboard" :D. https://www.codeproject.com/Articles/18511/Webcam-using-DirectShow-NET A: There's a package with a lot of computer vision functionality called AForge. And they have an easy way to get webcam images from a USB camera if you're still looking. Just check out the computer vision motion sensor example code. I'm sure you can pull out the function calls you need from it as I did. [sorry to necro, but this could be of use to someone in the future] A: On my computer, WIA was painstakingly sloooow... so I decided to give the Windows Multimedia Video Capture a try. You can find a demo here. A: You will need to use Windows Image Acquisition (WIA) to integrate a webcam with your application. There are plenty of examples of this readily available. Here is a C# Webcam User Control with source. Here are some more articles and blog posts from people looking to solve the same problem you are: * *MSDN Coding4Fun: Look at me! Windows Image Acquisition *CodeProject: WIA Scripting and .NET *CodeProject: WebCam Fast Image Capture Service using WIA *clausn.dk: Webcam control from C# and WIA A: It really depends on what you want to do.
WIA is primarily for capturing stills from imaging devices, and DirectShow (used either through directshow.net or managed DirectX) gives access to fuller video features. The other option is to create a WPF application. It has a huge amount of built-in support for video (to the extent that having a looping video clip as a button is pretty trivial), and should be quick and easy to develop.
{ "language": "en", "url": "https://stackoverflow.com/questions/113426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: How to clear the scrollback in the screen command? I use the screen command for command-line multitasking in Linux and I set my scrollback buffer length to a very large value. Is there a key combination to clear the buffer for a certain tab when I don't want it sitting there anymore? A: I added the command "clear" as well to clean the current screen. N.B. You have to press enter to regain your prompt. bind '/' eval "clear" "scrollback 0" "scrollback 15000" Also add it to your ".screenrc" to make it permanent. N.B. I added single quotes around the slash to be sure it didn't interfere in my ".screenrc". May not be necessary. A: This thread has the following suggestion: In the window whose scrollback you want to delete, set the scrollback to zero, then return it to its normal value (in your case, 15000). If you want, you can bind this to a key: bind / eval "scrollback 0" "scrollback 15000" You can issue the scrollback 0 command from the session as well, after typing C-a :. HTH. A: * *C-a C will clear the screen, including the prompt *clear (command, not key combination) will clear the screen, leaving a prompt ETA: misread the original question; these will just clear the visible text, but will not clear the buffer! A: alias cls='printf "\e[3J\033c"' Clears the screen and scrollback buffer. A: From the man page: C-a C (clear) Clear the screen. A: ^a : clear worked for me on Ubuntu. A: Command-K seems to be the best solution for Mac. For more details and explanations, please refer to this page.
{ "language": "en", "url": "https://stackoverflow.com/questions/113427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: iPhone application: is it possible to use a "double" slider to select a price range? I'm working on an iPhone application (not a web app) and I'd like to build a form asking a user to indicate a price range. Instead of using two text fields, I would prefer to use a double slider to set the minimum and the maximum price. I know that it is possible to use a simple slider (sound control for example) but I've never seen a double one. Can anyone help? A: This is not possible without creating a custom control. You'll need to inherit from UIControl or UIView and provide a custom drawRect method. You'll also need to respond to touch and drag events to update the state of the control. I have not done this myself, but I would be prepared for a fairly significant amount of work to get everything to respond and display as expected. I'm curious as to why you need to have both values specified on a single slider? Why not use two sliders either side-by-side or stacked? It would not require any more input steps than a double slider, and would conform more to standard UI guidelines. A: I think you can specify multiple thumbs for a single slider if you subclass UISlider; at least I vaguely remember that being possible in Mac OS X. But Code Addict is right, you'll probably be better off using the standard controls - a double-thumbed slider seems like it'd be pretty difficult to deal with in the touchscreen environment. A: I built such a control and added it to GitHub, so feel free to have a look, and if you like the control, extend it and contribute. GitHub page: http://github.com/doukasd/DoubleSlider Blog post (showing a video of how it works): http://dev.doukasd.com/2010/08/double-slider/
{ "language": "en", "url": "https://stackoverflow.com/questions/113437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Displaying code in blog posts What libraries and/or packages have you used to create blog posts with code blocks? Having a JavaScript library that would support line numbers and indentation is ideal. A: The GeSHi text highlighter is pretty awesome. If you're using WordPress, there's a plugin for you already A: A simple Google query reveals http://code.google.com/p/syntaxhighlighter/ From initial looks it seems pretty good. It's entirely JS based, so it can be implemented independent of the server-side language used. A: Syntax Highlighter is used by WordPress and produces nice results. A: Copy Visual Studio code as HTML http://www.jtleigh.com/people/colin/software/CopySourceAsHtml/ A: I use Live Writer with a VS add-in that copies source code as HTML. I copy the code, then change into HTML view in Writer and paste the result. You can download the add-in at: http://blogs.microsoft.co.il/blogs/bursteg/archive/2007/11/21/copy-source-as-html-copysourceashtml-for-visual-studio-2008-rtm.aspx A: Some time ago I've done some research on this topic and came to the conclusion that using GeSHi is the way to go. However, recently I've been looking at some more alternatives: * *Using Windows Live Writer with a syntax highlighter plugin (there are several available) *Using the syntaxhighlighter library or the Google Code Prettify library. Both are written in JS and I think the second one is used on Stack Overflow *Use some intermediate process, where I write the posts in Markdown for example and let a program generate the final HTML A: Personally, I use this website to do it for me: http://puzzleware.net/codehtmler/default.aspx A: If that's my own code, I would just use SciTE's export to HTML and paste it.
Otherwise (highlighting code like it is done here), I would prefer to do it on the server side: JS highlighting (as seen, for example, on JavaLobby) happens after the page has been displayed in default mode (so there is a sudden change of look, not very nice), and is often slow, plus JS can be disabled. Actually, such a task can be done once, after user input; it doesn't need to be done over and over on each page served to a visitor. A: I usually use this free online tool that formats C# code. Along with C#, it also formats code for VB, HTML, XML, T-SQL and MSH (code name Monad).
{ "language": "en", "url": "https://stackoverflow.com/questions/113440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Java User Interface Specification Java supplies standard user interface guidelines for applications built using Java Swing. The basic guidelines are good, but I really feel the look and feel is boring and outdated. Is anyone aware of a publicly available Java user interface guide that has better look & feel guidelines than the Sun-provided guidelines? A: There are many LNFs (Look And Feels) displayed here, but they don't exactly come with a 'Java User Guide'. However, MigLayout does closely follow the main user interface standards that exist out there (including some obscure points of button order): For instance the OK and Cancel buttons have a different order on Windows and Mac OS X. While other layout managers use factories and button builders for this, it is inherently supported by MigLayout by just tagging the buttons. One just tags the OK button with "ok" and the Cancel button with "cancel" and they will end up in the correct order for the platform the application is running on, if they are put in the same grid cell. Example on Mac: (source: miglayout.com) A: The Apple developer guide has a human-computer interface guide - http://developer.apple.com/documentation/UserExperience/Conceptual/AppleHIGuidelines/XHIGIntro/chapter_1_section_1.html#//apple_ref/doc/uid/TP30000894-TP6 . Even though it's targeted at the Mac platform, you could learn something from it - it's the reason why so many Mac apps are pleasant to use, as well as aesthetically pleasing! A: Along the line of Chii's answer, I would recommend taking a look at the Windows Vista User Experience Guidelines for general tips on making user interfaces. Although the name ("Windows Vista User Experience Guidelines") and source (Microsoft) may suggest that it only contains Windows-centric tips and advice, it does offer good general tips and directions that can be used when designing interfaces for non-Windows applications as well.
The Design Principles sections address some points to keep in mind when designing an effective user interface. For example, bullet three of How to Design a Great User Experience says: Don't be all things to all people Your program is going to be more successful by delighting its target users than attempting to satisfy everyone. These are the kinds of tips that apply to designing user interfaces on any platform. Of course, there are also Windows-specific guidelines as well. I believe one of the biggest reasons why the look and feel of Swing applications seems "boring" and "outdated" is the platform-independent nature of Java. In order for graphical user interfaces to work on several different platforms, Java needs to have facilities to adapt the user interface to the different host operating systems. For example, various platforms have various sizes for windows, buttons, and other visual components, so absolute positioning does not work too well. To combat that problem, Swing uses layout managers which (generally) use relative positioning to place the visual components on the screen. Despite these "limitations" of building graphical user interfaces in Java, I think that tips from guidelines provided by non-Sun and non-Java-specific sources can still be a good source of information in designing and implementing a user interface that is effective. After all, designing a user interface is less about programming languages and more about human-machine interaction. A: I don't think there are any other complete guidelines.
But if you are not talking about the spacing/positioning of components (I don't think that part of the Look And Feel Design Guidelines is outdated), but only about the look and feel, good starting points are SwingLabs / SwingX: http://swinglabs.org http://swinglabs.org/docs/presentations/2007/DesktopMatters/FilthyRichClients.pdf http://parleys.com/display/PARLEYS/Home#slide=1;talk=7643;title=Filthy%20Rich%20Clients and JGoodies: http://www.jgoodies.com/articles/index.html http://www.jgoodies.com/articles/efficient%20swing%20design.pdf
{ "language": "en", "url": "https://stackoverflow.com/questions/113464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: When to enable/disable ViewState I generally disable ViewState for my ASP.NET controls unless I explicitly know I am going to require view state for them. I have found that this can significantly reduce the page size of the HTML generated. Is this good practice? When should it be enabled or disabled? A: I think it's good practice. Many ASP.NET devs are unaware that their viewstates add tremendous baggage to the HTML that's being sent to their users' browsers. A: You may find the information contained in the "ASP.NET State Management Recommendations" article on MSDN useful for making your decision. Generally, in ASP.NET 2.0 and above, disabling the ViewState is less destructive due to the introduction of the Control State for storing information required for raising events etc. A: It's a good practice. Unless you use ViewState values on postbacks, or they are required by some complex control itself, it's a good idea to save on ViewState as part of what will be sent to the client. A: Yes, it is a very good idea. One could argue that it should have been disabled by default by Microsoft, just like caching. To see how bad ViewState is in terms of increased size you can use a tool called Viewstate Analyzer. This is particularly useful when you have an existing application developed with ViewState enabled. Another good reason to disable ViewState is that it is really hard to disable at a later stage, when you have loads of components depending on it. A: Definitely a good idea; there's nothing worse than a page where a developer binds a DataGrid in Page_Load on every request while also submitting the ViewState! It's also a really good idea if you are planning on using the UpdatePanel from the AJAX Extensions; it means you're submitting less during the UpdatePanel request. (Don't flame me for saying that an UpdatePanel can be good :P) A: ViewState can unnecessarily increase the number of bytes that need to be transferred.
So unless the data is going to be used the next time, it's a good idea to switch it off.
{ "language": "en", "url": "https://stackoverflow.com/questions/113479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Getting IIS6 to play nice with WordPress Pretty Permalinks I've got a WordPress-powered blog that I'm trying to get set up on our IIS6 server, and everything works besides the permalink structure, which I'm having a big headache with. After googling around and reading the WordPress Codex I learned that it's because IIS6 doesn't have the equivalent of Apache's mod_rewrite, which is required for this feature to work. So that's where I'm at now. I can't seem to find a functional solution to get the pretty permalinks to work without the "index.php/," anyone have any recommendations? What I can't do: * *Upgrade to IIS7 *Switch to Apache *Quit my job Those suggestions have been offered to me, but sadly I can't do any of them. Just an FYI. Much thanks to anyone who can lead me in the right direction. A: I just came across the following answer on another question: Pretty URLs for search pages Hope that helps! A: IIRF does this, for IIS6. Free. A: I researched this topic briefly and it seems you need an additional piece which is called URL Rewrite (Go Live). Here is an article that walks you through how to create a rewrite rule using this. It also requires IIS7, though I am not sure whether that's really important. But it might be another thing you have to take care of. Just in case the above URL fails later, here is an example rewrite rule for WordPress:
<rewrite>
  <rules>
    <rule name="Main Rule" stopProcessing="true">
      <match url=".*" />
      <conditions logicalGrouping="MatchAll">
        <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
        <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
      </conditions>
      <action type="Rewrite" url="index.php" />
    </rule>
  </rules>
</rewrite>
A: I use a shared IIS7 host for my WordPress blog, so I don't have the option of installing a URL rewrite module either.
After a bit of searching around, the best workaround I could come up with was to use a custom 404 error handler that fixes up some server variables and then hands the request on to index.php for processing. To show that this actually works, I will link to the relevant post on my blog :-) A: I was struggling with this problem for a few days, and after searching through a lot of material I found a solution; I now have pretty permalinks on my self-hosted (IIS7+, Windows Server) blog. (Prerequisites: PHP 5.0+ and FastCGI - don't use the ISAPI filter.) I have made a web.config; you just need to put that file in your root directory and you're done. http://www.geekblogger.org/2010/03/how-to-set-pretty-permalinks-in.html
{ "language": "en", "url": "https://stackoverflow.com/questions/113489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Windows Performance Counter Port to Linux, HP-UX and AIX We implemented a server application available on Windows only. Now we would like to port it to Linux, HP-UX and AIX, too. This application provides internal statistics through performance counters into the Windows Performance Monitor. To be more precise: the application is a database, and we would like to provide information like the number of connected users or the number of requests executed to the administrator. So this is "new" information, proprietary to our application. But we would like to make it available in the same environment where the operating system delivers information like the CPU, etc. The goal is to make it easily readable for the administrator. What is the appropriate and commonly used performance monitor under Linux, HP-UX and AIX? A: I would say: that depends on which performance you want to monitor. Used CPU time? Free RAM? Disk IO? Number of beers in your freezer... But regardless of this you can look at any files below /proc. I'm not sure for HP, but at least Linux and AIX should have that tree (if it's not deactivated at kernel compile time). A: Management is where most OSes depart from one another. For this reason there are not many tools that are common between all the OSes. Additionally, Unix tools follow the single-process, single-responsibility idiom where one tool gets CPU info, another gets memory, etc. The only tool I have seen in the Unix world that gets all this info in one place is top. Almost all sysadmins are familiar with this tool, and it works on all the flavors of OSes you are interested in. It also has the additional advantage of being open source. You could simply extend this tool to expose the counters you are interested in and ship it along with your application. Another way to do this might be to expose your counters through SNMP and leave it to some third-party SNMP tool like HP OpenView that can collect and present a consistent view along with other management info.
This might be a more enterprisy solution, which might appeal to the marketing folks. I would also say it's a good idea to write a standalone console tool that admins can use from their custom home-grown scripts (there are many firms out there with superhuman admins / overpaid IT staff that do this). Altogether that would be a healthy solution for your requirement, I think. A: The most standard Unix tools for such data are the *stat tools (iostat, vmstat, netstat) and sar. On Linux you'll find all this information in /proc, but most Unixes don't have /proc nicely filled with what you are looking for. The mentioned tools are quite standardized and can be used to gather the data you need.
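Just to make the "on Linux it's all files under /proc" point concrete (the class and helper names here are my own, not from the thread): the load averages, for example, are plain text in /proc/loadavg, so reading such a counter is one line of I/O plus a little parsing:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ProcStats {
    /** Extracts the 1-minute load average from a /proc/loadavg line,
     *  e.g. "0.42 0.30 0.25 1/123 4567" -> 0.42. */
    public static double parseLoadAvg(String line) {
        return Double.parseDouble(line.trim().split("\\s+")[0]);
    }

    public static void main(String[] args) throws IOException {
        Path loadavg = Paths.get("/proc/loadavg");
        if (Files.exists(loadavg)) {   // Linux only; HP-UX and AIX differ
            System.out.println("load(1m) = " + parseLoadAvg(Files.readString(loadavg)));
        }
    }
}
```

An application could expose its own counters in the same spirit - a small plain-text status file that scripts and the *stat-style tools mentioned above can consume.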
{ "language": "en", "url": "https://stackoverflow.com/questions/113498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Bad reference to an object already freed Is there a way to be sure we hold a usable reference to an object, i.e. to be sure it has not already been freed, leaving that non-nil reference dangling? A: If you're using FastMM4 as your Memory Manager, you can check that the class is not TFreeObject. Or, in a more standard case, use a routine that will verify that your object is what it says it is by checking the class VMT. There have been such ValidateObj functions hanging around for some time (by Ray Lischner and Hallvard Vassbotn: http://hallvards.blogspot.com/2004/06/hack-6checking-for-valid-object.html) Here's another:
function ValidateObj(Obj: TObject): Pointer;
// see { Virtual method table entries } in System.pas
begin
  Result := Obj;
  if Assigned(Result) then
  try
    if Pointer(PPointer(Obj)^) <>
       Pointer(Pointer(Cardinal(PPointer(Obj)^) + Cardinal(vmtSelfPtr))^) then
      // object not valid anymore
      Result := nil;
  except
    Result := nil;
  end;
end;
Update: A bit of caution... The above function will ensure that the result is either nil or a valid non-nil object. It does not guarantee that the Obj is still what you think it is, in the case where the Memory Manager has already reallocated that previously freed memory. A: No. Unless you use something like reference counting or a garbage collector to make sure no object will be freed before it has zero references. Delphi can do reference counting for you if you use interfaces. Of course Delphi for .Net has a garbage collector. As mentioned, you could use the knowledge of Delphi or the memory manager internals to check for valid pointers or objects, but they are not the only ones that can give you pointers. So you can't cover all pointers even with those methods. And there is also a chance that your pointer happens to be valid again, but given to somebody else. So it is not the pointer you are looking for. Your design should not rely on them. Use a tool to detect any reference bugs you make. A: Standard, no... 
That's why VCL components can register themselves to be notified of the destruction of an object, so that they can remove the reference from their internal list of components or just reset their property. So if you want to make sure you haven't got any invalid references, there are two options: * *Implement a destruction notification handler which every class can subscribe to. *Fix your code in a way that the references aren't spread around through different objects. You could for instance only provide access to the reference via a property of another object. And instead of copying the reference to a private field you access the property of the other object. A: As others have said, no definitive way, but if you manage the ownership well, then the FreeAndNil routine will ensure that your variable is nil if it doesn't point to anything. A: It's usually not a good idea to check whether a reference is valid anyway. If a reference is not valid, your program will crash at the place where it is using the invalid reference. Otherwise the invalid reference might survive longer and debugging becomes harder. Here are some references to why it's better to crash on an invalid reference. (They talk about pointers in Win32, but the ideas are still relevant): * *IsBadXxxPtr should really be called CrashProgramRandomly *Should I check the parameters to my function? A: Unfortunately there is no way to 100% guarantee that a pointer to anything is still valid, except by meticulously writing the correct code. A: With the usage of interface references (instead of object references) it is possible to avoid these invalid pointer problems because there is no explicit call to Free in your code anymore.
{ "language": "en", "url": "https://stackoverflow.com/questions/113504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Including a WebService reference in a control I've written a control in C# that overrides the built-in DropDownList control. For this I need a JavaScript resource included, which I'm including as an embedded resource then adding the WebResource attribute, which works fine. However, I also need to reference a web service, which I would normally include in the ScriptManager on the page like this
<asp:scriptmanager id="scriptmanager" runat="server">
  <Services>
    <asp:ServiceReference Path="~/Path/To/Service.asmx" />
  </Services>
</asp:scriptmanager>
Is there any way to make the page include this reference in the code-behind of the control I've created, similar to how it includes the embedded JavaScript file? A: You can add a ScriptManagerProxy in the code or the markup of your control and add the service reference through it. The settings in the ScriptManagerProxy are merged with the "real" ScriptManager at run time. A: If you know the page the user control is in, you can do a ((PageName)this.Page).scriptmanager.Services.Add() from the user control. A: You can just add the JavaScript to call the web service yourself: Sys.Net.WebServiceProxy.invoke(url, methodName, useHttpGet, parameters, succeededCallback, failedCallback, userContext, timeOut); http://www.asp.net/AJAX/Documentation/Live/ClientReference/Sys.Net/WebServiceProxyClass/WebServiceProxyInvokeMethod.aspx The docs are for ASP.NET AJAX 1.0, but it's the same in .NET 3.5.
{ "language": "en", "url": "https://stackoverflow.com/questions/113507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Best implementation for hashCode method for a collection How do we decide on the best implementation of hashCode() method for a collection (assuming that equals method has been overridden correctly)? A: There's a good implementation of the Effective Java's hashcode() and equals() logic in Apache Commons Lang. Check out HashCodeBuilder and EqualsBuilder. A: If you use Eclipse, you can generate equals() and hashCode() using: Source -> Generate hashCode() and equals(). Using this function you can decide which fields you want to use for equality and hash code calculation, and Eclipse generates the corresponding methods. A: Although this is linked to Android documentation (Wayback Machine) and my own code on GitHub, it will work for Java in general. My answer is an extension of dmeister's answer with just code that is much easier to read and understand. @Override public int hashCode() { // Start with a non-zero constant. Prime is preferred int result = 17; // Include a hash for each field. // Primitives result = 31 * result + (booleanField ? 1 : 0); // 1 bit » 32-bit result = 31 * result + byteField; // 8 bits » 32-bit result = 31 * result + charField; // 16 bits » 32-bit result = 31 * result + shortField; // 16 bits » 32-bit result = 31 * result + intField; // 32 bits » 32-bit result = 31 * result + (int)(longField ^ (longField >>> 32)); // 64 bits » 32-bit result = 31 * result + Float.floatToIntBits(floatField); // 32 bits » 32-bit long doubleFieldBits = Double.doubleToLongBits(doubleField); // 64 bits (double) » 64-bit (long) » 32-bit (int) result = 31 * result + (int)(doubleFieldBits ^ (doubleFieldBits >>> 32)); // Objects result = 31 * result + Arrays.hashCode(arrayField); // var bits » 32-bit result = 31 * result + referenceField.hashCode(); // var bits » 32-bit (non-nullable) result = 31 * result + // var bits » 32-bit (nullable) (nullableReferenceField == null ?
0 : nullableReferenceField.hashCode()); return result; } EDIT Typically, when you override hashcode(...), you also want to override equals(...). So for those who will implement or have already implemented equals, here is a good reference from my GitHub... @Override public boolean equals(Object o) { // Optimization (not required). if (this == o) { return true; } // Return false if the other object has the wrong type, interface, or is null. if (!(o instanceof MyType)) { return false; } MyType lhs = (MyType) o; // lhs means "left hand side" // Primitive fields return booleanField == lhs.booleanField && byteField == lhs.byteField && charField == lhs.charField && shortField == lhs.shortField && intField == lhs.intField && longField == lhs.longField && floatField == lhs.floatField && doubleField == lhs.doubleField // Arrays && Arrays.equals(arrayField, lhs.arrayField) // Objects && referenceField.equals(lhs.referenceField) && (nullableReferenceField == null ? lhs.nullableReferenceField == null : nullableReferenceField.equals(lhs.nullableReferenceField)); } A: Just a quick note to complement other more detailed answers (in terms of code): If I consider the question how-do-i-create-a-hash-table-in-java and especially the jGuru FAQ entry, I believe some other criteria upon which a hash code could be judged are: * *synchronization (does the algo support concurrent access or not)? *fail safe iteration (does the algo detect a collection which changes during iteration) *null value (does the hash code support null value in the collection) A: It is better to use the functionality provided by Eclipse, which does a pretty good job, so you can put your effort and energy into developing the business logic. A: The best implementation? That is a hard question because it depends on the usage pattern. A reasonably good implementation for nearly all cases was proposed in Josh Bloch's Effective Java, Item 8 (second edition).
The best thing is to look it up there because the author explains there why the approach is good. A short version * *Create an int result and assign it a non-zero value. *For every field f tested in the equals() method, calculate a hash code c by: * *If the field f is a boolean: calculate (f ? 0 : 1); *If the field f is a byte, char, short or int: calculate (int)f; *If the field f is a long: calculate (int)(f ^ (f >>> 32)); *If the field f is a float: calculate Float.floatToIntBits(f); *If the field f is a double: calculate Double.doubleToLongBits(f) and handle the return value like every long value; *If the field f is an object: Use the result of the hashCode() method or 0 if f == null; *If the field f is an array: see every field as a separate element and calculate the hash value in a recursive fashion and combine the values as described next. *Combine the hash value c with result: result = 37 * result + c *Return result This should result in a proper distribution of hash values for most use situations. A: If I understand your question correctly, you have a custom collection class (i.e. a new class that extends from the Collection interface) and you want to implement the hashCode() method. If your collection class extends AbstractList, then you don't have to worry about it, there is already an implementation of equals() and hashCode() that works by iterating through all the objects and adding their hashCodes() together. public int hashCode() { int hashCode = 1; Iterator i = iterator(); while (i.hasNext()) { Object obj = i.next(); hashCode = 31*hashCode + (obj==null ? 0 : obj.hashCode()); } return hashCode; } Now if what you want is the best way to calculate the hash code for a specific class, I normally use the ^ (bitwise exclusive or) operator to process all fields that I use in the equals method: public int hashCode(){ return intMember ^ (stringField != null ? stringField.hashCode() : 0); } A: @about8 : there is a pretty serious bug there.
Zam obj1 = new Zam("foo", "bar", "baz"); Zam obj2 = new Zam("fo", "obar", "baz"); same hashcode. You probably want something like public int hashCode() { return (getFoo().hashCode() + getBar().hashCode()).toString().hashCode(); (can you get hashCode directly from int in Java these days? I think it does some autocasting.. if that's the case, skip the toString, it's ugly.) A: As you specifically asked for collections, I'd like to add an aspect that the other answers haven't mentioned yet: A HashMap doesn't expect its keys to change their hashcode once they are added to the collection. That would defeat the whole purpose... A: Use the reflection methods on Apache Commons EqualsBuilder and HashCodeBuilder. A: I use a tiny wrapper around Arrays.deepHashCode(...) because it handles arrays supplied as parameters correctly public static int hash(final Object... objects) { return Arrays.deepHashCode(objects); } A: First make sure that equals is implemented correctly. From an IBM DeveloperWorks article: * *Symmetry: For two references, a and b, a.equals(b) if and only if b.equals(a) *Reflexivity: For all non-null references, a.equals(a) *Transitivity: If a.equals(b) and b.equals(c), then a.equals(c) Then make sure that their relation with hashCode respects the contract (from the same article): * *Consistency with hashCode(): Two equal objects must have the same hashCode() value Finally, a good hash function should strive to approach the ideal hash function. A: If you're happy with the Effective Java implementation recommended by dmeister, you can use a library call instead of rolling your own: @Override public int hashCode() { return Objects.hash(this.firstName, this.lastName); } This requires either Guava (com.google.common.base.Objects.hashCode) or the standard library in Java 7 (java.util.Objects.hash) but works the same way. A: about8.blogspot.com, you said if equals() returns true for two objects, then hashCode() should return the same value.
If equals() returns false, then hashCode() should return different values. I cannot agree with you. If two objects have the same hashcode it doesn't have to mean that they are equal. If A equals B, then A.hashcode must be equal to B.hashcode, but if A.hashcode equals B.hashcode it does not mean that A must equal B A: any hashing method that evenly distributes the hash value over the possible range is a good implementation. See Effective Java ( http://books.google.com.au/books?id=ZZOiqZQIbRMC&dq=effective+java&pg=PP1&ots=UZMZ2siN25&sig=kR0n73DHJOn-D77qGj0wOxAxiZw&hl=en&sa=X&oi=book_result&resnum=1&ct=result ); there is a good tip in there for hashcode implementation (item 9, I think...). A: I prefer using the utility methods from the Google Collections lib's Objects class, which help me to keep my code clean. Very often equals and hashcode methods are made from an IDE's template, so they are not clean to read. A: Here is another JDK 1.7+ approach demonstration with superclass logic accounted for. I see it as pretty convenient, with Object class hashCode() accounted for, a pure JDK dependency, and no extra manual work. Please note Objects.hash() is null tolerant. I have not included any equals() implementation, but in reality you will of course need it.
import java.util.Objects; public class Demo { public static class A { private final String param1; public A(final String param1) { this.param1 = param1; } @Override public int hashCode() { return Objects.hash( super.hashCode(), this.param1); } } public static class B extends A { private final String param2; private final String param3; public B( final String param1, final String param2, final String param3) { super(param1); this.param2 = param2; this.param3 = param3; } @Override public final int hashCode() { return Objects.hash( super.hashCode(), this.param2, this.param3); } } public static void main(String [] args) { A a = new A("A"); B b = new B("A", "B", "C"); System.out.println("A: " + a.hashCode()); System.out.println("B: " + b.hashCode()); } } A: The standard implementation is weak and using it leads to unnecessary collisions. Imagine a class ListPair { List<Integer> first; List<Integer> second; ListPair(List<Integer> first, List<Integer> second) { this.first = first; this.second = second; } public int hashCode() { return Objects.hashCode(first, second); } ... } Now, new ListPair(List.of(a), List.of(b, c)) and new ListPair(List.of(b), List.of(a, c)) have the same hashCode, namely 31*(a+b) + c as the multiplier used for List.hashCode gets reused here. Obviously, collisions are unavoidable, but producing needless collisions is just... needless. There's nothing substantially smart about using 31. The multiplier must be odd in order to avoid losing information (any even multiplier loses at least the most significant bit, multiples of four lose two, etc.). Any odd multiplier is usable. Small multipliers may lead to faster computation (the JIT can use shifts and additions), but given that multiplication has latency of only three cycles on modern Intel/AMD, this hardly matters. Small multipliers also lead to more collisions for small inputs, which may be a problem sometimes. Using a prime is pointless as primes have no meaning in the ring Z/(2**32).
So, I'd recommend using a randomly chosen big odd number (feel free to take a prime). As x86/amd64 CPUs can use a shorter instruction for operands fitting in a single signed byte, there is a tiny speed advantage for multipliers like 109. For minimizing collisions, take something like 0x58a54cf5. Using different multipliers in different places is helpful, but probably not enough to justify the additional work. A: When combining hash values, I usually use the combining method that's used in the Boost C++ library, namely: seed ^= hasher(v) + 0x9e3779b9 + (seed<<6) + (seed>>2); This does a fairly good job of ensuring an even distribution. For some discussion of how this formula works, see the StackOverflow post: Magic number in boost::hash_combine There's a good discussion of different hash functions at: http://burtleburtle.net/bob/hash/doobs.html A: For a simple class it is often easiest to implement hashCode() based on the class fields which are checked by the equals() implementation. public class Zam { private String foo; private String bar; private String somethingElse; public boolean equals(Object obj) { if (this == obj) { return true; } if (obj == null) { return false; } if (getClass() != obj.getClass()) { return false; } Zam otherObj = (Zam)obj; if ((getFoo() == null && otherObj.getFoo() == null) || (getFoo() != null && getFoo().equals(otherObj.getFoo()))) { if ((getBar() == null && otherObj.getBar() == null) || (getBar() != null && getBar().equals(otherObj.getBar()))) { return true; } } return false; } public int hashCode() { return (getFoo() + getBar()).hashCode(); } public String getFoo() { return foo; } public String getBar() { return bar; } } The most important thing is to keep hashCode() and equals() consistent: if equals() returns true for two objects, then hashCode() should return the same value. If equals() returns false, then hashCode() should return different values.
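The ListPair collision described above is easy to reproduce outside Java. Here is a small Python sketch (an illustration, not the Java library itself) that emulates Java's AbstractList.hashCode() and the varargs Objects.hash(), using a 32-bit mask to mimic Java's int wraparound, and shows the two "different" pairs hashing identically:

```python
def java_list_hash(elems):
    # Java's AbstractList.hashCode(): h = 31*h + e.hashCode() per element
    h = 1
    for e in elems:
        h = (31 * h + e) & 0xFFFFFFFF  # emulate 32-bit int wraparound
    return h

def objects_hash(*fields):
    # java.util.Objects.hash(...) is Arrays.hashCode over the varargs,
    # i.e. the same 31-multiplier fold applied to the field hashes
    return java_list_hash(fields)

a, b, c = 1, 2, 3
pair1 = objects_hash(java_list_hash([a]), java_list_hash([b, c]))
pair2 = objects_hash(java_list_hash([b]), java_list_hash([a, c]))
assert pair1 == pair2  # the needless collision: same multiplier reused twice
```

Because the outer combine reuses the same multiplier as the inner list hash, swapping an element between the two lists leaves the final value unchanged.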
{ "language": "en", "url": "https://stackoverflow.com/questions/113511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "321" }
Q: Potential legal issues with storing Social Security/Insurance Numbers (SSNs/SINs)? A client using our system has requested that we store the SSNs/SINs of the end users in our database. Currently, we store minimal information about users (name, email address, and optionally, country), so I'm not overly concerned about a security breach - however, I have a suspicion there could be legal issues about storing SSNs and not taking "appropriate" measures to secure them (coming from Australia, this is my first encounter with them). Is this a valid concern? I also read on the Wikipedia page about SINs (Canada's equivalent to SSNs) that it should ONLY be used when absolutely necessary and definitely shouldn't be used as a general identifier, or similar. So, are there any potential legal issues about this sort of thing? Do you have any recommendations? A: The baseline recommendation would be to: * *Inform the user that you are storing their SSN before they use your site/application. Since the request appears to be to collect the information after the fact, the users should have a way to opt out of your system before they log in or before they put in their SSN *Issue a legal guarantee that you will not provide, sell, or otherwise distribute the above information (along with their other personal information of course) *Have them check a checkbox stating that they understand that you really are storing their SSNs but the most important part would probably be: * *Hire a lawyer well-versed with legal matters over the web A: Funny thing about SSNs... the law that created them, also clearly defined what they may be used for (basically tax records, retirement benefits, etc.) and what they are not allowed to be used for - everything else. 
So the fact that the bank requires your SSN to open a checking account, your ISP asks for it for high speed internet access, airlines demand it before allowing you on a plane, your local grocery/pub keeps a tab stored by your SSN - that is all illegal. Shocking, isn't it... All the hooha around identity theft, and how easy it is thanks to a single, unprotected "secret" that "uniquely" identifies you across the board (not to mention that it's sometimes used as authentication) - should never have been made possible. A: Some good warnings stated already here. I'll just add that speaking of SIN (Canada's Social Insurance Number) codes, I believe it's possible to have collisions between a SIN and an SSN (in other words the same number, but two different people/countries). It shouldn't be a surprise since these are separate codification systems, but I can somehow imagine someone doing data entry being inclined to stick a SIN into an SSN field and vice versa (think international students in college/university as one instance - I was told by a DBA friend that he saw this happen). A given information system may be designed to not allow duplicates, and either way, you can see why there might be confusion and data integrity issues (using an SSN column as a unique key? Hmm). A: Way too many organizations in the USA use SSNs as unique identifiers for people, despite the well-documented problems with them. Unless your application actually has something to do with government benefits, there's no good reason for you to store SSNs. Given that so many organizations (mis)use them to identify people for things like credit checks, you really need to be careful with them. With nothing more than someone's name, address, and SSN, it's pretty easy to get credit under their name, and steal their identity. The legal issues are along the lines of getting sued into oblivion for any leak of personal information that contains SSNs.
A: If it were me I'd avoid them like the plague, or figure out some very very secure way to store them. Additionally (not a legal expert by any extent but..) if you can put in writing somewhere that you are no way responsible if any of this gets out. A: At a minimum, you want to be sure that SSNs are never emailed without some protection. I think the built-in "password to open" in Excel is enough, legally. I think email is the weakest link, at least in my industry. Every now and then, there is a news item "Laptop Stolen: Thousands of SSNs Possibly Compromised." It's my great fear that it could be my laptop. I put all SSN containing files in a PGP-protected virtual drive. You do have good security on your database, don't you? If not, why not?
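If the requirement to store them really cannot be avoided, one common mitigation is to never keep the raw SSN next to the rest of the user record: store a keyed one-way token for matching/deduplication and only the last four digits for display. A minimal Python sketch of that idea (the key value here is a placeholder assumption; in practice it would come from a secrets manager, and full encryption-at-rest may still be required by applicable law):

```python
import hmac
import hashlib

# Hypothetical key for illustration only -- load it from a secrets
# manager in real code; never hard-code or check it into source control.
SECRET_KEY = b"example-key-do-not-use"

def _digits(ssn: str) -> str:
    """Normalize away formatting so '123-45-6789' and '123456789' match."""
    return "".join(ch for ch in ssn if ch.isdigit())

def ssn_token(ssn: str) -> str:
    """Keyed one-way token: allows matching without storing the raw SSN."""
    return hmac.new(SECRET_KEY, _digits(ssn).encode(), hashlib.sha256).hexdigest()

def mask_ssn(ssn: str) -> str:
    """Display form that keeps only the last four digits."""
    return "***-**-" + _digits(ssn)[-4:]

assert mask_ssn("123-45-6789") == "***-**-6789"
assert ssn_token("123-45-6789") == ssn_token("123456789")  # format-insensitive
```

This only addresses storage hygiene; it does not replace the legal review the accepted answer recommends.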
{ "language": "en", "url": "https://stackoverflow.com/questions/113526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Running SQL Server on the Web Server Is it good, bad, or indifferent to run SQL Server on your webserver? I'm using Server 2008 and SQL Server 2005, but I don't think that matters to this question. A: Larger shops would probably not consider this a best practice... However, if you aren't dealing with hundreds of requests per second, you're fine putting them both on one box. In fact, for small apps, you will see better performance on the back-end because data does not have to go across the wire. It's all about scale. Keep in mind that database servers eat memory. Here's one important lesson from the school of hard knocks: if you decide to run SQL Server 2005 on the same machine as your web server (and that is the setup you mentioned in your question), make sure you go into Sql Server Management Studio and do this: * *Right click on the server instance and click properties *Select 'memory' from the list on the left *Change 'maximum server memory' to something your server can sustain. If you don't do that, SQL Server will eventually take up all of your server's RAM and hang onto it indefinitely. This will cause your server to more or less sputter and die. If you are not aware of this, it can be very frustrating to troubleshoot. I've done this quite a few times. It's not something you would do if you had the infrastructure of a large corporation and it does not scale, but it's fine for a lot of things. A: It really comes down to how much work your webserver and your sql server are doing. Without more information I doubt you are going to get any helpful answers. A: If your web server is publicly accessible, this is a VERY bad idea from a security perspective. Although it makes a lot of things more difficult from a routing, firewall, ports, authentication, etc. perspective, separation is good. When you have your database server running on the web server, if your web server is compromised, then your sql server is, too. 
When you have them on separate boxes, you've raised the bar a little. There's still a lot more work to be done to secure your web server AND your database server, but why make it easier than it needs to be? A: I'd say it's best to run them on the same server until it becomes a problem. That way you'll save yourself some money and time upfront. Once the site becomes a success and requires some architectural changes it should have already paid for itself. Remember to back up :) A: For small sites, it doesn't make a bit of a difference. As the load grows, though, this scales really badly, and quicker than you think: * *Database servers are built on the premise they "own" the server. They trade memory for speed and they easily use all available RAM for internal caching. *Once resources start to be scarce, profiling becomes very difficult -- it is clear that IIS and SQL are both suffering, less clear where the bottleneck is. IIS needs CPU, SQL Server needs RAM or CPU etc etc *No matter how many layers you put in your code, it all runs on the same CPU, therefore a single layered application will run better in this context -- less overhead -- but it will not scale. *Security is really bad, usually you isolate SQL behind a firewall! If you can afford it, it's probably better to shell out a few bucks and get a second server, maybe using PostgreSQL. One IIS server and one PostgreSQL cost about as much as one IIS + SQL Server because of licensing costs... A: It will depend on the expected load of the server. For small sites, it is no problem at all (if correctly configured). For large sites, you might want to consider distributing the load over different servers: web server, file server, database server, etc. A: I've seen this issue over and over again. The right answer is to put SQL Server on one machine and IIS (web server) on the other.
Your money will go into the SQL Server machine because the right drive system and RAM must be purchased to support an efficient server, but the web server can be a much scaled-down & less expensive machine with just a mirrored drive set.
{ "language": "en", "url": "https://stackoverflow.com/questions/113531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is there a function in Python to split a string without ignoring the spaces? Is there a function in Python to split a string without ignoring the spaces in the resulting list? E.g: s="This is the string I want to split".split() gives me >>> s ['This', 'is', 'the', 'string', 'I', 'want', 'to', 'split'] I want something like ['This',' ','is',' ', 'the',' ','string', ' ', .....] A: >>> import re >>> re.split(r"(\s+)", "This is the string I want to split") ['This', ' ', 'is', ' ', 'the', ' ', 'string', ' ', 'I', ' ', 'want', ' ', 'to', ' ', 'split'] Using the capturing parentheses in re.split() causes the function to return the separators as well. A: I don't think there is a function in the standard library that does that by itself, but "partition" comes close. The best way is probably to use regular expressions (which is how I'd do this in any language!) import re print re.split(r"(\s+)", "Your string here") A: Silly answer just for the heck of it: mystring.replace(" ","! !").split("!") A: The hard part with what you're trying to do is that you aren't giving it a character to split on. split() explodes a string on the character you provide to it, and removes that character. Perhaps this may help: s = "String to split" mylist = [] for item in s.split(): mylist.append(item) mylist.append(' ') mylist = mylist[:-1] Messy, but it'll do the trick for you...
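A variation on the capturing re.split() answer above: re.findall() with an alternation of "run of non-spaces" and "run of spaces" produces the same alternating list, and it is lossless, so joining the pieces reconstructs the original string exactly:

```python
import re

s = "This is the string I want to split"
parts = re.findall(r"\S+|\s+", s)  # words and whitespace runs, in order

assert parts[:4] == ["This", " ", "is", " "]
assert "".join(parts) == s  # nothing is discarded, unlike plain split()
```

Unlike the manual loop in the last answer, this also preserves runs of multiple spaces and any leading/trailing whitespace.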
{ "language": "en", "url": "https://stackoverflow.com/questions/113534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How can I uninstall an application using PowerShell? Is there a simple way to hook into the standard 'Add or Remove Programs' functionality using PowerShell to uninstall an existing application? Or to check if the application is installed? A: function Uninstall-App { Write-Output "Uninstalling $($args[0])" foreach($obj in Get-ChildItem "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall") { $dname = $obj.GetValue("DisplayName") if ($dname -contains $args[0]) { $uninstString = $obj.GetValue("UninstallString") foreach ($line in $uninstString) { $found = $line -match '(\{.+\}).*' If ($found) { $appid = $matches[1] Write-Output $appid start-process "msiexec.exe" -arg "/X $appid /qb" -Wait } } } } } Call it this way: Uninstall-App "Autodesk Revit DB Link 2019" A: To add a little to this post, I needed to be able to remove software from multiple Servers. I used Jeff's answer to lead me to this: First I got a list of servers, I used an AD query, but you can provide the array of computer names however you want: $computers = @("computer1", "computer2", "computer3") Then I looped through them, adding the -computer parameter to the gwmi query: foreach($server in $computers){ $app = Get-WmiObject -Class Win32_Product -computer $server | Where-Object { $_.IdentifyingNumber -match "5A5F312145AE-0252130-432C34-9D89-1" } $app.Uninstall() } I used the IdentifyingNumber property to match against instead of name, just to be sure I was uninstalling the correct application. A: I found out that Win32_Product class is not recommended because it triggers repairs and is not query optimized. Source I found this post from Sitaram Pamarthi with a script to uninstall if you know the app guid. He also supplies another script to search for apps really fast here. 
Use like this: .\uninstall.ps1 -GUID {C9E7751E-88ED-36CF-B610-71A1D262E906} [cmdletbinding()] param ( [parameter(ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true)] [string]$ComputerName = $env:computername, [parameter(ValueFromPipeline=$true,ValueFromPipelineByPropertyName=$true,Mandatory=$true)] [string]$AppGUID ) try { $returnval = ([WMICLASS]"\\$computerName\ROOT\CIMV2:win32_process").Create("msiexec `/x$AppGUID `/norestart `/qn") } catch { write-error "Failed to trigger the uninstallation. Review the error message" $_ exit } switch ($($returnval.returnvalue)){ 0 { "Uninstallation command triggered successfully" } 2 { "You don't have sufficient permissions to trigger the command on $Computer" } 3 { "You don't have sufficient permissions to trigger the command on $Computer" } 8 { "An unknown error has occurred" } 9 { "Path Not Found" } 21 { "Invalid Parameter"} }
{ $_ -match "SOFTWARE NAME" } | select UninstallString if ($uninstall64) { $uninstall64 = $uninstall64.UninstallString -Replace "msiexec.exe","" -Replace "/I","" -Replace "/X","" $uninstall64 = $uninstall64.Trim() Write "Uninstalling..." start-process "msiexec.exe" -arg "/X $uninstall64 /qb" -Wait} if ($uninstall32) { $uninstall32 = $uninstall32.UninstallString -Replace "msiexec.exe","" -Replace "/I","" -Replace "/X","" $uninstall32 = $uninstall32.Trim() Write "Uninstalling..." start-process "msiexec.exe" -arg "/X $uninstall32 /qb" -Wait} A: Here is the PowerShell script using msiexec: echo "Getting product code" $ProductCode = Get-WmiObject win32_product -Filter "Name='Name of my Software in Add Remove Program Window'" | Select-Object -Expand IdentifyingNumber echo "removing Product" # Out-Null argument is just for keeping the power shell command window waiting for msiexec command to finish else it moves to execute the next echo command & msiexec /x $ProductCode | Out-Null echo "uninstallation finished" A: To fix up the second method in Jeff Hillman's post, you could either do a: $app = Get-WmiObject -Query "SELECT * FROM Win32_Product WHERE Name = 'Software Name'" Or $app = Get-WmiObject -Class Win32_Product ` -Filter "Name = 'Software Name'" A: I will make my own little contribution. I needed to remove a list of packages from the same computer. This is the script I came up with. $packages = @("package1", "package2", "package3") foreach($package in $packages){ $app = Get-WmiObject -Class Win32_Product | Where-Object { $_.Name -match "$package" } $app.Uninstall() } I hope this proves to be useful. Note that I owe David Stetler the credit for this script since it is based on his. 
A: Based on Jeff Hillman's answer: Here's a function you can just add to your profile.ps1 or define in current PowerShell session: # Uninstall a Windows program function uninstall($programName) { $app = Get-WmiObject -Class Win32_Product -Filter ("Name = '" + $programName + "'") if($app -ne $null) { $app.Uninstall() } else { echo ("Could not find program '" + $programName + "'") } } Let's say you wanted to uninstall Notepad++. Just type this into PowerShell: > uninstall("notepad++") Just be aware that Get-WmiObject can take some time, so be patient! A: $app = Get-WmiObject -Class Win32_Product | Where-Object { $_.Name -match "Software Name" } $app.Uninstall() Edit: Rob found another way to do it with the Filter parameter: $app = Get-WmiObject -Class Win32_Product ` -Filter "Name = 'Software Name'" A: One line of code: get-package *notepad* |% { & $_.Meta.Attributes["UninstallString"]} A: Use: function remove-HSsoftware{ [cmdletbinding()] param( [parameter(Mandatory=$true, ValuefromPipeline = $true, HelpMessage="IdentifyingNumber can be retrieved with `"get-wmiobject -class win32_product`"")] [ValidatePattern('{[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12}}')] [string[]]$ids, [parameter(Mandatory=$false, ValuefromPipeline=$true, ValueFromPipelineByPropertyName=$true, HelpMessage="Computer name or IP adress to query via WMI")] [Alias('hostname,CN,computername')] [string[]]$computers ) begin {} process{ if($computers -eq $null){ $computers = Get-ADComputer -Filter * | Select dnshostname |%{$_.dnshostname} } foreach($computer in $computers){ foreach($id in $ids){ write-host "Trying to uninstall sofware with ID ", "$id", "from computer ", "$computer" $app = Get-WmiObject -class Win32_Product -Computername "$computer" -Filter "IdentifyingNumber = '$id'" $app | Remove-WmiObject } } } end{}} remove-hssoftware -ids "{8C299CF3-E529-414E-AKD8-68C23BA4CBE8}","{5A9C53A5-FF48-497D-AB86-1F6418B569B9}","{62092246-CFA2-4452-BEDB-62AC4BCE6C26}" It's not 
fully tested, but it ran under PowerShell 4. I've run the PS1 file as it is seen here, letting it retrieve all the systems from the AD and trying to uninstall multiple applications on all systems. I've used the IdentifyingNumber to search for the software because of David Stetler's input. Not tested: * *Not adding ids to the call of the function in the script, instead starting the script with parameter IDs *Calling the script with more than 1 computer name not automatically retrieved from the function *Retrieving data from the pipe *Using IP addresses to connect to the system What it does not: * *It doesn't give any information about whether the software was actually found on any given system. *It does not give any information about failure or success of the deinstallation. I wasn't able to use uninstall(). Trying that I got an error telling me that calling a method for an expression that has a value of NULL is not possible. Instead I used Remove-WmiObject, which seems to accomplish the same. CAUTION: Without a computer name given it removes the software from ALL systems in the Active Directory. A: For most of my programs the scripts in this post did the job. But I had to face a legacy program that I couldn't remove using msiexec.exe or the Win32_Product class. (For some reason I got exit 0 but the program was still there.) My solution was to use the Win32_Process class: with help from nickdnk, this command gets the uninstall exe file path: 64bit: [array]$unInstallPathReg= gci "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall" | foreach { gp $_.PSPath } | ? { $_ -match $programName } | select UninstallString 32bit: [array]$unInstallPathReg= gci "HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall" | foreach { gp $_.PSPath } | ?
{ $_ -match $programName } | select UninstallString You will have to clean up the result string: $uninstallPath = $unInstallPathReg[0].UninstallString $uninstallPath = $uninstallPath -Replace "msiexec.exe","" -Replace "/I","" -Replace "/X","" $uninstallPath = $uninstallPath.Trim() Now that you have the relevant program's uninstall exe file path, you can use this command: $uninstallResult = (Get-WMIObject -List -Verbose | Where-Object {$_.Name -eq "Win32_Process"}).InvokeMethod("Create","$unInstallPath") $uninstallResult will hold the exit code; 0 is success. The above commands can also run remotely - I did it using Invoke-Command, but I believe that adding the -ComputerName argument can also work. A: For MSI installs, "uninstall-package whatever" works fine. For non-MSI installs (Programs provider), it takes more string parsing. This should also take into account if the uninstall exe is in a path with spaces and is double quoted. Install-Package works with MSIs as well. $uninstall = get-package whatever | % { $_.metadata['uninstallstring'] } # split quoted and unquoted things on whitespace $prog, $myargs = $uninstall | select-string '("[^"]*"|\S)+' -AllMatches | % matches | % value $prog = $prog -replace '"',$null # call & operator doesn't like quotes $silentoption = '/S' $myargs += $silentoption # add whatever silent uninstall option & $prog $myargs # run uninstaller silently Start-Process doesn't mind the double quotes, if you need to wait anyway: # "C:\Program Files (x86)\myapp\unins000.exe" get-package myapp | foreach { start -wait $_.metadata['uninstallstring'] /SILENT } A: On more recent Windows systems, you can use the following to uninstall MSI-installed software. You can also check $pkg.ProviderName -EQ "msi" if you like. $pkg = get-package *name* $prodCode = "{" + $pkg.TagId + "}" msiexec.exe /X $prodCode /passive
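The quoted-vs-unquoted tokenising that the Get-Package answer performs can be sketched outside PowerShell too. Here is a small Python illustration with a close analogue of that regex; the sample uninstall string is hypothetical:

```python
import re

# A typical registry UninstallString: quoted exe path followed by switches
uninstall = '"C:\\Program Files (x86)\\myapp\\unins000.exe" /SILENT'

# Quoted runs stay together; everything else splits on whitespace
parts = re.findall(r'"[^"]*"|\S+', uninstall)
prog = parts[0].strip('"')  # the call operator (or subprocess) dislikes quotes
args = parts[1:]
```

The same split keeps spaces inside the quoted path intact while still separating the trailing switches.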
{ "language": "en", "url": "https://stackoverflow.com/questions/113542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "144" }
Q: Role Based Access Control Is there any open-source, PHP-based, role-based access control system that can be used for CodeIgniter? A: Maybe I'm misunderstanding the question, but isn't the whole point of Role-Based Access Control (RBAC) to avoid Access Control Lists (ACLs)? RBAC differs from access control lists (ACLs) (...) in that it assigns permissions to specific operations with meaning in the organization, rather than to low-level data objects. For example, an access control list could be used to grant or deny write access to a particular system file, but it would not say in what ways that file could be changed. In an RBAC-based system, an operation might be to create a 'credit account' transaction in a financial application (...). The assignment of permission to perform a particular operation is meaningful because the operations are fine-grained and themselves have meaning within the application. (Quote: Wikipedia) I don't know the specifics on Zend_ACL or the other implementations mentioned, but if they are ACL-based, I would not recommend using them for role-based authorization. A: I created an Open Source project called PHP-Bouncer which may be of interest to you. It's still fairly young, but works well and is easy to configure. I ended up developing it because none of the existing solutions seemed to meet my needs. I hope this helps! A: Brandon Savage gave a presentation on his PHP package "ApplicationACL" that may or may not accomplish role-based access. PHPGACL might work as well, but I can't tell you for sure. What I can tell you, however, is that the Zend_ACL component of the Zend Framework will do role-based setups (however you'll have to subclass to check multiple roles at once). Granted, the pain of this is that you'll have to pull Zend_ACL out of the monolithic download (or SVN checkout); I do not believe it has any external dependencies. The nice thing about Zend_ACL, though, is that it's storage agnostic.
You can either rebuild it every time or it's designed to be serialized (I use a combination of both, serialize for the cache and rebuild from the DB). A: phpgacl (http://phpgacl.sourceforge.net/) is a generic ACL-based access control framework. While I don't know about any CI-specific implementation, I know that you only need the main class file to make phpgacl work, so I believe that integration with CI won't be any problem. (I've worked with CI in passing.) A: Here are two RBAC libraries for PHP I found: * *https://github.com/leighmacdonald/php_rbac *https://github.com/brandonlamb/php-rbac I actually used the first one in PolyAuth: https://github.com/Polycademy/PolyAuth/ It's a full-featured auth library that includes NIST level 1 RBAC. And yes, RBAC is not the same as an ACL. I use CodeIgniter as well; all you have to do is use the PDO driver and pass in the connection id. See this tutorial for how to do that: http://codebyjeff.com/blog/2013/03/codeigniter-with-pdo A: Found out about Khaos ACL which is a CI library... I'm also checking out phpgacl and how to use it for CI... Haven't checked Zend ACL yet. But maybe it can be "ported" to CI. A: Try the DX_Auth plugin for CodeIgniter. I am working on a similar set (rather, a superset) of the functions that DX_Auth has. My set of CI addons includes display of menus (that can be controlled via CSS), role-based access control before the controller is invoked, and other features. I hope to publish it soon, and will give the project URL when I do so. A: RBAC != ACL - Roland has the only correct answer for this question. BTW, of course it is an essential part of a framework to implement any kind of permission system - at least, there is no point in using a framework if it does not give you a well-engineered RBAC system; it might be better then to use a simple template system with an ORM layer.
It is a common antipattern in the PHP world that frameworks like Ruby or Django are "cloned" only as a subset of what these modern frameworks deliver - as a typical syndrome you see a lack of good ACL or RBAC integration into these frameworks - which essentially is a joke. There is currently only the Yii PHP Framework that comes with a decent RBAC implementation. A: http://www.jframework.info (dead link) jFramework has a standard NIST level 2 RBAC with enhancements which is said to be the fastest available (benchmarks included). It can operate on a single SQLite database file and is tested thoroughly; it works like a glove. It has a dependency on the jFramework DBAL, but you can simply replace the DBAL SQL queries in the code with your desired DBAL, and of course you can use jFramework in a SOP manner. A: I know the trail is cold, but a new project has popped up: PHP-RBAC is a hierarchical NIST Level 2 standard role-based access control implementation for PHP and is pretty mature. It is also an OWASP project. I hope you enjoy it at http://phprbac.net A: The Ion Auth library uses users and groups - https://github.com/benedmunds/CodeIgniter-Ion-Auth - but there is no working RBAC system to use and manage them. You can, however, write your own functions.
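The distinction the Wikipedia quote draws - permissions attached to named business operations rather than to low-level objects - can be made concrete with a tiny sketch. This is a language-neutral illustration in Python, not any of the PHP libraries mentioned above; the role and operation names are invented:

```python
# Roles map to named operations with business meaning,
# not to read/write bits on individual objects (as an ACL would).
ROLE_PERMISSIONS = {
    'teller':  {'create_credit_account', 'view_balance'},
    'auditor': {'view_balance', 'export_report'},
}

def can(role, operation):
    """Return True if the given role is allowed to perform the operation."""
    return operation in ROLE_PERMISSIONS.get(role, set())
```

A real RBAC system adds user-to-role assignment and (at NIST level 2) role hierarchies on top of this core check, but the permission lookup stays operation-centric.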
{ "language": "en", "url": "https://stackoverflow.com/questions/113543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: iPhone development on Windows Possible Duplicate: How can I develop for iPhone using a Windows development machine? Is there a way to develop iPhone (iOS) applications on Windows? I really don't want to get yet another machine. There is a project on http://code.google.com/p/winchain/wiki/HowToUse that seemed to work with iPhone 1.0, but had limited success with iPhone 2.0, plus it requires all the Cygwin insanity. Is there anything else, or do I have to buy a Mac?
{ "language": "en", "url": "https://stackoverflow.com/questions/113547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "228" }
Q: What is the best Linux distribution for VMware Server? In terms of web server use and a small installed package size. A: To be honest, the best distro for VMware is the one the admin has the most experience with. With the GUI stuff all disabled I've not found any difference in performance between Red Hat, CentOS and Ubuntu when running VMware. Picking the distro that you can administer most easily will save you hassle. If you already have a few Linux systems, using the same flavour makes the admin's job a lot easier. A: It is not clear to me if you are asking about the distro for the VMware host, or for the guest operating system that will be your web server. I generally really like Debian or Debian-based distributions. But as far as VMware is concerned, CentOS or anything really should work. If you are looking at setting up many VMs on this server you might want to look at using the bare-metal hypervisor product that has been released as a free product. (VMware ESX)
{ "language": "en", "url": "https://stackoverflow.com/questions/113561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: Why Re-throw Exceptions? I've seen the following code many times: try { ... // some code } catch (Exception ex) { ... // Do something throw new CustomException(ex); // or // throw; // or // throw ex; } Can you please explain the purpose of re-throwing an exception? Is it following a pattern/best practice in exception handling? (I've read somewhere that it's called "Caller Inform" pattern?) A: Rethrowing the same exception is useful if you want to, say, log the exception, but not handle it. Throwing a new exception that wraps the caught exception is good for abstraction. e.g., your library uses a third-party library that throws an exception that the clients of your library shouldn't know about. In that case, you wrap it into an exception type more native to your library, and throw that instead. A: Actually there is a difference between throw new CustomException(ex); and throw; The second will preserve the stack information. But sometimes you want to make the Exception more "friendly" to your application domain, instead of letting the DatabaseException reach your GUI, you'll raise your custom exception which contains the original exception. For instance: try { } catch (SqlException ex) { switch (ex.Number) { case 17: case 4060: case 18456: throw new InvalidDatabaseConnectionException("The database does not exists or cannot be reached using the supplied connection settings.", ex); case 547: throw new CouldNotDeleteException("There is a another object still using this object, therefore it cannot be deleted.", ex); default: throw new UnexpectedDatabaseErrorException("There was an unexpected error from the database.", ex); } } A: I can think of the following reasons: * *Keeping the set of thrown exception types fixed, as part of the API, so that the callers only have to worry about the fixed set of exceptions. In Java, you are practically forced to do that, because of the checked exceptions mechanism. *Adding some context information to the exception. 
For example, instead of letting the bare "record not found" pass through from the DB, you might want to catch it and add "... while processing order no XXX, looking for product YYY". *Doing some cleanup - closing files, rolling back transactions, freeing some handles. A: Sometimes you want to hide the implementation details of a method or improve the level of abstraction of a problem so that it’s more meaningful to the caller of a method. To do this, you can intercept the original exception and substitute a custom exception that’s better suited for explaining the problem. Take for example a method that loads the requested user’s details from a text file. The method assumes that a text file exists named with the user’s ID and a suffix of “.data”. When that file doesn’t actually exist, it doesn’t make much sense to throw a FileNotFoundException because the fact that each user’s details are stored in a text file is an implementation detail internal to the method. So this method could instead wrap the original exception in a custom exception with an explanatory message. Unlike the code you've shown, best practice is that the original exception should be kept by loading it as the InnerException property of your new exception. This means that a developer can still analyze the underlying problem if necessary. When you're creating a custom exception, here's a useful checklist: • Find a good name that conveys why the exception was thrown and make sure that the name ends with the word “Exception”. • Ensure that you implement the three standard exception constructors. • Ensure that you mark your exception with the Serializable attribute. • Ensure that you implement the deserialization constructor. • Add any custom exception properties that might help developers to understand and handle your exception better. • If you add any custom properties, make sure that you implement and override GetObjectData to serialize your custom properties.
• If you add any custom properties, override the Message property so that you can add your properties to the standard exception message. • Remember to attach the original exception using the InnerException property of your custom exception. A: You typically catch and re-throw for one of two reasons, depending on where the code sits architecturally within an application. At the core of an application you typically catch and re-throw to translate an exception into something more meaningful. For example if you're writing a data access layer and using custom error codes with SQL Server, you might translate SqlException into things like ObjectNotFoundException. This is useful because (a) it makes it easier for callers to handle specific types of exception, and (b) it prevents implementation details of that layer, such as the fact you're using SQL Server for persistence, leaking into other layers, which allows you to change things in the future more easily. At boundaries of applications it's common to catch and re-throw without translating an exception so that you can log details of it, aiding in debugging and diagnosing live issues. Ideally you want to publish the error somewhere that the operations team can easily monitor (e.g. the event log) as well as somewhere that gives context around where the exception happened in the control flow for developers (typically tracing). A: Generally the "Do Something" either involves better explaining the exception (for instance, wrapping it in another exception), or tracing information through a certain source. Another possibility is that the exception type alone is not enough information to know if an exception needs to be caught, in which case catching and examining it will provide more information.
This is not to say that this method is used for purely good reasons; many times it is used when a developer thinks tracing information may be needed at some future point, in which case you get the try {} catch {throw;} style, which is not helpful at all. A: I think it depends on what you are trying to do with the exception. One good reason would be to log the error first in the catch, and then throw it up to the UI to generate a friendly error message with the option to see a more "advanced/detailed" view of the error, which contains the original error. Another approach is a "retry" approach, e.g., an error count is kept, and only after a certain number of retries is the error sent up the stack (this is sometimes done for database calls that time out, or when accessing web services over slow networks). There will be a bunch of other reasons to do it though. A: FYI, this is a related question about each type of re-throw: Performance Considerations for throwing Exceptions My question focuses on "Why" we re-throw exceptions and its usage in application exception handling strategy. A: Until I started using the EntLib ExceptionBlock, I was using them to log errors before throwing them. Kind of nasty when you think I could have handled them at that point, but at the time it was better to have them fail nastily in UAT (after logging them) rather than cover a flow-on bug. A: The application will most probably be catching those re-thrown exceptions higher up the call stack, so re-throwing them allows that higher-up handler to intercept and process them as appropriate. It is quite common for an application to have a top-level exception handler that logs or reports the exceptions. Another alternative is that the coder was lazy and, instead of catching just the set of exceptions they want to handle, they have caught everything and then re-thrown only the ones they cannot actually handle.
A: As Rafal mentioned, sometimes this is done to convert a checked exception to an unchecked exception, or to a checked exception that's more suitable for an API. There is an example here: http://radio-weblogs.com/0122027/stories/2003/04/01/JavasCheckedExceptionsWereAMistake.html A: If you look at exceptions as an alternative way to get a method result, then re-throwing an exception is like wrapping your result into some other object. And this is a common operation in a non-exceptional world. Usually this happens on the border of two application layers - when a function from layer B calls a function from layer C, it transforms C's result into B's internal form. A -- calls --> B -- calls --> C If it doesn't, then at layer A, which calls layer B, there will be a full set of JDK exceptions to handle. :-) As also the accepted answer points out, layer A might not even be aware of C's exception. Example Layer A, servlet: retrieves an image and its meta information Layer B, JPEG library: collects available DCIM tags to parse a JPEG file Layer C, a simple DB: a class reading string records from a random access file. Some bytes are broken, so it throws an exception saying "can't read UTF-8 string for record 'bibliographicCitation'". So A won't understand the meaning of 'bibliographicCitation'. So B should translate this exception for A into TagsReadingException, which wraps the original. A: THE MAIN REASON for re-throwing exceptions is to leave the call stack untouched, so you can get a more complete picture of what happened and the sequence of calls.
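The wrap-and-rethrow pattern described in these answers is not C#-specific. As a hedged illustration, here is the same idea in Python, where the wrapped original stays attached via `__cause__` (the exception and method names are invented for the example):

```python
class CouldNotDeleteException(Exception):
    """Domain-level exception; a stand-in for the custom exceptions above."""

def delete_record():
    try:
        # Stand-in for a low-level error (e.g. a foreign-key violation)
        raise KeyError("row still referenced")
    except KeyError as ex:
        # Wrap and re-throw: callers see a meaningful, domain-level
        # exception, while the original stays attached for analysis,
        # just like InnerException in the C# examples.
        raise CouldNotDeleteException(
            "Another object still uses this record") from ex
```

A caller can catch the domain exception and still inspect the underlying cause when debugging.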
{ "language": "en", "url": "https://stackoverflow.com/questions/113565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: How to read a barcode from an image I'm seeking a library, technique or advice on how to read an EAN-13 barcode from an image (including ISBN and ISSN encodings). The image would come from a mobile phone or webcam, so resolution may be quite poor and the barcode may not be well aligned. I'm specifically interested in something that could be used from Ruby on Rails, but answers for other languages are welcome. Open Source solutions preferred. Leading solutions to date: * *ZBar (previously known as Zebra - h/t @bgbg, @Natim) - implemented in C with interfaces for Python, Perl, and C++ *ZXing (h/t @codr) - implemented in Java (J2SE and Android) with other modules/ports in varying states of development (JavaME, C#, C++, JRuby, RIM, iPhone/Objective C) A: The zebra barcode reader (http://zebra.sourceforge.net/) is a small, layered bar code scanning and decoding library implemented in C (C++ wrappers are also provided). It supports many popular symbologies (types of barcodes), including EAN-13. However, I'm not aware of any Ruby bindings. The library is available under the GPL. A: This project might be what you're looking for: ZXing A: You might want to try this if it's to allow your site's visitors to scan stuff, I think it's embeddable in your own site, but I've never used it: http://en.barcodepedia.com/ A: We use the Softek library. Very pleased with the results.
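Whichever library does the image decoding, the EAN-13 check digit is easy to verify afterwards and catches many misreads from poor-quality images. A short Python sketch of the standard checksum (weights alternate 1 and 3 over the first twelve digits):

```python
def ean13_valid(code):
    """Check the EAN-13 check digit (last digit) of a 13-digit string."""
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    # Odd positions weigh 1, even positions weigh 3 (1-based numbering)
    checksum = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    return (10 - checksum % 10) % 10 == digits[12]
```

For ISBNs this works unchanged, since ISBN-13 is an EAN-13 code in the 978/979 prefix range.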
{ "language": "en", "url": "https://stackoverflow.com/questions/113571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Attaching Informix .dat and .idx files We are trying to duplicate one of our Informix databases on a test server, but without Informix expertise in house we can only guess what we need to do. I am learning this stuff on the fly myself and am nowhere near the expertise level needed to operate Informix efficiently or even inefficiently. Anyhow... We managed to copy the .dat and .idx files from the live server somewhere. Installed Linux and the latest Informix Dynamic Server on it and have it up and running. Now what should we do with the .dat and .idx files from the live server? Do we copy them somewhere so they will be recognized automatically? Or is there something equivalent to MS SQL Server's attach database to register the database files in the new database? At my rope's end... A: You've asked a pretty complicated question without realizing it. Informix is architected as a shared-everything database engine, meaning all resources available to the instance are available to every database in that instance. This means that more than one database can store data in any given dbspace, .dat or .idx file in your case. Most DBAs know better than to do that but it's something to be aware of. Given this knowledge you now know that the .dat and .idx files do not belong to a database but belong to the instance. The dbspaces and files were created to contain your database's data but they technically belong to the instance. It's worth noting that the .dat and .idx files are known to the database by the logical dbspace name. Armed with this background info and assuming that the production and development servers are running the same OS and that your hardware is relatively the same, not a combination of PARISC, Itanium or x86/x64, I'll throw a couple of options out for you. * *Create the dbspaces that you need in the new instance and use onunload and onload to copy the database from production to development.
*Use ontape or onbar to backup the entire production instance and restore it over your development instance. Option 1 requires that you know what the dbspaces are named and how large they are. Use onstat -d on the production instance to find this out. BTW, the numbers listed in onstat -d are in pages, I believe that Linux is a 2K page. Option 2 simply requires that the paths for the data files are the same on both servers. This means that the ROOTDBS needs to be the same in both instances. That can be found by executing onstat -c | grep ROOTDBS There's a lot that has been left out but I hope that this gives you the info that you need to move forward with your task. A: The .dat and .idx files are associated with C-ISAM, or, when organized in a directory called dbase.dbs (where dbase is the name of your database), the .dat and .idx files are associated with Informix Standard Engine, aka Informix SE. SE uses C-ISAM to manage its storage. SE is rather different from (and much simpler than) Informix Dynamic Server (IDS). It is not impossible that the .dat and .idx files are associated with IDS; it is just extremely unlikely. From the information available, it sounds as though your production server is running SE. To get the data from SE to IDS, you will probably want to use DB-Export at the SE end and DB-Import at the Linux/IDS end. Certainly, that is the simplest way to do it. There are other possible solutions - C-ISAM datablade being one such - but they are more expensive and probably not warranted. There are other possible loading solutions, such as HPL (High-Performance Loader). For more information about Informix, either use the various web sites already referenced (http://www.informix.com is a link to the Informix section of IBM's web site), or use the International Informix User Group (IIUG) web site. There are mailing lists available (which require you to belong, but membership is free) for discussing Informix in detail. 
A: Those Informix-SE datafiles (.DAT) and their associated index files (.IDX) are useless unless you also have all the associated catalog files, such as SYSTABLES.DAT, SYSTABLES.IDX, SYSCOLUMNS, SYSINDEXES, etc. Then you also have to worry about which version of Informix-SE created them, as some have a 2K or 4K index file node size. Your best approach is to obtain all the .DAT and .IDX files from the source DB, plus the correct standard engine, installed on the same hardware and operating system it came from. Long story short, on the source machine, run "dbexport" to unload all the data to ASCII files, and run "dbschema" to generate all the table schemas and indexes. It also wouldn't hurt to run a "bcheck" on all the files before unloading them to ASCII flat files. A: I don't have any Informix-specific advice but for situations like this you can usually find the answer by looking up how to move a database (a common admin task, and usually well described in the manual) and just skipping the steps that would remove the old database. Also, be careful of problems caused by different system architectures; some DBs fail spectacularly if you move them from a big-endian system (such as Solaris) to a little-endian system (such as x86 Linux) Again, the manual section on moving a DB would cover any extra steps that are needed.
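The endianness warning at the end applies to any binary data file, not just Informix. A quick Python illustration of why the same integer has different on-disk bytes on little-endian (x86) and big-endian (e.g. SPARC) hardware:

```python
import struct

value = 258  # 0x0102
little = struct.pack('<i', value)  # little-endian, as x86/x64 writes it
big = struct.pack('>i', value)     # big-endian, as e.g. SPARC writes it
# The raw bytes differ, so binary database files written on one
# architecture cannot simply be copied to the other; export to a
# text format (like dbexport's ASCII files) and reload instead.
```

Both byte strings decode back to the same value only when read with the matching byte order.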
{ "language": "en", "url": "https://stackoverflow.com/questions/113582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Getting user name/password of the logged in user in Windows Is there any API to get the currently logged in user's name and password in Windows? Thank you in advance. A: For the many commenters who believe it is not possible to reveal the password of the currently logged-in user, see Dump cleartext passwords of logged in user(s) which shows how to use mimikatz to do just that: mimikatz # privilege::debug Demande d'ACTIVATION du privilège : SeDebugPrivilege : OK mimikatz # sekurlsa::logonPasswords full ... Utilisateur principal : user Domaine d'authentification : domain kerberos : * Utilisateur : user * Domaine : domain * Mot de passe : pass A: I'd consider it a huge security flaw if that were possible! A: You can't get the password of a user since it's encrypted (not to mention that it's a standard practice not to store passwords in plaintext). For getting the username, you can use GetUserName or NPGetUser A: Not sure how it is done, but the "Network Password Recovery" tool from http://www.nirsoft.net/utils/network_password_recovery.html seems to get the password from some cache. A: Password: No, this is not retained for security reasons - it's used, then discarded. You could retrieve the encrypted password for this user from the registry, given sufficient privileges, then decrypt it using something like rainbow tables, but that's extremely resource intensive and time consuming using current methods. Much better to prompt the user. Alternatively, if you want to implement some sort of 'single sign-on' system as Novell does, you should do it via either a GINA (pre-Vista) or a Credential Provider (Vista), which will result in your code being given the username and password at login, the only time at which the password is available. For username, getting the current username (the one who is running your code) is easy: the GetUserName function in AdvApi32.dll does exactly this for you.
If you're running as a service, you need to remember there is no one "logged in user": there are several at any time, such as LocalSystem, NetworkService, SYSTEM and other accounts, in addition to any actual people. This article provides some sample code and documentation for doing that. A: GetUserName will get you the name, but the password you can't get. It's not even something Windows stores, AFAIK - only a hash of your password. Depending on what you're trying to achieve (you can tell us a bit more..) it's possible to impersonate a logged on user and do stuff on his/her behalf. A: Full details of Authentication in the Windows API can be found on MSDN: http://msdn.microsoft.com/en-us/library/aa374735(VS.85).aspx A: I don't know about the Windows login password... but you can definitely pull plaintext passwords from the Credentials Manager. For example here is a program to pull the password for TFS. In most cases, this is the same as the Windows login. namespace ShowPassword { using Microsoft.TeamFoundation.Client; using System; using System.Net; class Program { static void Main(string[] args) { var tpc = new TfsTeamProjectCollection(new Uri("http://mycompany.com/tfs")); var nc = tpc.Credentials as NetworkCredential; Console.WriteLine("the password is " + nc.Password); } } } I compiled this as a "console" app under VS 2015 with the NuGet package TeamFoundation ExtendedClient. A: You can get the user name with GetUserName(), but you cannot get the password; this would violate security for dummies 101. A: re the "Network Password Recovery" tool: Windows (up to XP) stores a copy of the password with a simpler, easy-to-break encryption - for connecting to older-style LAN Manager network shares. The tools generally try all possible passwords against this; using rainbow tables (precalculated encrypted versions of dictionary words) speeds this up. In XP SP2/SP3 and Vista this feature is removed.
The new encryption is much harder to crack and needs many hours to try all possible values; there are online services that will run it on a large number of machines to give you a quick answer for a price. To answer the original poster - you do not generally store the password and compare it with what the user typed in. You encrypt (actually hash) the entered password and store that. To check a password you perform the same encryption on whatever the user entered and compare that. It is generally impossible to go from the encrypted form back to the real password. EDIT: I suspect you are asking the wrong question here - why do you want the password, what are you trying to verify and when?
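The hash-and-compare scheme described above can be sketched in a few lines. This is a generic Python illustration (PBKDF2 with a random salt), not how Windows itself stores credentials:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Hash a password with a random salt; store (salt, digest), never the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, stored_digest):
    """Re-hash the entered password with the stored salt and compare."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)
```

Note that verification never recovers the original password; it only checks that the same input produces the same digest, which is exactly why "get the logged-in user's password" has no legitimate API.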
{ "language": "en", "url": "https://stackoverflow.com/questions/113592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Why is ENUM better than INT I just ran a "PROCEDURE ANALYSE ( )" on one of my tables. And I have this column that is of type INT and it only ever contains values from 0 to 12 (category IDs). And MySQL said that I would be better off with an ENUM('0','1','2',...,'12'). These categories are basically static and won't change in the future, but if they do I can just alter that column and add it to the ENUM list... So why is ENUM better in this case? edit: I'm mostly interested in the performance aspect of this... A: Because it introduces a constraint on the possible values. A: Put simply, it's because it's indexed in a different way. In this case, ENUM says "It's one of these 13 values" whereas INT is saying "It could be any integer." This means that indexing is easier, as it doesn't have to take into account indexing for those integers you don't use "just in case" you ever use them. It's all to do with the algorithms. I'd be interested myself, though, in the point at which INT becomes quicker than ENUM. Using numbers in an ENUM might be a little dangerous though... as if you send this number unquoted to SQL - you might end up getting the wrong value back! A: Yikes! There's a bunch of ambiguities with using numbers in an ENUM field. Be careful. The one gotcha I remember is that you can access values in ENUMs by index: if your enum is ENUM('A', 'B', 'C', '1', '2', '3'), then these two queries are very different: INSERT INTO TABLE (example_col) VALUES( '1' ); -- example_col == 1 INSERT INTO TABLE (example_col) VALUES( 1 ); -- example_col == A I'm assuming the recommendation is because it limits the valid values that can get into the table. For instance, inserting 13 should get the default choice. A better choice would be to use TINYINT instead of INT. An UNSIGNED TINYINT has a range of 0 to 255 and only takes 1 byte to store. An INT takes 4 bytes to store.
If you want to constrain the values getting into the table, you can add ON INSERT and ON UPDATE triggers that check the values. If you're worried about the performance difference between ENUM and TINYINT, you can always benchmark to see the difference. This article seems somewhat relevant. A: I'm not a MySQL expert, but my guess is that integers always take up four bytes of space whereas enums take up varying amounts of space based upon the range of data needed. Since you only need 13 items, it could get away with using 1 byte for your column. A: On Oracle I would have
{ "language": "en", "url": "https://stackoverflow.com/questions/113609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: jBPM, concurrent execution and process variables When a process in jBPM forks into concurrent paths, each of these paths gets its own copy of the process variables, so that they run isolated from each other. But what happens when the paths join again? Obviously there could be conflicting updates. Does the context revert back to the state before the fork? Can I choose to copy individual variables from the separate tracks? A: I think that you have to configure the Task Controllers of your tasks. In some cases it is enough to set the access attribute in a way that does not result in conflicts (e.g. read access for the first path and read,write access for the second path). If this is not the case then you can implement your own TaskControllerHandler and implement the method void submitTaskVariables(TaskInstance taskInstance, ContextInstance contextInstance, Token token) with your custom logic. Please see: Task Controllers. A: I tried a little experiment:
<fork name="fork1">
  <transition to="right" />
  <transition to="left" />
</fork>
<node name="left">
  <event type="node-enter">
    <script>
      <expression>
        left="left";
        shared = left;
      </expression>
      <variable name='left' access='write' />
      <variable name='shared' access='write' />
    </script>
  </event>
  <transition to="join" />
</node>
<node name="right">
  <event type="node-enter">
    <script>
      <expression>
        right="right";
        token.parent.processInstance.contextInstance.setVariable("fromRight", "woot!");
        shared = right;
      </expression>
      <variable name='right' access='write' />
      <variable name='shared' access='write' />
    </script>
  </event>
  <transition to="join" />
</node>
<join name="join">
  <transition to="done"></transition>
</join>
<end-state name="done"/>
At the end I had access to three variables: shared, right, and "fromRight", which was set by the script against the parent explicitly. The shared variable took its value from the right fork; changes made on the left seemed to disappear.
Note that the transitions aren't actually asynchronous for me, and the whole experiment will have run in one transaction; these factors may affect the outcome.
{ "language": "en", "url": "https://stackoverflow.com/questions/113626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Which CSS tag creates a box like this with title? I want to create a box like this with title: Can anyone please let me know if there is a default CSS tag to do this? Or do I need to create my own custom style? A: I believe you are looking for the fieldset HTML tag, which you can then style with CSS. E.g.,
<fieldset style="border: 1px black solid">
  <legend style="border: 1px black solid; margin-left: 1em; padding: 0.2em 0.8em">title</legend>
  Text within the box <br />
  Etc
</fieldset>
A: from http://www.pixy.cz/blogg/clanky/css-fieldsetandlabels.html
fieldset {
  border: 1px solid green;
}
legend {
  padding: 0.2em 0.5em;
  border: 1px solid green;
  color: green;
  font-size: 90%;
  text-align: right;
}
<form>
  <fieldset>
    <legend>Subscription info</legend>
    <label for="name">Username:</label> <input type="text" name="name" id="name" /> <br />
    <label for="mail">E-mail:</label> <input type="text" name="mail" id="mail" /> <br />
    <label for="address">Address:</label> <input type="text" name="address" id="address" size="40" />
  </fieldset>
</form>
A: This will give you what you want
<head>
  <title></title>
  <style type="text/css">
    legend { border: solid 1px; }
  </style>
</head>
<body>
  <fieldset>
    <legend>Test</legend>
    <br /><br />
  </fieldset>
</body>
A: As far as I know (correct me if I'm wrong!), there isn't. I'd recommend using a div with an h1 with a negative margin inside. Depending on the semantic structure of your document, you could also use a fieldset (HTML) with one legend (HTML) inside, which approximately looks like this by default.
A: If you are not using it in forms, and instead want to use it in a non-editable form, you can do this via the following code -
.title_box {
  border: #3c5a86 1px dotted;
}
.title_box #title {
  position: relative;
  top: -0.5em;
  margin-left: 1em;
  display: inline;
  background-color: white;
}
.title_box #content {}
<div class="title_box" id="bill_to">
  <div id="title">Bill To</div>
  <div id="content">
    Stuff goes here.<br>
    For example, a bill-to address
  </div>
</div>
A: You can try this out.
<fieldset class="fldset-class">
  <legend class="legend-class">Your Personal Information</legend>
  <table>
    <tr>
      <td><label>Name</label></td>
      <td><input type='text' name='name'></td>
    </tr>
    <tr>
      <td><label>Address</label></td>
      <td><input type='text' name='Address'></td>
    </tr>
    <tr>
      <td><label>City</label></td>
      <td><input type='text' name='City'></td>
    </tr>
  </table>
</fieldset>
DEMO
A: I think this example can also be useful to someone:
.fldset-class {
  border: 1px solid #0099dd;
  margin: 3pt;
  border-top: 15px solid #0099dd;
}
.legend-class {
  color: #0099dd;
}
<fieldset class="fldset-class">
  <legend class="legend-class">Your Personal Information</legend>
  <table>
    <tr>
      <td><label>Name</label></td>
      <td><input type='text' name='name'></td>
    </tr>
    <tr>
      <td><label>Address</label></td>
      <td><input type='text' name='Address'></td>
    </tr>
    <tr>
      <td><label>City</label></td>
      <td><input type='text' name='City'></td>
    </tr>
  </table>
</fieldset>
{ "language": "en", "url": "https://stackoverflow.com/questions/113640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: File format of CF10-jpg While working on a tool that allows to exchange images of several third-party applications and thus creating individual "skins" for those applications, I have stumbled across a jpg format about which I cannot seem to find any decent information. When looking at it in a hex editor, it starts with the tag "CF10". Searching the internet has only provided a tool that is able to handle these kinds of files, without any additional information. Does anyone have any further information about this type of jpg format? A: file(1) should give you some useful information. You can also use ImageMagick's identify(1) program (optionally with the -verbose option) to get even more details about the file. See the example on that page for a good idea of what information it provides. A: You could also try and see what the Droid identification tool says about that file. A: CF stands for "Compression Factor". CF-10 means factor ten, and I don't think it's different from any "standard" jpeg. A: DROID gives it as being a "JTIP (JPEG Tiled Image Pyramid)". Some info from http://www.bcr.org/cdp/digitaltb/digital_imaging/formats.html : JTIP (JPEG Tiled Image Pyramid) is similar to GridPrix. It offers multiple layers of higher and higher resolutions. Each layer is further divided into tiles. A user can zoom into these tiles, or request a corresponding tile at a higher resolution.
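For the skinning tool itself, a first step is simply to tell these files apart from ordinary JPEGs by their leading bytes. A minimal sketch in Python (assuming, as the hex editor showed, that the ASCII tag "CF10" sits at the very start of the file; the function names here are made up for illustration):

```python
def looks_like_cf10(path):
    """Return True if the file begins with the ASCII tag 'CF10'."""
    with open(path, "rb") as f:
        return f.read(4) == b"CF10"

def looks_like_plain_jpeg(path):
    """Return True if the file begins with the standard JPEG SOI marker FF D8."""
    with open(path, "rb") as f:
        return f.read(2) == b"\xff\xd8"
```

Ordinary JPEG files start with the SOI marker FF D8 instead, so the two checks together can route each file to the appropriate handler.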
{ "language": "en", "url": "https://stackoverflow.com/questions/113641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Property Grid Object failing on combo box selection but OK when combobox scrolled or double clicked I have a Property Grid in C#, loading up a 'PropertyAdapter' object (a basic wrapper around one of my objects displaying relevant properties with the appropriate tags). I have a TypeConverter on one of the properties (DataType, which returns an enumeration of possible values), as I want to limit the values available to the property grid to Decimal and Integer, with the 2 methods as follows:
public override bool GetStandardValuesSupported(ITypeDescriptorContext context)
{
    return true;
}
public override StandardValuesCollection GetStandardValues(ITypeDescriptorContext context)
{
    return new StandardValuesCollection(new List<Constants.DataTypes>()
    {
        Constants.DataTypes.Decimal,
        Constants.DataTypes.Integer
    });
}
This is displaying just as I want it on the property grid, and when I double click the property field in the property grid, it happily switches between Integer and Decimal. Similarly, I can use the mouse wheel to scroll through the options in the property field's combobox. If I however use the property field as a Combo Box and select a value from the drop-down, I get the standard property grid error box with the error: Object of type 'System.String' cannot be converted to type 'Pelion.PM3.Utils.Constants+DataTypes'. I am assuming I can use the Converter overrides in the Type converter to trap these and convert them to an Enum of DataTypes, but why would the property grid fail when I select from the drop-down instead of double clicking or 'mousewheeling' on the drop down? A: When selected from the combo box drop-down, the value is returned as a string. I am not sure why that is, but I've seen it happen before. I think that basically double clicking or scrolling the mousewheel changes values from the value collection, while selecting from the drop-down is like editing the field value as a string. Then, you have to convert the value from a string to the enum value.
{ "language": "en", "url": "https://stackoverflow.com/questions/113644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: VS 2003 CrystalReports - details section issue I have 2 detail sections on my report (details a and details b). Fields in both sections can grow up to 10 lines. How do I force the Crystal Report to print both sections on one page? Currently the report prints section "details a" at the bottom of the page, but section "details b" prints on the next page. How do I prevent this behavior? A: I found a solution. Add a new (empty) group section and mark Keep Together on this group. :) A: You do not need an extra group. You can set the Keep Together flag on the detail area (the node above the sections).
{ "language": "en", "url": "https://stackoverflow.com/questions/113645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there a function in python to split a word into a list? Is there a function in python to split a word into a list of single letters? e.g:
s = "Word to Split"
to get
wordlist = ['W', 'o', 'r', 'd', ' ', 't', 'o', ' ', 'S', 'p', 'l', 'i', 't']
A: Abuse of the rules, same result:
(x for x in 'Word to split')
Actually an iterator, not a list. But it's likely you won't really care.
A: >>> list("Word to Split")
['W', 'o', 'r', 'd', ' ', 't', 'o', ' ', 'S', 'p', 'l', 'i', 't']
A: The easiest way is probably just to use list(), but there is at least one other option as well:
s = "Word to Split"
wordlist = list(s)           # option 1
wordlist = [ch for ch in s]  # option 2, list comprehension
They should both give you what you need:
['W','o','r','d',' ','t','o',' ','S','p','l','i','t']
As stated, the first is likely the most preferable for your example, but there are use cases that may make the latter quite handy for more complex stuff, such as if you want to apply some arbitrary function to the items, as with:
[doSomethingWith(ch) for ch in s]
A: text = "just trying out"
word_list = []
for i in range(len(text)):
    word_list.append(text[i])
print(word_list)
Output:
['j', 'u', 's', 't', ' ', 't', 'r', 'y', 'i', 'n', 'g', ' ', 'o', 'u', 't']
A: The list function will do this
>>> list('foo')
['f', 'o', 'o']
A: The easiest option is to just use the list() command. However, if you don't want to use it or it does not work for some bizarre reason, you can always use this method.
word = 'foo'
splitWord = []
for letter in word:
    splitWord.append(letter)
print(splitWord)  # prints ['f', 'o', 'o']
A: def count():
    # renamed from 'list' to avoid shadowing the built-in list()
    letters = 'oixfjhibokxnjfklmhjpxesriktglanwekgfvnk'
    word_list = []
    for i in range(len(letters)):
        word_list.append(letters[i])
    # bubble sort the characters into order
    for i in range(len(word_list) - 1, 0, -1):
        for j in range(i):
            if word_list[j] > word_list[j + 1]:
                temp = word_list[j]
                word_list[j] = word_list[j + 1]
                word_list[j + 1] = temp
    print("final count of arrival of each letter is : \n",
          dict(map(lambda x: (x, word_list.count(x)), word_list)))
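As a quick sanity check on the answers above: list(), the comprehension, and the explicit loop all produce the same result, and ''.join() reverses the split:

```python
s = "Word to Split"

# The three approaches shown in the answers agree.
via_list = list(s)
via_comp = [ch for ch in s]
via_loop = []
for ch in s:
    via_loop.append(ch)

assert via_list == via_comp == via_loop
print(via_list[:4])  # ['W', 'o', 'r', 'd']

# ''.join() puts the letters back together again.
assert "".join(via_list) == s
```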
{ "language": "en", "url": "https://stackoverflow.com/questions/113655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "119" }
Q: Installing PowerShell on 600 client computers - Recommended settings I want to install PowerShell on 600 Windows XP computers, and use it as the main processing shell. For example, for replacing batch scripts, VB scripts, and some other little programs. The installation process is not a problem. Some issues I think I'm going to come across are: * *Changing permissions to allow PowerShell to run scripts *The speed of PowerShell starting *Using PowerShell for logon/logoff scripts with GPO Problem 2: There is a script that is supposed to speed up PowerShell, but it seems to need to be run as administrator (which of course isn't something that normal users do). Has anyone had any experience with using PowerShell in this way? A: To speed up the start of PowerShell, Jeffrey Snover (the partner/architect responsible for PowerShell) provides an "Update-GAC" script here. Basically, it just runs through the assemblies that are loaded for PowerShell and NGen's (pre-compiles the IL to machine code) them. This does speed up the start of PowerShell. Another trick is to run PowerShell with the -nologo and -noprofile switches. This will skip the profile scripts and splash logo. There is a product for using PowerShell for logon/logoff scripts from Special Operations Software. There are other ways to do it also. %windir%\system32\WindowsPowerShell\v1.0\powershell.exe -nologo -noprofile A: Changing permissions to allow PowerShell scripts is possible via Group Policy. Microsoft provides ADM templates here; there is only one option, "Turn on Script Execution", and it can be assigned at a user or computer level. A: It seems it's possible to run PowerShell silently, but not just by calling itself. This article has more information. So answering my own questions * *This can be done via GPOs *First run takes at least 10 seconds on our computers. This could add that time onto the logon time, which is unacceptable.
*This seems fairly simple to do, using the scripts above if invisibility is needed, or by calling the PowerShell exe and passing it startup options. On our computers, using PowerShell for logon seems not to be worthwhile just because of the logon time increase.
{ "language": "en", "url": "https://stackoverflow.com/questions/113664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Hiding toolbar / status bar with javascript in CURRENT browser window? Is there some way to hide the browser toolbar / status bar, etc. in the current window via JavaScript? I know I can do it in a popup with window.open() but I need to do it this way. Is it possible at all? A: I believe this is not possible. And anyway, just don't do it. Your page can do what it wants with the rendering area, but the rest of the browser belongs to the user and websites have no business messing with it. A: As per the previous answer, this isn't possible to my knowledge and is best avoided anyway. Even if a solution can be found, bear in mind that most browsers these days allow the user to prevent JavaScript from interfering with their browser settings and window chrome, even when using window.open. So you've got absolutely no way of guaranteeing the behaviour that you're looking for, and consequently you're best off forgetting about it altogether. Let the user decide how they want their window configured. A: Marijn: ok thanks. This is for an intranet site and we display InfoPath forms as separate, no-toolbar, no-statusbar windows. This is a client requirement, I'm not trying to do evil ;) A: To Martin Meredith, Luke, Marijn: thanks for your quick reply. It is now settled that it's not possible. I agree with you all about this being an undesirable behavior, but as I stated before, this is for a bank intranet application where all users are running a tightly controlled, centrally-configured, customized and hacked-to-death browser they have no control over anyway, and the client actually wants this behavior for the application. It would be dumb and annoying to do this in a public-facing/general website, of course. But sometimes we just have to get the job done :( A: No. This would be a massive security hole if it were possible... not to mention annoying. My browser won't even let you do this in popups... which can be annoying as well!
A: You may want to investigate using an HTA (HTML Application). It will render HTML pages with zero browser chrome, a custom icon can be shown on the task bar, and the entire "caption" can be removed. The last option yields a floating window without even a close button. For what I imagine your needs to be, you would want to start with something like:
<html>
  <head>
    <title>HTA Demonstration</title>
    <hta:application innerborder="no" icon="magnify.exe" />
  </head>
  <body style="overflow: hidden; margin: 0;">
    <iframe src="http://www.yahoo.com" style="width: 100%; height: 100%;"></iframe>
  </body>
</html>
Save the above HTML into a file and give it "example.hta" as the file name. You'll then have a generic icon on your desktop which you can double click on to start.
<hta:application innerborder="no" caption="no" icon="magnify.exe" />
{ "language": "en", "url": "https://stackoverflow.com/questions/113682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How not to repeat yourself across projects and/or languages I'm working on several distinct but related projects in different programming languages. Some of these projects need to parse filenames written by other projects, and expect a certain filename pattern. This pattern is now hardcoded in several places and in several languages, making it a maintenance bomb. It is fairly easy to define this pattern exactly once in a given project, but what are the techniques for defining it once and for all, for all projects and for all languages in use? A: Creating a Domain Specific Language, then compiling that into the code for each of the target languages you are using, would be the best (and most elegant) solution. It's not difficult to make a DSL - either embed it in something (like inside Ruby, since it's the 'in' thing right now, or another language like LISP/Haskell...), or create a grammar from scratch (use Antlr?). If the project is large, then this path is worth your while. A: I'd store the pattern in a simple text file and, depending on a particular project: * *Embed it in the source at build time (preprocessing) *If the above is not an option, treat it as a config file read at runtime Edit: I assume the pattern is something no more complicated than a regex, otherwise I'd go with the DSL solution from another answer. A: You could use a common script, process or web service for generating the file names (depending on your set-up). A: I don't know which languages you are speaking about, but most languages can use external dynamic libraries (DLLs/shared objects) and export common functionality from such a library. For example, you could implement a get-file-name function in a simple C library and use it across the rest of the languages. Another option would be to generate the common code dynamically as part of the build process for each language; this should not be too complex.
I would suggest using the dynamic-link approach if feasible (you did not give enough information to determine this), since maintaining this solution will be much easier than maintaining code generation for different languages. A: Put the pattern in a database - the easiest and most comfortable way could be to use an XML database. This database will be accessible by all the projects and they will read the pattern from there.
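The "simple text file read at runtime" suggestion can be sketched in a few lines. This assumes the shared pattern is a plain regex stored in a file; the file name pattern.txt and the DEVICE_... pattern below are made up for illustration, and each project, whatever its language, would read the same file with its own regex engine:

```python
import re

def load_filename_pattern(path="pattern.txt"):
    """Load the one shared filename pattern that every project reads."""
    with open(path, encoding="ascii") as f:
        return re.compile(f.read().strip())

# pattern.txt might contain, for example:
#   DEVICE_(?P<id>\d+)_(?P<date>\d{8})\.log
# after which:
#   pat = load_filename_pattern()
#   pat.fullmatch("DEVICE_42_20080917.log").group("id")  # -> "42"
```

To keep this portable across languages, the stored pattern should stick to a regex dialect common to all the engines in use.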
{ "language": "en", "url": "https://stackoverflow.com/questions/113696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I add a "last" class on the last * within a Views-generated list? How do I add a "last" class on the last <li> within a Views-generated list? A: You could use the last-child pseudo-class on the li element to achieve this
<html>
  <head>
    <style type="text/css">
      ul li:last-child { font-weight: bold }
    </style>
  </head>
  <body>
    <ul>
      <li>IE</li>
      <li>Firefox</li>
      <li>Safari</li>
    </ul>
  </body>
</html>
There is also a first-child pseudo-class available. I am not sure the last-child pseudo-class works in IE, though. A: Alternatively, you could achieve this via JavaScript if certain browsers don't support the last-child class. For example, this script sets the class name for the last 'li' element in all 'ul' tags, although it could easily be adapted for other tags, or specific elements.
function highlightLastLI() {
    var liList, ulTag, liTag;
    var ulList = document.getElementsByTagName("ul");
    for (var i = 0; i < ulList.length; i++) {
        ulTag = ulList[i];
        liList = ulTag.getElementsByTagName("li");
        liTag = liList[liList.length - 1];
        liTag.className = "lastchild";
    }
}
A: Does jQuery come bundled with Drupal? If so, you could use $('ul>li:last').addClass('last'); to achieve this. A: You can use the last-child pseudo-class, first-child for the first, or nth-child for any particular child:
li:last-child {
    color: red;
    font-weight: bold;
}
{ "language": "en", "url": "https://stackoverflow.com/questions/113702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: LocBaml include previous translations Is there a way to combine a previous translation when extracting the csv file from an application? Or any other tool that could do this job for me? I can't really see how I could use LocBaml if I had to translate everything from scratch every time I add a new control to my application. A: You might consider using something like WinMerge to merge the existing file with the old one. A: You can try my open source add-in http://easybaml.codeplex.com, which combines previous translations automatically. Right now it uses .resx instead of .csv, which can be changed if needed.
{ "language": "en", "url": "https://stackoverflow.com/questions/113712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: I'm looking for publications about the history of the internet browsers I'm looking for publications about the history of the internet browsers. Papers, articles, blog posts, whatever. Cannot find anything on ACM, IEEE, etc., and my blog search also didn't reveal anything remarkable. A: Did you take a look at the entries in Wikipedia? It's a useful starting point. Here are a few to start you off: Wikipedia - Web browser Wikipedia - Timeline of web browsers Wikipedia - Browser Wars A: There's Eric Sink's blog post: "Memoirs From the Browser Wars". Eric Sink was one of the members of the team that implemented Mosaic, the first web browser. He literally is part of the history of the internet browser :-) A: The keywords I would search for in a decent library index (or Google) are: * *Tim Berners-Lee (inventor of first HTTP client and HTTP server) *WorldWideWeb (HTTP client mentioned above. Notice no spaces in name.) *NCSA Mosaic (first graphical web browser, evolved into Netscape and eventually Firefox) *Marc Andreessen (project leader for Mosaic, founder of Netscape. Also one of the first technologists to envision a browser-based operating system, what we might now call "web-apps".) *Browser Wars (should cover most of the major players involved in how we think of modern Web browsers). Most of this stuff is covered in the articles suggested in the previous posts. Just hope this helps you pick out the terms that will help you with finding scholarly sources. The HTTP client (now better known as the Web browser) is one of the key components of the World Wide Web (or just "the web"), which is distinguishable from the more generic "Internet" in that it uses a combination of technologies (most notably: HTML, HTTP (client and server) and Domain Names).
The reason why you may be having trouble finding good sources in your search is that you are searching for "history of the web browser", which is kind of like searching for "history of guitar solos" in that the info is out there but combined with the larger topic. While you may get some results, most of the information you want will probably be integrated into sources on the history of the web. That's why I suggested searching for Tim Berners-Lee first, as he invented all of the major components essential to the web. A: I would start with Wikipedia as Eward mentioned. But after you read Wikipedia, check the bottom of the articles for the sources used. Then read those sources. If this is for a school paper I doubt you'll get full points for using Wikipedia.
{ "language": "en", "url": "https://stackoverflow.com/questions/113717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }