Q: What uses can you think of for Perl 6's junctions? More information from the Perl 6 Wikipedia entry Junctions Perl 6 introduces the concept of junctions: values that are composites of other values.[24] In the earliest days of Perl 6's design, these were called "superpositions", by analogy to the concept in quantum physics of quantum superpositions — waveforms that can simultaneously occupy several states until observation "collapses" them. A Perl 5 module released in 2000 by Damian Conway called Quantum::Superpositions[25] provided an initial proof of concept. While at first, such superpositional values seemed like merely a programmatic curiosity, over time their utility and intuitiveness became widely recognized, and junctions now occupy a central place in Perl 6's design. In their simplest form, junctions are created by combining a set of values with junctive operators: my $any_even_digit = 0|2|4|6|8; # any(0, 2, 4, 6, 8) my $all_odd_digits = 1&3&5&7&9; # all(1, 3, 5, 7, 9) | indicates a value which is equal to either its left or right-hand arguments. & indicates a value which is equal to both its left and right-hand arguments. These values can be used in any code that would use a normal value. Operations performed on a junction act on all members of the junction equally, and combine according to the junctive operator. So, ("apple"|"banana") ~ "s" would yield "apples"|"bananas". In comparisons, junctions return a single true or false result for the comparison. "any" junctions return true if the comparison is true for any one of the elements of the junction. "all" junctions return true if the comparison is true for all of the elements of the junction. Junctions can also be used to more richly augment the type system by introducing a style of generic programming that is constrained to junctions of types: sub get_tint ( RGB_Color|CMYK_Color $color, num $opacity) { ... } sub store_record (Record&Storable $rec) { ... } A: The most attractive feature of junctions is that you don't need to write a lot of code to test for complex situations. You describe the situation with the junctions, then apply the test. You don't think about how you get the answer (for instance, using short circuit operators or if blocks) but what question you are asking. A: Autothreading sounds cool, although I don't know what its current status is. for all(@files) -> $file { do_something($file); } Junctions have no order, so the VM is free to spawn a thread for every element in @files and process them all in parallel. A: How many days are in a given month? given( $month ){ when any(qw'1 3 5 7 8 10 12') { $day = 31 } when any(qw'4 6 9 11') { $day = 30 } when 2 { $day = 29 } }
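For illustration, a minimal sketch of junctions collapsing in comparisons (written in current Raku syntax, since Perl 6 has been renamed Raku; the values are made up):

my $n = 6;
if $n == any(0, 2, 4, 6, 8) {
    say "$n is an even digit";        # true if the comparison holds for any member
}
if all(2, 4, 6) < 10 {
    say "every element is below 10";  # true only if it holds for all members
}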
{ "language": "en", "url": "https://stackoverflow.com/questions/102271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: `active' flag or not? OK, so practically every database based application has to deal with "non-active" records. Either, soft-deletions or marking something as "to be ignored". I'm curious as to whether there are any radical alternative thoughts on an `active' column (or a status column). For example, if I had a list of people CREATE TABLE people ( id INTEGER PRIMARY KEY, name VARCHAR(100), active BOOLEAN, ... ); That means to get a list of active people, you need to use SELECT * FROM people WHERE active=True; Does anyone suggest that non-active records would be moved off to a separate table and where appropriate a UNION is done to join the two? Curiosity striking... EDIT: I should make clear, I'm coming at this from a purist perspective. I can see how data archiving might be necessary for large amounts of data, but that is not where I'm coming from. If you do a SELECT * FROM people it would make sense to me that those entries are in a sense "active" Thanks A: Well, to ensure that you only draw active records in most situations, you could create views that only contain the active records. That way it's much easier to not leave out the active part. A: We use an enum('ACTIVE','INACTIVE','DELETED') in most tables so we actually have a 3-way flag. I find it works well for us in different situations. Your mileage may vary. A: Moving inactive stuff is usually a stupid idea. It's a lot of overhead with lots of potential for bugs, everything becomes more complicated, like unarchiving the stuff etc. What do you do with related data? If you move all that, too, you have to modify every single query. If you don't move it, what advantage were you hoping to get? That leads to the next point: WHY would you move it? A properly indexed table requires one additional lookup when the size doubles. Any performance improvement is bound to be negligible. And why would you even think about it until the distant future time when you actually have performance problems? A: You partition the table on the active flag, so that active records are in one partition, and inactive records are in the other partition. Then you create an active view for each table which automatically has the active filter on it. The database query engine automatically restricts the query to the partition that has the active records in it, which is much faster than even using an index on that flag. Here is an example of how to create a partitioned table in Oracle. Oracle doesn't have boolean column types, so I've modified your table structure for Oracle purposes. CREATE TABLE people ( id NUMBER(10), name VARCHAR2(100), active NUMBER(1) ) PARTITION BY LIST(active) ( PARTITION active_records VALUES (1), PARTITION inactive_records VALUES (0) ); If you wanted to you could put each partition in different tablespaces. You can also partition your indexes as well. Incidentally, this seems a repeat of this question, as a newbie I need to ask, what's the procedure on dealing with unintended duplicates? Edit: As requested in comments, provided an example for creating a partitioned table in Oracle A: I think looking at it strictly as a piece of data then the way that is shown in the original post is proper. The active flag piece of data is directly dependent upon the primary key and should be in the table. That table holds data on people, irrespective of the current status of their data. A: The active flag is sort of ugly, but it is simple and works well. You could move them to another table as you suggested. 
I'd suggest looking at the percentage of active / inactive records. If you have over 20 or 30 % inactive records, then you might consider moving them elsewhere. Otherwise, it's not a big deal. A: Yes, we would. We currently have the "active='T/F'" column in many of our tables, mainly to show the 'latest' row. When a new row is inserted, the previous T row is marked F to keep it for audit purposes. Now, we're moving to a 2-table approach: when a new row is inserted, the previous row is moved to a history table. This gives us better performance for the majority of cases - looking at the current data. The cost is slightly more than the old method, previously you had to update and insert, now you have to insert and update (ie instead of inserting a new T row, you modify the existing row with all the new data), so the cost is just that of passing in a whole row of data instead of passing in just the changes. That's hardly going to have any effect. The performance benefit is that your main table's index is significantly smaller, and you can optimise your tablespaces better (they won't grow quite so much!) A: Binary flags like this in your schema are a BAD idea. Consider the query SELECT count(*) FROM users WHERE active=1 Looks simple enough. But what happens when you have a large number of users, so many that adding an index to this table would be required. Again, it looks straightforward ALTER TABLE users ADD INDEX index_users_on_active (active) EXCEPT!! This index is useless because the cardinality on this column is exactly two! Any database query optimiser will ignore this index because of its low cardinality and do a table scan. Before filling up your schema with helpful flags consider how you are going to access that data. https://stackoverflow.com/questions/108503/mysql-advisable-number-of-rows A: We use active flags quite often. If your database is going to be very large, I could see the value in migrating inactive values to a separate table, though. You would then only require a union of the tables when someone wants to see all records, active or inactive. A: In most cases a binary field indicating deletion is sufficient. Often there is a clean-up mechanism that will remove those deleted records after a certain amount of time, so you may wish to start the schema with a deleted timestamp. A: Moving off to a separate table and bringing them back up takes time. Depending on how many records go offline and how often you need to bring them back, it might or might not be a good idea. If they mostly don't come back once they are buried, and are only used for summaries/reports/whatever, then it will make your main table smaller, queries simpler and probably faster. A: We use both methods for dealing with inactive records. The method we use is dependent upon the situation. For records that are essentially lookup values, we use the Active bit field. This allows us to deactivate entries so they won't be used, but also allows us to maintain data integrity with relations. We use the "move to a separate table" method where the data is no longer needed and the data is not part of a relation. A: The situation really dictates the solution, methinks: If the table contains users, then several "flag" fields could be used. One for Deleted, Disabled etc. Or if space is an issue, then a flag for disabled would suffice, and then actually deleting the row if they have been deleted. It also depends on policies for storing data. 
If there are policies for keeping data archived, then a separate table would most likely be necessary after any great length of time. A: No - this is a pretty common thing - couple of variations depending on specific requirements (but you already covered them): 1) If you expect to have a whole BUNCH of data - like multiple terabytes or more - not a bad idea to archive deleted records immediately - though you might use a combination approach of marking as deleted then copying to archive tables. 2) Of course the option to hard delete a record still exists - though us developers tend to be data pack-rats - I suggest that you should look at the business process and decide if there is now any need to even keep the data - if there is - do so... if there isn't - you should probably feel free just to throw the stuff away.....again, according to the specific business scenario. A: From a 'purist perspective' the relational model doesn't differentiate between a view and a table - both are relations. So that use of a view that uses the discriminator is perfectly meaningful and valid provided the entities are correctly named e.g. Person/ActivePerson. Also, from a 'purist perspective' the table should be named person, not people as the name of the relation reflects a tuple, not the entire set. A: Regarding indexing the boolean, why not: ALTER TABLE users ADD INDEX index_users_on_active (id, active) ; Would that not improve the search? However I don't know how much of that answer depends on the platform. A: This is an old question but for those searching for low cardinality/selectivity indexes, I'd like to propose the following approach that avoids partitioning, secondary tables, etc.: The trick is to use a "dateInactivated" column that stores the timestamp of when the record is inactivated/deleted. As the name implies, the value is NULL while the record is active, but once inactivated, write in the system datetime. Thus, an index on that column ends up having high selectivity as the number of "deleted" records grows since each record will have a unique (or nearly unique) value. Then your query becomes: SELECT * FROM people WHERE dateInactivated is NULL; The index will pull in just the right set of rows that you care about. A: Filtering data on a bit flag for big tables is not really good in terms of performance. In cases where 'active' denotes virtual deletion you can create a 'TableName_deleted' table with the same structure and move deleted data there using a delete trigger. That solution helps with performance and simplifies data queries.
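To make the dateInactivated idea above concrete, a minimal sketch in PostgreSQL syntax (a partial index covers only the live rows, so it stays small and selective no matter how many rows have been soft-deleted):

ALTER TABLE people ADD COLUMN dateInactivated timestamp NULL;

-- Index only the active rows
CREATE INDEX idx_people_active ON people (id) WHERE dateInactivated IS NULL;

-- Soft-delete one person
UPDATE people SET dateInactivated = now() WHERE id = 42;

-- Active people; the planner can satisfy this from the partial index
SELECT * FROM people WHERE dateInactivated IS NULL;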
{ "language": "en", "url": "https://stackoverflow.com/questions/102278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Does full trust mean the same as Run As Administrator Does full trust mean the same as Run As Administrator? I have read things stating that "for this to work, the application must be a full-trust application." Is that the same as you must have administrator privileges to run the application? If not, what's the difference? How can you tell if an app is "full-trust"? I am told that "Administrator or not, .Net apps won't do certain things if they aren't running from a 'trusted' location." What is a "trusted location"? If you run an app from a "trusted location", can you do things that "require full-trust" without being an administrator? A: No. As of version 2.0, the .Net framework has its own little security setup based on the filesystem. Administrator or not, .Net apps won't do certain things if they aren't running from a 'trusted' location. Just about anything on your local hard drive is trusted, but (and supposedly they fixed this for 3.5sp1) even the local intranet is not trusted, so most .Net desktop apps will fail to even start if they're sitting on a network drive or share. You can change the configuration on a machine so it will allow apps from that zone, but it has to be done for every machine that's going to run the application, which breaks a common corporate deployment scenario. From an ASP.Net standpoint, it also means that certain activities require more 'trust' than others. Sending e-mail, for example, can cause exceptions if not set up correctly. A: No. Full-trust is a .NET term used to indicate that it's not running in a reduced-privilege .NET sandbox. In .NET prior to 3.5 SP1, this included running from a network share (in the default configuration). It also includes running as a ClickOnce application that has not requested additional permissions, or in some other browser-based sandbox. Full-trust means it can do anything the user it is running as can do, not that it is running as an administrator. A: Basically Full Trust means that the C# code has total control over the current (.Net) process and all processes running under the Application Pool account. It is the same as running a C++ dll. Admin access will depend on the IIS settings (i.e. if you run the website under System or an admin account)
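A small sketch of how the distinction shows up in code on the .NET Framework (the code-access-security APIs used here were later deprecated in .NET 4): an administrator running the app from an untrusted network share can still fail this check.

using System;
using System.Security;
using System.Security.Permissions;

class TrustCheck
{
    static void Main()
    {
        // True only if the CLR granted this assembly unrestricted file I/O,
        // which depends on evidence such as the launch location,
        // not on the Windows account the process runs under.
        bool trusted = SecurityManager.IsGranted(
            new FileIOPermission(PermissionState.Unrestricted));
        Console.WriteLine("Unrestricted file I/O granted: " + trusted);
    }
}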
{ "language": "en", "url": "https://stackoverflow.com/questions/102282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: What WPF C# Control is similar to a CWnd in C++? What would be the best WPF control in C# (VS 2008) that you can place on a form that would allow you to do drawing similar to the "Paint" function for the CWnd class in C++? Also, that could display bitmaps, have a scroll bar, and the ability to accept user inputs (ie. MouseMove, Button Clicks, etc...). Basically all the functionality of a CWnd in a control on a WPF form? A: The UIElement is the lowest level element that supports input and drawing. Although, using WPF, you really have to do a lot less manual drawing. Are you sure that you need to do this? Also, the scroll bar will never be inherent in your element. If you need scrolling behavior, just wrap your element in a ScrollViewer. A: UIElement is the place to start and OnRender is the method to override. Be warned that WPF is heavily geared toward composing UI as opposed to the WM_PAINT ways of Win32. Unless you are creating new low level primitives there is almost always a more productive alternative.
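A rough sketch of that approach (the class name is made up; this is the WPF analogue of a minimal CWnd with a Paint handler):

using System.Windows;
using System.Windows.Input;
using System.Windows.Media;

public class PaintSurface : FrameworkElement
{
    // Called by WPF whenever the element needs to repaint,
    // roughly the counterpart of CWnd::OnPaint.
    protected override void OnRender(DrawingContext dc)
    {
        base.OnRender(dc);
        dc.DrawRectangle(Brushes.White, null,
            new Rect(0, 0, ActualWidth, ActualHeight));
        // dc.DrawImage(...) would render a bitmap here.
    }

    // The counterpart of handling WM_MOUSEMOVE.
    protected override void OnMouseMove(MouseEventArgs e)
    {
        Point p = e.GetPosition(this);
        // ...react to input, then request a repaint:
        InvalidateVisual();
    }
}

Hosted as <ScrollViewer><local:PaintSurface Width="2000" Height="2000"/></ScrollViewer>, the ScrollViewer supplies the scroll bars.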
{ "language": "en", "url": "https://stackoverflow.com/questions/102283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to get multiple records against one record based on relation? I have two tables Organisation and Employee having one to many relation i.e one organisation can have multiple employees. Now I want to select all information of a particular organisation plus first name of all employees for this organisation. What’s the best way to do it? Can I get all of this in a single record set, or will I have to get multiple rows based on the number of employees? Here is a rough graphical demonstration of what I want: Org_ID Org_Address Org_OtherDetails Employees 1 132A B Road List of details Emp1, Emp2, Emp3..... A: Since the question is tagged as MySQL, you should be able to use a MySQL-specific solution, namely, GROUP_CONCAT. For example, select Org_ID, Org_Address, Org_OtherDetails, GROUP_CONCAT(employees) as Employees from employees a, organization b where a.org_id=b.org_id group by b.org_id; A: The original question was database specific, but perhaps this is a good place to include a more generic answer. It's a common question. The concept that you are describing is often referred to as 'Group Concatenation'. There's no standard solution in SQL-92 or SQL-99. So you'll need a vendor-specific solution. * *MySQL - Use the built-in GROUP_CONCAT function. In your example you would want something like this: select o.ID, o.Address, o.OtherDetails, GROUP_CONCAT( concat(e.firstname, ' ', e.lastname) ) as Employees from employees e inner join organization o on o.org_id=e.org_id group by o.org_id * *PostgreSQL - PostgreSQL 9.0 is equally simple now that string_agg(expression, delimiter) is built-in. Here it is with 'comma-space' between elements: select o.ID, o.Address, o.OtherDetails, STRING_AGG( (e.firstname || ' ' || e.lastname), ', ' ) as Employees from employees e inner join organization o on o.org_id=e.org_id group by o.org_id PostgreSQL before 9.0 allows you to define your own aggregate functions with CREATE AGGREGATE. Slightly more work than MySQL, but much more flexible. See this other post for more details. (Of course PostgreSQL 9.0 and later have this option as well.) * *Oracle - same idea using LISTAGG. *MS SQL Server - same idea using STRING_AGG *Fallback solution - in other database technologies or in very very old versions of the technologies listed above you don't have these group concatenation functions. In that case create a stored procedure that takes the org_id as its input and outputs the concatenated employee names. Then use this stored procedure in your query. Some of the other responses here include some details about how to write stored procedures like these. select o.ID, o.Address, o.OtherDetails, MY_CUSTOM_GROUP_CONCAT_PROCEDURE( o.ID ) as Employees from organization o A: in MS SQL you can do: create function dbo.table2list (@input int) returns varchar(8000) as BEGIN declare @putout varchar(8000) set @putout = '' select @putout = @putout + ', ' + <employeename> from <employeetable> where <orgid> = @input return @putout end then do: select orgid, dbo.table2list(orgid) from <organisationtable> I think you can do it with COALESCE() as well, but can't remember the syntax off the top of my head A: If you use Oracle you can create a PL/SQL function you can use in your query that accepts an organization_id as input, and returns the first name of all employees belonging to that org as a string. For example:- select o.org_id, o.org_address, o.org_otherdetails, org_employees( o.org_id ) as org_employees from organization o A: It all depends. 
If you do a join, you get all the organization data on every row (1 row per employee). That has a cost. If you do two queries (Org and Emp), that has a different cost. Pick your poison. A: The short answer is "no". As noted in other answers, there are vendor-specific ways to achieve this result, but there is no pure SQL solution which works in one query. sorry about that :( Presumably one of the vendor specific solutions will work for you? A: Here's what you can do, you have 2 options: select * FROM users u LEFT JOIN organizations o ON (u.idorg = o.id); This way you will get extra data on each row - full organization info you don't really need. Or you can do: select o.*, group_concat(u.name) FROM users u LEFT JOIN organizations o ON (u.idorg = o.id) GROUP BY o.id http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat The second approach is applicable if you want to see a list of usernames like "user1, user2, user3", but don't want to operate on the fields themselves... A: For SQL Server: the SQLCLR aggregates in this project are inspired by the MSDN code sample; however, they perform much better and allow for sorting (as strings) and alternate delimiters if needed. They offer almost equivalent functionality to MySQL's GROUP_CONCAT for SQL Server. A full comparison of the advantages/disadvantages of the CLR aggregates and the FOR XML solution can be found in the documentation: http://groupconcat.codeplex.com A: For SQL Server 2017 & Azure SQL there is now a function STRING_AGG (Transact-SQL) for this, bringing it more on par with MySQL: https://learn.microsoft.com/en-us/sql/t-sql/functions/string-agg-transact-sql?view=sql-server-ver15. SELECT attribute1, STRING_AGG (attribute2, '|') AS Attribute2 FROM table GROUP BY attribute1
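For reference, the FOR XML workaround mentioned above looks roughly like this on SQL Server versions older than 2017 (using the same hypothetical tables as the question):

SELECT o.org_id, o.org_address, o.org_otherdetails,
       STUFF((SELECT ', ' + e.firstname
              FROM employees e
              WHERE e.org_id = o.org_id
              FOR XML PATH('')), 1, 2, '') AS Employees
FROM organization o;

-- STUFF(..., 1, 2, '') strips the leading ', ' from the concatenated list.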
{ "language": "en", "url": "https://stackoverflow.com/questions/102317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: MySQL 5.0 instance manager like functionality for earlier versions? MySQL introduced a server side utility that lets you manage multiple instances on a remote machine. I am looking for similar functionality for earlier versions of mysql. [1]http://dev.mysql.com/doc/refman/5.0/en/instance-manager.html A: I am not familiar with the instance manager, but I have used phpMyAdmin on several systems (including a remotely hosted server) with great success. It supports MySQL 5.0 and 4.1.
{ "language": "en", "url": "https://stackoverflow.com/questions/102318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Not supported platforms for java.awt.Desktop.getDesktop() Since Java 6 there is a class java.awt.Desktop. There are some nice methods but the class is not supported on all platforms. The method java.awt.Desktop.getDesktop() throws a java.lang.UnsupportedOperationException: Desktop API is not supported on the current platform on some platforms. Or the method java.awt.Desktop.isDesktopSupported() returns false. I know that it works on Windows XP, Windows 2003 and also Windows Vista. The question is: on which platforms is it not supported? A: Quote: Desktop API was developed to support Windows and Gnome only from http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6486393 This article however says that even Gnome support is flawed on Fedora. A: Works on OS X, too. A: Does not work in the current Debian (squeeze) whether in gnome or fvwm. I did not try kde. This bug prevents the latest version of limewire from starting. The stack output is: FATAL ERROR! java.lang.ExceptionInInitializerError at com.limegroup.gnutella.gui.Initializer$6.run(Unknown Source) ....... Caused by: java.lang.UnsupportedOperationException: The system tray is not supported on the current platform. at java.awt.SystemTray.getSystemTray(SystemTray.java:151) A: It is not supported on Ubuntu 12.04 either, and it gives an error like this. java.lang.UnsupportedOperationException: The system tray is not supported on the current platform. A: To solve it on Ubuntu, run the following command: apt-get install libgnome2-0 A: On Arch Linux, I had to install the AUR libgnome package
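Whichever platform you are on, the safe pattern is to probe before calling, so unsupported systems degrade gracefully instead of throwing. For example:

import java.awt.Desktop;
import java.net.URI;

public class OpenBrowser {
    public static void main(String[] args) throws Exception {
        if (Desktop.isDesktopSupported()
                && Desktop.getDesktop().isSupported(Desktop.Action.BROWSE)) {
            Desktop.getDesktop().browse(new URI("https://stackoverflow.com"));
        } else {
            // e.g. headless servers, or window managers without Gnome libraries
            System.err.println("Desktop.browse not supported on this platform");
        }
    }
}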
{ "language": "en", "url": "https://stackoverflow.com/questions/102325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Including eval / bind values in OnClientClick code I have a need to open a popup detail window from a gridview (VS 2005 / 2008). What I am trying to do is in the markup for my TemplateColumn have an asp:Button control, sort of like this: <asp:Button ID="btnShowDetails" runat="server" CausesValidation="false" CommandName="Details" Text="Order Details" onClientClick="window.open('PubsOrderDetails.aspx?OrderId=<%# Eval("order_id") %>', '','scrollbars=yes,resizable=yes, width=350, height=550');" Of course, what isn't working is the appending of the <%# Eval...%> section to set the query string variable. Any suggestions? Or is there a far better way of achieving the same result? A: I like @AviewAnew's suggestion, though you can also just write that from the code-behind by wiring up an event to the grid view's RowDataBound event. You'd then use the FindControl method on the event args you get to grab a reference to your button, and set the onclick attribute to your window.open statement. A: Do this in the code-behind. Just use an event handler for gridview_RowDataBound. (My example uses a gridview with the id of "gvBoxes".) Private Sub gvBoxes_RowDataBound(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.GridViewRowEventArgs) Handles gvBoxes.RowDataBound Select Case e.Row.RowType Case DataControlRowType.DataRow Dim btn As Button = e.Row.FindControl("btnShowDetails") btn.OnClientClick = "window.open('PubsOrderDetails.aspx?OrderId=" & DataBinder.Eval(e.Row.DataItem, "OrderId") & "','','scrollbars=yes,resizable=yes, width=350, height=550');" End Select End Sub A: I believe the way to do it is onClientClick=<%# string.Format("window.open('PubsOrderDetails.aspx?OrderId={0}','','scrollbars=yes,resizable=yes, width=350, height=550');", Eval("order_id")) %>
{ "language": "en", "url": "https://stackoverflow.com/questions/102343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to add a version number to an Access file in a .msi I'm building an install using VS 2003. The install has an Excel workbook and two Access databases. I need to force the Access files to load regardless of the create/mod date of the existing databases on the user's computer. I currently use ORCA to force in a Version number on the two files, but would like to find a simpler, more elegant solution (hand editing a .msi file is not something I see as "best practice"). Is there a way to add a version number to the databases using Access that would then be used in the install? Is there a better way for me to do this? A: @LanceSc I don't think the MsiFileHash table will help here. See this excellent post by Aaron Stebner. Most likely the last-modified date of the Access database on the client computer will be different from its creation date. Windows Installer will correctly assume that the file has changed since installation and will not replace it. The right way to solve this (as the question author pointed out) is to set the Version field in the File table. Unfortunately setup projects in Visual Studio are very limited. You can create a simple VBS script that modifies records in the File table (using SQL) but I suggest looking at alternative setup authoring tools instead, such as WiX, InstallShield or Wise. WiX in my opinion is the best. A: Since it sounds like you don't have properly versioned resources, have you tried changing the REINSTALLMODE property? IIRC, in the default value of 'omus', it's the 'o' flag that's only allowing you to install if you have an older version. You may try changing this from 'o' to 'e'. Be warned that this will overwrite missing, older AND equally versioned files. Manually adding in versions was the wrong way to start, but this should ensure that you don't have to manually bump up the version numbers to get them to install. A: Look into Build Events for your project. It may be possible to rev the versions of the files during a build event. [Just don't quote me on that]. I am not sure if you can or not, but that would be the place I would start investigating first. A: You should populate the MsiFileHash table for these files. Look at WiFilVer.vbs that is part of the Microsoft Platform SDK to see how to do this. My other suggestion would be to look at WiX instead of Visual Studio 2003 for doing installs. Visual Studio 2003 has very limited MSI support and you can end up spending a lot of time fighting it, rather than getting useful work done.
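The "simple VBS script" suggested above could look roughly like this, using the Windows Installer automation interface (the .msi name, file key, and version string are placeholders):

' Open the MSI and stamp a version onto one File-table row, so Windows
' Installer applies version rules instead of comparing file dates.
Set installer = CreateObject("WindowsInstaller.Installer")
Set database = installer.OpenDatabase("setup.msi", 1) ' 1 = transacted mode
Set view = database.OpenView("UPDATE `File` SET `Version` = '1.0.0.0' WHERE `File` = 'MyDatabase.mdb'")
view.Execute
database.Commit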
{ "language": "en", "url": "https://stackoverflow.com/questions/102374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to optimize compiling a 32 bit application in Visual C++ 2005 on a 64 bit windows server 2008 I have just installed a build server with a 64 bit windows server 2008 for continuous integration. The reason I chose a 64 bit server was to have more than ~3Gb of RAM. I had hopes that this machine would provide blazing fast builds. Unfortunately, the results are lacking greatly, to say the least. My desktop provides faster builds than this server equipped with a Xeon quad core, 15k RPM SAS and 8 Gigs of RAM. We use Visual C++ 2005 to compile our 32 bit application with Cygwin. Could the WOW64 emulator be the bottleneck that is slowing down the build process? Any pointers, comments would be greatly appreciated. Regards, A: WOW64 is not an emulator on x64. The processor natively executes 32-bit x86 code. At the bottom of the user-mode stack, under kernel32 et al, are DLLs which map system calls to the 64-bit call interface. See WOW64 Implementation Details. A: We use Visual C++ 2005 to compile our 32 bit application with Cygwin. I think that's the problem. I like Cygwin a lot, but it is really slow when it comes to file I/O. It helps a bit to deactivate the NTFS feature that keeps track of the last file access. To get a better speed boost, port your build-script / makefile to use the native command shell if possible and only call cygwin-tools if there is really no replacement available. If you use the gcc compiler, try the MinGW version. That one is a lot faster.
{ "language": "en", "url": "https://stackoverflow.com/questions/102377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: ssh-agent with passwords without spawning too many processes I use ssh-agent with password-protected keys on Linux. Every time I log into a certain machine, I do this: eval `ssh-agent` && ssh-add This works well enough, but every time I log in and do this, I create another ssh-agent. Once in a while, I will do a killall ssh-agent to reap them. Is there a simple way to reuse the same ssh-agent process across different sessions? A: have a look at Keychain. It was written by people in a similar situation to yourself. Keychain A: How much control do you have over this machine? One answer would be to run ssh-agent as a daemon process. Other options are explained on this web page, basically testing to see if the agent is around and then running it if it's not. To reproduce one of the ideas here: SSH_ENV="$HOME/.ssh/environment" function start_agent { echo "Initialising new SSH agent..." /usr/bin/ssh-agent | sed 's/^echo/#echo/' > "${SSH_ENV}" echo succeeded chmod 600 "${SSH_ENV}" . "${SSH_ENV}" > /dev/null /usr/bin/ssh-add; } # Source SSH settings, if applicable if [ -f "${SSH_ENV}" ]; then . "${SSH_ENV}" > /dev/null #ps ${SSH_AGENT_PID} doesn’t work under cygwin ps -ef | grep ${SSH_AGENT_PID} | grep ssh-agent$ > /dev/null || { start_agent; } else start_agent; fi A: You can do: ssh-agent $SHELL This will cause ssh-agent to exit when the shell exits. They still won't be shared across sessions, but at least they will go away when you do. A: Depending on which shell you use, you can set different profiles for login shells and mere regular new shells. In general you want to start ssh-agent for login shells, but not for every subshell. In bash these files would be .bashrc and .bash_login, for example. Most desktop Linuxes these days run ssh-agent for you. You just add your key with ssh-add, and then forward the keys over to remote ssh sessions by running ssh -A
{ "language": "en", "url": "https://stackoverflow.com/questions/102382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Using Vim's tabs like buffers I have looked at the ability to use tabs in Vim (with :tabe, :tabnew, etc.) as a replacement for my current practice of having many files open in the same window in hidden buffers. I would like every distinct file that I have open to always be in its own tab. However, there are some things that get in the way of this. How do I fix these: * *When commands like gf and ^] jump to a location in another file, the file opens in a new buffer in the current tab. Is there a way to have all of these sorts of commands open the file in a new tab, or switch to the existing tab with the file if it is already open? *When switching buffers I can use :b <part of filename><tab> and it will complete the names of files in existing buffers. <part of filename> can even be the middle of a filename instead of the beginning. Is there an equivalent for switching tabs? A: Stop, stop, stop. This is not how Vim's tabs are designed to be used. In fact, they're misnamed. A better name would be "viewport" or "layout", because that's what a tab is—it's a different layout of windows of all of your existing buffers. Trying to beat Vim into 1 tab == 1 buffer is an exercise in futility. Vim doesn't know or care and it will not respect it on all commands—in particular, anything that uses the quickfix buffer (:make, :grep, and :helpgrep are the ones that spring to mind) will happily ignore tabs and there's nothing you can do to stop that. Instead: * *:set hidden If you don't have this set already, then do so. It makes vim work like every other multiple-file editor on the planet. You can have edited buffers that aren't visible in a window somewhere. *Use :bn, :bp, :b #, :b name, and ctrl-6 to switch between buffers. I like ctrl-6 myself (alone it switches to the previously used buffer, or #ctrl-6 switches to buffer number #). *Use :ls to list buffers, or a plugin like MiniBufExpl or BufExplorer. A: Contrary to some of the other answers here, I say that you can use tabs however you want. vim was designed to be versatile and customizable, rather than forcing you to work according to predefined parameters. We all know how us programmers love to impose our "ethics" on everyone else, so this achievement is certainly a primary feature. <C-w>gf is the tab equivalent of buffers' gf command. <C-PageUp> and <C-PageDown> will switch between tabs. (In Byobu, these two commands never work for me, but they work outside of Byobu/tmux. Alternatives are gt and gT.) <C-w>T will move the current window to a new tab page. If you'd prefer that vim use an existing tab if possible, rather than creating a duplicate tab, add :set switchbuf=usetab to your .vimrc file. You can add newtab to the list (:set switchbuf=usetab,newtab) to force QuickFix commands that display compile errors to open in separate tabs. I prefer split instead, which opens the compile errors in a split window. If you have mouse support enabled with :set mouse=a, you can interact with the tabs by clicking on them. There's also a + button by default that will create a new tab. For the documentation on tabs, type :help tab-page in normal mode. (After you do that, you can practice moving a window to a tab using <C-w>T.) There's a long list of commands. Some of the window commands have to do with tabs, so you might want to look at that documentation as well via :help windows. Addition: 2013-12-19 To open multiple files in vim with each file in a separate tab, use vim -p file1 file2 .... 
If you're like me and always forget to add -p, you can add it at the end, as vim follows the normal command line option parsing rules. Alternatively, you can add a bash alias mapping vim to vim -p. A: * *You can map commands that normally manipulate buffers to manipulate tabs, as I've done with gf in my .vimrc: map gf :tabe <cfile><CR> I'm sure you can do the same with ^] *I don't think vim supports this for tabs (yet). I use gt and gT to move to the next and previous tabs, respectively. You can also use Ngt, where N is the tab number. One peeve I have is that, by default, the tab number is not displayed in the tab line. To fix this, I put a couple functions at the end of my .vimrc file (I didn't paste here because it's long and didn't format correctly). A: I use buffers like tabs, using the BufExplorer plugin and a few macros: " CTRL+b opens the buffer list map <C-b> <esc>:BufExplorer<cr> " gz in command mode closes the current buffer map gz :bdelete<cr> " g[bB] in command mode switch to the next/prev. buffer map gb :bnext<cr> map gB :bprev<cr> With BufExplorer you don't have a tab bar at the top, but on the other hand it saves space on your screen, plus you can have an infinite number of files/buffers open and the buffer list is searchable... A: Looking at :help tabs it doesn't look like vim wants to work the way you do... Buffers are shared across tabs, so it doesn't seem possible to lock a given buffer to appear only on a certain tab. It's a good idea, though. You could probably get the effect you want by using a terminal that supports tabs, like multi-gnome-terminal, then running vim instances in each terminal tab. Not perfect, though... A: Bit late to the party here but surprised I didn't see the following in this list: :tab sball - this opens a new tab for each open buffer. :help switchbuf - this controls buffer switching behaviour, try :set switchbuf=usetab,newtab. This should mean switching to the existing tab if the buffer is open, or creating a new one if not. A: If you want buffers to work like tabs, check out the tabline plugin. That uses a single window, and adds a line on the top to simulate the tabs (just showing the list of buffers). This came out a long time ago when tabs were only supported in GVim but not in the command line vim. Since it is only operating with buffers, everything integrates well with the rest of vim. A: Vim :help window explains the confusion "tabs vs buffers" pretty well. A buffer is the in-memory text of a file. A window is a viewport on a buffer. A tab page is a collection of windows. Opening multiple files is achieved in vim with buffers. In other editors (e.g. notepad++) this is done with tabs, so the name tab in vim may be misleading. Windows are for the purpose of splitting the workspace and displaying multiple files (buffers) together on one screen. In other editors this could be achieved by opening multiple GUI windows and rearranging them on the desktop. Finally in this analogy vim's tab pages would correspond to multiple desktops, that is different rearrangements of windows. As vim help: tab-page explains, a tab page can be used when one wants to temporarily edit a file, but does not want to change anything in the current layout of windows and buffers. In such a case another tab page can be used just for the purpose of editing that particular file. Of course you have to remember that displaying the same file in many tab pages or windows would result in displaying the same working copy (buffer). A: I ran into the same problem. 
I wanted tabs to work like buffers and I never quite managed to get them to. The solution that I finally settled on was to make buffers behave like tabs! Check out the plugin called Mini Buffer Explorer; once installed and configured, you'll be able to work with buffers virtually the same way as tabs without losing any functionality. A: This is an answer for those not familiar with Vim and coming from other text editors (in my case Sublime Text). I read through all these answers and it still wasn't clear. If you read through them enough things begin to make sense, but it took me hours of going back and forth between questions. The first thing is, as others have explained: tab pages sound a lot like tabs, they act like tabs and look a lot like tabs in most other GUI editors, but they're not. I think it's a bad mental model that was built on in Vim, which unfortunately clouds the extra power that you have within a tab page. The first description that I understood, from @crenate's answer, is that they are the equivalent of multiple desktops. When seen in that regard you'd only ever have a couple of desktops open but have lots of GUI windows open within each one. I would say they are similar to the following in other editors/browsers: * *Tab groupings *Sublime Text workspaces (i.e. a list of the open files that you have in a project) When you see them like that you realise their power: you can easily group sets of files (buffers) together e.g. your CSS files, your HTML files and your JS files in different tab pages. Which is actually pretty awesome. Other descriptions that I find confusing Viewport This makes no sense to me. Although "viewport" does have a dictionary definition, I've only heard it used to refer to Vim windows, in the :help window doc. Viewport is not a term I've ever heard with regard to editors like Sublime Text, Visual Studio, Atom, Notepad++. In fact I'd never heard about it for Vim until I started to try using tab pages. If you view tab pages like multiple desktops, then referring to a desktop as a single window seems odd. Workspaces This possibly makes more sense; the dictionary definition is: A memory storage facility for temporary use. So it's like a place where you store a group of buffers. It didn't initially sound like Sublime Text's concept of a workspace which is a list of all the files that you have open in your project: the sublime-workspace file, which contains user specific data, such as the open files and the modifications to each. However thinking about it more, this does actually agree. If you regard a Vim tab page like a Sublime Text project, then it would seem odd to have just one file open in each project and keep switching between projects. Hence using a tab page to hold only one file is odd. Collection of windows The :help window refers to tab pages this way. Plus numerous other answers use the same concept. However until you get your head around what a vim window is, then that's not much use, like building a castle on sand. As I referred to above, a vim window is the same as a viewport, and is quite excellently explained in this linux.com article: A really useful feature in Vim is the ability to split the viewable area between one or more files, or just to split the window to view two bits of the same file more easily. The Vim documentation refers to this as a viewport or window, interchangeably. You may already be familiar with this feature if you've ever used Vim's help feature by using :help topic or pressing the F1 key. 
When you enter help, Vim splits the viewport and opens the help documentation in the top viewport, leaving your document open in the bottom viewport. I find it odd that a tab page is referred to as a collection of windows instead of a collection of buffers. But I guess you can have two separate tab pages open each with multiple windows all pointing at the same buffer, at least that's what I understand so far. A: relevant: Vim: can global marks switch tabs instead of the file in the current tab? maybe useful to solve a little part of OP's problem: nno \ <Cmd>call To_global_mark()<cr> fun! To_global_mark() -tab drop /tmp/useless.md exe 'normal! `' .. toupper(input("To global mark: ")) endf
{ "language": "en", "url": "https://stackoverflow.com/questions/102384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "389" }
Q: Sorting a dict on __iter__ I am trying to sort a dict based on its key and return an iterator to the values from within an overridden iter method in a class. Is there a nicer and more efficient way of doing this than creating a new list, inserting into the list as I sort through the keys? A: How about something like this: def itersorted(d): for key in sorted(d): yield d[key] A: By far the easiest approach, and almost certainly the fastest, is something along the lines of: def sorted_dict(d): keys = d.keys() keys.sort() for key in keys: yield d[key] You can't sort without fetching all keys. Fetching all keys into a list and then sorting that list is the most efficient way to do that; list sorting is very fast, and fetching the keys list like that is as fast as it can be. You can then either create a new list of values or yield the values as the example does. Keep in mind that you can't modify the dict if you are iterating over it (the next iteration would fail) so if you want to modify the dict before you're done with the result of sorted_dict(), make it return a list. A: def sortedDict(dictobj): return (value for key, value in sorted(dictobj.iteritems())) This will create a single intermediate list; the 'sorted()' function returns a real list. But at least it's only one. A: Assuming you want a default sort order, you can use sorted(list) or list.sort(). If you want your own sort logic, Python lists support the ability to sort based on a function you pass in. For example, the following would be a way to sort numbers from least to greatest (the default behavior) using a function. def compareTwo(a, b): if a > b: return 1 if a == b: return 0 if a < b: return -1 a.sort(compareTwo) print a This approach is conceptually a bit cleaner than manually creating a new list and appending the new values and allows you to control the sort logic.
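To put the accepted idea into the overridden __iter__ method the question asks about, a minimal sketch (the class name is made up):

class SortedValues:
    # Wraps a dict; iterating yields the values in key order.
    def __init__(self, d):
        self._d = d

    def __iter__(self):
        # sorted() builds one temporary list of keys; the values are
        # then yielded lazily, so no second list is created.
        for key in sorted(self._d):
            yield self._d[key]

for value in SortedValues({'b': 2, 'a': 1, 'c': 3}):
    print(value)  # prints 1, then 2, then 3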
{ "language": "en", "url": "https://stackoverflow.com/questions/102394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What are some good usability addins for Visual Studio 2008? I've been using Visual Studio for a long time and the constant shuffling of the code tabs at the top drives me bonkers. I was wondering if there were any add-ins that might change this or other UI behavior. Things that might be cool: * *Sticky Tabs that won't go away. *Multi-code file collapsible tabs (maybe each tab being a project?). *Having the solution tree go to the file you are currently looking at automatically. Thanks Omlette! *Your idea here. I've done a bit of googling and haven't been able to find anything useful. A: The "Having the solution tree go to the file you are currently looking at automatically" feature already exists in VS2008, but isn't enabled by default. Go to tools -> options -> projects and solutions -> general and check the "Track Active Item in Solution Explorer" box. A: Rick, Tabs Studio add-in for Visual Studio is a replacement for built-in tabs. Multiple rows of tabs make them always visible. Tabs can be grouped, it is probably close to what you call "multi-code file tabs". See Tabs Studio home page for more information.
{ "language": "en", "url": "https://stackoverflow.com/questions/102396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Priority queue in .Net I am looking for a .NET implementation of a priority queue or heap data structure Priority queues are data structures that provide more flexibility than simple sorting, because they allow new elements to enter a system at arbitrary intervals. It is much more cost-effective to insert a new job into a priority queue than to re-sort everything on each such arrival. The basic priority queue supports three primary operations: * *Insert(Q,x). Given an item x with key k, insert it into the priority queue Q. *Find-Minimum(Q). Return a pointer to the item whose key value is smaller than any other key in the priority queue Q. *Delete-Minimum(Q). Remove the item from the priority queue Q whose key is minimum. Unless I am looking in the wrong place, there isn't one in the framework. Is anyone aware of a good one, or should I roll my own? A: You might like IntervalHeap from the C5 Generic Collection Library. To quote the user guide Class IntervalHeap<T> implements interface IPriorityQueue<T> using an interval heap stored as an array of pairs. The FindMin and FindMax operations, and the indexer’s get-accessor, take time O(1). The DeleteMin, DeleteMax, Add and Update operations, and the indexer’s set-accessor, take time O(log n). In contrast to an ordinary priority queue, an interval heap offers both minimum and maximum operations with the same efficiency. The API is simple enough > var heap = new C5.IntervalHeap<int>(); > heap.Add(10); > heap.Add(5); > heap.FindMin(); 5 Install from Nuget https://www.nuget.org/packages/C5 or GitHub https://github.com/sestoft/C5/ A: You may find this implementation useful: http://www.codeproject.com/Articles/126751/Priority-queue-in-Csharp-with-help-of-heap-data-st.aspx it is generic and based on a heap data structure A: class PriorityQueue<T> { IComparer<T> comparer; T[] heap; public int Count { get; private set; } public PriorityQueue() : this(null) { } public PriorityQueue(int capacity) : this(capacity, null) { } public PriorityQueue(IComparer<T> comparer) : this(16, comparer) { } public PriorityQueue(int capacity, IComparer<T> comparer) { this.comparer = (comparer == null) ? Comparer<T>.Default : comparer; this.heap = new T[capacity]; } public void push(T v) { if (Count >= heap.Length) Array.Resize(ref heap, Count * 2); heap[Count] = v; SiftUp(Count++); } public T pop() { var v = top(); heap[0] = heap[--Count]; if (Count > 0) SiftDown(0); return v; } public T top() { if (Count > 0) return heap[0]; throw new InvalidOperationException("Priority queue is empty"); } void SiftUp(int n) { var v = heap[n]; for (var n2 = (n - 1) / 2; n > 0 && comparer.Compare(v, heap[n2]) > 0; n = n2, n2 = (n2 - 1) / 2) heap[n] = heap[n2]; heap[n] = v; } void SiftDown(int n) { var v = heap[n]; for (var n2 = n * 2 + 1; n2 < Count; n = n2, n2 = n2 * 2 + 1) { if (n2 + 1 < Count && comparer.Compare(heap[n2 + 1], heap[n2]) > 0) n2++; if (comparer.Compare(v, heap[n2]) >= 0) break; heap[n] = heap[n2]; } heap[n] = v; } } easy. A: AlgoKit I wrote an open source library called AlgoKit, available via NuGet. It contains: * *Implicit d-ary heaps (ArrayHeap), *Binomial heaps, *Pairing heaps. The code has been extensively tested. I definitely recommend giving it a try. Example var comparer = Comparer<int>.Default; var heap = new PairingHeap<int, string>(comparer); heap.Add(3, "your"); heap.Add(5, "of"); heap.Add(7, "disturbing."); heap.Add(2, "find"); heap.Add(1, "I"); heap.Add(6, "faith"); heap.Add(4, "lack"); while (!heap.IsEmpty) Console.WriteLine(heap.Pop().Value); Why those three heaps? 
The optimal choice of implementation is strongly input-dependent — as Larkin, Sen, and Tarjan show in A back-to-basics empirical study of priority queues, arXiv:1403.0252v1 [cs.DS]. They tested implicit d-ary heaps, pairing heaps, Fibonacci heaps, binomial heaps, explicit d-ary heaps, rank-pairing heaps, quake heaps, violation heaps, rank-relaxed weak heaps, and strict Fibonacci heaps. AlgoKit features three types of heaps that appeared to be most efficient among those tested. Hint on choice For a relatively small number of elements, you would likely be interested in using implicit heaps, especially quaternary heaps (implicit 4-ary). In case of operating on larger heap sizes, amortized structures like binomial heaps and pairing heaps should perform better. A: Here's my attempt at a .NET heap public abstract class Heap<T> : IEnumerable<T> { private const int InitialCapacity = 0; private const int GrowFactor = 2; private const int MinGrow = 1; private int _capacity = InitialCapacity; private T[] _heap = new T[InitialCapacity]; private int _tail = 0; public int Count { get { return _tail; } } public int Capacity { get { return _capacity; } } protected Comparer<T> Comparer { get; private set; } protected abstract bool Dominates(T x, T y); protected Heap() : this(Comparer<T>.Default) { } protected Heap(Comparer<T> comparer) : this(Enumerable.Empty<T>(), comparer) { } protected Heap(IEnumerable<T> collection) : this(collection, Comparer<T>.Default) { } protected Heap(IEnumerable<T> collection, Comparer<T> comparer) { if (collection == null) throw new ArgumentNullException("collection"); if (comparer == null) throw new ArgumentNullException("comparer"); Comparer = comparer; foreach (var item in collection) { if (Count == Capacity) Grow(); _heap[_tail++] = item; } for (int i = Parent(_tail - 1); i >= 0; i--) BubbleDown(i); } public void Add(T item) { if (Count == Capacity) Grow(); _heap[_tail++] = item; BubbleUp(_tail - 1); } private void BubbleUp(int i) { if (i == 0 || Dominates(_heap[Parent(i)], _heap[i])) return; //correct domination (or root) Swap(i, Parent(i)); BubbleUp(Parent(i)); } public T GetMin() { if (Count == 0) throw new InvalidOperationException("Heap is empty"); return _heap[0]; } public T ExtractDominating() { if (Count == 0) throw new InvalidOperationException("Heap is empty"); T ret = _heap[0]; _tail--; Swap(_tail, 0); BubbleDown(0); return ret; } private void BubbleDown(int i) { int dominatingNode = Dominating(i); if (dominatingNode == i) return; Swap(i, dominatingNode); BubbleDown(dominatingNode); } private int Dominating(int i) { int dominatingNode = i; dominatingNode = GetDominating(YoungChild(i), dominatingNode); dominatingNode = GetDominating(OldChild(i), dominatingNode); return dominatingNode; } private int GetDominating(int newNode, int dominatingNode) { if (newNode < _tail && !Dominates(_heap[dominatingNode], _heap[newNode])) return newNode; else return dominatingNode; } private void Swap(int i, int j) { T tmp = _heap[i]; _heap[i] = _heap[j]; _heap[j] = tmp; } private static int Parent(int i) { return (i + 1)/2 - 1; } private static int YoungChild(int i) { return (i + 1)*2 - 1; } private static int OldChild(int i) { return YoungChild(i) + 1; } private void Grow() { int newCapacity = _capacity*GrowFactor + MinGrow; var newHeap = new T[newCapacity]; Array.Copy(_heap, newHeap, _capacity); _heap = newHeap; _capacity = newCapacity; } public IEnumerator<T> GetEnumerator() { return _heap.Take(Count).GetEnumerator(); } IEnumerator IEnumerable.GetEnumerator() { return 
GetEnumerator(); } } public class MaxHeap<T> : Heap<T> { public MaxHeap() : this(Comparer<T>.Default) { } public MaxHeap(Comparer<T> comparer) : base(comparer) { } public MaxHeap(IEnumerable<T> collection, Comparer<T> comparer) : base(collection, comparer) { } public MaxHeap(IEnumerable<T> collection) : base(collection) { } protected override bool Dominates(T x, T y) { return Comparer.Compare(x, y) >= 0; } } public class MinHeap<T> : Heap<T> { public MinHeap() : this(Comparer<T>.Default) { } public MinHeap(Comparer<T> comparer) : base(comparer) { } public MinHeap(IEnumerable<T> collection) : base(collection) { } public MinHeap(IEnumerable<T> collection, Comparer<T> comparer) : base(collection, comparer) { } protected override bool Dominates(T x, T y) { return Comparer.Compare(x, y) <= 0; } } Some tests: [TestClass] public class HeapTests { [TestMethod] public void TestHeapBySorting() { var minHeap = new MinHeap<int>(new[] {9, 8, 4, 1, 6, 2, 7, 4, 1, 2}); AssertHeapSort(minHeap, minHeap.OrderBy(i => i).ToArray()); minHeap = new MinHeap<int> { 7, 5, 1, 6, 3, 2, 4, 1, 2, 1, 3, 4, 7 }; AssertHeapSort(minHeap, minHeap.OrderBy(i => i).ToArray()); var maxHeap = new MaxHeap<int>(new[] {1, 5, 3, 2, 7, 56, 3, 1, 23, 5, 2, 1}); AssertHeapSort(maxHeap, maxHeap.OrderBy(d => -d).ToArray()); maxHeap = new MaxHeap<int> {2, 6, 1, 3, 56, 1, 4, 7, 8, 23, 4, 5, 7, 34, 1, 4}; AssertHeapSort(maxHeap, maxHeap.OrderBy(d => -d).ToArray()); } private static void AssertHeapSort(Heap<int> heap, IEnumerable<int> expected) { var sorted = new List<int>(); while (heap.Count > 0) sorted.Add(heap.ExtractDominating()); Assert.IsTrue(sorted.SequenceEqual(expected)); } } A: I like using the OrderedBag and OrderedSet classes in PowerCollections as priority queues. A: A Simple Max Heap Implementation. 
https://github.com/bharathkumarms/AlgorithmsMadeEasy/blob/master/AlgorithmsMadeEasy/MaxHeap.cs using System; using System.Collections.Generic; using System.Linq; namespace AlgorithmsMadeEasy { class MaxHeap { private static int capacity = 10; private int size = 0; int[] items = new int[capacity]; private int getLeftChildIndex(int parentIndex) { return 2 * parentIndex + 1; } private int getRightChildIndex(int parentIndex) { return 2 * parentIndex + 2; } private int getParentIndex(int childIndex) { return (childIndex - 1) / 2; } private int getLeftChild(int parentIndex) { return this.items[getLeftChildIndex(parentIndex)]; } private int getRightChild(int parentIndex) { return this.items[getRightChildIndex(parentIndex)]; } private int getParent(int childIndex) { return this.items[getParentIndex(childIndex)]; } private bool hasLeftChild(int parentIndex) { return getLeftChildIndex(parentIndex) < size; } private bool hasRightChild(int parentIndex) { return getRightChildIndex(parentIndex) < size; } private bool hasParent(int childIndex) { return getLeftChildIndex(childIndex) > 0; } private void swap(int indexOne, int indexTwo) { int temp = this.items[indexOne]; this.items[indexOne] = this.items[indexTwo]; this.items[indexTwo] = temp; } private void hasEnoughCapacity() { if (this.size == capacity) { Array.Resize(ref this.items,capacity*2); capacity *= 2; } } public void Add(int item) { this.hasEnoughCapacity(); this.items[size] = item; this.size++; heapifyUp(); } public int Remove() { int item = this.items[0]; this.items[0] = this.items[size-1]; this.items[this.size - 1] = 0; size--; heapifyDown(); return item; } private void heapifyUp() { int index = this.size - 1; while (hasParent(index) && this.items[index] > getParent(index)) { swap(index, getParentIndex(index)); index = getParentIndex(index); } } private void heapifyDown() { int index = 0; while (hasLeftChild(index)) { int bigChildIndex = getLeftChildIndex(index); if (hasRightChild(index) && getLeftChild(index) < getRightChild(index)) { bigChildIndex = getRightChildIndex(index); } if (this.items[bigChildIndex] < this.items[index]) { break; } else { swap(bigChildIndex,index); index = bigChildIndex; } } } } } /* Calling Code: MaxHeap mh = new MaxHeap(); mh.Add(10); mh.Add(5); mh.Add(2); mh.Add(1); mh.Add(50); int maxVal = mh.Remove(); int newMaxVal = mh.Remove(); */ A: Use a Java to C# translator on the Java implementation (java.util.PriorityQueue) in the Java Collections framework, or more intelligently use the algorithm and core code and plug it into a C# class of your own making that adheres to the C# Collections framework API for Queues, or at least Collections. A: here's one i just wrote, maybe it's not as optimized (just uses a sorted dictionary) but simple to understand. you can insert objects of different kinds, so no generic queues. using System; using System.Diagnostics; using System.Collections; using System.Collections.Generic; namespace PrioQueue { public class PrioQueue { int total_size; SortedDictionary<int, Queue> storage; public PrioQueue () { this.storage = new SortedDictionary<int, Queue> (); this.total_size = 0; } public bool IsEmpty () { return (total_size == 0); } public object Dequeue () { if (IsEmpty ()) { throw new Exception ("Please check that priorityQueue is not empty before dequeing"); } else foreach (Queue q in storage.Values) { // we use a sorted dictionary if (q.Count > 0) { total_size--; return q.Dequeue (); } } Debug.Assert(false,"not supposed to reach here. 
        // same as above, except for peek.
        public object Peek()
        {
            if (IsEmpty())
            {
                throw new Exception("Please check that priorityQueue is not empty before peeking");
            }
            else
            {
                foreach (Queue q in storage.Values)
                {
                    if (q.Count > 0)
                        return q.Peek();
                }
            }

            Debug.Assert(false, "not supposed to reach here. problem with changing total_size");
            return null; // not supposed to reach here.
        }

        public object Dequeue(int prio)
        {
            total_size--;
            return storage[prio].Dequeue();
        }

        public void Enqueue(object item, int prio)
        {
            if (!storage.ContainsKey(prio))
            {
                storage.Add(prio, new Queue());
            }
            storage[prio].Enqueue(item);
            total_size++;
        }
    }
}

A: .NET 6+: As @rustyx commented, .NET 6 adds a System.Collections.Generic.PriorityQueue<TElement,TPriority> class. And FWIW it is open-source and implemented in C#. Earlier .NET Core versions and .NET Framework: Microsoft has written (and shared online) two internal PriorityQueue classes within the .NET Framework. However, as @mathusum-mut commented, there is a bug in one of them (the SO community has, of course, provided fixes for it): Bug in Microsoft's internal PriorityQueue<T>?

A: I found one by Julian Bucknall on his blog here: http://www.boyet.com/Articles/PriorityQueueCSharp3.html We modified it slightly so that low-priority items on the queue would eventually 'bubble up' to the top over time, so they wouldn't suffer starvation.

A: Here is another implementation from the NGenerics team: NGenerics PriorityQueue

A: I had the same issue recently and ended up creating a NuGet package for this. This implements a standard heap-based priority queue. It also has all the usual niceties of the BCL collections: ICollection<T> and IReadOnlyCollection<T> implementation, custom IComparer<T> support, ability to specify an initial capacity, and a DebuggerTypeProxy to make the collection easier to work with in the debugger. There is also an Inline version of the package which just installs a single .cs file into your project (useful if you want to avoid taking externally-visible dependencies). More information is available on the GitHub page.

A: The following implementation of a PriorityQueue uses SortedSet from the System library.

using System;
using System.Collections.Generic;

namespace CDiggins
{
    interface IPriorityQueue<T, K> where K : IComparable<K>
    {
        bool Empty { get; }
        void Enqueue(T x, K key);
        void Dequeue();
        T Top { get; }
    }

    class PriorityQueue<T, K> : IPriorityQueue<T, K> where K : IComparable<K>
    {
        SortedSet<Tuple<T, K>> set;

        class Comparer : IComparer<Tuple<T, K>>
        {
            public int Compare(Tuple<T, K> x, Tuple<T, K> y)
            {
                return x.Item2.CompareTo(y.Item2);
            }
        }

        // the constructor must be public so callers can actually instantiate the class
        public PriorityQueue() { set = new SortedSet<Tuple<T, K>>(new Comparer()); }

        public bool Empty { get { return set.Count == 0; } }

        // note: because the comparer orders by key only, SortedSet silently
        // drops a second element enqueued with an equal key
        public void Enqueue(T x, K key) { set.Add(Tuple.Create(x, key)); }

        public void Dequeue() { set.Remove(set.Max); }

        public T Top { get { return set.Max.Item1; } }
    }
}
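For anyone trying the SortedSet-based class above, here is a minimal usage sketch (the Demo class and the sample values are mine, not part of the answer). Top and Dequeue both work on set.Max, so the largest key comes out first, and the key-only comparer means two entries with equal keys count as duplicates:

using System;
using CDiggins;

class Demo
{
    static void Main()
    {
        var pq = new PriorityQueue<string, int>();
        pq.Enqueue("low", 1);
        pq.Enqueue("high", 3);
        pq.Enqueue("mid", 2);

        while (!pq.Empty)
        {
            Console.WriteLine(pq.Top); // prints high, mid, low
            pq.Dequeue();
        }
    }
}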
{ "language": "en", "url": "https://stackoverflow.com/questions/102398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "242" }
Q: Protect embedded password I have a properties file in Java, in which I store all information of my app, like the logo image filename, database name, database user and database password. I can store the password encrypted in the properties file. But the key or passphrase can be read out of the jar using a decompiler. Is there a way to store the DB password in a properties file securely?

A: There are multiple ways to manage this. If you can figure out a way to have a user provide a password for a keystore when the application starts up, the most appropriate way would be to encrypt all the values using a key, and store this key in the keystore. The command-line interface to the keystore is keytool. However, Java SE has APIs to programmatically access the keystore as well. If you do not have the ability to have a user manually provide a password to the keystore on startup (say, for a web application), one way to do it is to write an exceptionally complex obfuscation routine which can obfuscate the key and store it in a property file as well. The important things to remember are that the obfuscation and deobfuscation logic should be multi-layered (it could involve scrambling, encoding, introduction of spurious characters, etc.) and should itself have at least one key which could be hidden away in other classes in the application using non-intuitive names. This is not a fully safe mechanism, since someone with a decompiler and a fair amount of time and intelligence can still work around it, but it is the only one I know of which does not require you to break into native (i.e. not easily decompilable) code.

A: You store a SHA1 hash of the password in your properties file. Then when you validate a user's password, you hash their login attempt and make sure that the two hashes match. This is the code that will hash some bytes for you. You can easily get bytes from a String using the getBytes() method.

/**
 * Returns the hash value of the given chars
 *
 * Uses the default hash algorithm described above
 *
 * @param in
 *            the byte[] to hash
 * @return a byte[] of hashed values
 */
public static byte[] getHashedBytes(byte[] in) {
    MessageDigest msg;
    try {
        // hashingAlgorithmUsed is a constant defined elsewhere in the
        // author's Util class, e.g. "SHA-1" or "MD5"
        msg = MessageDigest.getInstance(hashingAlgorithmUsed);
    } catch (NoSuchAlgorithmException e) {
        throw new AssertionError("Someone chose to use a hashing algorithm that doesn't exist. Epic fail, go change it in the Util file. SHA(1) or MD5");
    }
    msg.update(in);
    return msg.digest();
}

A: No, there is not. Even if you encrypt it, somebody will decompile the code that decrypts it.

A: You could make a separate properties file (outside the jar) for passwords (either the direct DB password or a key passphrase) and not include that properties file with the distribution. Or you might be able to make the server only accept that login from a specific machine, so that spoofing would be required.

A: In addition to encrypting the passwords as described above, put any passwords in a separate properties file and on deployment try to give this file the most locked-down permissions possible. For example, if your application server runs on Linux/Unix as root then make the password properties file owned by root with 400/-r-------- permissions.

A: Couldn't you have the app contact a server over HTTPS and download the password, after authenticating in some way of course?
{ "language": "en", "url": "https://stackoverflow.com/questions/102425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Viewing, pausing and killing threads in a running .Net application I'm working on a .NET application with multiple threads doing all sorts of things. When something goes wrong in production I want to be able to see which threads are running (by their managed name) and also be able to pause / kill them. Any way to achieve this? VS isn't always available (although it's a good option when it is), and the WinDbg UI isn't for the faint of heart. I considered an in-program threads window, like VS has while debugging, but couldn't find a programmatic way to do this. Process.Threads returns very little usable data.

A: Finding threads:

using System.Diagnostics;

ProcessThreadCollection threads = Process.GetCurrentProcess().Threads;

You can usually kill managed threads using Thread.Abort(). If they're in a Sleep, Wait or Join you may even be able to get away with the (less nasty) Thread.Interrupt().

A: There's nothing built in to .NET that will do this. If you want to programmatically iterate through your active threads, you have to register them somewhere on launch and either unregister them on completion or filter them before you act on them. We did a version of this and it requires a non-trivial amount of work.

A: You can attach a managed debugger to view/freeze the threads, or use WinDbg with the SOS extensions if you want something lighter weight.

A: You can use ProcessExplorer to view running threads in a process by id and state. http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx Use the "thread" tab of the "properties" window. You can also kill threads and view the thread stack.

A: What about remote debugging? It can be a bit finicky to set up because of security and ensuring you have the right debugging symbols though. Remote Debugging Setup
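To make the "register them on launch" approach from the answers above concrete, here is a rough C# sketch. The class and member names are hypothetical, and the usual caveats apply: Thread.Interrupt and Thread.Abort are blunt instruments, and real code would also remove entries when threads finish.

using System;
using System.Collections.Generic;
using System.Threading;

public static class ThreadRegistry
{
    private static readonly object sync = new object();
    private static readonly Dictionary<string, Thread> threads =
        new Dictionary<string, Thread>();

    // call this wherever you spawn a worker thread
    public static Thread Start(string name, ThreadStart work)
    {
        var t = new Thread(work) { Name = name, IsBackground = true };
        lock (sync) { threads[name] = t; }
        t.Start();
        return t;
    }

    // snapshot of managed names and states for an in-program "threads window"
    public static List<string> Report()
    {
        var lines = new List<string>();
        lock (sync)
        {
            foreach (var pair in threads)
                lines.Add(pair.Key + ": " + pair.Value.ThreadState);
        }
        return lines;
    }

    // "kill" by managed name; Interrupt is gentler than Abort
    public static void Interrupt(string name)
    {
        lock (sync)
        {
            Thread t;
            if (threads.TryGetValue(name, out t))
                t.Interrupt();
        }
    }
}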
{ "language": "en", "url": "https://stackoverflow.com/questions/102427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Why do all Winforms programs require the [STAThread] attribute? Why do Winforms programs have the [STAThread] attribute on the Main() method and what are the consequences of removing it?
{ "language": "en", "url": "https://stackoverflow.com/questions/102437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: WF and ASP.NET MVC Is there a way to use MS WF interactively in ASP.NET MVC? A: Did you see how WF and MVC were used in the StoreFront example? http://www.asp.net/learn/mvc-videos/video-428.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/102445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Reuse Spring Webflow definitions with different action implementations I've got a pretty big webflow definition which I do not want to copy/paste for reuse. There are references to action beans in the XML, which is kind of natural. I want to use the same flow definition twice: the second time with actions configured differently (injecting a different implementation of a service into it). Is there an easy way to do this? The problem is I want to use the same flow with different beans at once, in the same app. Copy/paste is bad, but I don't see another solution for now.

A: You could try creating a new flow that extends the "pretty big one" and adding flowExecutionListeners to it. The interface "FlowExecutionListener" defines methods for the following events in flow execution:

* requestSubmitted
* requestProcessed
* sessionCreating
* sessionStarting
* sessionStarted
* eventSignaled
* transitionExecuting
* stateEntering
* viewRendered
* viewRendering
* stateEntered
* paused
* resuming
* sessionEnding
* sessionEnded
* exceptionThrown

You can write a handler that injects the required resources into your flow (and use different handlers with different flows) by storing them in the RequestContext, where you can access them in your flow definition. Note that in that case you would still have to modify the "pretty big flow" to use those resources instead of referencing the beans directly.

A: I'm in the same fix that you're in... I have different subclasses which have corresponding action beans, but a lot of the flow is the same. In the past we have just copied and pasted... not happy with that! I have some ideas I am going to try out using the expression language. First, I came up with an action bean factory that will return the right action bean to use for a given class; then I can call that factory to set a variable that I can use instead of the hard-coded bean name. Here's part of the flow:

<action-state id="checkForParams">
    <on-entry>
        <set name="flowScope.clientKey" value="requestParameters.clientKey"/>
        <set name="flowScope.viewReportBean" value="reportActionFactory.getViewBean(reportUnit)"/>
    </on-entry>
    <evaluate expression="viewReportBean"/>

The evaluate in the last line would normally refer directly to a bean, but now it refers to the result of the "set" I just did. Good news--the right bean gets called. Bad news--anything in the flow scope needs to be Serializable, so I get a NotSerializableException--arggh! I can try setting something on a very short-lived scope, in which case it will need to get called all the time... or I can figure out some kind of proxy which holds the real bean as a field declared "transient". BTW, I am using Spring 2.5.6 and Webflow 2.0.7. Later versions may have better ways of handling this; in particular, ELs have gotten some attention, it seems. I'm still stuck with OGNL, which is the Spring 1.x EL. I'm sure some webflow guru knows other ways of doing things in a less clunky fashion...

A: I don't think you can use the same webflow definition with the actions configured in two different ways. If you want to use different actions you'll either have to reconfigure your action beans and then redeploy your app, or create a separate webflow definition with the differently configured beans. This is a great Spring WebFlow resource.

A: Try to refactor the common configurable part into a subflow, and call the subflow from the different main flows where you want to reuse it. Pass parameters to the subflow to configure it in any way needed, using the Spring expression language to pass different Spring beans, etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/102453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to create an SVG "tooltip"-like box? Given an existing valid SVG document, what's the best way to create "informational popups", so that when you hover or click on certain elements (let's say ) you pop up a box with an arbitrary amount (i.e. not just a single-line tooltip) of extra information? This should display correctly at least in Firefox and be invisible if the image was rasterized to a bitmap format.

A: This question was asked in 2008. SVG has improved rapidly in the intervening four years. Now tooltips are fully supported in all platforms I'm aware of. Use a <title> tag (not an attribute) and you will get a native tooltip. Here are the docs: https://developer.mozilla.org/en-US/docs/SVG/Element/title

A:

<svg>
  <text id="thingyouhoverover" x="50" y="35" font-size="14">Mouse over me!</text>
  <text id="thepopup" x="250" y="100" font-size="30" fill="black" visibility="hidden">Change me
    <set attributeName="visibility" from="hidden" to="visible"
         begin="thingyouhoverover.mouseover" end="thingyouhoverover.mouseout"/>
  </text>
</svg>

Further explanation can be found here.

A: Since the <set> element doesn't work with Firefox 3, I think you have to use ECMAScript. If you add the following script element into your SVG:

<script type="text/ecmascript"><![CDATA[
function init(evt)
{
    if ( window.svgDocument == null )
    {
        // Define SVG
        svgDocument = evt.target.ownerDocument;
    }
    tooltip = svgDocument.getElementById('tooltip');
}

function ShowTooltip(evt)
{
    // Put tooltip in the right position, change the text and make it visible
    tooltip.setAttributeNS(null,"x",evt.clientX+10);
    tooltip.setAttributeNS(null,"y",evt.clientY+30);
    tooltip.firstChild.data = evt.target.getAttributeNS(null,"mouseovertext");
    tooltip.setAttributeNS(null,"visibility","visible");
}

function HideTooltip(evt)
{
    tooltip.setAttributeNS(null,"visibility","hidden");
}
]]></script>

You need to add onload="init(evt)" to the SVG element to call the init() function. Then, at the end of the SVG, add the tooltip text:

<text id="tooltip" x="0" y="0" visibility="hidden">Tooltip</text>

Finally, to each of the elements that you want to have the mouseover function, add:

onmousemove="ShowTooltip(evt)" onmouseout="HideTooltip(evt)" mouseovertext="Whatever text you want to show"

I've written a more detailed explanation with improved functionality at http://www.petercollingridge.co.uk/interactive-svg-components/tooltip I haven't yet included multi-line tooltips, which would require multiple <tspan> elements and manual word wrapping.

A: This should work:

nodeEnter.append("svg:element")
    .style("fill", function(d) { return d._children ? "lightsteelblue" : "#fff"; })
    .append("svg:title")
    .text(function(d) {return d.Name+"\n"+d.Age+"\n"+d.Dept;});
// Shows the tooltip box with the items [Name, Age, Dept] and appends it to the SVG dynamically
{ "language": "en", "url": "https://stackoverflow.com/questions/102457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "44" }
Q: Why does std::stack use std::deque by default? Since the only operations required for a container to be used in a stack are: * *back() *push_back() *pop_back() Why is the default container for it a deque instead of a vector? Don't deque reallocations give a buffer of elements before front() so that push_front() is an efficient operation? Aren't these elements wasted since they will never ever be used in the context of a stack? If there is no overhead for using a deque this way instead of a vector, why is the default for priority_queue a vector not a deque also? (priority_queue requires front(), push_back(), and pop_back() - essentially the same as for stack) Updated based on the Answers below: It appears that the way deque is usually implemented is a variable size array of fixed size arrays. This makes growing faster than a vector (which requires reallocation and copying), so for something like a stack which is all about adding and removing elements, deque is likely a better choice. priority_queue requires indexing heavily, as every removal and insertion requires you to run pop_heap() or push_heap(). This probably makes vector a better choice there since adding an element is still amortized constant anyways. A: As the container grows, a reallocation for a vector requires copying all the elements into the new block of memory. Growing a deque allocates a new block and links it to the list of blocks - no copies are required. Of course you can specify that a different backing container be used if you like. So if you have a stack that you know is not going to grow much, tell it to use a vector instead of a deque if that's your preference. A: See Herb Sutter's Guru of the Week 54 for the relative merits of vector and deque where either would do. I imagine the inconsistency between priority_queue and queue is simply that different people implemented them.
{ "language": "en", "url": "https://stackoverflow.com/questions/102459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "109" }
Q: Web Access to Virtual Machine Is there a way to access a web server, such as Windows Server 2003 installed in a virtual machine such as VMware, from the host machine?

A: If VMware is set to use bridged networking, then each guest OS effectively has its own IP address; like brien said, you just point your browser to that address.

A: If you configure your virtual machine to use bridged networking instead of NAT, it will have its own IP address "beside" the host machine, instead of a local IP address "behind" it. Then you can connect to the virtual machine using that IP number. (Disclaimer: I've used VMware Workstation for several years, but not their server products.)

A: Yes, you should just be able to point to the IP address of the VM. How is your VM networking configured?

A: I am doing this all over the place; just make sure that the VM has an IP configured.

A: I believe VMware (Workstation?) also has a built-in virtual network computing (VNC) server that you can connect to - enable it by going to the configuration properties of the VM, and in the last tab there is a checkbox for it.

A: IP address should do it.

A: I faced the same issue. You have to set your network connection to "bridged mode" in your VM. Then you have to find out the IP of your web server. Sometimes web servers redirect to a specific URL. In this case you can edit your hosts file in C:/Windows/System32/drivers/etc/hosts and add your IP with the redirected URL like this: 192.168.0.37 some.url-you.need Then your host can reach the web server. Other machines on your Ethernet network can access the web server as well.
{ "language": "en", "url": "https://stackoverflow.com/questions/102464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is it possible to save metadata in an image? We create thumb images on our server and I'm looking for a way to save metadata (text) in that image. Is that possible? At this moment we use PHP and we create JPG images.

A: Your question is the same as writing EXIF data in PHP. My answer is:

* PEL (PHP Exif Library). A library for reading and writing Exif headers in JPEG and TIFF images using PHP.
* The PHP JPEG Metadata Toolkit. Allows reading, writing and display of the following JPEG metadata formats: EXIF 2.2, XMP / RDF, IPTC-NAA IIM 4.1, etc.
* ExifTool, written in Perl. ExifTool is excellent. It's basically got it all – EXIF, IPTC and XMP support (read/write) and support for manufacturer extensions.

A: I hope this helps you! I modified a class that I found here (thanks debers). And all the references to IPTC tags can be read from this PDF. Code (PHP >= 5.4):

<?
define("IPTC_OBJECT_NAME", "005");
define("IPTC_EDIT_STATUS", "007");
define("IPTC_PRIORITY", "010");
define("IPTC_CATEGORY", "015");
define("IPTC_SUPPLEMENTAL_CATEGORY", "020");
define("IPTC_FIXTURE_IDENTIFIER", "022");
define("IPTC_KEYWORDS", "025");
define("IPTC_RELEASE_DATE", "030");
define("IPTC_RELEASE_TIME", "035");
define("IPTC_SPECIAL_INSTRUCTIONS", "040");
define("IPTC_REFERENCE_SERVICE", "045");
define("IPTC_REFERENCE_DATE", "047");
define("IPTC_REFERENCE_NUMBER", "050");
define("IPTC_CREATED_DATE", "055");
define("IPTC_CREATED_TIME", "060");
define("IPTC_ORIGINATING_PROGRAM", "065");
define("IPTC_PROGRAM_VERSION", "070");
define("IPTC_OBJECT_CYCLE", "075");
define("IPTC_BYLINE", "080");
define("IPTC_BYLINE_TITLE", "085");
define("IPTC_CITY", "090");
define("IPTC_PROVINCE_STATE", "095");
define("IPTC_COUNTRY_CODE", "100");
define("IPTC_COUNTRY", "101");
define("IPTC_ORIGINAL_TRANSMISSION_REFERENCE", "103");
define("IPTC_HEADLINE", "105");
define("IPTC_CREDIT", "110");
define("IPTC_SOURCE", "115");
define("IPTC_COPYRIGHT_STRING", "116");
define("IPTC_CAPTION", "120");
define("IPTC_LOCAL_CAPTION", "121");

class IPTC
{
    var $meta = [];
    var $file = null;

    function __construct($filename)
    {
        $info = null;
        $size = getimagesize($filename, $info);
        if (isset($info["APP13"]))
            $this->meta = iptcparse($info["APP13"]);
        $this->file = $filename;
    }

    function getValue($tag)
    {
        return isset($this->meta["2#$tag"]) ? $this->meta["2#$tag"][0] : "";
    }

    function setValue($tag, $data)
    {
        $this->meta["2#$tag"] = [$data];
        $this->write();
    }

    private function write()
    {
        $mode = 0;
        $content = iptcembed($this->binary(), $this->file, $mode);
        $filename = $this->file;
        if (file_exists($this->file)) unlink($this->file);
        $fp = fopen($this->file, "w");
        fwrite($fp, $content);
        fclose($fp);
    }

    private function binary()
    {
        $data = "";
        foreach (array_keys($this->meta) as $key) {
            $tag = str_replace("2#", "", $key);
            $data .= $this->iptc_maketag(2, $tag, $this->meta[$key][0]);
        }
        return $data;
    }

    function iptc_maketag($rec, $data, $value)
    {
        $length = strlen($value);
        $retval = chr(0x1C) . chr($rec) . chr($data);
        if ($length < 0x8000) {
            $retval .= chr($length >> 8) . chr($length & 0xFF);
        } else {
            $retval .= chr(0x80) . chr(0x04) .
                       chr(($length >> 24) & 0xFF) .
                       chr(($length >> 16) & 0xFF) .
                       chr(($length >> 8) & 0xFF) .
                       chr($length & 0xFF);
        }
        return $retval . $value;
    }
    function dump()
    {
        echo "<pre>";
        print_r($this->meta);
        echo "</pre>";
    }

    # requires GD library installed
    function removeAllTags()
    {
        $this->meta = [];
        $img = imagecreatefromstring(implode(file($this->file)));
        if (file_exists($this->file)) unlink($this->file);
        imagejpeg($img, $this->file, 100);
    }
}

$file = "photo.jpg";
$objIPTC = new IPTC($file);

//set title
$objIPTC->setValue(IPTC_HEADLINE, "A title for this picture");
//set description
$objIPTC->setValue(IPTC_CAPTION, "Some words describing what can be seen in this picture.");

echo $objIPTC->getValue(IPTC_HEADLINE);
?>

A: EXIF, or reuse an old "data-hiding" concept, steganography.

A: Yes, it's possible. You can use the almighty ExifTool Perl utility, which handles nearly every known set of tags, both standard (EXIF, IPTC, Adobe's XMP, etc.) and proprietary ones.

A: Embedding XMP Metadata in Application Files (PDF)
{ "language": "en", "url": "https://stackoverflow.com/questions/102466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Algorithm Issue: letter combinations I'm trying to write a piece of code that will do the following: Take the numbers 0 to 9 and assign one or more letters to each number. For example: 0 = N, 1 = L, 2 = T, 3 = D, 4 = R, 5 = V or F, 6 = B or P, 7 = Z, 8 = H or CH or J, 9 = G When I have a code like 0123, it's an easy job to encode it. It will obviously make up the code NLTD. When a number like 5, 6 or 8 is introduced, things get different. A number like 051 would result in more than one possibility: NVL and NFL It should be obvious that this gets even "worse" with longer numbers that include several digits like 5, 6 or 8. Being pretty bad at mathematics, I have not yet been able to come up with a decent solution that will allow me to feed the program a bunch of numbers and have it spit out all the possible letter combinations. So I'd love some help with it, 'cause I can't seem to figure it out. Dug up some information about permutations and combinations, but no luck. Thanks for any suggestions/clues. The language I need to write the code in is PHP, but any general hints would be highly appreciated. Update: Some more background: (and thanks a lot for the quick responses!) The idea behind my question is to build a script that will help people to easily convert numbers they want to remember to words that are far more easily remembered. This is sometimes referred to as "pseudo-numerology". I want the script to give me all the possible combinations that are then held against a database of stripped words. These stripped words just come from a dictionary and have all the letters I mentioned in my question stripped out of them. That way, the number to be encoded can usually easily be related to one or more database records. And when that happens, you end up with a list of words that you can use to remember the number you wanted to remember.

A: It can be done easily recursively. The idea is that to handle the whole code of size n, you must first handle the first n - 1 digits. Once you have all answers for those n - 1 digits, the answers for the whole code are deduced by appending the correct char(s) for the last digit to each of them.

A: There's actually a much better solution than enumerating all the possible translations of a number and looking them up: Simply do the reverse computation on every word in your dictionary, and store the string of digits in another field. So if your mapping is: 0 = N, 1 = L, 2 = T, 3 = D, 4 = R, 5 = V or F, 6 = B or P, 7 = Z, 8 = H or CH or J, 9 = G your reverse mapping is: N = 0, L = 1, T = 2, D = 3, R = 4, V = 5, F = 5, B = 6, P = 6, Z = 7, H = 8, J = 8, G = 9 Note there's no mapping for 'ch', because the 'c' will be dropped, and the 'h' will be converted to 8 anyway. Then, all you have to do is iterate through each letter in the dictionary word, output the appropriate digit if there's a match, and do nothing if there isn't. Store all the generated digit strings as another field in the database. When you want to look something up, just perform a simple query for the number entered, instead of having to do tens (or hundreds, or thousands) of lookups of potential words.
A: The general structure you want to hold your number -> letter assignments is an array of arrays, similar to:

// 0 = N, 1 = L, 2 = T, 3 = D, 4 = R, 5 = V or F, 6 = B or P, 7 = Z,
// 8 = H or CH or J, 9 = G
$numberMap = array(
    0 => array("N"),
    1 => array("L"),
    2 => array("T"),
    3 => array("D"),
    4 => array("R"),
    5 => array("V", "F"),
    6 => array("B", "P"),
    7 => array("Z"),
    8 => array("H", "CH", "J"),
    9 => array("G"),
);

Then, a bit of recursive logic gives us a function similar to:

function GetEncoding($number) {
    global $numberMap;
    $ret = array();
    for ($i = 0; $i < strlen($number); $i++) {
        // We're just translating here, nothing special.
        // $var + 0 is a cheap way of forcing a variable to be numeric
        $ret[] = $numberMap[$number[$i] + 0];
    }
    return $ret;
}

function PrintEncoding($enc, $string = "") {
    // If we're at the end of the line, then print!
    if (count($enc) === 0) {
        print $string."\n";
        return;
    }

    // Otherwise, soldier on through the possible values.
    // Grab the next 'letter' and cycle through the possibilities for it.
    foreach ($enc[0] as $letter) {
        // And call this function again with it!
        PrintEncoding(array_slice($enc, 1), $string.$letter);
    }
}

Three cheers for recursion! This would be used via:

PrintEncoding(GetEncoding("052384"));

And if you really want it as an array, play with output buffering and explode using "\n" as your split string.

A: This kind of problem is usually resolved with recursion. In Ruby, one (quick and dirty) solution would be

@values = Hash.new([])
@values["0"] = ["N"]
@values["1"] = ["L"]
@values["2"] = ["T"]
@values["3"] = ["D"]
@values["4"] = ["R"]
@values["5"] = ["V","F"]
@values["6"] = ["B","P"]
@values["7"] = ["Z"]
@values["8"] = ["H","CH","J"]
@values["9"] = ["G"]

def find_valid_combinations(buffer, number)
  first_char = number.shift
  @values[first_char].each do |key|
    if number.length == 0 then
      puts buffer + key
    else
      find_valid_combinations(buffer + key, number.dup)
    end
  end
end

find_valid_combinations("", ARGV[0].split(""))

And if you run this from the command line you will get:

$ ruby r.rb 051
NVL
NFL

This is related to brute-force search and backtracking.

A: Here is a recursive solution in Python.

#!/usr/bin/env python
import sys

ENCODING = {'0':['N'],
            '1':['L'],
            '2':['T'],
            '3':['D'],
            '4':['R'],
            '5':['V', 'F'],
            '6':['B', 'P'],
            '7':['Z'],
            '8':['H', 'CH', 'J'],
            '9':['G']
            }

def decode(str):
    if len(str) == 0:
        return ''
    elif len(str) == 1:
        return ENCODING[str]
    else:
        result = []
        for prefix in ENCODING[str[0]]:
            result.extend([prefix + suffix for suffix in decode(str[1:])])
        return result

if __name__ == '__main__':
    print decode(sys.argv[1])

Example output:

$ ./demo 1
['L']
$ ./demo 051
['NVL', 'NFL']
$ ./demo 0518
['NVLH', 'NVLCH', 'NVLJ', 'NFLH', 'NFLCH', 'NFLJ']

A: Could you do the following: Create a results array. Create an item in the array with value "". Loop through the numbers, say 051, analyzing each one individually. Each time a one-to-one match between a number and a letter is found, add the correct value to all items in the results array. So "" becomes N. Each time a one-to-many match is found, add new rows to the results array with one option, and update the existing results with the other option. So N becomes NV and a new item NF is created. Then the last number is a one-to-one match, so the items in the results array become NVL and NFL. To produce the results, loop through the results array, printing them, or whatever.

A: Let p(n) be a list of all possible letter combinations of a given number string s up to the nth digit.
Then, the following algorithm will generate p(n+1):

digit = s[n+1];
foreach (letter l that digit maps to) {
    foreach (entry e in p(n)) {
        newEntry = append l to e;
        add newEntry to p(n+1);
    }
}

The first iteration is somewhat of a special case, since p(-1) is undefined. You can simply initialize p(0) as the list of all possible characters for the first character. So, your 051 example:

Iteration 0:
p(0) = {N}

Iteration 1:
digit = 5
foreach({V, F}) {
    foreach(p(0) = {N}) {
        newEntry = N + V or N + F
        p(1) = {NV, NF}
    }
}

Iteration 2:
digit = 1
foreach({L}) {
    foreach(p(1) = {NV, NF}) {
        newEntry = NV + L or NF + L
        p(2) = {NVL, NFL}
    }
}

A: The form you want is probably something like:

function combinations($str) {
    global $codes;   // the digit-to-letters map, e.g. $codes[0] = array("N")
    $l = strlen($str);
    $results = array();
    if ($l == 0) {
        return $results;
    }
    if ($l == 1) {
        foreach ($codes[$str[0]] as $code) {
            $results[] = $code;
        }
        return $results;
    }
    $cur = $str[0];
    $combs = combinations(substr($str, 1));
    foreach ($codes[$cur] as $code) {
        foreach ($combs as $comb) {
            $results[] = $code . $comb;
        }
    }
    return $results;
}

This is ugly, pidgin-PHP, so please verify it first. The basic idea is to generate every combination of the string from [1..n] and then prepend to the front of all those combinations each possible code for str[0]. Bear in mind that in the worst case this will have performance exponential in the length of your string, because that much ambiguity is actually present in your coding scheme.

A: The trick is not only to generate all possible letter combinations that match a given number, but to select the letter sequence that is easiest to remember. A suggestion would be to run the soundex algorithm on each of the sequences and try to match against an English language dictionary such as Wordnet to find the most 'real-word-sounding' sequences.
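For completeness, the same recursion sketched in C# (the digit map mirrors the one in the question; the class and method names are mine):

using System;
using System.Collections.Generic;
using System.Linq;

static class Phonetics
{
    static readonly Dictionary<char, string[]> Map = new Dictionary<char, string[]>
    {
        {'0', new[]{"N"}}, {'1', new[]{"L"}}, {'2', new[]{"T"}},
        {'3', new[]{"D"}}, {'4', new[]{"R"}}, {'5', new[]{"V","F"}},
        {'6', new[]{"B","P"}}, {'7', new[]{"Z"}},
        {'8', new[]{"H","CH","J"}}, {'9', new[]{"G"}}
    };

    public static IEnumerable<string> Decode(string digits)
    {
        if (digits.Length == 0) return new[] { "" };
        // prepend each letter for the first digit to every decoding of the rest
        var rest = Decode(digits.Substring(1)).ToList();
        return Map[digits[0]].SelectMany(letter => rest.Select(s => letter + s));
    }
}

// Phonetics.Decode("051") yields NVL, NFL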
{ "language": "en", "url": "https://stackoverflow.com/questions/102468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Linq to SQL Association combo box order This is kind of a weird question and more of an annoyance than a technical brick wall. When I'm adding tables and such using the Linq-to-SQL designer and I want to create an association using the dialogs, I right click on one of the target tables and choose Add > Association as normal and I am presented with the Association Editor. The Parent Class: and Child Class: combo boxes are filled with the tables that currently exist in the designer. But they are not in alphabetical order; they are in the order in which they were added to the designer. Can I change the order of these combo boxes? And if I can, where do I do this?

A: I went poking around some and found an answer. The dbml file is an XML file that holds all of the basic information about the SQL tables, connections, etc. needed for Linq-to-SQL. By reordering the Table elements, you affect the order of the combo boxes used in the Association editor.
{ "language": "en", "url": "https://stackoverflow.com/questions/102470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Subversion branch reintegration When a branch is reintegrated to the trunk, is that branch effectively dead? Can you make modifications to the branch after the reintegration and merge those back into the trunk at a later date?

A: You can do it technically; your branch is not dead nor disabled, but it is not recommended to merge from branch to trunk after reintegration. You can find a full discussion about the reason for that here: Subversion merge reintegrate Basically, it says that it is possible to merge your changes again to the trunk, but since reintegration forces you to merge from trunk to branch prior to the reintegrate operation, you'll be facing a reflective/cyclic merge, which is very problematic in Subversion 1.5. According to the article, it is recommended to delete your reintegrated branch immediately after reintegration and create a new one with the same (or a different) name instead. This is a known Subversion behavior which will be addressed in a future version (probably in 1.6).

A: After you reintegrate from a branch into the trunk, you should do one of two things:

* Delete your branch. This is the easiest, but it makes it harder to see the branch's history.
* Tell your branch not to merge the reintegrate commit. If you reintegrate to the trunk, and commit it as revision X, you can run this command on your branch: svn merge --record-only -c X url-to-trunk. However, you shouldn't do this if you made any changes as part of the commit, other than the merge itself. Any other changes will never make it back into your branch.

A: Some advice on merging the changes back if someone makes changes to the branch multiple times (pre 1.5): Remember at which revision you did the merge! Either write the revision numbers down somewhere, or (which is easier) make a tag. (You can of course find it out later, but that's a PITA.) Example: You have a repository layout like this:

/your_project
  /trunk
  /branches
  /tags

Let's say it is a web application, and you have planned to make a release. You would create a tag, and from that (or from trunk) a branch in which you do the bugfixes:

/your_project
  /trunk
  /branches
    /1.0.0-bugfixes
  /tags
    /1.0.0

Doing it this way, you can integrate the new features in the trunk. All bugfixes would happen only within the bugfix branch, and before each release you make a tag of the current version (now from the bugfix branch). Let's assume you did a fair amount of bugfixing and released those to the production server, and you need those fixes desperately in the current trunk:

/your_project
  /trunk
  /branches
    /1.0.0-bugfixes
  /tags
    /1.0.0
    /1.0.1
    /1.0.2

You can now just integrate the changes between 1.0.0 and 1.0.2 into your trunk (assuming you are in your working copy):

svn merge http://rep/your_project/tags/1.0.0 http://rep/your_project/tags/1.0.2 .

This is what you should remember. You already merged the changes between 1.0.0 and 1.0.2 onto the trunk. Let's assume there are more changes in the current production release:

/your_project
  /trunk
  /branches
    /1.0.0-bugfixes
  /tags
    /1.0.0
    /1.0.1
    /1.0.2
    /1.0.3
    /1.0.4

You are now ready to release the new version from trunk, but the latest bugfixes are still missing:

svn merge http://rep/your_project/tags/1.0.2 http://rep/your_project/tags/1.0.4 .

Now you have all changes merged on your trunk, and you can make your release (don't forget to test it first).
/your_project
  /trunk
  /branches
    /1.0.0-bugfixes
    /1.1.0-bugfixes
  /tags
    /1.0.0
    /1.0.1
    /1.0.2
    /1.0.3
    /1.0.4
    /1.1.0

A: No, the branch is still alive, but, at that moment, it is exactly the same as the trunk. If you keep developing on the branch, you are free to re-merge with the trunk later on.

A: As everyone has already said here: the branch isn't dead and commits to the branch can continue just fine. Sometimes, though, you want to kill the branch after the merge. The only reliable solution is to delete the branch. The downside is that it's then harder to find the branch again if you wanted to have a look at it, say, for historical reasons. So many people leave the "important" branches lying around and have an agreement not to change them. I wish there was a way to mark a branch dead/readonly, thus ensuring nobody can commit to it until further notice.

A: Actually, you need to do a --record-only merge from trunk into your branch of the revision that was created by the --reintegrate commit:

$ cd trunk
$ svn merge --reintegrate ^my-branch
$ svn commit
Committed revision 555.
# This revision is ^^^^ important

And now you record it:

$ cd my-branch
$ svn merge --record-only -c 555 ^trunk
$ svn commit

You are happy to keep the branch now. More information is in Chapter 4. Branching and Merging, Advanced Merging.

A: You can merge from a branch to trunk, or trunk to a branch, as many times as you want.

A: First of all, you should upgrade your Subversion client and server if you still use Subversion 1.7 or older. There is no reason to be using very old Subversion releases. As of 2016, the current version is Subversion 1.9. SVN 1.8 is also supported now and still receives bug fixes. The problem you ask about was solved in Subversion 1.8. Beginning with SVN 1.8, the --reintegrate option has been deprecated. Reintegrate merges are now performed automatically. See the Subversion 1.8 Release Notes entry related to the improvement. Read SVNBook 1.8 | Reintegrating a branch: If you choose not to delete your branch after reintegrating it to the trunk you may continue to perform sync merges from the trunk and then reintegrate the branch again. If you do this, only the changes made on your branch after the first reintegrate are merged to the trunk. ... Only Subversion 1.8 supports this reuse of a feature branch. Earlier versions require some special handling before a feature branch can be reintegrated more than once. See the earlier version of this chapter for more information: http://svnbook.red-bean.com/en/1.7/svn.branchmerge.basicmerging.html#svn.branchemerge.basicmerging.reintegrate

A: When you do a merge, you specify the target. You can merge the differences of TreeA and TreeB to TreeC if you like. As Chris implies, your question doesn't really make that much sense. If you merge your branch into the trunk, the branch remains untouched. If the branch isn't needed afterwards, you could delete it.

A: You can keep on developing on the branch. The feature you will need is merge tracking, which is in Subversion 1.5; this means that additional merges from the branch only include new changes.
{ "language": "en", "url": "https://stackoverflow.com/questions/102472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "69" }
Q: Is there a means to produce a changelog in SVN When committing to SVN I can add a top-level commit message to detail what is being committed, but I would ideally like a means to comment on the individual files and what has changed within them. I have seen something similar in previous employment, but this was using CVS (and I can't recall whether this was achieved with a home-brew script to produce the skeleton file). I have had a look at changelists, but again I don't think (although I am willing to be proved wrong) that this gives the kind of granularity outlined below. Ideally I am looking for something along the lines of:

Foo.vb
* Added new function bar

Bar.vb
* Removed function foo
* Added functionality in xyz to do abc
+/- Modified function to log error

A: You could just commit whenever you're done with one particular task. That should lead to better comments anyway. A comment reading "Implemented E-Mail verification" on the three files necessary tells me a lot more than "added function verify_email". I can see the latter myself in the diff.

A: I would just do this in the individual commit message. TortoiseSVN has filename autocompletion, so that greatly aids in this. Another thing you could do is svn st before you commit and copy/paste the filenames into your commit message. Oh, and be sure to strongly question the value of this. I know some OSS projects (Linux?) require this sort of fidelity, but for many projects this is just noise. Diff can tell you much more than this, and more accurately. One other thing you may want to consider is using Git. Git allows you to commit locally, in smaller steps. You then push to the master server all of your commits individually or squashed into a single commit w/ all the commit messages in a single message. That was a way simplified explanation, but it probably is worth checking out.

A: One of the essential differences between SVN and CVS is that changes are committed atomically. In CVS each file has its own version, but in SVN the version is for the whole project and includes all the files checked in together. Here are four ideas for a solution:

* Check in each of your programs separately, with its own log message. This may mean that if, say, you check in five files, you will "use up" five versions, of which four may result in a broken build.
* Do your development on a separate path (i.e. your own private branch), do as above, then at strategic moments merge your branch to the trunk.
* Check in everything together, and keep the individual records as comments in the program header. This may mean (a little) extra work, but you'd have to compose the individual log messages anyway.
* Do a single checkin for all the files, but with a nice full log message detailing each piece for each file.

A: I've written a project for doing this kind of thing called MOAP. One of its functions is to generate a ChangeLog entry from your local diff (currently supporting Bazaar, CVS, SVN, Git and darcs). You do this by running 'moap changelog prepare' or 'moap cl prep'. That entry can include functions changed as well if you enable the option. You then go and change that entry, describing your changes. You can remove files you don't want committed as part of your next commit. Then, you can run 'moap changelog commit' to commit the changes described in the ChangeLog entry. It will only commit the files listed there, and leave all your other changes local. Hope that helps!
A: That kind of result could be obtained if there are some rules regarding the way comments are written inside each of the committed files. These comments can then be extracted by an svn trigger.
{ "language": "en", "url": "https://stackoverflow.com/questions/102474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Explain Plan Cost vs Execution Time In the past, I have found the "Cost" in the execution plan to be a good indicator of relative execution time. Why is this case different? Am I a fool for thinking the execution plan has relevance? What specifically can I try to improve v_test performance? Thank you. Using Oracle 10g, I have a simple view defined below

create or replace view v_test as
select distinct u.bo_id as bo_id, upper(trim(d.dept_id)) as dept_id
from cust_bo_users u
join cust_bo_roles r on u.role_name = r.role_name
join cust_dept_roll_up_tbl d
  on (r.region is null or trim(r.region) = trim(d.chrgback_reg))
 and (r.prod_id is null or trim(r.prod_id) = trim(d.prod_id))
 and (r.div_id is null or trim(r.div_id) = trim(d.div_id))
 and (r.clus_id is null or trim(r.clus_id) = trim(d.clus_id))
 and (r.prod_ln_id is null or trim(r.prod_ln_id) = trim(d.prod_ln_id))
 and (r.dept_id is null or trim(r.dept_id) = trim(d.dept_id))

defined to replace the following view

create or replace view v_bo_secured_detail as
select distinct Q.BO_ID, Q.DEPT_ID
from (select U.BO_ID BO_ID, UPPER(trim(D.DEPT_ID)) DEPT_ID
        from CUST_BO_USERS U, CUST_BO_ROLES R, CUST_DEPT_ROLL_UP_TBL D
       where U.ROLE_NAME = R.ROLE_NAME
         and R.ROLE_LEVEL = 'REGION'
         and trim(R.REGION) = UPPER(trim(D.CHRGBACK_REG))
      union all
      select U.BO_ID BO_ID, UPPER(trim(D.DEPT_ID)) DEPT_ID
        from CUST_BO_USERS U, CUST_BO_ROLES R, CUST_DEPT_ROLL_UP_TBL D
       where U.ROLE_NAME = R.ROLE_NAME
         and R.ROLE_LEVEL = 'RG_PROD'
         and trim(R.REGION) = UPPER(trim(D.CHRGBACK_REG))
         and trim(R.PROD_ID) = UPPER(trim(D.PROD_ID))
      union all
      select U.BO_ID BO_ID, UPPER(trim(D.DEPT_ID)) DEPT_ID
        from CUST_BO_USERS U, CUST_BO_ROLES R, CUST_DEPT_ROLL_UP_TBL D
       where U.ROLE_NAME = R.ROLE_NAME
         and R.ROLE_LEVEL = 'PROD'
         and trim(R.PROD_ID) = UPPER(trim(D.PROD_ID))
      union all
      select U.BO_ID BO_ID, UPPER(trim(D.DEPT_ID)) DEPT_ID
        from CUST_BO_USERS U, CUST_BO_ROLES R, CUST_DEPT_ROLL_UP_TBL D
       where U.ROLE_NAME = R.ROLE_NAME
         and R.ROLE_LEVEL = 'DIV'
         and trim(R.DIV_ID) = UPPER(trim(D.DIV_ID))
      union all
      select U.BO_ID BO_ID, UPPER(trim(D.DEPT_ID)) DEPT_ID
        from CUST_BO_USERS U, CUST_BO_ROLES R, CUST_DEPT_ROLL_UP_TBL D
       where U.ROLE_NAME = R.ROLE_NAME
         and R.ROLE_LEVEL = 'RG_DIV'
         and trim(R.REGION) = UPPER(trim(D.CHRGBACK_REG))
         and trim(R.DIV_ID) = UPPER(trim(D.DIV_ID))
      union all
      select U.BO_ID BO_ID, UPPER(trim(D.DEPT_ID)) DEPT_ID
        from CUST_BO_USERS U, CUST_BO_ROLES R, CUST_DEPT_ROLL_UP_TBL D
       where U.ROLE_NAME = R.ROLE_NAME
         and R.ROLE_LEVEL = 'CLUS'
         and trim(R.CLUS_ID) = UPPER(trim(D.CLUS_ID))
      union all
      select U.BO_ID BO_ID, UPPER(trim(D.DEPT_ID)) DEPT_ID
        from CUST_BO_USERS U, CUST_BO_ROLES R, CUST_DEPT_ROLL_UP_TBL D
       where U.ROLE_NAME = R.ROLE_NAME
         and R.ROLE_LEVEL = 'RG_CLUS'
         and trim(R.REGION) = UPPER(trim(D.CHRGBACK_REG))
         and trim(R.CLUS_ID) = UPPER(trim(D.CLUS_ID))
      union all
      select U.BO_ID BO_ID, UPPER(trim(D.DEPT_ID)) DEPT_ID
        from CUST_BO_USERS U, CUST_BO_ROLES R, CUST_DEPT_ROLL_UP_TBL D
       where U.ROLE_NAME = R.ROLE_NAME
         and R.ROLE_LEVEL = 'PROD_LN'
         and trim(R.PROD_LN_ID) = UPPER(trim(D.PROD_LN_ID))
      union all
      select U.BO_ID BO_ID, UPPER(trim(R.DEPT_ID)) DEPT_ID
        from CUST_BO_USERS U, CUST_BO_ROLES R
       where U.ROLE_NAME = R.ROLE_NAME
         and R.ROLE_LEVEL = 'DEPT') Q

with the goal of removing the dependency on the ROLE_LEVEL column. The execution plan cost for v_test is significantly lower than that of v_bo_secured_detail for simple select * from <view> where bo_id='value' queries.
And it is significantly lower when used in a real-world query:

select CT_REPORT.RPT_KEY, CT_REPORT_ENTRY.RPE_KEY, CT_REPORT_ENTRY.CUSTOM16,
       Exp_Sub_Type.value, min(CT_REPORT_PAYMENT_CONF.PAY_DATE), CT_REPORT.PAID_DATE
from CT_REPORT, <VIEW> SD, CT_REPORT_ENTRY, CT_LIST_ITEM_LANG Exp_Sub_Type,
     CT_REPORT_PAYMENT_CONF, CT_STATUS_LANG Payment_Status
where (CT_REPORT_ENTRY.RPT_KEY = CT_REPORT.RPT_KEY)
  and (Payment_Status.STAT_KEY = CT_REPORT.PAY_KEY)
  and (Exp_Sub_Type.LI_KEY = CT_REPORT_ENTRY.CUSTOM9 and Exp_Sub_Type.LANG_CODE = 'en')
  and (CT_REPORT.RPT_KEY = CT_REPORT_PAYMENT_CONF.RPT_KEY)
  and (SD.BO_ID = 'JZHU9')
  and (SD.DEPT_ID = UPPER(CT_REPORT_ENTRY.CUSTOM5))
  and (Payment_Status.name = 'Payment Confirmed'
  and (Payment_Status.LANG_CODE = 'en')
  and CT_REPORT.PAID_DATE > to_date('01/01/2008', 'mm/dd/yyyy')
  and Exp_Sub_Type.value != 'Korea')
group by CT_REPORT.RPT_KEY, CT_REPORT_ENTRY.RPE_KEY, CT_REPORT_ENTRY.CUSTOM16,
         Exp_Sub_Type.value, CT_REPORT.PAID_DATE

The execution times are WILDLY different: the v_test view takes 15 hours, and v_bo_secured_detail takes a few seconds. Thank you all who responded. This is one to remember for me: the place where the theory and mathematics of the expressions meet the reality of hardware-based execution. Ouch.

A: An execution plan is theory; the execution time is reality. The plan shows you how the engine goes about performing your query, but some steps might cause an inordinate amount of work to resolve the query. The use of "x is null or x = y" smells bad. If r and d are big tables you might have some sort of combinatorial explosion hitting you, and the request cycles endlessly through large lists of disk blocks. I imagine you're seeing lots of I/O during the execution. On the other hand, the unioned selects are short and sweet, and so probably reuse lots of disk blocks that are still lying around from the earlier selects, and/or you have some degree of parallelism benefitting from reads on the same disk blocks. Also, using trim() and upper() everywhere looks a bit suspicious. If your data are so unclean, it might be worth running some periodic housecleaning from time to time, so that you can say "x = y" and know it works. Update: you asked for tips to improve v_test. Clean your data so that trim() and upper() are unnecessary. They may preclude indexes from being used (although that would be affecting the unioned select version as well). If you can't get rid of "x is null or x = y" then y = nvl(x,'does-not-exist') might have better characteristics (assuming 'does-not-exist' is a "can't happen" id value).

A: As the Oracle documentation says, the cost is the estimated cost relative to a particular execution plan. When you tweak the query, the particular execution plan that costs are calculated relative to can change. Sometimes dramatically. The problem with v_test's performance is that Oracle can think of no way to execute it other than performing a nested loop: for each row in cust_bo_roles, scan all of cust_dept_roll_up_tbl to find a match. If the tables are of size n and m, this takes n*m time, which is slow for large tables. By contrast, v_bo_secured_detail is set up so that it is a series of queries, each of which can be done through some other mechanism. (Oracle has a number it may use, including using an index, building a hash on the fly, or sorting the datasets and merging them. These operations are all O(n*log(n)) or better.) A small series of fast queries is fast.
As painful as it is, if you want this query to be fast then you need to break it out like the previous query did.

A: One aspect of low cost but high execution time is that when you are looking at large data sets, it is often more efficient on the whole to do things in bulk, whereas if you want a quick result, it is more efficient to do as little work as possible to get the first record. The repetitiveness of doing the small operations that give the appearance of a quick response will not likely give a good result when working on the large sets. Many times, when you want a quick result, the USE_NL optimizer hint will help. Also, your test view relies on IS NULL... IS NULL cannot use an index, nor can a condition that applies a function such as trim() to the table-side column.

A: Have you gathered optimiser stats on all the underlying tables? Without them the optimiser's estimates may be wildly out of kilter with reality.

A: When you say the "query plan is lower", do you mean it is shorter, or that the actual cost estimates are lower? One obvious problem with your replacement view is that the join with cust_dept_roll_up_tbl uses almost exclusively unindexable criteria (the "is null" tests can be satisfied by an index, but the ones involving calling trim on each argument can't be), so the planner has to make at least one, and probably several, sequential scans of the table to satisfy the query. I'm not sure if Oracle has this limitation, but many DBs can only do a single index scan per included table, so even if you clean up your join conditions to be indexable, it may be able to satisfy only one condition with an index scan and have to use sequential scans for the remainder.

A: To elaborate on the cost a bit. In Oracle 9/10g, simplifying a bit, the cost is determined by the formula:

Cost = (SrCount * SrTime + MbrCount * MbrTime + CpuCyclesCount * CpuCycleTime) / SrTime

where SrCount is the total number of single-block reads made, SrTime is the average time of one single-block read according to gathered system statistics, MbrCount and MbrTime are the same for multiblock reads (the ones used during full table scans and index fast full scans), and the CPU-related metrics are self-explanatory; everything is divided by the single-block read time.
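To make that formula concrete, here is a toy calculation with invented numbers (illustrative only, not from a real system): suppose a plan performs 1000 single-block reads at 5 ms each, 100 multiblock reads at 20 ms each, and 10,000,000 CPU cycles at 0.000001 ms per cycle. Then Cost = (1000*5 + 100*20 + 10000000*0.000001) / 5 = (5000 + 2000 + 10) / 5 = 1402. Dividing by the single-block read time is what makes the cost a unitless figure expressed in equivalent single-block reads, and it is also why two plans with similar costs can have wildly different wall-clock times when the estimates feeding the formula are wrong.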
{ "language": "en", "url": "https://stackoverflow.com/questions/102477", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: ASP.Net: User controls added to placeholder dynamically cannot retrieve values I am adding some user controls dynamically to a PlaceHolder server control. My user control consists of some labels and some textbox controls. When I submit the form and try to view the contents of the textboxes (within each user control) on the server, they are empty. When the postback completes, the textboxes have the data that I entered prior to postback. This tells me that the text in the boxes is being retained through ViewState. I just don't know why I can't find them when I'm debugging. Can someone please tell me why I would not be seeing the data the user entered on the server? Thanks for any help.

A: This is based on the .NET v1 event sequence, but it should give you the idea:

* Initialize (Init event)
* Begin Tracking View State (checks if postback)
  * Load View State (if postback)
  * Load Postback Data (if postback)
* Load (Load event)
  * Raise Changed Events (if postback)
  * Raise Postback Events (if postback)
* PreRender (PreRender event)
* Save View State
* Render
* Unload (Unload event)
* Dispose

As you can see, the loading of ViewState data back into the controls happens before the Load event. So in order for your dynamically-added controls to "retain" those values, they have to be present for the ASP.NET page to reload the values in the first place. You would have to re-create those controls at the Init stage, before Load View State occurs.

A: I figured out yesterday that you can actually make your app work like normal by loading the control tree right after the LoadViewState event is fired. If you override the LoadViewState event, call MyBase.LoadViewState and then put your own code to regenerate the controls right after it, the values for those controls will be available on page load. In one of my apps I use a viewstate field to hold the ID or the array info that can be used to recreate those controls.

Protected Overrides Sub LoadViewState(ByVal savedState As Object)
    MyBase.LoadViewState(savedState)
    If IsPostBack Then
        CreateMyControls()
    End If
End Sub

A: I believe you'll need to add the UserControl to the PlaceHolder during the Init phase of the page life cycle, in order to get the ViewState to be filled in by the Load phase to read those values. Is this the order in which you're loading those?

A: Ensure you are defining your dynamic controls at the class level and adding them to the ASP container:

Private dynControl As ASP.MyNamespace_MyControl_ascx

And when you instantiate the control, ensure you call LoadControl so the object is added properly:

dynControl = CType(LoadControl("~/MyNamespace/MyControl/MyControl.ascx"), ASP.MyNamespace_MyControl_ascx)

A: You have to create your controls in the Page_PreInit event handler. The ASP.NET server control model is tricky; you have to fully understand the page lifecycle to do it right.

A: As others have said, any form of control manipulation must be done before viewstate is created. Here is a good link on the page lifecycle to help you out: http://msdn.microsoft.com/en-us/library/ms178472.aspx

A: We have experienced the same thing and have handled it by using ghost controls on Page_Load that have the exact same .ID, so that the postback picks up the events and the data. As others said, the problem is adding the control dynamically after the init stage: the view state has already been loaded by then, so controls added later don't get their values restored. Hope this helps a bit.
A: I also want to add that I've seen user controls work the way that you'd expect them to just by setting the Control.ID property at run time. If you do not set the ID, items may get built in a different order and work oddly.
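Pulling the advice above together, a minimal C# sketch of the usual fix (the control path, the placeholder name and GetDynamicControlCount are hypothetical; the two essential points are recreating the controls no later than Init and assigning stable IDs):

protected override void OnInit(EventArgs e)
{
    base.OnInit(e);

    // Recreate exactly the same controls, with the same IDs, on every request.
    // ViewState and posted values are matched back up by ID, so controls that
    // are added later in the lifecycle (or whose IDs change) come back empty.
    int count = GetDynamicControlCount(); // hypothetical: e.g. from Session or a hidden field
    for (int i = 0; i < count; i++)
    {
        Control ctl = LoadControl("~/Controls/MyControl.ascx");
        ctl.ID = "dyn" + i;
        PlaceHolder1.Controls.Add(ctl);
    }
}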
{ "language": "en", "url": "https://stackoverflow.com/questions/102483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Compatible encryption between C# and PHP, ColdFusion, Ruby, Python We're developing a service that will accept a POST request. Some of the POST data will need to be encrypted before the POST as it will be stored in hidden fields on a form. The application is written in C#, but we want third party clients to be able to easily integrate with it. We find that most clients use PHP, Classic ASP or VB.Net. The third parties should only be doing the encryption. We'd do the decryption. There is no two-way communication. What are the most compatible combinations of encryption algorithm, padding mode and other options?

A: Assuming that you have a safe way of sharing a key (whether RSA encryption of it, retrieval over an SSH or HTTPS link, or calling the other developer on a secured phone line), any of the major modern encryption algorithms (like AES, as mentioned by @Ed Haber) would be suitable. I would second his suggestion of AES. There should be libraries for PHP, VB, Ruby, etc. However, remember that with "no two-way communication" you will have to find an out-of-channel method for securely getting the symmetric key to the encrypting party.

A: If you mean that it should be impossible for third parties to decrypt data, then you will want to use an asymmetric encryption algorithm such as RSA. This will allow the third party to encrypt data with your public key, and then only you can decrypt the data with your private key, which you do not disclose. There should be implementations of RSA available for all the languages you mentioned. If you don't care whether the third party can decrypt the data, then AES is the way to go. You will have one key which you share with the third parties. This key is used both for encryption and decryption.

A: I would use AES for the bulk data encryption and RSA for encrypting the AES key. If the data is small enough then just encrypt the whole thing with RSA.

A: Ed Haber said:

"I would use AES for the bulk data encryption and RSA for encrypting the AES Key. If the data is small enough then just encrypt the whole thing with RSA."

I think this is a good solution. What I would do is have your application publish an API for getting a public RSA key. When a third party wants to send you something, it gets the public key. It then generates a session key to do the actual encryption using a block cipher (i.e. AES), and sends the key to you by encrypting it with your public key. You decrypt the session key with your private key. The third party then encrypts the data it wants to send you with AES (using a padding scheme that you also publish) and sends it to you. You decrypt it using the session key.

There are some problems with the method above. Since you are not sending any information (other than publishing your public key), you cannot control how the session key is generated. This means that third parties can use very insecure ways of generating the session key and you will never know. A second problem is that everyone who wants to send you data has to pad data for AES in the same way you do, so you will have to make sure everyone co-ordinates. The second issue isn't too big, but the first could be a problem, especially if you don't trust the third parties all that much to generate really good session keys from a cryptographically secure random number generator.

A: You could very easily implement your own XOR key-based bit encryption. With a little thought and ingenuity, you can come up with something that's more than suitable for your application.
Here's a PHP example:

function XOREncryption($InputString, $KeyPhrase){
    $KeyPhraseLength = strlen($KeyPhrase);
    for ($i = 0; $i < strlen($InputString); $i++){
        $rPos = $i % $KeyPhraseLength;
        $r = ord($InputString[$i]) ^ ord($KeyPhrase[$rPos]);
        $InputString[$i] = chr($r);
    }
    return $InputString;
}

A: ColdFusion has the encrypt and decrypt functions capable of handling a range of algorithms and encodings, including the AES recommended above. Information at: http://www.cfquickdocs.com/cf8/?getDoc=encrypt#Encrypt

Quick example code:

Key = generateSecretKey( 'AES' , 128 )
EncryptedText = encrypt( Text , Key , 'AES' , 'Hex' )
Text = decrypt( EncryptedText , Key, 'AES' , 'Hex' )

Similar functionality is available with this library for PHP: http://www.chilkatsoft.com/p/php_aes.asp ...and Java, Python, Ruby, and others...

http://www.example-code.com/java/crypt2_aes_matchPhp.asp
http://www.example-code.com/python/aes_stringEncryption.asp

A: Sounds like RSA is the algorithm for you.

A: Why not have your server exposed over HTTPS? That way, any client which can handle HTTPS can consume the service securely.
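To tie the AES suggestions above to the C# side, here is a minimal decryption sketch. It assumes a pre-shared 128-bit key and IV, and uses CBC mode with PKCS7 padding and Base64 transport - those particular choices are illustrative, not mandated; whatever you pick simply has to match on the encrypting side:

using System;
using System.Security.Cryptography;
using System.Text;

static string DecryptAes(string base64Cipher, byte[] key, byte[] iv)
{
    using (RijndaelManaged aes = new RijndaelManaged())
    {
        aes.Key = key;               // 16 bytes for AES-128
        aes.IV = iv;                 // 16 bytes (the AES block size)
        aes.Mode = CipherMode.CBC;
        aes.Padding = PaddingMode.PKCS7;

        byte[] cipher = Convert.FromBase64String(base64Cipher);
        using (ICryptoTransform decryptor = aes.CreateDecryptor())
        {
            byte[] plain = decryptor.TransformFinalBlock(cipher, 0, cipher.Length);
            return Encoding.UTF8.GetString(plain);
        }
    }
}

As the earlier answers note, distributing the key securely (RSA, HTTPS, out-of-band) is a separate problem that this sketch does not solve.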
{ "language": "en", "url": "https://stackoverflow.com/questions/102496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Creating IDL for MAPI-MIME conversion I'm trying to create the IDL for the IConverterSession interface and I'm confused by the definition of the MIMETOMAPI method. It specifies the LPMESSAGE pmsg parameter as [out], yet the comments state it's the pointer to the MAPI message to be loaded. It's unclear to me whether the function allocates the MAPI message object and sets the pointer - in which case shouldn't it be a pointer to a pointer to MESSAGE? Or is the calling code expected to have instantiated the message object already - in which case why is it marked [out] and not [in]? Ultimately this interface is to be consumed from VB6 code, so it will either have to be [in] or [in, out], but I do need to know whether in the IDL I should use:

[in] IMessage* pmsg

OR

[in, out] IMessage** pmsg

A: I think in this case the documentation is misleading when it marks the parameter as [out]. You have to pass a valid LPMESSAGE to the method, and that's why it is not a double pointer. So I would go with [in] in your IDL definition.

A: See MAPIMime.h from the MFCMapi source (http://mfcmapi.codeplex.com/) as a definitive source.

A: The correct documentation can be found here: https://learn.microsoft.com/en-us/office/client-developer/outlook/mapi/iconvertersession-mimetomapi. The caller must supply a message for the API to fill out, so the object must go [in].
{ "language": "en", "url": "https://stackoverflow.com/questions/102514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I work with quarters (quarterly dates) in ASP.Net using VB.Net 2.0? I know that Sql Server has some handy built-in quarterly stuff, but what about the .Net native DateTime object? What is the best way to add, subtract, and traverse quarters? Is it a bad thing™ to use the VB-specific DateAdd() function? e.g.:

Dim nextQuarter As DateTime = DateAdd(DateInterval.Quarter, 1, DateTime.Now)

Edit: Expanding @bslorence's function:

Public Shared Function AddQuarters(ByVal originalDate As DateTime, ByVal quarters As Integer) As DateTime
    Return originalDate.AddMonths(quarters * 3)
End Function

Expanding @Matt's function:

Public Shared Function GetQuarter(ByVal fromDate As DateTime) As Integer
    Return ((fromDate.Month - 1) \ 3) + 1
End Function

Edit: here are a couple more functions that were handy:

Public Shared Function GetFirstDayOfQuarter(ByVal originalDate As DateTime) As DateTime
    Return AddQuarters(New DateTime(originalDate.Year, 1, 1), GetQuarter(originalDate) - 1)
End Function

Public Shared Function GetLastDayOfQuarter(ByVal originalDate As DateTime) As DateTime
    Return AddQuarters(New DateTime(originalDate.Year, 1, 1), GetQuarter(originalDate)).AddDays(-1)
End Function

A: How about this:

Dim nextQuarter As DateTime = DateTime.Now.AddMonths(3)

A: I know you can calculate the quarter of a date by:

Dim quarter As Integer = (someDate.Month - 1) \ 3 + 1

If you're using Visual Studio 2008, you could try bolting additional functionality onto the DateTime class by taking a look at Extension Methods.

A: One thing to remember: not all companies end their quarters on the last day of a month.

A:

Public Function GetLastQuarterStart() As Date
    GetLastQuarterStart = DateAdd(DateInterval.Quarter, -1, DateTime.Now).ToString("MM/01/yyyy")
End Function

Public Function GetLastQuarterEnd() As Date
    Dim LastQuarterStart As Date = GetLastQuarterStart()
    ' Jump to the quarter's final month, then let the framework work out
    ' the month length (including leap years) instead of hard-coding it.
    Dim LastMonth As Date = DateAdd(DateInterval.Month, 2, LastQuarterStart)
    Dim DD As Integer = Date.DaysInMonth(LastMonth.Year, LastMonth.Month)
    Return New Date(LastMonth.Year, LastMonth.Month, DD)
End Function

A: Expanding on Matt Blaine's answer:

Dim intQuarter As Integer = Math.Ceiling(MyDate.Month / 3)

Not sure if this adds speed benefits or not, but it looks cleaner IMO
{ "language": "en", "url": "https://stackoverflow.com/questions/102521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Where can I get Toad syntax coloring schemes? I am a big fan of the light colors on a dark background color scheme for programming - which is unfortunately not what Quest's Toad comes with by default. I notice that it is possible to export and import settings under the language management window, and I know that Toad has a large level of community involvement. So I assume there must be some location where people are posting their custom coloring schemes. However, in part because I don't know what the Toad guys call them (skins? colorization? themes?) and in part because its so hard to Google Toad +skins I cannot for the life of me find them. Does anyone know if there is such a place so I don't have to set the colors by hand? A: UPDATED ENTIRELY: * *What's the official name? The file extension (.dvtcolortheme) suggests "Color Theme" or "DWT Color Theme" are the official names. Most users on Toad forums seem to use the same terminology. *Where are they shared? There does not seem to be a dedicated site for sharing color themes. However, there were a couple forums on ToadWorld, and ToadForOracle where folks were talking about swapping these color theme files. I would suggest hopping on the forum and asking if others have files to share or know of any repository sites. *Available Color Themes * Nocturnal theme on GitHub: https://github.com/grng/Nocturnal-Toad-Color-Scheme * A nice dark-background theme: http://toadfororacle.com/thread.jspa?messageID=93203 *Rolling-your-own I know you did not want to create your own, but it looks a lot easier than either of us expected. The .dwtcolortheme settings file is just XML. I suggest visiting http://studiostyl.es/ - as it's a large repository of Visual Studio themes. Find a thumbnail you like, take a screen shot, and use a color picker to capture the values. Plop them in your XML settings file and you are ready to go. A: Go to View -> Toad Options. Click Editor -> Behavior. There is a button called Syntax Highlighting. On the Highlighting Tab (opens by default) is a list of styles, clicking on the style will show the options selected for that style... For example in my setup (which is my install default), Reserved Words are blue, Comments green, Strings Red, etc. Scroll through and you'll see the options, and you can change as necessary for your needs. A: The right file for new versions of TOAD is EditorLexers.xml. For version 12.9 is here C:\Users\%username%\AppData\Roaming\Dell\Toad for Oracle\12.9\User Files instead for version 12.11 is here C:\Users\%username%\AppData\Roaming\Quest Software\Toad for Oracle\12.11\User Files Here you can download my dark theme Enjoy! Update: it works also in version 13.0. A: I have created a schema similar to Visual Studio 2017 dark theme You can download it from: EditorLexers.xml You need to replace the file located at: C:\Users\[YOUR_USER]\AppData\Roaming\Quest Software\Toad for Oracle\13.1\User Files
{ "language": "en", "url": "https://stackoverflow.com/questions/102523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Multiple file inputs? Within an XSLT document, is it possible to loop over a set of files in the current directory? I have a situation where I have a directory full of xml files that need some analysis done to generate a report. I have my stylesheet operating on a single document fine, but I'd like to extend that without going to another tool to merge the xml documents. I was thinking along these lines:

<xsl:for-each select="{IO Selector Here}">
    <xsl:variable select="document(@url)" name="contents" />
    <!-- More stuff here -->
</xsl:for-each>

A: In XSLT 2.0, and with Saxon, you can do this with the collection() function:

<xsl:for-each select="collection('file:///path/to/directory?select=*.xml')">
    <!-- process the documents -->
</xsl:for-each>

See http://www.saxonica.com/documentation/sourcedocs/collections.html for more details. In XSLT 1.0, you have to create an index that lists the documents you want to process with a separate tool. Your environment may provide such a tool; for example, Cocoon has a Directory Generator that creates such an index. But without knowing what your environment is, it's hard to know what to recommend.

A: As others said, you cannot do it in a platform-independent way. In the .NET world, you could create a custom XmlResolver so that document('dir://c:/foo/') would return the list of files in the 'c:\foo' directory in an arbitrary format you wish. See the following links for more information on custom XmlResolvers:

Customizing the XmlUrlResolver Class
The power of XmlResolver

Also, you may resort to using scripts (like the msxsl:script element) or extensions in your XSLT stylesheet. All these approaches will make your XSLT code unportable to other platforms.

A: I don't think XSL is set up to work that way: it's designed to be used by something else on one or more documents, and the something else would be responsible for finding files to which the XSLT should be applied. If you had one main document and a fixed set of supporting documents, you could possibly use the document() function to return specific nodes and/or values, but I suspect your case is different.

A: From within XSLT I think this will not be possible. You could pass all the XML file names in to an <xsl:param name="files" /> as a comma-separated list and loop over it using recursion and substring-before() and substring-after().

A: I have a command-line tool that could be used for this - it uses the XSLT processor built into Ant (the Java build tool) to process input + transform into output. It would be easy to wrap with a batch file loop. svn://donie.homeip.net/public/tools

A: If you are using .Net you can use XsltExtension to make calls from your XSLT document to methods in your .net class. The method could then return nodesets back to your XSLT. So your method could handle the file IO part.
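A rough C# sketch of the custom-resolver idea mentioned above. The dir:// scheme is an invented convention, not a standard, and the <files> document shape is arbitrary - the point is only that document() can be made to return a generated listing:

using System;
using System.IO;
using System.Text;
using System.Xml;

// Makes document('dir://c:/foo/') in a stylesheet return a generated
// XML listing of the files in that directory.
public class DirectoryListResolver : XmlUrlResolver
{
    public override object GetEntity(Uri absoluteUri, string role, Type ofObjectToReturn)
    {
        if (absoluteUri.Scheme == "dir")
        {
            // Strip the invented scheme to get a plain filesystem path.
            string path = absoluteUri.OriginalString.Substring("dir://".Length);
            StringBuilder sb = new StringBuilder("<files>");
            foreach (string file in Directory.GetFiles(path, "*.xml"))
                sb.AppendFormat("<file url=\"{0}\"/>", new Uri(file));
            sb.Append("</files>");
            return new MemoryStream(Encoding.UTF8.GetBytes(sb.ToString()));
        }
        return base.GetEntity(absoluteUri, role, ofObjectToReturn);
    }
}

An instance of this class can be passed as the documentResolver argument of XslCompiledTransform.Transform, so that document() calls in the stylesheet go through it.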
{ "language": "en", "url": "https://stackoverflow.com/questions/102531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Sending values through links Here is the situation: I have 2 pages. What I want is to have a number of text links (<a href="">) on page 1, all directing to page 2, but I want each link to send a different value. On page 2 I want to show that value like this: Hello you clicked {value}. Another point to take into account is that I can't use any php in this situation, just html.

A: Can you use any scripting? Something like Javascript. If you can, then pass the values along in the query string (just add "?ValueName=Value" to the end of your links). Then on the target page retrieve the query string value. The following site shows how to parse it out: Parsing the Query String. Here's the Javascript code you would need:

var qs = new Querystring();
var v1 = qs.get("ValueName")

From there you should be able to work with the passed value.

A: Javascript can get it. Say you're trying to get the querystring value from this url: http://foo.com/default.html?foo=bar

var tabvalue = getQueryVariable("foo");

function getQueryVariable(variable) {
    var query = window.location.search.substring(1);
    var vars = query.split("&");
    for (var i = 0; i < vars.length; i++) {
        var pair = vars[i].split("=");
        if (pair[0] == variable) {
            return pair[1];
        }
    }
}

** Not 100% certain if my JS code here is correct, as I didn't test it.

A: You might be able to accomplish this using HTML Anchors. http://www.w3schools.com/HTML/html_links.asp

A: Append your data to the HREF tag of your links and use javascript on the second page to parse the URL and display whatever you want: http://java-programming.suite101.com/article.cfm/how_to_get_url_parts_in_javascript It's not clean, but it should work.

A: Use document.location.search and split():

http://www.example.com/example.html?argument=value

var queryString = document.location.search.substring(1); // search is a property, not a function; drop the leading "?"
var parts = queryString.split('=');
document.write(parts[0]); // The argument name
document.write(parts[1]); // The value

Hope it helps

A: Well, this is pretty basic with javascript, but if you want more of this and more advanced stuff you should really look into php, for instance. Using php it's easy to get variables from one page to another; here's an example:

the url: localhost/index.php?myvar=Hello World

You can then access myvar in index.php using this bit of code:

$myvar = $_GET['myvar'];

A: Ok, thanks for all your replies, I'll take a look if I can find a way to use the scripts. It's really annoying since I have to work around a CMS, because in the CMS all pages are created with a Wysiwyg editor, which tends to filter out unrecognized tags/scripts.

Edit: Ok, it seems that the damn wysiwyg editor only recognizes html tags... (as expected)

A: Using php:

<?
$passthis = "See you on the other side";
echo '<form action="whereyouwantittogo.php" target="_blank" method="post">'.
     '<input type="text" name="passthis1" value="'. $passthis .'" />'.
     '<button type="Submit" value="Submit">Submit</button>'.
     '</form>';
?>

The script for the page you would like to pass the info to:

<?
$thispassed = $_POST['passthis1'];
echo '<textarea>'. $thispassed .'</textarea>';
echo $thispassed;
?>

Use these two scripts on separate pages, with the latter at whereyouwantittogo.php, and you should be in business.
{ "language": "en", "url": "https://stackoverflow.com/questions/102534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What can you use generator functions for? I'm starting to learn Python and I've come across generator functions, those that have a yield statement in them. I want to know what types of problems these functions are really good at solving.

A: One of the reasons to use generators is to make the solution clearer for some kinds of problems. The other is to treat results one at a time, avoiding building huge lists of results that you would process one at a time anyway. If you have a fibonacci-up-to-n function like this:

# function version
def fibon(n):
    a = b = 1
    result = []
    for i in xrange(n):
        result.append(a)
        a, b = b, a + b
    return result

You can more easily write the function as this:

# generator version
def fibon(n):
    a = b = 1
    for i in xrange(n):
        yield a
        a, b = b, a + b

The function is clearer. And if you use the function like this:

for x in fibon(1000000):
    print x,

then with the generator version, the whole 1000000-item list won't be created at all, just one value at a time. That would not be the case when using the list version, where a list would be created first.

A: Basically, avoiding callback functions when iterating over input while maintaining state. See here and here for an overview of what can be done using generators.

A: Since the send method of a generator has not been mentioned, here is an example:

def test():
    for i in xrange(5):
        val = yield
        print(val)

t = test()

# Proceed to 'yield' statement
next(t)

# Send value to yield
t.send(1)
t.send('2')
t.send([3])

It shows that it is possible to send a value to a running generator. There's a more advanced course on generators in the video below (including an explanation of yield from, generators for parallel processing, escaping the recursion limit, etc.):

David Beazley on generators at PyCon 2014

A: Some good answers here; however, I'd also recommend a complete read of the Python Functional Programming tutorial, which helps explain some of the more potent use-cases of generators.

* Particularly interesting is that it is now possible to update the yield variable from outside the generator function, hence making it possible to create dynamic and interwoven coroutines with relatively little effort.
* Also see PEP 342: Coroutines via Enhanced Generators for more information.

A: I found an explanation which cleared up my doubts, because people who don't know generators often don't know about yield either.

Return

The return statement is where all the local variables are destroyed and the resulting value is given back (returned) to the caller. Should the same function be called some time later, the function will get a fresh new set of variables.

Yield

But what if the local variables aren't thrown away when we exit a function? This implies that we can resume the function where we left off. This is where the concept of generators is introduced, and the yield statement resumes where the function left off.

def generate_integers(N):
    for i in xrange(N):
        yield i

In [1]: gen = generate_integers(3)
In [2]: gen
<generator object at 0x8117f90>
In [3]: gen.next()
0
In [4]: gen.next()
1
In [5]: gen.next()
2

So that's the difference between return and yield statements in Python. The yield statement is what makes a function a generator function. So generators are a simple and powerful tool for creating iterators. They are written like regular functions, but they use the yield statement whenever they want to return data.
Each time next() is called, the generator resumes where it left off (it remembers all the data values and which statement was last executed).

A: Real World Example

Let's say you have 100 million domains in your MySQL table, and you would like to update the Alexa rank for each domain. The first thing you need to do is select your domain names from the database. Let's say your table name is domains and the column name is domain. If you use SELECT domain FROM domains it's going to return 100 million rows, which is going to consume a lot of memory. So your server might crash. So you decided to run the program in batches. Let's say our batch size is 1000. In our first batch we will query the first 1000 rows, check the Alexa rank for each domain and update the database row. In our second batch we will work on the next 1000 rows. In our third batch it will be from 2001 to 3000, and so on. Now we need a generator function which generates our batches. Here is our generator function:

def ResultGenerator(cursor, batchsize=1000):
    while True:
        results = cursor.fetchmany(batchsize)
        if not results:
            break
        for result in results:
            yield result

As you can see, our function keeps yielding the results. If you used the keyword return instead of yield, then the whole function would end once it reached return.

return - returns only once
yield - returns multiple times

If a function uses the keyword yield then it's a generator. Now you can iterate like this:

db = MySQLdb.connect(host="localhost", user="root", passwd="root", db="domains")
cursor = db.cursor()
cursor.execute("SELECT domain FROM domains")
for result in ResultGenerator(cursor):
    doSomethingWith(result)
db.close()

A: See the "Motivation" section in PEP 255. A non-obvious use of generators is creating interruptible functions, which lets you do things like update UI or run several jobs "simultaneously" (interleaved, actually) while not using threads.

A: Buffering. When it is efficient to fetch data in large chunks, but process it in small chunks, then a generator might help:

def bufferedFetch():
    while True:
        buffer = getBigChunkOfData()
        # insert some code to break on 'end of data'
        for i in buffer:
            yield i

The above lets you easily separate buffering from processing. The consumer function can now just get the values one by one without worrying about buffering.

A: Generators give you lazy evaluation. You use them by iterating over them, either explicitly with 'for' or implicitly by passing them to any function or construct that iterates. You can think of generators as returning multiple items, as if they returned a list, but instead of returning them all at once they return them one by one, and the generator function is paused until the next item is requested. Generators are good for calculating large sets of results (in particular calculations involving loops themselves) where you don't know if you are going to need all the results, or where you don't want to allocate the memory for all the results at the same time. Or for situations where the generator uses another generator, or consumes some other resource, and it's more convenient if that happens as late as possible. Another use for generators (that is really the same) is to replace callbacks with iteration. In some situations you want a function to do a lot of work and occasionally report back to the caller. Traditionally you'd use a callback function for this. You pass this callback to the work-function and it would periodically call this callback.
The generator approach is that the work-function (now a generator) knows nothing about the callback, and merely yields whenever it wants to report something. The caller, instead of writing a separate callback and passing that to the work-function, does all the reporting work in a little 'for' loop around the generator. For example, say you wrote a 'filesystem search' program. You could perform the search in its entirety, collect the results and then display them one at a time. All of the results would have to be collected before you showed the first, and all of the results would be in memory at the same time. Or you could display the results while you find them, which would be more memory efficient and much friendlier towards the user. The latter could be done by passing the result-printing function to the filesystem-search function, or it could be done by just making the search function a generator and iterating over the result. If you want to see an example of the latter two approaches, see os.path.walk() (the old filesystem-walking function with callback) and os.walk() (the new filesystem-walking generator). Of course, if you really wanted to collect all results in a list, the generator approach is trivial to convert to the big-list approach:

big_list = list(the_generator)

A: I have found that generators are very helpful in cleaning up your code and in giving you a very unique way to encapsulate and modularize code. In a situation where you need something to constantly spit out values based on its own internal processing, and when that something needs to be called from anywhere in your code (and not just within a loop or a block, for example), generators are the feature to use. An abstract example would be a Fibonacci number generator that does not live within a loop and, when it is called from anywhere, will always return the next number in the sequence:

def fib():
    first = 0
    second = 1
    yield first
    yield second
    while 1:
        next = first + second
        yield next
        first = second
        second = next

fibgen1 = fib()
fibgen2 = fib()

Now you have two Fibonacci number generator objects which you can call from anywhere in your code, and they will always return ever larger Fibonacci numbers in sequence as follows:

>>> fibgen1.next(); fibgen1.next(); fibgen1.next(); fibgen1.next()
0
1
1
2
>>> fibgen2.next(); fibgen2.next()
0
1
>>> fibgen1.next(); fibgen1.next()
3
5

The lovely thing about generators is that they encapsulate state without having to go through the hoops of creating objects. One way of thinking about them is as "functions" which remember their internal state. I got the Fibonacci example from Python Generators - What are they? and with a little imagination, you can come up with a lot of other situations where generators make for a great alternative to for loops and other traditional iteration constructs.

A: The simple explanation: Consider a for statement

for item in iterable:
    do_stuff()

A lot of the time, all the items in iterable don't need to be there from the start, but can be generated on the fly as they're required. This can be a lot more efficient in both

* space (you never need to store all the items simultaneously) and
* time (the iteration may finish before all the items are needed).

Other times, you don't even know all the items ahead of time.
For example:

for command in user_input():
    do_stuff_with(command)

You have no way of knowing all the user's commands beforehand, but you can use a nice loop like this if you have a generator handing you commands:

def user_input():
    while True:
        wait_for_command()
        cmd = get_command()
        yield cmd

With generators you can also have iteration over infinite sequences, which is of course not possible when iterating over containers.

A: Piles of stuff. Any time you want to generate a sequence of items, but don't want to have to 'materialize' them all into a list at once. For example, you could have a simple generator that returns prime numbers:

import itertools

def primes():
    primes_found = set()
    primes_found.add(2)
    yield 2
    for i in itertools.count(1):
        candidate = i * 2 + 1
        # candidate is prime if none of the primes found so far divides it
        if all(candidate % prime for prime in primes_found):
            primes_found.add(candidate)
            yield candidate

You could then use that to generate the products of subsequent primes:

def prime_products():
    primeiter = primes()
    prev = primeiter.next()
    for prime in primeiter:
        yield prime * prev
        prev = prime

These are fairly trivial examples, but you can see how it can be useful for processing large (potentially infinite!) datasets without generating them in advance, which is only one of the more obvious uses.

A: I use generators when our web server is acting as a proxy:

* The client requests a proxied url from the server
* The server begins to load the target url
* The server yields to return the results to the client as soon as it gets them

A: My favorite uses are "filter" and "reduce" operations. Let's say we're reading a file, and only want the lines which begin with "##".

def filter2sharps(aSequence):
    for l in aSequence:
        if l.startswith("##"):
            yield l

We can then use the generator function in a proper loop

source = file( ... )
for line in filter2sharps(source.readlines()):
    print line
source.close()

The reduce example is similar. Let's say we have a file where we need to locate blocks of <Location>...</Location> lines. [Not HTML tags, but lines that happen to look tag-like.]

def reduceLocation(aSequence):
    keep = False
    block = None
    for line in aSequence:
        if line.startswith("</Location"):
            block.append(line)
            yield block
            block = None
            keep = False
        elif line.startswith("<Location"):
            block = [line]
            keep = True
        elif keep:
            block.append(line)
        else:
            pass
    if block is not None:
        yield block  # A partial block, icky

Again, we can use this generator in a proper for loop.

source = file( ... )
for b in reduceLocation(source.readlines()):
    print b
source.close()

The idea is that a generator function allows us to filter or reduce a sequence, producing another sequence one value at a time.

A: A practical example where you could make use of a generator is if you have some kind of shape and you want to iterate over its corners, edges or whatever. For my own project (source code here) I had a rectangle:

class Rect():

    def __init__(self, x, y, width, height):
        self.l_top = (x, y)
        self.r_top = (x + width, y)
        self.r_bot = (x + width, y + height)
        self.l_bot = (x, y + height)

    def __iter__(self):
        yield self.l_top
        yield self.r_top
        yield self.r_bot
        yield self.l_bot

Now I can create a rectangle and loop over its corners:

myrect = Rect(50, 50, 100, 100)
for corner in myrect:
    print(corner)

Instead of __iter__ you could have a method iter_corners and call that with for corner in myrect.iter_corners(). It's just more elegant to use __iter__ since then we can use the class instance name directly in the for expression.
A: Also good for printing the prime numbers up to n:

def genprime(n=10):
    # start at 2 so the first prime is included
    for num in range(2, n + 1):
        for factor in range(2, num):
            if num % factor == 0:
                break
        else:
            yield num

for prime_num in genprime(100):
    print(prime_num)
{ "language": "en", "url": "https://stackoverflow.com/questions/102535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "236" }
Q: AMFPHP service new service error I am trying to add a new hello world service to amfphp; I am developing locally.

<?php
/**
 * First tutorial class
 */
class HelloWorld {
    /**
     * first simple method
     * @returns a string saying 'Hello World!'
     */
    function sayHello() {
        return "Hello World!";
    }
}
?>

When exploring in the amfphp browser I get a "TypeError: Error #1009: Cannot access a property or method of a null object reference." Need help...

A: I recommend Charles for solving this type of problem; it lets you see what's going across the wire. In your case it's likely something as simple as a syntax error in the php file. PHP will output the error information into what the Service Browser expects to be amf-encoded data, wreaking havoc on any parsing it tries. Using Charles you can easily see this and fix it!

A: Is that the entirety of your source code? I'm sure this isn't the problem, but just in case: you are opening the ?php tag, right? Here's one of my simple service classes:

<?php
class Products {
    public function __construct() {
        mysql_connect("localhost", "myuser", "mypass");
        mysql_select_db("mydb");
    }

    /**
     * Retrieves data
     * @returns data
     */
    function getProduct() {
        $sql = 'SELECT * FROM `content_type_product`';
        return mysql_query($sql);
    }
}
?>

A: You're trying to access a variable/method that's null. The code here is fine, so the problem is somewhere else.

A: I agree with grapefrukt... The browser really doesn't give you good information about PHP errors. Charles is a godsend for doing stuff over AMF and I recommend it highly. You'll get information about the request and the result along with any PHP error messages.
{ "language": "en", "url": "https://stackoverflow.com/questions/102547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Biggest advantage to using ASP.Net MVC vs web forms What are some of the advantages of using one over the other?

A: ASP.NET Web Forms and MVC are two web frameworks developed by Microsoft - they are both good choices. Neither of the web frameworks is to be replaced by the other, nor are there plans to have them 'merged' into a single framework. Continued support and development are done in parallel by Microsoft, and neither will be 'going away'. Each of these web frameworks offers advantages and disadvantages, some of which need to be considered when developing a web application. A web application can be developed using either technology - selecting one technology over the other might make development easier for a particular application, and vice versa.

ASP.NET Web Forms:

* Development supports state • Gives the illusion that a web application is aware of what the user has been doing, similar to Windows applications. I.e. it makes 'wizard' functionality a little bit easier to implement. Web forms does a great job at hiding a lot of that complexity from the developer.
* Rapid Application Development (RAD) • The ability to just 'jump in' and start delivering web forms. This is disputed by some of the MVC community, but pushed by Microsoft. In the end, it comes down to the level of expertise of the developer and what they are comfortable with. The web forms model probably has less of a learning curve for less experienced developers.
* Larger control toolbox • ASP.NET Web Forms offers a much greater and more robust toolbox (web controls), whereas MVC offers a more primitive control set, relying more on rich client-side controls via jQuery (Javascript).
* Mature • It's been around since 2002 and there is an abundance of information with regard to questions, problems, etc. It also offers more third-party controls - consider your existing toolkits.

ASP.NET MVC:

* Separation of concerns (SoC) • From a technical standpoint, the organization of code within MVC is very clean, organized and granular, making it easier (hopefully) for a web application to scale in terms of functionality. Promotes great design from a development standpoint.
* Easier integration with client-side tools (rich user interface tools) • More than ever, web applications are increasingly becoming as rich as the applications you see on your desktop. With MVC, you have the ability to integrate with such toolkits (such as jQuery) with greater ease and more seamlessly than in Web Forms.
* Search Engine Optimization (SEO) Friendly / Stateless • URLs are more friendly to search engines (i.e. mywebapplication.com/users/1 - retrieve user with an ID of 1 - vs mywebapplication/users/getuser.aspx with the id passed in session). Similarly, since MVC is stateless, this removes the headache of users who spawn multiple web browsers from the same window (session collisions). Along those same lines, MVC adheres to the stateless web protocol rather than 'battling' against it.
* Works well with developers who need a high degree of control • Many controls in ASP.NET web forms automatically generate much of the raw HTML you see when a page is rendered. This can cause headaches for developers. With MVC, it lends itself better towards having complete control over what is rendered, and there are no surprises. Even more important is that the HTML forms typically are much smaller than the Web Forms pages, which can equate to a performance boost - something to seriously consider.
* Test Driven Development (TDD) • With MVC, you can more easily create tests for the web side of things. An additional layer of testing will provide yet another layer of defense against unexpected behavior.

Authentication, authorization, configuration, compilation and deployment are all features that are shared between the two web frameworks.

A: Biggest single advantage for me would be the clear-cut separation between your Model, View, and Controller layers. It helps promote good design from the start.

A: I have not seen ANY advantages in MVC over ASP.Net. 10 years ago Microsoft came up with UIP (User Interface Process) as the answer to MVC. It was a flop. We did a large project (4 developers, 2 designers, 1 tester) with UIP back then and it was a sheer nightmare. Don't just jump on the bandwagon for the sake of hype. All of the advantages listed above are already available in Asp.Net (with more great tweaks [New features in Asp.Net 4] in Asp.Net 4). If your development team or a single developer is familiar with Asp.Net, just stick to it and make beautiful products quickly to satisfy your clients (who pay for your work hours). MVC will eat up your valuable time and produce the same results as Asp.Net :-)

A: Francis Shanahan,

* Why do you call partial postback "nonsense"? This is the core feature of Ajax and has been utilized very well in the Atlas framework and wonderful third-party controls like Telerik.
* I agree with your point regarding the viewstate. But if developers are careful to disable viewstate, this can greatly reduce the size of the HTML which is rendered, and thus the page becomes lightweight.
* Only HTML server controls are renamed in the ASP.NET Web Form model, not pure html controls. Whatever it may be, why are you so worried if the renaming is done? I know you want to deal with a lot of javascript events on the client side, but if you design your web pages smartly, you can definitely get all the id's you want.
* Even ASP.NET Web Forms meets XHTML standards and I don't see any bloating. This is not a justification of why we need an MVC pattern.
* Again, why are you bothered with the AXD Javascript? Why does it hurt you? This is not a valid justification either.

So far, I am a fan of developing applications using classic ASP.NET Web Forms. For example: if you want to bind a dropdownlist or a gridview, you need a maximum of 30 minutes and not more than 20 lines of code (minimal, of course). But in the case of MVC, talk to the developers about what a pain it is. The biggest downside of MVC is that we are going back to the days of ASP. Remember the spaghetti code of mixing up server code and HTML??? Oh my god, try to read an MVC aspx page mixed with javascript, HTML, JQuery, CSS, server tags and what not... Can anybody answer this question?

A: Web forms also gain from greater maturity and support from third-party control providers like Telerik.

A: In webforms you could also render almost all the html by hand, except a few tags like viewstate, eventvalidation and similar, which can be removed with PageAdapters. Nobody forces you to use GridView or some other server-side control that has bad html rendering output. I would say that the biggest advantage of MVC is SPEED! Next is the forced separation of concerns. But it doesn't forbid you from putting whole BL and DAL logic inside a Controller/Action! It's just separation of the view, which can also be done in webforms (the MVP pattern, for example). A lot of things that people mention for MVC can be done in webforms, but with some additional effort.
The main difference is that the request comes to the controller, not the view, and those two layers are separated, not connected via a partial class like in webforms (aspx + code-behind).

A: My 2 cents:

* ASP.net forms is great for Rapid Application Development and adding business value quickly. I still use it for most intranet applications.
* MVC is great for Search Engine Optimization as you control the URL and the HTML to a greater extent.
* MVC generally produces a much leaner page - no viewstate and cleaner HTML = quick loading times.
* MVC makes it easy to cache portions of the page.
* MVC is fun to write :- personal opinion ;-)

A: MVC lets you have more than one form on a page. A small feature, I know, but it is handy! Also, I feel the MVC pattern makes the code easier to maintain, especially when revisiting it after a few months.

A: MVC Controller:

[HttpGet]
public ActionResult DetailList(ImportDetailSearchModel model)
{
    Data.ImportDataAccess ida = new Data.ImportDataAccess();
    List<Data.ImportDetailData> data = ida.GetImportDetails(model.FileId, model.FailuresOnly);
    return PartialView("ImportSummaryDetailPartial", data);
}

MVC View:

<table class="sortable">
    <thead>
        <tr><th>Unique Id</th><th class="left">Error Type</th><th class="left">Field</th><th class="left">Message</th><th class="left">State</th></tr>
    </thead>
    <tbody>
        @foreach (Data.ImportDetailData detail in Model)
        {
            <tr><th>@detail.UniqueID</th><th class="left">@detail.ErrorType</th><th class="left">@detail.FieldName</th><th class="left">@detail.Message</th><th class="left">@detail.ItemState</th></tr>
        }
    </tbody>
</table>

How hard is that? No ViewState, no BS page life-cycle... just pure, efficient code.

A: Anyone old enough to remember classic ASP will remember the nightmare of opening a page with code mixed in with html and javascript - even the smallest page was a pain to figure out what the heck it was doing. I could be wrong, and I hope I am, but MVC looks like going back to those bad old days. When ASP.Net came along it was hailed as the savior, separating code from content and allowing us to have web designers create the html and coders work on the code-behind. If we didn't want to use ViewState, we turned it off. If we didn't want to use code-behind for some reason, we could place our code inside the html just like classic ASP. If we didn't want to use PostBack we redirected to another page for processing. If we didn't want to use ASP.Net controls we used standard html controls. We could even interrogate the Response object if we didn't want to use ASP.Net runat="server" on our controls. Now someone in their great wisdom (probably someone who never programmed classic ASP) has decided it's time to go back to the days of mixing code with content and call it "separation of concerns". Sure, you can create cleaner html, but you could with classic ASP. To say "you are not programming correctly if you have too much code inside your view" is like saying "if you wrote well-structured and commented code in classic ASP, it is far cleaner and better than ASP.NET". If I wanted to go back to mixing code with content, I'd look at developing using PHP, which has a far more mature environment for that kind of development. If there are so many problems with ASP.NET, then why not fix those issues? Last but not least, the new Razor engine means it is even harder to distinguish between html and code. At least we could look for opening and closing tags, i.e. <% and %>, in ASP, but now the only indication will be the @ symbol.
It might be time to move to PHP and wait another 10 years for someone to separate code from content once again.

A: The main advantages of ASP.net MVC are:

1. Enables full control over the rendered HTML.
2. Provides clean separation of concerns (SoC).
3. Enables Test Driven Development (TDD).
4. Easy integration with JavaScript frameworks.
5. Follows the stateless nature of the web by design.
6. RESTful urls that enable SEO.
7. No ViewState and PostBack events.

The main advantages of ASP.net Web Forms are:

1. It provides RAD development.
2. An easy development model for developers coming from winforms development.

A: If you're working with other developers, such as PHP or JSP (and I'm guessing Rails) developers - you're going to have a much easier time converting or collaborating on pages because you won't have all those 'nasty' ASP.NET events and controls everywhere.

A: The problem with MVC is that even for "experts" it eats up a lot of valuable time and requires a lot of effort. Businesses are driven by one basic thing - "a quick solution that works" - regardless of the technology behind it. WebForms is a RAD technology that saves time and money. Anything that requires more time is not acceptable to businesses.

A:
* Proper AJAX, e.g. JSONResults, no partial-page-postback nonsense.
* No viewstate +1
* No renaming of the HTML IDs.
* Clean HTML = no bloat and having a decent shot at rendering XHTML or standards-compliant pages.
* No more generated AXD javascript.

A: I can see the only two advantages for smaller sites being:

6) RESTful urls that enable SEO.
7) No ViewState and PostBack events (and greater performance in general).

Testing for small sites is not an issue, and neither are the design advantages when a site is coded properly anyway; in many ways MVC obfuscates and makes changes harder to make. I'm still deciding whether these advantages are worth it. I can clearly see the advantage of MVC in larger multi-developer sites.

A: The main benefit I find is that it forces the project into a more testable structure. This can pretty easily be done with webforms as well (the MVP pattern), but it requires the developer to have an understanding of this, and many don't. Webforms and MVC are both viable tools; both excel in different areas. I personally use web forms as we primarily develop B2B/LOB apps. But we always do it with an MVP pattern, with which we can achieve 95+% code coverage for our unit tests. This also allows us to automate testing on properties of web controls whose value is exposed through the view, e.g.

bool IMyView.IsAdminSectionVisible
{
    get { return pnlAdmin.Visible; }
    set { pnlAdmin.Visible = value; }
}

I don't think this level of testing is as easily achieved in MVC without polluting my model.

A: You don't feel bad about using 'non post-back controls' anymore - or figuring out how to smush them into a traditional asp.net environment. This means that modern (free to use) javascript controls such as this or this or this can all be used without that trying-to-fit-a-round-peg-in-a-square-hole feel.

A: Modern javascript controls as well as JSON requests can be handled much more easily using MVC. There we can use a lot of other mechanisms to post data from one action to another action. That's why we prefer MVC over web forms. Also, we can build lightweight pages.

A: My personal opinion is that the biggest disadvantage of using ASP.Net MVC is CODE BLOCKS mixed with HTML... html hell for the developers who maintain it...
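To make the MVP idea in the testability answer above concrete, here is a stripped-down C# sketch (the names are illustrative, not from any framework). The page implements a view interface, and the presenter - which is what the unit tests exercise - only ever talks to that interface, so a test can substitute a trivial fake view without spinning up a web server:

// The view contract: the only thing the presenter knows about the page.
public interface IAdminView
{
    bool IsAdminSectionVisible { get; set; }
}

// The presenter holds the logic under test.
public class AdminPresenter
{
    private readonly IAdminView _view;

    public AdminPresenter(IAdminView view)
    {
        _view = view;
    }

    public void Load(bool userIsAdmin)
    {
        // The rule we want covered by a unit test.
        _view.IsAdminSectionVisible = userIsAdmin;
    }
}

// In the real page, the property wraps a server control:
//   public bool IsAdminSectionVisible
//   {
//       get { return pnlAdmin.Visible; }
//       set { pnlAdmin.Visible = value; }
//   }

// A test uses a fake view instead:
public class FakeAdminView : IAdminView
{
    public bool IsAdminSectionVisible { get; set; }
}

A test can then run new AdminPresenter(view).Load(false) against a FakeAdminView instance and assert that IsAdminSectionVisible came out false - no page lifecycle involved.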
{ "language": "en", "url": "https://stackoverflow.com/questions/102558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "164" }
Q: How to shut down the computer from C# What's the best way to shut down the computer from a C# program? I've found a few methods that work - I'll post them below - but none of them are very elegant. I'm looking for something that's simpler and natively .net.

A:

System.Diagnostics.Process.Start("shutdown", "/s /t 0")

should work. For restart, it's /r. This will restart the PC box directly and cleanly, with NO dialogs.

A: Taken from: a Geekpedia post

This method uses WMI to shutdown windows. You'll need to add a reference to System.Management to your project to use this.

using System.Management;

void Shutdown()
{
    ManagementBaseObject mboShutdown = null;
    ManagementClass mcWin32 = new ManagementClass("Win32_OperatingSystem");
    mcWin32.Get();

    // You can't shutdown without security privileges
    mcWin32.Scope.Options.EnablePrivileges = true;
    ManagementBaseObject mboShutdownParams = mcWin32.GetMethodParameters("Win32Shutdown");

    // Flag 1 means we want to shut down the system. Use "2" to reboot.
    mboShutdownParams["Flags"] = "1";
    mboShutdownParams["Reserved"] = "0";
    foreach (ManagementObject manObj in mcWin32.GetInstances())
    {
        mboShutdown = manObj.InvokeMethod("Win32Shutdown", mboShutdownParams, null);
    }
}

A: Note that shutdown.exe is just a wrapper around InitiateSystemShutdownEx, which provides some niceties missing in ExitWindowsEx.

A: You can launch the shutdown process:

* shutdown -s -t 0 - Shutdown
* shutdown -r -t 0 - Restart

A: I had trouble trying to use the WMI method accepted above because I always got "privilege not held" exceptions despite running the program as an administrator. The solution was for the process to request the privilege for itself. I found the answer at http://www.dotnet247.com/247reference/msgs/58/292150.aspx written by a guy called Richard Hill. I've pasted my basic use of his solution below in case that link gets old.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Management;
using System.Runtime.InteropServices;
using System.Security;
using System.Diagnostics;

namespace PowerControl
{
    public class PowerControl_Main
    {
        public void Shutdown()
        {
            ManagementBaseObject mboShutdown = null;
            ManagementClass mcWin32 = new ManagementClass("Win32_OperatingSystem");
            mcWin32.Get();

            if (!TokenAdjuster.EnablePrivilege("SeShutdownPrivilege", true))
            {
                Console.WriteLine("Could not enable SeShutdownPrivilege");
            }
            else
            {
                Console.WriteLine("Enabled SeShutdownPrivilege");
            }

            // You can't shutdown without security privileges
            mcWin32.Scope.Options.EnablePrivileges = true;
            ManagementBaseObject mboShutdownParams = mcWin32.GetMethodParameters("Win32Shutdown");

            // Flag 1 means we want to shut down the system
            mboShutdownParams["Flags"] = "1";
            mboShutdownParams["Reserved"] = "0";
            foreach (ManagementObject manObj in mcWin32.GetInstances())
            {
                try
                {
                    mboShutdown = manObj.InvokeMethod("Win32Shutdown", mboShutdownParams, null);
                }
                catch (ManagementException mex)
                {
                    Console.WriteLine(mex.ToString());
                    Console.ReadKey();
                }
            }
        }
    }

    public sealed class TokenAdjuster
    {
        // PInvoke stuff required to set/enable security privileges
        [DllImport("advapi32", SetLastError = true), SuppressUnmanagedCodeSecurityAttribute]
        static extern int OpenProcessToken(
            System.IntPtr ProcessHandle, // handle to process
            int DesiredAccess,           // desired access to process
            ref IntPtr TokenHandle       // handle to open access token
        );

        [DllImport("kernel32", SetLastError = true), SuppressUnmanagedCodeSecurityAttribute]
        static extern bool CloseHandle(IntPtr handle);

        [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)]
        static extern int AdjustTokenPrivileges(
            IntPtr TokenHandle,
            int DisableAllPrivileges,
            IntPtr NewState,
            int BufferLength,
            IntPtr PreviousState,
            ref int ReturnLength);

        [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)]
        static extern bool LookupPrivilegeValue(
            string lpSystemName,
            string lpName,
            ref LUID lpLuid);

        [StructLayout(LayoutKind.Sequential)]
        internal struct LUID
        {
            internal int LowPart;
            internal int HighPart;
        }

        [StructLayout(LayoutKind.Sequential)]
        struct LUID_AND_ATTRIBUTES
        {
            LUID Luid;
            int Attributes;
        }

        [StructLayout(LayoutKind.Sequential)]
        struct _PRIVILEGE_SET
        {
            int PrivilegeCount;
            int Control;
            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 1)] // ANYSIZE_ARRAY = 1
            LUID_AND_ATTRIBUTES[] Privileges;
        }

        [StructLayout(LayoutKind.Sequential)]
        internal struct TOKEN_PRIVILEGES
        {
            internal int PrivilegeCount;
            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 3)]
            internal int[] Privileges;
        }

        const int SE_PRIVILEGE_ENABLED = 0x00000002;
        const int TOKEN_ADJUST_PRIVILEGES = 0x00000020;
        const int TOKEN_QUERY = 0x00000008;
        const int TOKEN_ALL_ACCESS = 0x001f01ff;
        const int PROCESS_QUERY_INFORMATION = 0x00000400;

        public static bool EnablePrivilege(string lpszPrivilege, bool bEnablePrivilege)
        {
            bool retval = false;
            int ltkpOld = 0;
            IntPtr hToken = IntPtr.Zero;
            TOKEN_PRIVILEGES tkp = new TOKEN_PRIVILEGES();
            tkp.Privileges = new int[3];
            TOKEN_PRIVILEGES tkpOld = new TOKEN_PRIVILEGES();
            tkpOld.Privileges = new int[3];
            LUID tLUID = new LUID();
            tkp.PrivilegeCount = 1;
            if (bEnablePrivilege)
                tkp.Privileges[2] = SE_PRIVILEGE_ENABLED;
            else
                tkp.Privileges[2] = 0;
            if (LookupPrivilegeValue(null, lpszPrivilege, ref tLUID))
            {
                Process proc = Process.GetCurrentProcess();
                if (proc.Handle != IntPtr.Zero)
                {
                    if (OpenProcessToken(proc.Handle, TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, ref hToken) != 0)
                    {
                        tkp.PrivilegeCount = 1;
                        tkp.Privileges[2] = SE_PRIVILEGE_ENABLED;
                        tkp.Privileges[1] = tLUID.HighPart;
                        tkp.Privileges[0] = tLUID.LowPart;
                        const int bufLength = 256;
                        IntPtr tu = Marshal.AllocHGlobal(bufLength);
                        Marshal.StructureToPtr(tkp, tu, true);
                        if (AdjustTokenPrivileges(hToken, 0, tu, bufLength, IntPtr.Zero, ref ltkpOld) != 0)
                        {
                            // successful AdjustTokenPrivileges doesn't mean privilege could be changed
                            if (Marshal.GetLastWin32Error() == 0)
                            {
                                retval = true; // Token changed
                            }
                        }
                        TOKEN_PRIVILEGES tokp = (TOKEN_PRIVILEGES)Marshal.PtrToStructure(tu, typeof(TOKEN_PRIVILEGES));
                        Marshal.FreeHGlobal(tu);
                    }
                }
            }
            if (hToken != IntPtr.Zero)
            {
                CloseHandle(hToken);
            }
            return retval;
        }
    }
}

A: This thread provides the code necessary: http://bytes.com/forum/thread251367.html but here's the relevant code:

using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 1)]
internal struct TokPriv1Luid
{
    public int Count;
    public long Luid;
    public int Attr;
}

[DllImport("kernel32.dll", ExactSpelling = true)]
internal static extern IntPtr GetCurrentProcess();

[DllImport("advapi32.dll", ExactSpelling = true, SetLastError = true)]
internal static extern bool OpenProcessToken(IntPtr h, int acc, ref IntPtr phtok);

[DllImport("advapi32.dll", SetLastError = true)]
internal static extern bool LookupPrivilegeValue(string host, string name, ref long pluid);

[DllImport("advapi32.dll", ExactSpelling = true, SetLastError = true)]
internal static extern bool AdjustTokenPrivileges(IntPtr htok, bool disall, ref TokPriv1Luid newst, int len, IntPtr prev, IntPtr relen);

[DllImport("user32.dll", ExactSpelling = true, SetLastError = true)]
internal static extern bool ExitWindowsEx(int flg, int rea);

internal const int SE_PRIVILEGE_ENABLED = 0x00000002;
internal const int TOKEN_QUERY = 0x00000008;
internal const int TOKEN_ADJUST_PRIVILEGES = 0x00000020;
internal const string SE_SHUTDOWN_NAME = "SeShutdownPrivilege";
internal const int EWX_LOGOFF = 0x00000000;
internal const int EWX_SHUTDOWN = 0x00000001;
internal const int EWX_REBOOT = 0x00000002;
internal const int EWX_FORCE = 0x00000004;
internal const int EWX_POWEROFF = 0x00000008;
internal const int EWX_FORCEIFHUNG = 0x00000010;

private void DoExitWin(int flg)
{
    bool ok;
    TokPriv1Luid tp;
    IntPtr hproc = GetCurrentProcess();
    IntPtr htok = IntPtr.Zero;
    ok = OpenProcessToken(hproc, TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, ref htok);
    tp.Count = 1;
    tp.Luid = 0;
    tp.Attr = SE_PRIVILEGE_ENABLED;
    ok = LookupPrivilegeValue(null, SE_SHUTDOWN_NAME, ref tp.Luid);
    ok = AdjustTokenPrivileges(htok, false, ref tp, 0, IntPtr.Zero, IntPtr.Zero);
    ok = ExitWindowsEx(flg, 0);
}

Usage:

DoExitWin(EWX_SHUTDOWN);

or

DoExitWin(EWX_REBOOT);

A: Different methods:

A. System.Diagnostics.Process.Start("Shutdown", "-s -t 10");

B. Windows Management Instrumentation (WMI)
* http://www.csharpfriends.com/Forums/ShowPost.aspx?PostID=36953
* http://www.dreamincode.net/forums/showtopic33948.htm

C. System.Runtime.InteropServices Pinvoke
* http://bytes.com/groups/net-c/251367-shutdown-my-computer-using-c

D. System Management
* http://www.geekpedia.com/code36_Shut-down-system-using-Csharp.html

After I submitted, I saw that so many others have also posted...

A: I tried roomaroo's WMI method to shutdown Windows 2003 Server, but it would not work until I added `[STAThread]` (i.e.
"Single Threaded Apartment" threading model) to the Main() declaration: [STAThread] public static void Main(string[] args) { Shutdown(); } I then tried to shutdown from a thread, and to get that to work I had to set the "Apartment State" of the thread to STA as well: using System.Management; using System.Threading; public static class Program { [STAThread] public static void Main(string[] args) { Thread t = new Thread(new ThreadStart(Program.Shutdown)); t.SetApartmentState(ApartmentState.STA); t.Start(); ... } public static void Shutdown() { // roomaroo's code } } I'm a C# noob, so I'm not entirely sure of the significance of STA threads in terms of shutting down the system (even after reading the link I posted above). Perhaps someone else can elaborate...? A: **Elaborated Answer... using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Text; using System.Windows.Forms; // Remember to add a reference to the System.Management assembly using System.Management; using System.Diagnostics; namespace ShutDown { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void btnShutDown_Click(object sender, EventArgs e) { ManagementBaseObject mboShutdown = null; ManagementClass mcWin32 = new ManagementClass("Win32_OperatingSystem"); mcWin32.Get(); // You can't shutdown without security privileges mcWin32.Scope.Options.EnablePrivileges = true; ManagementBaseObject mboShutdownParams = mcWin32.GetMethodParameters("Win32Shutdown"); // Flag 1 means we want to shut down the system mboShutdownParams["Flags"] = "1"; mboShutdownParams["Reserved"] = "0"; foreach (ManagementObject manObj in mcWin32.GetInstances()) { mboShutdown = manObj.InvokeMethod("Win32Shutdown", mboShutdownParams, null); } } } } A: Works starting with windows XP, not available in win 2000 or lower: This is the quickest way to do it: Process.Start("shutdown","/s /t 0"); Otherwise use P/Invoke or WMI like others have said. Edit: how to avoid creating a window var psi = new ProcessStartInfo("shutdown","/s /t 0"); psi.CreateNoWindow = true; psi.UseShellExecute = false; Process.Start(psi); A: Short and sweet. Call an external program: using System.Diagnostics; void Shutdown() { Process.Start("shutdown.exe", "-s -t 00"); } Note: This calls Windows' Shutdown.exe program, so it'll only work if that program is available. You might have problems on Windows 2000 (where shutdown.exe is only available in the resource kit) or XP Embedded. A: The old-school ugly method. Use the ExitWindowsEx function from the Win32 API. using System.Runtime.InteropServices; void Shutdown2() { const string SE_SHUTDOWN_NAME = "SeShutdownPrivilege"; const short SE_PRIVILEGE_ENABLED = 2; const uint EWX_SHUTDOWN = 1; const short TOKEN_ADJUST_PRIVILEGES = 32; const short TOKEN_QUERY = 8; IntPtr hToken; TOKEN_PRIVILEGES tkp; // Get shutdown privileges... 
OpenProcessToken(Process.GetCurrentProcess().Handle, TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, out hToken);
tkp.PrivilegeCount = 1;
tkp.Privileges.Attributes = SE_PRIVILEGE_ENABLED;
LookupPrivilegeValue("", SE_SHUTDOWN_NAME, out tkp.Privileges.pLuid);
AdjustTokenPrivileges(hToken, false, ref tkp, 0U, IntPtr.Zero, IntPtr.Zero);
// Now we have the privileges, shutdown Windows
ExitWindowsEx(EWX_SHUTDOWN, 0);
}
// Structures needed for the API calls
private struct LUID
{
    public int LowPart;
    public int HighPart;
}
private struct LUID_AND_ATTRIBUTES
{
    public LUID pLuid;
    public int Attributes;
}
private struct TOKEN_PRIVILEGES
{
    public int PrivilegeCount;
    public LUID_AND_ATTRIBUTES Privileges;
}
[DllImport("advapi32.dll")]
static extern int OpenProcessToken(IntPtr ProcessHandle, int DesiredAccess, out IntPtr TokenHandle);
[DllImport("advapi32.dll", SetLastError = true)]
[return: MarshalAs(UnmanagedType.Bool)]
static extern bool AdjustTokenPrivileges(IntPtr TokenHandle, [MarshalAs(UnmanagedType.Bool)]bool DisableAllPrivileges, ref TOKEN_PRIVILEGES NewState, UInt32 BufferLength, IntPtr PreviousState, IntPtr ReturnLength);
[DllImport("advapi32.dll")]
static extern int LookupPrivilegeValue(string lpSystemName, string lpName, out LUID lpLuid);
[DllImport("user32.dll", SetLastError = true)]
static extern int ExitWindowsEx(uint uFlags, uint dwReason);
In production code you should be checking the return values of the API calls, but I left that out to make the example clearer.
A: Just to add to Pop Catalin's answer, here's a one-liner which shuts down the computer without displaying any windows:
Process.Start(new ProcessStartInfo("shutdown", "/s /t 0") { CreateNoWindow = true, UseShellExecute = false });
A: Use shutdown.exe. To avoid problems with passing arguments, with more complex invocations, or with running it from a WinForms app, you can execute the commands through a PowerShell script:
using System.Management.Automation;
...
using (PowerShell PowerShellInstance = PowerShell.Create())
{
    PowerShellInstance.AddScript("shutdown -a; shutdown -r -t 100;");
    // invoke execution on the pipeline (collecting output)
    Collection<PSObject> PSOutput = PowerShellInstance.Invoke();
}
System.Management.Automation.dll must be installed on the OS and available in the GAC.
A: For Windows 10, I needed to add the /f option in order to shut down the PC without any prompt or wait time.
//This did not work for me
Process.Start("shutdown", "/s /t 0");
//But this worked
Process.Start("shutdown", "/s /f /t 0");
A: There is no .NET native method for shutting off the computer. You need to P/Invoke the ExitWindows or ExitWindowsEx API call.
A: If you want to shut down a computer remotely, then you can use
using System.Diagnostics;
and, in a button click handler:
Process.Start("shutdown", "-i");
The -i switch brings up Windows' interactive remote shutdown dialog.
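If you already know the target machine, shutdown.exe also accepts a /m switch, which skips the interactive dialog entirely. A minimal sketch; the machine name below is a placeholder, and the calling account needs shutdown rights on the target:
using System.Diagnostics;

class RemoteShutdown
{
    static void Main()
    {
        // /s = shut down, /m \\NAME = target machine (placeholder name),
        // /t 0 = no countdown. Requires shutdown rights on the remote box.
        Process.Start("shutdown", @"/s /m \\SOME-REMOTE-PC /t 0");
    }
}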
{ "language": "en", "url": "https://stackoverflow.com/questions/102567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "155" }
Q: What is a double underscore in Perl? I'm trying to understand someone else's Perl code without knowing much Perl myself. I would appreciate your help. I've encountered a Perl function along these lines:
MyFunction($arg1,$arg2__size,$arg3)
Is there a meaning to the double-underscore syntax in $arg2, or is it just part of the name of the second argument?
A: As far as the interpreter is concerned, an underscore is just another character allowed in identifiers. It can be used as an alternative to concatenation or camel case to form multi-word identifiers. A leading underscore is often used to mean an identifier is for local use only, e.g. for non-exported parts of a module. It's merely a convention; the interpreter doesn't care.
A: In the context of your question, the double underscore doesn't have any programmatic meaning. Double underscores do mean something special for a limited number of values in Perl, most notably __FILE__ & __LINE__. These are special literals that aren't prefixed with a sigil ($, % or @) and are only interpolated outside of quotes. They contain the full path & name of the currently executing file and the line that is being executed. See the section on 'Special Literals' in perldata or this post on Perl Monks
A: There is no specific meaning to the use of a __ inside of a Perl variable name. It's likely programmer preference, especially in the case that you've cited in your question. You can see more information about Perl variable naming here.
A: As in most languages, an underscore is just part of an identifier; no special meaning. But are you sure it's Perl? There aren't any sigils on the variables. Can you post more context?
A: I'm fairly certain arg2__size is just the name of a variable.
A: Mark's answer is of course correct, it has no special meaning. But I want to note that your example doesn't look like Perl at all. Perl variables aren't barewords. They have the sigils, as you will see from the links above. And Perl doesn't have "functions", it has subroutines. So there may be some confusion about which language we're talking about.
A: You will need to tell the interpreter that "$arg2" is the name of a variable, and not "$arg2__size". For this you will need to use the braces. (This usage is similar to that seen in shell.) This should work:
MyFunction($arg1,${arg2}__size,$arg3)
--Binu
{ "language": "en", "url": "https://stackoverflow.com/questions/102568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: ASP.NET Control/Page Library Question I'm working on a drop in assembly that has predefined pages and usable controls. I am having no difficulties with creating server controls, but I'm wondering what the "best practices" are with dealing with pages in an assembly. Can you compile a page into an assembly and release it as just a dll? How would this be accessed from the client browser's perspective as far as the address they would type or be directed to with a link? As an example, I have a simple login page with the standard username and password text boxes, and the log in button and a "remember me" checkbox, with a "I can't remember my username and/or password" hyperlink. Can I access that page as like a webresource? such as "http://www.site.name/webresource.axd?related_resource_id_codes" A: Your best bet if you want to be able to code it and treat it like a real page is to implement a VirtualPathProvider. Using a virtualpathprovider would allow you to embed the actual aspx as a resource (or put it in a database, whatever) and serve it from there, and still use the asp.net page compilation engine. This would let you still use the visual studio design time tools easily, and prevent you from having to do vast amounts of build customization to precompile the pages. You can see here as well If you don't want to do that, you can try using the aspnet_compiler tool to precompile the aspx and such pages into a dll. This will require some build customization, and tricks to allow serving the pages from the dll. A: You can add an httpHandler element to web.config pointing to your page. Something like: <httpHandlers> <add verb="*" path="login.aspx" type="MyPages.LoginPage, MyPages" /> </httpHandlers>
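To make the VirtualPathProvider suggestion in the first answer concrete, here is a minimal sketch of a provider that serves a single embedded login page. This is an illustration only: the resource name and page path are assumptions, and a production provider would also override DirectoryExists and think harder about caching.
using System;
using System.Collections;
using System.IO;
using System.Reflection;
using System.Web.Caching;
using System.Web.Hosting;

// Serves /login.aspx from a resource embedded in this assembly.
public class EmbeddedPageProvider : VirtualPathProvider
{
    private static bool IsEmbeddedPage(string virtualPath)
    {
        return virtualPath.EndsWith("/login.aspx", StringComparison.OrdinalIgnoreCase);
    }

    public override bool FileExists(string virtualPath)
    {
        return IsEmbeddedPage(virtualPath) || base.FileExists(virtualPath);
    }

    public override VirtualFile GetFile(string virtualPath)
    {
        return IsEmbeddedPage(virtualPath) ? new EmbeddedPageFile(virtualPath) : base.GetFile(virtualPath);
    }

    public override CacheDependency GetCacheDependency(string virtualPath, IEnumerable virtualPathDependencies, DateTime utcStart)
    {
        // No file-system dependency for the embedded page.
        return IsEmbeddedPage(virtualPath) ? null : base.GetCacheDependency(virtualPath, virtualPathDependencies, utcStart);
    }
}

public class EmbeddedPageFile : VirtualFile
{
    public EmbeddedPageFile(string virtualPath) : base(virtualPath) { }

    public override Stream Open()
    {
        // "MyControlLib.Pages.login.aspx" is a made-up resource name; use whatever
        // namespace/folder your embedded .aspx actually compiles under.
        return Assembly.GetExecutingAssembly().GetManifestResourceStream("MyControlLib.Pages.login.aspx");
    }
}
The provider is registered once at startup, e.g. in Global.asax Application_Start, via HostingEnvironment.RegisterVirtualPathProvider(new EmbeddedPageProvider()).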
{ "language": "en", "url": "https://stackoverflow.com/questions/102587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: SQL strip text and convert to integer In my database (SQL 2005) I have a field which holds a comment but in the comment I have an id and I would like to strip out just the id, and IF possible convert it to an int:
activation successful of id 1010101
The line above is the exact structure of the data in the db field. And no I don't want to do this in the code of the application, I actually don't want to touch it, just in case you were wondering ;-)
A: This should do the trick:
SELECT SUBSTRING(column, PATINDEX('%[0-9]%', column), 999) FROM table
Based on your sample data, this assumes that there is only one occurrence of an integer in the string and that it is at the end.
A:
-- Test table, you will probably use some query
DECLARE @testTable TABLE(comment VARCHAR(255))
INSERT INTO @testTable(comment) VALUES ('activation successful of id 1010101')
-- Use Charindex to find "id " then isolate the numeric part
-- Finally check to make sure the number is numeric before converting
SELECT CASE WHEN ISNUMERIC(JUSTNUMBER)=1 THEN CAST(JUSTNUMBER AS INTEGER) ELSE -1 END
FROM ( select right(comment, len(comment) - charindex('id ', comment)-2) as justnumber from @testtable) TT
I would also add that this approach is more set-based and hence more efficient for a bunch of data values. But it is super easy to do it just for one value as a variable. Instead of using the column comment you can use a variable like @chvComment.
A: I don't have a means to test it at the moment, but:
select convert(int, substring(fieldName, len('activation successful of id '), len(fieldName) - len('activation successful of id '))) from tableName
A: If the comment string is EXACTLY like that you can use replace.
select replace(comment_col, 'activation successful of id ', '') as id from ....
It almost certainly won't be though - what about unsuccessful activations? You might end up with nested replace statements
select replace(replace(comment_col, 'activation not successful of id ', ''), 'activation successful of id ', '') as id from ....
[sorry can't tell from this edit screen if that's entirely valid sql]
That starts to get messy; you might consider creating a function and putting the replace statements in that. If this is a one-off job, it won't really matter. You could also use a regex, but that's quite slow (and in any case means you now have 2 problems).
A: Would you be open to writing a bit of code? One option: create a CLR user-defined function, then use Regex. You can find more details here. This will handle complex strings. If your above line is always formatted as 'activation successful of id #######', with your number at the end of the field, then:
declare @myColumn varchar(100)
set @myColumn = 'activation successful of id 1010102'
SELECT @myColumn as [OriginalColumn]
, CONVERT(int, REVERSE(LEFT(REVERSE(@myColumn), CHARINDEX(' ', REVERSE(@myColumn))))) as [DesiredColumn]
Will give you:
OriginalColumn                           DesiredColumn
---------------------------------------- -------------
activation successful of id 1010102      1010102
(1 row(s) affected)
A: CAST(REVERSE(LEFT(REVERSE(@Test),CHARINDEX(' ',REVERSE(@Test))-1)) AS INTEGER)
A: select cast(right(column_name,charindex(' ',reverse(column_name))) as int)
{ "language": "en", "url": "https://stackoverflow.com/questions/102591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Learning Analysis Services Can anyone recommend a good resource -- book, website, article, etc. -- to help me learn SQL Server Analysis Services? I have no knowledge of this technology right now but I do constantly work with SQL Server in the traditional sense. I want to learn about cubes and using Reporting Services with it. I want to start from the bottom, but after I finish with the material, ideally, I'd be able to stumble through a real development project... I'm hoping to get started with a free resource, but if anyone knows of a really good book, I'd take that too. Or, if you don't know of a resource, how did you get started with the technology? Thank you, Frank
A: Take a look Here for a list of AS resources I compiled in answer to a similar question.
A: Pretty outstanding book: Professional SQL Server Analysis Services 2005 with MDX. Gives you a good overview of the architecture of SSAS, as well as the query language MDX, and an administrative/maintenance overview. A good primer for a developer OR a system administrator.
A: My personal favorite book on the topic is Microsoft SQL Server 2005 Analysis Services. Mosha Pasumansky's blog is a great resource once you start learning more about the technology and MDX: http://sqlblog.com/blogs/mosha/default.aspx
A: Here's a link to Analysis Services Books Online. It's a decent resource, and completely free.
{ "language": "en", "url": "https://stackoverflow.com/questions/102600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Can I perform a DNS lookup (hostname to IP address) using client-side Javascript? I would like to use client-side Javascript to perform a DNS lookup (hostname to IP address) as seen from the client's computer. Is that possible? A: Edit: This question gave me an itch, so I put up a JSONP webservice on Google App Engine that returns the clients ip address. Usage: <script type="application/javascript"> function getip(json){ alert(json.ip); // alerts the ip address } </script> <script type="application/javascript" src="http://jsonip.appspot.com/?callback=getip"> </script> Yay, no server proxies needed. Pure JS can't. If you have a server script under the same domain that prints it out you could send a XMLHttpRequest to read it. A: I know this question was asked a very long time ago, but I figured I'd offer a more recent answer. DNS over HTTPS (DoH) You can send DNS queries over HTTPS to DNS resolvers that support it. The standard for DOH is described in RFC 8484. This is a similar thing to what all the other answers suggest, only that DoH is actually the DNS protocol over HTTPS. It's also a "proposed" Internet standard and it's becoming quite popular. For example, some major browsers either support it or have plans to support it (Chrome, Edge, Firefox), and Microsoft is in the process of building it into their operating system. One of the purposes of DoH is: allowing web applications to access DNS information via existing browser APIs in a safe way consistent with Cross Origin Resource Sharing (CORS) There's an open source tool made especially for doing DNS lookups from web applications called dohjs. It does DNS over HTTPS (DoH) wireformat queries as described in RFC 8484. It supports both GET and POST methods. Full disclosure: I am a contributor to dohjs. Another JavaScript library with similar features is found here - https://github.com/sc0Vu/doh-js-client. I haven't used this one personally, but I think it would work client side as well. DNS over HTTPS JSON APIs If you don't want to bother with DNS wireformat, both Google and Cloudflare offer JSON APIs for DNS over HTTPS. * *Google's JSON API Doc: https://developers.google.com/speed/public-dns/docs/doh/json Endpoint: https://dns.google/resolve? *Cloudflare's JSON API Doc: https://developers.cloudflare.com/1.1.1.1/dns-over-https/json-format/ Endpoint https://cloudflare-dns.com/dns-query? Example Javascript code to lookup example.com with Google's JSON DOH API: var response = await fetch('https://dns.google/resolve?name=example.com'); var json = await response.json(); console.log(json); Examples from the RFC for DOH GET and POST with wireformat Here are the examples the RFC gives for both GET and POST (see https://www.rfc-editor.org/rfc/rfc8484#section-4.1.1): GET example: The first example request uses GET to request "www.example.com". 
:method = GET :scheme = https :authority = dnsserver.example.net :path = /dns-query?dns=AAABAAABAAAAAAAAA3d3dwdleGFtcGxlA2NvbQAAAQAB accept = application/dns-message POST example: The same DNS query for "www.example.com", using the POST method would be: :method = POST :scheme = https :authority = dnsserver.example.net :path = /dns-query accept = application/dns-message content-type = application/dns-message content-length = 33 <33 bytes represented by the following hex encoding> 00 00 01 00 00 01 00 00 00 00 00 00 03 77 77 77 07 65 78 61 6d 70 6c 65 03 63 6f 6d 00 00 01 00 01 Other places to send DOH queries You can find a list of some public DNS resolvers that support DNS over HTTPS in a couple places: * *DNSCrypt has a long list of public DoH and DNSCrypt resolver on their Github, and a nice interactive version of the list at https://dnscrypt.info/public-servers/ *Wikipedia - comparison of public recursive nameservers *List on Curl's wiki *(short) list on dnsprivacy.org Of the above resources, I'd say that the list on Curl's wiki and the DNSCrypt list are are probably the most complete and the most frequently updated. Curl's page also includes a list of open source tools for DoH (servers, proxies, client libs, etc). A: There's no notion of hosts or ip-addresses in the javascript standard library. So you'll have to access some external service to look up hostnames for you. I recommend hosting a cgi-bin which looks up the ip-address of a hostname and access that via javascript. A: There's a third-party service which provides a CORS-friendly REST API to perform DNS lookups from the browser - https://exana.io/tools/dns/ A: Very late, but I guess many people will still land here through "Google Airlines". A moderm approach is to use WebRTC that doesn't require server support. https://hacking.ventures/local-ip-discovery-with-html5-webrtc-security-and-privacy-risk/ Next code is a copy&paste from http://net.ipcalf.com/ // NOTE: window.RTCPeerConnection is "not a constructor" in FF22/23 var RTCPeerConnection = /*window.RTCPeerConnection ||*/ window.webkitRTCPeerConnection || window.mozRTCPeerConnection; if (RTCPeerConnection) (function () { var rtc = new RTCPeerConnection({iceServers:[]}); if (window.mozRTCPeerConnection) { // FF needs a channel/stream to proceed rtc.createDataChannel('', {reliable:false}); }; rtc.onicecandidate = function (evt) { if (evt.candidate) grepSDP(evt.candidate.candidate); }; rtc.createOffer(function (offerDesc) { grepSDP(offerDesc.sdp); rtc.setLocalDescription(offerDesc); }, function (e) { console.warn("offer failed", e); }); var addrs = Object.create(null); addrs["0.0.0.0"] = false; function updateDisplay(newAddr) { if (newAddr in addrs) return; else addrs[newAddr] = true; var displayAddrs = Object.keys(addrs).filter(function (k) { return addrs[k]; }); document.getElementById('list').textContent = displayAddrs.join(" or perhaps ") || "n/a"; } function grepSDP(sdp) { var hosts = []; sdp.split('\r\n').forEach(function (line) { // c.f. 
http://tools.ietf.org/html/rfc4566#page-39 if (~line.indexOf("a=candidate")) { // http://tools.ietf.org/html/rfc4566#section-5.13 var parts = line.split(' '), // http://tools.ietf.org/html/rfc5245#section-15.1 addr = parts[4], type = parts[7]; if (type === 'host') updateDisplay(addr); } else if (~line.indexOf("c=")) { // http://tools.ietf.org/html/rfc4566#section-5.7 var parts = line.split(' '), addr = parts[2]; updateDisplay(addr); } }); } })(); else { document.getElementById('list').innerHTML = "<code>ifconfig | grep inet | grep -v inet6 | cut -d\" \" -f2 | tail -n1</code>"; document.getElementById('list').nextSibling.textContent = "In Chrome and Firefox your IP should display automatically, by the power of WebRTCskull."; } A: I am aware this is an old question but my solution may assist others. I find that the JSON(P) services which make this easy do not last forever but the following JavaScript works well for me at the time of writing. <script type="text/javascript">function z (x){ document.getElementById('y').innerHTML=x.query }</script> <script type='text/javascript' src='http://ip-api.com/json/zero.eu.org?callback=z'></script> The above writes my server's IP on the page it is located but the script can be modified to find any IP by changing 'zero.eu.org' to another domain name. This can be seen in action on my page at: http://meon.zero.eu.org/ A: There is a javascript library DNS-JS.com that does just this. DNS.Query("dns-js.com", DNS.QueryType.A, function(data) { console.log(data); }); A: The hosted JSONP version works like a charm, but it seems it goes over its resources during night time most days (Eastern Time), so I had to create my own version. This is how I accomplished it with PHP: <?php header('content-type: application/json; charset=utf-8'); $data = json_encode($_SERVER['REMOTE_ADDR']); echo $_GET['callback'] . '(' . $data . ');'; ?> Then the Javascript is exactly the same as before, just not an array: <script type="application/javascript"> function getip(ip){ alert('IP Address: ' + ip); } </script> <script type="application/javascript" src="http://www.anotherdomain.com/file.php?callback=getip"> </script> Simple as that! Side note: Be sure to clean your $_GET if you're using this in any public-facing environment! A: As many people said you need to use an external service and call it. And that will only get you the DNS resolution from the server perspective. If that's good enough and if you just need DNS resolution you can use the following Docker container: https://github.com/kuralabs/docker-webaiodns Endpoints: [GET] /ipv6/[domain]: Perform a DNS resolution for given domain and return the associated IPv6 addresses. { "addresses": [ "2a01:91ff::f03c:7e01:51bd:fe1f" ] } [GET] /ipv4/[domain]: Perform a DNS resolution for given domain and return the associated IPv4 addresses. { "addresses": [ "139.180.232.162" ] } My recommendation is that you setup your web server to reverse proxy to the container on a particular endpoint in your server serving your Javascript and call it using your standard Javascript Ajax functions. A: Doing this would require to break the browser sandbox. Try to let your server do the lookup and request that from the client side via XmlHttp. 
A: Firefox has a built-in API for this since v60, for WebExtensions: https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/dns/resolve
A: Sure, you can do that in pure JavaScript, without any additional library, by using the dns method:
browser.dns.resolve("example.com");
but it is only available from Firefox 60 onwards (and only to WebExtensions). You can see more information on MDN: https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/dns/resolve
A: My version is like this:
php on my server:
<?php
header('content-type: application/json; charset=utf-8');
$data = json_encode($_SERVER['REMOTE_ADDR']);
$callback = filter_input(INPUT_GET, 'callback', FILTER_SANITIZE_STRING, FILTER_FLAG_ENCODE_HIGH|FILTER_FLAG_ENCODE_LOW);
echo $callback . '(' . $data . ');';
?>
jQuery on the page:
var self = this;
$.ajax({
    url: this.url + "getip.php",
    data: null,
    type: 'GET',
    crossDomain: true,
    dataType: 'jsonp'
}).done( function( json ) {
    self.ip = json;
});
It works cross-domain. It could use a status check. Working on that.
A: Maybe I missed the point, but in reply to NAVY guy, here is how the browser can tell you the requestor's IP address (albeit maybe only that of their service provider).
Place a script tag in the page to be rendered by the client whose src points at another server that is not load balanced (I realize this means you need access to a 2nd server, but hosting is cheap these days and you can set this up easily and cheaply). This is the kind of code that needs to be added to the client page (the page name here is just a stand-in for whichever server page you create below):
<script type="text/javascript" src="https://someServerIown/clientip.aspx"></script>
On the other server "someServerIown" you need to have the ASP, ASPX or PHP page that:
----- contains server code like this:
"<% Response.Write("var clientipaddress = '" & Request.ServerVariables("REMOTE_ADDR") & "';") %>"
(without the outside dbl quotes :-))
---- and writes this code back to the script tag:
var clientipaddress = '178.32.21.45';
This effectively creates a JavaScript variable that you can access with JavaScript on the page, no less. Hopefully, you access this var and write the value to a form control ready for sending back.
When the user posts or gets on the next request, your JavaScript and/or form sends the value of the variable, which "otherServerIown" has filled in for you, back to the server you would like it on.
This is how I get around the dumb load balancer we have that masks the client IP address and makes it appear as that of the load balancer .... dumb ... dumb dumb dumb!
I haven't given the exact solution because everyone's situation is a little different. The concept is sound, however. Also, note that if you are doing this on an HTTPS page, your "otherServerIOwn" must also deliver over HTTPS, otherwise the client is alerted to mixed content. And if you do use HTTPS, make sure ALL your certs are valid, otherwise the client also gets a warning.
Hope it helps someone! Sorry, it took a year to answer/contribute. :-)
A: I don't think this is allowed by most browsers for security reasons, in a pure JavaScript context, as the question asks.
A: If the client has Java installed, you could do something like this:
ipAddress = java.net.InetAddress.getLocalHost().getHostAddress();
Other than that, you will probably have to use a server-side script.
{ "language": "en", "url": "https://stackoverflow.com/questions/102605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "107" }
Q: Number of nodes meeting a conditional based on attributes Below is part of the XML which I am processing with PHP's XSLTProcessor:
<result>
  <uf x="20" y="0"/>
  <uf x="22" y="22"/>
  <uf x="4" y="3"/>
  <uf x="15" y="15"/>
</result>
I need to know how many "uf" nodes exist where x == y. In the above example, that would be 2. I've tried looping and incrementing a counter variable, but I can't redefine variables. I've tried lots of combinations of xsl:number, with count/from, but couldn't get the XPath expression right. Thanks!
A: <xsl:value-of select="count(/result/uf[@y=@x])" />
A: count(/result/uf[@x = @y])
(Note the path must be an unquoted node-set expression; quoting it would turn it into a string, which count() can't take.)
{ "language": "en", "url": "https://stackoverflow.com/questions/102606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Shape of a Winforms MessageBox I am looking for a way to have some control over the shape of a simple MessageBox in Winforms. I would like to control where the passed-in text wraps so that the dialog rect is narrower. Windows seems to want to make the dialog as wide as possible before wrapping the text. Is there an easy way to control the maximum width of the dialog without resorting to creating my own custom form?
A: You can embed newlines in the text to force it to wrap at a certain point. e.g. "message text...\nmore text..."
Update: I posted that thinking it was a Win32 API question, but I think the principle should still apply. I assume WinForms eventually calls MessageBox().
A: There are really just two (sane) ways:
1) Add line breaks to your string yourself to limit the length of each line.
2) Make your own form and use it rather than MessageBox.
A: What happens if you throw your own newlines in the string message you pass it? I'm pretty sure that will work, if I recall correctly.
A: This, or alternatively create your own form and use that.
A: The \n newline chars will give you enough flexibility, then do this. I use this a lot. E.g. if I'm giving a warning, the first line will give the warning, and the next line will give the internal error message or further information as appropriate. If you don't do this, you end up with a very wide message box with very little height! MessageBox only has limited variability - e.g. the button types and icon. If you need more, then create your own. You could then do all sorts of things like add URLs, a Help button, etc.
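To illustrate the newline approach from the answers above, a quick sketch; each \n forces a wrap, so the box grows no wider than its longest line:
using System.Windows.Forms;

class MessageBoxDemo
{
    static void Main()
    {
        // Each \n forces a line break, so the MessageBox is only as wide
        // as the longest single line of text.
        MessageBox.Show("This is a long status message that we break\n" +
                        "into shorter lines ourselves, keeping the\n" +
                        "dialog narrow.", "Status");
    }
}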
{ "language": "en", "url": "https://stackoverflow.com/questions/102614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How would you get the Sql Command objects for a given TableAdapter and SqlDataAdapter in C# (.NET 2.0) I am creating a generic error handling / logging class for our applications. The goal is to log the exception info, info about the class and function (as well as parameters) and, if relevant, the information about the System.Data.SqlClient.SqlCommand object. I would like to be able to handle passing in SqlCommands, TableAdapters, and SqlDataAdapters. I am new to using reflection and I know that it is possible to do this, I am just not sure how to go about it. Please advise.
A: Is this what you're talking about?
SqlDataAdapter da = new SqlDataAdapter();
var cmd1 = ((IDbDataAdapter)da).DeleteCommand;
var cmd2 = ((IDbDataAdapter)da).UpdateCommand;
var cmd3 = ((IDbDataAdapter)da).SelectCommand;
var cmd4 = ((IDbDataAdapter)da).InsertCommand;
The SqlDataAdapter implements IDbDataAdapter, which has getters/setters for all the CRUD commands. The SqlDataAdapter implements these explicitly, so they don't show up in the signature of the class unless you first cast it to the interface. No reflection necessary.
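For the TableAdapter half of the question, the designer-generated TableAdapters wrap a SqlDataAdapter in a non-public "Adapter" property, so that part does need reflection. A hedged sketch; the property name matches what the dataset designer generates today, but treat it as an implementation detail that could change:
using System.Data.SqlClient;
using System.Reflection;

static class TableAdapterHelper
{
    // Pulls the inner SqlDataAdapter out of a designer-generated TableAdapter
    // via its non-public "Adapter" property. Returns null if no such property
    // exists on the object that was passed in.
    public static SqlDataAdapter GetAdapter(object tableAdapter)
    {
        PropertyInfo prop = tableAdapter.GetType().GetProperty(
            "Adapter", BindingFlags.NonPublic | BindingFlags.Instance);
        if (prop == null) return null;
        return (SqlDataAdapter)prop.GetValue(tableAdapter, null);
    }
}
From there you can cast the returned adapter to IDbDataAdapter, exactly as in the answer above, to reach the select/insert/update/delete commands.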
{ "language": "en", "url": "https://stackoverflow.com/questions/102623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: the locale id '0' of the source column 'PAT_NUM_ADT' and the locale id '1033' of the destination column 'PAT_ID_OLD' do not match I get this error when I do a bulk insert with select * from [table_name], and another table name:
the locale id '0' of the source column 'PAT_NUM_ADT' and the locale id '1033' of the destination column 'PAT_ID_OLD' do not match
I tried resetting my db collation but this did not help. Has anyone seen this error?
A: If you are copying less than a full set of fields from one table to another, whether that table is on another domain across the world, or is collocated in the same database, you just have to select them in order. SqlBulkCopyColumnMappings do not work. Yes, I tried. I used all four possible constructors, and I used them both as SqlBulkCopyMapping objects and just by providing the same information to the Add method of SqlBulkCopy.ColumnMappings.Add. My columns are named the same. If you're using a different name as well as a different order, you may well find that you have to actually rename the columns. Good luck.
A: I just had this error message when bulk copying some data. While it might not have been the exact same problem you were having, I was getting the same error. Specifically, I was doing the following:
SELECT NULL AS ColumnName ...
And the destination was a nullable varchar(3). In this case, all I needed to do was update my select statement as follows:
SELECT CONVERT(VARCHAR(3),NULL) AS ColumnName...
This worked perfectly and the error message went away!
A: It is true that SqlBulkCopy sometimes gives this error; the best fix is to map the columns explicitly when you use SqlBulkCopy.
My previous code was:
SqlConnectionStringBuilder cb = new SqlConnectionStringBuilder("Data Source=ServerName;User Id=userid;Password=****;Initial Catalog=Deepak; Pooling=true; Max pool size=200; Min pool size=0");
SqlConnection con = new SqlConnection(cb.ConnectionString);
SqlCommand cmd = new SqlCommand("select Name,Class,Section,RollNo from Student", con);
con.Open();
SqlDataReader rdr = cmd.ExecuteReader();
SqlBulkCopy sbc = new SqlBulkCopy("Data Source=DestinationServer;User Id=destinationserveruserid;Password=******;Initial Catalog=DeepakTransfer; Pooling=true; Max pool size=200; Min pool size=0");
sbc.DestinationTableName = "StudentTrans";
sbc.WriteToServer(rdr);
sbc.Close();
rdr.Close();
con.Close();
The code was giving me the error:
The locale id '0' of the source column 'RollNo' and the locale id '1033' of the destination column 'Section' do not match.
Now, after column mapping, my code runs successfully. My modified code is:
SqlConnectionStringBuilder cb = new SqlConnectionStringBuilder("Data Source=ServerName;User Id=userid;Password=****;Initial Catalog=Deepak;");
SqlConnection con = new SqlConnection(cb.ConnectionString);
SqlCommand cmd = new SqlCommand("select Name,Class,Section,RollNo from Student", con);
con.Open();
SqlDataReader rdr = cmd.ExecuteReader();
SqlBulkCopy sbc = new SqlBulkCopy("Data Source=DestinationServer;User Id=destinationserveruserid;Password=******;Initial Catalog=DeepakTransfer;");
sbc.DestinationTableName = "StudentTrans";
sbc.ColumnMappings.Add("Name", "Name");
sbc.ColumnMappings.Add("Class", "Class");
sbc.ColumnMappings.Add("Section", "Section");
sbc.ColumnMappings.Add("RollNo", "RollNo");
sbc.WriteToServer(rdr);
sbc.Close();
rdr.Close();
con.Close();
This code is running successfully.
A: The answer by sal,
If you are copying less than a full set of fields from one table to another, whether that table is on another domain across the world, or is collocated in the same database, you just have to select them in order. SqlBulkCopyColumnMappings do not work.
is, in my experience, absolutely right! Thanks for posting it. Everything has to be the same: data types, etc. Each time it finds a mismatch it throws the mysterious locale id error. Funny, yet frustrating as h###.
A: I was getting the same error, and it turned out I was copying from a VARCHAR column in the DataTable to an INT. After I changed the data type it worked flawlessly. I successfully copied a subset of fields, specifying proper field mappings (mappings worked by both field name and sequence number). So make sure your data types are correct.
A: I would check what your default locale settings are. Also, you'll need to check the locale of both tables using sp_help to verify they are the same. If they aren't, you'll need to convert to the correct locale.
A: When you change the collation of a database, the table columns keep the old collation, so you need to drop the tables and create them again.
A: A great way to debug this is to take the SQL query being used in your SqlBulkCopy and run it in Management Studio as a select-into. For instance, change select * from [table_name] to select * into newTable from [table_name], then look at the nullability and data types of 'newTable' versus 'table_name'. If there are any differences, then you are likely to end up with this misleading error. Adjust the query or target table until they match, and then your command will work.
A: Many thanks to Deepak Dwivedi for the help. Here is one more hack, with COLLATE DATABASE_DEFAULT, which finally solved the problem for me:
SqlConnectionStringBuilder cb = new SqlConnectionStringBuilder("Data Source=ServerName;User Id=userid;Password=****;Initial Catalog=Deepak;");
SqlConnection con = new SqlConnection(cb.ConnectionString);
SqlCommand cmd = new SqlCommand("select Name COLLATE DATABASE_DEFAULT Name ,Class COLLATE DATABASE_DEFAULT Class ,Section COLLATE DATABASE_DEFAULT Section ,RollNo COLLATE DATABASE_DEFAULT RollNo from Student", con);
con.Open();
SqlDataReader rdr = cmd.ExecuteReader();
SqlBulkCopy sbc = new SqlBulkCopy("Data Source=DestinationServer;User Id=destinationserveruserid;Password=******;Initial Catalog=DeepakTransfer;");
sbc.DestinationTableName = "StudentTrans";
sbc.ColumnMappings.Add("Name", "Name");
sbc.ColumnMappings.Add("Class", "Class");
sbc.ColumnMappings.Add("Section", "Section");
sbc.ColumnMappings.Add("RollNo", "RollNo");
sbc.WriteToServer(rdr);
sbc.Close();
rdr.Close();
con.Close();
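Tying the thread together: the error usually means a type or ordering mismatch between source and destination columns. Here is a hedged sketch for copying a subset of columns, reusing the Student/StudentTrans names from above; the CAST is only illustrative, so adjust it to whatever type your destination column really has:
using System.Data.SqlClient;

class BulkCopySubset
{
    static void Copy(string srcConnString, string destConnString)
    {
        using (SqlConnection src = new SqlConnection(srcConnString))
        using (SqlCommand cmd = new SqlCommand(
            // Cast any source column whose type differs from the destination,
            // otherwise SqlBulkCopy surfaces the misleading locale id error.
            "select Name, CAST(RollNo AS int) AS RollNo from Student", src))
        {
            src.Open();
            using (SqlDataReader rdr = cmd.ExecuteReader())
            using (SqlBulkCopy sbc = new SqlBulkCopy(destConnString))
            {
                sbc.DestinationTableName = "StudentTrans";
                // Explicit name-to-name mappings; without any mappings the
                // columns are matched strictly by ordinal position.
                sbc.ColumnMappings.Add("Name", "Name");
                sbc.ColumnMappings.Add("RollNo", "RollNo");
                sbc.WriteToServer(rdr);
            }
        }
    }
}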
{ "language": "en", "url": "https://stackoverflow.com/questions/102626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to write a crawler? I have had thoughts of trying to write a simple crawler that might crawl and produce a list of its findings for our NPO's websites and content. Does anybody have any thoughts on how to do this? Where do you point the crawler to get started? How does it send back its findings and still keep crawling? How does it know what it finds, etc., etc.?
A: Multithreaded Web Crawler
If you want to crawl a large website, you should write a multi-threaded crawler. Connecting, fetching and writing crawled information to files or a database: these are the three steps of crawling, and if you use a single thread your CPU and network utilization will be poor.
A multi-threaded web crawler needs two data structures: linksVisited (this should be implemented as a hashmap or trie) and linksToBeVisited (this is a queue).
The web crawler uses BFS to traverse the world wide web.
Algorithm of a basic web crawler:
1. Add one or more seed urls to linksToBeVisited. The method to add a url to linksToBeVisited must be synchronized.
2. Pop an element from linksToBeVisited and add it to linksVisited. This pop method must be synchronized.
3. Fetch the page from the internet.
4. Parse the file and add any not-yet-visited link found in the page to linksToBeVisited. URLs can be filtered if needed; the user can give a set of rules to filter which urls should be scanned.
5. The necessary information found on the page is saved in a database or file.
6. Repeat steps 2 to 5 until the linksToBeVisited queue is empty.
Here is a code snippet on how to synchronize the threads....
public void add(String site) {
  synchronized (this) {
    if (!linksVisited.contains(site)) {
      linksToBeVisited.add(site);
    }
  }
}

public String next() {
  if (linksToBeVisited.size() == 0) {
    return null;
  }
  synchronized (this) {
    // Need to check again if size has changed
    if (linksToBeVisited.size() > 0) {
      String s = linksToBeVisited.get(0);
      linksToBeVisited.remove(0);
      linksVisited.add(s);
      return s;
    }
    return null;
  }
}
A: Crawlers are simple in concept. You get a root page via an HTTP GET, parse it to find URLs and put them on a queue unless they've been parsed already (so you need a global record of pages you have already parsed). You can use the Content-Type header to find out what the type of content is, and limit your crawler to only parsing the HTML types. You can strip out the HTML tags to get the plain text, which you can do text analysis on (to get tags, etc., the meat of the page). You could even do that on the alt/title tags for images if you got that advanced. And in the background you can have a pool of threads eating URLs from the queue and doing the same. You want to limit the number of threads, of course.
A: If your NPO's sites are relatively big or complex (having dynamic pages that'll effectively create a 'black hole', like a calendar with a 'next day' link) you'd be better off using a real web crawler, like Heritrix. If the sites total a small number of pages you can get away with just using curl or wget or your own. Just remember, if they start to get big or you start making your script more complex, to just use a real crawler or at least look at its source to see what they are doing and why. Some issues (there are more):
* Black holes (as described)
* Retries (what if you get a 500?)
* Redirects
* Flow control (else you can be a burden on the sites)
* robots.txt implementation
A: Wikipedia has a good article about web crawlers, covering many of the algorithms and considerations.
However, I wouldn't bother writing my own crawler. It's a lot of work, and since you only need a "simple crawler", I'm thinking all you really need is an off-the-shelf crawler. There are a lot of free and open-source crawlers that will likely do everything you need, with very little work on your part.
A: The complicated part of a crawler is if you want to scale it to a huge number of websites/requests. In this situation you will have to deal with some issues like:
* Impossibility of keeping all the info in one database.
* Not enough RAM to deal with huge indexes
* Multithreaded performance and concurrency
* Crawler traps (infinite loops created by changing URLs, calendars, session ids...) and duplicated content.
* Crawling from more than one computer
* Malformed HTML code
* Constant HTTP errors from servers
* Databases without compression, which make your need for space about 8x bigger.
* Recrawl routines and priorities.
* Using requests with compression (Deflate/gzip) (good for any kind of crawler).
And some important things:
* Respect robots.txt
* Add a crawler delay on each request so you don't suffocate web servers.
A: You could make a list of words and make a thread for each word searched on Google. Then each thread will create a new thread for each link it finds in the page. Each thread should write what it finds to a database. When each thread finishes reading the page, it terminates. And there you have a very big database of links in your database.
A: You'll be reinventing the wheel, to be sure. But here are the basics:
* A list of unvisited URLs - seed this with one or more starting pages
* A list of visited URLs - so you don't go around in circles
* A set of rules for URLs you're not interested in - so you don't index the whole Internet
Put these in persistent storage, so you can stop and start the crawler without losing state.
The algorithm is:
while(list of unvisited URLs is not empty) {
    take URL from list
    remove it from the unvisited list and add it to the visited list
    fetch content
    record whatever it is you want to about the content
    if content is HTML {
        parse out URLs from links
        foreach URL {
            if it matches your rules
                and it's not already in either the visited or unvisited list
                add it to the unvisited list
        }
    }
}
A: Use wget, do a recursive web suck, which will dump all the files onto your hard drive, then write another script to go through all the downloaded files and analyze them.
Edit: or maybe curl instead of wget, but I am not familiar with curl; I do not know if it does recursive downloads like wget.
A: I'm using Open Search Server for my company's internal search; try this: http://open-search-server.com. It's also open source.
A: I did a simple web crawler using Reactive Extensions in .NET.
https://github.com/Misterhex/WebCrawler public class Crawler { class ReceivingCrawledUri : ObservableBase<Uri> { public int _numberOfLinksLeft = 0; private ReplaySubject<Uri> _subject = new ReplaySubject<Uri>(); private Uri _rootUri; private IEnumerable<IUriFilter> _filters; public ReceivingCrawledUri(Uri uri) : this(uri, Enumerable.Empty<IUriFilter>().ToArray()) { } public ReceivingCrawledUri(Uri uri, params IUriFilter[] filters) { _filters = filters; CrawlAsync(uri).Start(); } protected override IDisposable SubscribeCore(IObserver<Uri> observer) { return _subject.Subscribe(observer); } private async Task CrawlAsync(Uri uri) { using (HttpClient client = new HttpClient() { Timeout = TimeSpan.FromMinutes(1) }) { IEnumerable<Uri> result = new List<Uri>(); try { string html = await client.GetStringAsync(uri); result = CQ.Create(html)["a"].Select(i => i.Attributes["href"]).SafeSelect(i => new Uri(i)); result = Filter(result, _filters.ToArray()); result.ToList().ForEach(async i => { Interlocked.Increment(ref _numberOfLinksLeft); _subject.OnNext(i); await CrawlAsync(i); }); } catch { } if (Interlocked.Decrement(ref _numberOfLinksLeft) == 0) _subject.OnCompleted(); } } private static List<Uri> Filter(IEnumerable<Uri> uris, params IUriFilter[] filters) { var filtered = uris.ToList(); foreach (var filter in filters.ToList()) { filtered = filter.Filter(filtered); } return filtered; } } public IObservable<Uri> Crawl(Uri uri) { return new ReceivingCrawledUri(uri, new ExcludeRootUriFilter(uri), new ExternalUriFilter(uri), new AlreadyVisitedUriFilter()); } public IObservable<Uri> Crawl(Uri uri, params IUriFilter[] filters) { return new ReceivingCrawledUri(uri, filters); } } and you can use it as follows: Crawler crawler = new Crawler(); IObservable observable = crawler.Crawl(new Uri("http://www.codinghorror.com/")); observable.Subscribe(onNext: Console.WriteLine, onCompleted: () => Console.WriteLine("Crawling completed"));
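To complement the Rx version above with something closer to the pseudocode posted earlier, here is a minimal single-threaded sketch. It is a toy: naive regex link extraction, no robots.txt, no retries, no content-type check (all issues called out in other answers), and the seed URL and page cap are arbitrary placeholders.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text.RegularExpressions;

class MiniCrawler
{
    static void Main()
    {
        var toVisit = new Queue<Uri>();
        var visited = new HashSet<Uri>();
        toVisit.Enqueue(new Uri("http://example.com/")); // placeholder seed

        using (var client = new HttpClient())
        {
            while (toVisit.Count > 0 && visited.Count < 100) // hard page cap
            {
                Uri url = toVisit.Dequeue();
                if (!visited.Add(url)) continue; // already seen

                string html;
                try { html = client.GetStringAsync(url).Result; }
                catch { continue; } // 500s, timeouts: just skip in this demo

                Console.WriteLine(url); // "record whatever you want" goes here

                // Naive href extraction; a real crawler should use an HTML parser.
                foreach (Match m in Regex.Matches(html,
                         "href\\s*=\\s*\"(http[^\"]+)\"", RegexOptions.IgnoreCase))
                {
                    Uri link;
                    if (Uri.TryCreate(m.Groups[1].Value, UriKind.Absolute, out link)
                        && !visited.Contains(link))
                        toVisit.Enqueue(link);
                }
            }
        }
    }
}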
{ "language": "en", "url": "https://stackoverflow.com/questions/102631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64" }
Q: CSS Margin Collapsing So essentially does margin collapsing occur when you don't set any margin or padding or border to a given div element?
A: No. When you have two adjacent vertical margins, the greater of the two is used and the other is ignored. So, for instance, if you have two block-display elements, A, followed by B beneath it, and A has a bottom-margin of 3em, while B has a top-margin of 2em, then the distance between them will be 3em. Border and padding prevent collapsing only where they actually separate two margins, most notably between a parent's margin and the margin of its first or last child; for two adjacent siblings like A and B, nothing sits between the two margins, so they still collapse regardless of borders or padding on the elements themselves. If you don't set any margins, then there won't be any margins to collapse. It has nothing whatsoever to do with the element type in use - it is applicable to all element types, not just <div> elements. Read the CSS 2.1 specification for more details.
A: "the expression collapsing margins means that adjoining margins (no non-empty content, padding or border areas or clearance separate them) of two or more boxes (which may be next to one another or nested) combine to form a single margin."
Source: Box Model - 8.3.1 Collapsing margins
{ "language": "en", "url": "https://stackoverflow.com/questions/102640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: How should I move queued messages from IIS to Exchange on different servers? We currently have a company email server with Exchange, and a bulk email processing server that is using IIS SMTP. We are upgrading to a 3rd party MTA (zrinity xms) for bulk sending. I need to be able to keep sending the messages already queued for IIS when we switch to the 3rd party software. Can I simply move the IIS queue files to the Exchange server queue and have sending attempts begin automatically for them? If not, any suggestions on accomplishing this?
A: You should be able to move the *.eml files to the Exchange server's pickup directory. Or set the IIS SMTP service to smart host to the new MTA, assuming they (the 3rd party) allow SMTP relay from your IP address.
A: Moving the files will work. However, any email with a BCC line in the header will get sent out with the BCC intact. Some clients, such as Gmail, will display the information to the recipient, thus breaking the whole point of BCC. This happens when copying EML files to MS-SMTP (which Exchange also uses) because the BCC information is usually stripped out of the header during the SMTP hand-off to (not from) MS-SMTP. If that was how the messages were initially handed off, then it's possible that the EMLs you have were already broken into separate messages for each BCC, and that header was properly stripped. Just a little gotcha to watch out for.
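If you take the pickup-directory route from the first answer, the move itself is just file I/O. A rough sketch; the mailroot paths below are the usual defaults only, so verify them on your own servers, and mind messages still being written while you move:
using System.IO;

class QueueMover
{
    static void Main()
    {
        // Typical default locations for the IIS SMTP queue and pickup
        // directories; confirm both paths on your install before running.
        const string queueDir = @"C:\Inetpub\mailroot\Queue";
        const string pickupDir = @"C:\Inetpub\mailroot\Pickup";

        foreach (string file in Directory.GetFiles(queueDir, "*.eml"))
        {
            // Dropping a message into Pickup makes the SMTP service
            // attempt delivery again automatically.
            File.Move(file, Path.Combine(pickupDir, Path.GetFileName(file)));
        }
    }
}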
{ "language": "en", "url": "https://stackoverflow.com/questions/102647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Wrong 'Local Path' in DCOM Config entry I have a component in DCOM Config whose 'Local Path' (on the General tab of the Properties page for that component in dcomcnfg) is pointing to the wrong place. However when I go to that directory and unregister the component using "componentname.exe /unregserver", the Local Path for that component remains unchanged. I've also tried going to the correct directory and registering the component there, using "componentname.exe /regserver", but the value in 'Local Path' still doesn't change. Any suggestions? A: Sounds to me like that componentname.exe is not using the ProgID/GUID that you think it's using. Either that or its register/unregister commands aren't working. Do you have the source? You could step through the registration routine and see what it's doing.
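If you want to see exactly what the registration routine wrote, you can read the server path straight out of the registry, since dcomcnfg's Local Path is typically taken from the component's LocalServer32 key. A hedged sketch; the CLSID below is a placeholder for your component's actual GUID:
using System;
using Microsoft.Win32;

class DcomPathCheck
{
    static void Main()
    {
        // Placeholder GUID -- substitute the CLSID of your component,
        // which you can find via its ProgID under HKEY_CLASSES_ROOT.
        const string clsid = "{00000000-0000-0000-0000-000000000000}";

        using (RegistryKey key = Registry.ClassesRoot.OpenSubKey(
            @"CLSID\" + clsid + @"\LocalServer32"))
        {
            Console.WriteLine(key == null
                ? "No LocalServer32 registration found."
                : "Registered path: " + key.GetValue(null)); // default value
        }
    }
}
If the path shown here differs from the one in dcomcnfg, that is a strong hint the exe is registering under a different ProgID/GUID than the one you are looking at, as the answer suggests.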
{ "language": "en", "url": "https://stackoverflow.com/questions/102649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Are there any resources about the PHP XMLWriter functionality? The PHP documentation can be found here, but I think it's rather lacking. There are no examples of how to use these functions, and few (if any) of the pages have user comments. So where might I be able to find an explanation (and example code) on how to use these functions to write an XML document? A: I don't know any other resources, but I found the examples in the comments on this page quite helpful. A: I'd recommend looking at the DOM functions over the SimpleXML ones - it's much more robust. Not as simple, but definitely has more features.
{ "language": "en", "url": "https://stackoverflow.com/questions/102652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Prevent a PostBack from showing up in the History I have a page where I submit some data, and return to the original form with a "Save Successful" message. However, the user would like the ability to return to the previous page they were at (which contains search results) by clicking the browser's "Back" button. However, due to the postback, when they click the "Back" button they do not go to the previous page ,they simply go to the same page (but at its previous state). I read that enabling SmartNavigation will take care of this issue (postbacks appearing in the history) however, it has been deprecated. What's the "new" best practice? *Edit - I added a ScriptManager control, and wrapped the buttons in an UpdatePanel, however now I'm receiving the following error: Type 'System.Web.UI.UpdatePanel' does not have a public property named 'Button' Am I missing a reference? *Disregard the above edit, I simply forgot to add the < ContentTemplate > section to the UpdatePanel :P A: If you put your "Save" button in an UpdatePanel, the postback will not show in the users history. A: I would avoid if possible. A better solution would be to have a button that just returns them to their search results on the "Save Successful" screen. The problem with the ajaxy saving and such is that you violate the "Back" rules that users expect. This user might want the Back button to go back to the Search page, but other users might expect that clicking Back would return them to the Add/Update page. So if another user tries to update something, clicks save, and then "woops, i forgot something on the update", they'll click back, and now they're at search results, instead of the expected Update page.
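For anyone who hits the same error mentioned in the edit above: controls go inside a ContentTemplate element, not directly inside the UpdatePanel. The markup ends up looking roughly like this (IDs and the handler name are placeholders):
<asp:ScriptManager ID="ScriptManager1" runat="server" />
<asp:UpdatePanel ID="UpdatePanel1" runat="server">
    <ContentTemplate>
        <%-- Controls must sit inside ContentTemplate, not directly
             inside the UpdatePanel tag --%>
        <asp:Button ID="SaveButton" runat="server" Text="Save"
                    OnClick="SaveButton_Click" />
    </ContentTemplate>
</asp:UpdatePanel>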
{ "language": "en", "url": "https://stackoverflow.com/questions/102657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Issue with clipboard I have a legacy app I am currently supporting that is having problems when people copy large quantities of data from a datasheet view. The app is built in MS Access and the number of rows being copied can get pretty large (sometimes in the thousands). The funny thing about it is, you can paste the data out, but then Access keeps "rendering" the data into different formats and becomes CPU bound for LONG periods of time. The status message beside the progress bar at the bottom right of the MS Access window is
Rendering Data to format: Biff5
Biff5 is a "Binary Interchange File Format (BIFF) version 5", according to Source. The app code doesn't use BIFF5 anywhere so I don't think this is an app problem. I cannot find any data on this error anywhere on the web, so I thought it would be a good question for Stack Overflow. So, can anyone help please?
A: Instead of trying to copy-paste, can't you just export the query to Excel?
A: I am not sure what the problem is, but sometimes you can run into some very quirky bugs with Access. Have you tried running this on different machines? Different OS's? Would it be possible to paste the data into Excel and then import it into Access using the import functionality? Can you import the data directly instead of pasting it?
A: We are all on the same OS here for this. I am investigating the possibility that some update sent out in the last maintenance window has caused this, as it wasn't a problem before this and there have been no new releases of the software in that time period. Tried it on lots of machines, same issue on them all. The problem is actually with copying from a datasheet view in Access and pasting to Excel, not the other way around, strangely. Here is the use case:
Access --> "Copy from datasheet" (normal Ctrl+C) --> "paste into Excel" (normal Ctrl+V) (this works fine!)
When you then go back to Access to continue working, it is CPU bound doing the "Rendering Data to format:" thing I mentioned above. I'm stumped to be honest, it's all a bit strange.
A: Try the copy-paste operation through VBA. Once the user has selected the data to copy, you can execute the code below when they click a button in the form, and then do a PasteSpecial in Excel:
' --- Data selected by user ---
RunCommand acCmdCopy
Dim xlApp As Object
Set xlApp = CreateObject(Class:="Excel.Application")
'New Excel Workbook
Dim xlWbook As Object 'Excel.Workbook
Set xlWbook = xlApp.Workbooks.Add
'Grab a worksheet in the new workbook to paste into
Dim xlWSheet As Object 'Excel.Worksheet
Set xlWSheet = xlWbook.Worksheets(1)
'Paste in excel
xlWSheet.Range("A1").Select
xlWSheet.PasteSpecial Link:=False, DisplayAsIcon:=False, Format:="Biff5"
{ "language": "en", "url": "https://stackoverflow.com/questions/102683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How does c# figure out the hash code for an object? This question comes out of the discussion on tuples. I started thinking about the hash code that a tuple should have. What if we will accept KeyValuePair class as a tuple? It doesn't override the GetHashCode() method, so probably it won't be aware of the hash codes of it's "children"... So, run-time will call Object.GetHashCode(), which is not aware of the real object structure. Then we can make two instances of some reference type, which are actually Equal, because of the overloaded GetHashCode() and Equals(). And use them as "children" in tuples to "cheat" the dictionary. But it doesn't work! Run-time somehow figures out the structure of our tuple and calls the overloaded GetHashCode of our class! How does it work? What's the analysis made by Object.GetHashCode()? Can it affect the performance in some bad scenario, when we use some complicated keys? (probably, impossible scenario... but still) Consider this code as an example: namespace csharp_tricks { class Program { class MyClass { int keyValue; int someInfo; public MyClass(int key, int info) { keyValue = key; someInfo = info; } public override bool Equals(object obj) { MyClass other = obj as MyClass; if (other == null) return false; return keyValue.Equals(other.keyValue); } public override int GetHashCode() { return keyValue.GetHashCode(); } } static void Main(string[] args) { Dictionary<object, object> dict = new Dictionary<object, object>(); dict.Add(new KeyValuePair<MyClass,object>(new MyClass(1, 1), 1), 1); //here we get the exception -- an item with the same key was already added //but how did it figure out the hash code? dict.Add(new KeyValuePair<MyClass,object>(new MyClass(1, 2), 1), 1); return; } } } Update I think I've found an explanation for this as stated below in my answer. The main outcomes of it are: * *Be careful with your keys and their hash codes :-) *For complicated dictionary keys you must override Equals() and GetHashCode() correctly. A: Here are the proper Hash and equality implementations for the Quad tuple (contains 4 tuple components inside). This code ensures proper usage of this specific tuple in HashSets and the dictionaries. More on the subject (including the source code) here. Note usage of the unchecked keyword (to avoid overflows) and throwing NullReferenceException if obj is null (as required by the base method) public override bool Equals(object obj) { if (ReferenceEquals(null, obj)) throw new NullReferenceException("obj is null"); if (ReferenceEquals(this, obj)) return true; if (obj.GetType() != typeof (Quad<T1, T2, T3, T4>)) return false; return Equals((Quad<T1, T2, T3, T4>) obj); } public bool Equals(Quad<T1, T2, T3, T4> obj) { if (ReferenceEquals(null, obj)) return false; if (ReferenceEquals(this, obj)) return true; return Equals(obj.Item1, Item1) && Equals(obj.Item2, Item2) && Equals(obj.Item3, Item3) && Equals(obj.Item4, Item4); } public override int GetHashCode() { unchecked { int result = Item1.GetHashCode(); result = (result*397) ^ Item2.GetHashCode(); result = (result*397) ^ Item3.GetHashCode(); result = (result*397) ^ Item4.GetHashCode(); return result; } } public static bool operator ==(Quad<T1, T2, T3, T4> left, Quad<T1, T2, T3, T4> right) { return Equals(left, right); } public static bool operator !=(Quad<T1, T2, T3, T4> left, Quad<T1, T2, T3, T4> right) { return !Equals(left, right); } A: Check out this post by Brad Abrams and also the comment by Brian Grunkemeyer for some more information on how object.GetHashCode works. 
Also, take a look at the first comment on Ayende's blog post. I don't know if the current releases of the Framework still follow these rules or if they have actually changed it like Brad implied. 
A: Don't override GetHashCode() and Equals() on mutable classes; only override them on immutable classes or structures. Otherwise, if you modify an object used as a key, the hash table won't function properly anymore (you won't be able to retrieve the value associated with the key after the key object was modified). 
Also, hash tables don't use hash codes to identify objects; they use the key objects themselves as identifiers. It's not required that all keys used to add entries to a hash table return different hash codes, but it is recommended that they do, else performance suffers greatly. 
A: It seems that I have a clue now. I thought KeyValuePair was a reference type, but it is not; it is a struct. And so it uses the ValueType.GetHashCode() method. MSDN for it says: "One or more fields of the derived type is used to calculate the return value". If you take a real reference type as a "tuple provider" you'll cheat the dictionary (or yourself...). 
using System.Collections.Generic;

namespace csharp_tricks
{
    class Program
    {
        class MyClass
        {
            int keyValue;
            int someInfo;

            public MyClass(int key, int info)
            {
                keyValue = key;
                someInfo = info;
            }

            public override bool Equals(object obj)
            {
                MyClass other = obj as MyClass;
                if (other == null) return false;
                return keyValue.Equals(other.keyValue);
            }

            public override int GetHashCode()
            {
                return keyValue.GetHashCode();
            }
        }

        class Pair<T, R>
        {
            public T First { get; set; }
            public R Second { get; set; }
        }

        static void Main(string[] args)
        {
            var dict = new Dictionary<Pair<int, MyClass>, object>();
            dict.Add(new Pair<int, MyClass>() { First = 1, Second = new MyClass(1, 2) }, 1);

            //this is a pair of the same values as the previous one! but... no exception this time...
            dict.Add(new Pair<int, MyClass>() { First = 1, Second = new MyClass(1, 3) }, 1);

            return;
        }
    }
}

A: I don't have the book reference anymore, and I'll have to find it just to confirm, but I thought the default base hash just hashed together all of the members of your object. It got access to them because of the way the CLR worked, so it wasn't something that you could write as well as they had. That is completely from memory of something I briefly read, so take it for what you will. 
Edit: The book was Inside C# from MS Press. The one with the saw blade on the cover. The author spent a good deal of time explaining how things were implemented in the CLR, how the language translated down to MSIL, etc., etc. If you can find the book it's not a bad read. 
Edit: From the link provided it looks like Object.GetHashCode() uses an internal field in the System.Object class to generate the hash value. Each object created is assigned a unique object key, stored as an integer, when it is created. These keys start at 1 and increment every time a new object of any type gets created. Hmm, I guess I need to write a few of my own hash codes if I expect to use objects as hash keys. 
A: 
so probably it won't be aware of the hash codes of its "children". 
Your example seems to prove otherwise :-) The hash code for the key MyClass and the value 1 is the same for both KeyValuePairs. The KeyValuePair implementation must be using both its Key and Value for its own hash code. Moving up, the dictionary class wants unique keys. It is using the hash code provided by each key to figure things out. 
Remember that the runtime isn't calling Object.GetHashCode(); it is calling the GetHashCode() implementation provided by the instance you give it. Consider a more complex case: 
using System.Collections.Generic;
using System.Diagnostics;

public class HappyClass
{
    enum TheUnit
    {
        Points,
        Picas,
        Inches
    }

    class MyDistanceClass
    {
        int distance;
        TheUnit units;

        public MyDistanceClass(int theDistance, TheUnit unit)
        {
            distance = theDistance;
            units = unit;
        }

        public static int ConvertDistance(int oldDistance, TheUnit oldUnit, TheUnit newUnit)
        {
            // insert real unit conversion code here :-)
            return oldDistance * 100;
        }

        /// <summary>
        /// Figure out if we are equal distance, converting into the same units of measurement if we have to
        /// </summary>
        /// <param name="obj">the other guy</param>
        /// <returns>true if we are the same distance</returns>
        public override bool Equals(object obj)
        {
            MyDistanceClass other = obj as MyDistanceClass;

            if (other == null) return false;

            if (other.units != this.units)
            {
                int newDistance = MyDistanceClass.ConvertDistance(other.distance, other.units, this.units);
                return distance.Equals(newDistance);
            }
            else
            {
                return distance.Equals(other.distance);
            }
        }

        public override int GetHashCode()
        {
            // even if the distance is equal in spite of the different units, the objects are not
            return distance.GetHashCode() * units.GetHashCode();
        }
    }

    static void Main(string[] args)
    {
        // these are the same distance... 72 points = 1 inch
        MyDistanceClass distPoint = new MyDistanceClass(72, TheUnit.Points);
        MyDistanceClass distInch = new MyDistanceClass(1, TheUnit.Inches);

        Debug.Assert(distPoint.Equals(distInch), "these should be true!");
        Debug.Assert(distPoint.GetHashCode() != distInch.GetHashCode(), "But yet they are fundamentally different values");

        Dictionary<object, object> dict = new Dictionary<object, object>();
        dict.Add(new KeyValuePair<MyDistanceClass, object>(distPoint, 1), 1);

        //this should not barf
        dict.Add(new KeyValuePair<MyDistanceClass, object>(distInch, 1), 1);

        return;
    }
}

Basically... in the case of my example, you'd want two objects that are the same distance to return "true" for Equals, but yet return different hash codes. (Note that this deliberately violates the usual Equals/GetHashCode contract - equal objects are supposed to return equal hash codes - which is exactly why the dictionary accepts both entries.)
{ "language": "en", "url": "https://stackoverflow.com/questions/102690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: What is the best option for running a Jabber/XMPP server on Windows 2003? I'm looking to run a Jabber server on a Windows 2003 server (web farm) and would like some practical advice from anyone who has run a live environment with ~500 concurrent users. Criteria for comment: 
* 
*Performance 
*Capacity (i.e. ~number of concurrent users) 
*Stability 

A: Openfire is a good GPL Java implementation of a Jabber server. It has plenty of optional plugins you can use, and it can integrate quite well with Active Directory. OpenFire 
A: I think you're going to need to be a bit more explicit - are you looking for server configurations, or software, e.g. a Jabber server? If you're thinking of a Jabber server, ejabberd is probably the most stable and flexible, capable of being clustered, etc. A really useful comparison of open source servers is here... http://www.saint-andre.com/jabber/jsc/
{ "language": "en", "url": "https://stackoverflow.com/questions/102704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: CSS (Stylesheet) organization and colors I just finished a medium-sized web site and one thing I noticed about my CSS organization was that I have a lot of hard-coded colour values throughout. This obviously isn't great for maintainability. Generally, when I design a site I pick 3-5 main colours for a theme. I end up setting some default values for paragraphs, links, etc... at the beginning of my main CSS, but some components will change the colour (like the legend tag, for example) and require me to restyle with the colour I wanted. How do you avoid this? I was thinking of creating separate rules for each colour and just using those when I need to restyle. i.e. .color1 { color: #3d444d; } 
A: One thing I've done here is break out my palette declarations from other style/layout markup, grouping commonly-colored items in lists, e.g. 
h1 { padding... margin... font-family... }
p { ... }
code { ... }

/* time passes */

/* these elements are semantically grouped by color in the design */
h1, p, code { color: #ff0000; }

On preview, JeeBee's suggestion is a logical extension of this: if it makes sense to handle your color declarations this way (and, of course, this can apply to other style issues, though color has the unique property of not changing layout), you might consider pushing it out to a separate CSS file, yeah. This makes it easier to hot-swap color-only thematic variations, too, by just targeting one or another colorxxx.css profile as your include. 
A: That's exactly what you should do. The more centralized you can make your CSS, the easier it will be to make changes in the future. And let's be serious, you will want to change colors in the future. You should almost never hard-code any CSS into your HTML; it should all be in the CSS. Also, something I have started doing more often is to layer your CSS classes on each other, to make it even easier: change a color once and have it represented everywhere. Sample (random colors) CSS: 
.main_text {color:#444444;}
.secondary_text{color:#765123;}
.main_color {background:#343434;}
.secondary_color {background:#765daa;}

Then some markup; notice how I am layering the color classes with other classes, so that I can just change ONE CSS class: 
<body class='main_text'>
    <div class='main_color secondary_text'>
        <span class='secondary_color main_text'>bla bla bla</span>
    </div>
    <div class='main_color secondary_text'>
        You get the idea...
    </div>
</body>

Remember... inline css = bad (most of the time) 
A: See: Create a variable in .CSS file for use within that .CSS file To summarize, you have three basic options: 
* 
*Use a macro pre-processor to replace constant color names in your stylesheets. 
*Use client-side scripting to configure styles. 
*Use a single rule for every color, listing all selectors for which it should apply (my fav...) 

A: Maybe pull all the color information into one part of your stylesheet. For example change this: 
p .frog tr.mango {
   color: blue;
   margin: 1px 3em 2.5em 4px;
   position: static;
}

#eta .beta span.pi {
   background: green;
   color: red;
   font-size: small;
   float: left;
}
/* ... */

to this: 
p .frog tr.mango {
   color: blue;
}

#eta .beta span.pi {
   background: green;
   color: red;
}
/* ... */

p .frog tr.mango {
   margin: 1px 3em 2.5em 4px;
   position: static;
}

#eta .beta span.pi {
   font-size: small;
   float: left;
}
/* ... */

A: I sometimes use PHP, and make the file something like style.css.php. 
Then you can do this: 
<?php
header("Content-Type: text/css");
$colour1 = '#ff9';
?>

.username {color: <?=$colour1;?>; }

Now you can use that colour wherever you want, and only have to change it in one place. This also works for values other than colours, of course. 
A: You could have a colours.css file with just the colours/images for each tag in it. Then you can change the colours just by replacing the file, or having a dynamically generated CSS file, or having different CSS files available and selecting based upon website URL/subfolder/property/etc. Or you can have colour tags as you write, but then your HTML turns into: 
<p class="body grey">Blah</p>

CSS should have a feature where you can define values for things like colours that you wish to be consistent through a style, defined in one place only. Still, there's search and replace. 
A: So you're saying you don't want to go back into your CSS to change color values if you find another color 'theme' that might work better? Unfortunately, I don't see a way around this. CSS defines styles, and with color being one of them, the only way to change it is to go into the CSS and change it. Of course, you could build yourself a little program that will allow you to change the CSS file by picking from a color wheel on a webpage or something, which will then write that value into the CSS file using the FileSystemObject or something, but that's a lot more work than required for sure. 
A: Generally it's better to just find and replace the colours you are changing. Anything more powerful than that will be more complex with few benefits. 
A: CSS is not your answer. You want to look into an abstraction on top of CSS like SASS. This will allow you to define constants and generally clean up your CSS. Here is a list of CSS Frameworks. 
A: I keep a list of all the colors I've used at the top of the file. 
A: When the CSS is served by a server-side script, e.g. PHP, coders usually make the CSS a template file and substitute the colors at run-time. This might be used to let users choose a color model, too. Another way, to avoid parsing this file each time (although cache should take care of that), or just if you have a static site, is to make such a template and parse it with some script/static template engine before uploading it to the server. Search/replace can work, except when two initially distinct colors end up being the same: hard to separate them again after that! :-) If I am not mistaken, CSS3 should allow such parametrization. But I won't hold my breath until this feature is available in 90% of the browsers surfing the Net! 
A: I like the idea of separating the colour information into a separate file, no matter how I do it. I would accept multiple answers here if I could, because I like Josh Millard's as well. I like the idea of having separate colour rules though, because a particular tag might have different colours depending on where it occurs. Maybe a combination of both of these techniques would be good: 
h1, p, code { color: #ff0000; }

and then also have 
.color1 { color: #ff0000; }

for when you need to restyle. 
A: This is where SASS comes to help you.
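For instance, a small sketch in SCSS syntax (the variable names and colours are arbitrary): 
// Define the palette once...
$color1: #3d444d;
$color2: #b4d455;

// ...and reference it everywhere; change the variable, recompile, done.
p, h1, code { color: $color1; }
legend      { color: $color2; }
a:hover     { color: darken($color2, 15%); }

The compiler inlines the values, so the browser still only ever sees plain CSS.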
{ "language": "en", "url": "https://stackoverflow.com/questions/102720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Why is '397' used for ReSharper GetHashCode override? Like many of you, I use ReSharper to speed up the development process. When you use it to override the equality members of a class, the code-gen it produces for GetHashCode() looks like: 
    public override int GetHashCode()
    {
        unchecked
        {
            int result = (Key != null ? Key.GetHashCode() : 0);
            result = (result * 397) ^ (EditableProperty != null ? EditableProperty.GetHashCode() : 0);
            result = (result * 397) ^ ObjectId;
            return result;
        }
    }

Of course I have some of my own members in there, but what I want to know is: why 397? 
* 
*EDIT: So my question would be better worded as: is there something 'special' about the prime number 397, beyond it being a prime number? 

A: The hash that ReSharper uses looks like a variant of the FNV hash. FNV is frequently implemented with different primes. There's a discussion on the appropriate choice of primes for FNV here. 
A: Probably because 397 is a prime of sufficient size to cause the result variable to overflow and mix the bits of the hash somewhat, providing a better distribution of hash codes. There's nothing particularly special about 397 that distinguishes it from other primes of the same magnitude.
{ "language": "en", "url": "https://stackoverflow.com/questions/102742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "164" }
Q: "Carriage Return" gets stripped out when compresssing file When compressing a string "stream" the '/r' gets stripped out from '/r/n'. I am using the ICSharp.zip library for compression. Has any one else faced this problem, and if you have is there is a workaround? A: Does your zip library have a parameter to treat the stream as either text or binary? It sounds like it's treating it as text and is changing the line-end delimiter (some apps do this to try and make sure it matches the target platform). If you can tell it to treat the data as binary it might help. A: Try DotNetZip. It's a managed-code library, doesn't have problems with CR/LF translation. fee. open source. DotNetZip on CodePlex
{ "language": "en", "url": "https://stackoverflow.com/questions/102758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Database safety: Intermediary "to_be_deleted" column/table? Everyone has accidentally forgotten the WHERE clause on a DELETE query and blasted some un-backed-up data once or twice. I was pondering that problem, and I was wondering if the solution I came up with is practical. What if, in place of actual DELETE queries, the application and maintenance scripts did something like: 
UPDATE foo SET to_be_deleted=1 WHERE blah = 50;

And then a cron job was set to go through and actually delete everything with the flag? The downside would be that pretty much every other query would need to have WHERE to_be_deleted != 1 appended to it, but the upside would be that you'd never mistakenly lose data again. You could see "2,349,325 rows affected" and say, "Hmm, looks like I forgot the WHERE clause," and reset the flags. You could even make the to_be_deleted field a DATE column, so the cron job would check to see if a row's time had come yet. Also, you could remove DELETE permission from the production database user, so even if someone managed to inject some SQL into your site, they wouldn't be able to remove anything. So, my question is: Is this a good idea, or are there pitfalls I'm not seeing? 
A: That is fine if you want to do that, but it seems like a lot of work. How many people are manually changing the database? It should be very few, especially if your users have an app to work with. When I work on the production db I put EVERYTHING I do in a transaction, so if I mess up I can roll back. Just having a standard practice like that has helped me. I don't see anything really wrong with it, other than that every single point of data manipulation in each application will have to be aware of this functionality and not just the data it wants. 
A: This would be fine as long as your application does not require that the data be immediately deleted, since you have to wait for the next interval of the cron job. I think a better solution, and the more common practice, is to use a development server and a production server. If your development database gets blown out, simply reload it. No harm done. If you're testing code on your production database, you deserve anything bad that happens. 
A: A lot of people have a delete flag or a row status flag. But if someone is doing a change through the back end (and they will be doing it, since often people need batch changes done that can't be accomplished through the front end) and they make a mistake, they will still often go for DELETE. Ultimately this is no substitute for testing the script before applying it to a production environment. Also... what happens if the following query gets executed: "UPDATE foo SET to_be_deleted=1" because they left off the WHERE clause? Unless you have auditing columns with a timestamp, how do you know which rows were flagged intentionally and which were flagged in error? But even if you have auditing columns with a timestamp, if the auditing is done via a stored procedure or programmer convention, then these back-end queries may not supply information letting you know that they were just applied. 
A: Too complicated. The standard approach to this is to do all your work inside a transaction, so if you screw up and forget a WHERE clause, then you simply roll back when you see the "2,349,325 rows affected" result. 
A: It may be easier to create a parallel table for deleted rows. A DELETE trigger (and an UPDATE trigger too, if you want to undo changes as well) on the original table could copy the affected rows to the parallel table. 
Adding a datetime column to the parallel table to record the date & time of the change would let you permanently remove rows past a certain age using your cron job. That way, you'd use normal DELETE statements on the original table, so there's no chance you'll forget to run your special "DELETE" statement. You also sidestep the to_be_deleted != 1 expression, which is just a bug waiting to happen when someone inevitably forgets. 
A: It looks like you're describing three cases here. 
* 
*Case 1 - maintenance scripts. Risk can be minimized by developing them and testing them in an environment other than your production box. For quick maintenance, do the maintenance in a single transaction, and check everything before committing. If you made a mistake, issue the rollback command. For more serious maintenance that you can't necessarily wait around for, or do in a single transaction, consider taking a backup directly before running the maintenance job, so that you can always restore back to the point before you ran your script if you encounter serious problems. 
*Case 2 - SQL Injection. This is an architecture issue. Your application shouldn't pass SQL into the database, access should be controlled through packages / stored procedures / functions, and values that are going to come from the UI and be used in a DML statement should be applied using bind variables, rather than by creating dynamic SQL by appending strings together. 
*Case 3 - Regular batch jobs. These should have been tested before being deployed to production. If you delete too much, you have a bug, and are going to have to rely on your backup strategy. 

A: Everyone has accidentally forgotten the WHERE clause on a DELETE query and blasted some un-backed-up data once or twice. 
No. I always prototype my DELETEs as SELECTs, and only if the latter gives the results I want to delete do I change the part before the WHERE into a DELETE. This lets me inspect in any needed detail the rows I want to affect before doing anything. 
A: You could set up a view on that table that selects WHERE to_be_deleted != 1, and all of your normal selects are done on that view - that avoids having to put the WHERE on all of your queries. 
A: The pitfall is that it's unnecessarily complicated and someone will inadvertently forget to check the flag in their query. There's also the issue of potentially needing to delete something immediately instead of waiting for the scheduled job to run. 
A: To avoid the to_be_deleted WHERE clause you could create a trigger that fires before the delete command, to insert the deleted rows into a separate table. This table could be cleared out when you're sure everything in it really needs to be deleted, or you could keep it around for archive purposes. 
A: You also get a "soft delete" feature, so you can give (certain) end-users the power of "undo" - there would have to be a pretty strong downside in the mix to cancel the benefits of soft deleting. 
A: The "WHERE to_be_deleted <> 1" on every other query is a huge one. Another is: once you've run your accidental rogue query, how will you determine which of the 2,349,325 rows were previously marked as deleted? I think the practical solution is regular backups, and failing that, perhaps a delete trigger that captures the tuples to be axed. 
A: The other option would be to create a delete trigger on each table. When anything is deleted, it would insert that "to be deleted" record into another table, ideally named TABLENAME_deleted. The downside would be that the db would have twice as many tables. 
I don't recommend triggers in general, but this might be what you are looking for. 
A: This is why, whenever you are editing data by hand, you should BEGIN TRAN, edit your data, check that it looks good (for instance, that you didn't delete more data than you were expecting) and then COMMIT TRAN (or ROLLBACK if it doesn't). If you're using Postgres then you want to create lots of savepoints as well, so that a typo doesn't wipe out your intermediate work. But that said, in many applications it does make sense to have software mark records as invalid rather than deleting them. Add a last_modified date that is automatically updated, and you are all prepared to set up incremental updates into a data warehouse. Even if you don't have a data warehouse now, it never hurts to prepare for the future when preparing is cheap. Plus, in the event of manual mistakes you still have the data, and can just find all of the records that got "deleted" when you made your mistake and fix them. (You should still use transactions though.)
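For concreteness, here is that transaction habit as a quick sketch (T-SQL; the table and predicate are made up): 
BEGIN TRAN;

DELETE FROM foo
WHERE blah = 50;

-- "2 rows affected", as expected? Good:
COMMIT TRAN;

-- "2,349,325 rows affected"? You forgot the WHERE clause:
-- ROLLBACK TRAN;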
{ "language": "en", "url": "https://stackoverflow.com/questions/102759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Apache and J2EE sharing security realms/logins, single sign-on Here is the situation I'd like to create: 
* 
*www.blah.com/priv - protected by Apache HTTP Basic Auth, realm "foo" 
*www.blah.com/application - protected by Tomcat/Servlet HTTP Basic Auth, realm "foo" 
*A user accesses /priv, Apache requests login info, they provide it and are given access 
*The same user then requests /application. Since they have authenticated to the "foo" realm in the previous step, I would like them to be let in directly. 
*If another user accesses /application without first going to /priv, Tomcat requires authentication (and then they could also later access /priv without having to re-authenticate) 

Basically, I want Apache and Tomcat to share authentication realms and, ideally, user databases. How could this be best achieved? 
A: Have you already tried to do this and failed? I ask because HTTP Basic authentication takes place purely by adding an HTTP header to a request; that is to say, once you're authenticated against a given realm on a given server, your browser adds an additional header to your request (e.g., "Authorization: Basic amxldmludnskZXZsaW4="), and the server acknowledges that you're authenticated because of that header. So given your example, and given some ad-hoc testing I just did, I suspect that the setup you describe will just work without any additional effort on your part.
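For illustration, the two matching realm declarations might look something like this (file paths are hypothetical, and actually sharing one user database between Apache and Tomcat is a separate exercise): 
# Apache side (httpd.conf) - protects /priv with realm "foo"
<Location /priv>
    AuthType Basic
    AuthName "foo"
    AuthUserFile /etc/httpd/conf/users.foo
    Require valid-user
</Location>

<!-- Tomcat side (web.xml of the app mapped to /application) -->
<login-config>
    <auth-method>BASIC</auth-method>
    <realm-name>foo</realm-name>
</login-config>

Since both challenges come from the same host with the same realm name, the browser replays its cached Authorization header instead of prompting again.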
{ "language": "en", "url": "https://stackoverflow.com/questions/102778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the best way to encrypt a very short string in PHP? I would like to encrypt strings which could potentially be as short as three or four characters, but could run to about twenty characters. A hashing function (md5, sha1, crypt, etc.) is not suitable, as I would like to be able to decrypt the information as well. The mcrypt extension has a thoroughly daunting array of possibilities. Does anyone have any ideas about the best way to safely encrypt short strings, and why? Does anyone have any links to any material introducing a casual programmer to practical encryption scenarios? 
A: I like to use GnuPG for anything that needs to be encrypted on a server and then possibly decrypted either on the same server or on another server (which is usually my case). This allows for an extra level of security, since in my scenario the encrypting server doesn't have the key to decrypt the data. It also allows for easier manual decryption. There are a few good wrappers available for various languages (another advantage); one for PHP is the GnuPG PHP Class. 
A: mcrypt is linked into most builds of PHP by default. It contains all the primitives you're likely to need. Without knowing more about what you're encrypting, what your threat model is, etc., it's hard to give concrete recommendations on what algorithm, mode of operation, etc. to use. One thing I can say for certain: with short text strings, it's more vital than ever that you MUST use a unique, random Initialization Vector. Otherwise, it's trivial for someone to mount a variety of attacks against the encrypted data. 
A: I highly recommend the suggestions of Chris Kite. Without knowing more about what you're doing, why, and the threats you anticipate needing to protect against, AES-128 is likely sufficient. The ability to use symmetric encryption is great for a standalone app that will be both the decryptor and encryptor of data. As both Chris Kite and Arachnid said, due to the small size of your data it's advised that you pad the data and use a random Initialization Vector. Update: As for why... if the data is small enough, and the IV can be predicted, it's possible to brute-force the plain text by generating cipher text for every combination of plain text with the known IV and matching it up to the captured cipher text. In short, this is essentially how rainbow tables work. Now if you're going to encrypt on one server and decrypt on another, I'd go with the suggestions of pdavis. By using an asymmetric method you're able to separate the encryption keys from the decryption keys. This way, if the server that encrypts data is compromised, the attacker is still unable to decrypt the data. If you're able to, it'd help the community to know more about your use case for the encryption. As I mentioned above, having a proper understanding of plausible threats is key when evaluating security controls. 
A: Does it matter if anybody can decrypt it? If you're just trying to obfuscate it a little, use ROT13. It's old school. 
A: If you want to encrypt and decrypt data within an application, you most likely want to use a symmetric key cipher. AES, which is the symmetric block encryption algorithm certified by the NSA for securing top secret data, is your best choice. There is a pure-PHP implementation available at www.phpaes.com. For your use it sounds like AES-128 is sufficient. You will want to use CBC mode with a random initialization vector, or else the same data will always produce the same ciphertext. 
Choosing the right encryption algorithm is a good first step, but there are many factors of a secure system which are hard to get right, such as key management. There are good resources out there, such as Applied Cryptography by Bruce Schneier, and Security Engineering by Ross Anderson (available for free online). 
A: I agree with Chris Kite - just use AES-128; this is more than sufficient. I don't know your environment exactly, but I guess you're transmitting the data somehow through the internet. Don't use ECB; this will always produce the same result for the same plain text. CBC mode is the way to go, and don't forget a random initialization vector. This vector has to be communicated with the cipher text and can be sent in the clear. Regarding your data, since AES is a block cipher, the outcome is always a multiple of the block size. If you don't want to let the observer know whether your data is short or long, add some padding to extend it up to the maximum expected size. 
A: Any two-way encryption algorithm such as Blowfish will do, I guess. Blowfish is fast and open. (Note that PHP's crypt() function is one-way, so for reversible Blowfish encryption you'd use the mcrypt extension instead.) AFAIK there are no encryption algorithms that work especially well on small strings. One thing to be aware of, though, is that brute-forcing such small strings will be very easy. Maybe you should encrypt the string along with a 'secret' salt value for additional security. 
A: You can use general programming ideas without relying on built-in encryption/decryption functions. Example: create a function, call it 
function encryptstring($string) {
    $string_length = strlen($string);
    $encrychars = "";

    /**
    *For each character of the given string generate the code
    */
    for ($position = 0; $position < $string_length; $position++) {
        $key = (($string_length + $position) + 1);
        $key = (255 + $key) % 255;
        $get_char_to_be_encrypted = SUBSTR($string, $position, 1);
        $ascii_char = ORD($get_char_to_be_encrypted);
        $xored_char = $ascii_char ^ $key;  //xor operation
        $encrypted_char = CHR($xored_char);
        $encrychars .= $encrypted_char;
    }

    /**
    *Return the encrypted/decrypted string
    */
    return $encrychars;
}

On the page with the link, include the IDs required to be encrypted: 
/**
*While passing the unique value to a link
*Do the following steps
*/
$id = 57; //or if you are fetching it automatically just pass it here

/**
*For more security multiply by some value
*You can set the multiplication value in a config file
*/
$passstring = $id * 346244;
$encrypted_string = encryptstring($passstring);
$param = urlencode($encrypted_string);

/**
*Derive the url for the link
*(note: the parameter name must not contain '#', or everything
*after it would be treated as the URL fragment)
*/
echo '<a href="target_file.php?aZ98_9A_KL=' . $param . '">something</a>';

On the target file that gets opened after the link is clicked: 
/**
*Retrieving the value in the target file
*Do the following steps
*/
$fetchid = $_GET['aZ98_9A_KL'];
$passstring = urldecode(stripslashes($fetchid));
$decrypted_string = encryptstring($passstring);

/**
*Divide the decrypted value by the same value we used for the multiplication
*/
$actual_id = $decrypted_string / 346244;
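For completeness, here is a minimal sketch of the AES-CBC-with-random-IV approach the earlier answers recommend, using the mcrypt extension (Rijndael-128 is AES; the key and strings below are placeholders, and real code should manage the key properly): 
<?php
function encrypt_short_string($plaintext, $key)
{
    $ivSize = mcrypt_get_iv_size(MCRYPT_RIJNDAEL_128, MCRYPT_MODE_CBC);
    $iv = mcrypt_create_iv($ivSize, MCRYPT_DEV_URANDOM); // random IV, never reuse
    $cipher = mcrypt_encrypt(MCRYPT_RIJNDAEL_128, $key, $plaintext, MCRYPT_MODE_CBC, $iv);
    return base64_encode($iv . $cipher); // ship the IV along with the ciphertext
}

function decrypt_short_string($encoded, $key)
{
    $data = base64_decode($encoded);
    $ivSize = mcrypt_get_iv_size(MCRYPT_RIJNDAEL_128, MCRYPT_MODE_CBC);
    $iv = substr($data, 0, $ivSize);
    $cipher = substr($data, $ivSize);
    $plain = mcrypt_decrypt(MCRYPT_RIJNDAEL_128, $key, $cipher, MCRYPT_MODE_CBC, $iv);
    return rtrim($plain, "\0"); // mcrypt zero-pads input to the block size
}

$key = 'sixteen byte key'; // exactly 16 bytes for AES-128
$token = encrypt_short_string('abc', $key);
echo decrypt_short_string($token, $key); // prints "abc"
?>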
{ "language": "en", "url": "https://stackoverflow.com/questions/102788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Table and List view with single Model in Qt I have a 2D model where each row represents a frame in a video, and each column represents an object. The object can have different states on each frame, and this is stored in the model. Then I have a QTableView that shows this data. The model has header data, so each row has a header like "frame k" and each column has a header like "object n". This table is editable. But I want the user to be able to edit it another way. The other way is a graphics view that shows a single frame. Below the graphics view is a list (oriented horizontally) that represents each frame. This way the user can click on a frame in the list and the graphics view then displays that frame. The problem is that the list displays the first column of each row in the model. What I want it to do is show the header of each row instead (so the list says "frame 1, frame 2, etc."). Is there a way to do this? 
A: Two possible solutions: 
* 
*Try to use a proxy model (a subclass of QAbstractProxyModel) which accesses row headers as columns in a single row. Not trivial, because the proxy model displays as data what the original model considers to be header data. 
*Display a second 2D view of your model, but hide everything except for the column headers. Since your frames are rows, you'll need a proxy model to transpose between rows and columns. 

DISCLAIMER: I did not actually implement any of the solutions.
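For illustration, a minimal sketch of a third option in the same spirit: a thin read-only list model that exposes the source model's vertical (row) headers as list items. This is untested Qt 4 era code, and forwarding of header-change signals from the source model is omitted for brevity: 
#include <QAbstractListModel>

// Wraps an existing model and shows its row headers ("frame 1", "frame 2", ...)
// as the items of a one-column list, suitable for a QListView.
class HeaderListModel : public QAbstractListModel
{
public:
    explicit HeaderListModel(QAbstractItemModel *source, QObject *parent = 0)
        : QAbstractListModel(parent), m_source(source) {}

    int rowCount(const QModelIndex &parent = QModelIndex()) const
    {
        return parent.isValid() ? 0 : m_source->rowCount();
    }

    QVariant data(const QModelIndex &index, int role = Qt::DisplayRole) const
    {
        if (!index.isValid())
            return QVariant();
        // Delegate straight to the source model's vertical header.
        return m_source->headerData(index.row(), Qt::Vertical, role);
    }

private:
    QAbstractItemModel *m_source;
};

Usage would be something like listView->setModel(new HeaderListModel(frameModel, listView)); then connect the list view's selection to whatever updates the graphics view.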
{ "language": "en", "url": "https://stackoverflow.com/questions/102789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Data visualization framework for a web server? I need a framework for generating charts for data visualization. I picked up Processing www.processing.org/ but at the moment I cannot run it in "headless" mode from a web server. Is there any other candidate for this domain? What are the options if you need more chart types than what is supported by out-of-the-box solutions? -Bharani 
A: JFreeChart is an option, or a reporting software tool that does all the footwork of data analysis for you. I can recommend i-net Clear Reports, seeing as how I work for i-net software and all... ;). 
A: Try using Silverlight for the client-side chart drawing. A great set of open-source Silverlight charts has been published on CodePlex - check out http://silverlight.codeplex.com/ Visifire is a free charting library available in Silverlight. And a tutorial is up here 
A: I am now using the Graphics2D that comes with the JDK. With this you are no longer limited in the chart types - simply concentrate on the data pattern and let Graphics2D do its job. 
A: There is also Gnuplot, if you're into calling stuff through the command line, and Matplotlib, if you're using Python.
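To flesh out the JFreeChart suggestion, a minimal headless sketch that renders a chart straight to a PNG file (the dataset values and file name are made up): 
import java.io.File;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartUtilities;
import org.jfree.chart.JFreeChart;
import org.jfree.chart.plot.PlotOrientation;
import org.jfree.data.category.DefaultCategoryDataset;

public class ChartDemo {
    public static void main(String[] args) throws Exception {
        // JFreeChart draws into an off-screen image, so no display is needed;
        // on a UNIX server you may still want -Djava.awt.headless=true.
        DefaultCategoryDataset dataset = new DefaultCategoryDataset();
        dataset.addValue(42, "visits", "Mon");
        dataset.addValue(57, "visits", "Tue");
        dataset.addValue(31, "visits", "Wed");

        JFreeChart chart = ChartFactory.createBarChart(
                "Visits per day",       // chart title
                "Day",                  // x-axis label
                "Visits",               // y-axis label
                dataset,
                PlotOrientation.VERTICAL,
                false, false, false);   // legend, tooltips, URLs

        // In a servlet you could write to response.getOutputStream() instead.
        ChartUtilities.saveChartAsPNG(new File("visits.png"), chart, 500, 300);
    }
}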
{ "language": "en", "url": "https://stackoverflow.com/questions/102801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you enable the network on a virtual machine running Vista x64? I'm running Server 2008 64-bit with Hyper-V. I've created a virtual machine with Vista 64-bit and installed it. I can't get the Vista virtual machine to see the network adapter. I've set up an external network in the Virtual Network Manager (Hyper-V) and associated that with the virtual machine (Vista). I've also tried using a Legacy Network Adapter, but that didn't work either, although that time the Vista machine saw the network card but couldn't connect through it. This is (obviously) the first time I've tried to set up a virtual machine. Any ideas? EDIT: I notice that this question has been voted down a couple of times. I know that it's not a programming question, but I'm a developer setting up a virtual machine to test my C#/ASP.NET code on, and thought that other developers may hit this problem as well when they're doing this... 
A: I don't know Hyper-V, but I know in VMware you can create a network connection in Bridged mode (meaning the VM will get its own IP address via DHCP if that's enabled) or host-only mode (meaning the VM can only communicate with the host). When Vista could see the card, could it communicate with the host machine (which would indicate a host-only connection was specified)? What kind of IP address did it have (I would guess Hyper-V has a built-in DHCP server like VMware does?) -- that might give additional clues. Sorry I don't know Hyper-V better... 
A: Make sure you have the Hyper-V Tools installed on the Guest VM. You shouldn't need the legacy adapter. You also may want to make sure you have all of the latest updates, which may have addressed your issue. Particularly, KB950050 http://support.microsoft.com/kb/950050 
A: It turns out that Vista x64 running as a VM through Hyper-V doesn't support the virtual network connection/card, and that you have to set it up as a legacy network card. When I eventually got the config settings correct for the legacy network and disabled the virtual network, it connected. Thanks for the help guys - much appreciated!
{ "language": "en", "url": "https://stackoverflow.com/questions/102804", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Best Free Text Editor Supporting *More Than* 4GB Files? I am looking for a text editor that will be able to load a 4+ gigabyte file into it. Textpad doesn't work. I own a copy of it and have been to its support site; it just doesn't do it. Maybe I need new hardware, but that's a different question. The editor needs to be free OR, if it's going to cost me, then no more than $30. For Windows. 
A: glogg could also be considered, for a different usage: Caveat (reported by Simon Tewsi in the comments, Feb. 2013) One caveat - it has two search functions, Main Search and Quick Find. The lower one, which I assume is Quick Find, is at least an order of magnitude slower than the upper one, which is fast. 
A: Have you tried the ConTEXT editor? It is small and fast. 
A: I stumbled on this post many times, as I often need to handle huge files (10 gigas+). After being tired of buggy and pretty limited freeware, and not willing to pay for costly editors after the trial expired (not worth the money after all), I just used VIM for Windows with great success and satisfaction. It is simply PERFECT for this need, fully customizable, with ALL the features one can think of when dealing with text files (searching, replacing, reading, etc., you name it). I am very surprised nobody answered that (except a previous answer, but for MacOS)... For the record I stumbled on it in this blog post, which wisely advised it. 
A: It's really tough to handle a 4G file as such. I used to handle larger text files, but I never used to load them into my editor. I mostly used UltraEdit in my previous company; now I use Notepad++, but I would get just those parts which I needed to edit. (In most cases, the files never needed an edit.) Why do you want to load such a big file into an editor? When I handled files of these sizes, I used GNU Core Utils. The most common operations I performed on those files were head (to get the top 250k lines etc.), tail, split, sort, shuf, uniq, etc. It's really powerful. There's a lot of things you can do with GNU Core Utils. I would definitely recommend those, instead of a new editor. 
A: Sorry to post on such an old thread, but I tried several of the tips here, and none of them worked for me. It's slightly different than a text editor, but I found that Beyond Compare could handle an extremely large (3.6 gig) file on my Vista 32-bit machine. This is a file that Emacs, Large Text File Viewer, HexEdit, and Notepad++ all choked on. -Eric 
A: My favourite after trying a few to read a 6GB mysqldump file: PilotEdit Lite http://www.pilotedit.com/ Because: 
* 
*Memory usage has (somehow?!) never gone above 25MB, so basically no impact on the rest of my system - though it took several minutes to open. 
*There was an accurate progress bar during that time, so I knew how it was getting on. 
*Once open, simple searching and browsing through the file all worked as well as with a small notepad file. 
*It's free. 

Others I tried... The EmEditor Pro trial was very impressive; the file opened almost instantly, but unfortunately it's too expensive for my requirements. EditPad Pro loaded the whole 6GB file into memory and slowed everything to a crawl. 
A: I've had to look at monster (runaway) log files (20+ GB). I used the hexedit FREE version, which can work with files of any size. It is also open source. It is a Windows executable. 
A: Jeff Atwood has a post on this here: http://www.codinghorror.com/blog/archives/000229.html He eventually went with EditPad Pro, because "Based on my prior usage history, I felt that EditPad Pro was the best fit: it's quite fast on large text files, has best-of-breed regex support, and it doesn't pretend to be an IDE." 
A: Instead of loading a gigantic log file into an editor, I'm using Unix command line tools like grep, tail, gawk, etc. to filter the interesting parts into a much smaller file, and then I open that. On Windows, try Cygwin. 
A: For Windows, Unix, or Mac? On the Mac or *nix you can use command line or GUI versions of emacs or vim. For the Mac: TextWrangler handles big files well. I'm not versed enough in the Windows landscape to help out there. 
A: When I'm faced with an enormous log file, I don't try to look at the whole thing; I use Free File Splitter. Admittedly this is a workaround rather than a solution, and there are times when you would need the whole file. But often I only need to see a few lines from a larger file, and that seems to be your problem too. If not, maybe others would find that utility useful. A viewer that lets you see enormous text files isn't much help if you are trying to get it loaded into Excel to use the Autofilter, for example. Since we all spend the day breaking down problems into smaller parts to be able to solve them, applying the same principle to a large file didn't strike me as contentious. 
A: If you just want to view a large file rather than edit it, there are a couple of freeware programs that read files a chunk at a time rather than trying to load the entire file into memory. I use these when I need to read through large (> 5 GB) files. Large Text File Viewer by swiftgear http://www.swiftgear.com/ltfviewer/features.html Big File Viewer by Team Walrus. You'll have to find the link yourself for that last one, because I can only post a maximum of one hyperlink, being a newbie. 
A: HxD -- it's a hex editor, but it allows in-place edits, and doesn't barf on large files. 
A: Tweak is a hex editor which can handle edits to very large files, including inserts and deletes. 
A: EmEditor should handle this. As their site claims: EmEditor is now able to open files even larger than 248 GB (or 2.1 billion lines) by opening a portion of the file with the new custom bar - Large File Controller. The Large File Controller allows you to specify the beginning point, end point, and range of the file to be opened. It also allows you to stop the opening of the file and monitor the real size of the file and the size of the temporary disk available. Not free though. 
A: I found that FAR Commander could open large files (I tried a 4.2 GB XML file). It does not load the entire file into memory and works fast. 
A: Opened a 5GB file (quickly) with: 1) Hex Editor Neo 2) 010 Editor 
A: Textpad also works well at opening files that size. I have done it many times when having to deal with extremely large log files in the 3-5GB range. Also, using grep to pull out the worthwhile lines and then looking at those works great. 
A: The question would need more details. Do you want just to look at a file (e.g. a log file) or to edit it? Do you have more memory than the size of the file you want to load, or less? For example, TheGun, a very small text editor written in assembly language, claims to "not have an effective file size limit and the maximum size that can be loaded into it is determined by available memory and loading speed of the file. [...] 
It has been speed optimised for both file load and save." To abstract the memory limit, I suppose one can use mapped memory. But then, if you need to edit the file, some clever method should be used, like storing in memory the local changes, and applying them chunk by chunk when saving. Might be ineffective in some cases (big search/replace, for example). 
A: I have had problems with TextPad on 4G files too. Notepad++ works nicely. 
A: I also like Notepad++. 
A: Emacs can handle huge file sizes and you can use it on Windows or *nix. 
A: What OS and CPU are you using? If you are using a 32-bit OS, then a process on your system physically cannot address more than 4GB of memory. Since most text editors try to load the entire file into memory, I doubt you'll find one that will do what you want. It would have to be a very fancy text editor that can do out-of-core processing, i.e. load a chunk of the file at a time. You may be able to load such a huge file if you use a 64-bit text editor on a computer with a 64-bit CPU and a 64-bit operating system. And you have to make sure that you have enough space in your swap partition or your swap file. 
A: Why do you want to load a 4+ GB file into memory? Even if you find a text editor that can do that, does your machine have 4 GB of memory? And unless it has a lot more than 4 GB in physical memory, your machine will slow down a lot and go swap-file crazy. So why do you want a 4+ GB file? If you want to transform it, or do a search and replace, you may be better off writing a small quick program to do it.
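As a footnote to the GNU Core Utils answer above, that "small quick program" can often just be a short shell pipeline (file names and patterns are made up): 
# Peek at the start and end of the file
head -n 50 huge.log
tail -n 50 huge.log

# Pull only the interesting lines into something an editor can handle
grep "ERROR" huge.log > errors.log

# Or slice the file into 500 MB pieces and open just one of them
split -b 500m huge.log part_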
{ "language": "en", "url": "https://stackoverflow.com/questions/102829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "82" }
Q: Get Form request with Seam/JSF I have a query form that I would like to submit as a GET request, so the result page may be bookmarked and is otherwise RESTful. It's your classic text field with a submit button. How do I induce Seam/JSF to use GET, and include the query expression as a parameter, rather than POST, the default? 
A: All you need to do is enable the SeamFilter in web.xml. See the Blog Example for an example RESTful application using Seam. The key is to use a Seam page parameter, defined in WEB-INF/pages.xml. 
A: You can use a PhaseListener to convert POST requests to GET requests, or just to interpret GET requests so that they can be bookmarkable. This page should explain it in more detail: http://balusc.blogspot.com/2007/03/post-redirect-get-pattern.html 
A: If you are using s:button or s:link, your form will use the GET method.
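To tie the answers together, a sketch of what the page parameter from the first answer might look like in WEB-INF/pages.xml (syntax as I recall it from the Seam 2 blog example; the view id, parameter name, and bean are hypothetical): 
<pages xmlns="http://jboss.com/products/seam/pages">
    <page view-id="/search.xhtml">
        <!-- maps ?q=... on the URL to and from the component property -->
        <param name="q" value="#{searchService.searchPattern}"/>
    </page>
</pages>

With that in place, an s:link or s:button targeting /search.xhtml gets the parameter appended to its generated GET URL, so the result page is bookmarkable.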
{ "language": "en", "url": "https://stackoverflow.com/questions/102832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I use ASP.NET with Visual Studio 2008 I haven't used Visual Studio since VB 3 and am trying to give it a shot with ASP.NET. It seems that it should be able to connect to a website (via some sort of FTP-like protocol, I figure) and allow editing without having to manually upload/download the files. Is this the way it is supposed to work, or am I misunderstanding? I have tried using 'create new website' and 'open website' with my testing domain (hosted by GoDaddy, wondering if that may be the issue as well), but each time it gives me errors. I'm not sure if I'm doing something wrong or trying to do something it wasn't meant to. 
A: You really don't want to be working directly on a live web site, do you? That's just crazy. One little mistake and you've hosed the site. Visual Studio now has its own built-in web server. You use that for testing. If you really don't want to use that, you can put IIS on your local machine or set up a Dev/QA server somewhere. In that case, you'd edit it via a file share. You should be using some kind of source control. Even for a single developer it's very important. When finished with a programming session, you check your updates back into source control. Finally, only after the site's gone through a suitable QA process is the production server updated from source control, not from within Visual Studio. 
A: I would develop your website locally and FTP it to your GoDaddy site afterwards, or use the Publish Website feature in VS
{ "language": "en", "url": "https://stackoverflow.com/questions/102833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Posting forms to a 404 + HttpHandler in IIS7: why has all POST data gone missing? OK, this might sound a bit confusing and complicated, so bear with me. We've written a framework that allows us to define friendly URLs. If you surf to any arbitrary URL, IIS tries to display a 404 error (or, in some cases, 403;14 or 405). However, IIS is set up so that anything directed to those specific errors is sent to an .aspx file. This allows us to implement an HttpHandler to handle the request and do stuff, which involves finding an associated template and then executing whatever's associated with it. Now, this all works in IIS 5 and 6 and, to an extent, on IIS7 - but for one catch, which happens when you post a form. See, when you post a form to a non-existent URL, IIS says "ah, but that URL doesn't exist" and throws a 405 "method not allowed" error. Since we're telling IIS to redirect those errors to our .aspx page and therefore handling them with our HttpHandler, this normally isn't a problem. But as of IIS7, all POST information has gone missing after being redirected to the 405. And so you can no longer do the most trivial of things involving forms. To solve this we've tried using an HttpModule, which preserves POST data but appears to not have an initialized Session at the right time (when it's needed). We also tried using an HttpModule for all requests, not just the missing requests that hit 404/403;14/405, but that means stuff like images, CSS, JS, etc. are being handled by .NET code, which is terribly inefficient. Which brings me to the actual question: has anyone ever encountered this, and does anyone have any advice or know what to do to get things working again? So far someone has suggested using Microsoft's own URL Rewriting module. Would this help solve our problem? Thanks. 
A: Since IIS7 uses .NET from the top down, there would not be any performance overhead in using an HttpModule. In fact, there are several managed HttpModules that are always used on every request. When the BeginRequest event is fired, the SessionStateModule may not have been added to the Modules collection, so if you try to handle the request during this event, no session state info will be available. Setting the HttpContext.Handler property will initialize the session state if the requested handler needs it, so you can just set the handler to your fancy 404 page that implements IRequiresSessionState. The code below should do the trick, though you may need to write a different implementation for the IsMissing() method: 
using System.Web;
using System.Web.UI;

class Smart404Module : IHttpModule
{
    public void Dispose() {}

    public void Init(HttpApplication context)
    {
        context.BeginRequest += new System.EventHandler(DoMapping);
    }

    void DoMapping(object sender, System.EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;
        if (IsMissing(app.Context))
            app.Context.Handler = PageParser.GetCompiledPageInstance(
                "~/404.aspx", app.Request.MapPath("~/404.aspx"), app.Context);
    }

    bool IsMissing(HttpContext context)
    {
        string path = context.Request.MapPath(context.Request.Url.AbsolutePath);
        // the request maps to a real file (or to a directory with a default
        // document), so it is NOT missing
        if (System.IO.File.Exists(path) || (System.IO.Directory.Exists(path)
            && System.IO.File.Exists(System.IO.Path.Combine(path, "default.aspx"))))
            return false;
        return true;
    }
}

Edit: I added an implementation of IsMissing(). Note: On IIS7, the session state module does not run globally by default. 
There are two options: enable the session state module for all requests (see my comment above regarding running managed modules for all request types), or you could use reflection to access internal members inside System.Web.dll. 
A: Microsoft released a hotfix for this: http://support.microsoft.com/default.aspx/kb/956578 
A: The problem in IIS 7 of POST variables not being passed through to custom error handlers is fixed in Service Pack 2 for Vista. Haven't tried it on Windows Server, but I'm sure it will be fixed there too. 
A: Just a guess: the handler specified in IIS7's %windir%\system32\inetsrv\config\applicationhost.config which is handling your request is not allowing the POST verb to get through at all, and it is evaluating that rule before determining whether the URL exists. 
A: Yes, I would definitely recommend URL rewriting (using Microsoft's IIS7 module or one of the many alternatives). This is specifically designed for providing friendly URLs, whereas error documents are a last-ditch backstop for failures, which tends to munge the incoming data so it may not be what you expect.
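For reference, with Microsoft's URL Rewrite module installed, a rule along these lines rewrites unknown URLs to the handler page on the server side, which keeps the POST body intact (the rule name, pattern, and handler page are hypothetical): 
<system.webServer>
  <rewrite>
    <rules>
      <rule name="FriendlyUrls" stopProcessing="true">
        <match url=".*" />
        <conditions>
          <!-- only rewrite requests that don't map to a real file or folder -->
          <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
          <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
        </conditions>
        <action type="Rewrite" url="/template.aspx?path={R:0}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>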
{ "language": "en", "url": "https://stackoverflow.com/questions/102846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I install CPAN modules locally without root access (DynaLoader.pm line 229 error)? It doesn't work with other modules either, but to give an example: I installed Text::CSV_XS with a CPAN setting: 
'makepl_arg' => q[PREFIX=~/lib],

When I try running a test.pl script: 
$ perl test.pl

#!/usr/bin/perl
use lib "/homes/foobar/lib/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi";
use Text::CSV_XS;
print "test";

I get 
Can't load '/homes/foobar/lib/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/auto/Text/CSV_XS/CSV_XS.so' for module Text::CSV_XS: /homes/foobar/lib/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/auto/Text/CSV_XS/CSV_XS.so: cannot open shared object file: No such file or directory at /www/common/perl/lib/5.8.2/i686-linux/DynaLoader.pm line 229. at test.pl line 6 Compilation failed in require at test.pl line 6. BEGIN failed--compilation aborted at test.pl line 6.

I traced the error back to DynaLoader.pm; it happens at this line: 
# Many dynamic extension loading problems will appear to come from
# this section of code: XYZ failed at line 123 of DynaLoader.pm.
# Often these errors are actually occurring in the initialisation
# C code of the extension XS file. Perl reports the error as being
# in this perl code simply because this was the last perl code
# it executed.

my $libref = dl_load_file($file, $module->dl_load_flags) or
    croak("Can't load '$file' for module $module: ".dl_error());

CSV_XS.so exists in the above directory 
A: When you installed the module, did you watch the output? Where did it say it installed the module? Look in lib. Do you see the next directory you expect? Look in ~/lib to see where everything ended up, to verify that you have the right directory name in your use lib statement: 
% find ~/lib -name CSV_XS.so

Once you see where it is installed, use that directory name in your use lib (or PERL5LIB or whatever). I expect you have a lib/lib in there somehow. The PREFIX is just the, well, prefix, and the installer appends other directory portions to that base path. That includes lib, man, bin, and so forth. 
A: Try this instead: 
'makepl_arg' => q[PREFIX=~/]

PREFIX sets the base for all the directories you will be installing into (bin, lib, and so forth). You may also be running into shell expansion problems with your '~'. You can try to expand it yourself: 
'makepl_arg' => q[PREFIX=/home/users/foobar]

It would also be helpful if you included the commands you used to get the error you are asking about. 
A: Personally I would suggest using local::lib. :) 
A: It looks from the error message ("at /www/common ...") like your script is a CGI or mod_perl script. The web server is probably not running as the user 'foo', under whose home directory you've installed the module - that could result in the web server being unable to read that directory. It may also be running in a "chroot jail", which would mean that the directory in which you've installed the module may not be visible to the script. In other words, just because you can see the module does not mean that the web server, and therefore your script, can do so. You should check the relevant file permissions, and if the server is chrooted, whether your module directory is mounted within the virtual file system. 
A: Does the file in question (CSV_XS.so) exist? Does it exist at the listed location? If you do: 
set |grep PERL

What is the output? Have you successfully installed other local perl modules? 
A: I strongly suggest installing your own perl in your own home directory, if you have space. 
Then you can keep everything under your control and keep your own module set, as well as escaping the admins keeping you on an older version of Perl. (Not to mention preserving yourself if they upgrade some day and leave out all the modules you are relying on.)
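To sketch the local::lib route suggested above (the ~/perl5 paths are local::lib's defaults; its documentation describes a bootstrap procedure for the case where you can't install local::lib itself as root): 
# One-time setup: let local::lib configure the environment for your shell
eval "$(perl -I$HOME/perl5/lib/perl5 -Mlocal::lib)"   # add this line to ~/.bashrc too

# Modules now install under ~/perl5 without root
perl -MCPAN -e 'install Text::CSV_XS'

# Scripts pick the modules up via PERL5LIB, so no hard-coded 'use lib' is needed
perl -e 'use Text::CSV_XS; print $Text::CSV_XS::VERSION, "\n"'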
{ "language": "en", "url": "https://stackoverflow.com/questions/102850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: JSON string to list or other usable format in asp.net 2.0 I have JSON that is coming in as a string. I need to store it as key-value pairs or something like that. I am using asp.net 2.0 and cannot use a 3rd-party DLL like Newtonsoft.Json.dll. I guess the last option will be to use regular expressions. Can anybody please help me with this? A: If you go to http://www.json.org/ and look towards the bottom of the page there are dozens of JSON libraries, most of them open source; I believe they list 8 for C#. If you cannot reference one of these libraries, I think your best bet would be to find one with a permissive license and simply add the code to your project. Another idea is to look at the diagrams, grammar, and syntax at http://www.json.org/ and just write your own parser, but regex is NOT the way to do it. If you don't know how to write a parser you could look at one of the open source JSON libraries or start with something less complicated like a good CSV parser; here is a paper that looks pretty good: http://www.boyet.com/Articles/CsvParser.html A: It is possible to deserialize JSON using JScript in C# into key/value pairs. You need to add a few references to your project. They're part of the .NET framework, you just need to add the references to your project. You'll need: * *Microsoft.JScript *Microsoft.Vsa First, the usings at the top of your class: using Microsoft.JScript; using Microsoft.JScript.Vsa; Then the Engine that will execute the script needs to be initialized somewhere in your 'Page' code-behind: VsaEngine Engine = VsaEngine.CreateEngine(); Then you just create this method and call it by passing in your JSON object: object EvalJScript(string JScript) { object result = null; try { result = Microsoft.JScript.Eval.JScriptEvaluate(JScript, Engine); } catch (Exception ex) { return ex.Message; } return result; } The type of object returned (if JSON is passed in) is a 'JSObject'. You can access its values as key/value pairs. Read the MSDN documentation for more details on this object. Here's an example of using the code: string json = "({Name:\"Dan\",Occupation:\"Developer\"})"; JSObject o = EvalJScript(json) as JSObject; string name = o["Name"] as string; // Value of 'name' will be 'Dan' A: Could you use JScript.NET? If so, it should be easy enough with eval() - then just loop through the objects returned and translate into KeyValuePair's or whatever A: You will need to use jscript.net as the code behind language, but other pages of your site should be fine to stay as C# if that's what you prefer. As mentioned in a previous comment, you will need to be aware of the security aspects and risks - only use eval if you trust the JSON you're parsing!
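As a small follow-up to the JScript approach above: once you have the JSObject, you can copy its fields into an ordinary generic dictionary using the same indexer shown in the example. This is only a sketch and assumes the field names are known in advance:

Dictionary<string, string> pairs = new Dictionary<string, string>();
string[] keys = { "Name", "Occupation" }; // known ahead of time
foreach (string key in keys)
{
    pairs[key] = o[key] as string; // o is the JSObject returned by EvalJScript
}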
{ "language": "en", "url": "https://stackoverflow.com/questions/102866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Can you use WMI to create an MSMQ message queue (PRIVATE queue)? I need to create a PRIVATE message queue on a remote machine and I have resolved to the fact that I can't do this with the .NET Framework in a straightforward manner. I can create a public message queue on a remote machine, but not a PRIVATE one. I can create a message queue (public or private) locally. I am wondering if anyone knows how to access MSMQ through WMI. Edit: I don't see anything to do it with in the MSMQ Provider. I may have to get tricky and use PSExec to log onto a remote server and execute some code. A: Yes, queue creation is simple in .NET, however you cannot create a private queue on a remote machine this way. I have been thinking about adding queue creation to the MSMQ WMI provider for some time... If you need it for a real product / customer, you can contact me and I will consider giving this feature a priority. All the best, Yoel Arnon A: A blog post about MSMQ and WMI is here: http://msmq.spaces.live.com/blog/cns!393534E869CE55B7!210.entry It says there is a provider here: http://www.msmq.biz/Blog/MSMQWmiSetup.msi It also says there is a reference here: http://www.msmq.biz/Blog/MSMQ%20WMI%20Provider%20Objects.doc Hope this helps. A: WMI can't do this out of the box. The previous answer has a somewhat obscure WMI provider, but it doesn't even seem to support queue creation. This is very simple in .NET however! I wouldn't go so far as PSExec. MessageQueue.Create A: set qinfo = CreateObject("MSMQ.MSMQQueueInfo") qinfo.PathName = ".\Private$\TestQueue" qinfo.Label = ".\Private$\TestQueue" qinfo.Journal = "1" qinfo.Create Copy the code into a text editor, save the file as .vbs and execute. A: I was wanting to create remote private queues also, but since .NET doesn't support it, we decided we will just use remote public queues instead. If we set Send and Receive permissions on the queues as desired, this should be fine. One idea for a workaround would be to write your own Windows service or web service that runs on the same machine where the queue needs to reside. You could call this service remotely through a socket or over http, and your locally-running code could create the local private queue. If you use the direct name format to reference the queue, you can Send and Receive from a remote private queue.
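For completeness, creating a local private queue from .NET - which, as noted above, does work; it is only the remote private case that fails - looks roughly like this:

using System.Messaging;

string path = @".\Private$\TestQueue";
if (!MessageQueue.Exists(path))
{
    MessageQueue queue = MessageQueue.Create(path); // local private queue
    queue.Label = "Test Queue";
}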
{ "language": "en", "url": "https://stackoverflow.com/questions/102877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I get Datetime to display in military time in oracle? I am running some queries to track down a problem with our backup logs and would like to display datetime fields in 24-hour military time. Is there a simple way to do this? I've tried googling and could find nothing. A: If you want all queries in your session to show the full datetime, then do alter session set NLS_DATE_FORMAT='DD/MM/YYYY HH24:MI:SS' at the start of your session. A: select to_char(sysdate,'DD/MM/YYYY HH24:MI:SS') from dual; Gives the time in 24-hour format. More options are described here. A: Use a to_char(field,'YYYYMMDD HH24MISS'). A good list of date formats is available here A: It's not Oracle that determines the display of the date, it's the tool you're using to run queries. What are you using to display results? Then we can point you to the correct settings hopefully.
{ "language": "en", "url": "https://stackoverflow.com/questions/102881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is anyone targeting Google Chrome yet? (Web apps, plugins) Is anyone writing applications specifically to take advantage of google chrome? Are there any enterprise users who are considering using it as the standard browser? A: I switched to Chrome and haven't looked back except for the occasional site which doesn't work properly, forcing me to load it in Firefox. All my existing web applications work fine on it, and I'm using it for primary testing on my current development project. A: I'm not actually targeting chrome, but I have added chrome to my browsers to test sites on. I've found some odd quirks in this product where some plugins cause the browser to hang, or run really slow in some environments, but they are still in beta in active development. But I definitely now make sure sites I work on render well in chrome, as well as firefox, latest versions of IE, safari, Konqueror and opera. I usually check out how it looks on lynx as well; that helps me catch "un-alternated text" in images. Yeah, I know that isn't a word, but some people will understand what I'm saying. A: Because chrome uses webkit to render HTML, you can be assured that if it works in safari, it'll work under chrome; however, its rendering engine isn't up to scratch quite yet. I think writing applications that take advantage of it is similar to writing iPhone applications; remember chrome is expected to be adopted by android to make it similar to iPhone. That way it pretty much takes advantage of all those iPhone apps. Would I install it as the browser of choice? Not yet - but I'll certainly work on valid web pages that will render across all browsers. A: Yes, I have started to pay very good attention to Google Chrome for my applications. Recent analytics show that between 6%-15% of my users are accessing my applications (it varies between 6 and 15 in different applications) on Chrome. And this number looks to be on an upward trend. Thus, I can't really ignore it for testing right now. As far as taking it as a standard goes, that's a long way off. I still have to test for IE6! :( Though, we have been planning to start using features like Gears (inbuilt in Chrome - downloadable elsewhere) once Chrome crosses the 25% mark. That's when I believe that we will be looking at Chrome to be our preferred browser. I hope that we have Chrome 1.0+ by then! ;) A: One of our major customers has outlawed Chrome because it installs on the C drive without asking. They deploy a standard image with a small C drive and large D drive so they can easily re-clone the system part of the image on C without destroying the client's personal files on D. Most software allows you to choose the install directory. Anything that violates this is disallowed, and they're a big enough company to have some weight with most vendors. A: We have enough headaches trying to support * *Firefox *Two versions of IE which have their own iffy bugs *Safari I'm not sure why we continue to support Safari. Most of our users (corporate) use IE6 or IE7. We try to make sure that things work in both of those. A: Maybe not for programming purposes, but Chrome w/ Google Reader makes for the most powerful RSS reader. It can handle up to 1500 feeds w/ performance still ok, managing subscriptions still functioning. A: I'm using it on my work machine, but that's about it. It's been stable for me, and I like the barebones UI. I'll still switch to Firefox for the web developer extensions however. A: I'm liking some of Google Chrome - the Start page with your 9 most recent is the winner for me.
The interface takes a little getting used to, but the speed is impressive, especially with Gmail. However, it glitches with Java, which rules it out for serious work at the moment. I use Firefox mostly and have Chrome for the "other" websites at work. A: I'm considering using GWT on an intranet project and considering suggesting to the users that they use Chrome to take advantage of the enhanced Javascript performance. Any AJAX-heavy app would be a great candidate to target Chrome. A: At my company, we're not targeting it, but we're definitely paying attention to it. My boss is using it as his primary browser, and I have implemented browser detection for it in our scripts in case we ever need to target it for some reason. A: Chrome has the .png opacity bug where the transparent parts of the .png are a solid color if you try to transition the opacity from 0 to 1. In IE7 the opaque parts are black, and in Chrome, they are white. Today, I decided to go ahead and account for this bug in my JavaScript. I don't really test sites on Chrome that often, but I am actually using it for almost all of my browsing. A: I will target Chrome as soon as a stable Linux and OSX client is available. A: Targeting Chrome/Chromium right now is, I think, like targeting the Konqueror web browser. It will get popular, but you should wait for a more stable beta, and/or a Linux and OS X client. A: My website statistics show 3.xx% of visitors using Chrome, which arrived just a few weeks back. Opera, which has been around for several years, is only at 4.xx%. You can easily see the rate at which Chrome is picking up. You can see how easily Google takes over all areas of your computing world and personal world too. A: Since Chrome uses Webkit, it has the same rendering engine and DOM support as Safari (not necessarily the same revision of Webkit though). By testing in Safari, you can generally get by without worrying about Chrome. Any differences you find are probably just bugs that you should file on instead of work around. However, because Chrome uses a different JS engine, there may be a few incompatibilities with Safari. So, if you're doing anything with JS, you might as well fire up Chrome and see if there's anything obviously wrong. Generally though, you don't target browsers, you target rendering engines (with their associated DOM support and JS engines). A: No. * *Why help Google further build an evil empire? In this particular case it is so obvious that they do not care about users but are only obsessed with gathering usage info. *It's not a major player yet A: I am using Google Chrome; so far all the web apps I have work fine in it with no modifications.
{ "language": "en", "url": "https://stackoverflow.com/questions/102894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Build new computer using Scripts I would like to automate the process of setting up a new PC; this would include downloading and installing the latest windows and office updates, installing software from a network share (some software will require a restart, so the script would need to be able to log in and continue), adding the PC to a domain and setting up local user accounts. Is this possible and what would be the best scripting language to achieve this? A: Check out nLite. It allows you to pre-configure many options, slipstream updates and service packs, etc. A: The standard method in enterprise IT is the Microsoft Deployment Toolkit (MDT). Even if another OS deployment technique (SCCM, BigFix, SpecOps...) is used, the Windows images are often developed in MDT. There is no better guide to getting started than Johan Arwidmark's book series "Deployment Fundamentals". There is also material at Windows Noob. You could integrate Chocolatey, BoxStarter or Ninite for app management after the OS is deployed.
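If you do end up scripting parts of this yourself rather than using MDT or nLite, a rough batch-file sketch might look like the following. The server, domain, account, and package names are all made up, the silent-install switches vary per package, and netdom is only available with the support tools/RSAT installed:

rem map the network share holding the installers
net use Z: \\fileserver\installers

rem silent install (switches differ per package)
Z:\office\setup.exe /quiet /norestart

rem schedule a second script to continue after the reboot
reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce /v FinishSetup /d "Z:\finish-setup.cmd" /f

rem join the domain and create a local account
netdom join %COMPUTERNAME% /domain:example.local /userd:Administrator /passwordd:*
net user localuser P@ssw0rd /add

shutdown /r /t 0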
{ "language": "en", "url": "https://stackoverflow.com/questions/102900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is a good CI build-process What constitutes a good CI build-process? We use CI, but is deployment to production even a realistic CI goal when you have dependencies on several services that should be deployed too, and other apps may depend on these as well? Is a good CI build process good enough when it's automated to QA and manual from there? A: CI is not intended as a deployment mechanism. It is good to have your CI execute any automated deployment to a QA/Test server, to ensure those aspects of your build work, but I would not use a CI system like Cruise Control or Bamboo as the means of deployment. CI is for building the codebase periodically to automate execution of automated tests, verification of the codebase via static analysis and other checks of that nature. A: Be sure you understand the idea behind a CI build. CI stands for Continuous Integration and CI builds are really intended to be throw-away builds that are performed when a developer checks code in to the source control system (or at some specified interval) to ensure that the newest changes do not break the code base (hence the idea of continuously integrating the changes to the code base). To that end, the technology used for the actual build server process is largely irrelevant compared to what actually happens during the build. As @pdavis mentioned, the CI build should compile the code base, execute some code analysis (FxCop, StyleCop, Lint, etc.), execute unit tests and code coverage, and execute any other custom analysis you want performed that should impact the concept of a "successful" or "failed" build. Having a CI build automatically deploy to an environment really doesn't fall under the control of a build server. That being said, you can always create a separate project that runs on the build server that handles the deployment when it detects certain conditions (such as a build completing successfully), but that should always be done as a completely independent thing. A: I am starting on a new project at work that I am really looking forward to. We are still in the initial design stage and have just recently completed the Logical System Architecture. We have ordered new servers for the testing and staging environments and are setting up a Continuous Integration (CI) build system based on Cruise Control (http://cruisecontrol.sourceforge.net/) and MSBuild (http://msdn2.microsoft.com/en-us/library/wea2sca5.aspx) which is basically an improved port of ANT. It appears that Visual Studio 2005 project and solution files are all now in MSBuild format. Cruise Control will be automatically pulling the source from Visual Source Safe (ok, it isn't Subversion but we can deal), compiling it, and then running it through fxCop (http://www.gotdotnet.com/Team/FxCop/), nUnit (http://www.nunit.org/), nCover (http://ncover.org/site/), and last but not least Simian (http://www.redhillconsulting.com.au/products/simian/). Cruise Control has a pretty good website interface for displaying all of the compiled results from the various tools and can even display code changes from one build to the next. It also keeps track of all builds in a build history. I'm looking forward to the test driven development and think that this type of approach combined with nUnit/nCover should give us a pretty good idea, before we roll out changes, that we haven't broken anything. There are also plans to incorporate some type of automated user interface testing once we are far enough along in the project.
Depending on the tool, this should be just a matter of installing the tool on the build server and calling it from Cruise Control. Sweet. A: A good CI process will have full or nearly-full unit test coverage. Unit tests test classes and methods, vs. integration tests, which test multiple parts of the system. When you set up your CI builds, have them automate the unit tests. That way, the CI builds can run multiple times per day. We have ours set to run every 2 hours. You can have longer running builds that run once per day. These can use other services and run integration tests. A: Well "it depends" :) We use our CI system to: * *build & unit test *deploy to a single box, run integration tests and code analysis *deploy to a lab environment *run acceptance tests in a prod-like system *drop builds that pass to a code drop for prod deployment This is for a greenfield project of about a dozen services and databases deployed to 20+ servers, that also had dependencies on half a dozen other 'external' services. Using a CI tool to deploy your product to a production environment as a realistic goal? again... "it depends" Why would you want to do this? * *if you have the process you can roll changes (and roll back) faster and more often *less chance for human error *you can test the same deployment strategy in a test environment before going to production and catch issues earlier Some technical things you have to address before you can answer this: * *what are the uptime requirements for your system -- Are you allowed to have downtime or does it need to be up 24/7? *do you have change control processes in place that require human intervention/approval? *is your deployment robust enough for any component to roll back to a known-good state if a deployment fails? *is your system designed to handle different versions of services or clients in case one or several component deployments fail (and you have the above rollback to last known good)? *does the process have the smarts to handle a partial deployment where a component cannot handle mixed versions of its dependencies/clients? *how are you handling database deployment/upgrades? *do you have monitoring in place so you know when something goes wrong? Here are a couple of recent related links about automation and building the tools you need. When it comes down to it, the more complex your system the more difficult it is to automate everything, but that does not mean it is not a worthy goal; it just takes a lot more effort and willpower to get it done -- everything from knowing the difficulties you're going to face, the problems you have to account for (failure will happen), the political challenges of building infrastructure (vs. more product features). Now here's the big secret... the technical challenges are challenging but not impossible... the political challenges may be insurmountable. Everything about this costs money, whether it's dev time or buying 3rd party solutions. So really, can you build the $1K, $10K, $100K, or $1M solution? Whatever solution you go for, make sure the automation is robust first, complete second... i.e. make sure you have as robust a solution as you can for getting deployment to a test environment rather than a fragile solution that deploys to production. A: I was watching a ThoughtWorks presentation (creators of Cruise Control) and they actually addressed this issue. Their answer is that NO deployment is too complex to test. Why? Because otherwise, your customers become your testers, which is exactly where you don't want to be.
If you have a complex deployment structure, set up a virtualization server. Have it pretend to be all the systems you need to talk to. They can always start in a known good state, because you can reset to a clean image. To answer your initial question, a good process is one which enables communication between the repository and the developers. If the repository is in a bad state (non-compiling code, failed tests, etc.), the developers know about it as soon as possible, so that they can correct it. A: The later a bug is discovered, the costlier it is to fix. So bugs should be discovered as early as possible. This is the motivation behind CI. A good CI should ensure catching as many bugs as possible. The whole application comprises code (often in multiple languages), database schema, deployment files, etc. Errors in any of these can cause bugs - so the CI should try to exercise as many of them as possible. CI does not replace a proper QA discipline. Also, CI need not be very comprehensive on day one of the project. One can start with a simple CI process that does basic compilation & unit testing initially. As you discover more classes of bugs in QA, you should adapt the CI process to try to catch future occurrences of those bugs. It can also involve static code-analysis checks, so that you can implement consistent coding and design ideals across the codebase.
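To make the Cruise Control + MSBuild setup discussed above a little more tangible, a stripped-down CruiseControl.NET project definition might look roughly like this (the paths and names are invented; unit tests, FxCop, and coverage tasks would hang off the same task list):

<cruisecontrol>
  <project name="MyProduct">
    <triggers>
      <intervalTrigger seconds="120" />
    </triggers>
    <tasks>
      <msbuild>
        <executable>C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable>
        <projectFile>src\MyProduct.sln</projectFile>
        <targets>Build</targets>
      </msbuild>
    </tasks>
  </project>
</cruisecontrol>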
{ "language": "en", "url": "https://stackoverflow.com/questions/102902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Caching Data Objects when using Repository/Service Pattern and MVC I have an MVC-based site, which is using a Repository/Service pattern for data access. The Services are written to be used in a majority of applications (console, winform, and web). Currently, the controllers communicate directly to the services. This has limited the ability to apply proper caching. I see my options as the following: * *Write a wrapper for the web app, which implements the IWhatEverService which does caching. *Apply caching in each controller by caching the ViewData for each Action. *Don't worry about data caching and just implement OutputCaching for each Action. I can see the pros and cons of each. What is/should be the best practice for caching with Repository/Service? A: Based on the answer provided by Brendan, I defined a generic cached repository for the special case of relatively small lists that are rarely changed, but heavily read. 1. The interface public interface IRepository<T> : IRepository where T : class { IQueryable<T> AllNoTracking { get; } IQueryable<T> All { get; } DbSet<T> GetSet { get; } T Get(int id); void Insert(T entity); void BulkInsert(IEnumerable<T> entities); void Delete(T entity); void RemoveRange(IEnumerable<T> range); void Update(T entity); } 2. Normal/non-cached repository public class Repository<T> : IRepository<T> where T : class, new() { private readonly IEfDbContext _context; public Repository(IEfDbContext context) { _context = context; } public IQueryable<T> All => _context.Set<T>().AsQueryable(); public IQueryable<T> AllNoTracking => _context.Set<T>().AsNoTracking(); public IQueryable AllNoTrackingGeneric(Type t) { return _context.GetSet(t).AsNoTracking(); } public DbSet<T> GetSet => _context.Set<T>(); public DbSet GetSetNonGeneric(Type t) { return _context.GetSet(t); } public IQueryable AllNonGeneric(Type t) { return _context.GetSet(t); } public T Get(int id) { return _context.Set<T>().Find(id); } public void Delete(T entity) { if (_context.Entry(entity).State == EntityState.Detached) _context.Set<T>().Attach(entity); _context.Set<T>().Remove(entity); } public void RemoveRange(IEnumerable<T> range) { _context.Set<T>().RemoveRange(range); } public void Insert(T entity) { _context.Set<T>().Add(entity); } public void BulkInsert(IEnumerable<T> entities) { _context.BulkInsert(entities); } public void Update(T entity) { _context.Set<T>().Attach(entity); _context.Entry(entity).State = EntityState.Modified; } } 3.
Generic cached repository, based on the non-cached one public interface ICachedRepository<T> where T : class, new() { string CacheKey { get; } void InvalidateCache(); void InsertIntoCache(T item); } public class CachedRepository<T> : ICachedRepository<T>, IRepository<T> where T : class, new() { private readonly IRepository<T> _modelRepository; private static readonly object CacheLockObject = new object(); private IList<T> ThreadSafeCacheAccessAction(Action<IList<T>> action = null) { // refresh cache if necessary var list = HttpRuntime.Cache[CacheKey] as IList<T>; if (list == null) { lock (CacheLockObject) { list = HttpRuntime.Cache[CacheKey] as IList<T>; if (list == null) { list = _modelRepository.All.ToList(); //TODO: remove hardcoding HttpRuntime.Cache.Insert(CacheKey, list, null, DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration); } } } // execute custom action, if one is required if (action != null) { lock (CacheLockObject) { action(list); } } return list; } public IList<T> GetCachedItems() { IList<T> ret = ThreadSafeCacheAccessAction(); return ret; } /// <summary> /// returns value without using cache, to allow Queryable usage /// </summary> public IQueryable<T> All => _modelRepository.All; public IQueryable<T> AllNoTracking { get { var cachedItems = GetCachedItems(); return cachedItems.AsQueryable(); } } // other methods come here public void BulkInsert(IEnumerable<T> entities) { var enumerable = entities as IList<T> ?? entities.ToList(); _modelRepository.BulkInsert(enumerable); // also inserting items within the cache ThreadSafeCacheAccessAction((list) => { foreach (var item in enumerable) list.Add(item); }); } public void Delete(T entity) { _modelRepository.Delete(entity); ThreadSafeCacheAccessAction((list) => { list.Remove(entity); }); } } Using a DI framework (I am using Ninject), one can easily define if a repository should be cached or not: // IRepository<T> should be solved using Repository<T>, by default kernel.Bind(typeof(IRepository<>)).To(typeof(Repository<>)); // IRepository<T> must be solved to Repository<T>, if used in CachedRepository<T> kernel.Bind(typeof(IRepository<>)).To(typeof(Repository<>)).WhenInjectedInto(typeof(CachedRepository<>)); // explicit repositories using caching var cachedTypes = new List<Type> { typeof(ImportingSystem), typeof(ImportingSystemLoadInfo), typeof(Environment) }; cachedTypes.ForEach(type => { // allow access as normal repository kernel .Bind(typeof(IRepository<>).MakeGenericType(type)) .To(typeof(CachedRepository<>).MakeGenericType(type)); // allow access as a cached repository kernel .Bind(typeof(ICachedRepository<>).MakeGenericType(type)) .To(typeof(CachedRepository<>).MakeGenericType(type)); }); So, reading from cached repositories is done without knowing about the caching. However, changing them requires injecting ICachedRepository<> and calling the appropriate methods. A: Steve Smith did two great blog posts which demonstrate how to use his CachedRepository pattern to achieve the result you're looking for. Introducing the CachedRepository Pattern Building a CachedRepository via Strategy Pattern In these two posts he shows you how to set up this pattern and also explains why it is useful. By using this pattern you get caching without your existing code seeing any of the caching logic. Essentially you use the cached repository as if it were any other repository.
public class CachedAlbumRepository : IAlbumRepository { private readonly IAlbumRepository _albumRepository; public CachedAlbumRepository(IAlbumRepository albumRepository) { _albumRepository = albumRepository; } private static readonly object CacheLockObject = new object(); public IEnumerable<Album> GetTopSellingAlbums(int count) { Debug.Print("CachedAlbumRepository:GetTopSellingAlbums"); string cacheKey = "TopSellingAlbums-" + count; var result = HttpRuntime.Cache[cacheKey] as List<Album>; if (result == null) { lock (CacheLockObject) { result = HttpRuntime.Cache[cacheKey] as List<Album>; if (result == null) { result = _albumRepository.GetTopSellingAlbums(count).ToList(); HttpRuntime.Cache.Insert(cacheKey, result, null, DateTime.Now.AddSeconds(60), TimeSpan.Zero); } } } return result; } } A: The easiest way would be to handle caching in your repository provider. That way you don't have to change out any code in the rest of your app; it will be oblivious to the fact that the data was served out of a cache rather than the repository. So, I'd create an interface that the controllers use to communicate with the backend, and in the implementation of this I'd add the caching logic. Wrap it all up in a nice bow with some DI, and your app will be set for easy testing.
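To tie the CachedRepository example together: wiring the decorator up by hand (without a DI container) is just nesting constructors. SqlAlbumRepository here is a made-up concrete repository standing in for whatever actually talks to the database:

IAlbumRepository albums = new CachedAlbumRepository(new SqlAlbumRepository(connectionString)); // hypothetical inner repository

IEnumerable<Album> top = albums.GetTopSellingAlbums(10); // first call hits the database
top = albums.GetTopSellingAlbums(10);                    // within 60 seconds, served from HttpRuntime.Cache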
{ "language": "en", "url": "https://stackoverflow.com/questions/102913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: How is AJAX implemented, and how does it help web dev? From http://en.wikipedia.org/wiki/AJAX, I get a fairly good grasp of what AJAX is. However, it looks like in order to learn it, I'd have to delve into multiple technologies at the same time to get any benefit out of it. So two questions: * *What are resources that can help me understand/use AJAX? *What sort of website would benefit from AJAX? A: If you aren't interested in the nitty gritty, you could use a higher-level library like jQuery or Prototype to create the underlying Javascript for you. The main benefit is a vastly more responsive user interface for web-based applications. A: There are many libraries out there that can help you get benefit out of AJAX without learning about implementing callbacks, etc. Are you using .NET? Look at http://ajax.asp.net. If you're not, then take a look at tools like qcodo for PHP, and learn about prototype.js, jquery, etc. As far as websites that would benefit: Every web application ever. :) Anything you interact with by exchanging information, not just by clicking a link and reading an article. A: Every website can benefit from AJAX, but in my opinion the biggest benefit to AJAX comes in data entry sections - forms basically. I have done entire sites where the front end - the part the user sees - had almost no AJAX functionality in it. All the AJAX stuff was in the administration control panel for assisting in (correct!) data entry. There is nothing worse than submitting a form and getting back an error; using AJAX you can pretty much prevent this for everything but file uploads. A: I find it easiest to just stay away from all the frameworks and other helpers and just do basic Javascript. This not only lets you understand what's going on under the covers, it also lets you do it in the simplest way possible. There's really not much to it. Use the JS XML DOM objects to create an XML document client-side. Send it to the server with XMLHTTPRequest, and then process the result, again using the JS XML DOM objects. Start with something simple. Just try sending one piece of information to the server, and getting a small piece of information back. A: The Mozilla documentation is good. Sites that benefit from it the most are ones that behave almost like a desktop application and need high interactivity. You can usually improve usability on almost any site by using it, however. A: Ajax should be thought of as a means to alter some content on a page without reloading the entire page. So when do you need to do this? Really only when you have some user interactions or form information that you want to keep intact while you change some content on the page.
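To illustrate that last suggestion, here is about the smallest raw-JavaScript round trip you can write - no framework, and the URL and element id are placeholders:

function ajaxGet(url, callback) {
    // older IE needs the ActiveX fallback
    var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                    : new ActiveXObject("Microsoft.XMLHTTP");
    xhr.onreadystatechange = function () {
        if (xhr.readyState == 4 && xhr.status == 200) {
            callback(xhr.responseText); // hand the server's reply to the caller
        }
    };
    xhr.open("GET", url, true); // true = asynchronous
    xhr.send(null);
}

ajaxGet("/time.txt", function (text) {
    document.getElementById("result").innerHTML = text;
});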
{ "language": "en", "url": "https://stackoverflow.com/questions/102929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Generate a Sandcastle help file for a website without the "Xml Documentation File" option I am trying to generate a Sandcastle help file for a website. In the properties window for the project, there aren't any options for creating the XML Documentation File required for Sandcastle. The Build tab in the property pages only contains options for: Start Action, Build Solution Action, and Accessibility validation. I don't have any options for Output, or XML documentation file, like my other projects have. The website I'm working with does not have an actual .proj file, which could be the problem. If this is the problem, what is the best way of creating one for a project that is under source control and being worked on by many people with minimal disruption? This is using Visual Studio 2005 Professional. A: The problem with websites in VS2k5 is that, when they get compiled, the resulting dlls are a mess. No namespaces, weird names, etc. If you truly want to generate a Sandcastle Help File, look at converting your website into a web application. You can definitely generate source code docs for that. A: I haven't tried it yet, but you might want to try the following Documenting Web Sites / Projects from Eric Woodruff's site. It gives the specifics on how it can be done. Update: I did try it and it works for regular websites. The only issue I can see is that the websites don't have namespaces. So when I run it I get topics in a FolderName_WebPage Class format without any logical grouping; it is alphabetical by folder and page name. Once you've got the content created, you can edit the help file using a help compiler/builder and group the topics as needed.
{ "language": "en", "url": "https://stackoverflow.com/questions/102940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: C# VB.NET Conversion Which tools do you use to convert between C# and VB.NET? A: Check out Code Converter by Telerik. A: This has been asked so many times. Like here: What is the best C# to VB.net converter? A: Reflector. It isn't the best for the task, but it works. A: Tangible has C++ <> C#, Java <> C#, VB <> C# etc. I'm not sure how to quantify which is "best", but their support is responsive if there's a bug. A: Use the snippet converter for converting code: snippet converter
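For a feel of what these converters actually do, here is the same trivial method in both languages:

// C#
public int Add(int a, int b)
{
    return a + b;
}

' VB.NET
Public Function Add(ByVal a As Integer, ByVal b As Integer) As Integer
    Return a + b
End Function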
{ "language": "en", "url": "https://stackoverflow.com/questions/102956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Calculate total sum of all numbers in c:forEach loop I have a Java bean like this: class Person { int age; String name; } I'd like to iterate over a collection of these beans in a JSP, showing each person in an HTML table row, and in the last row of the table I'd like to show the total of all the ages. The code to generate the table rows will look something like this: <c:forEach var="person" items="${personList}"> <tr><td>${person.name}<td><td>${person.age}</td></tr> </c:forEach> However, I'm struggling to find a way to calculate the age total that will be shown in the final row without resorting to scriptlet code. Any suggestions? A: Are you trying to add up all the ages? You could calculate it in your controller and only display the result in the jsp. You can write a custom tag to do the calculation. You can calculate it in the jsp using jstl like this. <c:set var="ageTotal" value="${0}" /> <c:forEach var="person" items="${personList}"> <c:set var="ageTotal" value="${ageTotal + person.age}" /> <tr><td>${person.name}<td><td>${person.age}</td></tr> </c:forEach> ${ageTotal} A: Note: I tried combining answers to make a comprehensive list. I mentioned names where appropriate to give credit where it is due. There are many ways to solve this problem, with pros/cons associated with each: Pure JSP Solution As ScArcher2 mentioned above, a very easy and simple solution to the problem is to implement it directly in the JSP like so: <c:set var="ageTotal" value="${0}" /> <c:forEach var="person" items="${personList}"> <c:set var="ageTotal" value="${ageTotal + person.age}" /> <tr><td>${person.name}<td><td>${person.age}</td></tr> </c:forEach> ${ageTotal} The problem with this solution is that the JSP becomes confusing to the point where you might as well have introduced scriptlets. If you anticipate that everyone looking at the page will be able to follow the rudimentary logic present, it is a fine choice. Pure EL solution If you're already on EL 3.0 (Java EE 7 / Servlet 3.1), use the new support for streams and lambdas: <c:forEach var="person" items="${personList}"> <tr><td>${person.name}<td><td>${person.age}</td></tr> </c:forEach> ${personList.stream().map(person -> person.age).sum()} JSP EL Functions Another way to output the total without introducing scriptlet code into your JSP is to use an EL function. EL functions allow you to call a public static method in a public class. For example, if you would like to iterate over your collection and sum the values you could define a public static method called sum(List people) in a public class, perhaps called PersonUtils. In your tld file you would place the following declaration: <function> <name>sum</name> <function-class>com.example.PersonUtils</function-class> <function-signature>int sum(java.util.List people)</function-signature> </function> Within your JSP you would write: <%@ taglib prefix="f" uri="/your-tld-uri"%> ... <c:out value="${f:sum(personList)}"/> JSP EL Functions have a few benefits. They allow you to use existing Java methods without the need to code to a specific UI (Custom Tag Libraries). They are also compact and will not confuse a non-programming oriented person. Custom Tag Yet another option is to roll your own custom tag. This option will involve the most setup but will give you what I think you are essentially looking for: absolutely no scriptlets.
A nice tutorial for using simple custom tags can be found at http://java.sun.com/j2ee/tutorial/1_3-fcs/doc/JSPTags5.html#74701 The steps involved include subclassing TagSupport: public class PersonSumTag extends TagSupport { private List personList; public List getPersonList(){ return personList; } public void setPersonList(List personList){ this.personList = personList; } public int doStartTag() throws JspException { try { int sum = 0; for (Iterator it = personList.iterator(); it.hasNext(); ) { Person p = (Person)it.next(); sum+=p.getAge(); } pageContext.getOut().print(""+sum); } catch (Exception ex) { throw new JspTagException("SimpleTag: " + ex.getMessage()); } return SKIP_BODY; } public int doEndTag() { return EVAL_PAGE; } } Define the tag in a tld file: <tag> <name>personSum</name> <tag-class>example.PersonSumTag</tag-class> <body-content>empty</body-content> ... <attribute> <name>personList</name> <required>true</required> <rtexprvalue>true</rtexprvalue> <type>java.util.List</type> </attribute> ... </tag> Declare the taglib on the top of your JSP: <%@ taglib uri="/you-taglib-uri" prefix="p" %> and use the tag: <c:forEach var="person" items="${personList}"> <tr><td>${person.name}<td><td>${person.age}</td></tr> </c:forEach> <p:personSum personList="${personList}"/> Display Tag As zmf mentioned earlier, you could also use the display tag, although you will need to include the appropriate libraries: http://displaytag.sourceforge.net/11/tut_basic.html A: Check out display tag. http://displaytag.sourceforge.net/11/tut_basic.html A: ScArcher2 has the simplest solution. If you wanted something as compact as possible, you could create a tag library with a "sum" function in it. Something like: class MySum { public double sum(List list) {...} } In your TLD: <function> <name>sum</name> <function-class>my.MySum</function-class> <function-signature>double sum(List)</function-signature> </function> In your JSP, you'd have something like: <%@ taglib uri="/myfunc" prefix="f" %> ${f:sum(personList)} A: It's a bit hacky, but in your controller code you could just create a dummy Person object with the total in it! How are you retrieving your objects? HTML lets you set a <TFOOT> element which will sit at the bottom of any data you have, therefore you could just set the total separately from the Person objects and output it on the page as-is without any computation on the JSP page. A: you can iterate over a collection using JSTL according to the following <c:forEach items="${numList}" var="item"> ${item} </c:forEach> if it is a map you can do the following <c:forEach items="${numMap}" var="entry"> ${entry.key},${entry.value}<br/> </c:forEach> A: Calculating totals or other summaries in the controller -- not in the JSP -- is really strongly preferable. Use Java code & an MVC approach, e.g. the Spring MVC framework, instead of trying to do too much in JSP or JSTL; doing significant calculation in these languages is weak & slow, and makes your JSP pages much less clear. Example: class PersonList_Controller implements Controller { ... protected void renderModel (List<Person> items, Map model) { int totalAge = 0; for (Person person : items) { totalAge += person.getAge(); } model.put("items", items); model.put("totalAge", totalAge); } } Design decision-wise -- anywhere a total is required, it could conceivably next month be extended to require an average, a median, a standard deviation. JSTL calculations & summarization scarcely hold together in just getting a total.
Are you really going to want to meet any further summary requirements in JSTL? I believe the answer is NO -- and that therefore the correct design decision is to calculate in the Controller, as it is equally or more simple, and definitely more extensible to plausible future requirements.
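Filling in the body of the sum EL function referenced in the comprehensive answer above - a sketch that assumes Person exposes a getAge() accessor:

public class PersonUtils {
    // Must be public static to be callable as a JSP EL function
    public static int sum(java.util.List people) {
        int total = 0;
        for (java.util.Iterator it = people.iterator(); it.hasNext(); ) {
            Person p = (Person) it.next();
            total += p.getAge();
        }
        return total;
    }
}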
{ "language": "en", "url": "https://stackoverflow.com/questions/102964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Is IIS7 migration a piece of cake I wish to migrate a website to the Windows 2008 platform; are there any obvious pitfalls I should be aware of? The code base is C# 3.5, ASP.NET with MS AJAX. A: I googled a bit and found this link: http://weblogs.asp.net/steveschofield/archive/2008/09/04/iis6-to-iis7-migration-tips-tricks.aspx The biggest issue I find is that 3rd-party components need to have 64-bit versions ready to get the most benefit. A: I haven't had any experience with a migrated application not working properly. I've only done a few, but we've tested a number here at work, and they all run great under IIS7. The only gotcha is that the .NET "Managed Pipeline Mode" is set to "Integrated" by default, which caused problems in some of our applications. Either setting it to "Classic" on your app pool, or switching your application to use the "Classic .NET" app pool should resolve the problem. For some more information about the new pipeline, read about it here. Oh - and +1 on the wacked-out interface. I want my old IIS6 interface back! A: Don't let the wacked user interface put you off (but it will drive you dilly)
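If you do hit the Integrated-pipeline issue mentioned above, moving an application onto the classic pipeline can also be done from the command line with appcmd (the site and pool names will differ on your server):

%windir%\system32\inetsrv\appcmd set app "Default Web Site/MyApp" /applicationPool:"Classic .NET AppPool"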
{ "language": "en", "url": "https://stackoverflow.com/questions/102980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Should I store a database ID field in ViewState? I need to retrieve a record from a database, display it on a web page (I'm using ASP.NET) but store the ID (primary key) from that record somewhere so I can go back to the database later with that ID (perhaps to do an update). I know there are probably a few ways to do this, such as storing the ID in ViewState or a hidden field, but what is the best method and what are the reasons I might choose this method over any others? A: It depends. Do you care if anyone sees the record id? If you do then both hidden fields and viewstate are not suitable; you need to store it in session state, or encrypt viewstate. Do you care if someone submits the form with a bogus id? If you do then you can't use a hidden field (and you need to look at CSRF protection as a bonus) Do you want it unchangeable but don't care about it being open to viewing (with some work)? Use viewstate and set enableViewStateMac="true" on your page (or globally) Want it hidden and protected but can't use session state? Encrypt your viewstate by setting the following web.config entries <pages enableViewState="true" enableViewStateMac="true" /> <machineKey ... validation="3DES" /> A: Do you want the end user to know the ID? For example if the id value is a standard 1,1 seed from the database I could look at the number and see how many customers you have. If you encrypt the value (as the viewstate can) I would find it much harder to decipher the key (but not impossible). The alternative is to store it in the session; this will put a (very small, if it's just an integer) performance hit on your application but mean that I as a user never see that primary key. It also exposes the object to other parts of your application, that you may or may not want it to be exposed to (session objects remain until cleared, a set time (like 5 mins) passes or the browser window is closed - whichever happens sooner). View state values cause extra load on the client after every post back, because the viewstate not only saves objects for the page, but remembers objects if you use the back button. That means after every post back the viewstate gets slightly bigger and harder to use. They will only exist on the page until the browser goes to another page. Whenever I store an ID in the page like this, I always create a property public int CustomerID { get { return (int)ViewState["CustomerID"]; } set { ViewState["CustomerID"] = value; } } or Public Property CustomerID() As Integer Get Return ViewState("CustomerID") End Get Set(ByVal value As Integer) ViewState("CustomerID") = value End Set End Property That way if you decide to change it from Viewstate to a session variable or a hidden form field, it's just a case of changing it in the property reference; the rest of the page can access the variable using "Page.CustomerID". A: ViewState is an option. It is only valid for the page that you are on. It does not carry across requests to other resources like the Session object. Hidden fields work too, but you are leaking a little bit of information about your application to anyone smart enough to view the source of your page. You could also store your entire record in ViewState and maybe avoid another round trip to the server. A: I personally am very leery about putting anything in the session. Too many times our worker processes have cycled and we lost our session state. As you described your problem, I would put it in a hidden field or in the viewstate of the page.
Also, when determining where to put data like this, always look at the scope of the data. Is it scoped to a single page, or to the entire session? If the answer is 'session' for us, we put it in a cookie. (Disclaimer: We write intranet apps where we know cookies are enabled.) A: If it's a simple id I would choose to pass it in the querystring; that way you do not need to do postbacks and the page is more accessible for users and search engines. A: Session["MyId"]=myval; It would be a little safer and essentially offers the same mechanics as putting it in the viewstate. A: I tend to stick things like that in hidden fields - just do a little <asp:label runat=server id=lblThingID visible=false />
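To underline the point made above that the property wrapper is the only thing that changes, here is the same property backed by Session instead of ViewState (a sketch):

public int CustomerID
{
    get { return (int)Session["CustomerID"]; }
    set { Session["CustomerID"] = value; }
}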
{ "language": "en", "url": "https://stackoverflow.com/questions/103000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: An issue possibly related to Cursor/Join Here is my situation: Table one contains a set of data that uses an id for a unique identifier. This table has a one-to-many relationship with about 6 other tables, such that, given Table 1 with Id of 001: Table 2 might have 3 rows with foreign key: 001 Table 3 might have 12 rows with foreign key: 001 Table 4 might have 0 rows with foreign key: 001 Table 5 might have 28 rows with foreign key: 001 I need to write a report that lists all of the rows from Table 1 for a specified time frame followed by all of the data contained in the handful of tables that reference it. My current approach in pseudo code would look like this: select * from table 1 foreach(result) { print result; select * from table 2 where id = result.id; foreach(result2) { print result2; } select * from table 3 where id = result.id foreach(result3) { print result3; } //continued for each table } This means that the single report can run in the neighborhood of 1000 queries. I know this is excessive; however, my sql-fu is a little weak and I could use some help. A: LEFT OUTER JOIN Tables2-N on Table1 SELECT Table1.*, Table2.*, Table3.*, Table4.*, Table5.* FROM Table1 LEFT OUTER JOIN Table2 ON Table1.ID = Table2.ID LEFT OUTER JOIN Table3 ON Table1.ID = Table3.ID LEFT OUTER JOIN Table4 ON Table1.ID = Table4.ID LEFT OUTER JOIN Table5 ON Table1.ID = Table5.ID WHERE (CRITERIA) A: Join doesn't do it for me. I hate having to de-tangle the data on the client side. All those nulls from left-joining. Here's a set-based solution that doesn't use Joins. DECLARE @LocalCollection TABLE (theKey int) -- declare the table variable first INSERT INTO @LocalCollection (theKey) SELECT id FROM Table1 WHERE ... SELECT * FROM Table1 WHERE id in (SELECT theKey FROM @LocalCollection) SELECT * FROM Table2 WHERE id in (SELECT theKey FROM @LocalCollection) SELECT * FROM Table3 WHERE id in (SELECT theKey FROM @LocalCollection) SELECT * FROM Table4 WHERE id in (SELECT theKey FROM @LocalCollection) SELECT * FROM Table5 WHERE id in (SELECT theKey FROM @LocalCollection)
A: Join all of the tables together. select * from table_1 left join table_2 using(id) left join table_3 using(id); Then, you'll want to roll up the columns in code to format your report as you see fit. A: What I would do is open up cursors on the following queries: SELECT * from table1 order by id SELECT * from table1 r, table2 t where t.table1_id = r.id order by r.id SELECT * from table1 r, table3 t where t.table1_id = r.id order by r.id And then I would walk those cursors in parallel, printing your results. You can do this because all appear in the same order. (Note that I would suggest that while the primary ID for table1 might be named id, it won't have that name in the other tables.) A: Do all the tables have the same format? If not, then if you have to have a report that can display the n different types of rows. If you are only interested in the same columns then it is easier. Most databases have some form of dynamic SQL. In that case you can do the following: create temporary table from select * from table1 where rows within time frame x integer sql varchar(something) x = 1 while x <= numresults { sql = 'SELECT * from table' + CAST(X as varchar) + ' where id in (select id from temporary table' execute sql x = x + 1 } But I mean basically here you are running one query on your main table to get the rows that you need, then running one query for each sub table to get rows that match your main table. If the report requires the same 2 or 3 columns for each table you could change the select * from tablex to be an insert into and get a single result set at the end...
{ "language": "en", "url": "https://stackoverflow.com/questions/103005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What's the easiest way to convert latitude and longitude to double values I've got a CSV file containing latitude and longitude values, such as: "25°36'55.57""E","45°39'12.52""N" Anyone have a quick and simple piece of C# code to convert this to double values? Thanks A: Thanks for all the quick answers. Based on the answer by amdfan, I put this code together that does the job in C#. /// <summary>The regular expression parser used to parse the lat/long</summary> private static Regex Parser = new Regex("^(?<deg>[-+0-9]+)[^0-9]+(?<min>[0-9]+)[^0-9]+(?<sec>[0-9.,]+)[^0-9.,ENSW]+(?<pos>[ENSW]*)$"); /// <summary>Parses the lat lon value.</summary> /// <param name="value">The value.</param> /// <remarks>It must have at least 3 parts 'degrees' 'minutes' 'seconds'. If it /// has E/W and N/S this is used to change the sign.</remarks> /// <returns></returns> public static double ParseLatLonValue(string value) { // If it starts and finishes with a quote, strip them off if (value.StartsWith("\"") && value.EndsWith("\"")) { value = value.Substring(1, value.Length - 2).Replace("\"\"", "\""); } // Now parse using the regex parser Match match = Parser.Match(value); if (!match.Success) { throw new ArgumentException(string.Format(CultureInfo.CurrentUICulture, "Lat/long value of '{0}' is not recognised", value)); } // Convert - adjust the sign if necessary double deg = double.Parse(match.Groups["deg"].Value); double min = double.Parse(match.Groups["min"].Value); double sec = double.Parse(match.Groups["sec"].Value); double result = deg + (min / 60) + (sec / 3600); if (match.Groups["pos"].Value.Length > 0) // guard against an empty suffix { char ch = match.Groups["pos"].Value[0]; result = ((ch == 'S') || (ch == 'W')) ? -result : result; } return result; } A: If you mean C# code to do this: result = 25 + (36 / 60) + (55.57 / 3600) First you'll need to parse the expression with Regex or some other mechanism and split it into the individual parts. Then: String hour = "25"; String minute = "36"; String second = "55.57"; Double result = Double.Parse(hour) + Double.Parse(minute) / 60 + Double.Parse(second) / 3600; And of course a switch to flip the sign depending on N/S or E/W. Wikipedia has a little on that: For calculations, the West/East suffix is replaced by a negative sign in the western hemisphere. Confusingly, the convention of negative for East is also sometimes seen. The preferred convention -- that East be positive -- is consistent with a right-handed Cartesian coordinate system with the North Pole up. A specific longitude may then be combined with a specific latitude (usually positive in the northern hemisphere) to give a precise position on the Earth's surface. (http://en.wikipedia.org/wiki/Longitude) A: What are you wanting to represent it as? Arc seconds? Then 60 minutes in every degree, 60 seconds in every minute. You would then have to keep track of E and N yourself. This is not how it's generally done, though. The easiest representation I've seen to work with is a point plotted on the globe on a grid system that has its origin through the center of the earth. [Thus a nice position vector.] The problem with this is that while it's easy to use the data, getting it into and out of the system correctly can be tough, because the earth is not round, or for that matter uniform.
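A quick usage example for the parser above; the expected values follow directly from degrees + minutes/60 + seconds/3600:

double lat = ParseLatLonValue("45°39'12.52\"N"); // 45 + 39/60 + 12.52/3600 ≈ 45.6535
double lon = ParseLatLonValue("25°36'55.57\"E"); // 25 + 36/60 + 55.57/3600 ≈ 25.6154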
{ "language": "en", "url": "https://stackoverflow.com/questions/103006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Broken pipe no longer ends programs? When you pipe two processes and kill the one at the "output" of the pipe, the first process used to receive the "Broken Pipe" signal, which usually terminated it as well. E.g. running $> do_something_intensive | less and then exiting less used to return you immediately to a responsive shell, on SuSE 8 or earlier releases. When I try that today, do_something_intensive is obviously still running until I kill it manually. It seems that something has changed (glibc? shell?) that makes programs ignore "broken pipes"... Does anyone have hints on this? How can the former behaviour be restored? Why was it changed (or why have multiple semantics always existed)? edit: further tests (using strace) reveal that "SIGPIPE" is generated, but that the program is not interrupted. A simple #include <stdio.h> #include <stdlib.h> int main() { while(1) printf("dumb test\n"); exit(0); } will go on with an endless --- SIGPIPE (Broken pipe) @ 0 (0) --- write(1, "dumb test\ndumb test\ndumb test\ndu"..., 1024) = -1 EPIPE (Broken pipe) when less is killed. I could for sure program a signal handler in my program and ensure it terminates, but I'm looking more for some environment variable or a shell option that would force programs to terminate on SIGPIPE edit again: it seems to be a tcsh-specific issue (bash handles it properly) and terminal-dependent (Eterm 0.9.4) A: Well, if there is an attempt to write to a pipe after the reader has gone away, a SIGPIPE signal gets generated. The application has the ability to catch this signal, but if it doesn't, the process is killed. The SIGPIPE won't be generated until the calling process attempts to write, so if there's no more output, it won't be generated. A: Has "do something intensive" changed at all? As Daniel mentioned, SIGPIPE is not a magic "your pipe went away" signal but rather a "nice try, you can no longer read/write that pipe" signal. If you have control of "do something intensive" you could change it to write out some "progress indicator" output as it spins. This would raise the SIGPIPE in a timely fashion. A: Thanks for your advice, the solution is getting closer... According to the manpage of tcsh, "non-login shells inherit the terminate behavior from their parents. Other signals have the values which the shell inherited from its parent." Which suggests my terminal is actually the root of the problem... if it ignored SIGPIPE, the shell itself will ignore SIGPIPE as well... edit: I have definitive confirmation that the problem only arises with Eterm+tcsh, and found a suspiciously missing signal(SIGPIPE,SIG_DFL) in the Eterm source code. I think that closes the case.
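Given that diagnosis, a defensive fix inside the program itself is to reset SIGPIPE to its default disposition on startup, so an ignore inherited from the terminal or shell cannot stick (a sketch):

#include <signal.h>
#include <stdio.h>

int main(void)
{
    signal(SIGPIPE, SIG_DFL); /* die on broken pipe even if the parent ignored it */
    while (1)
        printf("dumb test\n");
    return 0;
}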
{ "language": "en", "url": "https://stackoverflow.com/questions/103016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What are pros and cons of Msmqdistributor service of Enterprise Library? We are using the EntLib Logging Application Block, and it turned out that we should use MSMQ for logging because of performance. Now we are trying to use the Msmqdistributor service to log those messages in the queue. What are the pros and cons of the Msmqdistributor service of Enterprise Library? Please share your experience.
A: The main drawback is going to be the Microsoft Message Queue (MSMQ) itself. MSMQ has been around for a while and it is a pretty cool tool. It does, however, lack utilities. Because of the way that data is stored in the queue, most people end up needing to write some helper utilities for debugging and manually manipulating the queue. Some other things to consider:
* Queue size - if too many items get put in the queue and aren't removed in a timely manner, the server can stall.
* Purpose - MSMQ is designed for multi-step transactions (such as billing); you mention you are going to use it for logging. If the log is just for debugging, then a DB table, a flat file, or sending errors to a bug tracker will serve you better. If you need complicated logging and are using MSMQ to send the information to a different computer, then you will find MSMQ more useful.
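As a sketch of the queue-size concern above: System.Messaging (add a reference to System.Messaging.dll) makes it easy to keep an eye on how many log messages are waiting for the distributor. The queue path is a made-up example, not the actual EntLib default:
using System;
using System.Messaging;

class QueueMonitor
{
    static void Main()
    {
        // Hypothetical path of the logging queue the distributor reads from
        const string path = @".\private$\entlibLogQueue";

        using (MessageQueue queue = new MessageQueue(path))
        {
            // GetAllMessages() snapshots the queue; fine for a health check,
            // but avoid it on very large queues as it copies every message.
            int pending = queue.GetAllMessages().Length;
            Console.WriteLine("Messages waiting for the distributor: " + pending);
        }
    }
}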
{ "language": "en", "url": "https://stackoverflow.com/questions/103033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: VS2008 Debugger does not break on unhandled exception I'm having an odd problem with my VS debugger. When running my program under the VS debugger, the debugger does not break on an unhandled exception. Instead, control is returned to VS as if the program exited normally. If I look in the Output tab, there is a first-chance exception listed just before the thread termination. I understand how to use the "Exceptions" box from the Debug menu. I have break on unhandled exceptions checked. If I check first-chance exceptions for the specific exception that is occurring, the debugger will stop. However, it is my understanding that the debugger should also stop on any unhandled exception. It is not doing this for me. Here are the last few lines of my Output tab:
A first chance exception of type 'System.ArgumentOutOfRangeException' occurred in mscorlib.dll
The thread 0x60c has exited with code 0 (0x0).
The program '[3588] ALMSSecurityManager.vshost.exe: Managed' has exited with code -532459699 (0xe0434f4d).
I don't understand why the exception is flagged as a "first chance" exception when it is unhandled. I believe that the 0xe0434f4d exit code is a generic COM error. Any ideas? Metro.
A: When I read the answer about having two check boxes in the "Exceptions..." dialog, I went back and opened the dialog again. I only had one column of check boxes -- for break on "Thrown". As it turns out, if you do not have "Enable Just My Code (Managed Only)" checked in the Debug options, the "User-unhandled" column does not show in the "Exceptions" dialog. I selected the "Enable Just My Code" option and verified that the "User-unhandled" checkbox on the "Exceptions" dialog was selected for all of the exception categories. I was able to get unhandled exceptions to break into the debugger for one session. But when I came back the next day, the behavior was as before. Metro.
A: If you're on a 64-bit OS, there's a pretty good chance you're being bitten by an OS-level behavior that causes exceptions to disappear. The most reliable way to reproduce it is to make a new WinForm application that simply throws an exception in OnLoad; it will appear to not get thrown. Take a look at these:
* Visual Studio doesn't break on unhandled exception with windows 64-bit
  http://social.msdn.microsoft.com/Forums/en/vsdebug/thread/69a0b831-7782-4bd9-b910-25c85f18bceb
* The case of the disappearing OnLoad exception
* Silent exceptions on x64 development machines (Microsoft Connect)
  https://connect.microsoft.com/VisualStudio/feedback/details/357311/silent-exceptions-on-x64-development-machines
The first is what I found from Google (after this thread didn't help), and that thread led me to the following two. The second has the best explanation, and the third is the Microsoft bug/ticket (which re-affirms that this is "by design" behavior). So, basically, if your application throws an exception that hits a kernel-mode boundary on its way back up the stack, it gets blocked at that boundary. And the Windows team decided the best way to deal with it was to pretend the exception was handled; execution continues as if everything completed normally. Oh, and this happens everywhere. Debug versus Release is irrelevant. .NET vs C++ is irrelevant. This is OS-level behavior. Imagine you have to write some critical data to disk, but it fails on the wrong side of a kernel-mode boundary. Other code tries to use it later and, if you're lucky, you detect something's wrong with the data... but why?
I bet you never consider that your application failed to write the data, because you expected an exception would be thrown. Jerks.
A: Ctrl-D, E brings up the Exceptions window. You can set what exceptions you want to, and don't want to, break on.
A: There are two checkboxes in the "Exceptions..." box; I usually have to have them both checked to get it to break on unhandled exceptions, even though it reads like you only need one.
A: Every once in a while this happens to me as well. It seems like a bug or something, as when I replicate the scenario the exception is caught and shown as usual.
A: I had a similar problem, and checking "Enable Just My Code (Managed Only)" fixed the problem, while if I turned it back off then the problem came back. No clue why (but it is possible that some DLLs that appear to get loaded when it is unchecked cause the behavior).
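For the 64-bit OnLoad case described above, one common workaround is to catch the exception yourself before it reaches the kernel-mode boundary and surface it explicitly. A minimal illustrative sketch, not an official fix:
using System;
using System.Diagnostics;
using System.Windows.Forms;

public class MainForm : Form
{
    protected override void OnLoad(EventArgs e)
    {
        try
        {
            base.OnLoad(e);
            // ... initialization that might throw ...
        }
        catch (Exception)
        {
            // On 64-bit Windows the exception would otherwise vanish at the
            // kernel-mode boundary; force the debugger to notice it here.
            if (Debugger.IsAttached)
                Debugger.Break();
            throw;
        }
    }
}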
{ "language": "en", "url": "https://stackoverflow.com/questions/103034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Where can I see what Color properties in .NET like "BlanchedAlmond" look like? I'm in the middle of writing code in .NET to draw something in my app and I need to pick a color to use. But what does the color "Chartreuse" look like? Isn't there a nice bitmap somewhere that shows what each of the named colors looks like? Thanks!
A: MSDN - Colors by Name
A: Try this site. This site is nice because it shows how the color will look as a foreground and as a background color.
A: I believe this is what you're looking for: http://www.cambiaresearch.com/c4/7cb36a7b-3731-48f6-b91b-1d8c503f140e/What-are-the-aspnet-Named-Colors.aspx
A: Yes, there is a site with hex codes: http://adonnart.free.fr/gratuit/140coulu.htm
A: Check this out: http://www.w3schools.com/TAGS/ref_color_tryit.asp?color=BlanchedAlmond (Pay attention to the URL and modify as necessary.)
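If you'd rather generate the list yourself than hunt for a web page, a small sketch like this dumps every named .NET color with its RGB values (console output only; drawing actual swatches is left as an exercise):
using System;
using System.Drawing;

class NamedColors
{
    static void Main()
    {
        foreach (KnownColor known in Enum.GetValues(typeof(KnownColor)))
        {
            Color c = Color.FromKnownColor(known);
            // Skip system UI colors (ActiveBorder etc.) to list only named web colors
            if (!c.IsSystemColor)
                Console.WriteLine("{0,-25} R={1,3} G={2,3} B={3,3}", c.Name, c.R, c.G, c.B);
        }
    }
}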
{ "language": "en", "url": "https://stackoverflow.com/questions/103035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do you move from the Proof of Concept phase to working on a production-ready solution? I'm working on a project that's been accepted as a proof of concept and is now on the schedule as an actual production project. I'm curious how others approach this transition. I've heard from various sources that when a project starts as a proof of concept, it's often a good idea to trash all of the code written during that rapidly-evolving phase and essentially start over with a clean slate, relying on what you learned from the conceptual phase but without working to clean up the potentially messy code that you wrote the first time around. Kind of the programming version of the "throw away the first copy of that angry email you're about to send and start all over" theory. I've done it this way in the past, and I've also refactored the conceptual code to use in production, but since I'm in the transition phase for a new project I wanted to get an idea how others do this. Obviously a lot depends on the project itself and on the conceptual code (if what you generated works but won't scale, for example, it's probably best to start afresh, but if you have a very compressed timeline for the project you might be forced to build on what you've already written). That said, if all things were equal, what would you choose as an approach?
A: As you already kind of hinted at, the answer is, "It Depends." Starting over is good because you can trim out the stuff that was added while you were initially working out the kinks but isn't really needed. It also gives you a chance to give more consideration to how you want the architecture to be -- without already being dependent on how the proof of concept was written... In practice, though, unless you're in the business of selling the software to the outside world, building upon the prototype is pretty commonplace. Just don't get into the habit of thinking "I'll fix it later" if you run into some code that smells or seems like it could be done in a better way...
A: Refactor the existing code into the solution.
A: For me it would depend on how sloppy my POC was. If it is something I would be ashamed to pass on to another developer, I would rewrite it. Otherwise, just go with what you've got.
A: If the code works, use it. Spend a little bit of time refactoring the messiest parts to ease future maintenance. But don't fall into the trap of building a new system from scratch.
A: Throw away everything from the proof of concept except for the lessons learned and, possibly, some minor code fragments such as calculations. Proof of concept applications should never be more than the bare minimum needed to see if the technology in question will work and to start testing some of the boundary conditions. Once done, you are free to redesign the application with your new-found knowledge.
{ "language": "en", "url": "https://stackoverflow.com/questions/103051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Where to start with source-control Anyone have any suggestions on where to start for a newbie wanting to try out some sort of source control along with a new journey into ASP.NET? SVN, VSS, CVS... I don't even know where to start!
A: Very Subversion-specific, but it gives a decent understanding of version control basics. http://svnbook.red-bean.com/
A: I think the easiest way is to get a free SVN server through some free provider. I use Assembla. Then, if you are on Windows, you should download TortoiseSVN. That is by far the easiest Subversion client to use. Start by importing a current project into the repository and then you are ready to go.
A: I recommend SVN to start off with; it's a bit easier to get the hang of than some others. Tortoise is a good start. It's simple to install and integrates with the Windows shell, meaning everything is fairly intuitive via right-clicks on folders/files, etc. Pretty good doc as well. You will also need an SVN server/host to connect to... I can recommend Assembla, which is free to register with and gives you an SVN server to play with.
A: I will recommend Subversion or Visual Studio Team Foundation Server depending on how much money you are willing to shell out. Check out VisualSVN (Subversion for die-hard VS fans) - it integrates nicely with Visual Studio: http://www.visualsvn.com/
A: If you are just getting started, I would recommend reading through the free Subversion book. It is a widely-used system, free, and should be relatively easy to comprehend once you get going.
A: If you're 100% .NET, and you have an MSDN developer license, run TFS for Workgroups. It's pretty simple and solid. If you don't have a license, check out Subversion. It's a nice, free source control system that has plugins for Visual Studio integration.
A: I was first introduced to SVN using TortoiseSVN and loved it; later a company used Visual SourceSafe, and how I missed Tortoise.
A: Get a copy of the book Pragmatic Version Control Using Subversion - it will help you get started. (Cover image: http://ecx.images-amazon.com/images/I/51XYQTP2BYL._SL500_AA240_.jpg)
A: Since nobody has mentioned this yet, I'd recommend looking at Git as well. There's even a free source hosting service: GitHub.
A: Nowadays I would just start with a distributed system. They are easier to set up (you don't need to set up a server and/or find one on-line: just init some random directory and start doing your stuff) and just as easy/hard to understand as the centralized ones. Here are a few that people should take a look at when choosing a distributed revision control system:
* Git
* Darcs
* Mercurial (Hg)
If you're stuck on Windows, I would stay away from Git (at least for the time being). There seems to be support for Git on Windows in progress, but I haven't tried it yet.
A: Just make sure you stay away from Visual SourceSafe.
A: Eric Sink's Source Control HOWTO. Read it, then download Subversion (free) or Vault (free for a single user) and start playing around with it.
A: Lots of people here have suggested introductions and comprehensive how-tos, which are all good for telling you how to do what you want. In addition, I'd give three pieces of advice for the novice on how to know what you want:
1) Version-control EVERYTHING (that is, everything you write). Version-control the project files. Version-control your test cases. Version-control any little scripts you use to copy things around. Version-control your todo list. Definitely version-control your design notes.
Once you're familiar with the commands it costs nothing, and some day you'll be glad of the history of a file you'd never imagined needing to roll back.
2) When you're happy with a change, check it in immediately. And check it all in. If you work in sequential steps (and that doesn't always happen - you can get distracted - but it's good practice), then at the start of each new step you should have 0 modified files in your checkout. You may even want to check in unfinished non-working code, depending on what suits you.
3) When you reach a milestone, tag it. Even your own personal goals (inch-pebbles). If you can't be bothered with tagging, just make a note of the date and time (in, you guessed it, a version-controlled file). If a particular version is memorable for some reason ("I finished the back-end", "I sent it to someone else to look at"), you want to know exactly what was in it. And diffing against the repository diagnoses some kinds of bugs faster than the debugger.
A: Have to second reading the Subversion book. Plus, the software is free, runs on most environments, and is easy to get going.
A: The Pragmatic series has two of the best books I have read to help get your head around version control. They have versions for SVN and CVS. I really like their chapters on tags and branches.
A: What source control to use really depends on your environment, your corporate culture and the general situation of how projects are handled in your company. The newer "Visual SourceSafe" in Team Foundation is hands down better than the piece of crud that VSS used to be, and is good in a Microsoft shop with one or few locations. I've also used Subversion very successfully, and it integrates well into Eclipse. I don't like to put down products, since all have their positive and negative points, I guess. But the two I mentioned above are really good source/version control products. If you are just getting started and want to get your feet wet and learn source control practices and general concepts, download an open source product like CVS or Subversion, load it up and try it out. http://www.ericsink.com/scm/source_control.html has some good information to work with. -- I am re-editing this comment to note that it looks like someone else linked to that source control link in a previous post. :) darn it, I wasn't the first to post it.
A: Just getting the tools installed is not enough. You need to understand how the particular technology works (like SVN) and how the whole source tree is supposed to work: best structure, tagging, branching, merging and so forth. Since we use SVN I recommend the Subversion Book. It has some good explanations of source control concepts.
A: Check out Subversion (SVN) to start. As suggested in other posts, there's an excellent free ebook that not only covers how to use SVN but is an excellent start on the concepts. For an easy-to-set-up server, check out the free server from VisualSVN. If you use Visual Studio, you can use their add-in (costs) or the open source AnkhSVN add-in. If you're using other IDEs, TortoiseSVN integrates with Explorer in Windows.
A: Start with Subversion. The documentation is online, and the Pragmatic Programmers Svn book is great. If you're on Windows you can also get TortoiseSVN (free) for Explorer integration or VisualSVN (commercial) for Visual Studio integration. For the Mac, Versions looks like a nice stand-alone client, and Xcode 3 has svn integration built in.
Still, I'd spend a bit of time in the command line using the svn client to really figure out what you're doing. After getting comfortable with the way svn works, then you can get into distributed version control systems like Git, Bazaar, or Mercurial, but I've seen enough professional developers have problems wrapping their heads around the basics of version control (branching, merging, etc.) that I'd get comfortable with that first before moving to distributed systems. Stay the hell away from Visual SourceSafe (VSS). It is poop. Your code is not safe. See these many links as to why not to use VSS.
A: subversion ftw.
A: You could start browsing through this site. https://stackoverflow.com/questions/tagged/source-control
A: In my opinion, if you already have Visual SourceSafe and you are a single developer doing work with VS, it is perfectly fine. It's a simple system that works well enough for small single-developer projects. I have used it for years without any trouble for small projects. Easy to manage and back up as well.
A: I believe the starting point is with distributed version control systems (like Mercurial or Git). There are some advantages to using them:
* You don't need to set up a central repository (which requires tedious server setup)
* You can share changesets (revisions) with your friends by email or other methods, and integrate changes into your repository safely.
* You can modify revision history easily (rebase; Git supports it), which is impossible to do with SVN.
A: I recommend trying one of the free distributed version control systems. Most of them are very easy to set up on your personal PC and come with good documentation. Here are my favourites:
1. Bazaar
2. Mercurial
3. Git (only for *nix-based systems as far as I know)
See a list here: http://en.wikipedia.org/wiki/List_of_revision_control_software
A: Comments on products that I've used:
VSS - Haven't used it in a while now (several years); at the time it did the basics of what we wanted, but we were running into a variety of issues regularly enough that we actively sought out a better solution. If you've got free access then it wouldn't hurt to get exposure to their implementation to see what was at the time a different way of dealing with the issues (they may have come into line with the rest of the products by now).
CVS - Used this at a previous company, and worked with the original author of TortoiseCVS, so I'm perhaps a little biased in thinking that it was a decent open source solution. I'd recommend starting here personally; everything's easily available and widely used.
Perforce - Our company's current source control solution, and pretty much universally well regarded on the team. A couple of alternative UIs that people can choose between, and good command line support too (vital for tools interaction). If you're evaluating for a company I'd certainly include this in your list to look at.
A: Try Unfuddle for free hosted SVN source control.
A: Another vote for Subversion / TortoiseSVN, but note that it doesn't play nicely with a FAT32 domestic NAS.
A: I've been losing faith in the whole source control community. Everyone says VSS is terrible, like those old claims: C++ is way better than C, Vista is terrible, and XP is too old to serve the world. Yet still lots of people use them. You never know how you will like a tool before you use it. And every version control application has its share of bugs and defects. And VSS is just so difficult to drop.
A: Visual Source Safe (VSS) is fine for a beginner, because you won't know what you're missing. When you need something more complicated, then worry about getting some other product.
{ "language": "en", "url": "https://stackoverflow.com/questions/103059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Why can't I open this table in SQL Server Management Studio? I created a couple of tables procedurally via C#, named something like [MyTableOneCustom0] and [MyTableTwoCustom0]. When I try to return all of the values from these tables via "Open Table" in MS SQL Server Management Studio, I receive the following error:
Error Source: Microsoft.VisualStudio.DataTools
Error Message: Exception has been thrown by the target of an invocation.
However, I can still bring up all of the data via a SELECT * statement. Does anyone know what is causing this?
A: Based on a similar post located at Egg Head Cafe, it looks like Management Studio will throw an exception if there are too many columns included explicitly in the query. SELECT * returns them implicitly, so there doesn't seem to be an issue. I have over 800 columns in this table, so I'm sure this is the problem.
A: I hesitate to ask, but normally you would not want 800 or more columns in a database table, so why did you do this? Given how databases store information, you are possibly creating many problems for yourself with a design like that, in terms of both data retrieval and storage. How many bytes of data would a full row have? You know there is a limit to the number of bytes of data that can be stored in a row. You could be setting yourself up for issues entering data when a row exceeds those limits. It might be best to break it into separate tables even if there is a one-to-one relationship. Read in BOL about data pages and how data is stored to understand why this concerns me.
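If you want to confirm that column count is the culprit, a quick check using the standard INFORMATION_SCHEMA views (substitute your own table name):
-- Count the columns in the suspect table
SELECT COUNT(*) AS ColumnCount
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTableOneCustom0';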
{ "language": "en", "url": "https://stackoverflow.com/questions/103092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Incremental File Copy Tool and NIH For years, one of my most important tools has been an incremental copy utility that compares the contents of two directories and shows me which files are newer / older / added / deleted. Every day I find myself copying folders of source code between my two desktop machines and the server, and such a utility is critical to avoid overwriting newer files with older ones and also to save time by only copying changed files. In addition, the utility allows me to see new files in the source folder that I don't necessarily want to copy (like temp files) that I can instead delete. Like anyone who subscribes to the NIH way of thinking, I wrote my own utility to compare the contents of two folders and let me mark files to be copied, deleted, diffed or ignored. I've had many versions of this utility going back to DOS, OS/2 and Win32. I use this utility on a daily basis, and it leaves me wondering: What do others use? Surely there are similar programs out there to do this... My utility doesn't have a diff screen, and it would occasionally be nice to see what the difference is between two changed files. What do you use for comparing and incrementally copying between folders?
A: rsync. All the time. The biggest benefit of rsync is that it trades increased CPU time for decreased transfer bandwidth; as CPUs are super fast nowadays and even disk copy is relatively slow, this is a good thing.
A: I use rsync for some jobs, and unison for others. For your situation, I would strongly recommend using a version control solution such as Subversion. As for NIH? While I have written a large number of tools over the years, I always look for an existing tool before writing my own. It saves time, and may offer a better solution than I would have used. At the very least, it will give me some "how NOT to do it" examples.
A: SyncToy is also good at this stuff.
A: SyncBack (free) or SyncBackSE ($$) is another possible solution. SyncBackSE is one of the few programs I've ever paid for. Health warning: Win only. IMHO, NIH violates Laziness and Impatience, though it strongly supports Hubris.
A: I tried Robocopy (included in Vista and available for download for XP) today and it worked fine. To incrementally mirror a drive I used:
robocopy source destination /MIR
There is also a GUI available: http://technet.microsoft.com/en-us/magazine/cc160891.aspx
A: robocopy. It's in Vista, and is also part of the Windows Resource Kit. It has a strange command line interface, but it's very powerful and good for this kind of thing. Still, I find myself wondering whether source control would be a better choice for you.
A: NIH is not necessarily a bad thing. It can be good when the application has some very personal traits and you want it to be as convenient for you as possible, screw the generality. I also rolled my own utility (a Perl script) for this purpose a few years ago, and I'm using it both at home and at work for backups. It's simple to run because it does exactly what I need and only that, and simple to tweak because it's written in a flexible scripting language.
A: I have been using rsync (Linux to Linux / WinXP to Linux) a lot and like it. Today I tried to get it up and running under Vista, and that was quite a challenge. I got it running but am having some issues with network drives and localized chars (e.g. åäö). SyncToy seems pretty sweet! I noticed that it puts a data file in the synced folders.
Does anyone know if it is possible to use it without the data files, or to have it save the data files to another folder? I have to try robocopy as well. Thanks a lot!
A: Have you considered using a version control tool to accomplish this? It will allow you to keep things in sync while also remembering the history of a project.
A: A few good tools to try here. Rsync is the ultimate for permissions and deltas on Linux, but on Windows you're usually using it through a POSIX layer, so ACL permissions aren't perfect. The people who made DeltaCopy also made a pure Windows version from the ground up that is based on rsync's algorithm: http://web.synametrics.com/Syncrify.htm I haven't tried it, but hopefully it will bring back good permissions to Windows with the amazing incremental offerings of rsync.
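Since rsync comes up in several answers above without an example invocation, a typical incremental mirror looks something like this (paths are placeholders; note the trailing slash on the source, which copies the directory's contents rather than the directory itself):
# Archive mode (-a): recurse and preserve permissions/times; -v for verbosity.
# --delete removes files from the destination that no longer exist in the source.
rsync -av --delete /path/to/source/ /path/to/destination/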
{ "language": "en", "url": "https://stackoverflow.com/questions/103118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I figure out who has a SQL Server 2005 database in Single User Mode? I have a database in single user mode and I am trying to drop it so I can re-run the creation scripts on it, but I'm being locked out from it. * *How do I figure out who has the lock on it? *How do I disable that lock? A: run sp_who, find the spid with the database name you require, kill the spid. A: From SQL Server Management Studio: * *open the object explorer *expand the database server *expand "Management" *double-click on "Activity Monitor" *locate the process using the desired database *right-click on process *click "Kill Process"
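If you'd rather script it than click through Activity Monitor, a rough T-SQL sketch along the lines of the sp_who answer (the database name and spid are placeholders; run from a connection to master):
-- Find the session holding the single-user connection
SELECT spid, loginame, hostname
FROM master.dbo.sysprocesses
WHERE dbid = DB_ID('MyDatabase');

-- Kill it (substitute the spid reported above), then open the database up
KILL 53;
ALTER DATABASE MyDatabase SET MULTI_USER;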
{ "language": "en", "url": "https://stackoverflow.com/questions/103121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: error while using groovy.sql Out parameter I am trying to execute some stored procedures in a Groovy way. I am able to do it quite easily by using straight JDBC, but this does not seem in the spirit of Grails. I am trying to call the stored procedure as:
sql.query( "{call web_GetCityStateByZip(?,?,?,?,?)}",[params.postalcode, sql.out(java.sql.Types.VARCHAR), sql.out(java.sql.Types.VARCHAR), sql.out(java.sql.Types.INTEGER), sql.out(java.sql.Types.VARCHAR)]) { rs ->
    params.city = rs.getString(2)
    params.state = rs.getString(3)
}
I tried various ways like sql.call, and I was trying to get the output variable values afterwards. Every time I get the error:
Message: Cannot register out parameter. Caused by: java.sql.SQLException: Cannot register out parameter. Class: SessionExpirationFilter
but this does not seem to work. Can anyone point me in the right direction?
A: This is still unanswered, so I did a bit of digging, although I don't fully understand the problem. The following turned up from the Groovy source; perhaps it's of some help: This line seems to be the origin of the exception: http://groovy.codehaus.org/xref/groovy/sql/Sql.html#1173 This would seem to indicate that you have a Statement object implementing PreparedStatement, when you need the subinterface CallableStatement, which has the registerOutParameter() method that should ultimately be invoked.
A: Thanks, Internet Friend. If I write code like:
Sql sql = new Sql(dataSource)
Connection conn
ResultSet rs
try {
    conn = sql.createConnection()
    CallableStatement callable = conn.prepareCall( "{call web_GetCityStateByZip(?,?,?,?,?)}")
    callable.setString("@p_Zip",params.postalcode)
    callable.registerOutParameter("@p_City",java.sql.Types.VARCHAR)
    callable.registerOutParameter("@p_State",java.sql.Types.VARCHAR)
    callable.registerOutParameter("@p_RetCode",java.sql.Types.INTEGER)
    callable.registerOutParameter("@p_Msg",java.sql.Types.VARCHAR)
    callable.execute()
    params.city = callable.getString(2)
    params.state = callable.getString(3)
}
it works well the JDBC way. But I wanted to try it like the previous code, using sql.query/sql.call. Any comments? Thanks Sadhna
A: The Groovy way could be this code:
def getHours(java.sql.Date date, User user) throws CallProceduresServiceException {
    log.info "Calling stored procedure for getting hours statistics."
    def procedure
    def hour
    try {
        def sql = Sql.newInstance(dataSource.url, user.username, user.password, dataSource.driverClassName)
        log.debug "Date(first param): '${date}'"
        procedure = "call ${dbPrefixName}.GK_WD_GET_SCHEDULED_TIME_SUM(?, ?, ?, ?)"
        log.debug "procedure: ${procedure}"
        sql.call("{${procedure}}", [date, Sql.out(Sql.VARCHAR.getType()), Sql.out(Sql.VARCHAR.getType()), Sql.out(Sql.VARCHAR.getType())]) { hourInDay, hourInWeek, hourInMonth ->
            log.debug "Hours in day: '${hourInDay}'"
            log.debug "Hours in week: '${hourInWeek}'"
            log.debug "Hours in month: '${hourInMonth}'"
            hour = new Hour(hourInDay, hourInWeek, hourInMonth)
        }
        log.info "Procedure was executed."
    } catch (SQLException e) {
        throw new CallProceduresServiceException("Executing sql procedure failed!" + "\nProcedure: ${procedure}", e)
    }
    return hour
}
In my app it works great. Tomas Peterka
{ "language": "en", "url": "https://stackoverflow.com/questions/103136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: JavaScript Debugger Does anyone know of a really good editor to debug JavaScript (other than Visual Studio 2008 and Firebug)?
A: Here is an article, Advanced JavaScript Debugging Techniques, that describes the use of several tools. One new tool I learned about that I hadn't heard of before is JSLint. Sometimes JSLint just immediately shows you the dodgy code that is causing the issue.
A: Opera has Dragonfly, though I still prefer Firebug. Before Firebug there was Venkman, though its future is uncertain at this point.
A: IE8 beta 2 has a nice debugger
A: The Google Chrome browser has a reasonable wee JS debugger built in. There's a good list of the available commands here.
A: Take a look at Venkman, the JavaScript debugger for Firefox: http://www.mozilla.org/projects/venkman/ It's a real source-level JavaScript debugger where you can set breakpoints and step through code.
A: The Aptana Studio IDE has a nice JavaScript debugger. The community version supports only Firefox; the professional one also supports Internet Explorer.
A: For Internet Explorer debugging (and when you don't have VS 2008), you can use MS Script Editor. This is a good writeup on how to get it configured correctly: http://www.jonathanboutelle.com/mt/archives/2006/01/howto_debug_jav.html
A: I work in Aptana. You set breakpoints, hover over variables, and do watches right in the editor. Love it. Never thought I'd move away from Firebug as my chief debugger.
A: If you're accustomed to using Firebug, you might like Firebug Lite, implemented in JavaScript. You can use it as a bookmarklet, which is nice. I'm not sure how powerful it is; I imagine other, "real" solutions are better, but it's handy in a pinch.
{ "language": "en", "url": "https://stackoverflow.com/questions/103155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How to convert an XML file into a .Net class? Can someone please remind me how to create a .Net class from an XML file? I would prefer the batch commands, or a way to integrate it into the shell. Thanks!
A: The batch file below will create a .Net class from an XML file in the current directory. So... XML -> XSD -> VB (feel free to substitute CS for the Language). Create a Convert2Class.bat in the %UserProfile%\SendTo directory. Then copy/save the below:
@Echo off
Set XsdExePath="C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\XSD.exe"
Set Language=VB
%~d1
CD %~d1%~p1
%XsdExePath% "%~n1.xml" /nologo
%XsdExePath% "%~n1.xsd" /nologo /c /language:%Language%
Works on my machine - good luck!!
A: You might be able to use the xsd.exe tool to generate a class; otherwise you probably have to implement a custom solution against your XML. See the MSDN topics: XML Schema Definition Tool and XML Serialization.
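Once xsd.exe has generated the class, loading an XML file into it is a one-liner with XmlSerializer. A rough sketch: the NewDataSet type here is a hypothetical stand-in (xsd.exe names the root class after your XML's root element), so substitute whatever class the tool actually emitted:
using System.IO;
using System.Xml.Serialization;

// Stand-in for the class xsd.exe generates; the real one has
// properties matching your XML's elements and attributes.
[XmlRoot("NewDataSet")]
public class NewDataSet
{
}

public static class Loader
{
    public static NewDataSet Load(string path)
    {
        // Deserialize the XML document into the generated class
        XmlSerializer serializer = new XmlSerializer(typeof(NewDataSet));
        using (FileStream stream = File.OpenRead(path))
        {
            return (NewDataSet)serializer.Deserialize(stream);
        }
    }
}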
{ "language": "en", "url": "https://stackoverflow.com/questions/103157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: IIS ASP Caching I'm trying to configure ASP caching in IIS, following the instructions of a piece of software I purchased. This is supposed to make it run faster. http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/a5766228-828e-4e31-a92b-51da7d24d569.mspx?mfr=true The software instructions point to that article. The problem I'm having is that the "ASP File Cache" section mentioned there does not exist in my IIS dialog... Am I missing something? Is there any configuration that'll make it appear? I'm running IIS 6.0 on W2003 Server Enterprise Edition. Update 1: I am logged in as the local administrator (the box is not in a domain)
A: Right-click on "Web Sites" in IIS Manager. Choose Properties -> Home Directory -> Configuration and you'll see the "Cache options" tab. The trick is to click on "Web Sites" itself, as opposed to proceeding to a specific web site.
{ "language": "en", "url": "https://stackoverflow.com/questions/103158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }