Q: prevent mime faking on php uploads Is there a way to prevent someone from faking a mime type on a file upload and then running a php/exe/etc...
I have to make the file upload directory writable and executable so that the files can be stored, but this allows anyone to run a script afterwards. One thing I can do is add random data to the file name so they can't guess the file name afterwards (since they still can't read from the directory to get a listing).
I'm using file upload with php for the first time and I'm trying to cover all of the security issues.
A: The file upload directory should not be accessible to the web browser. I.e., don't allow somebody to upload a file, say "remove_all_my_files.php", and then execute it on your system by giving the URL to it, say "http://example.com/uploads/remove_all_my_files.php".
A: The information in $_FILES always comes from the client, so what you want to do is accept the file and scan it on the server. I'd recommend using finfo, a PHP extension that makes this easy:
<?php
// example :-)
$finfo = finfo_open(FILEINFO_MIME);
echo finfo_file($finfo, '/path/to/your/upload/file');
finfo_close($finfo);
?>
There is also an OO interface if you don't like the procedural one.
If finfo is not an option, you could use the Unix command file to check.
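For instance, a minimal sketch of shelling out to file (an assumption: a Unix-like host with file on the PATH; the helper name is made up):
<?php
// Hypothetical helper: ask the `file` utility for the MIME type.
function mime_via_file($path) {
    $out = shell_exec('file --brief --mime-type ' . escapeshellarg($path));
    return $out === null ? false : trim($out);
}
?>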
Also, many people suggest serving files through a wrapper. I am torn on this one: it can be a solution, but it's far from ideal, because a) the files are still on your server and b) it's expensive to serve files like that.
A: Don't serve the file directly. Keep uploads in a location with no public access. Read from the file and buffer the output to allow downloading.
Here's the basic idea:
function ReadAndOutputFileChunked ($filename) {
    $chunksize = 1*(1024*1024); // how many bytes per chunk
    $buffer = '';
    $handle = @fopen($filename, 'rb');
    if ($handle === false) {
        return false;
    }
    while (!feof($handle)) {
        $buffer = @fread($handle, $chunksize);
        print $buffer;
    }
    return @fclose($handle);
}
header("Content-type: application/octet-stream");
ReadAndOutputFileChunked('/private/path/to/upload/files/' . $nameOfFile);
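You will probably also want a Content-Disposition header so the browser offers a sensible file name; a hedged one-liner (assuming $nameOfFile is already sanitized):
header('Content-Disposition: attachment; filename="' . basename($nameOfFile) . '"');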
A: On my Apache web server configurations, the actual file contents don't determine whether a file runs as a script or not. The determination as to whether to display a file as text or an image format, or run it as a script, is made by matching the file extension.
For example, a directive in the apache configuration file, httpd.conf of
AddType application/x-httpd-php .php
tells the server to run files ending in .php as PHP scripts. So just make sure that none of the uploaded files are saved with a .php extension, any other script-executable extension, or any file extension you use for include files.
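As a hedged sketch (assuming Apache 2.x with mod_php; the uploads path is illustrative), you can also switch the PHP engine off for the upload directory entirely:
# Disable PHP handling for everything under the upload directory.
<Directory "/var/www/uploads">
    php_admin_flag engine off
    RemoveHandler .php .phtml
</Directory>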
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Is it possible to retrieve the call stack programmatically in VB6? When an error occurs in a function, I'd like to know the sequence of events that lead up to it, especially when that function is called from a dozen different places. Is there any way to retrieve the call stack in VB6, or do I have to do it the hard way (e.g., log entries in every function and error handler, etc.)?
A: I'm pretty sure you have to do it the hard way. At a previous job of mine, we had a very elegant error handling process for VB6 with DCOM components. However, it was a lot of redundant code that had to be added to every method, so much so that we had home-grown tools to insert it all for you.
I can't provide too much insight on its implementation (both because I've forgotten most of it and there's a chance they may consider it a trade secret). One thing that does stand out was that the method name couldn't be derived at run-time so it was added as a string variable (some developers would copy-paste instead of using the tool and it would lead to error stacks that lied...).
HTH
A: The hard, manual way is pretty much the only way. If you check out this question, someone suggested a tool called MZTools that will do much of the grunt work for you.
A: You do have to do it the hard way, but it's not really all that hard... Seriously, once you've written the template once, it's a quick copy/paste/modify to match the function name in the Err.Raise statement to the actual function name.
Private Function DoSomething(ByVal Arg As String)
    On Error GoTo Handler

    Dim ThisVar As String
    Dim ThatVar As Long

    ' Code here to implement DoSomething...
    Exit Function

Handler:
    Err.Raise Err.Number, , "MiscFunctions.DoSomething: " & Err.Description
End Function
When you have nested calls, this unwinds as each routine hits its Handler and adds its name to the error description. At the top level function, you get a "call stack" showing the list of routines that were called, and the error number and description of the error that actually occurred. It's not perfect, in that you don't get line numbers, but I've found that you don't usually need them to find your way to the problem. (And if you really want line numbers, you can put them in the function and reference them in the Err.Raise statement using the Erl variable. Without line numbers, that just returns 0.)
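For example, a minimal sketch with line numbers so Erl has something to report (routine and module names are illustrative):
Private Function DoSomethingElse()
10  On Error GoTo Handler
20  ' ... work that might fail ...
30  Exit Function
Handler:
    ' Erl returns the most recently executed line number label.
    Err.Raise Err.Number, , "MiscFunctions.DoSomethingElse(line " & Erl & "): " & Err.Description
End Function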
Also, note that within the function itself, you can raise your own errors with the values of interesting variables in the message like so:
Err.Raise PCLOADLETTER_ERRNUM, , "PC Load Letter error on Printer """ & PrinterName & """"
(The syntax highlighting looks wonky in the preview... I wonder how it will look when posted?)
A: As other people said (years ago, I see... but there are so many people still using VB6! :) ), I think it's not possible to programmatically retrieve the call stack, unless you use some 3rd-party tool.
But if you need to do that for debugging purposes, you can consider adding an optional input string variable to the called routine, where you'll put the caller's name.
Sub MyRoutine()
    (...) ' Your code here
    Call DoSomething(Var1, Var2, Var3, "MyRoutine")
    '                                       ^
    ' Present routine's name ---------------+
    (...) ' Your code here
End Sub

Public Sub DoSomething(DoVar1, DoVar2, DoVar3, Optional Caller As String = "[unknown]")
    Debug.Print "DoSomething Routine Called. Caller = " & Caller
    ... ' (your code here)
End Sub
Not so elegant, maybe, but it worked for me.
Regards,
Max - Italy
A: Compuware (or was it NuMega at the time) DevStudio for Visual Basic 6 used to do this. The way was by adding instrumentation to every call, which called a very small snippet that added to the call stack. On any error it dumped out that call stack, and then did things like mail or post all the debugging information to a web server. Adding and removing the instrumentation was a potentially lethal operation (especially back then, when we were using VSS as our source control), but if it worked, it worked well.
As Darrel pointed out, you could add something very similar by using MZTools and setting up a template. It's a lot of work, and is probably more effort than the reward would be, but if you have very difficult-to-track-down bugs, it might help.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127645",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: Change the Catalog property of a Crystal Report in VS 2005 I'm working on an existing report and I would like to test it with the database. The problem is that the catalog set during the initial report creation no longer exists. I just need to change the catalog parameter to a new database. The report is using a stored proc for its data. It looks like if I try to remove the proc and re-add it, all the fields on the report will disappear and I'll have to start over.
I'm working in the designer in Studio and just need to tweak the catalog property to get a preview. I have code working to handle things properly from the program.
A: If you just need to do it in the designer, then right-click in some whitespace and click on Database -> Set Datasource Location. From there you can use a current connection or add a new connection. Set a new connection using the new catalog. Then click on your current connection in the top section and click Update. Your data source will change. But if you need to do this at runtime, then the following code is the best approach.
' SET REPORT CONNECTION INFO
For i = 0 To rsource.ReportDocument.DataSourceConnections.Count - 1
    rsource.ReportDocument.DataSourceConnections(i).SetConnection(crystalServer, crystalDB, crystalUser, crystalPassword)
Next
A: EDIT: Saw your edit, so I'll keep my original post, but I have to say... I've never had a Crystal Report in design mode in VS, so I can't be of much help there, sorry.
report.SetDatabaseLogon(UserID, Password, ServerName, DatabaseName);
After that you have to roll through all referenced tables in the report and recurse through subreports and reset their logoninfo to one based on the reports connectioninfo.
private void FixDatabase(ReportDocument report)
{
    ConnectionInfo crystalConnectionInfo = someConnectionInfo;
    foreach (Table table in report.Database.Tables)
    {
        TableLogOnInfo logOnInfo = table.LogOnInfo;
        if (logOnInfo != null)
        {
            logOnInfo.ConnectionInfo = crystalConnectionInfo;
            table.LogOnInfo.TableName = table.Name;
            table.LogOnInfo.ConnectionInfo.UserID = someConnectionInfo.UserID;
            table.LogOnInfo.ConnectionInfo.Password = someConnectionInfo.Password;
            table.LogOnInfo.ConnectionInfo.DatabaseName = someConnectionInfo.DatabaseName;
            table.LogOnInfo.ConnectionInfo.ServerName = someConnectionInfo.ServerName;
            table.ApplyLogOnInfo(table.LogOnInfo);
            table.Location = someConnectionInfo.DatabaseName + ".dbo." + table.Name;
        }
    }

    // call this method recursively for each subreport
    foreach (ReportObject reportObject in report.ReportDefinition.ReportObjects)
    {
        if (reportObject.Kind == ReportObjectKind.SubreportObject)
        {
            this.FixDatabase(report.OpenSubreport(((SubreportObject)reportObject).SubreportName));
        }
    }
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Debug XP application on Vista computer I am building an MFC application for both XP and Vista. I have Visual Studio 2008 installed on the XP machine but I need to debug the application on Vista. How can I do that? Do I really have to install Visual Studio on a Vista machine?
When using remote debugging I assume that all executable and library files must be in my Vista virtual machine. But I can't seem to copy the MFC debug DLLs to the Vista VM, and as a result I keep getting side-by-side configuration errors.
I would prefer to remote debug the application without having to copy any files, how can I do that? And if I can't, how can I install the MFC DLLs without having to install Visual Studio on the Vista machine?
Note: I have Vista installed on a virtual machine using Virtual PC. I just don't know how to run the debug version of my application there.
A: You can install VirtualPC (or other virtualization software) and install Vista as a virtual system, so you don't need two computers. For this part of the debugging, it's probably better that you explicitly do not install Visual Studio, to make sure there's not some hidden dependency in your program that Visual Studio provides. At this point you want to be testing the fully-deployed version of the app.
The biggest rule I've found so far for developing for Vista is making sure that you never write anything to the same folder where the program is installed. Write to the Application Data folder instead. This was a rule for XP, too, but it's much more strictly enforced in Vista.
A: If you have Visual Studio Pro or Team, you can give remote debugging a shot. There's just a tiny stub that gets installed on the remote computer.
If you want to run a debug build of your application, you will need to install the debug runtime files on the virtual PC as well.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Sync File Modification Time Across Multiple Directories I have a computer A with two directory trees. The first directory contains the original mod dates that span back several years. The second directory is a copy of the first with a few additional files. There is a second computer B which contains a directory tree which is the same as the second directory on computer A (new mod times and additional files). How do I update the files in the two newer directories on both machines so that the mod times on the files are the same as the original? Note that these directory trees are on the order of tens of gigabytes, so the solution would have to include some method of sending only the date information to the second computer.
A: The answer by Paul is partly correct, rsync is able to do this, however with different parameters. The correct command is
rsync -Prt --size-only original_dir copy_dir
where -P enables partial transfers and displays a progress indicator, -r recurses through subdirectories, -t preserves time stamps and --size-only doesn't transfer files that match in size.
A: The following command will make sure that TEST2 gets the same date assigned that TEST1 has
touch -t `stat -t '%Y%m%d%H%M.%S' -f '%Sa' TEST1` TEST2
Now instead of using hard-coded values here, you could find the files using "find" utility and then run touch via SSH on the remote machine. However, that means you may have to enter the password for each file, unless you switch SSH to cert authentication. I'd rather not do it all in a super fancy one-liner. Instead let's work with temp files. First go to the directory in question and run a find (you can filter by file type, size, extension, whatever pleases you, see "man find" for details. I'm just filtering by type file here to exclude any directories):
find . -type f -print -exec stat -t '%Y%m%d%H%M.%S' -f '%Sm' "{}" \; > /tmp/original_dates.txt
Now we have a file that looks like this (in my example there are only two entries there):
# cat /tmp/original_dates.txt
./test1
200809241840.55
./test2
200809241849.56
Now just copy the file over to the other machine and place it in the directory (so the relative file paths match) and apply the dates:
cat original_dates.txt | (while read FILE && read DATE; do touch -t $DATE "$FILE"; done)
Will also work with file names containing spaces.
One note: I used the last "modification" date in stat, as that's what you wrote in the question. However, if you'd rather use the "creation" date (every file has a creation date, a last modification date and a last access date), you need to alter the stat call a bit.
'%Sm' - last modification date
'%Sc' - creation date
'%Sa' - last access date
However, touch can only change the modification time and access time, I think it can't change the creation time of a file ... so if that was your real intention, my solution might be sub-optimal... but in that case your question was as well ;-)
A: I would go through all the files in the source directory tree and gather the modification times from them into a script that I could run on the other directory trees. You will need to be careful about a few 'gotchas'. First, make sure that your output script has relative paths, and make sure you run it from the proper target directory, which should be the root directory of the target tree. Also, when changing machines make sure you are using the same timezone as you were on the machine where you generated the script.
Here's a Perl script I put together that will output the touch commands needed to update the times on the other directory trees. Depending on the target machines, you may need to tweak the date formats or command options, but this should give you a place to start.
#!/usr/bin/perl
my $STARTDIR="$HOME/test";
chdir $STARTDIR;
my @files = `find . -type f`;
chomp @files;
foreach my $file (@files) {
    my $mtime = localtime((stat($file))[9]);
    print qq(touch -m -d "$mtime" "$file"\n);
}
A: The other approach you could try is to attach the remote directory using NFS and then copy the times using find and touch -r.
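A hedged sketch of that idea (assumptions: the original tree is NFS-mounted at /mnt/original, and you run it from the root of the target tree; paths are illustrative):
# For every file in the mounted original tree, copy its timestamp
# onto the file at the same relative path in the current tree.
find /mnt/original -type f -print | while read -r f; do
    rel="${f#/mnt/original/}"
    [ -e "$rel" ] && touch -r "$f" "$rel"
done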
A: I think rsync (with the right options) will do this - it claims to only send file differences, so presumably it will work out that there are no differences to be transferred.
--times preserves the modification times, which is what you want.
See (for instance) http://linux.die.net/man/1/rsync
Also add -I, --ignore-times ("don't skip files that match size and time") so that all files are "transferred", and trust rsync's file-differences optimisation to make it "fairly efficient" - see the excerpt from the man page below.
-t, --times
This tells rsync to transfer modification times along with the files and update them on the remote system. Note that if this option is not used, the optimization that excludes files that have not been modified cannot be effective; in other words, a missing -t or -a will cause the next transfer to behave as if it used -I, causing all files to be updated (though the rsync algorithm will make the update fairly efficient if the files haven't actually changed, you're much better off using -t).
A: I used the following Python scripts instead.
Python scripts run much faster than an approach creating new processes for each file (like using find and stat). The solution below also works in case of timezone differences between systems, as it uses UTC times. It also works with paths containing spaces (but not paths containing newline!). It doesn't set times for symlinks, because the operating system provides no mechanism to modify the timestamp of a symlink, but in a file manager the time of the file the symlink points at is shown instead anyway. It uses a maxTime parameter to avoid resetting dates for files that are actually modified after copying from the original directory.
listMTimes.py:
import os
from datetime import datetime
from pytz import utc

for dirpath, dirnames, filenames in os.walk('./'):
    for name in filenames+dirnames:
        path = os.path.join(dirpath, name)
        # Avoid symlinks because os.path.getmtime and os.utime get and
        # set the time of the pointed file, and in the new directory,
        # the link may have been redirected.
        if not os.path.islink(path):
            mtime = datetime.fromtimestamp(os.path.getmtime(path), utc)
            print(mtime.isoformat()+" "+path)
setMTimes.py:
import datetime, fileinput, os, sys, time
import dateutil.parser
from pytz import utc

# Based on
# http://stackoverflow.com/questions/6999726/python-getting-millis-since-epoch-from-datetime
def unix_time(dt):
    epoch = datetime.datetime.fromtimestamp(0, utc)
    delta = dt - epoch
    return delta.total_seconds()

if len(sys.argv) != 2:
    print('Syntax: '+sys.argv[0]+' <maxTime>')
    print('    where <maxTime> is an ISO time, e.g. "2013-12-02T23:00+02:00".')
    exit(1)

# A file with modification time newer than maxTime is not reset to
# its original modification time.
maxTime = unix_time(dateutil.parser.parse(sys.argv[1]))

for line in fileinput.input([]):
    (datetimeString, path) = line.rstrip('\r\n').split(' ', 1)
    mtime = dateutil.parser.parse(datetimeString)
    if os.path.exists(path) and not os.path.islink(path):
        if os.path.getmtime(path) <= maxTime:
            os.utime(path, (time.time(), unix_time(mtime)))
Usage: in the first directory (the original) run
python listMTimes.py >/tmp/original_dates.txt
Then in the second directory (a copy of the original, possibly with some files modified/added/deleted) run something like this:
python setMTimes.py 2013-12-02T23:00+02:00 </tmp/original_dates.txt
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: View grants in MySQL How do I view the grants (access rights) for a given user in MySQL?
A: You could try this:
SELECT GRANTEE, PRIVILEGE_TYPE FROM information_schema.user_privileges;
SELECT User,Host,Db FROM mysql.db;
A: You might want to check out mk-show-grants from Maatkit, which will output the current set of grants for all users in a canonical form, making version control or replication more straightforward.
A: An alternative method for recent versions of MySQL is:
select * from information_schema.user_privileges where grantee like "'user'%";
The possible advantage with this format is the increased flexibility to check "user's" grants from any host (assuming consistent user names) or to check for specific privileges with additional conditions (eg, privilege_type = 'delete').
This version is probably better suited to use within a script while the "show grants" syntax is better for interactive sessions (more "human readable").
A: mysql> show grants for 'user'@'host'
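The output looks something like this (illustrative only; actual grants depend on the account):
GRANT USAGE ON *.* TO 'user'@'host'
GRANT SELECT, INSERT, UPDATE ON `mydb`.* TO 'user'@'host'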
A: If you're already running a web server with PHP then phpMyAdmin is a fairly friendly administrative tool.
A: You may need the SHOW GRANTS statement:
SHOW GRANTS [FOR user]
This statement displays the privileges that are assigned to a MySQL user account, in the form of GRANT statements that must be executed to duplicate the privilege assignments.
To display the privileges granted to the current user (the account you are using to connect to the server), you can use any of the following statements:
SHOW GRANTS;
SHOW GRANTS FOR CURRENT_USER;
SHOW GRANTS FOR CURRENT_USER();
Note:
SHOW GRANTS requires the SELECT privilege for the mysql system database, except to display privileges for the current user.
This is taken directly from the official documentation.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36"
}
|
Q: Disabling client cache from Jetty server for REST requests I have a REST Java server implemented with Jersey running on Jetty. It seems that certain browsers (IE7) internally caches all requests made to the server.
What I would like to do is to send a certain HTTP header in the response from the REST server indicating the browser that it shouldn't cache that response, and so will query the server again the next time it needs access to that resource.
Any ideas on how to configure Jersey/Jetty for this?
Or the only way to configure it is client-side?
A:
response.setHeader("Pragma", "no-cache");
No, No. No!
The use of the Pragma header to disable client-side caching is wrong: it's a request header, and has zero effect on the response.
http://www.mnot.net/cache_docs/#PRAGMA
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.32
Also, setting Expires: 0 isn't correct; Expires should be a date, not a number of seconds. However, this will work, since an invalid HTTP date is interpreted as "already expired".
http://www.mnot.net/cache_docs/#EXPIRES
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.21
A: There's nothing you can do about rogue clients, but Jetty can send the appropriate HTTP headers. Try here for info on configuring the Last-Modified and Cache-Control headers.
A: On the server side you can try this if you have access to the response (you might be able to do it through filters).
response.setHeader("Pragma", "no-cache");
response.setHeader("Cache-Control", "no-cache");
response.setHeader("Expires", "0");
Another trick you can try on the client side is to add a superfluous argument to the URL, like "http://www.company.com/services/staff?id=xxx&requestTime="+(new Date()).getTime(); This way the URL being requested is different every time and it can't be cached.
A: @Dave Cheney: well, what I understand from http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9 is that Cache-Control makes sense for the request as well as for the response. And when the response is a cache-controlled response, it's a specification for what the client (browser) should do with the resource (see next section, 14.9.1).
@all: Also, in section 14.21 of the same document it's specified that the Expires header set to 0 means 'invalid date' and can be ignored by clients. And my tests with sending an expires date of 1 Jan 1970 (timestamp 0) cause nothing but ignores from IE (and FF, for that matter), which will still cache the response.
My solution was to send the current date for the Expires field, which is what the spec says.
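A minimal sketch of that in servlet terms (an assumption: you have access to the HttpServletResponse, e.g. from a filter):
// Set Expires to "now" so the response is immediately stale,
// and keep Cache-Control for HTTP/1.1 clients.
response.setDateHeader("Expires", System.currentTimeMillis());
response.setHeader("Cache-Control", "no-cache, no-store, must-revalidate");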
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Favorite .NET Unit Testing framework I've been using NUnit for a few years. I've tried MBUnit for a short while as well as Zenebug and XUnit but I keep coming back to NUnit.
What is your favorite/most used Unit test Framework? Can you explain why you're using it?
A: MbUnit
I like the way it handles reports, and I'm looking forward to some of the upcoming features I've heard about, such as integration with JsUnit.
A: I've used nUnit for years, but when we moved to VS2008 and TFS 2008 (using TeamBuild) we decided to try MSTest. No huge complaints there... we really like how well it integrates with the IDE as well as the CI build server.
One new thing we're trying which looks to have awesome potential is another add-on from Microsoft Research called Pex (requires VS2008). As they put it: "Pex generates unit tests from hand-written parameterized unit tests through automated exploratory testing based on dynamic symbolic execution." The way I put it is: this thing does static analysis on your unit test and target code and codegen's unit tests to achieve super high code coverage (which is often impractical if you're doing it by hand).
A: Gallio looks like it's going to be awesome once it gets more stable (currently alpha).
It's not just a test framework, but a test automation platform, so it will work with many existing test frameworks (MbUnit, NUnit, xUnit.net) yet be fully extensible, with a number of built-in additional features such as report generation in many formats and code analysis tools.
I've also heard that it will be able to
*
*output image streams, so for example WatiN test failures can be output as screenshots, so you can see what state the browser was in when the test failed.
*filter by namespace, so you can easily uncheck tests for an entire namespace before running them
Edit: It is indeed out of alpha now. We've tried it at our company and we really hated it. It was horrible to use and very slow. What a shame.
A: I've used NUnit for quite some time, but I happen to prefer things baked into VS. So, I'm now using MSUnit. Just a preference for having fewer add-ins installed in VS.
A: NUnit. We can use it on CC.
A: Nunit for the win!! It is simple and easy to implement. No mess, no fuss.
A: I like xUnit because of the way it uses the constructor and Dispose methods instead of having to apply attributes to other methods for initialization and all that.
A: MbUnit has compatible syntax with NUnit but has more features (especially data driven tests).
A: The support for NUnit tests in Resharper is great and sets the bar very high for me moving away from NUnit. I can run all the tests in a solution directly from Visual Studio, or I can drill down and concentrate on specific tests. When my code is checked in, my continuous integration build runs the same tests. This gives me a lot of confidence in my development process.
A: MSTest
http://en.wikipedia.org/wiki/MSTest
I don't know if it's my favorite (haven't really tried many others), but it's convenient since it's built into Visual Studio.
A: xUnit.net, but I'm hardly unbiased. :)
Why I use it: http://www.codeplex.com/xunit/Wiki/View.aspx?title=WhyDidWeBuildXunit
A: I used to use NUnit, but now I prefer the framework that comes with Visual Studio 2008, simply because it has tighter integration and is easier to set up to test private methods.
We also had problems with keeping the versions of NUnit synchronized with the rest of the team. It was a minor annoyance (go and upgrade, or fix the project references), but it went away with the switch.
A: *
*xUnit - less ceremony, support for data-driven testing and other extensions
A: I've been using NUnit for some 4 years now, and would definitely recommend using it. Resharper - a plugin for Visual Studio by JetBrains - includes a unit test runner which integrates nicely with Visual Studio and lets you run / debug your tests directly from the IDE. Resharper, NUnit and RhinoMocks is my preferred suite of tools for unit testing.
A: I have used both NUnit as well as MS Test. I like the integration of MS Test with the IDE, and the additional benefit of code coverage as well. But due to performance reasons, as well as things like fluent assertions, I prefer NUnit over MS Test.
You can write framework agnostic asserts using a library called Should. It also has a very nice fluent syntax which can be used if you like fluent interfaces. I had a blog post related to the same.
http://nileshgule.blogspot.com/2010/11/use-should-assertion-library-to-write.html
If we use something like Should for assertions, then both versions of the test look almost the same and I don't see much difference between the two frameworks.
I did a comparison of the NUnit and MSTest unit testing frameworks in one of my blog posts.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: SVN performance after many revisions My project is currently using a svn repository which gains several hundred new revisions per day.
The repository resides on a Win2k3-server and is served through Apache/mod_dav_svn.
I now fear that over time the performance will degrade due to too many revisions.
Is this fear reasonable?
We are already planning to upgrade to 1.5, so having thousands of files in one directory will not be a problem in the long term.
Subversion only stores the delta (differences) between 2 revisions, so this helps save a LOT of space, especially if you only commit code (text) and no binaries (images and docs).
Does that mean that in order to check out the revision 10 of the file foo.baz, svn will take revision 1 and then apply the deltas 2-10?
A: What type of repo do you have? FSFS or BDB?
(Let's assume FSFS for now, since that's the default.)
In the case of FSFS, each revision is stored as a diff against the previous. So, you would think that yes, after many revisions, it would be very slow.
However, this isn't the case. FSFS uses what are called "skip deltas" to avoid having to do too many lookups on previous revs.
(So, if you are using an FSFS repo, Brad Wilson's answer is wrong.)
In the case of a BDB repo, the HEAD (latest) revision is full-text, but the earlier revisions are built as a series of diffs against the head. This means the previous revs have to be re-calculated after each commit.
For more info: http://svn.apache.org/repos/asf/subversion/trunk/notes/skip-deltas
P.S. Our repo is about 20GB, with about 35,000 revisions, and we have not noticed any performance degradation.
A: I personally haven't dealt with Subversion repositories with codebases bigger than 80K LOC for the actual project. The biggest repository I've actually had was about 1.2 gigs, but this included all of the libraries and utilities that the project uses.
I don't think the day to day usage will be affected that much, but anything that needs to look through the different revisions might slow down a tad. It may not even be noticeable.
Now, from a sys admin point of view, there are a few things that can help you minimize performance bottlenecks. Since Subversion is mostly a file-based system, you can do this:
*
*Put the actual repositories in a different drive
*Make sure that no file locking apps, other than svn, are working on the drive above
*Make the drives at least 7,500 RPM. You could try getting 10,000 RPM, but it may be overkill
*Update the LAN to gigabit, if everybody is in the same office.
This may be overkill for your situation, but that's what I've usually done for other file-intensive applications.
If you ever "outgrow" Subversion, then Perforce will be your next step up. It's hands down the fastest source control app for very large projects.
A: We're running a subversion server with gigabytes worth of code and binaries, and it's up to over twenty thousand revisions. No slowdowns yet.
A: Subversion only stores the delta (differences) between 2 revisions, so this helps save a LOT of space, especially if you only commit code (text) and no binaries (images and docs).
Additionally, I've seen a lot of very big projects using svn and never complained about performance.
Maybe you are worried about checkout times? Then I guess this would really be a networking problem.
Oh, and I've worked on CVS repositories with 2GB+ of stuff (code, imgs, docs) and never had a performance problem. Since svn is a great improvement on cvs, I don't think you should worry about it.
Hope it helps ease your mind a little ;)
A: I do not think that our subversion has slowed down with age. We currently have several TeraBytes of data, mostly binary. We check out/commit up to 50 GigaBytes of data daily. In total we currently have 50000 revisions. We are using FSFS as the storage type and are interfacing either directly via svn: (Windows server) or via Apache mod_dav_svn (Gentoo Linux server).
I cannot confirm that svn slows down over time, as we also set up a clean server for performance comparison, and we could NOT measure a significant degradation.
However, I have to say that our subversion is uncommonly slow by default, and obviously it is subversion itself, as we tried it on another computer system.
For some unknown reason subversion seems to be completely server-CPU limited. Our checkout/commit rates are limited to between 15 and 30 MegaBytes/s per client, because at that point one server CPU core is completely used up. This is the same for an almost empty repository (1 GigaByte, 5 revisions) as for our full server (~5 TeraByte, 50000 revisions). Tuning, like setting compression to 0 = off, did not improve this.
Our high-bandwidth FC array (delivers ~1 GigaByte/s) idles, the other cores idle, and the network (currently 1 GigaBit/s for the clients, 10 GigaBits/s for the server) idles as well. Okay, not really idling, but if only 2-3% of the available capacity is used, I call it idling.
It is no real fun to see all components idling while we need to wait for our working copies to get checked out or committed. Basically I have no idea what the server process is doing by fully consuming one CPU core all the time during checkout/commit.
However I am just trying to find a way to tune subversion. If this is not possible we might need to switch to another system.
Therefore: Answer: No, SVN does not degrade in performance; it is just slow to begin with.
Of course if you do not need (high) performance you won't have a problem.
Btw. all of the above applies to subversion 1.7, the latest stable version.
A: The only operations which are likely to slow down are things which read information from multiple revisions (e.g. SVN Blame).
A: Subversion stores the most current version as full text, with backward-looking diffs. This means that updates to head are always fast, and what you incrementally pay for is looking farther and farther back in history.
A: Maybe you should consider improving your workflow.
I don't know if a repository will have performance issues in these conditions, but your ability to go back to a sane revision will suffer.
In your case, you may want to include a validation process, so a team commits to a team leader's repo, each team leader commits to the team manager's repo, and the team manager commits to the read-only, clean company repo. You make a clean selection at each stage of which commits must go to the top.
This way, anybody can go back to a clean copy with an easy-to-browse history. Merges are much easier, and devs can still commit their mess as much as they want.
A: I am not sure..... I am using SVN with Apache on CentOS 5.2. Works OK. The revision number was 8230, something like that... And on all client machines commits were so slow that we had to wait at least 2 minutes for a file of 1 KB. I am talking about one file with no big file size.
Then I made a new repository. Started from rev. 1. Now it works OK. Fast.
I used svnadmin create xxxxxx.
I did not check if it is FSFS or BDB.....
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50"
}
|
Q: Firefox XPCOM component - Permission denied to call method UnnamedClass Can a firefox XPCOM component read and write page content across multiple pages?
Scenario:
A bunch of local HTML and javascript files. A "Main.html" file opens a window "pluginWindow", and creates a plugin using:
netscape.security.PrivilegeManager.enablePrivilege('UniversalXPConnect');
var obj = Components.classes[cid].createInstance();
plugin = obj.QueryInterface(Components.interfaces.IPlugin);
plugin.addObserver(handleEvent);
The plugin has 3 methods.
IPlugin.Read - Read data from plugin
IPlugin.Write - Write data to the plugin
IPlugin.addObserver - Add a callback handler for reading.
The "Main.html" then calls into the pluginWindow and tries to call the plugin method Write.
I receive an error:
Permission denied to call method UnnamedClass.Write
A: Do Main.html and that other window run with chrome privileges?
If you access Main.html "normally", just putting it on the location bar of Firefox, then it will have restrictions on what it can do (otherwise, an arbitrary web page could do exactly the same).
If you are creating a firefox plugin, place your code in a XUL overlay.
If you really want to allow any web page to do whatever it is your plugin does, you can establish some mechanism through which the page can ask the plugin to do the operation with its chrome privileges and send the result to the page afterwards.
If you are NOT making a firefox extension...then I am afraid I misunderstood something, could you explain it more?
A: First, is your C++ code really a plugin or an XPCOM component, possibly installed as part of an extension? Sounds like it's the latter.
If so, it's not usable from untrusted JS code - any web page or a local HTML file. It's fully usable from privileged code, the most common type of which is the extension code.
You're working around this problem when creating the component by using the enablePrivilege('UniversalXPConnect') call. This is not really recommended, unless this will not be distributed to users (since this call pops up a confusing box, and if you set a preference to always allow file:// scripts to use XPCOM, it may be a security problem, since not all local pages are trusted - think saved web pages).
Your Write call fails for the same reason - file:// pages are not trusted to use XPCOM components. You probably can get it to work if you add another enablePrivilege call in the same function as the Write call itself.
Depending on the situation, there may be a better solution.
If your files must be treated as trusted, you may want to package them as an extension and access them via a chrome:// URL. This gives the code in those pages permissions to call any XPCOM component, including yours.
If the component's methods are safe to use from any page or if the environment is controlled and no untrusted pages are loaded in the browser, you could make your component accessible to content (search for nsSidebar in mozilla code for an example and also for nsISecurityCheckedComponent).
Oh, and when you don't get good answers here, you should definitely try the mozilla newsgroups/mailing lists.
[edit in reply to a comment] Consider putting the code that needs to call the component in a chrome:// script. Alternatively, you should be able to "bless" your pages with the chrome privileges using code like this (note that it does the opposite of what you need - stripping away the chrome privileges).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: What can you do with SharePoint on Intranet? We have had SharePoint where I work for a little while now, but we've not done a lot with it. We have an intranet with hundreds of ASP/ASP.Net applications and I'm wondering what kind of things can be done to integrate with SharePoint to make a more seamless environment? We put documentation and production move requests and so on in SharePoint now, but it pretty much feels like it's own separate system rather than an integrated tool on our intranet.
I've searched around to see what other people are doing with SharePoint but I've been finding a lot of useless information.
A: A typical intranet portal segments functionality by department. Each department will probably have some custom web-based apps that you might have historically implemented in ASP.Net, and linked to from the intranet portal. With sharepoint you can start bringing the useful bits of those custom web-apps in as modular parts, so that the business owner of the portal can have more control as to how information is structured and displayed to his/her users.
Think dashboards, populated with custom metrics that only make sense to individual departments. That's one of the most obvious places to start. HR, accounting, IT, they all have metrics they want to track and display. They all have legacy systems that they might want to correlate information from. All this can be done in reusable web-parts. Since Sharepoint gives the end-user the control over layout, display, audience control, etc, you don't end up reinventing wheels all day.
A: SharePoint was designed to be a collaboration portal and document repository. If you have other business processes wrapped up in other internal web sites, you may not get much benefit from converting these sites into SharePoint sub-sites.
However, if there is significant overlap in your applications (contact lists, inventory, specs, etc.) you may want to make the investment to combine them.
A: A great idea for you would be to move your most-used ASP.NET apps to run within the SharePoint site. Each app can be added either as a control directly on a page layout or integrated into a web part (use the web part to load child controls).
This would allow you to use the flexible MOSS interface to move the ASP.NET app into a unified information architecture so people can find the app easily.
SharePoint makes it really easy to roll out something that works, but creating a seamless intranet does require a bit of thinking outside of SharePoint itself (i.e. what should go where, which users need to see what, navigation structure...).
That is really a lot of work and requires lots of input from people outside the IT area.
A: If you have InfoPath, you can create online forms. You can share your docs and edit them online. You can start an approval workflow on these docs. You can create polls. You can create work groups.
Basically SharePoint is a giant and robust document store, but you can do anything you can do in any ASP.NET web application. You can create e.g. custom workflows to automate business processes. We've worked for several customers to create corporate intranets and sometimes internet sites, so it really works. :)
But sometimes it's very hard to implement the requested features (a lot of workarounds).
A: Really, it's an intranet in a box. We pretty much run all of our day-to-day development tasks off of it. We keep documentation, track defects, manage people's time off, etc. You can migrate your ASP.NET and ASP applications to run under the SharePoint site. In the administration section you can set up web applications to run under the same site, but outside of SharePoint's control. That would probably help with the "feel" of it being completely separate.
SharePoint is really a shift in the way people have to think about web development, and that's the key. You're no longer developing a standalone application; you're adding on to an existing framework. I would put it akin to having "silos of data" vs. a centralized database system which houses all the company's data. Once people realize that everything is connected, it will feel more like a seamless integration. My advice is to actively try to create applications in SharePoint and think about how to migrate existing apps onto it.
A: How about BI and reporting from an ERP?
When we know IE is incapable of handling a page with 10000 table rows (without pagination)...
Many don't realize it, but the success of a reporting tool depends on the performance of the grid object used - Excel and the SpreadSheet object from the defunct Office Web Components are still the #1 choice of users (accountants, managers, CEOs).
A: I think it depends on your environment. In our environment, we setup each department with their own pages and we use it for basic information, surveys, and the employee's homepage. We've built Google/Live Search and Weather.com widgets and roll RSS feeds using Tim Huer's RSS control.
A: One thing you can do is to create web parts to provide access to data from existing applications. Initially they could simply be read-only views, but depending on your experience they could be fleshed out to allow writes.
Another idea is to add links between SharePoint and your applications (assuming they're web based); that will at least allow a flow between them.
I haven't done it, but you could also theoretically skin SharePoint to look like the rest of your intranet.
A: Create libraries
Form libraries, document libraries, slide libraries
Create standard or custom lists
Standard lists - announcements, tasks, contacts
Custom lists - suppliers, contractors, inventories, orders
Set up secure team discussion areas
Build shared team calendars
Create simple workflow processes on documents and lists
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Algorithm to return all combinations of k elements from n I want to write a function that takes an array of letters as an argument and a number of those letters to select.
Say you provide an array of 8 letters and want to select 3 letters from that. Then you should get:
8! / ((8 - 3)! * 3!) = 56
Arrays (or words) in return consisting of 3 letters each.
A:
Short java solution:
import java.util.Arrays;

public class Combination {
    public static void main(String[] args){
        String[] arr = {"A","B","C","D","E","F"};
        combinations2(arr, 3, 0, new String[3]);
    }

    static void combinations2(String[] arr, int len, int startPosition, String[] result){
        if (len == 0){
            System.out.println(Arrays.toString(result));
            return;
        }
        for (int i = startPosition; i <= arr.length - len; i++){
            result[result.length - len] = arr[i];
            combinations2(arr, len - 1, i + 1, result);
        }
    }
}
Result will be
[A, B, C]
[A, B, D]
[A, B, E]
[A, B, F]
[A, C, D]
[A, C, E]
[A, C, F]
[A, D, E]
[A, D, F]
[A, E, F]
[B, C, D]
[B, C, E]
[B, C, F]
[B, D, E]
[B, D, F]
[B, E, F]
[C, D, E]
[C, D, F]
[C, E, F]
[D, E, F]
A: If you can use SQL syntax - say, if you're using LINQ to access fields of a structure or array, or directly accessing a database that has a table called "Alphabet" with just one char field "Letter" - you can adapt the following code:
SELECT A.Letter, B.Letter, C.Letter
FROM Alphabet AS A, Alphabet AS B, Alphabet AS C
WHERE A.Letter<>B.Letter AND A.Letter<>C.Letter AND B.Letter<>C.Letter
AND A.Letter<B.Letter AND B.Letter<C.Letter
This will return all combinations of 3 letters, notwithstanding how many letters you have in table "Alphabet" (it can be 3, 8, 10, 27, etc.).
If what you want is all permutations, rather than combinations (i.e. you want "ACB" and "ABC" to count as different, rather than appear just once) just delete the last line (the AND one) and it's done.
Post-Edit: After re-reading the question, I realise what's needed is the general algorithm, not just a specific one for the case of selecting 3 items. Adam Hughes' answer is the complete one; unfortunately I cannot vote it up (yet). This answer is simple but works only when you want exactly 3 items.
A:
Here is an elegant, generic implementation in Scala, as described on 99 Scala Problems.
object P26 {
  def flatMapSublists[A,B](ls: List[A])(f: (List[A]) => List[B]): List[B] =
    ls match {
      case Nil => Nil
      case sublist@(_ :: tail) => f(sublist) ::: flatMapSublists(tail)(f)
    }

  def combinations[A](n: Int, ls: List[A]): List[List[A]] =
    if (n == 0) List(Nil)
    else flatMapSublists(ls) { sl =>
      combinations(n - 1, sl.tail) map {sl.head :: _}
    }
}
A: May I present my recursive Python solution to this problem?
def choose_iter(elements, length):
    for i in xrange(len(elements)):
        if length == 1:
            yield (elements[i],)
        else:
            for next in choose_iter(elements[i+1:], length-1):
                yield (elements[i],) + next

def choose(l, k):
    return list(choose_iter(l, k))
Example usage:
>>> len(list(choose_iter("abcdefgh",3)))
56
I like it for its simplicity.
A:
I had a permutation algorithm I used for Project Euler, in Python:
def missing(miss, src):
    "Returns the list of items in src not present in miss"
    return [i for i in src if i not in miss]

def permutation_gen(n, l):
    "Generates all the permutations of n items of the l list"
    for i in l:
        if n <= 1: yield [i]
        r = [i]
        for j in permutation_gen(n-1, missing([i], l)): yield r+j
If n < len(l), you should have all the combinations you need, without repetition - do you need that?
It is a generator, so you use it in something like this:
for comb in permutation_gen(3,list("ABCDEFGH")):
print comb
A: Let's say your array of letters looks like this: "ABCDEFGH". You have three indices (i, j, k) indicating which letters you are going to use for the current word. You start with:
A B C D E F G H
^ ^ ^
i j k
First you vary k, so the next step looks like that:
A B C D E F G H
^ ^   ^
i j   k
If you reached the end you go on and vary j and then k again.
A B C D E F G H
^   ^ ^
i   j k
A B C D E F G H
^   ^   ^
i   j   k
Once j has reached G you start also to vary i.
A B C D E F G H
  ^ ^ ^
  i j k
A B C D E F G H
  ^ ^   ^
  i j   k
...
Written in code this looks something like this:
#include <stdio.h>
#include <string.h>

void print_combinations(const char *string)
{
    int i, j, k;
    int len = strlen(string);

    for (i = 0; i < len - 2; i++)
    {
        for (j = i + 1; j < len - 1; j++)
        {
            for (k = j + 1; k < len; k++)
                printf("%c%c%c\n", string[i], string[j], string[k]);
        }
    }
}
A: https://gist.github.com/3118596
There is an implementation for JavaScript. It has functions to get k-combinations and all combinations of an array of any objects. Examples:
k_combinations([1,2,3], 2)
-> [[1,2], [1,3], [2,3]]
combinations([1,2,3])
-> [[1],[2],[3],[1,2],[1,3],[2,3],[1,2,3]]
A: The following recursive algorithm picks all of the k-element combinations from an ordered set:
*
*choose the first element i of your combination
*combine i with each of the combinations of k-1 elements chosen recursively from the set of elements larger than i.
Iterate the above for each i in the set.
It is essential that you pick the rest of the elements as larger than i, to avoid repetition. This way [3,5] will be picked only once, as [3] combined with [5], instead of twice (the condition eliminates [5] + [3]). Without this condition you get variations instead of combinations.
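To make the recursion concrete, here is a minimal sketch of this idea (Python; assumes the input list is ordered):
def combinations(k, elems):
    # Each element is combined only with combinations drawn from
    # the elements after it, which avoids repetition.
    if k == 0:
        return [[]]
    return [[elems[i]] + rest
            for i in range(len(elems) - k + 1)
            for rest in combinations(k - 1, elems[i + 1:])]
For example, combinations(2, [1, 2, 3]) returns [[1, 2], [1, 3], [2, 3]].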
A: Here you have a lazy evaluated version of that algorithm coded in C#:
static bool nextCombination(int[] num, int n, int k)
{
    bool finished, changed;
    changed = finished = false;

    if (k > 0)
    {
        for (int i = k - 1; !finished && !changed; i--)
        {
            if (num[i] < (n - 1) - (k - 1) + i)
            {
                num[i]++;
                if (i < k - 1)
                {
                    for (int j = i + 1; j < k; j++)
                    {
                        num[j] = num[j - 1] + 1;
                    }
                }
                changed = true;
            }
            finished = (i == 0);
        }
    }
    return changed;
}
static IEnumerable Combinations<T>(IEnumerable<T> elements, int k)
{
    T[] elem = elements.ToArray();
    int size = elem.Length;

    if (k <= size)
    {
        int[] numbers = new int[k];
        for (int i = 0; i < k; i++)
        {
            numbers[i] = i;
        }

        do
        {
            yield return numbers.Select(n => elem[n]);
        }
        while (nextCombination(numbers, size, k));
    }
}
And test part:
static void Main(string[] args)
{
    int k = 3;
    var t = new[] { "dog", "cat", "mouse", "zebra" };

    foreach (IEnumerable<string> i in Combinations(t, k))
    {
        Console.WriteLine(string.Join(",", i));
    }
}
Hope this helps you!
Another version, which forces all combinations of the first k elements to appear first, then all combinations within the first k+1 elements, then all within the first k+2, etc. It means that if you have a sorted array, with the most important items on top, it would take them and expand gradually to the next ones - only when it must do so.
private static bool NextCombinationFirstsAlwaysFirst(int[] num, int n, int k)
{
    if (k > 1 && NextCombinationFirstsAlwaysFirst(num, num[k - 1], k - 1))
        return true;

    if (num[k - 1] + 1 == n)
        return false;

    ++num[k - 1];
    for (int i = 0; i < k - 1; ++i)
        num[i] = i;

    return true;
}
For instance, if you run the first method ("nextCombination") on k=3, n=5 you'll get:
0 1 2
0 1 3
0 1 4
0 2 3
0 2 4
0 3 4
1 2 3
1 2 4
1 3 4
2 3 4
But if you'll run
int[] nums = new int[k];
for (int i = 0; i < k; ++i)
    nums[i] = i;

do
{
    Console.WriteLine(string.Join(" ", nums));
}
while (NextCombinationFirstsAlwaysFirst(nums, n, k));
You'll get this (I added empty lines for clarity):
0 1 2
0 1 3
0 2 3
1 2 3
0 1 4
0 2 4
1 2 4
0 3 4
1 3 4
2 3 4
It's adding "4" only when must to, and also after "4" was added it adds "3" again only when it must to (after doing 01, 02, 12).
A:
Let's say your array of letters looks like this: "ABCDEFGH". You have three indices (i, j, k) indicating which letters you are going to use for the current word. You start with:
A B C D E F G H
^ ^ ^
i j k
First you vary k, so the next step looks like that:
A B C D E F G H
^ ^   ^
i j   k
If you reached the end you go on and vary j and then k again.
A B C D E F G H
^   ^ ^
i   j k
A B C D E F G H
^   ^   ^
i   j   k
Once j has reached G you start also to vary i.
A B C D E F G H
  ^ ^ ^
  i j k
A B C D E F G H
  ^ ^   ^
  i j   k
...
function initializePointers($cnt) {
    $pointers = [];
    for ($i = 0; $i < $cnt; $i++) {
        $pointers[] = $i;
    }
    return $pointers;
}

// Note: $arrLength is taken by value; passing count($arr) to a
// by-reference parameter is invalid in PHP.
function incrementPointers(&$pointers, $arrLength) {
    for ($i = 0; $i < count($pointers); $i++) {
        $currentPointerIndex = count($pointers) - $i - 1;
        $currentPointer = $pointers[$currentPointerIndex];
        if ($currentPointer < $arrLength - $i - 1) {
            ++$pointers[$currentPointerIndex];
            for ($j = 1; ($currentPointerIndex + $j) < count($pointers); $j++) {
                $pointers[$currentPointerIndex + $j] = $pointers[$currentPointerIndex] + $j;
            }
            return true;
        }
    }
    return false;
}

function getDataByPointers(&$arr, &$pointers) {
    $data = [];
    for ($i = 0; $i < count($pointers); $i++) {
        $data[] = $arr[$pointers[$i]];
    }
    return $data;
}

function getCombinations($arr, $cnt)
{
    $len = count($arr);
    $result = [];
    $pointers = initializePointers($cnt);
    do {
        $result[] = getDataByPointers($arr, $pointers);
    } while (incrementPointers($pointers, count($arr)));
    return $result;
}

$result = getCombinations([0, 1, 2, 3, 4, 5], 3);
print_r($result);
Based on https://stackoverflow.com/a/127898/2628125, but more abstract, for any size of pointers.
A: Array.prototype.combs = function(num) {
var str = this,
length = str.length,
of = Math.pow(2, length) - 1,
out, combinations = [];
while(of) {
out = [];
for(var i = 0, y; i < length; i++) {
y = (1 << i);
if(y & of && (y !== of))
out.push(str[i]);
}
if (out.length >= num) {
combinations.push(out);
}
of--;
}
return combinations;
}
A: Clojure version:
(defn comb [k l]
  (if (= 1 k) (map vector l)
      (apply concat
             (map-indexed
              #(map (fn [x] (conj x %2))
                    (comb (dec k) (drop (inc %1) l)))
              l))))
A: short python code, yielding index positions
def yield_combos(n, k):
    # n is set size, k is combo size
    i = 0
    a = [0]*k
    while i > -1:
        for j in range(i+1, k):
            a[j] = a[j-1]+1
        i = j
        yield a
        while a[i] == i + n - k:
            i -= 1
        a[i] += 1
A: Algorithm:
*
*Count from 1 to 2^n.
*Convert each digit to its binary representation.
*Translate each 'on' bit to elements of your set, based on position.
In C#:
void Main()
{
    var set = new [] { "A", "B", "C", "D" }; //, "E", "F", "G", "H", "I", "J" };
    var kElement = 2;

    for (var i = 1; i < Math.Pow(2, set.Length); i++) {
        var result = Convert.ToString(i, 2).PadLeft(set.Length, '0');
        var cnt = Regex.Matches(Regex.Escape(result), "1").Count;
        if (cnt == kElement) {
            for (int j = 0; j < set.Length; j++)
                if (Char.GetNumericValue(result[j]) == 1)
                    Console.Write(set[j]);
            Console.WriteLine();
        }
    }
}
Why does it work?
There is a bijection between the subsets of an n-element set and n-bit sequences.
That means we can figure out how many subsets there are by counting sequences.
e.g., the four element set below can be represented by {0,1} X {0, 1} X {0, 1} X {0, 1} (or 2^4) different sequences.
So - all we have to do is count from 1 to 2^n to find all the combinations. (We ignore the empty set.) Next, translate the digits to their binary representation. Then substitute elements of your set for 'on' bits.
If you want only k element results, only print when k bits are 'on'.
(If you want all subsets instead of k length subsets, remove the cnt/kElement part.)
(For proof, see MIT free courseware Mathematics for Computer Science, Lehman et al, section 11.2.2. https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-042j-mathematics-for-computer-science-fall-2010/readings/ )
A: Art of Computer Programming Volume 4: Fascicle 3 has a ton of these that might fit your particular situation better than how I describe.
Gray Codes
An issue that you will come across is of course memory: pretty quickly, by 20 elements in your set (20C3 = 1140), you'll have problems holding everything. And if you want to iterate over the set it's best to use a modified gray code algorithm so you aren't holding all of them in memory. These generate the next combination from the previous and avoid repetitions. There are many of these for different uses. Do we want to maximize the differences between successive combinations? Minimize? Et cetera. (A small sketch of the minimal-change idea follows the references below.)
Some of the original papers describing gray codes:
*
*Some Hamilton Paths and a Minimal Change Algorithm
*Adjacent Interchange Combination Generation Algorithm
Here are some other papers covering the topic:
*
*An Efficient Implementation of the Eades, Hickey, Read Adjacent Interchange Combination Generation Algorithm (PDF, with code in Pascal)
*Combination Generators
*Survey of Combinatorial Gray Codes (PostScript)
*An Algorithm for Gray Codes
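To make the minimal-change idea concrete, here is a small Python sketch (my own illustration, not code from any of the papers above) of the classic revolving-door recursion, in which successive combinations differ by exactly one exchanged element:
def revolving_door(n, k):
    # all k-combinations of range(n), ordered so that each combination
    # differs from the previous one by removing one element and adding another
    if k == 0:
        return [[]]
    if k == n:
        return [list(range(n))]
    with_last = [c + [n - 1] for c in reversed(revolving_door(n - 1, k - 1))]
    return revolving_door(n - 1, k) + with_last
# revolving_door(4, 2) -> [[0, 1], [1, 2], [0, 2], [2, 3], [1, 3], [0, 3]]
Note that this sketch still materializes the whole list; the point of the papers above is how to step from one combination to its successor in place when memory is the constraint.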
Chase's Twiddle (algorithm)
Phillip J Chase, `Algorithm 382: Combinations of M out of N Objects' (1970)
The algorithm in C...
Index of Combinations in Lexicographical Order (Buckles Algorithm 515)
You can also reference a combination by its index (in lexicographical order). Realizing that the index should be some amount of change from right to left based on the index we can construct something that should recover a combination.
So, we have a set {1,2,3,4,5,6}... and we want three elements. Let's say {1,2,3} we can say that the difference between the elements is one and in order and minimal. {1,2,4} has one change and is lexicographically number 2. So the number of 'changes' in the last place accounts for one change in the lexicographical ordering. The second place, with one change {1,3,4} has one change but accounts for more change since it's in the second place (proportional to the number of elements in the original set).
The method I've described is a deconstruction, as it seems, from set to the index; we need to do the reverse, which is much trickier. This is how Buckles solves the problem. I wrote some C to compute them, with minor changes: I used the index of the sets rather than a number range to represent the set, so we are always working from 0...n. (A small sketch of the set-to-index direction follows the notes below.)
Note:
*
*Since combinations are unordered, {1,3,2} = {1,2,3} --we order them to be lexicographical.
*This method has an implicit 0 to start the set for the first difference.
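As an illustration of the set-to-index deconstruction sketched above (a Python sketch of my own, not Buckles' actual Algorithm 515), the lexicographic rank of a combination can be computed by counting the combinations that precede it:
from math import comb

def lex_rank(c, n):
    # 0-based lexicographic rank of the sorted index combination c
    # drawn from range(n): for each position, count the combinations
    # that start with a smaller index at that position
    k = len(c)
    rank = 0
    prev = -1
    for i, ci in enumerate(c):
        for j in range(prev + 1, ci):
            rank += comb(n - 1 - j, k - 1 - i)
        prev = ci
    return rank
# lex_rank([0, 1, 3], 5) -> 1, and lex_rank([2, 3, 4], 5) -> 9, the last of C(5,3) = 10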
Index of Combinations in Lexicographical Order (McCaffrey)
There is another way; its concept is easier to grasp and program, but it's without the optimizations of Buckles. Fortunately, it also does not produce duplicate combinations:
The m-th (0-based) combination is the decreasing sequence c_k > c_(k-1) > ... > c_1 >= 0 that greedily maximizes each c_i, where m = C(c_k, k) + C(c_(k-1), k-1) + ... + C(c_1, 1).
For example: 27 = C(6,4) + C(5,3) + C(2,2) + C(1,1). So, the 27th lexicographical combination of four things is {1,2,5,6}; those are the indexes of whatever set you want to look at. The example below (OCaml) requires a choose function, left to the reader:
(* this will find the [x] combination of a [set] list when taking [k] elements *)
let combination_maccaffery set k x =
(* maximize function -- maximize a that is aCb *)
(* return largest c where c < i and choose(c,i) <= z *)
let rec maximize a b x =
if (choose a b ) <= x then a else maximize (a-1) b x
in
let rec iterate n x i = match i with
| 0 -> []
| i ->
let max = maximize n i x in
max :: iterate n (x - (choose max i)) (i-1)
in
if x < 0 then failwith "errors" else
let idxs = iterate (List.length set) x k in
List.map (List.nth set) (List.sort (-) idxs)
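For comparison, here is a Python sketch of the same greedy maximization (using math.comb for the choose function that the OCaml snippet leaves to the reader):
from math import comb

def combination_maccaffery(x, k):
    # return the indices of the x-th (0-based) k-combination:
    # at each level i pick the largest c with choose(c, i) <= remaining x
    idxs = []
    for i in range(k, 0, -1):
        c = i - 1
        while comb(c + 1, i) <= x:
            c += 1
        idxs.append(c)
        x -= comb(c, i)
    return sorted(idxs)
# combination_maccaffery(27, 4) -> [1, 2, 5, 6],
# matching 27 = C(6,4) + C(5,3) + C(2,2) + C(1,1) from the example above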
A small and simple combinations iterator
The following two algorithms are provided for didactic purposes. They implement an iterator and (a more general) folder over all combinations.
They are as fast as possible, having the complexity O(nCk). The memory consumption is bound by k.
We will start with the iterator, which will call a user provided function for each combination
let iter_combs n k f =
let rec iter v s j =
if j = k then f v
else for i = s to n - 1 do iter (i::v) (i+1) (j+1) done in
iter [] 0 0
A more general version will call the user provided function along with the state variable, starting from the initial state. Since we need to pass the state between different states we won't use the for-loop, but instead, use recursion,
let fold_combs n k f x =
let rec loop i s c x =
if i < n then
loop (i+1) s c @@
let c = i::c and s = s + 1 and i = i + 1 in
if s < k then loop i s c x else f c x
else x in
loop 0 0 [] x
A:
Here is a method which gives you all combinations of specified size from a random length string. Similar to quinmars' solution, but works for varied input and k.
The code can be changed to wrap around, i.e. 'dab' from input 'abcd' with k=3.
public void run(String data, int howMany){
choose(data, howMany, new StringBuffer(), 0);
}
//n choose k
private void choose(String data, int k, StringBuffer result, int startIndex){
if (result.length()==k){
System.out.println(result.toString());
return;
}
for (int i=startIndex; i<data.length(); i++){
result.append(data.charAt(i));
choose(data,k,result, i+1);
result.setLength(result.length()-1);
}
}
Output for "abcde":
abc abd abe acd ace ade bcd bce bde cde
A:
All said and done, here comes the OCaml code for that.
Algorithm is evident from the code..
let combi n lst =
let rec comb l c =
if( List.length c = n) then [c] else
match l with
[] -> []
| (h::t) -> (comb t (h::c))@(comb t c)
in
comb lst []
;;
A: Short JavaScript version (ES2019, for flatMap)
let combine = (list, n) =>
n == 0 ?
[[]] :
list.flatMap((e, i) =>
combine(
list.slice(i + 1),
n - 1
).map(c => [e].concat(c))
);
let res = combine([1,2,3,4], 3);
res.forEach(e => console.log(e.join()));
A: Another Python recursive solution.
def combination_indicies(n, k, j = 0, stack = []):
    if len(stack) == k:
        yield list(stack)
        return
    for i in range(j, n):
        stack.append(i)
        for x in combination_indicies(n, k, i + 1, stack):
            yield x
        stack.pop()
list(combination_indicies(5, 3))
Output:
[[0, 1, 2],
[0, 1, 3],
[0, 1, 4],
[0, 2, 3],
[0, 2, 4],
[0, 3, 4],
[1, 2, 3],
[1, 2, 4],
[1, 3, 4],
[2, 3, 4]]
A:
Short example in Python:
def comb(sofar, rest, n):
    if n == 0:
        print sofar
    else:
        for i in range(len(rest)):
            comb(sofar + rest[i], rest[i+1:], n-1)
>>> comb("", "abcde", 3)
abc
abd
abe
acd
ace
ade
bcd
bce
bde
cde
For explanation, the recursive method is described with the following example:
Example: A B C D E
All combinations of 3 would be:
*
*A with all combinations of 2 from the rest (B C D E)
*B with all combinations of 2 from the rest (C D E)
*C with all combinations of 2 from the rest (D E)
A:
Here is my proposition in C++
I tried to impose as little restriction on the iterator type as I could, so this solution assumes just a forward iterator, and it can be a const_iterator. This should work with any standard container. In cases where arguments don't make sense it throws std::invalid_argument.
#include <vector>
#include <stdexcept>
template <typename Fci> // Fci - forward const iterator
std::vector<std::vector<Fci> >
enumerate_combinations(Fci begin, Fci end, unsigned int combination_size)
{
if(begin == end && combination_size > 0u)
throw std::invalid_argument("empty set and positive combination size!");
std::vector<std::vector<Fci> > result; // empty set of combinations
if(combination_size == 0u) return result; // there is exactly one combination of
// size 0 - empty set
std::vector<Fci> current_combination;
current_combination.reserve(combination_size + 1u); // I reserve one additional slot
// in my vector to store
// the end sentinel there.
// The code is cleaner thanks to that
for(unsigned int i = 0u; i < combination_size && begin != end; ++i, ++begin)
{
current_combination.push_back(begin); // Construction of the first combination
}
// Since I assume the iterators support only incrementing, I have to iterate over
// the set to get its size, which is expensive. Here I had to iterate anyway to
// produce the first combination, so I use the loop to also check the size.
if(current_combination.size() < combination_size)
throw std::invalid_argument("combination size > set size!");
result.push_back(current_combination); // Store the first combination in the results set
current_combination.push_back(end); // Here I add mentioned earlier sentinel to
// simplify the rest of the code. If I did it
// earlier, previous statement would get ugly.
while(true)
{
unsigned int i = combination_size;
Fci tmp; // Thanks to the sentinel I can find first
do // iterator to change, simply by scanning
{ // from right to left and looking for the
tmp = current_combination[--i]; // first "bubble". The fact, that it's
++tmp; // a forward iterator makes it ugly but I
} // can't help it.
while(i > 0u && tmp == current_combination[i + 1u]);
// Here is probably my most obfuscated expression.
// Loop above looks for a "bubble". If there is no "bubble", that means that
// current_combination is the last combination; the expression in the if statement
// below evaluates to true and the function exits returning result.
// If the "bubble" is found however, the statement below has a side effect of
// incrementing the first iterator to the left of the "bubble".
if(++current_combination[i] == current_combination[i + 1u])
return result;
// The rest of the code sets positions of the rest of the iterators
// (if there are any) that are to the right of the incremented one,
// to form next combination
while(++i < combination_size)
{
current_combination[i] = current_combination[i - 1u];
++current_combination[i];
}
// Below is the ugly side of using the sentinel. Well, it had to have some
// disadvantage. Try without it.
result.push_back(std::vector<Fci>(current_combination.begin(),
current_combination.end() - 1));
}
}
A: I created a solution in SQL Server 2005 for this, and posted it on my website: http://www.jessemclain.com/downloads/code/sql/fn_GetMChooseNCombos.sql.htm
Here is an example to show usage:
SELECT * FROM dbo.fn_GetMChooseNCombos('ABCD', 2, '')
results:
Word
----
AB
AC
AD
BC
BD
CD
(6 row(s) affected)
A:
Here is some code I recently wrote in Java, which calculates and returns all the combinations of "num" elements out of "outOf" elements.
// author: Sourabh Bhat (heySourabh@gmail.com)
public class Testing
{
public static void main(String[] args)
{
// Test case num = 5, outOf = 8.
int num = 5;
int outOf = 8;
int[][] combinations = getCombinations(num, outOf);
for (int i = 0; i < combinations.length; i++)
{
for (int j = 0; j < combinations[i].length; j++)
{
System.out.print(combinations[i][j] + " ");
}
System.out.println();
}
}
private static int[][] getCombinations(int num, int outOf)
{
int possibilities = get_nCr(outOf, num);
int[][] combinations = new int[possibilities][num];
int arrayPointer = 0;
int[] counter = new int[num];
for (int i = 0; i < num; i++)
{
counter[i] = i;
}
breakLoop: while (true)
{
// Initializing part
for (int i = 1; i < num; i++)
{
if (counter[i] >= outOf - (num - 1 - i))
counter[i] = counter[i - 1] + 1;
}
// Testing part
for (int i = 0; i < num; i++)
{
if (counter[i] < outOf)
{
continue;
} else
{
break breakLoop;
}
}
// Innermost part
combinations[arrayPointer] = counter.clone();
arrayPointer++;
// Incrementing part
counter[num - 1]++;
for (int i = num - 1; i >= 1; i--)
{
if (counter[i] >= outOf - (num - 1 - i))
counter[i - 1]++;
}
}
return combinations;
}
private static int get_nCr(int n, int r)
{
if(r > n)
{
throw new ArithmeticException("r is greater than n");
}
long numerator = 1;
long denominator = 1;
for (int i = n; i >= r + 1; i--)
{
numerator *= i;
}
for (int i = 2; i <= n - r; i++)
{
denominator *= i;
}
return (int) (numerator / denominator);
}
}
A:
A concise Javascript solution:
Array.prototype.combine = function combine(k) {
    var toCombine = this;
    var last;
    if (k === 1) return toCombine.slice();
    // each entry is a pair [combination-so-far, index of its last element],
    // so we only ever append elements that come later in the array
    function combi(n, comb) {
        var combs = [];
        for (var x = 0, y = comb.length; x < y; x++) {
            for (var l = comb[x][1] + 1, m = toCombine.length; l < m; l++) {
                combs.push([comb[x][0] + toCombine[l], l]);
            }
        }
        if (n < k - 1) {
            n++;
            combi(n, combs);
        } else { last = combs.map(function (c) { return c[0]; }); }
    }
    combi(1, toCombine.map(function (e, i) { return [e, i]; }));
    return last;
}
// Example:
// var toCombine = ['a', 'b', 'c'];
// var results = toCombine.combine(2); // ["ab", "ac", "bc"]
A:
In C++ the following routine will produce all combinations of length distance(first,k) between the range [first,last):
#include <algorithm>
template <typename Iterator>
bool next_combination(const Iterator first, Iterator k, const Iterator last)
{
/* Credits: Mark Nelson http://marknelson.us */
if ((first == last) || (first == k) || (last == k))
return false;
Iterator i1 = first;
Iterator i2 = last;
++i1;
if (last == i1)
return false;
i1 = last;
--i1;
i1 = k;
--i2;
while (first != i1)
{
if (*--i1 < *i2)
{
Iterator j = k;
while (!(*i1 < *j)) ++j;
std::iter_swap(i1,j);
++i1;
++j;
i2 = k;
std::rotate(i1,j,last);
while (last != j)
{
++j;
++i2;
}
std::rotate(k,i2,last);
return true;
}
}
std::rotate(first,k,last);
return false;
}
It can be used like this:
#include <string>
#include <iostream>
int main()
{
std::string s = "12345";
std::size_t comb_size = 3;
do
{
std::cout << std::string(s.begin(), s.begin() + comb_size) << std::endl;
} while (next_combination(s.begin(), s.begin() + comb_size, s.end()));
return 0;
}
This will print the following:
123
124
125
134
135
145
234
235
245
345
A:
I found this thread useful and thought I would add a Javascript solution that you can pop into Firebug. Depending on your JS engine, it could take a little time if the starting string is large.
function string_recurse(active, rest) {
if (rest.length == 0) {
console.log(active);
} else {
string_recurse(active + rest.charAt(0), rest.substring(1, rest.length));
string_recurse(active, rest.substring(1, rest.length));
}
}
string_recurse("", "abc");
The output should be as follows:
abc
ab
ac
a
bc
b
c
A: static IEnumerable<string> Combinations(List<string> characters, int length)
{
for (int i = 0; i < characters.Count; i++)
{
// only want 1 character, just return this one
if (length == 1)
yield return characters[i];
// want more than one character, return this one plus all combinations one shorter
// only use characters after the current one for the rest of the combinations
else
foreach (string next in Combinations(characters.GetRange(i + 1, characters.Count - (i + 1)), length - 1))
yield return characters[i] + next;
}
}
A:
In C#:
public static IEnumerable<IEnumerable<T>> Combinations<T>(this IEnumerable<T> elements, int k)
{
return k == 0 ? new[] { new T[0] } :
elements.SelectMany((e, i) =>
elements.Skip(i + 1).Combinations(k - 1).Select(c => (new[] {e}).Concat(c)));
}
Usage:
var result = Combinations(new[] { 1, 2, 3, 4, 5 }, 3);
Result:
123
124
125
134
135
145
234
235
245
345
A:
Simple recursive algorithm in Haskell
import Data.List
combinations 0 lst = [[]]
combinations n lst = do
(x:xs) <- tails lst
rest <- combinations (n-1) xs
return $ x : rest
We first define the special case, i.e. selecting zero elements. It produces a single result, which is an empty list (i.e. a list that contains an empty list).
For n > 0, x goes through every element of the list and xs is every element after x.
rest picks n - 1 elements from xs using a recursive call to combinations. The final result of the function is a list where each element is x : rest (i.e. a list which has x as head and rest as tail) for every different value of x and rest.
> combinations 3 "abcde"
["abc","abd","abe","acd","ace","ade","bcd","bce","bde","cde"]
And of course, since Haskell is lazy, the list is gradually generated as needed, so you can partially evaluate exponentially large combinations.
> let c = combinations 8 "abcdefghijklmnopqrstuvwxyz"
> take 10 c
["abcdefgh","abcdefgi","abcdefgj","abcdefgk","abcdefgl","abcdefgm","abcdefgn",
"abcdefgo","abcdefgp","abcdefgq"]
A: I have written a class to handle common functions for working with the binomial coefficient, which is the type of problem that your problem falls under. It performs the following tasks:
*
*Outputs all the K-indexes in a nice format for any N choose K to a file. The K-indexes can be substituted with more descriptive strings or letters. This method makes solving this type of problem quite trivial.
*Converts the K-indexes to the proper index of an entry in the sorted binomial coefficient table. This technique is much faster than older published techniques that rely on iteration. It does this by using a mathematical property inherent in Pascal's Triangle. My paper talks about this. I believe I am the first to discover and publish this technique, but I could be wrong.
*Converts the index in a sorted binomial coefficient table to the corresponding K-indexes.
*Uses Mark Dominus method to calculate the binomial coefficient, which is much less likely to overflow and works with larger numbers (a sketch of the idea appears at the end of this answer).
*The class is written in .NET C# and provides a way to manage the objects related to the problem (if any) by using a generic list. The constructor of this class takes a bool value called InitTable that when true will create a generic list to hold the objects to be managed. If this value is false, then it will not create the table. The table does not need to be created in order to perform the 4 above methods. Accessor methods are provided to access the table.
*There is an associated test class which shows how to use the class and its methods. It has been extensively tested with 2 cases and there are no known bugs.
To read about this class and download the code, see Tablizing The Binomial Coefficient.
It should not be hard to convert this class to C++.
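For context, here is a minimal Python sketch of an overflow-avoiding multiplicative computation of the binomial coefficient, in the spirit of point 4 above (my own rendering; not necessarily the exact method the class uses):
def binom(n, k):
    # multiplicative formula: every intermediate division is exact,
    # so intermediate values stay close to the size of the final result
    if k < 0 or k > n:
        return 0
    k = min(k, n - k)
    result = 1
    for i in range(1, k + 1):
        result = result * (n - k + i) // i
    return result
# binom(52, 5) -> 2598960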
A:
Short PHP algorithm to return all combinations of k elements from n (binomial coefficient), based on the Java solution:
$array = array(1,2,3,4,5);
$array_result = NULL;
$array_general = NULL;
function combinations($array, $len, $start_position, $result_array, $result_len, &$general_array)
{
if($len == 0)
{
$general_array[] = $result_array;
return;
}
for ($i = $start_position; $i <= count($array) - $len; $i++)
{
$result_array[$result_len - $len] = $array[$i];
combinations($array, $len-1, $i+1, $result_array, $result_len, $general_array);
}
}
combinations($array, 3, 0, $array_result, 3, $array_general);
echo "<pre>";
print_r($array_general);
echo "</pre>";
The same solution but in javascript:
var newArray = [1, 2, 3, 4, 5];
var arrayResult = [];
var arrayGeneral = [];
function combinations(newArray, len, startPosition, resultArray, resultLen, arrayGeneral) {
if(len === 0) {
var tempArray = [];
resultArray.forEach(value => tempArray.push(value));
arrayGeneral.push(tempArray);
return;
}
for (var i = startPosition; i <= newArray.length - len; i++) {
resultArray[resultLen - len] = newArray[i];
combinations(newArray, len-1, i+1, resultArray, resultLen, arrayGeneral);
}
}
combinations(newArray, 3, 0, arrayResult, 3, arrayGeneral);
console.log(arrayGeneral);
A: Here's my JavaScript solution that is a little more functional through use of reduce/map, which eliminates almost all variables
function combinations(arr, size) {
var len = arr.length;
if (size > len) return [];
if (!size) return [[]];
if (size == len) return [arr];
return arr.reduce(function (acc, val, i) {
var res = combinations(arr.slice(i + 1), size - 1)
.map(function (comb) { return [val].concat(comb); });
return acc.concat(res);
}, []);
}
var combs = combinations([1,2,3,4,5,6,7,8],3);
combs.map(function (comb) {
document.body.innerHTML += comb.toString() + '<br />';
});
document.body.innerHTML += '<br /> Total combinations = ' + combs.length;
A:
C code for Algorithm L (Lexicographic combinations) in Section 7.2.1.3 of The Art of Computer Programming, Volume 4A: Combinatorial Algorithms, Part 1 :
#include <stdio.h>
#include <stdlib.h>
void visit(int* c, int t)
{
// for (int j = 1; j <= t; j++)
for (int j = t; j > 0; j--)
printf("%d ", c[j]);
printf("\n");
}
int* initialize(int n, int t)
{
// c[0] not used
int *c = (int*) malloc((t + 3) * sizeof(int));
for (int j = 1; j <= t; j++)
c[j] = j - 1;
c[t+1] = n;
c[t+2] = 0;
return c;
}
void comb(int n, int t)
{
int *c = initialize(n, t);
int j;
for (;;) {
visit(c, t);
j = 1;
while (c[j]+1 == c[j+1]) {
c[j] = j - 1;
++j;
}
if (j > t)
break; /* break (not return) so that free(c) below is reached */
++c[j];
}
free(c);
}
int main(int argc, char *argv[])
{
comb(5, 3);
return 0;
}
A:
Jumping on the bandwagon, and posting another solution. This is a generic Java implementation. Input: (int k) is number of elements to choose and (List<T> list) is the list to choose from. Returns a list of combinations (List<List<T>>).
public static <T> List<List<T>> getCombinations(int k, List<T> list) {
List<List<T>> combinations = new ArrayList<List<T>>();
if (k == 0) {
combinations.add(new ArrayList<T>());
return combinations;
}
for (int i = 0; i < list.size(); i++) {
T element = list.get(i);
List<T> rest = getSublist(list, i+1);
for (List<T> previous : getCombinations(k-1, rest)) {
previous.add(element);
combinations.add(previous);
}
}
return combinations;
}
public static <T> List<T> getSublist(List<T> list, int i) {
List<T> sublist = new ArrayList<T>();
for (int j = i; j < list.size(); j++) {
sublist.add(list.get(j));
}
return sublist;
}
A: JavaScript, generator-based, recursive approach:
function *nCk(n,k){
for(var i=n-1;i>=k-1;--i)
if(k===1)
yield [i];
else
for(var temp of nCk(i,k-1)){
temp.unshift(i);
yield temp;
}
}
function test(){
try{
var n=parseInt(ninp.value);
var k=parseInt(kinp.value);
log.innerText="";
var stop=Date.now()+1000;
if(k>=1)
for(var res of nCk(n,k))
if(Date.now()<stop)
log.innerText+=JSON.stringify(res)+" ";
else{
log.innerText+="1 second passed, stopping here.";
break;
}
}catch(ex){}
}
n:<input id="ninp" oninput="test()">
>= k:<input id="kinp" oninput="test()"> >= 1
<div id="log"></div>
This way (decreasing i and unshift()) it produces combinations and elements inside combinations in decreasing order, somewhat pleasing the eye.
Test stops after 1 second, so entering weird numbers is relatively safe.
A: And here comes granddaddy COBOL, the much maligned language.
Let's assume an array of 34 elements of 8 bytes each (purely arbitrary selection.) The idea is to enumerate all possible 4-element combinations and load them into an array.
We use 4 indices, one each for each position in the group of 4
The array is processed like this:
idx1 = 1
idx2 = 2
idx3 = 3
idx4 = 4
We vary idx4 from 4 to the end. For each idx4 we get a unique combination
of groups of four. When idx4 comes to the end of the array, we increment idx3 by 1 and set idx4 to idx3+1. Then we run idx4 to the end again. We proceed in this manner, augmenting idx3, idx2, and idx1 respectively until the position of idx1 is less than 4 from the end of the array. That finishes the algorithm.
1 --- pos.1
2 --- pos 2
3 --- pos 3
4 --- pos 4
5
6
7
etc.
First iterations:
1234
1235
1236
1237
1245
1246
1247
1256
1257
1267
etc.
A COBOL example:
01 DATA_ARAY.
05 FILLER PIC X(8) VALUE "VALUE_01".
05 FILLER PIC X(8) VALUE "VALUE_02".
etc.
01 ARAY_DATA OCCURS 34.
05 ARAY_ITEM PIC X(8).
01 OUTPUT_ARAY OCCURS 50000 PIC X(32).
01 MAX_NUM PIC 99 COMP VALUE 34.
01 INDEXXES COMP.
05 IDX1 PIC 99.
05 IDX2 PIC 99.
05 IDX3 PIC 99.
05 IDX4 PIC 99.
05 OUT_IDX PIC 9(9).
01 WHERE_TO_STOP_SEARCH PIC 99 COMP.
* Stop the search when IDX1 is on the third last array element:
COMPUTE WHERE_TO_STOP_SEARCH = MAX_NUM - 3
MOVE 1 TO IDX1
PERFORM UNTIL IDX1 > WHERE_TO_STOP_SEARCH
COMPUTE IDX2 = IDX1 + 1
PERFORM UNTIL IDX2 > MAX_NUM
COMPUTE IDX3 = IDX2 + 1
PERFORM UNTIL IDX3 > MAX_NUM
COMPUTE IDX4 = IDX3 + 1
PERFORM UNTIL IDX4 > MAX_NUM
ADD 1 TO OUT_IDX
STRING ARAY_ITEM(IDX1)
ARAY_ITEM(IDX2)
ARAY_ITEM(IDX3)
ARAY_ITEM(IDX4)
INTO OUTPUT_ARAY(OUT_IDX)
ADD 1 TO IDX4
END-PERFORM
ADD 1 TO IDX3
END-PERFORM
ADD 1 TO IDX2
END-PERFORM
ADD 1 TO IDX1
END-PERFORM.
A: Another C# version with lazy generation of the combination indices. This version maintains a single array of indices to define a mapping between the list of all values and the values for the current combination, i.e. constantly uses O(k) additional space during the entire runtime. The code generates individual combinations, including the first one, in O(k) time.
public static IEnumerable<T[]> Combinations<T>(this T[] values, int k)
{
if (k < 0 || values.Length < k)
yield break; // invalid parameters, no combinations possible
// generate the initial combination indices
var combIndices = new int[k];
for (var i = 0; i < k; i++)
{
combIndices[i] = i;
}
while (true)
{
// return next combination
var combination = new T[k];
for (var i = 0; i < k; i++)
{
combination[i] = values[combIndices[i]];
}
yield return combination;
// find first index to update
var indexToUpdate = k - 1;
while (indexToUpdate >= 0 && combIndices[indexToUpdate] >= values.Length - k + indexToUpdate)
{
indexToUpdate--;
}
if (indexToUpdate < 0)
yield break; // done
// update combination indices
for (var combIndex = combIndices[indexToUpdate] + 1; indexToUpdate < k; indexToUpdate++, combIndex++)
{
combIndices[indexToUpdate] = combIndex;
}
}
}
Test code:
foreach (var combination in new[] {'a', 'b', 'c', 'd', 'e'}.Combinations(3))
{
System.Console.WriteLine(String.Join(" ", combination));
}
Output:
a b c
a b d
a b e
a c d
a c e
a d e
b c d
b c e
b d e
c d e
A:
Here's some simple code that prints all the C(n,m) combinations. It works by initializing and moving a set of array indices that point to the next valid combination. The indices are initialized to point to the lowest m indices (lexicographically the smallest combination). Then, starting with the m-th index, we try to move the indices forward. If an index has reached its limit, we try the previous index (all the way down to the first). If we can move an index forward, then we reset all greater indices.
/* fragment: assumes surrounding declarations along the lines of
   int i, j, k, m, n; int a[MAX], p[MAX]; bool done, move_found; (with <stdbool.h>) */
m=(rand()%n)+1; // m will vary from 1 to n
for (i=0;i<n;i++) a[i]=i+1;
// we want to print all possible C(n,m) combinations of selecting m objects out of n
printf("Printing C(%d,%d) possible combinations ...\n", n,m);
// This is an adhoc algo that keeps m pointers to the next valid combination
for (i=0;i<m;i++) p[i]=i; // the p[.] contain indices to the a vector whose elements constitute next combination
done=false;
while (!done)
{
// print combination
for (i=0;i<m;i++) printf("%2d ", a[p[i]]);
printf("\n");
// update combination
// method: start with p[m-1]. try to increment it. if it is already at the end, then try moving p[m-2] ahead.
// if this is possible, then reset p[m-1] to 1 more than (the new) p[m-2].
// if p[m-2] can not also be moved, then try p[m-3]. move that ahead. then reset p[m-2] and p[m-1].
// repeat all the way down to p[0]. if p[0] can not also be moved, then we have generated all combinations.
j=m-1;
i=1;
move_found=false;
while ((j>=0) && !move_found)
{
if (p[j]<(n-i))
{
move_found=true;
p[j]++; // point p[j] to next index
for (k=j+1;k<m;k++)
{
p[k]=p[j]+(k-j);
}
}
else
{
j--;
i++;
}
}
if (!move_found) done=true;
}
A:
A Lisp macro generates the code for all values r (taken-at-a-time)
(defmacro txaat (some-list taken-at-a-time)
(let* ((vars (reverse (truncate-list '(a b c d e f g h i j) taken-at-a-time))))
`(
,@(loop for i below taken-at-a-time
for j in vars
with nested = nil
finally (return nested)
do
(setf
nested
`(loop for ,j from
,(if (< i (1- (length vars)))
`(1+ ,(nth (1+ i) vars))
0)
below (- (length ,some-list) ,i)
,@(if (equal i 0)
`(collect
(list
,@(loop for k from (1- taken-at-a-time) downto 0
append `((nth ,(nth k vars) ,some-list)))))
`(append ,nested))))))))
So,
CL-USER> (macroexpand-1 '(txaat '(a b c d) 1))
(LOOP FOR A FROM 0 TO (- (LENGTH '(A B C D)) 1)
COLLECT (LIST (NTH A '(A B C D))))
T
CL-USER> (macroexpand-1 '(txaat '(a b c d) 2))
(LOOP FOR A FROM 0 TO (- (LENGTH '(A B C D)) 2)
APPEND (LOOP FOR B FROM (1+ A) TO (- (LENGTH '(A B C D)) 1)
COLLECT (LIST (NTH A '(A B C D)) (NTH B '(A B C D)))))
T
CL-USER> (macroexpand-1 '(txaat '(a b c d) 3))
(LOOP FOR A FROM 0 TO (- (LENGTH '(A B C D)) 3)
APPEND (LOOP FOR B FROM (1+ A) TO (- (LENGTH '(A B C D)) 2)
APPEND (LOOP FOR C FROM (1+ B) TO (- (LENGTH '(A B C D)) 1)
COLLECT (LIST (NTH A '(A B C D))
(NTH B '(A B C D))
(NTH C '(A B C D))))))
T
CL-USER>
And,
CL-USER> (txaat '(a b c d) 1)
((A) (B) (C) (D))
CL-USER> (txaat '(a b c d) 2)
((A B) (A C) (A D) (B C) (B D) (C D))
CL-USER> (txaat '(a b c d) 3)
((A B C) (A B D) (A C D) (B C D))
CL-USER> (txaat '(a b c d) 4)
((A B C D))
CL-USER> (txaat '(a b c d) 5)
NIL
CL-USER> (txaat '(a b c d) 0)
NIL
CL-USER>
A:
This is a recursive program that generates combinations for nCk. Elements in the collection are assumed to be from 1 to n.
#include<stdio.h>
#include<stdlib.h>
int nCk(int n,int loopno,int ini,int *a,int k)
{
static int count=0;
int i;
loopno--;
if(loopno<0)
{
a[k-1]=ini;
for(i=0;i<k;i++)
{
printf("%d,",a[i]);
}
printf("\n");
count++;
return 0;
}
for(i=ini;i<=n-loopno-1;i++)
{
a[k-1-loopno]=i+1;
nCk(n,loopno,i+1,a,k);
}
if(ini==0)
return count;
else
return 0;
}
int main(void)
{
int n,k,*a,count;
printf("Enter the value of n and k\n");
scanf("%d %d",&n,&k);
a=(int*)malloc(k*sizeof(int));
count=nCk(n,k,0,a,k);
printf("No of combinations=%d\n",count);
}
A:
In VB.Net, this algorithm collects all combinations of n numbers from a set of numbers (PoolArray). e.g. all combinations of 5 picks from "8,10,20,33,41,44,47".
Sub CreateAllCombinationsOfPicksFromPool(ByVal PicksArray() As UInteger, ByVal PicksIndex As UInteger, ByVal PoolArray() As UInteger, ByVal PoolIndex As UInteger)
If PicksIndex < PicksArray.Length Then
For i As Integer = PoolIndex To PoolArray.Length - PicksArray.Length + PicksIndex
PicksArray(PicksIndex) = PoolArray(i)
CreateAllCombinationsOfPicksFromPool(PicksArray, PicksIndex + 1, PoolArray, i + 1)
Next
Else
' completed combination. build your collections using PicksArray.
End If
End Sub
Dim PoolArray() As UInteger = Array.ConvertAll("8,10,20,33,41,44,47".Split(","), Function(u) UInteger.Parse(u))
Dim nPicks as UInteger = 5
Dim Picks(nPicks - 1) As UInteger
CreateAllCombinationsOfPicksFromPool(Picks, 0, PoolArray, 0)
A:
Since programming language is not mentioned I am assuming that lists are OK too. So here's an OCaml version suitable for short lists (non tail-recursive). Given a list l of elements of any type and an integer n it will return a list of all possible lists containing n elements of l if we assume that the order of the elements in the outcome lists is ignored, i.e. list ['a';'b'] is the same as ['b';'a'] and will reported once. So size of resultant list will be ((List.length l) Choose n).
The intuition of the recursion is the following: you take the head of the list and then make two recursive calls:
*
*recursive call 1 (RC1): to the tail of the list, but choose n-1 elements
*recursive call 2 (RC2): to the tail of the list, but choose n elements
to combine the recursive results, list-multiply (please bear the odd name) the head of the list with the results of RC1 and then append (@) the results of RC2. List-multiply is the following operation lmul:
a lmul [ l1 ; l2 ; l3] = [a::l1 ; a::l2 ; a::l3]
lmul is implemented in the code below as
List.map (fun x -> h::x)
Recursion is terminated when the size of the list equals the number of elements you want to choose, in which case you just return the list itself.
So here's a four-liner in OCaml that implements the above algorithm:
let rec choose l n = match l, (List.length l) with
| _, lsize when n==lsize -> [l]
| h::t, _ -> (List.map (fun x-> h::x) (choose t (n-1))) @ (choose t n)
| [], _ -> []
A: void combine(char a[], int N, int M, int m, int start, char result[]) {
if (0 == m) {
for (int i = M - 1; i >= 0; i--)
std::cout << result[i];
std::cout << std::endl;
return;
}
for (int i = start; i < (N - m + 1); i++) {
result[m - 1] = a[i];
combine(a, N, M, m-1, i+1, result);
}
}
void combine(char a[], int N, int M) {
char *result = new char[M];
combine(a, N, M, M, 0, result);
delete[] result;
}
In the first function, m denotes how many more you need to choose, and start denotes from which position in array you must start choosing.
A:
And here's a Clojure version that uses the same algorithm I describe in my OCaml implementation answer:
(defn select
([items]
(select items 0 (inc (count items))))
([items n1 n2]
(reduce concat
(map #(select % items)
(range n1 (inc n2)))))
([n items]
(let [
lmul (fn [a list-of-lists-of-bs]
(map #(cons a %) list-of-lists-of-bs))
]
(if (= n (count items))
(list items)
(if (empty? items)
items
(concat
(select n (rest items))
(lmul (first items) (select (dec n) (rest items)))))))))
It provides three ways to call it:
(a) for exactly n selected items as the question demands:
user=> (count (select 3 "abcdefgh"))
56
(b) for between n1 and n2 selected items:
user=> (select '(1 2 3 4) 2 3)
((3 4) (2 4) (2 3) (1 4) (1 3) (1 2) (2 3 4) (1 3 4) (1 2 4) (1 2 3))
(c) for between 0 and the size of the collection selected items:
user=> (select '(1 2 3))
(() (3) (2) (1) (2 3) (1 3) (1 2) (1 2 3))
A:
Short fast C implementation
#include <stdio.h>
int main(int argc, char *argv[]) {
const int n = 6; /* The size of the set; for {1, 2, 3, 4} it's 4 */
const int p = 4; /* The size of the subsets; for {1, 2}, {1, 3}, ... it's 2 */
int comb[40] = {0}; /* comb[i] is the index of the i-th element in the combination */
int i = 0;
for (int j = 0; j <= n; j++) comb[j] = 0;
while (i >= 0) {
if (comb[i] < n + i - p + 1) {
comb[i]++;
if (i == p - 1) { for (int j = 0; j < p; j++) printf("%d ", comb[j]); printf("\n"); }
else { i++; comb[i] = comb[i - 1]; } /* split from comb[++i] = comb[i-1], which is unsequenced */
} else i--; }
}
To see how fast it is, use this code and test it
#include <time.h>
#include <stdio.h>
int main(int argc, char *argv[]) {
const int n = 32; /* The size of the set; for {1, 2, 3, 4} it's 4 */
const int p = 16; /* The size of the subsets; for {1, 2}, {1, 3}, ... it's 2 */
int comb[40] = {0}; /* comb[i] is the index of the i-th element in the combination */
int c = 0; int i = 0;
for (int j = 0; j <= n; j++) comb[j] = 0;
while (i >= 0) {
if (comb[i] < n + i - p + 1) {
comb[i]++;
/* if (i == p - 1) { for (int j = 0; j < p; j++) printf("%d ", comb[j]); printf("\n"); } */
if (i == p - 1) c++;
else { i++; comb[i] = comb[i - 1]; }
} else i--; }
printf("%d!%d == %d combination(s) in %15.3f second(s)\n ", p, n, c, clock()/1000.0);
}
test with cmd.exe (windows):
Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.
c:\Program Files\lcc\projects>combination
16!32 == 601080390 combination(s) in 5.781 second(s)
c:\Program Files\lcc\projects>
Have a nice day.
A: Here is my Scala solution:
def combinations[A](s: List[A], k: Int): List[List[A]] =
if (k > s.length) Nil
else if (k == 1) s.map(List(_))
else combinations(s.tail, k - 1).map(s.head :: _) ::: combinations(s.tail, k)
A: #include <stdio.h>
unsigned int next_combination(unsigned int *ar, size_t n, unsigned int k)
{
unsigned int finished = 0;
unsigned int changed = 0;
unsigned int i;
if (k > 0) {
for (i = k - 1; !finished && !changed; i--) {
if (ar[i] < (n - 1) - (k - 1) + i) {
/* Increment this element */
ar[i]++;
if (i < k - 1) {
/* Turn the elements after it into a linear sequence */
unsigned int j;
for (j = i + 1; j < k; j++) {
ar[j] = ar[j - 1] + 1;
}
}
changed = 1;
}
finished = i == 0;
}
if (!changed) {
/* Reset to first combination */
for (i = 0; i < k; i++) {
ar[i] = i;
}
}
}
return changed;
}
typedef void(*printfn)(const void *, FILE *);
void print_set(const unsigned int *ar, size_t len, const void **elements,
const char *brackets, printfn print, FILE *fptr)
{
unsigned int i;
fputc(brackets[0], fptr);
for (i = 0; i < len; i++) {
print(elements[ar[i]], fptr);
if (i < len - 1) {
fputs(", ", fptr);
}
}
fputc(brackets[1], fptr);
}
int main(void)
{
unsigned int numbers[] = { 0, 1, 2 };
char *elements[] = { "a", "b", "c", "d", "e" };
const unsigned int k = sizeof(numbers) / sizeof(unsigned int);
const unsigned int n = sizeof(elements) / sizeof(const char*);
do {
print_set(numbers, k, (void*)elements, "[]", (printfn)fputs, stdout);
putchar('\n');
} while (next_combination(numbers, n, k));
getchar();
return 0;
}
A: I was looking for a similar solution for PHP and came across the following
class Combinations implements Iterator
{
protected $c = null;
protected $s = null;
protected $n = 0;
protected $k = 0;
protected $pos = 0;
function __construct($s, $k) {
if(is_array($s)) {
$this->s = array_values($s);
$this->n = count($this->s);
} else {
$this->s = (string) $s;
$this->n = strlen($this->s);
}
$this->k = $k;
$this->rewind();
}
function key() {
return $this->pos;
}
function current() {
$r = array();
for($i = 0; $i < $this->k; $i++)
$r[] = $this->s[$this->c[$i]];
return is_array($this->s) ? $r : implode('', $r);
}
function next() {
if($this->_next())
$this->pos++;
else
$this->pos = -1;
}
function rewind() {
$this->c = range(0, $this->k - 1);
$this->pos = 0;
}
function valid() {
return $this->pos >= 0;
}
protected function _next() {
$i = $this->k - 1;
while ($i >= 0 && $this->c[$i] == $this->n - $this->k + $i)
$i--;
if($i < 0)
return false;
$this->c[$i]++;
while($i++ < $this->k - 1)
$this->c[$i] = $this->c[$i - 1] + 1;
return true;
}
}
foreach(new Combinations("1234567", 5) as $substring)
echo $substring, ' ';
source
I'm not too sure how efficient the class is, but I was only using it for a seeder.
A: Another solution in C#:
static List<List<T>> GetCombinations<T>(List<T> originalItems, int combinationLength)
{
if (combinationLength < 1)
{
return null;
}
return CreateCombinations<T>(new List<T>(), 0, combinationLength, originalItems);
}
static List<List<T>> CreateCombinations<T>(List<T> initialCombination, int startIndex, int length, List<T> originalItems)
{
List<List<T>> combinations = new List<List<T>>();
for (int i = startIndex; i < originalItems.Count - length + 1; i++)
{
List<T> newCombination = new List<T>(initialCombination);
newCombination.Add(originalItems[i]);
if (length > 1)
{
List<List<T>> newCombinations = CreateCombinations(newCombination, i + 1, length - 1, originalItems);
combinations.AddRange(newCombinations);
}
else
{
combinations.Add(newCombination);
}
}
return combinations;
}
Example of usage:
List<char> initialArray = new List<char>() { 'a','b','c','d'};
int combinationLength = 3;
List<List<char>> combinations = GetCombinations(initialArray, combinationLength);
A: We can use the concept of bits to do this. Say we have the string "abc" and we want all combinations of its elements with length 2 (i.e. "ab", "ac", "bc").
We can look at the set bits in numbers ranging from 1 to 2^n (exclusive). Here that is 1 to 7, and wherever the number of set bits equals 2, we can print the corresponding values from the string.
for example:
*
*1 - 001
*2 - 010
*3 - 011 -> print ab (str[0] , str[1])
*4 - 100
*5 - 101 -> print ac (str[0] , str[2])
*6 - 110 -> print bc (str[1] , str[2])
*7 - 111.
Code sample:
public class StringCombinationK {
static void combk(String s , int k){
int n = s.length();
int num = 1<<n;
int j=0;
int count=0;
for(int i=0;i<num;i++){
if (countSet(i)==k){
setBits(i,j,s);
count++;
System.out.println();
}
}
System.out.println(count);
}
static void setBits(int i,int j,String s){ // print the corresponding string value,j represent the index of set bit
if(i==0){
return;
}
if(i%2==1){
System.out.print(s.charAt(j));
}
setBits(i/2,j+1,s);
}
static int countSet(int i){ //count number of set bits
if( i==0){
return 0;
}
return (i%2==0? 0:1) + countSet(i/2);
}
public static void main(String[] arhs){
String s = "abcdefgh";
int k=3;
combk(s,k);
}
}
A: Here is a Lisp approach using a macro. This works in Common Lisp and should work in other Lisp dialects.
The code below creates 'n' nested loops and executes an arbitrary chunk of code (stored in the body variable) for each combination of 'n' elements from the list lst. The variable var points to a list containing the variables used for the loops.
(defmacro do-combinations ((var lst num) &body body)
(loop with syms = (loop repeat num collect (gensym))
for i on syms
for k = `(loop for ,(car i) on (cdr ,(cadr i))
do (let ((,var (list ,@(reverse syms)))) (progn ,@body)))
then `(loop for ,(car i) on ,(if (cadr i) `(cdr ,(cadr i)) lst) do ,k)
finally (return k)))
Let's see...
(macroexpand-1 '(do-combinations (p '(1 2 3 4 5 6 7) 4) (pprint (mapcar #'car p))))
(LOOP FOR #:G3217 ON '(1 2 3 4 5 6 7) DO
(LOOP FOR #:G3216 ON (CDR #:G3217) DO
(LOOP FOR #:G3215 ON (CDR #:G3216) DO
(LOOP FOR #:G3214 ON (CDR #:G3215) DO
(LET ((P (LIST #:G3217 #:G3216 #:G3215 #:G3214)))
(PROGN (PPRINT (MAPCAR #'CAR P))))))))
(do-combinations (p '(1 2 3 4 5 6 7) 4) (pprint (mapcar #'car p)))
(1 2 3 4)
(1 2 3 5)
(1 2 3 6)
...
Since combinations are not stored by default, storage is kept to a minimum. The possibility of choosing the body code instead of storing all results also affords more flexibility.
A: The following Haskell code calculates the combination count and the combinations at the same time; thanks to Haskell's laziness, you can get one part without calculating the other.
import Data.Semigroup
import Data.Monoid
data Comb = MkComb {count :: Int, combinations :: [[Int]]} deriving (Show, Eq, Ord)
instance Semigroup Comb where
    (MkComb c1 cs1) <> (MkComb c2 cs2) = MkComb (c1 + c2) (cs1 ++ cs2)

instance Monoid Comb where
    mempty = MkComb 0 []
addElem :: Comb -> Int -> Comb
addElem (MkComb c cs) x = MkComb c (map (x :) cs)
comb :: Int -> Int -> Comb
comb n k | n < 0 || k < 0 = error "error in `comb n k`, n and k should be natural number"
comb n k | k == 0 || k == n = MkComb 1 [(take k [k-1,k-2..0])]
comb n k | n < k = mempty
comb n k = comb (n-1) k <> (comb (n-1) (k-1) `addElem` (n-1))
It works like:
*Main> comb 0 1
MkComb {count = 0, combinations = []}
*Main> comb 0 0
MkComb {count = 1, combinations = [[]]}
*Main> comb 1 1
MkComb {count = 1, combinations = [[0]]}
*Main> comb 4 2
MkComb {count = 6, combinations = [[1,0],[2,0],[2,1],[3,0],[3,1],[3,2]]}
*Main> count (comb 10 5)
252
A:
In Python, like Andrea Ambu's, but not hardcoded for choosing three.
def combinations(list, k):
    """Choose combinations of list, choosing k elements(no repeats)"""
    if len(list) < k:
        return []
    else:
        seq = [i for i in range(k)]
        while seq:
            print [list[index] for index in seq]
            seq = get_next_combination(len(list), k, seq)

def get_next_combination(num_elements, k, seq):
    index_to_move = find_index_to_move(num_elements, seq)
    if index_to_move == None:
        return None
    else:
        seq[index_to_move] += 1
        #for every element past this sequence, move it down
        for i, elem in enumerate(seq[(index_to_move+1):]):
            seq[i + 1 + index_to_move] = seq[index_to_move] + i + 1
        return seq

def find_index_to_move(num_elements, seq):
    """Tells which index should be moved"""
    for rev_index, elem in enumerate(reversed(seq)):
        if elem < (num_elements - rev_index - 1):
            return len(seq) - rev_index - 1
    return None
A:
In Python, taking advantage of recursion and the fact that everything is done by reference. This will take a lot of memory for very large sets, but has the advantage that the initial set can be a complex object. It will find only unique combinations.
import copy
def find_combinations( length, set, combinations = None, candidate = None ):
    # recursive function to calculate all unique combinations of unique values
    # from [set], given combinations of [length]. The result is populated
    # into the 'combinations' list.
    #
    if combinations == None:
        combinations = []
    if candidate == None:
        candidate = []
    for item in set:
        if item in candidate:
            # this item already appears in the current combination somewhere.
            # skip it
            continue
        attempt = copy.deepcopy(candidate)
        attempt.append(item)
        # sorting the subset is what gives us completely unique combinations,
        # so that [1, 2, 3] and [1, 3, 2] will be treated as equals
        attempt.sort()
        if len(attempt) < length:
            # the current attempt at finding a new combination is still too
            # short, so add another item to the end of the set
            # yay recursion!
            find_combinations( length, set, combinations, attempt )
        else:
            # the current combination attempt is the right length. If it
            # already appears in the list of found combinations then we'll
            # skip it.
            if attempt in combinations:
                continue
            else:
                # otherwise, we append it to the list of found combinations
                # and move on.
                combinations.append(attempt)
                continue
    return len(combinations)
You use it this way. Passing 'result' is optional, so you could just use it to get the number of possible combinations... although that would be really inefficient (it's better done by calculation).
size = 3
set = [1, 2, 3, 4, 5]
result = []
num = find_combinations( size, set, result )
print "size %d results in %d sets" % (size, num)
print "result: %s" % (result,)
You should get the following output from that test data:
size 3 results in 10 sets
result: [[1, 2, 3], [1, 2, 4], [1, 2, 5], [1, 3, 4], [1, 3, 5], [1, 4, 5], [2, 3, 4], [2, 3, 5], [2, 4, 5], [3, 4, 5]]
And it will work just as well if your set looks like this:
set = [
[ 'vanilla', 'cupcake' ],
[ 'chocolate', 'pudding' ],
[ 'vanilla', 'pudding' ],
[ 'chocolate', 'cookie' ],
[ 'mint', 'cookie' ]
]
A:
This is my contribution in javascript (no recursion)
set = ["q0", "q1", "q2", "q3"]
collector = []
function comb(num) {
results = []
one_comb = []
for (i = set.length - 1; i >= 0; --i) {
tmp = Math.pow(2, i)
quotient = parseInt(num / tmp)
results.push(quotient)
num = num % tmp
}
k = 0
for (i = 0; i < results.length; ++i)
if (results[i]) {
++k
one_comb.push(set[i])
}
if (collector[k] == undefined)
collector[k] = []
collector[k].push(one_comb)
}
sum = 0
for (i = 0; i < set.length; ++i)
sum += Math.pow(2, i)
for (ii = sum; ii > 0; --ii)
comb(ii)
cnt = 0
for (i = 1; i < collector.length; ++i) {
n = 0
for (j = 0; j < collector[i].length; ++j)
document.write(++cnt, " - " + (++n) + " - ", collector[i][j], "<br>")
document.write("<hr>")
}
A:
How about this answer? It prints all combinations of length 3, and it can be generalised for any length.
Working code ...
#include<iostream>
#include<string>
using namespace std;
void combination(string a,string dest){
int l = dest.length();
if(a.empty() && l == 3 ){
cout<<dest<<endl;}
else{
if(!a.empty() && dest.length() < 3 ){
combination(a.substr(1,a.length()),dest+a[0]);}
if(!a.empty() && dest.length() <= 3 ){
combination(a.substr(1,a.length()),dest);}
}
}
int main(){
string demo("abcd");
combination(demo,"");
return 0;
}
A:
yet another recursive solution (you should be able to port this to use letters instead of numbers) using a stack, a bit shorter than most though:
stack = []
def choose(n,x):
    r(0,0,n+1,x)

def r(p, c, n,x):
    if x-c == 0:
        print stack
        return
    for i in range(p, n-(x-1)+c):
        stack.append(i)
        r(i+1,c+1,n,x)
        stack.pop()
choose(4, 3), i.e. all 3-element combinations of the numbers 0 through 4:
choose(4,3)
[0, 1, 2]
[0, 1, 3]
[0, 1, 4]
[0, 2, 3]
[0, 2, 4]
[0, 3, 4]
[1, 2, 3]
[1, 2, 4]
[1, 3, 4]
[2, 3, 4]
A:
Here's a coffeescript implementation
combinations: (list, n) ->
    permuations = Math.pow(2, list.length) - 1
    out = []
    combinations = []
    while permuations
        out = []
        for i in [0..list.length]
            y = ( 1 << i )
            if( y & permuations and (y isnt permuations))
                out.push(list[i])
        if out.length <= n and out.length > 0
            combinations.push(out)
        permuations--
    return combinations
A:
Perhaps I've missed the point (that you need the algorithm and not a ready-made solution), but it seems that Scala does it out of the box (now):
def combis(str:String, k:Int):Array[String] = {
str.combinations(k).toArray
}
Using the method like this:
println(combis("abcd",2).toList)
Will produce:
List(ab, ac, ad, bc, bd, cd)
A:
Short fast C# implementation
public static IEnumerable<IEnumerable<T>> Combinations<T>(IEnumerable<T> elements, int k)
{
return Combinations(elements.Count(), k).Select(p => p.Select(q => elements.ElementAt(q)));
}
public static List<int[]> Combinations(int setLenght, int subSetLenght) //5, 3
{
var result = new List<int[]>();
var lastIndex = subSetLenght - 1;
var dif = setLenght - subSetLenght;
var prevSubSet = new int[subSetLenght];
var lastSubSet = new int[subSetLenght];
for (int i = 0; i < subSetLenght; i++)
{
prevSubSet[i] = i;
lastSubSet[i] = i + dif;
}
while(true)
{
//add subSet to result set
var n = new int[subSetLenght];
for (int i = 0; i < subSetLenght; i++)
n[i] = prevSubSet[i];
result.Add(n);
if (prevSubSet[0] >= lastSubSet[0])
break;
//start at index 1 because index 0 is checked and breaking in the current loop
int j = 1;
for (; j < subSetLenght; j++)
{
if (prevSubSet[j] >= lastSubSet[j])
{
prevSubSet[j - 1]++;
for (int p = j; p < subSetLenght; p++)
prevSubSet[p] = prevSubSet[p - 1] + 1;
break;
}
}
if (j > lastIndex)
prevSubSet[lastIndex]++;
}
return result;
}
A:
Here's a C++ solution I came up with using recursion and bit-shifting. It may work in C as well.
void r_nCr(unsigned int startNum, unsigned int bitVal, unsigned int testNum) // Should be called with arguments (2^r)-1, 2^(r-1), 2^(n-1)
{
unsigned int n = (startNum - bitVal) << 1;
n += bitVal ? 1 : 0;
for (unsigned int i = log2(testNum) + 1; i > 0; i--) // Prints combination as a series of 1s and 0s
cout << (n >> (i - 1) & 1);
cout << endl;
if (!(n & testNum) && n != startNum)
r_nCr(n, bitVal, testNum);
if (bitVal && bitVal < testNum)
r_nCr(startNum, bitVal >> 1, testNum);
}
You can find an explanation of how this works here.
A:
C# simple algorithm.
(I'm posting it since I've tried to use the one you guys uploaded, but for some reason I couldn't compile it - extending a class? so I wrote my own one just in case someone else is facing the same problem I did).
I'm not much into c# more than basic programming by the way, but this one works fine.
public static List<List<int>> GetSubsetsOfSizeK(List<int> lInputSet, int k)
{
List<List<int>> lSubsets = new List<List<int>>();
GetSubsetsOfSizeK_rec(lInputSet, k, 0, new List<int>(), lSubsets);
return lSubsets;
}
public static void GetSubsetsOfSizeK_rec(List<int> lInputSet, int k, int i, List<int> lCurrSet, List<List<int>> lSubsets)
{
if (lCurrSet.Count == k)
{
lSubsets.Add(lCurrSet);
return;
}
if (i >= lInputSet.Count)
return;
List<int> lWith = new List<int>(lCurrSet);
List<int> lWithout = new List<int>(lCurrSet);
lWith.Add(lInputSet[i++]);
GetSubsetsOfSizeK_rec(lInputSet, k, i, lWith, lSubsets);
GetSubsetsOfSizeK_rec(lInputSet, k, i, lWithout, lSubsets);
}
USAGE: GetSubsetsOfSizeK(set of type List<int>, integer k)
You can modify it to iterate over whatever you are working with.
Good luck!
A: Recursively, a very simple answer, combo, in Free Pascal.
procedure combinata (n, k :integer; producer :oneintproc);
procedure combo (ndx, nbr, len, lnd :integer);
begin
for nbr := nbr to len do begin
productarray[ndx] := nbr;
if len < lnd then
combo(ndx+1,nbr+1,len+1,lnd)
else
producer(k);
end;
end;
begin
combo (0, 0, n-k, n-1);
end;
"producer" disposes of the productarray made for each combination.
A: There is no need for collection manipulations. The problem is almost the same as cycling over K nested loops but you have to be careful with the indexes and bounds (ignoring Java and OOP stuff):
public class CombinationsGen {
private final int n;
private final int k;
private int[] buf;
public CombinationsGen(int n, int k) {
this.n = n;
this.k = k;
}
public void combine(Consumer<int[]> consumer) {
buf = new int[k];
rec(0, 0, consumer);
}
private void rec(int index, int next, Consumer<int[]> consumer) {
int max = n - index;
if (index == k - 1) {
for (int i = 0; i < max && next < n; i++) {
buf[index] = next;
next++;
consumer.accept(buf);
}
} else {
for (int i = 0; i < max && next + index < n; i++) {
buf[index] = next;
next++;
rec(index + 1, next, consumer);
}
}
}
}
Use like so:
CombinationsGen gen = new CombinationsGen(5, 2);
AtomicInteger total = new AtomicInteger();
gen.combine(arr -> {
System.out.println(Arrays.toString(arr));
total.incrementAndGet();
});
System.out.println(total);
Get expected results:
[0, 1]
[0, 2]
[0, 3]
[0, 4]
[1, 2]
[1, 3]
[1, 4]
[2, 3]
[2, 4]
[3, 4]
10
Finally, map the indexes to whatever set of data you may have.
A: Simple but slow C++ backtracking algorithm.
#include <iostream>
void backtrack(int* numbers, int n, int k, int i, int s)
{
if (i == k)
{
for (int j = 0; j < k; ++j)
{
std::cout << numbers[j];
}
std::cout << std::endl;
return;
}
if (s > n)
{
return;
}
numbers[i] = s;
backtrack(numbers, n, k, i + 1, s + 1);
backtrack(numbers, n, k, i, s + 1);
}
int main(int argc, char* argv[])
{
int n = 5;
int k = 3;
int* numbers = new int[k];
backtrack(numbers, n, k, 0, 1);
delete[] numbers;
return 0;
}
A: I made a general class for combinations in C++.
It is used like this.
char ar[] = "0ABCDEFGH";
nCr ncr(8, 3);
while(ncr.next()) {
for(int i=0; i<ncr.size(); i++) cout << ar[ncr[i]];
cout << ' ';
}
In my library, ncr[i] counts from 1, not from 0;
that's why there is a 0 at the start of the array.
If you want to consider order, just change the nCr class to nPr.
Usage is identical.
Result
ABC
ABD
ABE
ABF
ABG
ABH
ACD
ACE
ACF
ACG
ACH
ADE
ADF
ADG
ADH
AEF
AEG
AEH
AFG
AFH
AGH
BCD
BCE
BCF
BCG
BCH
BDE
BDF
BDG
BDH
BEF
BEG
BEH
BFG
BFH
BGH
CDE
CDF
CDG
CDH
CEF
CEG
CEH
CFG
CFH
CGH
DEF
DEG
DEH
DFG
DFH
DGH
EFG
EFH
EGH
FGH
Here goes the header file.
#pragma once
#include <exception>
class NRexception : public std::exception
{
public:
virtual const char* what() const throw() {
return "Combination : N, R should be positive integer!!";
}
};
class Combination
{
public:
Combination(int n, int r);
virtual ~Combination() { delete [] ar;}
int& operator[](unsigned i) {return ar[i];}
bool next();
int size() {return r;}
static int factorial(int n);
protected:
int* ar;
int n, r;
};
class nCr : public Combination
{
public:
nCr(int n, int r);
bool next();
int count() const;
};
class nTr : public Combination
{
public:
nTr(int n, int r);
bool next();
int count() const;
};
class nHr : public nTr
{
public:
nHr(int n, int r) : nTr(n,r) {}
bool next();
int count() const;
};
class nPr : public Combination
{
public:
nPr(int n, int r);
virtual ~nPr() {delete [] on;}
bool next();
void rewind();
int count() const;
private:
bool* on;
void inc_ar(int i);
};
And the implementation.
#include "combi.h"
#include <set>
#include<cmath>
Combination::Combination(int n, int r)
{
//if(n < 1 || r < 1) throw NRexception();
ar = new int[r];
this->n = n;
this->r = r;
}
int Combination::factorial(int n)
{
return n == 1 ? n : n * factorial(n-1);
}
int nPr::count() const
{
return factorial(n)/factorial(n-r);
}
int nCr::count() const
{
return factorial(n)/factorial(n-r)/factorial(r);
}
int nTr::count() const
{
return pow(n, r);
}
int nHr::count() const
{
return factorial(n+r-1)/factorial(n-1)/factorial(r);
}
nCr::nCr(int n, int r) : Combination(n, r)
{
if(r == 0) return;
for(int i=0; i<r-1; i++) ar[i] = i + 1;
ar[r-1] = r-1;
}
nTr::nTr(int n, int r) : Combination(n, r)
{
for(int i=0; i<r-1; i++) ar[i] = 1;
ar[r-1] = 0;
}
bool nCr::next()
{
if(r == 0) return false;
ar[r-1]++;
int i = r-1;
while(ar[i] == n-r+2+i) {
if(--i == -1) return false;
ar[i]++;
}
while(i < r-1) ar[i+1] = ar[i++] + 1;
return true;
}
bool nTr::next()
{
ar[r-1]++;
int i = r-1;
while(ar[i] == n+1) {
ar[i] = 1;
if(--i == -1) return false;
ar[i]++;
}
return true;
}
bool nHr::next()
{
ar[r-1]++;
int i = r-1;
while(ar[i] == n+1) {
if(--i == -1) return false;
ar[i]++;
}
while(i < r-1) ar[i+1] = ar[i++];
return true;
}
nPr::nPr(int n, int r) : Combination(n, r)
{
on = new bool[n+2];
for(int i=0; i<n+2; i++) on[i] = false;
for(int i=0; i<r; i++) {
ar[i] = i + 1;
on[i] = true;
}
ar[r-1] = 0;
}
void nPr::rewind()
{
for(int i=0; i<r; i++) {
ar[i] = i + 1;
on[i] = true;
}
ar[r-1] = 0;
}
bool nPr::next()
{
inc_ar(r-1);
int i = r-1;
while(ar[i] == n+1) {
if(--i == -1) return false;
inc_ar(i);
}
while(i < r-1) {
ar[++i] = 0;
inc_ar(i);
}
return true;
}
void nPr::inc_ar(int i)
{
on[ar[i]] = false;
while(on[++ar[i]]);
if(ar[i] != n+1) on[ar[i]] = true;
}
A: Very fast combinations for MetaTrader MQL4, implemented as an iterator object.
The code is simple to understand.
I benchmarked a lot of algorithms, and this one is really fast: about 3x faster than most next_combination() functions.
class CombinationsIterator
{
private:
int input_array[]; // 1 2 3 4 5
int index_array[]; // i j k
int m_elements; // N
int m_indices; // K
public:
CombinationsIterator(int &src_data[], int k)
{
m_indices = k;
m_elements = ArraySize(src_data);
ArrayCopy(input_array, src_data);
ArrayResize(index_array, m_indices);
// create initial combination (0..k-1)
for (int i = 0; i < m_indices; i++)
{
index_array[i] = i;
}
}
// https://stackoverflow.com/questions/5076695
// bool next_combination(int &item[], int k, int N)
bool advance()
{
int N = m_elements;
for (int i = m_indices - 1; i >= 0; --i)
{
if (index_array[i] < --N)
{
++index_array[i];
for (int j = i + 1; j < m_indices; ++j)
{
index_array[j] = index_array[j - 1] + 1;
}
return true;
}
}
return false;
}
void getItems(int &items[])
{
// fill items[] from input array
for (int i = 0; i < m_indices; i++)
{
items[i] = input_array[index_array[i]];
}
}
};
A driver program to test the above iterator class:
//+------------------------------------------------------------------+
//| |
//+------------------------------------------------------------------+
// driver program to test above class
#define N 5
#define K 3
void OnStart()
{
int myset[N] = {1, 2, 3, 4, 5};
int items[K];
CombinationsIterator comboIt(myset, K);
do
{
comboIt.getItems(items);
printf("%s", ArrayToString(items));
} while (comboIt.advance());
}
Output:
1 2 3
1 2 4
1 2 5
1 3 4
1 3 5
1 4 5
2 3 4
2 3 5
2 4 5
3 4 5
A: Here is a simple JS solution:
function getAllCombinations(n, k, f1) {
    let indexes = Array(k);
for (let i =0; i< k; i++) {
indexes[i] = i;
}
var total = 1;
f1(indexes);
while (indexes[0] !== n-k) {
total++;
getNext(n, indexes);
f1(indexes);
}
return {total};
}
function getNext(n, vec) {
const k = vec.length;
vec[k-1]++;
for (var i=0; i<k; i++) {
var currentIndex = k-i-1;
if (vec[currentIndex] === n - i) {
var nextIndex = k-i-2;
vec[nextIndex]++;
vec[currentIndex] = vec[nextIndex] + 1;
}
}
for (var i=1; i<k; i++) {
if (vec[i] === n - (k-i - 1)) {
vec[i] = vec[i-1] + 1;
}
}
return vec;
}
let start = new Date();
let result = getAllCombinations(10, 3, indexes => console.log(indexes));
let runTime = new Date() - start;
console.log({
result, runTime
});
A: I'm aware that there are a LOT of answers to this already, but I thought I'd add my own individual contribution in JavaScript, which consists of two functions - one to generate all the possible distinct k-subsets of an original n-element set, and one to use that first function to generate the power set of the original n-element set.
Here is the code for the two functions:
//Generate combination subsets from a base set of elements (passed as an array). This function should generate an
//array containing nCr elements, where nCr = n!/[r! (n-r)!].
//Arguments:
//[1] baseSet : The base set to create the subsets from (e.g., ["a", "b", "c", "d", "e", "f"])
//[2] cnt : The number of elements each subset is to contain (e.g., 3)
function MakeCombinationSubsets(baseSet, cnt)
{
var bLen = baseSet.length;
var indices = [];
var subSet = [];
    var done = false;
    var result = []; //Contains all the combination subsets generated
var i = 0;
var idx = 0;
var tmpIdx = 0;
var incr = 0;
var test = 0;
var newIndex = 0;
var inBounds = false;
var tmpIndices = [];
var checkBounds = false;
//First, generate an array whose elements are indices into the base set ...
for (i=0; i<cnt; i++)
indices.push(i);
//Now create a clone of this array, to be used in the loop itself ...
tmpIndices = [];
tmpIndices = tmpIndices.concat(indices);
//Now initialise the loop ...
idx = cnt - 1; //point to the last element of the indices array
incr = 0;
done = false;
while (!done)
{
//Create the current subset ...
subSet = []; //Make sure we begin with a completely empty subset before continuing ...
for (i=0; i<cnt; i++)
subSet.push(baseSet[tmpIndices[i]]); //Create the current subset, using items selected from the
//base set, using the indices array (which will change as we
//continue scanning) ...
//Add the subset thus created to the result set ...
result.push(subSet);
//Now update the indices used to select the elements of the subset. At the start, idx will point to the
//rightmost index in the indices array, but the moment that index moves out of bounds with respect to the
//base set, attention will be shifted to the next left index.
test = tmpIndices[idx] + 1;
if (test >= bLen)
{
//Here, we're about to move out of bounds with respect to the base set. We therefore need to scan back,
//and update indices to the left of the current one. Find the leftmost index in the indices array that
//isn't going to move out of bounds with respect to the base set ...
tmpIdx = idx - 1;
incr = 1;
inBounds = false; //Assume at start that the index we're checking in the loop below is out of bounds
checkBounds = true;
while (checkBounds)
{
if (tmpIdx < 0)
{
checkBounds = false; //Exit immediately at this point
}
else
{
newIndex = tmpIndices[tmpIdx] + 1;
test = newIndex + incr;
if (test >= bLen)
{
//Here, incrementing the current selected index will take that index out of bounds, so
//we move on to the next index to the left ...
tmpIdx--;
incr++;
}
else
{
//Here, the index will remain in bounds if we increment it, so we
//exit the loop and signal that we're in bounds ...
inBounds = true;
checkBounds = false;
//End if/else
}
//End if
}
//End while
}
        //At this point, if we're still in bounds, then we continue generating subsets, but if not, we abort immediately.
if (!inBounds)
done = true;
else
{
//Here, we're still in bounds. We need to update the indices accordingly. NOTE: at this point, although a
//left positioned index in the indices array may still be in bounds, incrementing it to generate indices to
//the right may take those indices out of bounds. We therefore need to check this as we perform the index
//updating of the indices array.
tmpIndices[tmpIdx] = newIndex;
inBounds = true;
            var checking = true;
i = tmpIdx + 1;
while (checking)
{
test = tmpIndices[i - 1] + 1; //Find out if incrementing the left adjacent index takes it out of bounds
if (test >= bLen)
{
inBounds = false; //If we move out of bounds, exit NOW ...
checking = false;
}
else
{
tmpIndices[i] = test; //Otherwise, update the indices array ...
i++; //Now move on to the next index to the right in the indices array ...
checking = (i < cnt); //And continue until we've exhausted all the indices array elements ...
//End if/else
}
//End while
}
//At this point, if the above updating of the indices array has moved any of its elements out of bounds,
//we abort subset construction from this point ...
if (!inBounds)
done = true;
//End if/else
}
}
else
{
//Here, the rightmost index under consideration isn't moving out of bounds with respect to the base set when
//we increment it, so we simply increment and continue the loop ...
tmpIndices[idx] = test;
//End if
}
//End while
}
return(result);
//End function
}
function MakePowerSet(baseSet)
{
var bLen = baseSet.length;
var result = [];
var i = 0;
var partialSet = [];
result.push([]); //add the empty set to the power set
for (i=1; i<bLen; i++)
{
partialSet = MakeCombinationSubsets(baseSet, i);
result = result.concat(partialSet);
//End i loop
}
//Now, finally, add the base set itself to the power set to make it complete ...
partialSet = [];
partialSet.push(baseSet);
result = result.concat(partialSet);
return(result);
//End function
}
I tested this with the set ["a", "b", "c", "d", "e", "f"] as the base set, and ran the code to produce the following power set:
[]
["a"]
["b"]
["c"]
["d"]
["e"]
["f"]
["a","b"]
["a","c"]
["a","d"]
["a","e"]
["a","f"]
["b","c"]
["b","d"]
["b","e"]
["b","f"]
["c","d"]
["c","e"]
["c","f"]
["d","e"]
["d","f"]
["e","f"]
["a","b","c"]
["a","b","d"]
["a","b","e"]
["a","b","f"]
["a","c","d"]
["a","c","e"]
["a","c","f"]
["a","d","e"]
["a","d","f"]
["a","e","f"]
["b","c","d"]
["b","c","e"]
["b","c","f"]
["b","d","e"]
["b","d","f"]
["b","e","f"]
["c","d","e"]
["c","d","f"]
["c","e","f"]
["d","e","f"]
["a","b","c","d"]
["a","b","c","e"]
["a","b","c","f"]
["a","b","d","e"]
["a","b","d","f"]
["a","b","e","f"]
["a","c","d","e"]
["a","c","d","f"]
["a","c","e","f"]
["a","d","e","f"]
["b","c","d","e"]
["b","c","d","f"]
["b","c","e","f"]
["b","d","e","f"]
["c","d","e","f"]
["a","b","c","d","e"]
["a","b","c","d","f"]
["a","b","c","e","f"]
["a","b","d","e","f"]
["a","c","d","e","f"]
["b","c","d","e","f"]
["a","b","c","d","e","f"]
Just copy and paste those two functions "as is", and you'll have the basics needed to extract the distinct k-subsets of an n-element set, and generate the power set of that n-element set if you wish.
I don't claim this to be elegant, merely that it works after a lot of testing (and turning the air blue during the debugging phase :) ).
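As a side note, if you ever want to sanity-check output like the above, the same k-subsets (and the power set) can be produced in a few lines of Python with itertools; this is only a cross-check sketch, not part of the JavaScript answer itself:
from itertools import combinations

base_set = ["a", "b", "c", "d", "e", "f"]

# k-subsets for a fixed k, analogous to MakeCombinationSubsets(baseSet, 3)
for subset in combinations(base_set, 3):
    print(list(subset))

# power set: every k from 0 to len(base_set), analogous to MakePowerSet(baseSet)
power_set = [list(c)
             for k in range(len(base_set) + 1)
             for c in combinations(base_set, k)]
print(len(power_set))  # 64 == 2**6, matching the 64 subsets listed above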
A: Below is an iterative algorithm in C++ that uses neither the STL, nor recursion, nor conditional nested loops. It is faster that way: it performs no element swaps and does not burden the stack with recursion. It can also be easily ported to ANSI C by substituting malloc(), free() and printf() for new, delete and std::cout, respectively.
If you want to display the elements with a different or longer alphabet then change the *alphabet parameter to point to a different string than "abcdefg".
#include <iostream>

void OutputArrayChar(unsigned int* ka, size_t n, const char *alphabet) {
    for (size_t i = 0; i < n; i++)
        std::cout << alphabet[ka[i]] << ",";
    std::cout << std::endl;
}
void GenCombinations(const unsigned int N, const unsigned int K, const char *alphabet) {
unsigned int *ka = new unsigned int [K]; //dynamically allocate an array of UINTs
unsigned int ki = K-1; //Point ki to the last elemet of the array
ka[ki] = N-1; //Prime the last elemet of the array.
while (true) {
unsigned int tmp = ka[ki]; //Optimization to prevent reading ka[ki] repeatedly
while (ki) //Fill to the left with consecutive descending values (blue squares)
ka[--ki] = --tmp;
OutputArrayChar(ka, K, alphabet);
while (--ka[ki] == ki) { //Decrement and check if the resulting value equals the index (bright green squares)
OutputArrayChar(ka, K, alphabet);
if (++ki == K) { //Exit condition (all of the values in the array are flush to the left)
delete[] ka;
return;
}
}
}
}
int main(int argc, char *argv[])
{
GenCombinations(7, 4, "abcdefg");
return 0;
}
IMPORTANT: The *alphabet parameter must point to a string with at least N characters. You can also pass an address of a string which is defined somewhere else.
Combinations: Out of "7 Choose 4".
A: Here is a simple and understandable recursive C++ solution:
#include<vector>
using namespace std;
template<typename T>
void ksubsets(const vector<T>& arr, unsigned left, unsigned idx,
vector<T>& lst, vector<vector<T>>& res)
{
if (left < 1) {
res.push_back(lst);
return;
}
for (unsigned i = idx; i < arr.size(); i++) {
lst.push_back(arr[i]);
ksubsets(arr, left - 1, i + 1, lst, res);
lst.pop_back();
}
}
int main()
{
vector<int> arr = { 1, 2, 3, 4, 5 };
unsigned left = 3;
vector<int> lst;
vector<vector<int>> res;
ksubsets<int>(arr, left, 0, lst, res);
// now res has all the combinations
}
A: There was recently a PowerShell challenge on the IronScripter website that needed an n-choose-k solution. I posted a solution there, but here is a more generic version.
*
*The AllK switch is used to control whether output is only combinations of length ChooseK, or of length 1 through ChooseK.
*The Prefix parameter is really an accumulator for the output strings, but has the effect that a value passed in for the initial call will actually prefix each line of output.
function Get-NChooseK
{
[CmdletBinding()]
Param
(
[String[]]
$ArrayN
, [Int]
$ChooseK
, [Switch]
$AllK
, [String]
$Prefix = ''
)
PROCESS
{
# Validate the inputs
$ArrayN = $ArrayN | Sort-Object -Unique
If ($ChooseK -gt $ArrayN.Length)
{
Write-Error "Can't choose $ChooseK items when only $($ArrayN.Length) are available." -ErrorAction Stop
}
# Control the output
$firstK = If ($AllK) { 1 } Else { $ChooseK }
# Get combinations
$firstK..$ChooseK | ForEach-Object {
$thisK = $_
$ArrayN[0..($ArrayN.Length-($thisK--))] | ForEach-Object {
If ($thisK -eq 0)
{
Write-Output ($Prefix+$_)
}
Else
{
Get-NChooseK -Array ($ArrayN[($ArrayN.IndexOf($_)+1)..($ArrayN.Length-1)]) -Choose $thisK -AllK:$false -Prefix ($Prefix+$_)
}
}
}
}
}
E.g.:
PS C:\>$ArrayN = 'E','B','C','A','D'
PS C:\>$ChooseK = 3
PS C:\>Get-NChooseK -ArrayN $ArrayN -ChooseK $ChooseK
ABC
ABD
ABE
ACD
ACE
ADE
BCD
BCE
BDE
CDE
A: You can use Asif's algorithm to generate all the possible combinations. It's probably the easiest and most efficient one. You can check out the Medium article here.
Let's take a look in the implementation in JavaScript.
function Combinations( arr, r ) {
// To avoid object referencing, cloning the array.
arr = arr && arr.slice() || [];
var len = arr.length;
if( !len || r > len || !r )
return [ [] ];
else if( r === len )
return [ arr ];
    if( r === 1 ) return arr.reduce( ( x, v ) => {
x.push( [ v ] );
return x;
}, [] );
var head = arr.shift();
return Combinations( arr, r - 1 ).map( x => {
x.unshift( head );
return x;
} ).concat( Combinations( arr, r ) );
}
// Now do your stuff.
console.log( Combinations( [ 'a', 'b', 'c', 'd', 'e' ], 3 ) );
A: My implementation in c/c++
#include <unistd.h>
#include <stdio.h>
#include <iconv.h>
#include <string.h>
#include <errno.h>
#include <stdlib.h>
int main(int argc, char **argv)
{
int opt = -1, min_len = 0, max_len = 0;
char ofile[256], fchar[2], tchar[2];
ofile[0] = 0;
fchar[0] = 0;
tchar[0] = 0;
while((opt = getopt(argc, argv, "o:f:t:l:L:")) != -1)
{
switch(opt)
{
case 'o':
strncpy(ofile, optarg, 255);
break;
case 'f':
strncpy(fchar, optarg, 1);
break;
case 't':
strncpy(tchar, optarg, 1);
break;
case 'l':
min_len = atoi(optarg);
break;
case 'L':
max_len = atoi(optarg);
break;
default:
printf("usage: %s -oftlL\n\t-o output file\n\t-f from char\n\t-t to char\n\t-l min seq len\n\t-L max seq len", argv[0]);
}
}
if(max_len < 1)
{
printf("error, length must be more than 0\n");
return 1;
}
if(min_len > max_len)
{
printf("error, max length must be greater or equal min_length\n");
return 1;
}
if((int)fchar[0] > (int)tchar[0])
{
printf("error, invalid range specified\n");
return 1;
}
FILE *out = fopen(ofile, "w");
if(!out)
{
        printf("failed to open output file with error: %s\n", strerror(errno));
return 1;
}
int cur_len = min_len;
while(cur_len <= max_len)
{
char buf[cur_len];
for(int i = 0; i < cur_len; i++)
buf[i] = fchar[0];
fwrite(buf, cur_len, 1, out);
fwrite("\n", 1, 1, out);
while(buf[0] != (tchar[0]+1))
{
while(buf[cur_len-1] < tchar[0])
{
                buf[cur_len-1]++;
fwrite(buf, cur_len, 1, out);
fwrite("\n", 1, 1, out);
}
if(cur_len < 2)
break;
if(buf[0] == tchar[0])
{
bool stop = true;
for(int i = 1; i < cur_len; i++)
{
if(buf[i] != tchar[0])
{
stop = false;
break;
}
}
if(stop)
break;
}
int u = cur_len-2;
for(; u>=0 && buf[u] >= tchar[0]; u--)
;
        buf[u]++;
for(int i = u+1; i < cur_len; i++)
buf[i] = fchar[0];
fwrite(buf, cur_len, 1, out);
fwrite("\n", 1, 1, out);
}
cur_len++;
}
fclose(out);
return 0;
}
Here is my implementation in C++. It writes all combinations to the specified output file, but the behaviour can be changed; I made it to generate various dictionaries. It accepts a minimum and maximum length and a character range. Currently only ANSI is supported, which is enough for my needs.
A: I'd like to present my solution. No recursive calls, nor nested loops in next.
The core of code is next() method.
public class Combinations {
final int pos[];
final List<Object> set;
public Combinations(List<?> l, int k) {
pos = new int[k];
set=new ArrayList<Object>(l);
reset();
}
public void reset() {
for (int i=0; i < pos.length; ++i) pos[i]=i;
}
public boolean next() {
int i = pos.length-1;
for (int maxpos = set.size()-1; pos[i] >= maxpos; --maxpos) {
if (i==0) return false;
--i;
}
++pos[i];
while (++i < pos.length)
pos[i]=pos[i-1]+1;
return true;
}
public void getSelection(List<?> l) {
@SuppressWarnings("unchecked")
List<Object> ll = (List<Object>)l;
if (ll.size()!=pos.length) {
ll.clear();
for (int i=0; i < pos.length; ++i)
ll.add(set.get(pos[i]));
}
else {
for (int i=0; i < pos.length; ++i)
ll.set(i, set.get(pos[i]));
}
}
}
And usage example:
public static void main(String[] args) {
List<Character> l = new ArrayList<Character>();
for (int i=0; i < 32; ++i) l.add((char)('a'+i));
Combinations comb = new Combinations(l,5);
int n=0;
do {
++n;
comb.getSelection(l);
//Log.debug("%d: %s", n, l.toString());
} while (comb.next());
Log.debug("num = %d", n);
}
A: A PowerShell solution:
function Get-NChooseK
{
<#
.SYNOPSIS
Returns all the possible combinations by choosing K items at a time from N possible items.
.DESCRIPTION
Returns all the possible combinations by choosing K items at a time from N possible items.
The combinations returned do not consider the order of items as important i.e. 123 is considered to be the same combination as 231, etc.
.PARAMETER ArrayN
The array of items to choose from.
.PARAMETER ChooseK
The number of items to choose.
.PARAMETER AllK
Includes combinations for all lesser values of K above zero i.e. 1 to K.
.PARAMETER Prefix
String that will prefix each line of the output.
.EXAMPLE
PS C:\> Get-NChooseK -ArrayN '1','2','3' -ChooseK 3
123
.EXAMPLE
PS C:\> Get-NChooseK -ArrayN '1','2','3' -ChooseK 3 -AllK
1
2
3
12
13
23
123
.EXAMPLE
PS C:\> Get-NChooseK -ArrayN '1','2','3' -ChooseK 2 -Prefix 'Combo: '
Combo: 12
Combo: 13
Combo: 23
.NOTES
Author : nmbell
#>
# Use cmdlet binding
[CmdletBinding()]
# Declare parameters
Param
(
[String[]]
$ArrayN
, [Int]
$ChooseK
, [Switch]
$AllK
, [String]
$Prefix = ''
)
BEGIN
{
}
PROCESS
{
# Validate the inputs
$ArrayN = $ArrayN | Sort-Object -Unique
If ($ChooseK -gt $ArrayN.Length)
{
Write-Error "Can't choose $ChooseK items when only $($ArrayN.Length) are available." -ErrorAction Stop
}
# Control the output
$firstK = If ($AllK) { 1 } Else { $ChooseK }
# Get combinations
$firstK..$ChooseK | ForEach-Object {
$thisK = $_
$ArrayN[0..($ArrayN.Length-($thisK--))] | ForEach-Object {
If ($thisK -eq 0)
{
Write-Output ($Prefix+$_)
}
Else
{
Get-NChooseK -Array ($ArrayN[($ArrayN.IndexOf($_)+1)..($ArrayN.Length-1)]) -Choose $thisK -AllK:$false -Prefix ($Prefix+$_)
}
}
}
}
END
{
}
}
E.g.:
PS C:\>Get-NChooseK -ArrayN 'A','B','C','D','E' -ChooseK 3
ABC
ABD
ABE
ACD
ACE
ADE
BCD
BCE
BDE
CDE
There was a challenge posted recently on the IronScripter website similar to this question, where you can find links to mine and some other solutions.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "640"
}
|
Q: Catching Exceptions in a Spawned Process I'm using VS2008 to debug an application that starts a new process. I believe that the spawned process is suffering (and handling) some kind of CLR exception during its start-up, but it is not being caught by turning on CLR Exception Notification in Debug -> Exceptions. Any suggestions on how I can see where the exception is generated? I would normally just attach to the newly spawned process, but since the exception is occurring at start-up, there isn't enough time to do it.
A: You can add a call to Debugger.Launch() in your process startup code. This will launch a debugger (typically giving you the choice of using the running copy of VS2008 or a new copy) attached to the process. The same trick is handy for debugging Service startup issues.
A: Another trick that's worth considering is to use "Image File Execution Options", take a look at this post on blogs.msdn.com: http://blogs.msdn.com/greggm/archive/2005/02/21/377663.aspx as this doesn't require any changes to be made to the child executable or parent executable.
A: If the process fails during startup, then CreateProcess should return an error code. Check the error code.
If the process fails directly after startup, then check the process's return code, along with its documentation, logs, etc.
A: Well, you could log the error. But that doesn't allow you to look at it. To do that you might consider putting a serious delay (or infinite loop) in the exception handler. That will give you all the time you need to attach to the process and debug it. Just make sure that you remove it before you go into production!!
A: If you have control over this process code, use Debugger.Launch().
If not, try:
Just start this process from the command line and see the output. If there is an unhandled exception, it'll be shown.
If it does not show anything, use the command line debugger, and use the command ca[tch].
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Why does "piping" a CharBuffer hang? Why does the following method hang?
public void pipe(Reader in, Writer out) {
CharBuffer buf = CharBuffer.allocate(DEFAULT_BUFFER_SIZE);
while( in.read(buf) >= 0 ) {
out.append(buf.flip());
}
}
A: Answering my own question: you have to call buf.clear() between reads. Presumably, read is hanging because the buffer is full. The correct code is
public void pipe(Reader in, Writer out) {
CharBuffer buf = CharBuffer.allocate(DEFAULT_BUFFER_SIZE);
while( in.read(buf) >= 0 ) {
out.append(buf.flip());
buf.clear();
}
}
A: I would assume that it is a deadlock. The in.read(buf) locks the CharBuffer and prevents the out.append(buf) call.
That is assuming that CharBuffer uses locks (of some kind) in the implementation. What does the API say about the class CharBuffer?
Edit: Sorry, some kind of short circuit in my brain... I confused it with something else.
A: CharBuffers don't work with Readers and Writers as cleanly as you might expect. In particular, there is no Writer.append(CharBuffer buf) method. The method called by the question snippet is Writer.append(CharSequence seq), which just calls seq.toString(). The CharBuffer.toString() method does return the string value of the buffer, but it doesn't drain the buffer. The subsequent call to Reader.read(CharBuffer buf) gets an already full buffer and therefore returns 0, forcing the loop to continue indefinitely.
Though this feels like a hang, it is in fact appending the first read's buffer contents to the writer every pass through the loop. So you'll either start to see a lot of output in your destination or the writer's internal buffer will grow, depending on how the writer is implemented.
As annoying as it is, I'd recommend a char[] implementation if only because the CharBuffer solution winds up building at least two new char[] every pass through the loop.
public void pipe(Reader in, Writer out) throws IOException {
char[] buf = new char[DEFAULT_BUFFER_SIZE];
int count = in.read(buf);
while( count >= 0 ) {
out.write(buf, 0, count);
count = in.read(buf);
}
}
I'd recommend only using this if you need to support converting between two character encodings, otherwise a ByteBuffer/Channel or byte[]/IOStream implementation would be preferable even if you're piping characters.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do I validate that a list box is not empty (client side) I'm working with ASP.NET 3.5.
I have a list box that users must add items to (I've written the code for this). My requirement is that at least one item must be added to the listbox or they cannot submit the form. I have several other validators on the page and they all write to a ValidationSummary control. I would like this listbox validation to write to the Validation Summary control as well. Any help is greatly appreciated. Thank you.
A: Drop in a custom validator, add your desired error message to it, and double-click on the custom validator to get to the code-behind for the event handler. Then you would implement the server side like this:
protected void CustomValidator1_ServerValidate(object source, ServerValidateEventArgs args)
{
args.IsValid = ListBox1.Items.Count > 0;
}
You can implement client-side JavaScript for this as well.
I just threw this up on a page and tested it quickly, so you might need to tweak it a bit (Button1 only adds an item to the ListBox):
<script language="JavaScript">
<!--
function ListBoxValid(sender, args)
{
args.IsValid = sender.options.length > 0;
}
// -->
</script>
<asp:ListBox ID="ListBox1" runat="server"></asp:ListBox>
<asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
<asp:Button ID="Button1" runat="server" onclick="Button1_Click" Text="Button" ValidationGroup="NOVALID" />
<asp:Button ID="Button2" runat="server" Text="ButtonsUBMIT" />
<asp:CustomValidator ID="CustomValidator1" runat="server"
ErrorMessage="CustomValidator"
onservervalidate="CustomValidator1_ServerValidate" ClientValidationFunction="ListBoxValid"></asp:CustomValidator>
If you add a validation summary to the page, your error text should show up in that summary if there are no items in the ListBox (or whatever collection-capable control you want to use), as long as the ValidationGroup is the same.
A: This didn't work for me:
function ListBoxValid(sender, args)
{
args.IsValid = sender.options.length > 0;
}
But this did:
function ListBoxValid(sender, args)
{
var ctlDropDown = document.getElementById(sender.controltovalidate);
args.IsValid = ctlDropDown.options.length > 0;
}
A: gotta make sure to add these properties to the CustomValidator:
Display="Dynamic" ValidateEmptyText="True"
A: <asp:CustomValidator
runat="server"
ControlToValidate="listbox1"
ErrorMessage="Add some items yo!"
ClientValidationFunction="checkListBox"
/>
<script type="Text/JavaScript">
function checkListBox(sender, args)
{
args.IsValid = sender.options.length > 0;
}
</script>
A: Actually this is the proper way to make this work (as far as the JavaScript is concerned).
ListBox.options.length will always be your total number of options, not the number you have selected. The only way I have found that works is to use a for loop to go through the list.
function ListBoxValid(sender, args)
{
var listBox = document.getElementById(sender.controltovalidate);
var listBoxCnt = 0;
for (var x =0; x<listBox.options.length; x++)
{
if (listBox.options[x].selected) listBoxCnt++;
}
args.IsValid = (listBoxCnt>0)
}
A: This works for me:
<script language="JavaScript">
function CheckListBox(sender, args)
{
args.IsValid = document.getElementById("<%=ListBox1.ClientID%>").options.length > 0;
}
</script>
<asp:ListBox ID="ListBox1" runat="server"></asp:ListBox>
<asp:CustomValidator ID="CustomValidator1" runat="server"
ErrorMessage="*Required" ClientValidationFunction="CheckListBox"></asp:CustomValidator>
A: You will want to register your control with the page by sending in the ClientID. Then, you can use Microsoft AJAX to grab your control and check the values.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How to implement a Decorator with non-local equality? Greetings, currently I am refactoring one of my programs, and I found an interesting problem.
I have Transitions in an automaton. Transitions always have a start-state and an end-state. Some Transitions have a label, which encodes a certain Action that must be performed upon traversal. No label means no action. Some transitions have a condition, which must be fulfilled in order to traverse the transition; if there is no condition, the transition is basically an epsilon-transition in an NFA and will be traversed without consuming an input symbol.
I need the following operations:
*
*check if the transition has a label
*get this label
*add a label to a transition
*check if the transition has a condition
*get this condition
*check for equality
Judging from the first five points, this sounds like a clear decorator, with a base transition and two decorators: Labeled and Condition. However, this approach has a problem: two transitions are considered equal if their start-state and end-state are the same, the labels at both transitions are equal (or not existing) and both conditions are the same (or not existing). With a decorator, I might have two transitions Labeled("foo", Conditional("bar", Transition("baz", "qux"))) and Conditional("bar", Labeled("foo", Transition("baz", "qux"))) which need a non-local equality; that is, the decorators would need to collect all the data and the Transition must compare this collected data on a set basis:
class Transition(object):
def __init__(self, start, end):
self.start = start
self.end = end
def get_label(self):
return None
def has_label(self):
return False
def collect_decorations(self, decorations):
return decorations
    def internal_equality(self, my_decorations, other):
        try:
            return (self.start == other.start
                and self.end == other.end
                and my_decorations == other.collect_decorations({}))
        except AttributeError:
            return False
    def __eq__(self, other):
        return self.internal_equality(self.collect_decorations({}), other)
class Labeled(object):
def __init__(self, label, base):
self.base = base
self.label = label
def has_label(self):
return True
def get_label(self):
return self.label
def collect_decorations(self, decorations):
assert 'label' not in decorations
decorations['label'] = self.label
return self.base.collect_decorations(decorations)
def __getattr__(self, attribute):
        return getattr(self.base, attribute)
Is this a clean approach? Am I missing something?
I am mostly confused, because I can solve this - with longer class names - using cooperative multiple inheritance:
class Transition(object):
def __init__(self, **kwargs):
# init is pythons MI-madness ;-)
super(Transition, self).__init__(**kwargs)
self.start = kwargs['start']
self.end = kwargs['end']
def get_label(self):
return None
def get_condition(self):
return None
def __eq__(self, other):
try:
return self.start == other.start and self.end == other.end
except AttributeError:
return False
class LabeledTransition(Transition):
def __init__(self, **kwargs):
        super(LabeledTransition, self).__init__(**kwargs)
self.label = kwargs['label']
def get_label(self):
return self.label
    def __eq__(self, other):
super_result = super(LabeledTransition, self).__eq__(other)
try:
return super_result and self.label == other.label
except AttributeError:
return False
class ConditionalTransition(Transition):
def __init__(self, **kwargs):
super(ConditionalTransition, self).__init__(**kwargs)
self.condition = kwargs['condition']
def get_condition(self):
return self.condition
def __eq__(self, other):
super_result = super(ConditionalTransition, self).__eq__(other)
try:
            return super_result and self.condition == other.condition
except AttributeError:
return False
class LabeledConditionalTransition(LabeledTransition, ConditionalTransition):
pass
the class LabeledConditionalTransition behaves exactly as expected - and having no code in there is appealing, and I do not think MI is confusing at this size.
Of course, the third option would be to just hammer everything into a single transition class with a bunch of has_label/has_condition checks.
So... I am confused. Am I missing something? Which implementation looks better? How do you handle similar cases, that is, objects which look like a Decorator could handle them, but then, such a non-local method comes around?
EDIT:
Added the ConditionalTransition-class. Basically, this kinda behaves like the decorator, minus the order created by the order of creating the decorators, the transition checks for start and end being correct, the LabeledTransition-class checks for label being correct and ConditionalTransition checks for condition being correct.
A: I think it's clear that nobody really understands your question. I would suggest putting it in context and making it shorter. As an example, here's one possible implementation of the state pattern in Python; please study it to get an idea.
class State(object):
def __init__(self, name):
self.name = name
def __repr__(self):
return self.name
class Automaton(object):
def __init__(self, instance, start):
self._state = start
self.transitions = instance.transitions()
def get_state(self):
return self._state
def set_state(self, target):
transition = self.transitions.get((self.state, target))
if transition:
action, condition = transition
if condition:
if condition():
if action:
action()
self._state = target
else:
self._state = target
else:
self._state = target
state = property(get_state, set_state)
class Door(object):
open = State('open')
closed = State('closed')
def __init__(self, blocked=False):
self.blocked = blocked
def close(self):
print 'closing door'
def do_open(self):
print 'opening door'
def not_blocked(self):
return not self.blocked
def transitions(self):
return {
(self.open, self.closed):(self.close, self.not_blocked),
(self.closed, self.open):(self.do_open, self.not_blocked),
}
if __name__ == '__main__':
door = Door()
automaton = Automaton(door, door.open)
print 'door is', automaton.state
automaton.state = door.closed
print 'door is', automaton.state
automaton.state = door.open
print 'door is', automaton.state
door.blocked = True
automaton.state = door.closed
print 'door is', automaton.state
the output of this programm would be:
door is open
closing door
door is closed
opening door
door is open
door is open
A: From the code that was posted, the only difference between Transition and LabeledTransition is the return value of get_label() and has_label(). In that case you can compress these two into a single class that sets a label attribute to None and
return self.label is not None
in the has_label() function.
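For illustration, here is a minimal sketch of what that merged class could look like; the label keyword argument and the __eq__ body are my reading of the idea, not code from the question:
class Transition(object):
    def __init__(self, start, end, label=None):
        self.start = start
        self.end = end
        self.label = label  # None means "no label"
    def has_label(self):
        # a label exists only if one was supplied at construction time
        return self.label is not None
    def get_label(self):
        return self.label
    def __eq__(self, other):
        try:
            # None == None for two unlabeled transitions, so the
            # decorator-ordering problem never arises
            return (self.start == other.start
                and self.end == other.end
                and self.label == other.label)
        except AttributeError:
            return False
The same trick extends naturally to the condition attribute.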
Can you post the code for the ConditionalTransition class? I think this would make it clearer.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Can anyone explain this PHP code using json_encode and json_decode? $a = '{ "tag": "<b></b>" }';
echo json_encode( json_decode($a) );
This outputs:
{"tag":"<b><\/b>"}
when you would think it would output exactly the input. For some reason json_encode adds an extra slash.
A: use this:
echo json_encode($a,JSON_HEX_TAG)
Result will be:
["\u003C\u003E"]
You can read this article to improve your knowledge about JSON_ENCODE
http://php.net/manual/en/function.json-encode.php
A: That's probably a security feature. The escaped version (i.e. the output) is parsed to the same string as the unescaped version by JavaScript (e.g. \/ becomes /). With the slash escaped like that, there is less chance of the browser misinterpreting the JavaScript string as HTML. Of course, if you treat the data correctly, this shouldn't be needed, so it's more a safeguard against a clueless programmer messing things up for himself.
A: Your input is not valid JSON, but PHP's JSON parser (like most JSON parsers) will parse it anyway.
A: Because it's part of the JSON standard
http://json.org/
char
any-Unicode-character-
except-"-or-\-or-
control-character
\"
\\
\/ <---- see here?
\b
\f
\n
\r
\t
\u four-hex-digits
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Java idiom for "piping" Is there a more concise/standard idiom (e.g., a JDK method) for "piping" an input to an output in Java than the following?
public void pipe(Reader in, Writer out) {
CharBuffer buf = CharBuffer.allocate(DEFAULT_BUFFER_SIZE);
while (in.read(buf) >= 0 ) {
out.append(buf.flip());
buf.clear();
}
}
[EDIT] Please note the Reader and Writer are given. The correct answer will demonstrate how to take in and out and form a pipe (preferably with no more than 1 or 2 method calls). I will accept answers where in and out are an InputStream and an OutputStream (preferably with a conversion from/to Reader/Writer). I will not accept answers where either in or out is a subclass of Reader/InputStream or Writer/OutputStrem.
A: IOUtils from the Apache Commons project has a number of utility methods that do exactly what you need.
IOUtils.copy(in, out) will perform a buffered copy of all input to the output. If there is more than one spot in your codebase that requires Stream or Reader/Writer handling, using IOUtils could be a good idea.
A: Take a look at java.io.PipedInputStream and PipedOutputStream, or PipedReader/PipedWriter from the same package.
From the Documentation of PipedInputStream:
A piped input stream should be connected to a piped output stream; the piped input stream then provides whatever data bytes are written to the piped output stream. Typically, data is read from a PipedInputStream object by one thread and data is written to the corresponding PipedOutputStream by some other thread. Attempting to use both objects from a single thread is not recommended, as it may deadlock the thread. The piped input stream contains a buffer, decoupling read operations from write operations, within limits. A pipe is said to be broken if a thread that was providing data bytes to the connected piped output stream is no longer alive.
A: The only optimization available is through FileChannels in the NIO API (the transferTo/transferFrom reads and writes). The JVM can optimize such a call to move the data from a file to a destination channel without first having to copy the data into user space. See this article for details.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Php debugging with Aptana Studio and Xdebug or Zend debugger on OS X Have you managed to get Aptana Studio debugging to work? I tried following this, but I don't see Windows -> Preferences -> Aptana -> Editors -> PHP -> PHP Interpreters in my menu (I have the PHP plugin installed), and any attempt to set up the servers menu gives me a "socket error" when I try to debug. Xdebug is installed, confirmed through phpinfo().
A: I've been using ZendDebugger with Eclipse (on OS X) for a while now and it works great!
Here's the recipe that's worked well for me.
*
*install Eclipse PDT via "All in one" package at: http://www.zend.com/en/community/pdt
*install ZendDebugger.so (http://www.zend.com/en/community/pdt)
*configure your php.ini w/ the ZendDebugger extenssion (info below)
Configuring ZendDebugger:
*
*edit php.ini
*add the following:
[Zend]
zend_extension=/full/path/to/ZendDebugger.so
zend_debugger.allow_hosts=127.0.0.1
zend_debugger.expose_remotely=always
zend_debugger.connector_port=10013
Now run "php -m" on the command line to output all the installed modules. If you see the following then it's installed just fine:
[Zend Modules]
Zend Debugger
Now restart Apache so that it reloads PHP with the ZendDebugger. Create a dummy page with phpinfo() in it and examine the output to make sure the PHP Apache module picked up ZendDebugger as well. If it's set up right you will see something like the following text somewhere in phpinfo()'s output.
with Zend Debugger v5.2.14, Copyright (c) 1999-2008, by Zend Technologies
OK - but you wanted Aptana Studio... at this point I install the Aptana Studio Plugin into the PDT build of Eclipse. The instructions for that are at: http://www.aptana.com/docs/index.php/Plugging_Aptana_into_an_existing_Eclipse_configuration
That setup has served me well for a while - hopefully it helps you too
-Arin
A: This is not related to Aptana Studio, but if you are looking for a PHP XDebug debugger client on OS X, you can try MacGDBp (Free/GPL).
A: I realize that this is a old thread but I was having the same problem with Aptana Studio 3 and FireFox. If anyone is having this problem make sure that FireFox has FireBug V1.8.X installed, any other version might give you the same problem...
Hope this helps
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How do you pre allocate memory to a process in solaris? My problem is:
I have a Perl script which uses a lot of memory (expected behaviour because of caching). But I noticed that the more I cache, the slower it gets, and the process spends most of the time in sleep mode.
I thought pre-allocating memory to the process might speed up the performance.
Does someone have any ideas here?
Update:
I think I am not being very clear here, so I will put the question a clearer way:
I am not looking for ways of pre-allocating inside the Perl script; I don't think that would help me much here. What I am interested in is a way to tell the OS to allocate X amount of memory for my Perl script so that it does not have to compete with other processes coming in later.
Assume that I can't get away from the memory usage, although I am exploring ways of reducing that too; I just don't expect much improvement there.
FYI, I am working on a Solaris 10 machine.
A: What I gathered from your posting and comments is this:
*
*Your program gets slow when memory use rises
*Your program increasingly spends time sleeping, not computing.
Most likely explanation: sleeping means waiting for a resource to become available. In this case the resource most likely is memory. Use the vmstat 1 command to verify. Have a look at the sr column: if it consistently goes beyond ~150, the system is desperate to free pages to satisfy demand. This is accompanied by high activity in the pi, po and fr columns.
If this is in fact the case, your best choices are:
*
*Upgrade system memory to meet demand
*Reduce memory usage to a level appropriate for the system at hand.
Preallocating memory will not help. In either case memory demand will exceed the available main memory at some point. The kernel will then have to decide which pages need to be in memory now and which pages may be cleared and reused for the more urgently needed pages. If the regularly needed pages (the working set) exceed the size of main memory, the system is constantly moving pages from and to secondary storage (swap). The system is then said to be thrashing and spends not much time doing useful work. There is nothing you can do about this except adding memory or using less of it.
A: From a comment:
The memory limitations are not very severe but the memory footprint easily grows to GBs and when we have competing processes for memory, it gets very slow. I want to reserve some memory from OS so that thrashing is minimal even when too many other processes come. Jagmal
Let's take a different tack then. The problem isn't really with your Perl script in particular. Instead, all the processes on the machine are consuming too much memory for the machine to handle as configured.
You can "reserve" memory, but that won't prevent thrashing. In fact, it could make the problem worse because the OS won't know if you are using the memory or just saving it for later.
I suspect you are suffering the tragedy of the commons. Am I right that many other users are on the machine in question? If so, this is more of a social problem than a technical problem. What you need is someone (probably the System Administrator) to step in and coordinate all the processes on the machine. They should find the most extravagant memory hogs and work with their programmers to reduce the cost on system resources. Further, they ought to arrange for processes to be scheduled so that resource allocation is efficient. Finally, they may need to get more or improved hardware to handle the expected system load.
A: Some questions you might ask yourself:
*
*are my data structures really useful for the task at hand?
*do I really have to cache that much?
*can I throw away cached data after some time?
A: Look at http://metacpan.org/pod/Devel::Size
You could also inline a c function to do the above.
As far as I know, you cannot allocate memory directly from Perl. You can get around this by writing an XS module, or using an inline C function like I mentioned.
A: my @array;
$#array = 1_000_000; # pre-extend array to one million elements,
# http://perldoc.perl.org/perldata.html#Scalar-values
my %hash;
keys(%hash) = 8192; # pre-allocate hash buckets
# (same documentation section)
Not being familiar with your code, I'll venture some wild speculation here [grin] that these techniques aren't going to offer new great efficiencies to your script, but that the pre-allocation could help a little bit.
Good luck!
-- Douglas Hunter
A: I recently rediscovered an excellent Randal L. Schwartz article that includes preallocating an array. Assuming this is your problem, you can test preallocating with a variation on that code. But be sure to test the result.
The reason the script gets slower with more caching might be thrashing. Presumably the reason for caching in the first place is to increase performance. So a quick answer is: reduce caching.
Now there may be ways to modify your caching scheme so that it uses less main memory and avoids thrashing. For instance, you might find that caching to a file or database instead of to memory can boost performance. I've found that file system and database caching can be more efficient than application caching and can be shared among multiple instances.
Another idea might be to alter your algorithm to reduce memory usage in other areas. For instance, instead of pulling an entire file into memory, Perl programs tend to work better reading line by line.
Finally, have you explored the Memoize module? It might not be immediately applicable, but it could be a source of ideas.
A: I could not find a way to do this yet.
But, I found out that (See this for details)
Memory allocated to lexicals (i.e.
my() variables) cannot be reclaimed or
reused even if they go out of scope.
It is reserved in case the variables
come back into scope. Memory allocated
to global variables can be reused
(within your program) by using
undef()ing and/or delete().
So, I believe a possibility here could be to check if i can reduce the total memory print of lexical variables at a given point in time.
A: It sounds like you are looking for limit or ulimit. But I suspect that will cause a script that goes over the limit to fail, which probably isn't what you want.
A better idea might be to share cached data between processes. Putting data in a database or in a file works well in my experience.
I hate to say it, but if your memory limitations are this severe, Perl is probably not the right language for this application. C would be a better choice, I'd think.
A: One thing you could do is to use Solaris zones (containers).
You could put your process in a zone and allocate it resources like RAM and CPUs.
Here are two links to some tutorials:
*
*Solaris Containers How To Guide
*Zone Resource Control in the Solaris 10 08/07 OS
A: While it's not pre-allocating as you asked for, you may also want to look at the large page size options, so that when Perl has to ask the OS for more memory for your program, it gets it in larger chunks.
See Solaris Internals: Multiple Page Size Support for more information on the difference this makes and how to do it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Where can you find the C# Language Specifications? Where can I find the specifications for the various C# languages?
(EDIT: it appears people voted down because you could 'google' this, however, my original intent was to put an answer with information not found on google. I've accepted the answer with the best google results, as they are relevant to people who haven't paid for VS)
A: Found using Google:
http://msdn.microsoft.com/en-us/vcsharp/aa336809.aspx
A: The C# language is an ISO standard and as such the specification can be had from the ISO website at: http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=42926
You can also acquire the specifications direct from Microsoft (.Doc Warning) http://download.microsoft.com/download/3/8/8/388e7205-bc10-4226-b2a8-75351c669b09/CSharp%20Language%20Specification.doc (.Doc Warning)
A: Microsoft's version (probably what you want)
The formal standardised versions (via ECMA, created just so they could say it was "standardised" by some external body. Even though ECMA "standards" are effectively "Insert cash, vend standard").
Further ECMA standards
A: If you have Visual Studio 2005 or 2008, they are already on your machine!
For 2005 (English):
.\Microsoft Visual Studio 8\VC#\Specifications\1033
For 2008 (English):
.\Microsoft Visual Studio 9.0\VC#\Specifications\1033
For 2010 (English):
.\Microsoft Visual Studio 10.0\VC#\Specifications\1033
For 2012 (English):
.\Microsoft Visual Studio 11.0\VC#\Specifications\1033
A: From: http://msdn.microsoft.com/en-us/vcsharp/aa336809.aspx
In .doc format:
http://download.microsoft.com/download/3/8/8/388e7205-bc10-4226-b2a8-75351c669b09/CSharp%20Language%20Specification.doc
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: Is every DDL SQL command reversible? [database version control] I want to setup a mechanism for tracking DB schema changes, such the one described in this answer:
For every change you make to the
database, you write a new migration.
Migrations typically have two methods:
an "up" method in which the changes
are applied and a "down" method in
which the changes are undone. A single
command brings the database up to
date, and can also be used to bring
the database to a specific version of
the schema.
My question is the following: Is every DDL command in an "up" method reversible? In other words, can we always provide a "down" method? Can you imagine any DDL command that can not be "down"ed?
Please, do not consider the typical data migration problem where during the "up" method we have loss of data: e.g. changing a field type from datetime (DateOfBirth) to int (YearOfBirth) we are losing data that can not be restored.
A: In SQL Server, every DDL command that I know of has an up/down pair.
A: Other than loss of data, every migration I've ever done is reversible. That said, Rails offers a way to mark a migration as "destructive":
Some transformations are destructive
in a manner that cannot be reversed.
Migrations of that kind should raise
an ActiveRecord::IrreversibleMigration
exception in their down method.
See the API documentation here.
A: Yes, you've identified cases where you lose data, either by transforming it or simply DROP COLUMN in the "up" migration.
Another example is that you could drop a SEQUENCE object, thus losing its state. The "down" migration would recreate the sequence, but it would start over at 1. This could cause duplicate values to be generated by the sequence. Not a problem if you're performing a migration on an empty database, and you want the sequence to start at 1 anyway, but if you have some number of rows of data, you'd want the sequence to be reset to the greatest value currently in use, which is hard to do reliably, unless you have an exclusive lock on that table.
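To make that concrete, here is a rough sketch of such an up/down pair in Python, assuming a DB-API cursor and PostgreSQL-style SQL; the table and sequence names are invented for the example:
def up(cur):
    # the sequence's current value is lost here; that is the irreversible part
    cur.execute("DROP SEQUENCE invoice_seq")

def down(cur):
    # recreate the sequence, then restart it past the largest value in use;
    # without an exclusive lock on the table this is still racy, as noted above
    cur.execute("CREATE SEQUENCE invoice_seq")
    cur.execute("SELECT COALESCE(MAX(invoice_no), 0) + 1 FROM invoices")
    next_val = cur.fetchone()[0]
    cur.execute("ALTER SEQUENCE invoice_seq RESTART WITH %d" % next_val)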
Any other DDL that is dependent on the state of data in the database has similar problems. That's probably not a good schema design in the first place, I'm just trying to think of any cases that fit your question.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Child Control Initialization in Custom Composite in ASP.NET Part of the series of controls I am working on obviously involves me lumping some of them together in to composites. I am rapidly starting to learn that this takes consideration (this is all new to me!) :)
I basically have a StyledWindow control, which is essentially a glorified Panel with ability to do other bits (like add borders etc).
Here is the code that instantiates the child controls within it. Up till this point it seems to have been working correctly with mundane static controls:
protected override void CreateChildControls()
{
_panel = new Panel();
if (_editable != null)
_editable.InstantiateIn(_panel);
_regions = new List<IAttributeAccessor>();
_regions.Add(_panel);
}
The problems came today when I tried nesting a more complex control within it. This control uses a reference to the page, since it injects JavaScript to make it a bit more snappy and responsive (RegisterClientScriptBlock is the only reason I need the page ref).
Now, this was causing "object null" errors, but I localized this down to the render method, which was of course trying to call the method against the [null] Page object.
What's confusing me is that the control works fine as a standalone, but when placed in the StyledWindow it all goes horribly wrong!
So, it looks like I am missing something in either my StyledWindow or ChildControl. Any ideas?
Update
As Brad Wilson quite rightly pointed out, you do not see the controls being added to the Controls collection. This is what the _panel is for, this was there to handle that for me, basically then override Controls (I got this from a guide somewhere):
Panel _panel; // Sub-Control to store the "Content".
public override ControlCollection Controls
{
get
{
EnsureChildControls();
return _panel.Controls;
}
}
I hope that helps clarify things. Apologies.
Update Following Longhorn213's Answer
Right, I have been doing some playing with the control, placing one within the composite and one outside. I then got the status of Page at every major event in the control lifecycle and rendered it to the page.
The standalone is working fine and the page is inited as expected. However, the one nested in the composite is different. Its OnLoad event is not being fired at all! So I am guessing Brad is probably right in that I am not setting up the control hierarchy correctly. Can anyone offer some advice as to what I am missing? Is the Panel method not enough? (Well, it obviously isn't, is it?!) :D
Thanks for your help guys, appreciated :)
A: I don't see you adding your controls to the Controls collection anywhere, which would explain why they can't access the Page (since they've never been officially placed on the page).
A: I have always put the JavaScript calls on the OnLoad Function. Such as below.
protected override void OnLoad(EventArgs e)
{
// Do something to get the script
string script = GetScript();
this.Page.ClientScript.RegisterClientScriptBlock(this.Page.GetType(), "SomeJavaScriptName", script);
// Could also use this function to determine if the script has been register. i.e. more than 1 of the controls exists
this.Page.ClientScript.IsClientScriptBlockRegistered("SomeJavaScriptName");
base.OnLoad(e);
}
If you still want to do the render, then you can just write the script in the response. Which is what the RegisterScriptBlock does, it just puts the script inline on the page.
A: Solved!
Right, I was determined to get this cracked today! Here were my thoughts:
*
*I thought the use of Panel was a bit of a hack, so I should remove it and find out how it is really done.
*I didn't want to have to do something like MyCtl.Controls[0].Controls to access the controls added to the composite.
*I wanted the damn thing to work!
So, I got searching and hit MSDN; this article was REALLY helpful (i.e. almost copy 'n' paste, and explained well - something MSDN is traditionally bad at). Nice!
So, I ripped out the use of Panel and pretty much followed the article and took it as gospel, making notes as I went.
Here's what I have now:
*
*I learned I was using the wrong term. I should have been calling it a Templated Control. While templated controls are technically composites, there is a distinct difference. Templated controls can define the interface for items that are added to them.
*Templated controls are very powerful and actually pretty quick and easy to set up once you get your head round them!
*I will play some more with the designer support to ensure I fully understand it all, then get a blog post up :)
*A "Template" control is used to specify the interface for templated data.
For example, here is the ASPX markup for a templated control:
<cc1:TemplatedControl ID="MyCtl" runat="server">
<Template>
<!-- Templated Content Goes Here -->
</Template>
</cc1:TemplatedControl>
Heres the Code I Have Now
public class DummyWebControl : WebControl
{
// Acts as the surrogate for the templated controls.
// This is essentially the "interface" for the templated data.
}
In TemplateControl.cs...
ITemplate _template;
// Surrogate to hold the controls instantiated from
// within the template.
DummyWebControl _owner;
protected override void CreateChildControls()
{
// Note we are calling base.Controls here
// (you will see why in a min).
base.Controls.Clear();
_owner = new DummyWebControl();
// Load the Template Content
ITemplate template = _template;
if (template == null)
template = new StyledWindowDefaultTemplate();
template.InstantiateIn(_owner);
base.Controls.Add(_owner);
ChildControlsCreated = true;
}
Then, to provide easy access to the Controls of the [Surrogate] Object:
(this is why we needed to clear/add to the base.Controls)
public override ControlCollection Controls
{
get
{
EnsureChildControls();
return _owner.Controls;
}
}
And that is pretty much it, easy when you know how! :)
Next: Design Time Region Support!
A: Right, I got playing and I figured that there was something wrong with my control instantiation, since Longhorn was right, I should be able to create script references at OnLoad (and I couldn't), and Brad was right in that I need to ensure my Controls hierarchy was maintained by adding to the Controls collection of the composite.
So, I had two things here:
*
*I had overridden the Controls property accessor for the composite to return this Panel's Controls collection, since I don't want to have to go ctl.Controls[0].Controls[0] to get to the actual control I want. I have removed this, but I need to get this sorted.
*I had not added the Panel to the Controls collection, I have now done this.
So, it now works, however, how do I get the Controls property for the composite to return the items in the Panel, rather than the Panel itself?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How do I parse an ISO 8601-formatted date? I need to parse RFC 3339 strings like "2008-09-03T20:56:35.450686Z" into Python's datetime type.
I have found strptime in the Python standard library, but it is not very convenient.
What is the best way to do this?
A: I've coded up a parser for the ISO 8601 standard and put it on GitHub: https://github.com/boxed/iso8601. This implementation supports everything in the specification except for durations, intervals, periodic intervals, and dates outside the supported date range of Python's datetime module.
Tests are included! :P
A: Try the iso8601 module; it does exactly this.
There are several other options mentioned on the WorkingWithTime page on the python.org wiki.
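For example, a quick sketch (assuming the package is installed, e.g. via pip install iso8601):
import iso8601
iso8601.parse_date("2008-09-03T20:56:35.450686Z")
# -> datetime.datetime(2008, 9, 3, 20, 56, 35, 450686) with a UTC tzinfo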
A: This works for stdlib on Python 3.2 onwards (assuming all the timestamps are UTC):
from datetime import datetime, timezone, timedelta
datetime.strptime(timestamp, "%Y-%m-%dT%H:%M:%S.%fZ").replace(
    tzinfo=timezone(timedelta(0)))
For example,
>>> datetime.utcnow().replace(tzinfo=timezone(timedelta(0)))
... datetime.datetime(2015, 3, 11, 6, 2, 47, 879129, tzinfo=datetime.timezone.utc)
A: I'm the author of iso8601utils. It can be found on GitHub or on PyPI. Here's how you can parse your example:
>>> from iso8601utils import parsers
>>> parsers.datetime('2008-09-03T20:56:35.450686Z')
datetime.datetime(2008, 9, 3, 20, 56, 35, 450686)
A: One straightforward way to convert an ISO 8601-like date string to a UNIX timestamp or datetime.datetime object in all supported Python versions without installing third-party modules is to use the date parser of SQLite.
#!/usr/bin/env python
from __future__ import with_statement, division, print_function
import sqlite3
import datetime
testtimes = [
    "2016-08-25T16:01:26.123456Z",
    "2016-08-25T16:01:29",
]
db = sqlite3.connect(":memory:")
c = db.cursor()
for timestring in testtimes:
    c.execute("SELECT strftime('%s', ?)", (timestring,))
    converted = c.fetchone()[0]
    print("%s is %s after epoch" % (timestring, converted))
    dt = datetime.datetime.fromtimestamp(int(converted))
    print("datetime is %s" % dt)
Output:
2016-08-25T16:01:26.123456Z is 1472140886 after epoch
datetime is 2016-08-25 12:01:26
2016-08-25T16:01:29 is 1472140889 after epoch
datetime is 2016-08-25 12:01:29
A: Django's parse_datetime() function supports dates with UTC offsets:
parse_datetime('2016-08-09T15:12:03.65478Z') =
datetime.datetime(2016, 8, 9, 15, 12, 3, 654780, tzinfo=<UTC>)
So it could be used for parsing ISO 8601 dates in DateTimeField fields across the entire project:
from django.utils import formats
from django.forms.fields import DateTimeField
from django.utils.dateparse import parse_datetime
class DateTimeFieldFixed(DateTimeField):
    def strptime(self, value, format):
        if format == 'iso-8601':
            return parse_datetime(value)
        return super().strptime(value, format)
DateTimeField.strptime = DateTimeFieldFixed.strptime
formats.ISO_INPUT_FORMATS['DATETIME_INPUT_FORMATS'].insert(0, 'iso-8601')
A: Another way to use a specialized parser for ISO 8601 is to use the isoparse function of the dateutil parser:
from dateutil import parser
date = parser.isoparse("2008-09-03T20:56:35.450686+01:00")
print(date)
Output:
2008-09-03 20:56:35.450686+01:00
This function is also mentioned in the documentation for the standard Python function datetime.fromisoformat:
A more full-featured ISO 8601 parser, dateutil.parser.isoparse is
available in the third-party package dateutil.
A: isoparse function from python-dateutil
The python-dateutil package has dateutil.parser.isoparse to parse not only RFC 3339 datetime strings like the one in the question, but also other ISO 8601 date and time strings that don't comply with RFC 3339 (such as ones with no UTC offset, or ones that represent only a date).
>>> import dateutil.parser
>>> dateutil.parser.isoparse('2008-09-03T20:56:35.450686Z') # RFC 3339 format
datetime.datetime(2008, 9, 3, 20, 56, 35, 450686, tzinfo=tzutc())
>>> dateutil.parser.isoparse('2008-09-03T20:56:35.450686') # ISO 8601 extended format
datetime.datetime(2008, 9, 3, 20, 56, 35, 450686)
>>> dateutil.parser.isoparse('20080903T205635.450686') # ISO 8601 basic format
datetime.datetime(2008, 9, 3, 20, 56, 35, 450686)
>>> dateutil.parser.isoparse('20080903') # ISO 8601 basic format, date only
datetime.datetime(2008, 9, 3, 0, 0)
The python-dateutil package also has dateutil.parser.parse. Compared with isoparse, it is presumably less strict, but both of them are quite forgiving and will attempt to interpret the string that you pass in. If you want to eliminate the possibility of any misreads, you need to use something stricter than either of these functions.
Comparison with Python 3.7+’s built-in datetime.datetime.fromisoformat
dateutil.parser.isoparse is a full ISO-8601 format parser, but in Python ≤ 3.10 fromisoformat is deliberately not. In Python 3.11, fromisoformat supports almost all strings in valid ISO 8601. See fromisoformat's docs for this cautionary caveat. (See this answer).
A: If pandas is used anyway, I can recommend Timestamp from pandas. There you can
ts_1 = pd.Timestamp('2020-02-18T04:27:58.000Z')
ts_2 = pd.Timestamp('2020-02-18T04:27:58.000')
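If you then need a plain standard-library datetime, the Timestamp can convert itself:
ts_1.to_pydatetime()  # timezone-aware datetime.datetime (the 'Z' input)
ts_2.to_pydatetime()  # naive datetime.datetime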
Rant: It is just unbelievable that we still need to worry about things like date string parsing in 2021.
A: Starting from Python 3.7, strptime supports colon delimiters in UTC offsets (source). So you can then use:
import datetime
def parse_date_string(date_string: str) -> datetime.datetime:
    try:
        return datetime.datetime.strptime(date_string, '%Y-%m-%dT%H:%M:%S.%f%z')
    except ValueError:
        return datetime.datetime.strptime(date_string, '%Y-%m-%dT%H:%M:%S%z')
EDIT:
As pointed out by Martijn, if you created the datetime object using isoformat(), you can simply use datetime.fromisoformat().
EDIT 2:
As pointed out by Mark Amery, I added a try..except block to account for missing fractional seconds.
A: Python >= 3.11
fromisoformat now parses Z directly:
from datetime import datetime
s = "2008-09-03T20:56:35.450686Z"
datetime.fromisoformat(s)
datetime.datetime(2008, 9, 3, 20, 56, 35, 450686, tzinfo=datetime.timezone.utc)
Python 3.7 to 3.10
A simple option from one of the comments: replace 'Z' with '+00:00' - and use fromisoformat:
from datetime import datetime
s = "2008-09-03T20:56:35.450686Z"
datetime.fromisoformat(s.replace('Z', '+00:00'))
# datetime.datetime(2008, 9, 3, 20, 56, 35, 450686, tzinfo=datetime.timezone.utc)
Why prefer fromisoformat?
Although strptime's %z can parse the 'Z' character to UTC, fromisoformat is faster by ~ x40 (see also: A faster strptime):
%timeit datetime.fromisoformat(s.replace('Z', '+00:00'))
388 ns ± 48.3 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit dateutil.parser.isoparse(s)
11 µs ± 1.05 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit datetime.strptime(s, '%Y-%m-%dT%H:%M:%S.%f%z')
15.8 µs ± 1.32 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit dateutil.parser.parse(s)
87.8 µs ± 8.54 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
(Python 3.9.12 x64 on Windows 10)
A: The datetime standard library has, since Python 3.7, a function for inverting datetime.isoformat().
classmethod datetime.fromisoformat(date_string):
Return a datetime corresponding to a date_string in any valid ISO 8601 format, with the following exceptions:
*
*Time zone offsets may have fractional seconds.
*The T separator may be replaced by any single unicode character.
*Ordinal dates are not currently supported.
*Fractional hours and minutes are not supported.
Examples:
>>> from datetime import datetime
>>> datetime.fromisoformat('2011-11-04')
datetime.datetime(2011, 11, 4, 0, 0)
>>> datetime.fromisoformat('20111104')
datetime.datetime(2011, 11, 4, 0, 0)
>>> datetime.fromisoformat('2011-11-04T00:05:23')
datetime.datetime(2011, 11, 4, 0, 5, 23)
>>> datetime.fromisoformat('2011-11-04T00:05:23Z')
datetime.datetime(2011, 11, 4, 0, 5, 23, tzinfo=datetime.timezone.utc)
>>> datetime.fromisoformat('20111104T000523')
datetime.datetime(2011, 11, 4, 0, 5, 23)
>>> datetime.fromisoformat('2011-W01-2T00:05:23.283')
datetime.datetime(2011, 1, 4, 0, 5, 23, 283000)
>>> datetime.fromisoformat('2011-11-04 00:05:23.283')
datetime.datetime(2011, 11, 4, 0, 5, 23, 283000)
>>> datetime.fromisoformat('2011-11-04 00:05:23.283+00:00')
datetime.datetime(2011, 11, 4, 0, 5, 23, 283000, tzinfo=datetime.timezone.utc)
>>> datetime.fromisoformat('2011-11-04T00:05:23+04:00')
datetime.datetime(2011, 11, 4, 0, 5, 23, tzinfo=datetime.timezone(datetime.timedelta(seconds=14400)))
New in version 3.7.
Changed in version 3.11: Previously, this method only supported formats that could be emitted by date.isoformat() or datetime.isoformat().
Be sure to read the caution from the docs if you haven't upgraded to Python 3.11 yet!
A: What is the exact error you get? Is it like the following?
>>> datetime.datetime.strptime("2008-08-12T12:20:30.656234Z", "%Y-%m-%dT%H:%M:%S.Z")
ValueError: time data did not match format: data=2008-08-12T12:20:30.656234Z fmt=%Y-%m-%dT%H:%M:%S.Z
If yes, you can split your input string on ".", and then add the microseconds to the datetime you got.
Try this:
>>> def gt(dt_str):
        dt, _, us = dt_str.partition(".")
        dt = datetime.datetime.strptime(dt, "%Y-%m-%dT%H:%M:%S")
        us = int(us.rstrip("Z"), 10)
        return dt + datetime.timedelta(microseconds=us)
>>> gt("2008-08-12T12:20:30.656234Z")
datetime.datetime(2008, 8, 12, 12, 20, 30, 656234)
A: import re
import datetime
s = "2008-09-03T20:56:35.450686Z"
d = datetime.datetime(*map(int, re.split(r'[^\d]', s)[:-1]))
A: ISO 8601 allows many variations of optional colons and dashes, basically CCYY-MM-DDThh:mm:ss[Z|(+|-)hh:mm]. If you want to use strptime, you need to strip out those variations first.
The goal is to generate a utc datetime object.
If you just want a basic case that work for UTC with the Z suffix like 2016-06-29T19:36:29.3453Z:
datetime.datetime.strptime(timestamp.translate(None, ':-'), "%Y%m%dT%H%M%S.%fZ")
If you want to handle timezone offsets like 2016-06-29T19:36:29.3453-0400 or 2008-09-03T20:56:35.450686+05:00 use the following. These will convert all variations into something without variable delimiters like 20080903T205635.450686+0500 making it more consistent/easier to parse.
import re
# this regex removes all colons and all
# dashes EXCEPT for the dash indicating + or - utc offset for the timezone
conformed_timestamp = re.sub(r"[:]|([-](?!((\d{2}[:]\d{2})|(\d{4}))$))", '', timestamp)
datetime.datetime.strptime(conformed_timestamp, "%Y%m%dT%H%M%S.%f%z" )
If your system does not support the %z strptime directive (you see something like ValueError: 'z' is a bad directive in format '%Y%m%dT%H%M%S.%f%z') then you need to manually offset the time from Z (UTC). Note %z may not work on your system in python versions < 3 as it depended on the c library support which varies across system/python build type (i.e. Jython, Cython, etc.).
import re
import datetime

# this regex removes all colons and all
# dashes EXCEPT for the dash indicating + or - utc offset for the timezone
conformed_timestamp = re.sub(r"[:]|([-](?!((\d{2}[:]\d{2})|(\d{4}))$))", '', timestamp)

# split on the offset to remove it. use a capture group to keep the delimiter
split_timestamp = re.split(r"([+-])", conformed_timestamp)
main_timestamp = split_timestamp[0]
if len(split_timestamp) == 3:
    sign = split_timestamp[1]
    offset = split_timestamp[2]
else:
    sign = None
    offset = None

# generate the datetime object without the offset at UTC time
output_datetime = datetime.datetime.strptime(main_timestamp + "Z", "%Y%m%dT%H%M%S.%fZ")
if offset:
    # create timedelta based on offset
    offset_delta = datetime.timedelta(hours=int(sign + offset[:-2]), minutes=int(sign + offset[-2:]))
    # subtract the signed offset to convert the local time to UTC
    output_datetime = output_datetime - offset_delta
A: Nowadays there's Maya: Datetimes for Humans™, from the author of the popular Requests: HTTP for Humans™ package:
>>> import maya
>>> str = '2008-09-03T20:56:35.450686Z'
>>> maya.MayaDT.from_rfc3339(str).datetime()
datetime.datetime(2008, 9, 3, 20, 56, 35, 450686, tzinfo=<UTC>)
A: Note in Python 2.6+ and Py3K, the %f character catches microseconds.
>>> datetime.datetime.strptime("2008-09-03T20:56:35.450686Z", "%Y-%m-%dT%H:%M:%S.%fZ")
See issue here
A: In these days, Arrow also can be used as a third-party solution:
>>> import arrow
>>> date = arrow.get("2008-09-03T20:56:35.450686Z")
>>> date.datetime
datetime.datetime(2008, 9, 3, 20, 56, 35, 450686, tzinfo=tzutc())
A: Just use the python-dateutil module:
>>> import dateutil.parser as dp
>>> t = '1984-06-02T19:05:00.000Z'
>>> parsed_t = dp.parse(t)
>>> print(parsed_t)
datetime.datetime(1984, 6, 2, 19, 5, tzinfo=tzutc())
Documentation
A: The python-dateutil will throw an exception if parsing invalid date strings, so you may want to catch the exception.
from dateutil import parser
ds = '2012-60-31'
try:
dt = parser.parse(ds)
except ValueError, e:
print '"%s" is an invalid date' % ds
A: As of Python 3.7, you can basically (caveats below) get away with using datetime.datetime.strptime to parse RFC 3339 datetimes, like this:
from datetime import datetime
def parse_rfc3339(datetime_str: str) -> datetime:
    try:
        return datetime.strptime(datetime_str, "%Y-%m-%dT%H:%M:%S.%f%z")
    except ValueError:
        # Perhaps the datetime has a whole number of seconds with no decimal
        # point. In that case, this will work:
        return datetime.strptime(datetime_str, "%Y-%m-%dT%H:%M:%S%z")
It's a little awkward, since we need to try two different format strings in order to support both datetimes with a fractional number of seconds (like 2022-01-01T12:12:12.123Z) and those without (like 2022-01-01T12:12:12Z), both of which are valid under RFC 3339. But as long as we do that single fiddly bit of logic, this works.
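For example:
>>> parse_rfc3339("2008-09-03T20:56:35.450686Z")
datetime.datetime(2008, 9, 3, 20, 56, 35, 450686, tzinfo=datetime.timezone.utc)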
Some caveats to note about this approach:
*
*It technically doesn't fully support RFC 3339, since RFC 3339 bizarrely lets you use a space instead of a T to separate the date from the time, even though RFC 3339 purports to be a profile of ISO 8601 and ISO 8601 does not allow this. If you want to support this silly quirk of RFC 3339, you could add datetime_str = datetime_str.replace(' ', 'T') to the start of the function.
*My implementation above is slightly more permissive than a strict RFC 3339 parser should be, since it will allow timezone offsets like +0500 without a colon, which RFC 3339 does not support. If you don't merely want to parse known-to-be-RFC-3339 datetimes but also want to rigorously validate that the datetime you're getting is RFC 3339, use another approach or add in your own logic to validate the timezone offset format.
*This function definitely doesn't support all of ISO 8601, which includes a much wider array of formats than RFC 3339. (e.g. 2009-W01-1 is a valid ISO 8601 date.)
*It does not work in Python 3.6 or earlier, since in those old versions the %z specifier only matches timezones offsets like +0500 or -0430 or +0000, not RFC 3339 timezone offsets like +05:00 or -04:30 or Z.
A: I have found ciso8601 to be the fastest way to parse ISO 8601 timestamps.
It also has full support for RFC 3339, and a dedicated function for strict parsing RFC 3339 timestamps.
Example usage:
>>> import ciso8601
>>> ciso8601.parse_datetime('2014-01-09T21')
datetime.datetime(2014, 1, 9, 21, 0)
>>> ciso8601.parse_datetime('2014-01-09T21:48:00.921000+05:30')
datetime.datetime(2014, 1, 9, 21, 48, 0, 921000, tzinfo=datetime.timezone(datetime.timedelta(seconds=19800)))
>>> ciso8601.parse_rfc3339('2014-01-09T21:48:00.921000+05:30')
datetime.datetime(2014, 1, 9, 21, 48, 0, 921000, tzinfo=datetime.timezone(datetime.timedelta(seconds=19800)))
The GitHub Repo README shows their speedup versus all of the other libraries listed in the other answers.
My personal project involved a lot of ISO 8601 parsing. It was nice to be able to just switch the call and go faster. :)
Edit: I have since become a maintainer of ciso8601. It's now faster than ever!
A: If you are working with Django, it provides the dateparse module that accepts a bunch of formats similar to ISO format, including the time zone.
If you are not using Django and you don't want to use one of the other libraries mentioned here, you could probably adapt the Django source code for dateparse to your project.
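For example, a minimal sketch of calling it directly (no model fields required):
from django.utils.dateparse import parse_datetime, parse_date
parse_datetime('2008-09-03T20:56:35.450686Z')  # aware datetime in UTC
parse_date('2008-09-03')                       # datetime.date(2008, 9, 3)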
A: If you don't want to use dateutil, you can try this function:
import datetime

def from_utc(utcTime, fmt="%Y-%m-%dT%H:%M:%S.%fZ"):
    """
    Convert UTC time string to time.struct_time
    """
    # change datetime.datetime to time, return time.struct_time type
    return datetime.datetime.strptime(utcTime, fmt)
Test:
from_utc("2007-03-04T21:08:12.123Z")
Result:
datetime.datetime(2007, 3, 4, 21, 8, 12, 123000)
A: For something that works with the 2.X standard library try:
calendar.timegm(time.strptime(date.split(".")[0]+"UTC", "%Y-%m-%dT%H:%M:%S%Z"))
calendar.timegm is the missing gm version of time.mktime.
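For example, a small sketch converting the result back to a naive UTC datetime (note the fractional seconds are discarded by the split):
import calendar, time, datetime
date = "2008-09-03T20:56:35.450686Z"
secs = calendar.timegm(time.strptime(date.split(".")[0] + "UTC", "%Y-%m-%dT%H:%M:%S%Z"))
datetime.datetime.utcfromtimestamp(secs)  # datetime.datetime(2008, 9, 3, 20, 56, 35)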
A: Thanks to Mark Amery's great answer, I devised a function to account for all possible ISO formats of datetime:
import re
from datetime import datetime, timedelta, tzinfo

class FixedOffset(tzinfo):
    """Fixed offset in minutes: `time = utc_time + utc_offset`."""
    def __init__(self, offset):
        self.__offset = timedelta(minutes=offset)
        hours, minutes = divmod(offset, 60)
        # NOTE: the last part is to remind about deprecated POSIX GMT+h timezones
        # that have the opposite sign in the name;
        # the corresponding numeric value is not used e.g., no minutes
        self.__name = '<%+03d%02d>%+d' % (hours, minutes, -hours)

    def utcoffset(self, dt=None):
        return self.__offset

    def tzname(self, dt=None):
        return self.__name

    def dst(self, dt=None):
        return timedelta(0)

    def __repr__(self):
        return 'FixedOffset(%d)' % (self.utcoffset().total_seconds() / 60)

    def __getinitargs__(self):
        return (self.__offset.total_seconds() / 60,)

def parse_isoformat_datetime(isodatetime):
    try:
        return datetime.strptime(isodatetime, '%Y-%m-%dT%H:%M:%S.%f')
    except ValueError:
        pass
    try:
        return datetime.strptime(isodatetime, '%Y-%m-%dT%H:%M:%S')
    except ValueError:
        pass
    pat = r'(.*?[+-]\d{2}):(\d{2})'
    temp = re.sub(pat, r'\1\2', isodatetime)
    naive_date_str = temp[:-5]
    offset_str = temp[-5:]
    naive_dt = datetime.strptime(naive_date_str, '%Y-%m-%dT%H:%M:%S.%f')
    offset = int(offset_str[-4:-2]) * 60 + int(offset_str[-2:])
    if offset_str[0] == "-":
        offset = -offset
    return naive_dt.replace(tzinfo=FixedOffset(offset))
A: datetime.fromisoformat() is improved in Python 3.11 to parse most ISO 8601 formats
datetime.fromisoformat() can now be used to parse most ISO 8601 formats, barring only those that support fractional hours and minutes. Previously, this method only supported formats that could be emitted by datetime.isoformat().
>>> from datetime import datetime
>>> datetime.fromisoformat('2011-11-04T00:05:23Z')
datetime.datetime(2011, 11, 4, 0, 5, 23, tzinfo=datetime.timezone.utc)
>>> datetime.fromisoformat('20111104T000523')
datetime.datetime(2011, 11, 4, 0, 5, 23)
>>> datetime.fromisoformat('2011-W01-2T00:05:23.283')
datetime.datetime(2011, 1, 4, 0, 5, 23, 283000)
A: def parseISO8601DateTime(datetimeStr):
    import time
    from datetime import datetime, timedelta

    def log_date_string(when):
        gmt = time.gmtime(when)
        if time.daylight and gmt[8]:
            tz = time.altzone
        else:
            tz = time.timezone
        if tz > 0:
            neg = 1
        else:
            neg = 0
            tz = -tz
        h, rem = divmod(tz, 3600)
        m, rem = divmod(rem, 60)
        if neg:
            offset = '-%02d%02d' % (h, m)
        else:
            offset = '+%02d%02d' % (h, m)
        return time.strftime('%d/%b/%Y:%H:%M:%S ', gmt) + offset

    dt = datetime.strptime(datetimeStr, '%Y-%m-%dT%H:%M:%S.%fZ')
    timestamp = dt.timestamp()
    return dt + timedelta(hours=dt.hour - time.gmtime(timestamp).tm_hour)
Note that we should check whether the string ends with Z; if it doesn't, we could parse it using %z instead.
A: Initially I tried with:
from operator import neg, pos
from time import strptime, mktime
from datetime import datetime, tzinfo, timedelta

class MyUTCOffsetTimezone(tzinfo):
    @staticmethod
    def with_offset(offset_no_signal, signal):  # type: (str, str) -> MyUTCOffsetTimezone
        return MyUTCOffsetTimezone((pos if signal == '+' else neg)(
            (datetime.strptime(offset_no_signal, '%H:%M') - datetime(1900, 1, 1))
            .total_seconds()))

    def __init__(self, offset, name=None):
        self.offset = timedelta(seconds=offset)
        self.name = name or self.__class__.__name__

    def utcoffset(self, dt):
        return self.offset

    def tzname(self, dt):
        return self.name

    def dst(self, dt):
        return timedelta(0)

def to_datetime_tz(dt):  # type: (str) -> datetime
    fmt = '%Y-%m-%dT%H:%M:%S.%f'
    if dt[-6] in frozenset(('+', '-')):
        dt, sign, offset = strptime(dt[:-6], fmt), dt[-6], dt[-5:]
        return datetime.fromtimestamp(mktime(dt),
                                      tz=MyUTCOffsetTimezone.with_offset(offset, sign))
    elif dt[-1] == 'Z':
        return datetime.strptime(dt, fmt + 'Z')
    return datetime.strptime(dt, fmt)
But that didn't work on negative timezones. This however I got working fine, in Python 3.7.3:
from datetime import datetime

def to_datetime_tz(dt):  # type: (str) -> datetime
    fmt = '%Y-%m-%dT%H:%M:%S.%f'
    if dt[-6] in frozenset(('+', '-')):
        return datetime.strptime(dt, fmt + '%z')
    elif dt[-1] == 'Z':
        return datetime.strptime(dt, fmt + 'Z')
    return datetime.strptime(dt, fmt)
Some tests, note that the out only differs by precision of microseconds. Got to 6 digits of precision on my machine, but YMMV:
for dt_in, dt_out in (
        ('2019-03-11T08:00:00.000Z', '2019-03-11T08:00:00'),
        ('2019-03-11T08:00:00.000+11:00', '2019-03-11T08:00:00+11:00'),
        ('2019-03-11T08:00:00.000-11:00', '2019-03-11T08:00:00-11:00')
):
    isoformat = to_datetime_tz(dt_in).isoformat()
    assert isoformat == dt_out, '{} != {}'.format(isoformat, dt_out)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "876"
}
|
Q: What does a scrum master do all day? To quote wikipedia:
Scrum is facilitated by a ScrumMaster, whose primary job is to remove impediments to the ability of the team to deliver the sprint goal. The ScrumMaster is not the leader of the team (as they are self-organizing) but acts as a buffer between the team and any distracting influences. The ScrumMaster ensures that the Scrum process is used as intended. The ScrumMaster is the enforcer of rules."
Working on this basis, and the fact that most businesses are running 2-3 projects at a time, what actual work tasks does a SM do to fill a full time job? Or, is it not a full time job and that individual do other things such as development, sales etc?
Do any SM's out there have anything to share?
A: Please note: this question and answer is over twelve years old. The consensus understanding of the role of scrum master has moved on massively since then and so I no longer view this as a valid answer to the question, let alone one worthy of being the accepted answer. By all means downvote it. Beyond that, pay it no heed.
The Scrum Master will do things like ensuring scrums occur, organising sprint planning meetings, retrospectives etc. Also (s)he will be able to explain to management what the team is doing and why the team members cannot be poached off onto other projects until the sprint finishes. Beyond that, there aren't really any defined tasks for the Scrum Master. So one person should easily be able to be Scrum Master for 3 teams, and still have time left over to either do management type jobs (holiday requests, procedures, attending boring meetings with directors or whatever), or be free to contribute to the development resources of the team.
A: While ScrumMaster is a role within the Scrum framework, the individual fulfilling that role must be a member of the Team. In Scrum, Team members should at all costs be full time. Team members should be able to pick up tasks on the Sprint backlog. They might be development tasks, testing tasks, configuring the CI server tasks, etc... If you can't contribute to the burndown then why be on the team? Buggering off and joining another team is the last thing any self respecting ScrumMaster should do. ScrumMasters should be servant leaders that are embedded with and dedicated to their Team and product. ScrumMaster is a role on a Team, not a job title. I disagree with those that think you can be a ScrumMaster on more than one project at a time and still be world class. The fact is, that's just not Scrum.
A: Unfortunately we don't have the luxury of having dedicated scrum masters. I am also a team leader and senior developer which more than fills the day.
A: I typically am on Stack Overflow all day. Oh, and I try to co-ordinate lunches.
A: First and foremost: remove impediments.
It is best if a Scrum Master is dedicated to one team, so that impediments are removed as soon as possible. Some of this can be done proactively, for example by pushing the PO to analyze certain stories better for the next Sprint.
If there is extra time available it is convenient if the SM has some skills that let him function as a developer or tester on the team. I've seen good results with SMs that delegate as much as possible to a (classical) project manager and focus on development most of their time.
A: To make a long story short, the Scrum Master is responsible for making things happen. And in practice it is often the case that the Scrum Master is actually a project manager in disguise. At least that's the case in my company.
A:
Working on this basis, and the fact
that most businesses are running 2-3
projects at a time, what actual work
tasks does a SM do to fill a full time
job?
Anything within their skillset to help the Team achieve the goal.
Or, is it not a full time job and
that individual do other things such
as development, sales etc?
ScrumMaster was not originally intended to be a full time job. ScrumMaster is a role fulfilled by someone on the Team. That team member is dedicated to the product full time. So, when he\she is not doing ScrumMaster duties they default back to burning down tasks on the Sprint Backlog.
A: Everything and anything that developers need to keep being productive. Order pizza. Go talk to admins, management, other teams. Do bureaucracy kind of stuff. Fix the build server if no one else's available.
A: The key word here is that a Scrum Master's role is a facilitator's role. And as someone rightly mentioned up there, his most important job is to ensure a seamless, distraction-free environment for his team, which means removing impediments and making sure his team has whatever they need at all times. The Scrum master is a link between the Product team and the Development team. The decision making is done by the TEAM and not the Scrum master.
It is a bad idea to share one Scrum Master between multiple teams, as a requirement of one team may be an impediment for the other team, which defeats the whole purpose of a Scrum Master.
Also it is very dangerous to have your Manager as your Scrum master as the pressure of delivery on the manager may force him to micro manage which is a killer for any scrum team.
Other than the regular stuff which is
*
*Arrange Sprint planning and retrospectives
*Facilitate daily standups
*Arrange demos at the end of each sprint iteration
*Address team's concerns mentioned at the standups
A few important things that a Scrum Master has to manage on a day to day basis is
*
*Foresee and remove any distractions for the team before even it hits the team.
*Encourage team to communicate more
*Maintain constant communication with the product team to check what needs to be done in preparation for future sprints
*Make sure the team follows the processes they have collectively agreed upon, as sometimes during sprint busyness some processes slip through the cracks
*Constantly find ways to improve the processes followed by the team
Most importantly a Scrum Master has to standby and support his team.
All this work takes up a lot of time and does require a dedicated Scrum Master who performs no other role.
A: The key to the ScrumMaster role is to remove impediments.
A: The ScrumMaster/ Iteration Manager
*
*Builds the Release Plan
*Builds the Scrum/ Iteration Plan
*Plans and hosts the
*
*Scrum/ Iteration Planning Meetings
*Show & Tells
*Release Planning Meetings
*Retrospectives
*Owns the blocker board and actively works with the team to identify and remove blockers
*Updates the team WIKI
*Updates Big Visible Charts in the team room including the story card wall
*Participates in the daily standup
*Participates in the daily Scrum of Scrums
The ScrumMaster/ Iteration Manager is also the sheep dog, that is they protect the team (herd). Finally, the ScrumMaster/ Iteration Manager is the point of contact for the team to external resources but primarily the Project Manager.
A: "acts as a buffer between the team and any distracting influences"
That is a full time job. There are a bunch of people who would love to get information from the team and it is the SM to handle those questions. To do that job well, it is important to be proactive, not reactive. Therefore they should be keeping all the wheels running smoothly. It is an amazing transformation when the SM is working well.
A: I think there will be as many answers to this question as there are people to answer it. On a small team with dedicated people who mostly know what they doing, the role of SM is almost invisible; on a larger team trying to cope with vague requirements and power struggles the SM will be highly visible and probably never have a moment to themselves, as they will become the lightning conductor for all the frustrations of the team (and stakeholders outside it).
There's no substitute for knowing what you want to achieve and having a small team of people who know how to achieve it. If you have that, and you "adopt SCRUM", you will probably be convinced quickly that being a Scrum Master is easy. But if instead you have a big mess of a team, and an undefined goal, and a lot of political fighting going on, and you "adopt SCRUM", you will probably come away thinking that being a Scrum Master is a full-time (perhaps impossible) job requiring a combination of very rare talents. Most real teams are probably somewhere between these extremes.
A: Scrum Master is like the mother bear for the team. They look after the team's health (project wise), protect them from pesky outsiders and remove any obstacles for the team. I play ScrumMaster for my team but I am also a development lead (for the same team!) who takes part in technical discussions, design discussions, and coordinating between the developers and QA on our team (if they aren't already doing it themselves). I do try and take on actual development tasks to burn the chart down when time is available.
Isn't it extremely distracting for the ScrumMaster to play that role in multiple teams? God, I would find that confusing. Which impediment is blocking which team again?? Wait, who was working on this task??
A: A Scrum Master role implemented correctly, is invaluable to the Project and should not be look upon as a Part time role. The most important aspect of the role is to act as an obstacle remover for any queries raised in the Scrum meetings by the Development Teams. A Technical Scrum Master (which is what most SMs tend to be) should not be a Developer on the team, but should be able to advise on design and solutions (an extension to pair programming if you will).
They are responsible for updating the ProductBackLog (stories should be created by the business), SprintBackLog and BurnLog and for liaising with the business and IT Management on progress. They also manage a SpikeLog for any items that require investigation that may evolve into Stories (again driven by the business).
A: As drivendevelopment implies, the ScrumMaster is a full team member and thus should be full time. I generally treat my role as "ensuring the team functions as a well oiled machine", which can have a number of meanings at different times. Frequently, a SM spends a lot of time facilitating the team's interactions with people outside the team, especially those related to business analysis and stakeholder expectations. Beyond that, it is a matter of meeting the mechanical items listed by Cam and looking after the physical and emotional state of the team.
Related to one of the earlier answers, one of the fundamental aspects I insist on is that no member of the team is a direct report to me, nor to each other. This precludes things like vacation time, expenses, etc from being part of my job, but goes a long way towards not cluttering the trust relationship that must exist.
A: As generally understood, priority #1 on a scrum-master's list is to remove impediments as reported by the team. But it should not stop there; he should constantly look out for potential impediments, and more importantly, impediments that are there but not yet identified. Ken said impediments are opportunities. So a scrum-master should avail himself of these opportunities all day long to bring his team(s) to hyper productivity.
Ultimately the purpose of scrum is to bring success to projects. The purpose of having a scrum-master is to ensure that scrum succeeds in fulfilling the purpose of scrum. Now to fulfill the purpose of scrum-master, he/she must think & act at a strategic level as well. This is a full-time job.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "62"
}
|
Q: C# 3.0 - Object initializer I'm having a little problem and I don't see why; it's easy to work around, but I still want to understand.
I have the following class :
public class AccountStatement : IAccountStatement
{
public IList<IAccountStatementCharge> StatementCharges { get; set; }
public AccountStatement()
{
new AccountStatement(new Period(new NullDate().DateTime, new NullDate().DateTime), 0);
}
public AccountStatement(IPeriod period, int accountID)
{
StatementCharges = new List<IAccountStatementCharge>();
StartDate = new Date(period.PeriodStartDate);
EndDate = new Date(period.PeriodEndDate);
AccountID = accountID;
}
public void AddStatementCharge(IAccountStatementCharge charge)
{
StatementCharges.Add(charge);
}
}
(note StartDate, EndDate, and AccountID are automatic properties too...)
If I use it this way :
var accountStatement = new AccountStatement{
StartDate = new Date(2007, 1, 1),
EndDate = new Date(2007, 1, 31),
StartingBalance = 125.05m
};
When I try to use the method "AddStatementCharge" I end up with a "null" StatementCharges list... Stepping through, I clearly see that my list gets a value, but as soon as I leave the instantiation line, my list becomes "null".
A: Use
public AccountStatement() : this(new Period(new NullDate().DateTime, new NullDate().DateTime), 0) { }
insetad of
public AccountStatement()
{
new AccountStatement(new Period(new NullDate().DateTime, new NullDate().DateTime), 0);
}
A: Your parameter-less constructor creates a new instance of itself, but doesn't assign it to anything.
A: This code:
public AccountStatement()
{
new AccountStatement(new Period(new NullDate().DateTime,newNullDate().DateTime), 0);
}
is undoubtedly not what you wanted. That makes a second instance of AccountStatement and does nothing with it.
I think what you meant was this instead:
public AccountStatement() : this(new Period(new NullDate().DateTime, new NullDate().DateTime), 0)
{
}
A: You are calling the parameter-less constructor, so StatementCharges is never initialized. Use something like:
var accountStatement = new AccountStatement(period, accountId) {
StartDate = new Date(2007, 1, 1),
EndDate = new Date(2007, 1, 31),
StartingBalance = 125.05m
};
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How do I bind an ASP.net ajax AccordionPane to an XMLDatasource? I've got an angry boss that will beat me down if I waste another day on this :-P Many karma points to the ajax guru who can solve my dilemma.
But more detail: I want to have an AccordionPane that grabs a bunch of links from an XML source and populate itself from said source.
A: There might be a sexier way, but this works. Populate your data source however you wish. This was just for demo purposes. Ditto for PrettyTitle() Key is to remember there are two item types in the accordion.
<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
<title>Accordion Binding</title>
</head>
<body>
<form id="form1" runat="server">
<asp:ScriptManager ID="AjaxScriptManager" runat="server">
</asp:ScriptManager>
<div>
<cc1:Accordion ID="AccordionControl" runat="server"
onitemdatabound="AccordionControl_ItemDataBound">
<Panes></Panes>
<HeaderTemplate>
<asp:Label ID="HeaderLabel" runat="server" />
</HeaderTemplate>
<ContentTemplate>
<asp:Literal ID="ContentLiteral" runat="server" />
<asp:HyperLink ID="ContentLink" runat="server" />
</ContentTemplate>
</cc1:Accordion><asp:xmldatasource runat="server" ID="RockNUGTwitter" ></asp:xmldatasource>
</div>
</form>
</body>
</html>
And the codebehind is:
using System;
using System.Web.UI.WebControls;
using System.Xml;
namespace Ajaxy
{
public partial class _Default : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
Fill();
}
private void Fill()
{
PopulateDataSource();
AccordionControl.DataSource = RockNUGTwitter.GetXmlDocument().SelectNodes("//item");
AccordionControl.DataBind();
}
private void PopulateDataSource()
{
XmlDocument RockNugTwitterRSSDocument = new XmlDocument();
RockNugTwitterRSSDocument.Load("http://twitter.com/statuses/user_timeline/15912811.rss");
RockNUGTwitter.Data = RockNugTwitterRSSDocument.OuterXml;
}
protected void AccordionControl_ItemDataBound(object sender, AjaxControlToolkit.AccordionItemEventArgs e)
{
XmlNode ItemNode = (XmlNode)e.AccordionItem.DataItem;
if(e.AccordionItem.ItemType == AjaxControlToolkit.AccordionItemType.Content)
{
HyperLink ContentLink = (HyperLink)e.AccordionItem.FindControl("ContentLink");
ContentLink.NavigateUrl = ItemNode.SelectSingleNode("link").InnerText;
Literal ContentLiteral = (Literal)e.AccordionItem.FindControl("ContentLiteral");
ContentLiteral.Text = ItemNode.SelectSingleNode("title").InnerText;
ContentLink.Text = "Link";
}
else if(e.AccordionItem.ItemType == AjaxControlToolkit.AccordionItemType.Header)
{
Label HeaderLabel = (Label) e.AccordionItem.FindControl("HeaderLabel");
HeaderLabel.Text = PrettyTitle(ItemNode.SelectSingleNode("title").InnerText);
}
}
private string PrettyTitle(string FullItem)
{
string PrettyString = FullItem.Replace("RockNUG: ", "");
string[] Words = PrettyString.Split(' ');
const int MAX_WORDS_TOSHOW = 4;
int WordsToShow = MAX_WORDS_TOSHOW;
if(Words.Length < MAX_WORDS_TOSHOW)
{
WordsToShow = Words.Length;
}
PrettyString = String.Join(" ", Words, 0, WordsToShow);
if (Words.Length > WordsToShow)
{
PrettyString += "...";
}
return PrettyString;
}
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: ASP.NET Ajax - Asynch request has separate session? We are writing a search application that saves the search criteria to session state and executes the search inside of an asp.net updatepanel. Sometimes when we execute multiple searches successively the 2nd or 3rd search will sometimes return results from the first set of search criteria.
Example: our first search we do a look up on "John Smith" -> John Smith results are displayed. The second search we do a look up on "Bob Jones" -> John Smith results are displayed.
We save all of the search criteria in session state as I said, and read it from session state inside of the ajax request to format the DB query. When we put break points in VS everything behaves as normal, but without them we get the original search criteria and results.
My guess is because they are saved in session, that the ajax request somehow gets its own session and saves the criteria to that, and then retrieves the criteria from that session every time, but the non-async stuff is able to see when the criteria is modified and saves the changes to state accordingly, but because they are from two different sessions there is a disparity in what is saved and read.
EDIT:::
To elaborate more, there was a suggestion of appending the search criteria to the query string which normally is good practice and I agree thats how it should be but following our requirements I don't see it as being viable. They want it so the user fills out the input controls hits search and there is no page reload, the only thing they see is a progress indicator on the page, and they still have the ability to navigate and use other features on the current page. If I were to add criteria to the query string I would have to do another request causing the whole page to load, which depending on the search criteria can take a really long time. This is why we are using an ajax call to perform the search and why we aren't causing another full page request..... I hope this clarifies the situation.
A: Just another thought, I've always run into problems with updatepanel and prefer to write my atlas ajax requests through the library directly, using PageMethods. You have more control over what you send and receive. UpdatePanel sends the entire page, and receives the entire page control heirarchy, then it parses out what is 'fresh' and displays that.
Edit: What is the code you're using to save the criteria to the session? And do you have code in the method that actually checks to see if the session has some saved criteria, and passes that back instead? Maybe that's why the 2nd/3rd updatepanel postbacks are returning the first set of criteria instead of the expected results? As an aside, I know from doing some heavy atlas ajax things that there is definitely not two sessions (one for normal postback, one for async) Is there any chance you're using a webfarm?
Edit #2: I wouldn't have been able to write what I've written above (first para) if I hadn't been a fan of someone who replied as well: https://stackoverflow.com/users/60/dave-ward
A: You need to set the EnableSession property of the WebMethod attribute for the function you are calling.
[WebMethod( EnableSession=true )]
public static void DoSomething(){
/// ....
}
A: There are not multiple sessions between normal ASP.NET page loads, postbacks, and ASP.NET AJAX partial postbacks. I can tell you that with certainty.
Rather than storing the search string in the session, how about just using the search TextBox's contents directly? I can't think of any reason why you'd need to shuffle it around, since it will be available throughout the entire page lifecycle anyway.
Finally, concerning your requirements... Using an UpdatePanel does not fulfill the requirement that your users should be able to use other functionality on the page if that functionality also raises partial postbacks. Only one partial postback can be in progress at a time. If another event is raised while your search is in progress, the search request will be canceled without any notification.
Using a page method or web service for the search would be a much faster, easier, and more robust way of doing it. I don't usually plug my own site, but I think a couple of my posts are exactly relevant to what you're doing:
You could use a user control to render the search results through a web service (very much faster than an UpdatePanel): http://encosia.com/2008/02/05/boost-aspnet-performance-with-deferred-content-loading/
Or, you could return the search results as JSON and render that on the client side (even faster): http://encosia.com/2008/06/26/use-jquery-and-aspnet-ajax-to-build-a-client-side-repeater/
Either of those methods could take your search functionality out of the partial postback paradigm, so that it runs faster, uses less bandwidth and server resource, and doesn't preclude other UpdatePanel activity from occurring concurrently.
A: If you use generic handlers .ashx, just derive from IRequiresSessionState interface
public class ActionRequest : IHttpHandler, IRequiresSessionState
{
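    // For completeness, a hedged sketch of the handler body once session state
    // is available; the session key name "searchCriteria" is illustrative.
    public void ProcessRequest(HttpContext context)
    {
        // Session is only non-null because the class implements IRequiresSessionState
        string criteria = context.Session["searchCriteria"] as string;
        context.Response.ContentType = "text/plain";
        context.Response.Write(criteria ?? "no criteria saved");
    }

    public bool IsReusable
    {
        get { return false; }
    }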
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Using Blitz implementation of JavaSpaces I have great doubts about this forum, but I am willing to be pleasantly surprised ;) Kudos and great karma to those who get me back on track.
I am attempting to use the blitz implementation of JavaSpaces (http://www.dancres.org/blitz/blitz_js.html) to implement the ComputeFarm example provided at http://today.java.net/pub/a/today/2005/04/21/farm.html
The in memory example works fine, but whenever I attempt to use the blitz out-of-box implementation i get the following error:
(yes com.sun.jini.mahalo.TxnMgrProxy is in the class path)
2008-09-24 09:57:37.316 ERROR [Thread-4] JavaSpaceComputeSpace 155 - Exception while taking task.
java.rmi.ServerException: RemoteException in server thread; nested exception is:
java.rmi.UnmarshalException: unmarshalling method/arguments; nested exception is:
java.lang.ClassNotFoundException: com.sun.jini.mahalo.TxnMgrProxy
at net.jini.jeri.BasicInvocationDispatcher.dispatch(BasicInvocationDispatcher.java:644)
at com.sun.jini.jeri.internal.runtime.ObjectTable$6.run(ObjectTable.java:597)
at net.jini.export.ServerContext.doWithServerContext(ServerContext.java:103)
at com.sun.jini.jeri.internal.runtime.ObjectTable$Target.dispatch0(ObjectTable.java:595)
at com.sun.jini.jeri.internal.runtime.ObjectTable$Target.access$700(ObjectTable.java:212)
at com.sun.jini.jeri.internal.runtime.ObjectTable$5.run(ObjectTable.java:568)
at com.sun.jini.start.AggregatePolicyProvider$6.run(AggregatePolicyProvider.java:527)
at java.security.AccessController.doPrivileged(Native Method)
at com.sun.jini.jeri.internal.runtime.ObjectTable$Target.dispatch(ObjectTable.java:565)
at com.sun.jini.jeri.internal.runtime.ObjectTable$Target.dispatch(ObjectTable.java:540)
at com.sun.jini.jeri.internal.runtime.ObjectTable$RD.dispatch(ObjectTable.java:778)
at net.jini.jeri.connection.ServerConnectionManager$Dispatcher.dispatch(ServerConnectionManager.java:148)
at com.sun.jini.jeri.internal.mux.MuxServer$2.run(MuxServer.java:244)
at com.sun.jini.start.AggregatePolicyProvider$5.run(AggregatePolicyProvider.java:513)
at java.security.AccessController.doPrivileged(Native Method)
at com.sun.jini.jeri.internal.mux.MuxServer$1.run(MuxServer.java:241)
at com.sun.jini.thread.ThreadPool$Worker.run(ThreadPool.java:136)
at java.lang.Thread.run(Thread.java:595)
at com.sun.jini.jeri.internal.runtime.Util.__________EXCEPTION_RECEIVED_FROM_SERVER__________(Util.java:108)
at com.sun.jini.jeri.internal.runtime.Util.exceptionReceivedFromServer(Util.java:101)
at net.jini.jeri.BasicInvocationHandler.unmarshalThrow(BasicInvocationHandler.java:1303)
at net.jini.jeri.BasicInvocationHandler.invokeRemoteMethodOnce(BasicInvocationHandler.java:832)
at net.jini.jeri.BasicInvocationHandler.invokeRemoteMethod(BasicInvocationHandler.java:659)
at net.jini.jeri.BasicInvocationHandler.invoke(BasicInvocationHandler.java:528)
at $Proxy0.take(Unknown Source)
at org.dancres.blitz.remote.BlitzProxy.take(BlitzProxy.java:157)
at compute.impl.javaspaces.JavaSpaceComputeSpace.take(JavaSpaceComputeSpace.java:138)
at example.squares.SquaresJob.collectResults(SquaresJob.java:47)
at compute.impl.AbstractJobRunner$CollectThread.run(AbstractJobRunner.java:28)
Caused by: java.rmi.UnmarshalException: unmarshalling method/arguments; nested exception is:
java.lang.ClassNotFoundException: com.sun.jini.mahalo.TxnMgrProxy
at net.jini.jeri.BasicInvocationDispatcher.dispatch(BasicInvocationDispatcher.java:619)
at com.sun.jini.jeri.internal.runtime.ObjectTable$6.run(ObjectTable.java:597)
at net.jini.export.ServerContext.doWithServerContext(ServerContext.java:103)
at com.sun.jini.jeri.internal.runtime.ObjectTable$Target.dispatch0(ObjectTable.java:595)
at com.sun.jini.jeri.internal.runtime.ObjectTable$Target.access$700(ObjectTable.java:212)
at com.sun.jini.jeri.internal.runtime.ObjectTable$5.run(ObjectTable.java:568)
at com.sun.jini.start.AggregatePolicyProvider$6.run(AggregatePolicyProvider.java:527)
at java.security.AccessController.doPrivileged(Native Method)
at com.sun.jini.jeri.internal.runtime.ObjectTable$Target.dispatch(ObjectTable.java:565)
at com.sun.jini.jeri.internal.runtime.ObjectTable$Target.dispatch(ObjectTable.java:540)
at com.sun.jini.jeri.internal.runtime.ObjectTable$RD.dispatch(ObjectTable.java:778)
at net.jini.jeri.connection.ServerConnectionManager$Dispatcher.dispatch(ServerConnectionManager.java:148)
at com.sun.jini.jeri.internal.mux.MuxServer$2.run(MuxServer.java:244)
at com.sun.jini.start.AggregatePolicyProvider$5.run(AggregatePolicyProvider.java:513)
at java.security.AccessController.doPrivileged(Native Method)
at com.sun.jini.jeri.internal.mux.MuxServer$1.run(MuxServer.java:241)
at com.sun.jini.thread.ThreadPool$Worker.run(ThreadPool.java:136)
at java.lang.Thread.run(Thread.java:595)
Caused by: java.lang.ClassNotFoundException: com.sun.jini.mahalo.TxnMgrProxy
at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at net.jini.loader.pref.PreferredClassLoader.loadClass(PreferredClassLoader.java:922)
at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:242)
at net.jini.loader.pref.PreferredClassProvider.loadClass(PreferredClassProvider.java:613)
at java.rmi.server.RMIClassLoader.loadClass(RMIClassLoader.java:247)
at net.jini.loader.ClassLoading.loadClass(ClassLoading.java:138)
at net.jini.io.MarshalInputStream.resolveClass(MarshalInputStream.java:296)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1544)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1466)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1699)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1908)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1832)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1719)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
at com.sun.jini.jeri.internal.runtime.Util.unmarshalValue(Util.java:221)
at net.jini.jeri.BasicInvocationDispatcher.unmarshalArguments(BasicInvocationDispatcher.java:1049)
at net.jini.jeri.BasicInvocationDispatcher.dispatch(BasicInvocationDispatcher.java:599)
... 17 more
A: So com.sun.jini.mahalo.TxnMgrProxy is contained in some jar that is on your CLASSPATH environment variable.
But you are probably using some script to start the server, and this most probably starts java by specifying a "-classpath" command-line switch, which takes precedence over your CLASSPATH environment variable.
http://java.sun.com/j2se/1.4.2/docs/tooldocs/windows/classpath.html
You can simulate this by executing:
javap -classpath someUnknownJar.jar com.sun.jini.mahalo.TxnMgrProxy
... and suddenly the class cannot be found anymore. So can you please try and find out the way the java VM of the client and server are started and provide the complete command line.
(If you are using some kind of script just add an "echo ..." in front of the java command and paste the output in here).
A: This looks like an RMI classloading issue. It appears that the server process is trying to unmarshal the TxnMgrProxy object that is getting passed to it (I don't know the specifics of the example, I'm kind of guessing from the stack trace). That object needs to be annotated with a codebase where the class definition can be found. You probably need to make sure that Mahalo is started with the java.rmi.server.codebase property pointing to a URL where mahalo-dl.jar (or some JAR holding the class definition) can be downloaded.
Even if the JAR is available locally, it might not be enough. The PreferredClassProvider (it's buried in the stack trace) usurps the normal Java classloader delegation scheme, so even if the class is there locally, it'll still want to pull the definition through the codebase.
These are tough problems to figure out. Hope I hit on something close to the answer. Good luck.
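For example, a hedged sketch of the relevant JVM properties (host, port, and jar location are illustrative assumptions; keep the rest of your usual Mahalo/Blitz start command as-is):
java -Djava.security.policy=policy.all -Djava.rmi.server.codebase=http://your-http-host:8080/mahalo-dl.jar <rest of your normal Mahalo launch command>
Then verify that the codebase URL actually serves the jar, e.g. by fetching it in a browser from another machine.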
A: Well, your java spaces server does not seem to find the class:
com.sun.jini.mahalo.TxnMgrProxy.
So I guess you just have to add Mahalo (should be included in the blitz distribution according to this: http://www.dancres.org/blitz/blitz_inst.html page) to your classpath when starting the server.
Please post some more information about how you are starting your server, if this advice does not help.
A: Please note my original post: yes, com.sun.jini.mahalo.TxnMgrProxy is in the class path.
If you are familiar with javap -- if you specify a fully qualified class name, it will determine whether or not it is on the class path.
this is the result that I get when running javap com.sum.jini.mahalo.TxnMgrProxy:
C:\dev\jini\blitz>javap com.sun.jini.mahalo.TxnMgrProxy
Compiled from "TxnMgrProxy.java"
class com.sun.jini.mahalo.TxnMgrProxy extends java.lang.Object implements net.jini.core.transaction.server.TransactionManager,net.jini.admin.Administrable,java.io.Serializable,net.jini.id.ReferentUuid{
    final com.sun.jini.mahalo.TxnManager backend;
    final net.jini.id.Uuid proxyID;
    static com.sun.jini.mahalo.TxnMgrProxy create(com.sun.jini.mahalo.TxnManager, net.jini.id.Uuid);
    public net.jini.core.transaction.server.TransactionManager$Created create(long) throws net.jini.core.lease.LeaseDeniedException, java.rmi.RemoteException;
    public void join(long, net.jini.core.transaction.server.TransactionParticipant, long) throws net.jini.core.transaction.UnknownTransactionException, net.jini.core.transaction.CannotJoinException, net.jini.core.transaction.server.CrashCountException, java.rmi.RemoteException;
    public int getState(long) throws net.jini.core.transaction.UnknownTransactionException, java.rmi.RemoteException;
    public void commit(long) throws net.jini.core.transaction.UnknownTransactionException, net.jini.core.transaction.CannotCommitException, java.rmi.RemoteException;
    public void commit(long, long) throws net.jini.core.transaction.UnknownTransactionException, net.jini.core.transaction.CannotCommitException, net.jini.core.transaction.TimeoutExpiredException, java.rmi.RemoteException;
    public void abort(long) throws net.jini.core.transaction.UnknownTransactionException, net.jini.core.transaction.CannotAbortException, java.rmi.RemoteException;
    public void abort(long, long) throws net.jini.core.transaction.UnknownTransactionException, net.jini.core.transaction.CannotAbortException, net.jini.core.transaction.TimeoutExpiredException, java.rmi.RemoteException;
    public java.lang.Object getAdmin() throws java.rmi.RemoteException;
    public net.jini.id.Uuid getReferentUuid();
    public int hashCode();
    public boolean equals(java.lang.Object);
    com.sun.jini.mahalo.TxnMgrProxy(com.sun.jini.mahalo.TxnManager, net.jini.id.Uuid, com.sun.jini.mahalo.TxnMgrProxy$1);
}
A: Make sure that you specify -Djava.security.policy=/wherever/policy.all and -Djava.security.manager= (the empty value is intentional). You may also have to have the RMI codebase server running.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: SAX vs XmlTextReader - SAX in C# I am attempting to read a large XML document and I wanted to do it in chunks vs XmlDocument's way of reading the entire file into memory. I know I can use XmlTextReader to do this but I was wondering if anyone has used SAX for .NET? I know Java developers swear by it and I was wondering if it is worth giving it a try and if so what are the benefits in using it. I am looking for specifics.
A: If you just want to get the job done quickly, the XmlTextReader exists for that purpose (in .NET).
If you want to learn a de facto standard (available in many other programming languages) that is stable and which will force you to code very efficiently and elegantly, but which is also extremely flexible, then look into SAX. However, don't waste your time unless you're going to be creating highly esoteric XML parsers. Instead, look for next-generation parsers (like XmlTextReader) for your particular platform.
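For instance, a minimal sketch of streaming a large file with XmlTextReader ("books.xml" is an illustrative path):
using System;
using System.Xml;

using (XmlTextReader reader = new XmlTextReader("books.xml"))
{
    while (reader.Read())
    {
        if (reader.NodeType == XmlNodeType.Element)
        {
            // Only the current node is held in memory, unlike XmlDocument
            Console.WriteLine(reader.Name);
        }
    }
}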
SAX Resources
SAX was originally written for Java, and you can find the original open source project, which has been stable for several years, here:
http://sax.sourceforge.net/
There is a C# port of the same project here (with HTML docs as part of the source download); it is also stable:
http://saxdotnet.sourceforge.net/
If you do not like the C# implementation, you could always resort to referencing COM DLLs via COMInterop using MSXML3 or later: http://msdn.microsoft.com/en-us/library/ms994343.aspx
Articles that come from the Java world but which probably illustrate the concepts you need to be successful with this approach (there may also be downloadable Java source code that could prove useful and may be easy enough to convert to C#):
*
*Output large XML documents, Part 1 (http://www.ibm.com/developerworks/xml/library/x-tipbigdoc.html)
*Output large XML documents, Part 2 (http://www.ibm.com/developerworks/xml/library/x-tipbigdoc2.html)
*Use a SAX filter to manipulate data (http://www.ibm.com/developerworks/xml/library/x-tipsaxfilter/)
It will be a cumbersome implementation. I have only used SAX back in my pre-.NET days, but it requires some pretty advanced coding techniques. At this point, it's just not worth the trouble.
Interesting Concept for a Hybrid Parser
This thread describes a hybrid parser that uses the .NET XmlTextReader to implement a parser that provides a combination of DOM and SAX benefits...
http://bytes.com/groups/net-xml/178403-xmltextreader-versus-dom
A: If you're talking about SAX for .NET, the project doesn't appear to be maintained. The last release was more than 2 years ago. Maybe they got it perfect on the last release, but I wouldn't bet on it. The author, Karl Waclawek, seems to have disappeared off the net.
As for SAX under Java? You bet, it's great. Unfortunately, SAX was never developed as a standard, so all of the non-Java ports have been adapting a Java API for their own needs. While DOM is a pretty lousy API, it has the advantage of having been designed for multiple languages and environments, so it's easy to implement in Java, C#, JavaScript, C, et al.
A: I believe there are no benefits to using SAX, for at least two reasons:
*
*SAX is a "push" model while XmlReader is a pull parser that has a number of benefits.
*Being dependent on a 3rd-party library rather than using a standard .NET API.
A: Personally, I much prefer the SAX model as the XmlReader has some really annoying traps that can cause bugs in your code that might cause your code to skip elements. Most code would be structured around a while(rdr.Read()) model, but if you have any "ReadString" or "ReadInnerXml()" within that loop you will find yourself skipping elements on the next iteration.
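To illustrate the trap, here is a minimal sketch (assuming a hypothetical items.xml with repeated item elements; the file and element names are made up):
using System;
using System.Xml;

class SkipDemo
{
    static void Main()
    {
        using (XmlTextReader rdr = new XmlTextReader("items.xml")) // hypothetical input
        {
            while (rdr.Read()) // moves to the next node on every pass
            {
                if (rdr.NodeType == XmlNodeType.Element && rdr.Name == "item")
                {
                    // ReadElementString consumes the element and leaves the
                    // reader positioned on the following node, so the Read()
                    // at the top of the loop then steps straight over it;
                    // that is how elements get silently skipped.
                    Console.WriteLine(rdr.ReadElementString());
                }
            }
        }
    }
}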
As SAX is event based, this will never happen, since you cannot perform any operations that would cause your parser to seek ahead.
My personal feeling is that Microsoft invented the notion that XmlReader is better, with the push/pull-model explanation, but I don't really buy it. Microsoft's claim is that you don't need to create a state machine with XmlReader; that doesn't make sense to me, but anyway, it's just my opinion.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127869",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: Source Safe 6.0d and Visual Studio 2005 project tree problems Whenever I try to add a new project to my SourceSafe repository it creates 3 folders with the same name nested within each other. There is only one folder on my drive yet in Sourcesafe there are 3??
Can anyone suggest what may be causing this?
Thanks
A: Try creating the project in VS2005 disconnected from source control, then creating the project folder in VSS, set the working folder correctly, add the files to sourcesafe from VSS, then lastly edit the source control bindings in VS2005 and check the bound project into source control.
A little kludgey but this is how I do it.
A: If you drag and drop a new project folder into VSS and do a recursive add, then that's just how it works. Otherwise you have to create your own root project folder in VSS and add each file one at a time to VSS by hand.
A: Well, that problem comes from Visual Studio, because Visual Studio by default saves the solution file in the my documents/...../.../vs 2008/projects/ location, and that address is also saved in the .sln file.
That's why every time you Get Latest within Visual Studio, it tries to create the same structure and makes another copy within the main project folder.
Solution: well, I'm still trying to figure out how to tackle it.
Cheers,
Genious
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to add Announcement list/webpart to Publishing Portal I have a Publishing Portal site and I need to add some announcements to some of the pages. I've read an article which says that I have to create an announcement list to be able to add an announcement web part, but I can't seem to find any resources on how I can add an announcement list.
Any help will be greatly appreciated.
TIA!
A: Your problem is that you have not activated the relevant feature on the site settings page. You need to go to the site collection site settings page. Then select Site Actions - manage site features
Then activate the feature called Team Collaboration lists. You will now be able to create an announcement list
A: From the home page of your site (or from any page really) you should see a "View All Site Content" link on the top of the navigation menu.
View All Site Content http://friendfeed.s3.amazonaws.com/86fed07f0809beefaeeaee0013ee2b952079bc09
Click on that link and it will show you a dashboard listing all of the SharePoint lists that have been provisioned for the current site. Click on the Create button to create a new SharePoint list.
Create new SharePoint List http://friendfeed.s3.amazonaws.com/6c0b244801826f8b3ee01811211b88668ba8f713
From there you will see the option to create an Announcements list (under the Communications header). Complete the wizard to create the list.
Once the list is created you can select Edit Page from the Site Actions menu on any SharePoint page in the site and then select a "Add a Web Part" on the web part zone you want to put your Announcements web part into. You should now see a web part listed with the same name as your Announcements list that you just created.
Select that web part to add it to the page and display.
Hope that helps. If this isn't the answer to your problem leave a comment or update your question with clarification and I will try to help.
A: Giving you direct instructions on how to create the list would most likely leave you more lost than ever. If this is a publishing portal, there's a lot more to learn beyond just creating a list. Content must be approved, and is versioned. I'd strongly advise you not to start poking around in there, as you run a large risk of messing up the portal. Don't get stressed by people demanding you perform such things without having received any training. Grab yourself a coffee, flip your boss the finger and watch some pertinent webcasts on http://office.microsoft.com/en-us/sharepointserver/FX101211721033.aspx
Hope this helps,
Oisin
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: What should the view file/directory structure be in ASP.NET MVC? I'm confused with how views are organized, and it is important to understand this as ASP.NET MVC uses conventions to get everything working right.
Under the views directory, there are subdirectories. Inside these subdirectories are views. I'm assuming that the subdirectories map to controllers, and the controllers act on the views contained within their subdirectories.
Is there an emerging expectation of what types of views are contained within these directories? For instance, should the default page for each directory be index.aspx? Should the pages follow a naming convention such as Create[controller].aspx, List[controller].aspx, etc? Or does it not matter?
A: View directory naming and file naming are important, because the ASP.NET MVC framework makes certain assumptions about them. If you do not conform to these assumptions, then you must write code to let the framework know what you are doing. Generally speaking, you should conform to these assumptions unless you have a good reason not to.
Let's look at the simplest possible controller action:
public ActionResult NotAuthorized()
{
return View();
}
Because no view name has been specified in the call to View(), the framework will presume that the view filename will be the same as the Action name. The framework has a type called ViewEngine which will supply the extension. The default ViewEngine is WebFormViewEngine, which will take that name and append an .aspx to it. So the full filename in this case would be NotAuthorized.aspx.
But in which folder will the file be found? Again, the ViewEngine supplies that information. With WebFormViewEngine, it will look in two folders: ~/Views/Shared and ~/Views/{controller}
So if your controller was called AccountController, it would look in ~/Views/Account
But there might be times when you don't want to follow these rules. For instance, two different actions might return the same view (with a different model, or something). In this case, if you specify the view name explicitly in your action:
public ActionResult NotAuthorized()
{
return View("Foo");
}
Note that with WebFormViewEngine, the "view name" is generally the same as the filename, less the extension, but the framework does not require that of other view engines.
Similarly, you might also have a reason to want your application to look for views and non-default folders. You can do that by creating your own ViewEngine. I show the technique in this blog post, but the type names are different, since it was written for an earlier version of the framework. The basic idea is still the same, however.
A: In regard to expected names for the views, I think that it's one of those things that each project or organization will try to standardize.
As you hinted at in your question, it's possible that some of these Views (or more precisely, the Actions that render them) become popular across the board, like for example the ones below that are common in RoR applications that adopt the REST paradigm (a rough controller skeleton following these names is sketched after the list):
*
*/orders/ (i.e. index)
*/orders/show/123
*/orders/edit/123
*/orders/update/123
*/orders/new
*/orders/create
*/orders/destroy/123
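To make that concrete, here is a rough skeleton of a controller following those action names (just a sketch; the OrdersController name and the trivial bodies are hypothetical):
using System.Web.Mvc;

// Hypothetical controller whose actions mirror the URL list above.
public class OrdersController : Controller
{
    public ActionResult Index()         { return View(); }                        // /orders/
    public ActionResult Show(int id)    { return View(); }                        // /orders/show/123
    public ActionResult Edit(int id)    { return View(); }                        // /orders/edit/123
    public ActionResult Update(int id)  { return RedirectToAction("Show", new { id }); } // /orders/update/123
    public ActionResult New()           { return View(); }                        // /orders/new
    public ActionResult Create()        { return RedirectToAction("Index"); }     // /orders/create
    public ActionResult Destroy(int id) { return RedirectToAction("Index"); }     // /orders/destroy/123
}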
The choice/standardization of the Views is largely dependent on how you model your application (to say the obvious) and how fine-grained you want to go. The closer you map your controllers to individual model classes (cough...resources...cough), the shorter your actions will tend to be and more easily you will be able to follow a standard set of actions (as in the above example).
I also believe that shorter actions help pushing more and more of the model business logic into the models themselves, where it belongs.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Which design patterns are underutilized? Is there a specfic Gang Of Four Design Pattern that you frequently use, yet hardly see used in other peoples designs? If possible, please describe a simple example where this pattern can be useful. It doesn't have to necessarily be a Gang Of Four pattern, but please include a hyperlink to the pattern's description if you choose a non-GoF pattern.
Put another way:
What are some good/useful design patterns that I, or someone else who does have a passing knowledge of the main patterns, may not already know?
A: Steve Yegge wrote a (typically) long blog entry about the Interpreter Pattern, making the claim that this pattern is the only GoF pattern that can make code "smaller", and that it is criminally underutilized by programmers who otherwise are quite comfortable with the other GoF patterns. I am one of those programmers - I've never used the Interpreter pattern, although I recognize its importance to things like DSLs. Anyway, it's a very thought-provoking essay if you have the intestinal fortitude to read an entire Yegge post.
A: Strategy pattern, maybe? I don't see a lot of people using it, and it's quite useful when calculations change or can be combined. I use it when a part of the calculation can be replaced by another calculation, often in programs that compute enterprise rates for products. (A minimal sketch follows the links below.)
Here is some documentation :
*
*Wikipedia
*DoFactory
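And a minimal sketch of the idea (the rate names and numbers here are made up for illustration):
// The rate calculation is the interchangeable "strategy".
public interface IRateStrategy
{
    decimal Calculate(decimal basePrice);
}

public class StandardRate : IRateStrategy
{
    public decimal Calculate(decimal basePrice) { return basePrice; }
}

public class EnterpriseRate : IRateStrategy
{
    // Hypothetical enterprise discount of 15%.
    public decimal Calculate(decimal basePrice) { return basePrice * 0.85m; }
}

public class PriceQuote
{
    private readonly IRateStrategy _rate;

    public PriceQuote(IRateStrategy rate) { _rate = rate; }

    // Swapping the strategy changes the calculation without touching this class.
    public decimal PriceFor(decimal basePrice) { return _rate.Calculate(basePrice); }
}

// Usage: decimal price = new PriceQuote(new EnterpriseRate()).PriceFor(100m);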
A: Visitor has a bad reputation, partly due to some real problems
*
*cyclic dependency between Vistor and Visited hierarchies
*it's supposed to ruin encapsulation by exposing Visited classes internals
and partly due to the exposition in the GOF book, which emphasizes traversal of a structure rather than adding virtual functions to a closed hierarchy.
This means it doesn't get considered where appropriate, e.g. for solving the double dispatch problem in statically typed languages. Example: a message or event passing system in C++, where the types of messages are fixed, but we want to extend by adding new recipients. Here, messages are just structs, so we don't care about encapsulating them. SendTo() doesn't know what type of Message or MessageRecipient it has.
#include <iostream>
#include <ostream>
using namespace std;
// Downside: note the cyclic dependencies, typically expressed in
// real life as include file dependency.
struct StartMessage;
struct StopMessage;
class MessageRecipient
{
public:
// Downside: hard to add new messages
virtual void handleMessage(const StartMessage& start) = 0;
virtual void handleMessage(const StopMessage& stop) = 0;
};
struct Message
{
virtual void dispatchTo(MessageRecipient& r) const = 0;
};
struct StartMessage : public Message
{
void dispatchTo(MessageRecipient& r) const
{
r.handleMessage(*this);
}
// public member data ...
};
struct StopMessage : public Message
{
StopMessage() {}
void dispatchTo(MessageRecipient& r) const
{
r.handleMessage(*this);
}
// public member data ...
};
// Upside: easy to add new recipient
class RobotArm : public MessageRecipient
{
public:
void handleMessage(const StopMessage& stop)
{
cout << "Robot arm stopped" << endl;
}
void handleMessage(const StartMessage& start)
{
cout << "Robot arm started" << endl;
}
};
class Conveyor : public MessageRecipient
{
public:
void handleMessage(const StopMessage& stop)
{
cout << "Conveyor stopped" << endl;
}
void handleMessage(const StartMessage& start)
{
cout << "Conveyor started" << endl;
}
};
void SendTo(const Message& m, MessageRecipient& r)
{
// magic double dispatch
m.dispatchTo(r);
}
int main()
{
Conveyor c;
RobotArm r;
SendTo(StartMessage(), c);
SendTo(StartMessage(), r);
SendTo(StopMessage(), r);
}
A: The Visitor pattern seems to be hard to understand for many new developers. I was using it for calculations where I had to get a value across Country > State > City > House. This way I didn't need to care how much data was in each sub-collection; I just chose the right visitor and got the final answer whatever the number of countries, states or cities.
*
*Visitor
*Visitor wiki
A: If we're talking non-GOF patterns then Monitor Object is the 'Hello World' of concurrent OO programming. I'm amazed how many programmers manage not to have heard of it, or prefer to design their own ad hoc synchronization schemes.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: Multiple ModificationForms in Sharepoint Workflow I am working on a SharePoint Server 2007 state-machine workflow. Until now I have a few states and a custom Association/InitiationForm which I created with InfoPath 2007.
At the moment I have a problem with modification forms. The modification link in the state page of my workflow is shown and leads on click to my InfoPath form. If I click the "Submit" button the form is closed. Everything works fine.
Now I tried to add a second ModificationForm to my workflow. So I created a new InfoPath form and added it to the workflow in the same way as the first one. The workflow has no errors in the building or deploying process.
But if I now try to click the second modification link in the state page, the form is not shown. Instead of my form the text "The form has been closed." is shown.
I looked in the central administration and the InfoPath form is known under "Manage form templates". I gave every Modification in the Workflow.xml its own Guid. I used the following ModificationUrl: ModificationUrl="_layouts/ModWrkflIP.aspx"
Does anybody know step by step how to use two or more ModificationForms in my workflow?
Thank you in advance.
A: Thank you very much. I found the following Error Message in the Logfile:
"Form load failed with a validation error"
I searched the web for solutions to this problem and found this site:
http://social.msdn.microsoft.com/forums/en-US/sharepointworkflow/thread/83264f93-ebe3-49ec-bd6b-95ee02df4d8a/
I had two schema files and should have just used one for both forms, so I had to use the same data source. That was all. Thank you for the hint.
A: Look in your ULS logs for the error message. It will be listed there 100%. The category is "Forms Services" - the logs are located under the 12 hive in LOGS\
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Best way to register js for modularity in User Control I have a control that is organized like this
and I want to have the javascript registered on the calling master pages, etc., so that anywhere this control folder is dropped and then registered, it will know how to find the URL to the js.
Here is what I have so far (in the user control):
protected void Page_Load(object sender, EventArgs e)
{
if(!Page.IsClientScriptBlockRegistered("jqModal"))
Page.ClientScript.RegisterClientScriptInclude("jqModal", ResolveClientUrl("~/js/jqModal.js"));
if (!Page.IsClientScriptBlockRegistered("jQuery"))
Page.ClientScript.RegisterClientScriptInclude("jQuery", ResolveClientUrl("~/js/jQuery.js"));
if (!Page.IsClientScriptBlockRegistered("tellAFriend"))
Page.ClientScript.RegisterClientScriptInclude("tellAFriend", ResolveClientUrl("js/tellAFriend.js"));
}
Any ideas?
A: You can use a helper class with static method:
public static class PageHelper {
    public static void RegisterClientScriptIfNeeded( Page page, string key, string url ) {
        if( false == page.IsClientScriptBlockRegistered( key )) {
            // ResolveClientUrl is an instance method, so call it on the page
            page.ClientScript.RegisterClientScriptInclude( key, page.ResolveClientUrl( url ));
        }
    }
}
or you can have a similar instance method in some base class for page/webcontrol/usercontrol, which will do the same thing.
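Either way, usage from the user control's Page_Load would then collapse to something like this (a sketch reusing the paths from the question):
protected void Page_Load(object sender, EventArgs e)
{
    PageHelper.RegisterClientScriptIfNeeded(Page, "jQuery", "~/js/jQuery.js");
    PageHelper.RegisterClientScriptIfNeeded(Page, "jqModal", "~/js/jqModal.js");
    PageHelper.RegisterClientScriptIfNeeded(Page, "tellAFriend", "~/js/tellAFriend.js");
}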
A: I can't see the image you posted.
You could also use Context.Items to ensure that the item is only added once per request and render the javascript through the control itself, although I think the RegisterClientScript approach is great too.
// Format string for the script include tag (SCRIPTTAG was not defined in the original post)
private const string SCRIPTTAG = "<script type=\"text/javascript\" src=\"{0}\"></script>";

protected override void Render(HtmlTextWriter writer)
{
    base.Render(writer);
    string[] items = new string[] { "~/js/jqModal.js", "~/js/jQuery.js", "~/js/tellAFriend.js" };
    // Check if each script has already been rendered during this request.
    foreach (string jsFile in items)
    {
        if (!Context.Items.Contains(jsFile))
        {
            // Specify that the script has been rendered during this request.
            Context.Items.Add(jsFile, true);
            // Write the script tag to the page via the control.
            writer.Write(string.Format(SCRIPTTAG, ResolveUrl(jsFile)));
        }
    }
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to create reverse index for full text search in Common Lisp? What is the best way to create reverse index for full text search in Common Lisp ? Is there any proven and (relatively) bug-free open library for SBCL or LispWorks ?
I managed to create my own library on top of AllegroCache - it was fairly easy to create, reliable and fast, but lacks advanced search options (phrases, wildcarded words in phrases, etc).
Is there any open library that can be used with SBCL or LispWorks so I don't have to reinvent the wheel by writing my own ?
A: Montezuma is the same thing as Lucene, but written in Lisp.
I don't think anyone uses it actively, nor that it's heavily tested... but it's a good start if you want to work on the thing itself. It already has the most-used features. Read the Google group archive to get a feel...
A: I know you're asking about Common Lisp, but there are a number of inverted-text-search oriented applications. One well-known and respected one is Lucene.
Could a solution be to use that search engine, but interface your Common Lisp code via a web-service API? (xml-rpc, xml over http or just text over http)?
Is there a further reason why you'd like it to be in Common Lisp? Packages like Lucene may cover all the search related features you need, while using a remote api may still allow you to perform your more complex logic in Common Lisp.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How to use win32api from IronPython Writing some test scripts in IronPython, I want to verify whether a window is displayed or not. I have the pid of the main app's process, and want to get a list of window titles that are related to the pid.
I was trying to avoid using win32api calls, such as FindWindowEx, since (to my knowledge) you cannot access win32api directly from IronPython. Is there a way to do this using built-in .net classes? Most of the stuff I have come across recommends using win32api, such as below.
.NET (C#): Getting child windows when you only have a process handle or PID?
UPDATE: I found a work-around to what I was trying to do. Answer below.
A: As of IronPython 2.6 the ctypes module is supported. This module provides C compatible data types, and allows calling functions in DLLs. Quick example:
import ctypes
buffer = ctypes.create_string_buffer(100)
ctypes.windll.kernel32.GetWindowsDirectoryA(buffer, len(buffer))
print buffer.value
A: The article below shows how to access the win32api indirectly from IronPython. It uses CSharpCodeProvider CompileAssemblyFromSource method to compile an assembly in memory from the supplied C# source code string. IronPython can then import the assembly.
Dynamically compiling C# from IronPython
A: It's like asking if you can swim without going into the water. If you need information from Windows, the only option is to use the win32api. There are lots of examples to be found on how to do so.
If you don't like this answer, just leave a comment in your question and I will remove this answer, so your question will remain in the unanswered questions list.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Compact Framework : any Finger Friendly GUI? I'm developing a little tool on my Pocket PC using the WM6 SDK, but I would like to implement a finger-friendly user interface (iPhone-like).
So I'm looking for a free .NET framework that offers the possibility to easily integrate a finger-friendly interface for Windows Mobile 6 Pro.
Any ideas?
EDIT: Finger friendly means big icons, big buttons, scrollable screens with a simple touch of the thumb... because the WinForms in the Compact Framework are made for the stylus, not fingers!!
A: I know of no such interface API.
I would code such an interface from scratch, overriding Paint and mouse events. If you need fancier drawing tools than the Compact Framework provides, you should look into P/Invoke to access GDI+.
A: You should really check out Resco's MobileForms Toolkit 2009.
I bet their controls are exactly what you are looking for. Plus they have a whitepaper and videos to show off the controls.
A: I am not sure it is what you are looking for (I didn't have time to examine it myself yet, but I definitely intend to); this UI Framework looks interesting:
http://code.msdn.microsoft.com/uiframework
A: Check out the Fluid windows mobile controls available at http://fluid.codeplex.com/
This might be what you are looking for, and its open source.
A: Any current readers on this thread should check out SlideUI (http://www.devslide.com/products/slideui). It's a current (and supported) product which offers touch friendly (iphone-like) scrolling and controls.
A: I'm not entirely sure what you're asking here... Windows Mobile 6.0 Pro is touch-screen enabled, so you should simply have to create your project targeting the Windows Mobile 6.0 Pro (note, however, that your application will not be compatible with Windows Mobile 6.0 Standard devices).
A: I know exactly what you are talking about. All the .NET controls are designed for the stylus. When you make them bigger for the finger, there is no guarantee they will respond well. Add to that the fact that every hardware device's sensitivity is different, and it's even harder.
I recently built an application attempting to incorporate some touch-like functionality. It was a pain having to hand-code all this stuff.
The problem with a 3rd-party library, as opposed to one built into Windows Mobile, is that everyone is designing their own library and navigation techniques. Hopefully MS will wise up on this front.
http://sites.google.com/site/nebowiki/
A: If you are developing finger friendly apps, your target device needs a process to handle finger input as opposed to the stylus. HTC devices (Such as the Kaiser, Mogul, Touch Pro, etc.) use TouchFlo for this purpose. There are a few different versions of TouchFlo and I'm not sure if there is an SDK, but you need to incorporate it into whatever you program. xda-developers.com will have lots of info about it.
A: It IS amazing that with WM6.1 Pro, .NET CF 3.5 and VS2008, all we have available are the basic stylus-sized controls, which are spartan in the extreme. I'm about ready to chew my hand off rather than use them in an app.
So where is the third-party collection of controls that all WM developers are flocking to, to provide touch-friendly apps?
A: Ugly is truly the correct word for most (mine included) mobile win apps.
I am developing for an older piece of hardware with a mono screen which makes it even worse.
Take a look here:
http://www.windowsfordevices.com/news/NS9328208835.html
and here:
http://msdn.microsoft.com/en-us/library/dd630622.aspx
This is not free, but it is affordable - some of the screen shots are pretty nice looking:
http://www.basic4ppc.com/?gclid=CIiO1di1nJoCFRAhDQodYX8-9A
Anyway...sorry if this was just googledragging - maybe it had something you had missed.
--Joe
A: Finger friendliness is a result of the touch-screen technology (capacitive screens are less accurate, but require zero pressure; resistive screens require physical pressure and are harder to swipe, flick, etc.)
With Windows Mobile 6.5, they have introduced a system gestures library (and if you'd rather not have to P/Invoke it, there is a sample wrapper on MSDN Code Gallery). Theoretically, it would be possible to write to this new library, and maybe emulate the gestures on pre-WM6.5 devices, if required.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: ASP .Net server control events order Which are the events of an ASP .Net server control and how does their order relate to the containing page's events?
The concrete problem is that I am looking for an event inside the server control that fires before the Page_Load event of the containing page.
A: With regards to how they relate to Page events, at least for Init and Load:
"Although both Init and Load recursively occur on each control, they happen in reverse order. The Init event (and also the Unload event) for each child control occur before the corresponding event is raised for its container (bottom-up). However the Load event for a container occurs before the Load events for its child controls (top-down)."
From http://msdn.microsoft.com/en-us/library/ms178472.aspx
A: This should help: http://msdn.microsoft.com/en-us/library/ms178472.aspx
You're looking for PreLoad, i think.
A: Check out this page. It will let you know what events fire when. Looks like you could use the PreLoad event.
A: It's a bit of a problem, because the control can be placed inside the page after the "Page_Load" event.
In one of my old projects, I derived all pages from my class "PageEx : System.Web.UI.Page", which had a property "CurrentState" of type "enum PageStates { PreInit, Init, PostInit, PreLoad, /* etc... */ }". Then all my controls were able to recognize the state of the page life cycle.
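A minimal sketch of that idea (the member names are illustrative, not from the original project):
using System;

public enum PageStates { None, PreInit, Init, PostInit, PreLoad, Load, PreRender, Unload }

public class PageEx : System.Web.UI.Page
{
    // Controls can cast their Page to PageEx and read CurrentState to see
    // how far the page life cycle has progressed.
    public PageStates CurrentState { get; private set; }

    protected override void OnPreInit(EventArgs e)
    {
        CurrentState = PageStates.PreInit;
        base.OnPreInit(e);
    }

    protected override void OnInit(EventArgs e)
    {
        CurrentState = PageStates.Init;
        base.OnInit(e);
    }

    protected override void OnLoad(EventArgs e)
    {
        CurrentState = PageStates.Load;
        base.OnLoad(e);
    }

    // ...and so on for OnPreRender, OnUnload, etc.
}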
A: There's a longer list at ASP.NET 2.0 Event Order (note this is for 2.0).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Why Aren't All My WinForm Controls and Forms Localizing? Greetings all,
I'm trying to localize a .NET/C# project. I'm using string resource files and setting the "Localizable" property on all my forms to 'True'. The assembly culture is empty. The resx/resource files are successfully passed through Lingobit, resgen, and al.
When running the localized executables, some of my forms are not localized (ones in which I know the Localizable flag is set) while others are. There are even a few forms which are localized but a button or two isn't. I cannot tell any difference between the non-localized forms/controls with the localized ones.
Anyone have an idea what I might be missing? Thanks!
A: When you open the form in Visual Studio, if you change the Language property of the Form to the language you are localizing to, does the same problem exist there? Could it be possible that the non-localized forms/buttons still have the English text set even in the localized resources?
A: Yeah, I'd go with Andy on this and be suspicious of the contents of the resource files. We dabbled with localisation for a time, and encountered a number of issues, but this certainly wasn't one of them.
If that isn't it, then how are you testing your app? If you haven't tried this already I'd suggest firing up a set of VMs with foreign language versions of Windows installed (rather than just changing the language settings on your machine) and seeing if that makes any difference.
A: Okay, I figured it out. You guys were correct. We were not generating the translated resx files correctly from Lingobit. Some of the files would get translated while others had the English text still in the resx.
Thanks for your help!
EDIT: Just to expand upon this, we specifically were messing up the al.exe command which takes the binary .resources file and creates a satellite assembly adding it to the executable's manifest. In the /embed command, you have to bind the resources file to a namespace. Our top-level name spaces were mapped correctly, but we weren't binding to sub-level namespaces on all of the resource files.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Working copy XXX locked and cleanup failed in SVN I get this error when I do an svn update:
Working copy XXXXXXXX locked Please
execute "Cleanup" command
When I run cleanup, I get
Cleanup failed to process the
following paths: XXXXXXXX
How do I get out of this loop?
A: I had this problem because external folders do not want to be linked into an existing folder. If you add an svn:externals property line where the destination is an existing (versioned or non-versioned) folder, you will get the SVN Working Copy locked error. Here a cleanup will also tell you that everything is all right, but updating still won't work.
Solution: Delete the troubling folder from the repository and make an update in the root folder where the svn:externals property is set. This will create the folder and all will be fine again.
This problem arose for me because svn:externals for files requires the destination folder to be version controlled. After I noticed that this doesn't work across different repositories, I swapped from external files to external folders and got into this mess.
A: The easiest way to do this is to show hidden folders and then open the .svn folder. You should see a zero KB file named "lock"; deleting this will fix the problem.
A: One approach would be to:
*
*Copy edited items to another location.
*Delete the folder containing the problem path.
*Update the containing folder through Subversion.
*Copy your files back or merge changes as needed.
*Commit
Another option would be to delete the top level folder and check out again. Hopefully it doesn't come to that though.
A: I came across the exact same issue using SVN 1.7 and none of the fixes mentioned above worked.
First, make sure you back up all your edited content.
After spending a couple of hours (didn't redownload everything as my branch is over 6gb in size), I found that there is a db file called "wc" in the .svn folder of your branch.
Open up the db file using any db manager (I used Firefox's SQLite Manager plugin) and navigate to the WC_LOCK table. This table will have the entries for the acquired locks. Delete the records from the table and you're done :)
A: For me, the trick was to run svn cleanup at the top of my working copy, not in the folder where I'd been working the whole time before the problem occurred.
A: A colleague at work constantly sees this message, and for him it's because he deleted a directory under SVN version control without deleting it from SVN, and then created a new directory in its place not under version control, with the same name.
If this is your problem...:
There are different ways to fix it, depending on how/why the directory was replaced.
Either way, you will likely need to:
A) Rename the existing directory to a temporary name
B) Do an SVN revert to recover the directory deleted from the file system, but not from SVN
From there, you would either
A) Copy the relevant files into the directory that was deleted
B) If you had a significant change of contents in the directory, do an SVN delete on the original, commit, and rename your new directory back to the desired name, followed by an SVN add to get that one under version control.
A: For me none of the above solutions worked.
I found a solution by breaking locks.
When I performed svn cleanup, I selected "Break Locks" along with "Clean up working copy status".
A: When I have this problem, I find running the cleanup command directly on the problem path generally seems to work. Then I'll run cleanup from the working root again, and it'll complain about some other directory. I just repeat until it stops complaining.
A: If you're on a Windows machine, view the repository through a browser and you may well see two files with the same filename but using different cases. Subversion is case sensitive and Windows isn't, so you can get a lock when Windows thinks it's pulling down the same file and Subversion doesn't. Delete the duplicate file names in the repository and try again.
A: I did it by just creating a new folder, checking out the project, copying the updated files to the new folder.
It was fixed with a fresh checkout.
A: This one worked for me.
*
*Go to the root folder,
*Right click and cleanup
*Check all available options
*Press ok
After clean up it will allow you to update to the latest version.
A: Look in your .svn folder, there will be a file in it called lock. Delete that file and you will be able to update. There may be more lock files in the .svn directory of each subdirectory. They will need deleting also. This could be done as a batch quite simply from the command line with e.g.
find . -name 'lock' -exec rm -v {} \;
Note that you are manually editing files in the .svn folder. They have been put there for a reason. That reason might be a mistake, but if not you could be damaging your local copy.
SOURCE : http://www.svnforum.org/2017/viewtopic.php?p=6068
A: Are you using TortoiseSVN and just upgraded? I've had that problem before when moving from 1.4 to 1.5 and not rebooting. (Try a reboot).
The reason you need to reboot is because the cache file gets all funky.
Otherwise, to just move on, export that working copy into a new folder (don't copy the .svn hidden folders), re-checkout the project, and move all your code back, then proceed with your commit.
A: just delete the .svn folders, then run a cleanup on the parent directory. Works perfectly!!
A: In Versions under Mac OS:
Action -> Cleanup working copy locks at...
A: I often get such an issue. My pattern that causes cleanup problems.
*
*I open image file in viewer.
*I delete image file/folder.
*I am trying to commit/update
Closing image viewer where deleted file is opened solves the problem.
Maybe other software can block cleanup the same way.
In general. I believe restarting computer may help in such cases.
A: In my case I solved it by manually deleting the lock record in the WC_LOCK table of the SQLite ".svn\wc" file.
I opened the "WC" file with SQLite editor and executed
delete from WC_LOCK
Following eakkas's comment, you might need to delete all the entries from WORK_QUEUE table as well.
A: For me, it was actually Tortoise's fault, sort of. Tortoise just complained "cannot clean up, run clean up", but when I ran the command line (svn cleanup), it clearly told me that it couldn't delete some files that were in use, the solution to which was obvious. Once I closed Visual Studio (which was keeping the files open), then the cleanup worked fine.
Other programs can also keep files open in the repo causing this issue. Excel holding an xls open was a culprit in another instance so it may be wise to close all programs that may be using anything in the repo or even rebooting to force programs to close out and then trying cleanup again.
A: Easiest way ever:
*
*Go to the parent directory (folder) of the project.
*Press right-click
*Press on TortoiseSVN, then press Clean up...
*The Clean up dialog will appear automatically
*Select Clean up working copy status, Break locks, Fix time stamps, Vacuum pristine copies, Refresh shell overlays, Include externals
*Press OK
You did your job successfully.
Check the screen shots for your reference.
First step:
Second step:
Enable the Break locks option (the second check box in the cleanup popup window)
Hope this will help you a lot.
A: SVN normally updates its internal structure (.svn/prop-base) for the files in a folder before the actual files are fetched from the repository. Once the files are fetched this will be cleared up. Frequently the error is thrown because the update failed or was prematurely cancelled while in progress.
*
*Check any files are listed under .svn/prop-base directory
*Remove any files which are not under the folder
*Cleanup
*Update
Now the update should work.
A: Had the same problem because I exported a folder under a version-controlled folder. Had to delete the folder from TortoiseSVN, then delete the folder from the filesystem (TortoiseSVN does not like unversioned subfolders ... why not???)
A: I had this under TortoiseSVN and the error was related to a new directory I'd created under a new project. I had just created this project, so there was no way this directory had existed before. I looked in the repository browser and the new folder was indeed already in the repository, but TortoiseSVN didn't show it as committed.
In order to get around it, since I'd just created the folder anyway, I deleted it in the repository, and then did a commit. It worked fine.
Since I did this outside of Visual Studio, I then had to restart Visual Studio for it to figure everything out again.
A: Start Search....Lock...Select all files listed and delete..fixed
A: the following should do:
svn status | grep ". L" | sed 's/.* \(.*\)$/\1/' | awk '{print length($1),$1}' | sort -nr | awk '{print "pushd " $2 "; svn cleanup ; popd"}' | sh
A: Do not delete your solution!
In the .svn folder you have a file called lock; it is 0 bytes long.
You can delete all these files from all the .svn folders in your solution and it will work.
It worked in my case.
A: If you're on Linux, try this:
find "/the/path/to/your/directory" -name .svn -type d | xargs chmod 0777 -R
Then run the cleanup command on that directory, then try to update.
A: In-place unversioning of the files, and a fresh checkout into the same location, has solved this problem for me.
In TortoiseSVN, to do an in-place unversioning, right-drag the root folder of the working copy from the file list onto itself in the directory tree, and choose "SVN Export versioned items here" from the pop-up menu. TortoiseSVN notices that the destination is the same as the source, and suggests unversioning the working copy.
After unversioning, do a fresh checkout into the same folder (which now contains an unversioned copy of all the files you had). TortoiseSVN will warn you that you are checking out into an existing folder, but you can go ahead.
After this, cleanups, updates and other operations worked without a hitch. Since both of the above steps preserve local modifications, there should not be any loss of information (but backing the working copy up before this may nevertheless be a good idea).
One warning: If the working copy contains mixed versions or uncommitted property changes, that information WILL be lost. For me, this is not a common occurrence, and given the choice of a corrupt working copy or losing uncommitted property changes, I tend to opt for the latter.
A: I had this problem where the "clean up" worked, but the "update" would continue to fail. The solution that worked was to delete the folder in question via Windows Explorer, not TortoiseSVN's delete (which marks the deletion as something to commit to the repository, and then I did a "checkout" to essentially "update" the folder from the respository.
More info on the difference between an O/S delete and an SVN delete here:
http://tortoisesvn.net/docs/release/TortoiseSVN_en/tsvn-dug-rename.html
Notably:
When you TortoiseSVN → Delete a file, it is removed from your working copy immediately as well as being marked for deletion in the repository on next commit.
And:
If a file is deleted via the explorer instead of using the TortoiseSVN context menu, the commit dialog shows those files and lets you remove them from version control too before the commit. However, if you update your working copy, Subversion will spot the missing file and replace it with the latest version from the repository.
A: I did the following to fix my issue:
*
*Renamed the offending folder by placing an "_" in front of the folder name.
*Did a "Clean Up" of the parent folder.
*Renamed the offending folder back to its original name.
*Did a commit.
A: In solution explorer, right click on the project, in the opening sub-menu click on subversion and select clean-up. It will solve the problem, as it did for me. Hope it will work.
A: To do the clean up
*
*Delete the .svn folder.
*Do the svn checkout in the root folder.
*Try performing the clean up operation.
This got my issue resolved.
A: For me, the problem was a completely full disk drive (Linux inodes in my case); when I deleted some folders it started working again.
The error was the following (on any svn action):
$ svn cleanup
svn: E155004: Run 'svn cleanup' to remove locks (type 'svn help cleanup' for details)
svn: E155004: Working copy locked; try running 'svn cleanup' on the root of the working copy ('/my/directory') instead.
svn: E155004: Working copy '/my/directory' locked
svn: E200030: sqlite[S14]: unable to open database file
svn: E200030: Additional errors:
svn: E200030: sqlite[S14]: unable to open database file
A: @Chuck's solution wasn't practical for me. The first time I had the problem, it worked but also meant lots of extra work. The second time, I had changed loads of files while I was using my notebook off the network. I couldn't see myself going folder by folder after the changed files. I put my hope in Tortoise and it worked. See how:
Environment Was:
*
*Visual Studio 2008
*Ankhsvn
Procedure:
*
*First I couldn't commit; it said that I needed to clean up
*Second, I couldn't clean up; there was a folder outside of svn - "bin"
*I downloaded the latest Tortoise version, tried it, and it didn't work because of the damned folder.
*Renamed that folder and then I could update the local repository with the latest version.
*A couple of files came in.
*Did the commit and it worked.
A: I had a file in my root directory that was messing it up. (No lock files, svn cleanup failed, etc.) My whole checkout is > 2GB with slow network speeds, so checking everything out again wasn't a great option for me.
What worked for me:
*
*Reverted & reverted change in the messed up working copy (#1).
*Checked out another copy of the repo (#2) with --depth empty
*Added and committed the file in the new working copy (#2).
*Updated in the original working copy (#1).
Seemed to be back to normal for me.
A: Updating the directory permissions (granting write access) solves the problem as well.
chmod +w <dir_name>
A: I had the same problem. Seems it has been fixed in the latest versions.
I have updated my Tortoise SVN to the latest version (1.7.11) and clean up has worked well.
You can download the latest version here: download Tortoise SVN.
A: I know this is a really old thread but I maintain that:
The easiest and safest method to fix this is to delete your hidden ".svn" folder and check everything out again.
It fixes most problems when svn screws around, and it should keep local changes (marked as "conflicted") when you check out the head revision again.
A: Clean up certainly is not enough to solve this issue sometimes.
If you use TortoiseSVN v1.7.2 or greater, right click on the parent directory of the locked file and select TortoiseSVN -> Repo Browser from the menu. In the Repo Browser GUI, right click the file that is locked and there will be an option to remove the lock.
A: Steps :
*
*Close all files being edited from the svn folder
*Close Eclipse or any editor which is using a folder or file from the svn directory.
*Right click on the svn checkout folder and click on release lock.
*Right click on the svn checkout folder and click on clean up.
*Your SVN is now ready for SVN commit and update operations.
Cheers :)
A: Today I experienced the above issue, saying
svn: run 'svn cleanup' to remove locks (type 'svn help cleanup' for
details)
And here is my solution, got working
*
*Closed Xcode IDE, from where I was trying to commit changes.
*On Mac --> Go to Terminal --> type below command
svn cleanup <Dir path of my SVN project code>
example:
svn cleanup /Users/Ramdhan/SVN_Repo/ProjectName
*Hit enter and wait for cleanup done.
*Go To XCode IDE and Clean and Build project
*Now I can commit my all changes and take update as well.
Hope this will help.
A: One reason for this problem I haven't seen in the answers is that an update or checkout may have been done with different user/permissions, for example with sudo.
A: First of all I tried many solutions; then I just deleted the folder in which I was having trouble.
And then performed an SVN Update.
That worked for me.
I would not recommend it, but nothing else worked. :(
A: While doing svn update using TortoiseSVN, the process got interrupted and stopped, complaining that a file was in use.
Next it asked me to run the CleanUp command on the folder.
I tried to run the CleanUp command, but it failed.
Then I found a command shell which was using the folder's files.
So I closed the command shell and checked whether any editor was using files related to it; those need to be closed as well.
Again, I tried CleanUp on the folder with the options Break locks, Revert changes, and Clear working copy status. The CleanUp went through successfully.
Then I was finally able to update my svn folder.
A: The clean up did not work for me no matter how many ways I tried. Instead, from Visual Studio, I committed each folder individually. Then I committed the top folder and was successful.
A: As exactly this answer is not listed here: my solution was to close my IDE (in this case Netbeans). Seems like the IDE had locked the file.
A: I am sure this will work fine for you:
Go to the top-level SVN folder.
Right click on the folder (that has your svn files) -> TortoiseSVN -> CleanUp
This will surely solve your problem.
A: Spotlight is its usual rubbish self at finding the lock files recursively.
EasyFind on Mac App Store works
http://itunes.apple.com/gb/app/easyfind/id411673888?mt=12
search for 'lock'
Select all / Delete
A: These types of problems can be avoided in the first place by using svn copy and svn move etc commands when making changes to your project structure. Remember svn only checks for changes inside files already added to subversion, not changes to the physical directory structure. Please see http://svnbook.red-bean.com/en/1.7/svn.tour.cycle.html
Further, upon committing changes svn first stores a "summary" of changes in a todo list. Upon performing the svn operations in this todo list it locks the file to prevent other changes while these svn actions are performed. If the svn action is interrupted midway, say by a crash, the file will remain locked until svn could complete the actions in the todo list. This can be "reactivated" by using the svn cleanup command. Please see http://svnbook.red-bean.com/en/1.7/svn.tour.cleanup.html
A: In my case, a Windows 7 machine running TortoiseSVN failed to rename a folder completely. No combination of cleanup, update, or rename operations would fix the problem. The folder was originally created with different case and Tortoise or Subversion would not change it to what was in the repository.
My solution was to:
*
*Copy the folder through Windows Explorer (without Subversion control
files) outside the project.
*Delete and commit the folder through TortoiseSVN.
*Copy the folder back with the correct (current) name through Windows Explorer.
*Add the folder back into the repository through TortoiseSVN.
I performed cleanups after each step. Dreadful solution, but it worked for me.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "597"
}
|
Q: Deleting Custom Event Log Source Without Using Code I have an application that has created a number of custom event log sources to help filter its output. How can I delete the custom sources from the machine WITHOUT writing any code as running a quick program using System.Diagnostics.EventLog.Delete is not possible.
I've tried using RegEdit to remove the custom sources from [HKEY_LOCAL_MACHINE\SYSTEM\ControlSetXXX\Services\Eventlog] however the application acts as if the logs still exist behind the scenes.
What else am I missing?
A: I was able only to delete it by using:
[System.Diagnostics.EventLog]::Delete("WrongNamedEventLog");
in powershell
A: I also think you're in the right place... it's stored in the registry, under the name of the event log. I have a custom event log, under which are multiple event sources.
HKLM\System\CurrentControlSet\Services\Eventlog\LOGNAME\LOGSOURCE1
HKLM\System\CurrentControlSet\Services\Eventlog\LOGNAME\LOGSOURCE2
Those sources have an EventMessageFile key, which is REG_EXPAND_SZ and points to:
C:\Windows\Microsoft.NET\Framework\v2.0.50727\EventLogMessages.dll
I think if you delete the Key that is the log source, LOGSOURCE1 in my example, that should be all that's needed.
For what it's worth, I tried it through .NET and that's what it did. However, it does look like each custom event log also has a source of the same name. If you have a custom log, that could affect your ability to clear it. You'd have to delete the log outright, perhaps. Further, if your app has an installer, I can see that the application name also may be registered as a source in the application event log. One more place to clear.
A: What about using Powershell?
Remove-EventLog -LogName "Custom log name"
Remove-EventLog -Source "Custom source name"
A: Perhaps your application is fault-tolerant, meaning that it checks to see if the event log source is already registered and registers the source if it isn't?
If this were the case, your application would re-create the source(s) each time it ran, no matter what you did.
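The tell-tale code usually looks something like this (a sketch; the source name is a placeholder):
if (!System.Diagnostics.EventLog.SourceExists("MyCustomSource")) // placeholder name
{
    // Re-creates the source on every run if it is missing
    System.Diagnostics.EventLog.CreateEventSource("MyCustomSource", "Application");
}
If something like that runs at startup, you'd have to stop the application (or remove that code path) before cleaning the registry.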
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "48"
}
|
Q: Is it worth swapping Ctrl and Caps Lock for windows users that don't use Emacs I've been aware of Steve Yegge's advice to swap Ctrl and Caps Lock for a while now, although I don't use Emacs. I've just tried swapping them over as an experiment and I'm finding it difficult to adjust. There are several shortcuts that are second nature to me now and I hadn't realised quite how ingrained they are in how I use the keyboard.
In particular, I keep going to the old Ctrl key for Ctrl+Z (undo), and for cut, copy & paste operations (Ctrl+X, C and V). Experimenting with going from the home position to Ctrl+Z, I don't know which finger to put on Z, as it feels awkward with my ring, middle or index finger. Is this something I'll get used to, the same way I've got used to the original position, and should I just give it time, or is this arrangement not suited to Windows keyboard shortcuts?
I'd be interested to hear from people who have successfully made the transition as well as those who have tried it and move back, but particularly from people who were doing it on windows.
Will it lead to any improvement in my typing speed or comfort when typing.
Do you have any tips for finger positions or typing training to speed up the transition.
A: I actually don't swap control and caps and just make caps ANOTHER control key. I can't think of a single time in my life when I have ever hit caps-lock on purpose, so I haven't missed it.
That way, you get used to using it, but if you slip up and use the old control, things still work. It's worked out very well for me.
There's a .reg file to do this here.
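For reference, that remapping boils down to a Scancode Map registry entry along these lines (a sketch of the standard approach: Caps Lock's scancode 0x3a is mapped to Left Ctrl's 0x1d, and nothing else changes):
REGEDIT4

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout]
"Scancode Map"=hex:00,00,00,00,00,00,00,00,02,00,00,00,1d,00,3a,00,00,00,00,00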
A: I've done it for quite a while now, and it's natural to me, even though I'm not an Emacs user either (I'm in the Vim camp of that particular war :) ). In fact, it's so natural that moving to other machines (coworkers, family members, etc.) causes me grief because Ctrl isn't where it 'ought' to be.
A: For emacs ctrl should be at caps lock - for vim the escape key should be on the caps lock. I really feel that the caps lock button should be renamed "free parking" and OSes should make a system tray utility to quickly change the free parking button from escape, to control, to anything you need to type over and over again.
A: I ended up taking the advice in Zach's answer, but I also made Caps Lock behave as an ESC key if it was held and released on its own using the AutoHotKey script in this gist: CapsLockCtrlEscape.ahk
I also bound Ctrl+Shift+Caps Lock to Caps Lock for the rare occasions when I might need it using this AutoHotKey script:
#IfWinActive
^+Capslock::Capslock ; make CTRL+SHIFT+Caps-Lock the Caps Lock toggle
return
A: I switched the Caps Lock and Ctrl keys a couple of months ago and after the initial learning period, ~ 1 week, my biggest problem is when I use a computer that hasn't switched the keys.
I first did some registry hack but I can't remember where I found the information on how to do it. Now I'm using a small utility called Remapkey which is included in the Windows Server 2003 Resource Kit Tools even though I think I'm using an older version.
A: I had no problem making the transition. I use keyboards with both configurations without issue. Perhaps having it as a hardware solution (and the labels properly printed) makes it easier than doing it through software and having to remember how each machine/keyboard is setup.
A: I think what's best to put on caps depends on your physical keyboard.
At home I type on a Kinesis Ergo Elan where my ctrl keys are under my thumbs, along with 2*alt, space, enter, backspace, delete, pgup, pgdn, home and end; the rest of the keyboard is fairly normally laid out, except the board is split.
With the ctrl keys ready at hand, it really makes the most sense to put escape on caps lock (and caps lock on esc, for the few times I need it). Even if you're an emacser, hey... it doubles as a spare "prefix alt key", and you probably ask your browser to stop what it's doing a few times every day.
On the other hand, if I'm typing on my laptop where the lower left corner key is Fn rather than ctrl (ffs...) and I can't hold down shift+ctrl with one finger, it might make sense to put ctrl on caps (such that I can hold them with a single finger). At least if you're not a vi'er, or you don't mind the escape key being further away (or have some crazy system).
What's really interesting is putting some funky key on shift+shift (yep, both shift keys). This can be done with xmodmap fairly straightforwardly; I put my compose key there. If you don't need compose, you may want to put something else (like, say, esc).
A: Copy the following code into a file called caps-ctrl-swap.reg, execute the file, agree to allow registry to be changed, log out and back in and your caps-lock and left-ctrl keys will be swapped. I've used this script for whatever version of Windows was current in 2005 and every version in between. I needed it today since Windows 10 updated overnight and it still works great.
REGEDIT4
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout]
"Scancode Map"=hex:00,00,00,00,00,00,00,00,03,00,00,00,1d,00,3a,00,3a,00,1d,00,00,00,00,00
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Nested SELECT Statement SQL is not my forte, but I'm working on it - thank you for the replies.
I am working on a report that will return the completion percent of services for individuals in our contracts. There is a master table "Contracts"; each individual Contract can have multiple services from the "services" table, and each service has multiple standards from the "standards" table, which records the percent complete for each standard.
I've gotten as far as calculating the total percent complete for each individual service for a specific Contract_ServiceID, but how do I return all the services percentages for all the contracts? Something like this:
Contract      Service     Percent complete
abc Company   service 1   98%
abc Company   service 2   100%
xyz Company   service 1   50%
Here's what I have so far:
SELECT
Contract_ServiceId,
(SUM(CompletionPercentage)/COUNT(CompletionPercentage)) * 100 as "Percent Complete"
FROM dbo.Standard sta WITH (NOLOCK)
INNER JOIN dbo.Contract_Service conSer ON sta.ServiceId = conSer.ServiceId
LEFT OUTER JOIN dbo.StandardResponse standResp ON sta.StandardId = standResp.StandardId
AND conSer.StandardReportId = standResp.StandardReportId
WHERE Contract_ServiceId = '[an id]'
GROUP BY Contract_ServiceID
This gets me:
Contract_ServiceId   Percent Complete
[an id]              100%
EDIT: Tables didn't show up in post.
A: You should be able to add in your select the company name and group by that and the service id and ditch the where clause...
Perhaps like this:
SELECT
Contract,
Contract_ServiceId,
(SUM(CompletionPercentage)/COUNT(CompletionPercentage)) * 100 as "Percent Complete"
FROM dbo.Standard sta WITH (NOLOCK)
INNER JOIN dbo.Contract_Service conSer ON sta.ServiceId = conSer.ServiceId
LEFT OUTER JOIN dbo.StandardResponse standResp ON sta.StandardId = standResp.StandardId
AND conSer.StandardReportId = standResp.StandardReportId
GROUP BY Contract, Contract_ServiceID
A: I'm not sure if I understand the problem, but if the result is OK for a single contract_service, you can extend it to return the Contract and Service as well:
SELECT con.ContractId,
con.Contract,
conSer.Contract_ServiceID,
conSer.Service,
(SUM(CompletionPercentage)/COUNT(CompletionPercentage)) * 100 as "Percent Complete"
FROM dbo.Standard sta WITH (NOLOCK)
INNER JOIN dbo.Contract_Service conSer ON sta.ServiceId = conSer.ServiceId
INNER JOIN dbo.Contract con ON con.ContractId = conSer.ContractId
LEFT OUTER JOIN dbo.StandardResponse standResp ON sta.StandardId = standResp.StandardId
AND conSer.StandardReportId = standResp.StandardReportId
GROUP BY con.ContractId, con.Contract, conSer.Contract_ServiceID, conSer.Service
make sure you have all the columns you select from the Contract table in the group by clause
A: Assuming your query works for just the one service, looks like you're most of the way there, leave off the WHERE clause to obtain all results, your GROUP BY will take care of one service per result.
Just join on the Contract table to show the contract related to each service, and you're done.
A: In addition to removing the where clause and adding more group conditions, you also will want to watch out for null records in each of your tables. This requires changing an INNER JOIN to a LEFT JOIN (unless you don't want to see those rows) and some ISNULL's to clean up data. I'm not sure where the StandardReportId concept falls in here, but it looks like a filtering mechanism that I won't toy with.
SELECT
ContractID,
ISNULL(Contract_ServiceId, '-1'), -- or some other stand-in value
ISNULL((SUM(CompletionPercentage)/COUNT(CompletionPercentage)) * 100, 0) as "Percent Complete"
FROM
Contract AS con
LEFT OUTER JOIN dbo.Contract_Service conSer ON con.ContractID = conSer.ContractID
LEFT OUTER JOIN dbo.Standard sta WITH (NOLOCK) ON conSer.ServiceId = sta.ServiceId
LEFT OUTER JOIN dbo.StandardResponse standResp ON sta.StandardId = standResp.StandardId
AND conSer.StandardReportId = standResp.StandardReportId
GROUP BY
ContractID, Contract_ServiceID
A: Because you are grouping by the contract serviceid I think you can just remove the where clause and it should calculate the percentage for all contact serviceids.
If there are no records in dbo.Standard for that contract serviceid, you may need to left outer join instead from the contract service table to the dbo.Standard table in order to show contracts without completion records.
I hope that makes sense... My SQL is getting rusty after migrating to a data framework.
A: (SUM(CompletionPercentage)/COUNT(CompletionPercentage)) * 100
If CompletionPercentage is an int field you will have trouble with integer math. Anytime you divide by an integer you need to multiply it by 1.0 to make sure it is considering the number as a decimal. Otherwise 49/100 would = 0.
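A minimal sketch of the difference (assuming CompletionPercentage is an int column; the alias names are just placeholders):
-- Integer division truncates: e.g. 49 / 100 = 0
SELECT (SUM(CompletionPercentage) / COUNT(CompletionPercentage)) * 100 AS PctTruncated,
       -- Multiplying by 1.0 forces decimal math: 49.0 / 100 = 0.49
       (SUM(CompletionPercentage * 1.0) / COUNT(CompletionPercentage)) * 100 AS PctCorrect
FROM dbo.Standard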
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/127974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: When using a HashMap are values and keys guaranteed to be in the same order when iterating? When I iterate over the values or keys are they going to correlate? Will the second key map to the second value?
A: to use the entrySet that @Cuchullain mentioned:
Map<String, String> map = new HashMap<String, String>();
// populate hashmap
for (Map.Entry<String, String> entry : map.entrySet()) {
String key = entry.getKey();
String value = entry.getValue();
// your code here
}
A: You want to use this, LinkedHashMap, for predictable iteration order
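For example, a quick sketch of the difference; LinkedHashMap iterates in insertion order, which a plain HashMap does not guarantee:
Map<String, Integer> map = new LinkedHashMap<String, Integer>();
map.put("one", 1);
map.put("two", 2);
map.put("three", 3);
// Prints one=1, two=2, three=3 - always in insertion order
for (Map.Entry<String, Integer> entry : map.entrySet()) {
    System.out.println(entry.getKey() + "=" + entry.getValue());
}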
A: public class Test {
public static void main(String[] args) {
HashMap <String,String> hashmap = new HashMap<String,String>();
hashmap.put("one", "1");
hashmap.put("two", "2");
hashmap.put("three", "3");
hashmap.put("four", "4");
hashmap.put("five", "5");
hashmap.put("six", "6");
Iterator <String> keyIterator = hashmap.keySet().iterator();
Iterator <String> valueIterator = hashmap.values().iterator();
while(keyIterator.hasNext()) {
System.out.println("key: "+keyIterator.next());
}
while(valueIterator.hasNext()) {
System.out.println("value: "+valueIterator.next());
}
}
}
Sample output (note the arbitrary order):
key: two
key: five
key: one
key: three
key: four
key: six
value: 2
value: 5
value: 1
value: 3
value: 4
value: 6
A: Both values() and keySet() delegate to the entrySet() iterator so they will be returned in the same order. But like Alex says it is much better to use the entrySet() iterator directly.
A: No, not necessarily. You should really use the entrySet().iterator() for this purpose. With this iterator, you will be walking through all Map.Entry objects in the Map and can access each key and associated value.
A: I agree with pmac72. Don't assume that you'll get ordered values or keys from an unordered collection. If it works from time to time, it is just pure chance. If you want order to be preserved, use a LinkedHashMap, a TreeMap, or Commons Collections' OrderedMap.
A: The question confused me at first but @Matt cleared it up for me.
Consider using the entrySet() method that returns a set with the key-value pairs on the Map.
Map<Integer, Integer> a = new HashMap<Integer, Integer>(2);
a.put(1, 2);
a.put(2, 3);
for (Map.Entry<Integer, Integer> entry : a.entrySet()) {
System.out.println(entry.getKey() + " => " + entry.getValue());
}
This outputs:
1 => 2
2 => 3
A: I second @basszero. While
for (Map.Entry<Integer, Integer> entry : a.entrySet())
will work, I find using a data structure that does this automatically is nicer. Now, you can just iterate "normally"
A: HashMap's keySet method returns a Set, which does not guarantee order.
HashMap's values() method returns a Collection, which does not guarantee order.
That said, the question was "are they going to correlate" so technically the answer is maybe, but don't rely on it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Add dismiss control to session-flash() output in CakePHP In a CakePHP 1.2 app, I'm using
<?php $session->flash();?>
to output messages like "Record edited". It's working great.
However, I want to add a link called "Dismiss" that will fade out the message. I know how to construct the link, but I don't know how to insert it into the output of the flash message.
The flash message wraps itself in a DIV tag. I want to insert my dismiss code into that div, but I don't know how.
A: Figured this out:
Create a new layout in your layouts folder:
layouts/message.ctp
In that layout, include the call to output the content:
<?php echo $content_for_layout; ?>
Then when you set the flash message, specify the layout to use:
$this->Session->setFlash('Your record has been created! Wicked!','message');
A: You want to use the setFlash function. If you pass setFlash an empty string for $default, it will not wrap your message in a div and will just store it as is. This way you can display any markup you want, or, as Justin posted, you can use another view page for your message so you don't mix your views and controllers.
A: You can achieve this with jQuery:
$(document).ready(function() {
$("#flashMessage").each(function() {
$close = $("<span class='close'>Close</span>");
$close.click(function () {
$(this).parent().hide("slow");
});
$(this).append($close);
});
});
You will need to pretty it up with a bit of CSS, but I'm sure you get the idea.
A: The default way to do this is to create a flash.ctp in your /app/views/layouts. This will override the default flash.ctp you can find in /cake/libs/view/layouts, so you don't need to use the additional param.
btw: this works for all CakePHP standard views and layouts.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Variable UITableCellView height with subview I want to create a UITableView with varying row heights, and I'm trying to accomplish this by creating UILabels inside the UITableViewCells.
Here's my code so far:
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
static NSString *MyIdentifier = @"EntryCell";
UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:MyIdentifier];
if (cell == nil) {
cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:MyIdentifier] autorelease];
}
UILabel *textView = [[UILabel alloc] initWithFrame:CGRectMake(10, 0, 300, 40)];
textView.numberOfLines = 0;
textView.text = [entries objectAtIndex:[indexPath row]];
[cell.contentView addSubview:textView];
[textView release];
return cell;
}
This gives me 2 lines of text per cell. However, each "entry" has a different number of lines, and I want the UITableViewCells to resize automatically, wrapping text as necessary, without changing the font size.
[textView sizeToFit] and/or [cell sizeToFit] don't seem to work.
Here's how I want the UITableView to look:
----------------
Lorem ipsum
----------------
Lorem ipsum
Lorem ipsum
----------------
Lorem ipsum
Lorem ipsum
Lorem ipsum
----------------
Lorem ipsum
----------------
Lorem ipsum
Lorem ipsum
----------------
Does anyone know how to do this properly?
Thanks.
A: The UITableViewDelegate defines an optional method heightForRowAtIndexPath, which will get you started. You then need to use sizeWithFont.
There is some discussion of your precise problem here:
http://www.v2ex.com/2008/09/18/how-to-make-uitableviewcell-have-variable-height/
Text sizing was also discussed in this thread
A: This code works for me. Don't know if it's perfect, but works.
- (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
if (indexPath.row < [notesModel numberOfNotes]) {
NSString *cellText = @"Your text...";
UIFont *cellFont = [UIFont fontWithName:@"Helvetica" size:12.0];
CGSize constraintSize = CGSizeMake([UIScreen mainScreen].bounds.size.width - 100, MAXFLOAT);
CGSize labelSize = [cellText sizeWithFont:cellFont constrainedToSize:constraintSize lineBreakMode:UILineBreakModeWordWrap];
return labelSize.height + 20;
}
else {
return 20;
}
}
A: textView.numberOfLines = 2?
numberOfLines sets the maximum number of lines, so maybe 2 will work for you?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Make OSX application respond to first mouse click when not focused Normal OSX applications eat the first mouse click when not focused to first focus the application. Then future clicks are processed by the application. iTunes play/pause button and Finder behave differently, the first click is acted on even when not focused. I am looking for a way to force an existing application (Remote Desktop Connection.app) to act on the first click and not just focus.
A: Responding to the first mouse click when not focused is called 'click through'. And its worth is debated heatedly, for instance here and here.
A: Check NSView's acceptsFirstMouse, it may be what you're looking for.
acceptsFirstMouse:
Overridden by subclasses to return YES if the receiver should be sent a mouseDown: message for an initial mouse-down event, NO if not.
- (BOOL)acceptsFirstMouse:(NSEvent *)theEvent
Parameters
theEvent
The initial mouse-down event, which must be over the receiver in its window.
Discussion
The receiver can either return a value unconditionally or use the location of theEvent to determine whether or not it wants the event. The default implementation ignores theEvent and returns NO.
Override this method in a subclass to allow instances to respond to click-through. This allows the user to click on a view in an inactive window, activating the view with one click, instead of clicking first to make the window active and then clicking the view. Most view objects refuse a click-through attempt, so the event simply activates the window. Many control objects, however, such as instances of NSButton and NSSlider, do accept them, so the user can immediately manipulate the control without having to release the mouse button.
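As a minimal sketch (the subclass name here is made up), overriding it in your own view looks like this:
@interface ClickThroughView : NSView
@end

@implementation ClickThroughView
// Accept the mouse-down that also activates the window (click-through)
- (BOOL)acceptsFirstMouse:(NSEvent *)theEvent {
    return YES;
}
@end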
A: // Assuming you have one view controller that's always hanging around. Override its loadView. N.B. this won't work pre-Yosemite.
- (void)loadView {
NSLog(@"loadView");
self.view = [[NSView alloc] initWithFrame:
[[app.window contentView] frame]];
[self.view setAutoresizingMask:NSViewWidthSizable | NSViewHeightSizable];
int opts = (NSTrackingMouseEnteredAndExited | NSTrackingActiveAlways);
trackingArea0 = [[NSTrackingArea alloc] initWithRect:self.view.bounds
options:opts
owner:self
userInfo:nil];
[self.view addTrackingArea:trackingArea0];
}
- (void)mouseEntered:(NSEvent *)theEvent {
NSLog(@"entered");
if ([[NSApplication sharedApplication] respondsToSelector:@selector(activateIgnoringOtherApps:)]) {
[[NSApplication sharedApplication] activateIgnoringOtherApps:YES];
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: Java, Swing: how do I set the maximum width of a JTextField? I'm writing a custom file selection component. In my UI, first the user clicks a button, which pops a JFileChooser; when it is closed, the absolute path of the selected file is written to a JTextField.
The problem is, absolute paths are usually long, which causes the text field to enlarge, making its container too wide.
I've tried this, but it didn't do anything, the text field is still too wide:
fileNameTextField.setMaximumSize(new java.awt.Dimension(450, 2147483647));
Currently, when it is empty, it is already 400px long, because of GridBagConstraints attached to it.
I'd like it to be like text fields in HTML pages, which have a fixed size and do not enlarge when the input is too long.
So, how do I set the max size for a JTextField ?
A: I solved this by setting the maximum width on the container of the text field, using setMaximumSize.
According to davetron's answer, this is a fragile solution, because the layout manager might disregard that property. In my case, the container is the top-most, and in a first test it worked.
A: Don't set any of the sizes on the text field. Instead set the column size to a non-zero value via setColumns or using the constructor with the column argument.
What is happening is that the preferred size reported by the JTextComponent when columns is zero is the entire amount of space needed to render the text. When columns is set to a non-zero value the preferred size is the needed size to show that many standard column widths. (for a variable pitch font it is usually close to the size of the lower case 'm'). With columns set to zero the text field is requesting as much space as it can get and stretching out the whole container.
Since you already have it in a GridBagLayout with a fill, you could probably just set the columns to 1 and let the fill stretch it out based on the other components, or some other suitably low number.
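For illustration, a minimal sketch of the columns approach (20 is an arbitrary width):
// The field reports a preferred size of ~20 columns and stops
// growing to fit its text; the layout manager does the rest.
JTextField fileNameTextField = new JTextField(20);
// or, on an existing field:
fileNameTextField.setColumns(20);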
A: It may depend on the layout manager your text field is in. Some layout managers expand and some do not. Some expand only in some cases, others always.
I'm assuming you're doing
fileNameTextField = new JTextField(80); // 80 == columns
If so, for most reasonable layouts, the field should not change size (at least, it shouldn't grow). Often layout managers behave badly when put into JScrollPanes.
In my experience, trying to control the sizes via setMaximumSize, setPreferredSize, and so on is precarious at best. Swing decides on its own with the layout manager and there's little you can do about it.
All that being said, I have no had the problem you are experiencing, which leads me to believe that some judicious use of a layout manager will solve the problem.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Best way to safely read query string parameters? We have a project that generates a code snippet that can be used on various other projects. The purpose of the code is to read two parameters from the query string and assign them to the "src" attribute of an iframe.
For example, the page at the URL http://oursite/Page.aspx?a=1&b=2 would have JavaScript in it to read the "a" and "b" parameters. The JavaScript would then set the "src" attribute of an iframe based on those parameters. For example, "<iframe src="http://someothersite/Page.aspx?a=1&b=2" />"
We're currently doing this with server-side code that uses Microsoft's Anti Cross-Scripting library to check the parameters. However, a new requirement has come stating that we need to use JavaScript, and that it can't use any third-party JavaScript tools (such as jQuery or Prototype).
One way I know of is to replace any instances of "<", single quote, and double quote from the parameters before using them, but that doesn't seem secure enough to me.
One of the parameters is always a "P" followed by 9 integers.
The other parameter is always 15 alpha-numeric characters.
(Thanks Liam for suggesting I make that clear).
Does anybody have any suggestions for us?
Thank you very much for your time.
A: Using a whitelist-approach would be better I guess.
Avoid only stripping out "bad" things. Strip out anything except for what you think is "safe".
Also, I'd strongly encourage HTML-encoding the parameters. There should be plenty of JavaScript functions that can do this.
A: Update Sep 2022: Most JS runtimes now have a URL type which exposes query parameters via the searchParams property.
You need to supply a base URL even if you just want to get URL parameters from a relative URL, but it's better than rolling your own.
let searchParams/*: URLSearchParams*/ = new URL(
myUrl,
// Supply a base URL whose scheme allows
// query parameters in case `myUrl` is scheme or
// path relative.
'http://example.com/'
).searchParams;
console.log(searchParams.get('paramName')); // One value
console.log(searchParams.getAll('paramName'));
The difference between .get and .getAll is that the second returns an array, which can be important if the same parameter name is mentioned multiple times, as in /path?foo=bar&foo=baz.
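For example (the URL is made up):
let sp = new URL('http://example.com/path?foo=bar&foo=baz').searchParams;
console.log(sp.get('foo'));    // "bar" - first value only
console.log(sp.getAll('foo')); // ["bar", "baz"]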
Don't use escape and unescape, use decodeURIComponent.
E.g.
function queryParameters(query) {
var keyValuePairs = query.split(/[&?]/g);
var params = {};
for (var i = 0, n = keyValuePairs.length; i < n; ++i) {
var m = keyValuePairs[i].match(/^([^=]+)(?:=([\s\S]*))?/);
if (m) {
var key = decodeURIComponent(m[1]);
(params[key] || (params[key] = [])).push(decodeURIComponent(m[2] || '')); // m[2] is undefined when there is no '='
}
}
return params;
}
and pass in document.location.search.
As far as turning < into &lt;, that is not sufficient to make sure that the content can be safely injected into HTML without allowing script to run. Make sure you escape all of the following: <, >, &, and ".
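A minimal helper covering those four characters might look like this (order matters: & must be replaced first):
function escapeHtml(s) {
  return String(s)
      .replace(/&/g, '&amp;')
      .replace(/</g, '&lt;')
      .replace(/>/g, '&gt;')
      .replace(/"/g, '&quot;');
}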
It will not guarantee that the parameters were not spoofed. If you need to verify that one of your servers generated the URL, do a search on URL signing.
A: you can use javascript's escape() and unescape() functions.
A: Several things you should be doing:
*
*Strictly whitelist your accepted values, according to type, format, range, etc
*Explicitly blacklist certain characters (even though this is usually bypassable), IF your whitelist cannot be extremely tight.
*Encode the values before output; if you're using Anti-XSS you already know that a simple HtmlEncode is not enough
*Set the src property through the DOM - and not by generating HTML fragment
*Use the dynamic value only as a querystring parameter, and not for arbitrary sites; i.e. hardcode the name of the server, target page, etc.
*Is your site over SSL? If so, using a frame may cause inconsistencies with SSL UI...
*Using named frames in general, can allow Frame Spoofing; if on a secure site, this may be a relevant attack vector (for use with phishing etc.)
A: You can use regular expressions to validate that you have a P followed by 9 integers and that you have 15 alphanumeric characters. I think the RegEx book I have on my desk has some examples in JavaScript to help you.
Limiting the charset to only ASCII values will help, and follow all the advice above (whitelist, set src through DOM, etc.)
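Given the two formats described in the question, the checks might look like this (a sketch; adjust the patterns if the real formats differ):
var pidPattern   = /^P\d{9}$/;           // "P" followed by exactly 9 digits
var tokenPattern = /^[A-Za-z0-9]{15}$/;  // exactly 15 alphanumeric characters

function paramsAreValid(a, b) {
  return pidPattern.test(a) && tokenPattern.test(b);
}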
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128028",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Why should I use Flex? In a recent conversation, I mentioned that I was using JavaScript for a web application. That comment prompted a response: "You should use Flex instead. It will cut your development time down and JavaScript is too hard to debug and maintain. You need to use the right tool for the right job." Now, I don't know too much about Flex, but I personally don't feel like JavaScript is too hard to debug or maintain, especially if you use a framework. JavaScript is also one of the most used languages right now, so it would seem a better choice in that regard too. However, his reply piqued my interest. Would Flex be a good choice for a distributable web app for which 3rd party developers could build add-ons? What are the advantages of using it vs. a JavaScript framework? What are some of the disadvantages?
A: I have recently started to develop Flex applications, and I personally find it a refreshing framework for the web.
You get a state-ful application that runs completely client side. You get no worries about cross-browser portability that you do with JavaScript, and you get some really neat things such as effects, graphing, and rich user interface components.
Flex also makes it easy to communicate to webservices and the XML parsing via ECMA is insanely powerful and simple.
I'm glad I have made the switch. As far as how popular it is...I'm not really sure, but I am fairly certain that the developer base is expanding rapidly.
The only real disadvantage I can think of is the Flash Player requirement, but I would say it is pretty safe to assume that most browsers support Flash Player; even Konqueror on Linux is supported; much more so than a Silverlight runtime (which I NEVER plan on installing)
A: Here is my experience: you really need to consider 2 things separately - development and the end-user experience. Flex shines in the first area:
*
*ActionScript is a nice mixture of Java and JavaScript so you get a familiar language with strong support for OOP
*debugging is far easier than what you can achieve in JavaScript
*Flex framework is component-oriented and event-driven which helps in creating rich user interfaces (HTML was not really created to support application UI scenarios)
On the other hand, the end-user experience is worse when running a Flex app compared to an AJAX app. First, you need to have Flash Player installed but this is probably not an issue for most computers today. Bigger problems are with usability - Flash Player handles all UI interactions (instead of a browser) so the password manager doesn't work, text fields don't remember previous entries, Ctrl+T and middle-clicking doesn't work, text search doesn't work etc. etc.
My advice would be - if you are developing an application (rich UI, relatively separated from the rest of the web), go for Flex as it will save you time, money and will make your users happier by providing richer functionality and shorter periods between new versions. On the other hand, if your application needs to be tightly integrated with the web and you want your users to be able to use features of their browsers, go with AJAX.
Nice example is Google Docs vs Buzzword. Buzzword is much more feature rich (for instance, text can flow around an image from both sides which is something you could never ever achieve in DHTML) but Google still decided to go for an AJAX version because they are the "web company". There is no right or wrong in doing it the one or the other way, it's just different and it's important to consider who your end users are.
A: I would push you towards standard web development technologies in most cases. Javascript is no longer a great challenge to debug or maintain with good libs like jQuery/Prototype to iron out some of the browser inconsistencies and tools like Firebug and the MS script debugger to help with debugging.
There are cases when Flash is a better option, but only in cases where you are doing complex animations. And, if you are willing to invest the effort, most animations can be achieved without resorting to flash. A couple of examples...
Flash content is not as accessible as other content.
This will not only affect people with out flash, but also search engine spiders. There may be some hacks to help get around this now, but I think that most flash content will never be indexed by google.
Flash breaks the web UI.
For example:
*
*
*If I click my mouse wheel on a link, that link is opened in a background tab. In a flash app there is no way to simulate this behavior.
*If I select text in my browser and right-click I get options provided by the browser that include things like "Search Google for this text". In a flash app those options are no longer there.
*If I right click on a link or an image I get a different set of options that are not available in a flash app. This can be very frustrating to a user who is not "flash savvy".
A: GWT lets you do the same stuff as Flex for the most part, and handles all the browser compatibility issues, AND lets you code/debug in Java with your favorite IDE.
All without having to learn a new language (or pay Adobe $$$ for the flex IDE you'll need to do anything real).
Flex has some prettier UI widgets than GWT has out of the box, but there's a ton of 3rd party widgets (such as GWT-EXT-JS) you can use - or, you can use your existing favorite JS widgets with GWT.
Check it out if you haven't: http://code.google.com/webtoolkit/
A: I can't be sure if it was myself, or someone else who made that statement but I would definitely be one to say 'use the right tool for the job'.
Flex has a large community behind it, and is well hyped by Adobe's platform evangelism team. Now, as far as replacing JavaScript, that sounds like a very broad spectrum discussion point. Flex is not a replacement for JavaScript. What it does, it does well, however. That is, 3D, drawing, and data rendering whether in chart or table form. Flex also has the power of ActionScript 3 behind it which allows you to do much of what Flash does in cooperation with the MXML frontend components without ever touching the timeline or keyframes.
In a way, Flex is the .NET of Flash and Rich Internet Application development. It uses the same datasource concepts, and component focused design structures which make it easy, and fast to develop in.
The real question is, what are you trying to achieve? What is the end goal?
As to the debugging point, Flex has a true debugger and profiler within the Flex Builder IDE. JavaScript, unfortunately, has different syntax and execution between browsers due to the nature of JavaScript engines in modern browsers. Flex, because it is essentially Flash, uses the same rendering engine in all browsers due to the use of the Flash plugin.
Hope that clears a few things up. :)
A: Flex has a lot of extra overhead:
*
*New language
*Clients must have flash installed (might need to install, might not be able to)
*Clients must download flex framework (few hundred kilobytes)
*Flex content is not indexed by search engines (contrary to what Google might claim)
Flex has one main advantage:
- Better at building rich interfaces (see Picnik.com, etc)
For example, in Flex, it is easy to create a custom styled dialog box, complete with drop shadows, inner glows, animated open, whatever you might want.
In summary, use Flex if you need the extra richness.
A: Aside from what's already been mentioned here, another major difference is that JavaScript is dynamically typed and ActionScript is statically typed. Whether that's good or bad will depend on your point of view :).
A: If you want your web application to look like it's not a web application, Flex is pretty good. You also get to sidestep all the messiness of making HTML+JS look like a real app. For something which is essentially a website, Flex might not be the best choice, but if you really want to write an application which happens to be accessed through the browser, it's quick to develop with and gives great looking results.
A: You should try Google Gears instead. Create your application, add some Gears to it, and you can greatly increase the speed (and reliability) of your application.
http://gears.google.com/
Essentially Google gears gives you access to two useful things for any application: offline data storage, and native threading control (allowing updates/computations to run in the background and not slow down the users computer).
The really nice thing is, you can use whatever Framework you like for your application, as long as data storage/retrieval and server side communication is handled with JavaScript.
It also allows you to cache whatever files client side you want, which is especially useful when you want to avoid that 'flickering' look in the browser while some needed image is being downloaded by the browser.
A: A few reasons to consider Flex:
*
*The control library is much richer in Flex than anything you can do with JS/DHTML. The charting controls are killer for business apps and things like the DataGrid / AdvancedDataGrid are pretty well ahead of anything you can do with HTML.
*The Flex framework was designed for building applications. It abstracts away the "frame-based" concepts in the Flash Player to really make it easy to build apps. It has a well-designed component hierarchy that makes it easy to extend any of the standard controls. It also has a pretty intuitive event model for handles user inputs and makes it easy to have any of your controls dispatch custom events that can bubble up to parent components or get routed through a central event dispatcher. While it may be possible to do this with JS/DHTML, I don't think it's nearly as easy and it certainly wasn't designed for it.
*You can take a Flex application and quickly deploy it to the desktop with the AIR runtime. AIR also offers additional APIs for things like local system access, embedded SQLite DB, etc. Gears offers something similar but it does require a browser. Granted, AIR requires the AIR runtime but at least it's purposed towards building desktop apps.
*You can build a very rich, very sexy UI that will knock your users socks' off. As programmers we might not care about UX but our users do. Part of the reason why Apple is having a lot of success lately is because they really value UX and users/consumers are taking note of this.
The biggest con I think is that if you are really used to Java or C#, the ActionScript language will seem a bit limiting. If you're comparing it JavaScript, it's at par or maybe slightly better.
A lot of people will rail on Flash Player (or AIR) because it's not "standard-based." If we were only willing to use sites that were 100% standards compliant and free of plugins, we wouldn't have YouTube today. Or pretty much any other site that does interesting data visualization you can't do with HTML/JS (or at least, not with a sane level of effort). Adobe has been pretty progressive in opening up the Flex framework, Blaze DS (for backend Java development), publishing the AMF spec and starting the Open Screen Alliance to push Flash Player to mobile devices. Flash Player, Flex, Flex Builder and Blaze DS all have public JIRA bug trackers. I'd say there is a good chance that Flash Player itself will be open source within the next 2-3 years. I think Adobe is continuing to move towards being very open and that the criticisms of the platform being "closed" and "proprietary" are becoming less relevant. I think if developers approach Flex/FP with an open mind that they would really be impressed with how it all fits together.
A: This comparison table was good enough to make me decide what to use. I preferred JavaScript :)
http://askmeflash.com/article_m.php?p=article&id=11
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: How do I pull from a Git repository through an HTTP proxy? Note: while the use-case described is about using submodules within a project, the same applies to a normal git clone of a repository over HTTP.
I have a project under Git control. I'd like to add a submodule:
git submodule add http://github.com/jscruggs/metric_fu.git vendor/plugins/metric_fu
But I get
...
got 1b0313f016d98e556396c91d08127c59722762d0
got 4c42d44a9221209293e5f3eb7e662a1571b09421
got b0d6414e3ca5c2fb4b95b7712c7edbf7d2becac7
error: Unable to find abc07fcf79aebed56497e3894c6c3c06046f913a under http://github.com/jscruggs/metri...
Cannot obtain needed commit abc07fcf79aebed56497e3894c6c3c06046f913a
while processing commit ee576543b3a0820cc966cc10cc41e6ffb3415658.
fatal: Fetch failed.
Clone of 'http://github.com/jscruggs/metric_fu.git' into submodule path 'vendor/plugins/metric_fu'
I have my HTTP_PROXY set up:
c:\project> echo %HTTP_PROXY%
http://proxy.mycompany:80
I even have a global Git setting for the http proxy:
c:\project> git config --get http.proxy
http://proxy.mycompany:80
Has anybody gotten HTTP fetches to consistently work through a proxy? What's really strange is that a few project on GitHub work fine (awesome_nested_set for example), but others consistently fail (rails for example).
A: For me, what worked was:
sudo apt-get install socat
Create a file inside your $BIN_PATH/gitproxy with:
#!/bin/sh
_proxy=192.168.192.1
_proxyport=3128
exec socat STDIO PROXY:$_proxy:$1:$2,proxyport=$_proxyport
Don't forget to give it execute permissions:
chmod a+x gitproxy
Run following commands to setup environment:
export PATH=$BIN_PATH:$PATH
git config --global core.gitproxy gitproxy
A: Set Git credential.helper to wincred.
git config --global credential.helper wincred
Make sure there is only 1 credential.helper
git config -l
If there is more than 1 and it's not set to wincred remove it.
git config --system --unset credential.helper
Now set the proxy with no password.
git config --global http.proxy http://<YOUR WIN LOGIN NAME>@proxy:80
Check that all the settings that you added looks good....
git config --global -l
Now you good to go!
A: Setup proxy to git
command
git config --global http.proxy http://user:password@domain:port
example
git config --global http.proxy http://clairton:123456@proxy.clairtonluz.com.br:8080
A: If you just want to use proxy on a specified repository, don't need on other repositories. The preferable way is the -c, --config <key=value> option when you git clone a repository. e.g.
$ git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git --config "http.proxy=proxyHost:proxyPort"
A: You can also set the HTTP proxy that Git uses in global configuration property http.proxy:
git config --global http.proxy http://proxy.mycompany:80
To authenticate with the proxy:
git config --global http.proxy http://mydomain\\myusername:mypassword@myproxyserver:8080/
(Credit goes to @EugeneKulabuhov and @JaimeReynoso for the authentication format.)
A: I had the same problem, with a slightly different fix: REBUILDING GIT WITH HTTP SUPPORT
The git: protocol did not work through my corporate firewall.
For example, this timed out:
git clone git://github.com/miksago/node-websocket-server.git
curl github.com works just fine, though, so I know my http_proxy environment variable is correct.
I tried using http, like below, but got an immediate error.
git clone http://github.com/miksago/node-websocket-server.git
->>> fatal: Unable to find remote helper for 'http' <<<-
I tried recompiling git like so:
./configure --with-curl --with-expat
but still got the fatal error.
Finally, after several frustrating hours, I read the configure file,
and saw this:
# Define CURLDIR=/foo/bar if your curl header and library files are in
# /foo/bar/include and /foo/bar/lib directories.
I remembered then, that I had not complied curl from source, and so went
looking for the header files. Sure enough, they were not installed. That was the problem. Make did not complain about the missing header files. So
I did not realize that the --with-curl option did nothing (it is, in fact the default in my version of git).
I did the following to fix it:
*
*Added the headers needed for make:
yum install curl-devel
(expat-devel-1.95.8-8.3.el5_5.3.i386 was already installed).
*Removed git from /usr/local (as I want the new install to live there).
I simply removed git* from /usr/local/share and /usr/local/libexec
*Searched for the include dirs containing the curl and expat header files, and then (because I had read through configure) added these to the environment like so:
export CURLDIR=/usr/include
export EXPATDIR=/usr/include
*Ran configure with the following options, which, again, were described in the configure file itself, and were also the defaults but what the heck:
./configure --with-curl --with-expat
*And now http works with git through my corporate firewall:
git clone http://github.com/miksago/node-websocket-server.git
Cloning into 'node-websocket-server'...
* Couldn't find host github.com in the .netrc file, using defaults
* About to connect() to proxy proxy.entp.attws.com port 8080
* Trying 135.214.40.30... * connected
...
A: This worked for me.
git config --global http.proxy proxy_user:proxy_passwd@proxy_ip:proxy_port
A: It looks like you're using a mingw compile of Git on windows (or possibly another one I haven't heard about). There are ways to debug this: I believe all of the http proxy work for git is done by curl. Set this environment variable before running git:
GIT_CURL_VERBOSE=1
This should at least give you an idea of what is going on behind the scenes.
A: Just to post this as it is the first result on Google, this blog post I found solves the problem for me by updated the curl certificates.
http://www.simplicidade.org/notes/archives/2011/06/github_ssl_ca_errors.html
A: Use proxychains
proxychains git pull ...
update: proxychains is discontinued, use proxychains-ng instead.
A: Worth to mention:
Most examples on the net show examples like
git config --global http.proxy proxy_user:proxy_passwd@proxy_ip:proxy_port
So it seems that, if your proxy needs authentication, you must leave your company password in the git config. Which isn't really cool.
But, if you just configure the user without password:
git config --global http.proxy proxy_user@proxy_ip:proxy_port
Git seems (at least on my Windows-machine without credentials-helper) to recognize that and prompts for the proxy-password on repo-access.
A: you can use:
git config --add http.proxy http://user:password@proxy_host:proxy_port
A: The below method works for me:
echo 'export http_proxy=http://username:password@proxy_host:port/' >> ~/.bash_profile
echo 'export https_proxy=http://username:password@proxy_host:port' >> ~/.bash_profile
*
*Zsh note: Modify your ~/.zshenv file instead of ~/.bash_profile.
*Ubuntu and Fedora note: Modify your ~/.bashrc file instead of ~/.bash_profile.
A: There is a way to set up a proxy for a specific URL, see the http.<url>.* section in the git config manual. For example, for https://github.com/ one can do
git config --global 'http.https://github.com/.proxy' http://proxy.mycompany:80
A: When your network team does ssl-inspection by rewriting certificates, then using a http url instead of a https one, combined with setting this var worked for me.
git config --global http.proxy http://proxy:8081
A: For me the git:// just doesn't work through the proxy although the https:// does. This caused some bit of headache because I was running scripts that all used git:// so I couldn't just easily change them all. However I found this GEM
git config --global url."https://github.com/".insteadOf git://github.com/
A: You can also edit the .gitconfig file located in the %userprofile% directory on Windows (notepad %userprofile%\.gitconfig) or in the ~ directory on a Linux system (vi ~/.gitconfig) and add an http section as below.
Content of .gitconfig file :
[http]
proxy = http://proxy.mycompany:80
A: For Windows
Go to --> C:/Users/user_name/.gitconfig
Update the .gitconfig file with the details below:
[https]
proxy = https://your_proxy:your_port
[http]
proxy = http://your_proxy:your_port
How to check your proxy and port number?
Internet Explorer -> Settings -> Internet Options -> Connections -> LAN Settings
A: This is an old question but if you are on Windows, consider setting HTTPS_PROXY as well if you are retrieving via an https URL. Worked for me!
A: There's some great answers on this already. However, I thought I would chip in as some proxy servers require you to authenticate with a user Id and password. Sometimes this can be on a domain.
So, for example if your proxy server configuration is as follows:
Server: myproxyserver
Port: 8080
Username: mydomain\myusername
Password: mypassword
Then, add to your .gitconfig file using the following command:
git config --global http.proxy http://mydomain\\myusername:mypassword@myproxyserver:8080
Don't worry about https. As long as the specified proxy server supports http, and https, then one entry in the config file will suffice.
You can then verify that the command added the entry to your .gitconfig file successfully by doing cat .gitconfig:
At the end of the file you will see an entry as follows:
[http]
proxy = http://mydomain\\myusername:mypassword@myproxyserver:8080
That's it!
A: I find neither http.proxy nor GIT_PROXY_COMMAND work for my authenticated http proxy. The proxy is not triggered in either way. But I find a way to work around this.
*
*Install corkscrew, or other alternatives you want.
*Create a authfile. The format for authfile is: user_name:password, and user_name, password is your username and password to access your proxy. To create such a file, simply run command like this: echo "username:password" > ~/.ssh/authfile.
*Edit ~/.ssh/config, and make sure its permission is 644: chmod 644 ~/.ssh/config
Take github.com as an example, add the following lines to ~/.ssh/config:
Host github.com
HostName github.com
ProxyCommand /usr/local/bin/corkscrew <your.proxy> <proxy port> %h %p <path/to/authfile>
User git
Now whenever you do anything with git@github.com, it will use the proxy automatically. You can easily do the same thing to Bitbucket as well.
This is not so elegant as other approaches, but it works like a charm.
A: What finally worked was setting the http_proxy environment variable. I had set HTTP_PROXY correctly, but git apparently likes the lower-case version better.
A: On Windows, if you don't want to put your password in .gitconfig in the plain text, you can use
*
*Cntlm (http://cntlm.sourceforge.net/)
It authenticates you against a normal or even Windows NTLM proxy and starts a localhost proxy without authentication.
In order to get it running:
*
*Install Cntlm
*Configure Cntlm according to its documentation to pass your proxy authentication
*Point git to your new localhost proxy:
[http]
proxy = http://localhost:3128 # change port as necessary
A: This isn't a problem with your proxy. It's a problem with github (or git). It fails for me on git-1.6.0.1 on linux as well. Bug is already reported (by you no less).
Make sure to delete your pasties, they're already on Google. Edit: Must've been dreaming, I guess you can't delete them. Use Gist instead?
A: $http_proxy is for http://github.com....
$https_proxy is for https://github.com...
A: The above answers worked for me when my proxy doesn't need authentication. If you are using a proxy which requires you to authenticate, then you may try CCProxy. I have a small tutorial on how to set it up here,
http://blog.praveenkumar.co.in/2012/09/proxy-free-windows-xp78-and-mobiles.html
I was able to push, pull, create new repos. Everything worked just fine. Make sure you do a clean uninstall and reinstall of new version if you are facing issues with Git like I did.
A: I got around the proxy using https... some proxies don't even check https.
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
c:\git\meantest>git clone http://github.com/linnovate/mean.git
Cloning into 'mean'...
fatal: unable to access 'http://github.com/linnovate/mean.git/': Failed connect
to github.com:80; No error
c:\git\meantest>git clone https://github.com/linnovate/mean.git
Cloning into 'mean'...
remote: Reusing existing pack: 2587, done.
remote: Counting objects: 27, done.
remote: Compressing objects: 100% (24/24), done.
rRemote: Total 2614 (delta 3), reused 4 (delta 0)eceiving objects: 98% (2562/26
Receiving objects: 100% (2614/2614), 1.76 MiB | 305.00 KiB/s, done.
Resolving deltas: 100% (1166/1166), done.
Checking connectivity... done
A: This has been answered by many already, but here is a note just for Windows users who are behind a proxy with authentication.
Re-installing (if the first attempt failed, don't remove; just reinstall).
Goto ->
**Windows**
1. msysgit\installer-tmp\etc\gitconfig
Under [http]
proxy = http://user:pass@url:port
**Linux**
1. msysgit\installer-tmp\setup-msysgit.sh
export HTTP_PROXY="http://USER:PASS@proxy.abc.com:8080"
If you have any special characters in the user/password, URL-encode them.
A: As @user2188765 has already pointed out, try replacing the git:// protocol of the repository with http[s]://. See also this answer.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "529"
}
|
Q: How can I lock a file using java (if possible) I have a Java process that opens a file using a FileReader. How can I prevent another (Java) process from opening this file, or at least notify that second process that the file is already opened? Does this automatically make the second process get an exception if the file is open (which solves my problem) or do I have to explicitly open it in the first process with some sort of flag or argument?
To clarify:
I have a Java app that lists a folder and opens each file in the listing for processing it. It processes each file after the other. The processing of each file consists of reading it and doing some calculations based on the contents and it takes about 2 minutes. I also have another Java app that does the same thing but instead writes on the file. What I want is to be able to run these apps at the same time so the scenario goes like this. ReadApp lists the folder and finds files A, B, C. It opens file A and starts the reading. WriteApp lists the folder and finds files A, B, C. It opens file A, sees that is is open (by an exception or whatever way) and goes to file B. ReadApp finishes file A and continues to B. It sees that it is open and continues to C. It is crucial that WriteApp doesn't write while ReadApp is reading the same file or vice versa. They are different processes.
A: use java.nio.channels.FileLock in conjunction with java.nio.channels.FileChannel
A: Don't use the classes in the java.io package; instead, use the java.nio package. The latter has a FileLock class. You can apply a lock to a FileChannel.
try {
// Get a file channel for the file
File file = new File("filename");
FileChannel channel = new RandomAccessFile(file, "rw").getChannel();
// Use the file channel to create a lock on the file.
// This method blocks until it can retrieve the lock.
FileLock lock = channel.lock();
/*
use channel.lock OR channel.tryLock();
*/
// Try acquiring the lock without blocking. This method returns
// null or throws an exception if the file is already locked.
try {
lock = channel.tryLock();
} catch (OverlappingFileLockException e) {
// File is already locked in this thread or virtual machine
}
// Release the lock - if it is not null!
if( lock != null ) {
lock.release();
}
// Close the file
channel.close();
} catch (Exception e) {
e.printStackTrace(); // don't silently swallow locking/IO errors
}
A: This may not be what you are looking for, but in the interest of coming at a problem from another angle....
Are these two Java processes that might want to access the same file in the same application? Perhaps you can just filter all access to the file through a single, synchronized method (or, even better, using JSR-166)? That way, you can control access to the file, and perhaps even queue access requests.
A: Use a RandomAccessFile, get its channel, then call lock(). The channel provided by input or output streams does not have sufficient privileges to lock properly. Be sure to call release() in the finally block (closing the file doesn't necessarily release the lock).
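A minimal sketch of that pattern (the path is a placeholder):
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

public class LockExample {
    public static void process(String path) throws Exception {
        RandomAccessFile raf = new RandomAccessFile(path, "rw");
        try {
            FileLock lock = raf.getChannel().lock(); // blocks until acquired
            try {
                // read/write through raf while holding the lock
            } finally {
                lock.release(); // release explicitly; closing alone may not
            }
        } finally {
            raf.close();
        }
    }
}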
A: If you can use Java NIO (JDK 1.4 or greater), then I think you're looking for java.nio.channels.FileChannel.lock()
FileChannel.lock()
A: FileChannel.lock is probably what you want.
try (
FileInputStream in = new FileInputStream(file);
java.nio.channels.FileLock lock = in.getChannel().lock();
Reader reader = new InputStreamReader(in, charset)
) {
...
}
(Disclaimer: Code not compiled and certainly not tested.)
Note the section entitled "platform dependencies" in the API doc for FileLock.
A: Below is a sample code snippet that locks a file until its processing is done by the JVM.
public static void main(String[] args) throws InterruptedException {
File file = new File(FILE_FULL_PATH_NAME);
RandomAccessFile in = null;
try {
in = new RandomAccessFile(file, "rw");
FileLock lock = in.getChannel().lock();
try {
String line;
while ((line = in.readLine()) != null) {
System.out.println(line);
}
} finally {
lock.release();
}
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}finally {
try {
in.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
A: Use this for unix if you are transferring using winscp or ftp:
public static void isFileReady(File entry) throws Exception {
long realFileSize = entry.length();
long currentFileSize = 0;
do {
try (FileInputStream fis = new FileInputStream(entry);) {
currentFileSize = 0;
while (fis.available() > 0) {
byte[] b = new byte[1024];
int nResult = fis.read(b);
if (nResult == -1)
break;
currentFileSize += nResult;
}
} catch (Exception e) {
e.printStackTrace();
}
System.out.println("currentFileSize=" + currentFileSize + ", realFileSize=" + realFileSize);
} while (currentFileSize != realFileSize);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "131"
}
|
Q: What to do with null fields in compare()? In Java, I use a class in which some fields can be null. For example:
class Foo {
String bar;
//....
}
I want to write a BarComparator for this class,
private static class BarComparator
implements Comparator<Foo> {
public int compare( final Foo o1, final Foo o2 )
{
// Implementation goes here
}
}
Is there a standard way to deal with the fact that any of o1, o2, o1.bar, o2.bar can be null, without writing lots of nested if...else?
Cheers!
A: Thanks for the replies! The generic method and the Google Comparators look interesting.
And I found that there's a NullComparator in the Apache Commons Collections (which we're currently using):
private static class BarComparator
implements Comparator<Foo>
{
public int compare( final Foo o1, final Foo o2 )
{
// o1.bar & o2.bar nulleness is taken care of by the NullComparator.
// Easy to extend to more fields.
return NULL_COMPARATOR.compare(o1.bar, o2.bar);
}
private final static NullComparator NULL_COMPARATOR =
new NullComparator(false);
}
Note: I focused on the bar field here to keep it to the point.
A: It depends on whether you consider a null entry to be a valid string value worthy of comparison. Is null < or > "apple"? The only thing I could say for sure is that null == null. If you can define where null fits into the ordering, then you can write the code appropriately.
In this case I might choose to throw a NullPointerException or IllegalArgumentException and try to handle the null at a higher level by not putting it in the comparison in the first place.
A: I guess you could wrap the call to the field compareTo method with a small static method to sort nulls high or low:
static <T extends Comparable<T>> int cp(T a, T b) {
return
a==null ?
(b==null ? 0 : Integer.MIN_VALUE) :
(b==null ? Integer.MAX_VALUE : a.compareTo(b));
}
Simple usage (multiple fields is as you would normally):
public int compare( final Foo o1, final Foo o2 ) {
return cp(o1.field, o2.field);
}
A: The key thing here is to work out how you would like nulls to be treated. Some options are: a) assume nulls come before all other objects in sort order b) assume nulls come after all other objects in sort order c) treat null as equivalent to some default value d) treat nulls as error conditions. Which one you choose will depend entirely on the application you are working on.
In the last case of course you throw an exception. For the others you need a four-way if/else case (about three minutes of coding one you've worked out what you want the results to be).
A: If you're using Google collections, you may find the Comparators class helpful. If has helper methods for ordering nulls as either the greatest or least elements in the collection. You can use compound comparators to help reduce the amount of code.
A: You can write your Comparator for it. Lets say you have a class Person with String name as private field. getName() and setName() method to access the field name. Below is the Comparator for class Person.
Collections.sort(list, new Comparator<Person>() {
@Override
public int compare(Person a, Person b) {
if (a == null) {
if (b == null) {
return 0;
}
return -1;
} else if (b == null) {
return 1;
}
return a.getName().compareTo(b.getName());
}
});
Update:
As of Java 8 you can use below API's for List.
// Push nulls at the end of List
Collections.sort(subjects1, Comparator.nullsLast(String::compareTo));
// Push nulls at the beginning of List
Collections.sort(subjects1, Comparator.nullsFirst(String::compareTo));
A: There is also the class org.springframework.util.comparator.NullSafeComparator in the Spring Framework you can use.
Example (Java 8):
SortedSet<Foo> foos = new TreeSet<>( ( o1, o2 ) -> {
return new NullSafeComparator<>( String::compareTo, true ).compare( o1.getBar(), o2.getBar() );
} );
foos.add( new Foo(null) );
foos.add( new Foo("zzz") );
foos.add( new Foo("aaa") );
foos.stream().forEach( System.out::println );
This will print:
Foo{bar='null'}
Foo{bar='aaa'}
Foo{bar='zzz'}
A: Considering Customer as a POJO, my answer would be:
Comparator<Customer> compareCustomer = Comparator.nullsLast((c1, c2) -> c1.getCustomerId().compareTo(c2.getCustomerId()));
Or
Comparator<Customer> compareByName = Comparator.comparing(Customer::getName, Comparator.nullsLast(String::compareTo));
A: You should not use the NullComparator the way you do - you're creating a new instance of the class for every comparison operation, and if e.g. you're sorting a list with 1000 entries, that will be 1000 * log2(1000) objects that are completely superfluous. This can quickly get problematic.
Either subclass it, or delegate to it, or simply implement your own null check - it's really not that complex:
private static class BarComparator
implements Comparator<Foo> {
private NullComparator delegate = new NullComparator(false);
public int compare( final Foo o1, final Foo o2 )
{
return delegate.compare(o1.bar, o2.bar);
}
}
A: I think early return statements would be the other alternative to lots of ifs
e.g.
if(o1==null) return x;
if(o2==null) return x;
if(o1.getBar()==null) return x;
if(o2.getBar()==null) return x;
// No null checks needed from this point.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
}
|
Q: Algorithm for merging large files I have several log files of events (one event per line). The logs can possibly overlap. The logs are generated on separate client machines from possibly multiple time zones (but I assume I know the time zone). Each event has a timestamp that was normalized into a common time (by instantiating each log parser's calendar instance with the timezone appropriate to the log file and then using getTimeInMillis to get the UTC time). The logs are already sorted by timestamp. Multiple events can occur at the same time, but they are by no means equal events.
These files can be relatively large, as in, 500000 events or more in a single log, so reading the entire contents of the logs into a simple Event[] is not feasible.
What I am trying to do is merge the events from each of the logs into a single log. It is kind of like a mergesort task, but each log is already sorted, I just need to bring them together. The second component is that the same event can be witnessed in each of the separate log files, and I want to "remove duplicate events" in the file output log.
Can this be done "in place", as in, sequentially working over some small buffers of each log file? I can't simply read in all the files into an Event[], sort the list, and then remove duplicates, but so far my limited programming capabilities only enable me to see this as the solution. Is there some more sophisticated approach that I can use to do this as I read events from each of the logs concurrently?
A: Sure - open every log file. Read in the first line for each into an array of 'current' lines. Then repeatedly pick the line with the lowest timestamp from the current array. Write it to the output, and read a new line from the appropriate source file to replace it.
Here's an example in Python, but it makes good pseudocode, too:
def merge_files(files, key_func):
    # Populate the current array with the first line from each file
    current = [file.readline() for file in files]
    while len(current) > 0:
        # Find and return the row with the lowest key according to key_func
        min_idx = min(range(len(files)), key=lambda x: key_func(current[x]))
        yield current[min_idx]
        new_line = files[min_idx].readline()
        if not new_line:
            # EOF, remove this file from consideration
            del current[min_idx]
            del files[min_idx]
        else:
            current[min_idx] = new_line
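Since the inputs are already sorted, duplicate events come out adjacent after merging, so a thin wrapper over merge_files (a sketch) can drop them:
def merge_unique(files, key_func):
    last = None
    for line in merge_files(files, key_func):
        if line != last:  # inputs are sorted, so duplicates are adjacent
            yield line
        last = line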
A: *
*Read the first line from each of the log files
*LOOP
a. Find the "earliest" line.
b. Insert the "earliest" line into the master log file
c. Read the next line from the file that contained the earliest line
You could check for duplicates between b and c, advancing the pointer for each of those files.
A: Check out this link: http://www.codeodor.com/index.cfm/2007/5/10/Sorting-really-BIG-files/1194
*
*Use a heap (based on an array). The number of elements in this heap/array will be equal to the number of log files you have.
*Read the first records from all the files and insert them into your heap.
*Loop until (no more records in any of the files)
> remove the max element from the heap
> write it to the output
> read the next record from the file to which the (previous) max element belonged
if there are no more records in that file
remove it from file list
continue
> if it's not the same as the (previous) max element, add it to the heap
Now you have all your events in one log file, they are sorted, and there are no duplicates. The time complexity of the algorithm is O(n log k), where n is the total number of records and k is the number of log files.
You should use buffered reader and buffered writer objects when reading to and from files to minimize the number of disk reads and writes, in order to optimize for time.
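For what it's worth, Python's standard library has this k-way heap merge built in; here is a sketch along the same lines (heapq.merge exists from Python 2.6, and its key parameter from 3.5):
import heapq

def merge_logs(paths, key_func):
    # heapq.merge lazily merges already-sorted iterables using a small heap
    files = [open(p) for p in paths]
    try:
        last = None
        for line in heapq.merge(*files, key=key_func):
            if line != last:  # drop duplicate events
                yield line
            last = line
    finally:
        for f in files:
            f.close()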
A: We needed to merge several log files chronologically, each having multiple lines per log entry (Java applications do this often - their stack traces span lines). I decided to implement a simple shell+Perl script. It covers our tasks. If you are interested in it, follow this link: http://code.google.com/p/logmerge/
A: Read only one line at a time from both source files.
Compare the lines and write the older one to the output file (and advance to the next line).
Do this until you have reached the end of both files and you've merged the files.
And make sure to remove duplicates :)
I guess this code in C# may illustrate the approach:
StringReader fileStream1;
StringReader fileStream2;
Event eventCursorFile1 = Event.Parse(fileStream1.ReadLine());
Event eventCursorFile2 = Event.Parse(fileStream2.ReadLine());
while (!(fileStream1.EOF && fileStream2.EOF))
{
    if (eventCursorFile1.TimeStamp < eventCursorFile2.TimeStamp)
    {
        WriteToMasterFile(eventCursorFile1);
        eventCursorFile1 = Event.Parse(fileStream1.ReadLine());
    }
    else if (eventCursorFile1.TimeStamp == eventCursorFile2.TimeStamp)
    {
        // same event seen in both logs: write it once and advance both
        WriteToMasterFile(eventCursorFile1);
        eventCursorFile1 = Event.Parse(fileStream1.ReadLine());
        eventCursorFile2 = Event.Parse(fileStream2.ReadLine());
    }
    else
    {
        WriteToMasterFile(eventCursorFile2);
        eventCursorFile2 = Event.Parse(fileStream2.ReadLine());
    }
}
The break condition isn't exactly right as this is just Quick'n'dirty, but it should look similar..
A: Or you could borrow a log merge utility from AWStats, an open source website stats tool.
logresolvemerge.pl is a Perl script that can merge multiple log files; you can even use multiple threads to merge them (you need Perl 5.8 for multi-threaded use). Why not try a readily available tool instead of building one?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128043",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: In which cases do you test against an In-Memory Database instead of a Development Database? When do you test against an In-Memory Database vs. a Development Database?
Also, as a related side question, when you do use a Development Database, do you use an Individual Development Database, an Integration Development Database, or both?
Also++, for unit testing, when do you use an In-Memory Database over mocking out your Repository/DAL, etc.?
A: In memory is an excellent choice for your unit-tests, when the data is easy to seed for your given test cases and a very particular operation is being tested. A real database is better for integration tests, where the data pre-requisites are more complex and there is value to having the base data remain after the tests complete.
For us, the only things we allow in our 'fast' test suite of JUnit tests are those that do not have any external dependencies (database, file, network, etc) so that the suite can be run quickly and efficiently by both developers and continuous integration on checkin. If there is a certain test that absolutely needs to go to the DB, then an in memory one is the only way to go.
A couple points to keep in mind:
*
*Think carefully about whether you need to use a database at all in a unit test. It may be indicative of a poor design in which the data access layer is coupled too tightly to the business logic you are trying to test and cannot be mocked out.
*If using a real database for integration testing, ensure that the tests always restore the data to a pristine state when finished. I've seen a lot of wasted time and failed integration tests because some other test messed up the data.
As for your other question, it really depends on your need. A good rule of thumb is one development database per code branch, since schema changes may be needed that are not relevant to another branch of code. Just having a dedicated development database is important; I'm surprised at how many development teams have to share a database with the QA team, etc. It is important to be able to make changes in a sandboxed environment that does not affect other teams or prevent others from doing their work, so if you've met those requirements you're doing well.
A: For my team, it's in-memory on the developer machine, and the real database on the continuous integration server.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: What are the benefits of functional programming? What do you think the benefits of functional programming are? And how do they apply to programmers today?
What are the greatest differences between functional programming and OOP?
A: The style of functional programming is to describe what you want, rather than how to get it. ie: instead of creating a for-loop with an iterator variable and marching through an array doing something to each cell, you'd say the equivalent of "this label refers to a version of this array where this function has been done on all the elements."
Functional programming moves more basic programming ideas into the compiler, ideas such as list comprehensions and caching.
The biggest benefit of Functional programming is brevity, because code can be more concise. A functional program doesn't create an iterator variable to be the center of a loop, so this and other kinds of overhead are eliminated from your code.
The other major benefit is concurrency, which is easier to do with functional programming because the compiler is taking care of most of the operations which used to require manually setting up state variables (like the iterator in a loop).
Some performance benefits can be seen in the context of a single-processor as well, depending on the way the program is written, because most functional languages and extensions support lazy evaluation. In Haskell you can say "this label represents an array containing all the even numbers". Such an array is infinitely large, but you can ask for the 100,000th element of that array at any moment without having to know--at array initialization time--just what the largest value is you're going to need. The value will be calculated only when you need it, and no further.
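In Haskell that idea is a one-liner (a sketch; the names are illustrative):
evens :: [Integer]
evens = [0,2..]            -- conceptually infinite list of even numbers

needed :: Integer
needed = evens !! 100000   -- forces only as much of the list as is needed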
A: I think the most practical example of the need for functional programming is concurrency - functional programs are naturally thread safe, and given the rise of multi-core hardware this is of the utmost importance.
Functional programming also increases modularity - in imperative code you often see methods/functions that are far too long, while in functional code you'll almost never see a function more than a couple of lines long. And since everything is decoupled, re-usability is much improved and unit testing is very, very easy.
A: It doesn't have to be one or the other: using a language like C#3.0 allows you to mix the best elements of each. OO can be used for the large scale structure at class level and above, Functional style for the small scale structure at method level.
Using the Functional style allows code to be written that declares its intent clearly, without being mixed up with control flow statements, etc. Because of the principles like side-effect free programming, it is much easier to reason about code, and check its correctness.
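For example, a method-level functional sketch in C# 3.0 (the orders collection and its members are made up; requires using System.Linq):
// declarative and side-effect free: says *what*, not *how*
decimal total = orders.Where(o => o.IsPaid)
                      .Select(o => o.Amount)
                      .Sum();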
A: Once the program grows, the number of commands in our vocabulary becomes too high, making it very difficult to use. This is where object-oriented programming makes our life easier, because it allows us to organize our commands in a better way.
We can associate all commands that involve customer with some customer entity (a class), which makes the description a lot clearer. However, the program is still a sequence of commands specifying how it should proceed.
Functional programming provides a completely different way of extending the vocabulary. We are not limited to adding new primitive commands; we can also add new control structures, primitives that specify how we can put commands together to create a program. In imperative languages, we were able to compose commands in a sequence or using a limited number of built-in constructs such as loops, but if you look at typical programs, you'll still see many recurring structures, common ways of combining commands.
A: The biggest benefit is that it's not what you're used to. Pick a language like Scheme and learn to solve problems with it, and you'll become a better programmer in languages you already know. It's like learning a second human language. You assume that others are basically a variation on your own because you have nothing to compare it with. Exposure to others, particular ones that aren't related to what you already know, is instructive.
A: Do not think of functional programming in terms of a "need". Instead, think of it as another programming technique that will open up your mind just as OOP, templates, assembly language, etc may have completely changed your way of thinking when (if) you learned them. Ultimately, learning functional programming will make you a better programmer.
A: Why Functional Programming Matters
http://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.pdf
Abstract
As software becomes more and more complex, it is more and more important to structure it well. Well-structured software is easy to write and to debug, and provides a collection of modules that can be reused to reduce future programming costs.
In this paper we show that two features of functional languages in particular, higher-order functions and lazy evaluation, can contribute significantly to modularity. As examples, we manipulate lists and trees, program several numerical algorithms, and implement the alpha-beta heuristic (an algorithm from Artificial Intelligence used in game-playing programs). We conclude that since modularity is the key to successful programming, functional programming offers important advantages for software development.
A:
A good starting point therefore would be to try to understand some things that are not possible in imperative languages but possible in functional languages.
If you're talking about computability, there is of course nothing that is possible in functional but not imperative programming (or vice versa).
The point of different programming paradigms isn't to make things possible that weren't possible before, it's to make things easy that were hard before.
Functional programming aims to let you more easily write programs that are concise, bug-free and parallelizable.
A: If you don't already know functional programming then learning it gives you more ways to solve problems.
FP is a simple generalization that promotes functions to first class values whereas OOP is for large-scale structuring of code. There is some overlap, however, where OOP design patterns can be represented directly and much more succinctly using first-class functions.
Many languages provide both FP and OOP, including OCaml, C# 3.0 and F#.
Cheers,
Jon Harrop.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "110"
}
|
Q: Dynamically adding controls in ASP.NET Repeater I find my self having a repeater control which is being databound to an xml document. My client is now requesting that the Textbox's which are being repeater can be either a Textbox or a Checkbox.
I cannot seem to find an easyway to essentially do the following:
if ((System.Xml.XmlNode)e.Item.DataItem.Attributes["type"] == "text")
<asp:TextBox runat="server" ID="txtField" Text='<%#((System.Xml.XmlNode)Container.DataItem).InnerText %>' CssClass="std"></asp:TextBox>
else
<asp:CheckBox runat="server" ID="txtField" Text='<%#((System.Xml.XmlNode)Container.DataItem).InnerText %>' CssClass="std"></asp:CheckBox>
Is there a nice way I can extend my current implementaion without have to rewrite the logic. If I could inject the control via "OnItemDataBound" that would also be fine. But I cannot seem to make it work
A: In your repeater, drop a Panel, then create an event handler for the repeater's data binding event and programmatically create the TextBox or CheckBox and add it as a child control of the Panel. You should be able to get the DataItem from the event args to get information like your "type" attribute or values to feed your Text properties or css information, etc.
A: I would go with mspmsp's suggestion. Here is some quick and dirty code as an example of it:
Place this in your aspx:
<asp:Repeater ID="myRepeater" runat="server" OnItemCreated="myRepeater_ItemCreated">
<ItemTemplate>
<asp:PlaceHolder ID="myPlaceHolder1" runat="server"></asp:PlaceHolder>
<br />
</ItemTemplate>
</asp:Repeater>
And this in your codebehind:
Dim plh As PlaceHolder
Dim uc As UserControl

Protected Sub myRepeater_ItemCreated(ByVal sender As Object, ByVal e As RepeaterItemEventArgs)
    If e.Item.ItemType = ListItemType.Item Or e.Item.ItemType = ListItemType.AlternatingItem Then
        plh = CType(e.Item.FindControl("myPlaceHolder1"), PlaceHolder)
        uc = CType(Page.LoadControl("~/usercontrols/myUserControl.ascx"), UserControl)
        plh.Controls.Add(uc)
    End If
End Sub
A: What about something similar to this in your markup, on each of the textbox and checkbox controls?
Visible='<%# Eval("type").ToString() == "text" %>'
A: If there is needed to add controls based on data then there can be used this approach:
<asp:Repeater ID="ItemsRepeater" runat="server" OnItemDataBound="ItemRepeater_ItemDataBound">
<itemtemplate>
<div>
<asp:PlaceHolder ID="ItemControlPlaceholder" runat="server"></asp:PlaceHolder>
</div>
</itemtemplate>
</asp:Repeater>
protected void ItemRepeater_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
var placeholder = e.Item.FindControl("ItemControlPlaceholder") as PlaceHolder;
var col = (ItemData)e.Item.DataItem;
placeholder.Controls.Add(new HiddenField { Value = col.Name });
placeholder.Controls.Add(CreateControl(col));
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How can I write a unit test for a controller class that uses winforms for views? Has anyone been able to successfully unit test methods that are, by necessity, coupled to the System.Windows.Forms.Form class?
I've recently been working on a C# winforms application, trying to build it with an MVC structure. This is difficult enough, given that the framework isn't really built with this in mind.
However, it gets even tougher when you throw unit testing into the mix. I've been making sure that my controllers are not coupled to concrete view classes, so that I can use a stub/mock for unit testing. But referencing the Form class somewhere is unavoidable, and these methods do need to be tested.
I've been using Moq because it has some nice type-safety features, and allows mocking concrete types. But unfortunately, it doesn't allow me to "expect" calls to methods or properties on a concrete type that are neither virtual nor abstract. And since the Form class was not built with subclassing in mind, this is a big problem. I need to be able to mock the Form class to prevent real windows from being created, by "expecting" ShowDialog, for example.
So I'm left unable to run any unit tests that do much interaction with subclasses of Form, which my views are.
Is there anyone out there who has successfully unit tested this type of code? How did you do it?
Is this something that other mocking frameworks can get around? Would the string-based methods used by other mocking frameworks be subject to the same constraints? Can I write my own explicit long-hand mock classes, or will the lack of virtual members prevent me from being able to suppress the window behavior that way too?
Or is there some way I haven't thought of to structure my classes so that the Forms-coupled code ends up in methods and classes of trivial complexity, such that I can get away without explicitly unit testing them, without my conscience beating me up for it?
A: The best method I've heard of/used for unit testing with GUI elements is the Humble Dialog pattern/method. In essence, the Forms are just the interface, and all the real work is done in other classes. You unit test the classes that provide the functionality, and then just tie your GUI events to the appropriate methods in those classes.
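A minimal sketch of the pattern (all names are illustrative):
// The Form implements this; tests implement a trivial fake instead.
public interface ILoginView
{
    string UserName { get; }
    void ShowError(string message);
}

// All real logic lives here, testable without creating any window.
public class LoginController
{
    private readonly ILoginView view;

    public LoginController(ILoginView view) { this.view = view; }

    public void Login()
    {
        if (string.IsNullOrEmpty(view.UserName))
            view.ShowError("User name is required.");
    }
}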
A: My current thought is that I may have to use composition rather than inheritance with the Form class, to decouple the controllers from it.
This has the disadvantage that every time I need to use an member of the Form class that I didn't plan for, I need to add it explicitly to my view interface.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Determining the index of an Item on a Form (J2ME) Given an Item that has been appended to a Form, whats the best way to find out what index that item is at on the Form?
Form.append(Item) will give me the index its initially added at, but if I later insert items before that the index will be out of sync.
A: This was the best I could come up with:
private int getItemIndex(Item item, Form form) {
for(int i = 0, size = form.size(); i < size; i++) {
if(form.get(i).equals(item)) {
return i;
}
}
return -1;
}
I haven't actually tested this but it should work. I just don't like having to enumerate every item, but there should never be that many, so I guess it's OK.
A: Well, there are just two ways to do this, since the API does not have an indexOf(Item) method:
*
*You update the index you get when you add an Item. So when you insert another Item before other items, you'll have to update the indices of those items. You could keep some kind of shadow-array for this, but that seems a bit overkill.
*You loop through all the items of a form using the size and get methods of Form.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do you find Leapyear in VBA? What is a good implementation of a IsLeapYear function in VBA?
Edit: I ran the if-then and the DateSerial implementation with iterations wrapped in a timer, and the DateSerial was quicker on the average by 1-2 ms (5 runs of 300 iterations, with 1 average cell worksheet formula also working).
A: If efficiency is a consideration and the expected year is random, then it might be slightly better to do the most frequent case first:
Public Function isLeapYear(yr As Integer) As Boolean
    If yr Mod 4 <> 0 Then
        isLeapYear = False
    ElseIf yr Mod 400 = 0 Then
        isLeapYear = True
    ElseIf yr Mod 100 = 0 Then
        isLeapYear = False
    Else
        isLeapYear = True
    End If
End Function
A: As a variation on the Chip Pearson solution, you could also try
Public Function isLeapYear(Yr As Integer) As Boolean
' returns FALSE if not Leap Year, TRUE if Leap Year
isLeapYear = (DAY(DateSerial(Yr, 3, 0)) = 29)
End Function
A: Public Function isLeapYear(Yr As Integer) As Boolean
' returns FALSE if not Leap Year, TRUE if Leap Year
isLeapYear = (Month(DateSerial(Yr, 2, 29)) = 2)
End Function
I originally got this function from Chip Pearson's great Excel site.
Pearson's site
A: I found this funny one on CodeToad :
Public Function IsLeapYear(Yr As Variant) As Boolean
    IsLeapYear = IsDate("29-Feb-" & Yr)
End Function
Although I'm pretty sure that the use of IsDate in a function is probably slower than a couple of if, elseifs.
A: Late answer to address the performance question.
TL/DR: the Math versions are about 5x faster
I see two groups of answers here
*
*Mathematical interpretation of the Leap Year definition
*Utilize the Excel Date/Time functions to detect Feb 29 (these fall into two camps: those that build a date as a string, and those that don't)
I ran time tests on all posted answers, and discovered the Math methods are about 5x faster than the Date/Time methods.
I then did some optimization of the methods and came up with the following (believe it or not, Integer is marginally faster than Long in this case; I don't know why).
Function IsLeapYear1(Y As Integer) As Boolean
If Y Mod 4 Then Exit Function
If Y Mod 100 Then
ElseIf Y Mod 400 Then Exit Function
End If
IsLeapYear1 = True
End Function
For comparison, I came up with this (very little difference from the posted version):
Public Function IsLeapYear2(yr As Integer) As Boolean
IsLeapYear2 = Month(DateSerial(yr, 2, 29)) = 2
End Function
The Date/Time versions that build a date as a string were discounted as they are much slower again.
The test was to get IsLeapYear for years 100..9999, repeated 1000 times
Results
*
*Math version: 640ms
*Date/Time version: 3360ms
The test code was
Sub Test()
Dim n As Long, i As Integer, j As Long
Dim d As Long
Dim t1 As Single, t2 As Single
Dim b As Boolean
n = 1000
Debug.Print "============================="
t1 = Timer()
For j = 1 To n
For i = 100 To 9999
b = IsLeapYear1(i)
Next i, j
t2 = Timer()
Debug.Print 1, (t2 - t1) * 1000
t1 = Timer()
For j = 1 To n
For i = 100 To 9999
b = IsLeapYear2(i)
Next i, j
t2 = Timer()
Debug.Print 2, (t2 - t1) * 1000
End Sub
A: public function isLeapYear (yr as integer) as boolean
isLeapYear = false
if (mod(yr,400)) = 0 then isLeapYear = true
elseif (mod(yr,100)) = 0 then isLeapYear = false
elseif (mod(yr,4)) = 0 then isLeapYear = true
end function
Wikipedia for more...
http://en.wikipedia.org/wiki/Leap_year
A: Public Function ISLeapYear(Y As Integer) AS Boolean
' Uses a 2 or 4 digit year
'To determine whether a year is a leap year, follow these steps:
'1 If the year is evenly divisible by 4, go to step 2. Otherwise, go to step 5.
'2 If the year is evenly divisible by 100, go to step 3. Otherwise, go to step 4.
'3 If the year is evenly divisible by 400, go to step 4. Otherwise, go to step 5.
'4 The year is a leap year (it has 366 days).
'5 The year is not a leap year (it has 365 days).
If Y Mod 4 = 0 Then ' This is Step 1 either goto step 2 else step 5
If Y Mod 100 = 0 Then ' This is Step 2 either goto step 3 else step 4
If Y Mod 400 = 0 Then ' This is Step 3 either goto step 4 else step 5
ISLeapYear = True ' This is Step 4 from step 3
Exit Function
Else: ISLeapYear = False ' This is Step 5 from step 3
Exit Function
End If
Else: ISLeapYear = True ' This is Step 4 from Step 2
Exit Function
End If
Else: ISLeapYear = False ' This is Step 5 from Step 1
End If
End Function
A: Public Function isLeapYear(Optional intYear As Variant) As Boolean
If IsMissing(intYear) Then
intYear = Year(Date)
End If
If intYear Mod 400 = 0 Then
isLeapYear = True
ElseIf intYear Mod 4 = 0 And intYear Mod 100 <> 0 Then
isLeapYear = True
End If
End Function
A: I see many great concepts here that indicate extra understanding and usage of date functions, and they are terrific to learn from...
In terms of code efficiency, though, consider the machine code needed for a function to execute: rather than complex date functions, use only fairly fast integer operations. BASIC was built on GOTO, so I suspect that something like the below is faster:
Function IsYLeapYear(Y%) As Boolean
If Y Mod 4 <> 0 Then GoTo NoLY ' get rid of 75% of them
If Y Mod 400 <> 0 And Y Mod 100 = 0 Then GoTo NoLY
IsYLeapYear = True
NoLY:
End Function
A: Here's another simple option.
Leap_Day_Check = Day(DateValue("01/03/" & Required_Year) - 1)
If Leap_Day_Check = 28 then it is not a leap year, if it is 29 it is.
VBA knows what the date before 1st March is in a year and so will set it to be either 28 or 29 February for us.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: Assembler file as input for a driver build with the WDK tools How to get an assembler file to be compiled and linked into a driver build.
To clarify a bit
The SOURCES file :
TARGETTYPE=DRIVER
DRIVERTYPE=WDM
TARGETPATH=obj
TARGETNAME=bla
INCLUDES=$(DDK_INC_PATH)
TARGETLIBS=$(DDK_LIB_PATH)\ks.lib
SOURCES=x.cpp y.cpp z.asm
The problem occurs with the z.asm file. NMAKE complains that it does not know how to build z.obj.
So the question is, how to get the asm file assembled with build and linked into bla.sys.
A: Have you tried the I386_SOURCES macro?
E.g.:
SOURCES=x.cpp y.cpp
I386_SOURCES=i386\z.asm
And putting the file in the i386 directory.
Also see MSDN regarding the SOURCES macro
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: What does "Create Statistics" do in SQL Server 2005? The Database Tuning Advisor is recommending that I create a bunch of statistics in my Database. I'm something of a SQL n00b, so this was the first time I'd ever come across such a creature. The entry in MSDN was a little obtuse - could someone explain what exactly this does, and why it's a good idea?
A: Statistics are used by the optimizer to determine whether to use a specific index for your query. Without statistics, the optimizer doesn't have a way to know about how many of your rows will match a given condition, causing it to have to optimize for the "many rows" case, which could be less-than-optimal.
A: Cost Based Query Optimisation is a technique that uses histograms and row counts to heuristically estimate the cost of executing a query plan. When you submit a query to SQL Server, it evaluates it and generates a series of Query Plans for which it uses heuristics to estimate the costs. It then selects the cheapest query plan.
Statistics are used by the query optimiser to calculate the cost of the query plans. If the statistics are missing or out of date it does not have correct data to estimate the plan. In this case it can generate query plans that are moderately or highly sub-optimal.
SQL Server will (under most circumstances) generate statistics on most tables and indexes automatically but you can supplement these or force refreshes. The query tuning wizard has presumably found some missing statistics or identified joins within the query that statistics should be added for.
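For example, creating and refreshing statistics by hand looks like this (object names are made up):
-- create a statistics object on a column the optimizer has no index for
CREATE STATISTICS stats_LastName ON dbo.Customers (LastName)
-- refresh later so the histogram reflects current data
UPDATE STATISTICS dbo.Customers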
A: In a nutshell, it prepares your database to work effectively. By having prepared statistics, your database knows (before it needs to figure out an execution plan) what is likely to be its most efficient route.
A: Basically just keeps SQL updated with what type of indexing you have, row count, etc. This helps SQL better estimate how to execute your queries. Keeping the statistics updated is a good thing.
A: From the BOL...
Creates a histogram and associated density groups (collections) over the supplied column or set of columns of a table or indexed view. String summary statistics are also created on statistics built on char, varchar, varchar(max), nchar, nvarchar, nvarchar(max), text, and ntext columns. The query optimizer uses this statistical information to choose the most efficient plan for retrieving or updating data. Up-to-date statistics allow the optimizer to accurately assess the cost of different query plans, and choose a high-quality plan.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "43"
}
|
Q: Unicode in PDF My program generates relatively simple PDF documents on request, but I'm having trouble with unicode characters, like kanji or odd math symbols. To write a normal string in PDF, you place it in brackets:
(something)
There is also the option to escape a character with octal codes:
(\527)
but this only goes up to 512 characters. How do you encode or escape higher characters? I've seen references to byte streams and hex-encoded strings, but none of the references I've read seem to be willing to tell me how to actually do it.
Edit: Alternatively, point me to a good Java PDF library that will do the job for me. The one I'm currently using is a version of gnujpdf (which I've fixed several bugs in, since the original author appears to have gone AWOL), that allows you to program against an AWT Graphics interface, and ideally any replacement should do the same.
The alternatives seem to be either HTML -> PDF, or a programmatic model based on paragraphs and boxes that feels very much like HTML. iText is an example of the latter. This would mean rewriting my existing code, and I'm not convinced they'd give me the same flexibility in laying out.
Edit 2: I didn't realise before, but the iText library has a Graphics2D API and seems to handle unicode perfectly, so that's what I'll be using. Though it isn't an answer to the question as asked, it solves the problem for me.
Edit 3: iText is working nicely for me. I guess the lesson is, when faced with something that seems pointlessly difficult, look for somebody who knows more about it than you.
A: As dredkin pointed out, you have to use the glyph indices instead of the Unicode character value in the page content stream. This is sufficient to display Unicode text in PDF, but the Unicode text would not be searchable. To make the text searchable or have copy/paste work on it, you will also need to include a /ToUnicode stream. This stream should translate each glyph in the document to the actual Unicode character.
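For illustration, the heart of such a /ToUnicode CMap stream is a set of bfchar/bfrange mappings from glyph IDs to Unicode code points (the glyph IDs here are made up):
2 beginbfchar
<0001> <0048>    % glyph 1 -> U+0048 ('H')
<0002> <20AC>    % glyph 2 -> U+20AC (Euro sign)
endbfchar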
A: See Appendix D (page 995) of the PDF specification. There is a limited number of fonts and character sets pre-defined in a PDF consumer application. To display other characters you need to embed a font that contains them. It is also preferable to embed only a subset of the font, including only required characters, in order to reduce file size. I am also working on displaying Unicode characters in PDF and it is a major hassle.
Check out PDFBox or iText.
http://www.adobe.com/devnet/pdf/pdf_reference.html
A: In the PDF reference in chapter 3, this is what they say about Unicode:
Text strings are encoded in either PDFDocEncoding or Unicode character encoding. PDFDocEncoding is a superset of the ISO Latin 1 encoding and is documented in Appendix D. Unicode is described in the Unicode Standard by the Unicode Consortium (see the Bibliography).
For text strings encoded in Unicode, the first two bytes must be 254 followed by 255. These two bytes represent the Unicode byte order marker, U+FEFF, indicating that the string is encoded in the UTF-16BE (big-endian) encoding scheme specified in the Unicode standard. (This mechanism precludes beginning a string using PDFDocEncoding with the two characters thorn ydieresis, which is unlikely to be a meaningful beginning of a word or phrase.)
A: I have worked several days on this subject now and what I have learned is that unicode is (as good as) impossible in pdf. Using 2-byte characters the way plinth described only works with CID-Fonts.
Seemingly, CID-Fonts are a PDF-internal construct and they are not really fonts in that sense - they seem to be more like graphics subroutines that can be invoked by addressing them (with 16-bit addresses).
So to use unicode in pdf directly
*
*you would have to convert normal fonts to CID-Fonts, which is probably extremely hard - you'd have to generate the graphics routines from the original font(?), extract character metrics etc.
*you cannot use CID-Fonts like normal fonts - you cannot load or scale them the way you load and scale normal fonts
*also, 2-byte characters don't even cover the full Unicode space
IMHO, these points make it absolutely unfeasible to use unicode directly.
What I am doing instead now is using the characters indirectly in the following way:
For every font, I generate a codepage (and a lookup-table for fast lookups) - in c++ this would be something like
std::map<std::string, std::vector<wchar_t> > Codepage;
std::map<std::string, std::map<wchar_t, int> > LookupTable;
then, whenever I want to put some unicode-string on a page, I iterate its characters, look them up in the lookup-table and - if they are new, I add them to the code-page like this:
for(std::wstring::const_iterator i = str.begin(); i != str.end(); i++)
{
if(LookupTable[fontname].find(*i) == LookupTable[fontname].end())
{
LookupTable[fontname][*i] = Codepage[fontname].size();
Codepage[fontname].push_back(*i);
}
}
then, I generate a new string, where the characters from the original string are replaced by their positions in the codepage like this:
static std::string hex = "0123456789ABCDEF";
std::string result = "<";
for(std::wstring::const_iterator i = str.begin(); i != str.end(); i++)
{
int id = LookupTable[fontname][*i] + 1;
result += hex[(id & 0x00F0) >> 4];
result += hex[(id & 0x000F)];
}
result += ">";
for example, "H€llo World!" might become <010203030405060407030809>
and now you can just put that string into the pdf and have it printed, using the Tj operator as usual...
but you now have a problem: the pdf doesn't know that you mean "H" by a 01. To solve this problem, you also have to include the codepage in the pdf file. This is done by adding an /Encoding to the Font object and setting its Differences
For the "H€llo World!" example, this Font-Object would work:
5 0 obj
<<
/F1
<<
/Type /Font
/Subtype /Type1
/BaseFont /Times-Roman
/Encoding
<<
/Type /Encoding
/Differences [ 1 /H /Euro /l /o /space /W /r /d /exclam ]
>>
>>
>>
endobj
I generate it with this code:
ObjectOffsets.push_back(stream->tellp()); // xrefs entry
(*stream) << ObjectCounter++ << " 0 obj \n<<\n";
int fontid = 1;
for(std::list<std::string>::iterator i = Fonts.begin(); i != Fonts.end(); i++)
{
(*stream) << " /F" << fontid++ << " << /Type /Font /Subtype /Type1 /BaseFont /" << *i;
(*stream) << " /Encoding << /Type /Encoding /Differences [ 1 \n";
for(std::vector<wchar_t>::iterator j = Codepage[*i].begin(); j != Codepage[*i].end(); j++)
(*stream) << " /" << GlyphName(*j) << "\n";
(*stream) << " ] >>";
(*stream) << " >> \n";
}
(*stream) << ">>\n";
(*stream) << "endobj \n\n";
Notice that I use a global font-register - I use the same font names /F1, /F2,... throughout the whole pdf document. The same font-register object is referenced in the /Resources Entry of all pages. If you do this differently (e.g. you use one font-register per page) - you might have to adapt the code to your situation...
So how do you find the names of the glyphs (/Euro for "€", /exclam for "!" etc.)? In the above code, this is done by simply calling "GlyphName(*j)". I have generated this method with a BASH-Script from the list found at
http://www.jdawiseman.com/papers/trivia/character-entities.html
and it looks like this
const std::string GlyphName(wchar_t UnicodeCodepoint)
{
switch(UnicodeCodepoint)
{
case 0x00A0: return "nonbreakingspace";
case 0x00A1: return "exclamdown";
case 0x00A2: return "cent";
...
}
}
A major problem I have left open is that this only works as long as you use at most 254 different characters from the same font. To use more than 254 different characters, you would have to create multiple codepages for the same font.
Inside the pdf, different codepages are represented by different fonts, so to switch between codepages, you would have to switch fonts, which could theoretically blow your pdf up quite a bit, but I for one, can live with that...
A: The simple answer is that there's no simple answer. If you take a look at the PDF specification, you'll see an entire chapter — and a long one at that — devoted to the mechanisms of text display. I implemented all of the PDF support for my company, and handling text was by far the most complex part of exercise. The solution you discovered — use a 3rd party library to do the work for you — is really the best choice, unless you have very specific, special-purpose requirements for your PDF files.
A: Algoman's answer is wrong about several things. You can make a PDF document with Unicode in it, and it's not rocket science, though it needs some work.
Yes he is right, to use more than 255 characters in one font you have to create a composite font (CIDFont) pdf object.
Then you just mention the actual TrueType font you want to use as a DescendatFont entry of CIDFont.
The trick is that after that you have to use the glyph indices of the font instead of character codes. To get this index map you have to parse the cmap section of the font - get the contents of the font with the GetFontData function and get your hands on the TTF specification.
And that's it! I've just done it and now I have a Unicode PDF!
Sample Code for parsing cmap section is here: https://web.archive.org/web/20150329005245/http://support.microsoft.com/en-us/kb/241020
And yes, don't forget /ToUnicode entry as @user2373071 pointed out or user will not be able to search your PDF or copy text from it.
A: dredkin's answer has worked fine for me in the forward direction (unicode text to PDF representation).
I was writing an increasingly convoluted comment there about the reverse direction (PDF representation to text, when copying from the PDF document), explained by user2373071. The method referred to throughout this thread is the definition of a /ToUnicode map (which, incidentally, is optional). I found it simplest to map from glyphs to characters using the beginbfrange srcCode1 srcCode2 [ dstString1 m ] endbfrange construct.
This seems to work OK in Adobe Reader, but two glyphs (0x100 and 0x1ef) cause the mapping for cyrillic characters to fail in browsers and SumatraPDF (the copy/paste provides the glyph IDs instead of the characters). By excluding those two glyphs I made it work there. (I really can't see what's special about these glyphs, and it's independent of font (i.e. it's the same glyphs, but different characters, in Times/Georgia/Palatino, and these values are afaik identically mapped in UTF-16). Any ideas welcome!)
However, and more importantly,
I have reached the conclusion that the whole /ToUnicode mechanism is fundamentally flawed in concept, because many fonts re-use glyphs for multiple characters. Consider simple ones like 0x20 and 0xa0 (ordinary and non-breaking space); 0x2d and 0xad (hyphen and soft hyphen); these two are in the 8-bit character range. Slightly beyond that are 0x3b and 0x37e (semi-colon and Greek question mark). And it would be quite reasonable to re-use cyrillic small a and latin small a, and similar homoglyphs. So the point is, in the non-ASCII world that prompts us to worry about Unicode at all, we will encounter a one-to-many mapping from glyphs to characters, and will therefore be bound to pick up the wrong character at some point - which rather removes the point of being able to extract the text in the first place.
The other method in the (1.7) PDF reference is to use /ActualText instead of /ToUnicode. This is better in principle, because it completely avoids the homoglyph problem I've mentioned above, and the overhead is probably bearable, but it only seems to be implemented in Adobe Reader (i.e. I haven't got anything consistent or meaningful from SumatraPDF or four browsers).
A: I'm not a PDF expert, and (as Ferruccio said) the PDF specs at Adobe should tell you everything, but a thought popped up in my mind:
Are you sure you are using a font that supports all the characters you need?
In our application, we create PDF from HTML pages (with a third party library), and we had this problem with cyrillic characters...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36"
}
|
Q: Retrieve Error text from SQL Server 2000 error I need help logging errors from T-SQL in SQL Server 2000. We need to log errors that we trap, but are having trouble getting the same information we would have had sitting in front of SQL Server Management Studio.
I can get a message without any argument substitution like this:
SELECT MSG.description from master.dbo.sysmessages MSG
INNER JOIN master.dbo.syslanguages LANG ON MSG.msglangID=LANG.msglangid
WHERE MSG.error=@err AND LANG.langid=@@LANGID
But I have not found any way of finding out the error arguments. I want to see:
Constraint violation MYCONSTRAINT2 on table MYTABLE7
not
Constraint violation %s on table %s
Googling has only turned up exotic schemes using DBCC OUTPUTBUFFER that require admin access and aren't appropriate for production code. How do I get an error message with argument replacement?
A: In .Net, retrieving error messages (and anything output from print or raiserror) from sql server is as simple as setting one property on your SqlConnection ( .FireInfoMessageEventOnUserErrors = True) and handling the connection's InfoMessage event. The data received by .Net matches what you get in the Messages window in the SQL Server Management Studio results grid.
All the code goes in the function that handles the event, and you can abstract that so that all your connections point to the same method, so there's nothing else to change in the rest of the app aside from the two lines of code when you create new connections to set the property and event (and you have that abstracted away so you only need to do it in one place, right?)
Here is a link to what I consider the definitive error guide for SQL Server.
http://www.sommarskog.se/error-handling-I.html
In certain circumstances SQL Server will continue processing even after an error. See the heading labeled What Happens when an Error Occurs? from the previous link.
A: Look in Books on-line for Raiserror (Described)
You will find the syntax looks like this:
RAISERROR ( { msg_id | msg_str } { , severity , state }
[ , argument [ ,...n ] ] )
[ WITH option [ ,...n ] ]
and the error arguments are as follows:
d or i Signed integer
o Unsigned octal
p Pointer
s String
u Unsigned integer
x or X Unsigned hexadecimal
Any language from VB onwards has the ability to catch these and let you to take the appropriate action.
Dave J
A: Any chance you'll be upgrading to SQL2005 soon? If so, you could probably leverage their TRY/CATCH model to more easily accomplish what you're trying to do.
The variables exposed in the catch can give you the object throwing the error, the line number, error message, severity, etc. From there, you can log it, send an email, etc.
A: FORMATMESSAGE (it also exists in SQL Server 2000) allows you to build messages into their final format from the sysmessages templates like the one above.
However, the RAISERROR command (which is pretty much what the database engine itself calls internally when you have an error) already sends the completed text, which can be trapped and logged in the client. SSMS is a client and does not generate its own messages: all messages come from the database engine.
However, I gather you want to log the T-SQL error using T-SQL. Frankly, you can't on SQL Server 2000. Too many errors are batch and scope aborting to reliably log anything.
You have to be on SQL Server 2005 to use TRY/CATCH/ERROR_MESSAGE, or you trap in the client and then using something like log4net to log back to SQL Server.
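For reference, FORMATMESSAGE usage looks roughly like this (50001 stands in for a user-defined message with two %s placeholders; lower message numbers are reserved):
-- expand the sysmessages template into its final text
DECLARE @msg nvarchar(400)
SET @msg = FORMATMESSAGE(50001, 'MYCONSTRAINT2', 'MYTABLE7')
PRINT @msg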
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Embedding non-edit widgets in a DataGridView Is there any way to embed a widget in a data-bound DataGridViewCell when it is not in editing mode?
For example, we are wanting to display an existing calendar widget in a cell. That cell contains a comma separated list of dates. We want to show the calendar instead of the text.
We could create a custom cell and override the Draw method so that it draws the calendar in the cell, but that would not handle the mouse-over tool-tip that we already have in the existing calendar widget.
[Update]
I tried TcKs's suggestion, but was not able to create a working solution. See the comments on that answer.
A: You should derive your own type from DataGridViewColumn (e.g. a DataGridViewCalendarColumn) and return a DataGridViewCalendarCell (that you have to create yourself, too) as the CellTemplate.
A detailed description can be found in the MSDN article Build a Custom RadioButton Cell and Column for the DataGridView Control
A: Instead of drawing the calendar, you can take a calendar control, set the grid as its parent, and give it the same bounds (left, top, width, height) as the cell.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Debugging with Events in Windows If I create an event using CreateEvent in Windows, how can I check if that event is signaled or not using the debugger in Visual Studio? CreateEvent returns back a handle, which doesn't give me access to much information. Before I call WaitForSingleObject(), I want to check to see if the event is signaled before I step into the function.
A: You can use the Process Explorer tool (http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx) to manually check the event outside of the debugger. It helps if the event is named, so that you can find it easier.
A: Use the handle command. Here is a sample
The following command displays detailed information about handle 0x8.
0:000> !handle 8 f
Handle 8
Type Event
Attributes 0
GrantedAccess 0x100003:
Synch
QueryState,ModifyState
HandleCount 2
PointerCount 3
Name
Object Specific Information
Event Type Auto Reset
Event is Waiting
A: If the event is signaled and you use WaitForSingleObject(), it will return immediately. Also, you can call WaitForSingleObject() with a wait time of 0 to determine whether it is signaled or not. However, that should not be necessary -- set the initial state in the CreateEvent() call (what you are trying to accomplish is not entirely clear from your question).
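A zero-timeout poll looks like this (sketch):
// Returns immediately: WAIT_OBJECT_0 if the event is signaled,
// WAIT_TIMEOUT if it is not.
// Note: a successful wait on an auto-reset event also resets it.
DWORD rc = WaitForSingleObject(hEvent, 0);
if (rc == WAIT_OBJECT_0) {
    // signaled
} else if (rc == WAIT_TIMEOUT) {
    // not signaled
}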
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to define persistent variable in SQL*PLUS I am trying to do the following in SQL*PLUS in ORACLE.
*
*Create a variable
*Pass it as output variable to my method invocation
*Print the value from output variable
I get
undeclared variable
error. I am trying to create a variable that persists in the session till i close the SQL*PLUS window.
variable subhandle number;
exec MYMETHOD - (CHANGE_SET => 'SYNC_SET', - DESCRIPTION => 'Change data for emp',
- SUBSCRIPTION_HANDLE => :subhandle);
print subhandle;
A: It should be OK - check what you did carefully against this:
SQL> create procedure myproc (p1 out number)
2 is
3 begin
4 p1 := 42;
5 end;
6 /
Procedure created.
SQL> variable subhandle number
SQL> exec myproc(:subhandle)
PL/SQL procedure successfully completed.
SQL> print subhandle
SUBHANDLE
----------
42
A: Please can you re-post, formatting the code with the code tag (i.e. the 101 010 button)? I think some extra "-" characters came through, which makes it more difficult to interpret.
It might also be helpful to see SQL*Plus reporting the error, if you could copy the contents of the SQL*Plus window as well.
But it looks correct.
A: I'm not sure if this is what you're looking for, but did you try the &&variable syntax? You could do
select &&subhandle from dual
or some such at the start of the script, then subhandle should be bound to that value for the remainder of the session.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Dynamically change an image in a Crystal Report at runtime I'm using the Crystal Reports included with VisualStudio 2005. I would like to change the image that is displayed on the report at runtime ideally by building a path to the image file and then have that image displayed on the report.
Has anyone been able to accomplish this with this version of Crystal Reports?
A: At work we do this by pushing the image(s) into the report as fields of a datatable. It's not pretty, but it gets the job done. Of course, this solution requires that you push data into the reports via a DataSet. I've always felt this was a hack at best. I really wish that image parameters were a possibility with CR.
Edit: It's worth noting that if you are binding your Crystal Report to plain old objects, you want to expose a byte[] property for the report to treat as an image.
A: [I have since found a solution using a byte array via a C# Object property - see separate Answer. Leaving this answer here for reference...]
Here's what I have seen suggested (but I tried and failed in both C#-2005 and C#-2008).
*
*Choose a directory and place a BMP there (e.g., "C:\Temp\image.bmp").
*From the CR-Designer a) Right-click->Insert->OLE Object... b) Select "Create from File" c) Check the "Link" checkbox d) Browse and pick the bmp defined in step 1 e) Click OK f) Place the image on the form.
*Overwrite/update the image at runtime in your C# code. In theory, since you inserted a Link to an image file, it will be updated when the form is refreshed.
I had no luck with this approach. The image appears when I first design the form (step 2). But at runtime, the image does not update for me. From this point forward, things get really odd. It seems that CR caches some sort of image that just won't go away. I can delete the OLE object link in CR-Designer, but if I recreate it, I always get a black box the same size as the original image (even if I change the size of image.bmp).
A: I finally reached a solution using the byte[] tip posted here by Josh.
This solution applies if you are using a plain old C# Object to populate your Crystal Reports (see http://www.aspfree.com/c/a/C-Sharp/Crystal-Reports-for-Visual-Studio-2005-in-CSharp/ for info on this approach).
In your C# class, insert the following code:
private static byte[] m_Bitmap = null;
public byte[] Bitmap
{
get
{
FileStream fs = new FileStream(bitmapPath, FileMode.Open);
BinaryReader br = new BinaryReader(fs);
int length = (int)br.BaseStream.Length;
m_Bitmap = br.ReadBytes(length); // ReadBytes allocates the array; no separate new byte[length] is needed
br.Close();
fs.Close();
return m_Bitmap;
}
}
Now, update your C# Object Mapping in CR using the "Verify Database" option. You should then see the Bitmap property as a CR field. Just drag it onto the form. It will be of type IBlobFieldObject. When you run, you should see your image.
A: Try using a combination of a parameter containing the path of the image and the tutorial on this page: http://www.idautomation.com/crystal/streaming_crystal.html
Then in step #8, use the parameter instead of a hard-coded path.
A: You can also use a conditional formula to set an image's location. See Crystal Reports: Dynamic Images.
A: Another option that I've found useful is inserting the pictures you would like to use. Position the graphic accordingly, then right-click the graphic and go to Format Graphic > Common. Check the Suppress box, then click the formula button, shown as x-2. Once in the formula window, simply add the code for determining whether the graphic should be suppressed or not.
In my case, I was building one invoice template for multiple entities. In the formula window, I simply wrote COMPANY <> 1100 which meant that every time the invoice was run for a company other than 1100, the 1100 graphic would be suppressed.
Hopefully this makes life easier...
A: The current version of Crystal Reports (for Visual Studio 2012+) that I use with Visual Studio 2015 supports this function. Follow the following steps:
*
*Insert a picture into your report. This will serve as your
placeholder.
*Right click your picture and choose Format Object
*Select the Picture tab and the press the formula button
*A formula window will open. Enter a formula that will find your pictures as links.
if({@isDonor}="1")
then "http://www.ny.org/images/aaf/picture1.jpg"
else "http://www.ny.org/images/aaf/picture2.jpg"
And you're done!
A: Just like Josh said.. You will have to push the image with a dataset. Or, put the image into a database table once and pull it in many times with a subreport.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: Seeking CSS Browser compatibility information for setting width using left and right Here's a question that's been haunting me for a year now. The root question is how do I set the size of an element relative to its parent so that it is inset by N pixels from every edge? Setting the width would be nice, but you don't know the width of the parent, and you want the elements to resize with the window. (You don't want to use percents because you need a specific number of pixels.)
Edit
I also need to prevent the content (or lack of content) from stretching or shrinking both elements. First answer I got was to use padding on the parent, which would work great. I want the parent to be exactly 25% wide, and exactly the same height as the browser client area, without the child being able to push it and get a scroll bar.
/Edit
I tried solving this problem using {top:Npx;left:Npx;bottom:Npx;right:Npx;} but it only works in certain browsers.
I could potentially write some javascript with jquery to fix all elements with every page resize, but I'm not real happy with that solution. (What if I want the top offset by 10px but the bottom only 5px? It gets complicated.)
What I'd like to know is either how to solve this in a cross-browser way, or some list of browsers which allow the easy CSS solution. Maybe someone out there has a trick that makes this easy.
A: If you are only concerned with horizontal spacing, then you can make all child block elements within a parent block element "inset" by a certain amount by giving the parent element padding. You can make a single child block element within a parent block element "inset" by giving the element margins. If you use the latter approach, you may need to set a border or slight padding on the parent element to prevent margin collapsing.
If you are concerned with vertical spacing as well, then you need to use positioning. The parent element needs to be positioned; if you don't want to move it anywhere, then use position: relative and don't bother setting top or left; it will remain where it is. Then you use absolute positioning on the child element, and set top, right, bottom and left relative to the edges of the parent element.
For example:
#outer {
width: 10em;
height: 10em;
background: red;
position: relative;
}
#inner {
background: white;
position: absolute;
top: 1em;
left: 1em;
right: 1em;
bottom: 1em;
}
If you want to avoid content from expanding the width of an element, then you should use the overflow property, for example, overflow: auto.
A: The CSS Box model might provide insight for you, but my guess is that you're not going to achieve pixel-perfect layout with CSS alone.
If I understand correctly, you want the parent to be 25% wide and exactly the height of the browser display area. Then you want the child to be 25% - 2n pixels wide and 100% - 2n pixels in height, with n pixels surrounding the child. No current CSS specification includes support for these types of calculations (although IE5, IE6, and IE7 have non-standard support for CSS expressions, and IE8 is dropping support for CSS expressions in IE8-standards mode).
You can force the parent to 100% of the browser area and 25% wide, but you cannot stretch the child's height to pixel perfection with this...
<style type="text/css">
html { height: 100%; }
body { font: normal 11px verdana; height: 100%; }
#one { background-color:gray; float:left; height:100%; padding:5px; width:25%; }
#two { height: 100%; background-color:pink;}
</style>
</head>
<body>
<div id="one">
<div id="two">
<p>content ... content ... content</p>
</div>
</div>
...but a horizontal scrollbar will appear. Also, if the content is squeezed, the parent background will not extend past 100%. This is perhaps the padding example you presented in the question itself.
You can achieve the illusion that you're seeking through images and additional divs, but CSS alone, I don't believe, can achieve pixel perfection with that height requirement in place.
A: Simply apply some padding to the parent element, and no width on the child element. Assuming they're both display:block, that should work fine.
A: Or go the other way around: set the margin of the child-element.
Floatutorial is a great resource for stuff like this.
A: Try this:
.parent {padding:Npx; display:block;}
.child {width:100%; display:block;}
It should have an Npx space on all sides, stretching to fill the parent element.
EDIT:
Of course, on the parent, you could also use
{padding-top:Mpx; padding-bottom:Npx; padding-right:Xpx; padding-left:Ypx;}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: UI Performance with custom border I am creating a user control in C# and I am adding my own border and background. Currently the background is 16 small images that I change depending on the status of the object. Performance wise, would I be better off using GDI+ instead of the images?
A: I doubt it will make a difference.
If you just blit a bunch of images that's fine and very fast with GDI and GDI+
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Displaying tabular type data in .net winforms I want to display tabular type data, but it will not be coming from a single datasource. I am currently using a label, but the alignment doesn't look that great.
Any ideas?
Again, the data is not being loaded from a datagrid or anything, each row is basically a label and a number e.g.
Total Users: 10123
Total Logins: 234
What is the best way to display this, any built-in control I should be using?
A: Options:
*
*organize your data into a datatable and use a grid control.
*use the TableLayoutPanel to align your information (see the sketch below).
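A minimal sketch of the TableLayoutPanel option, using the numbers from the question (the layout values and control names are illustrative assumptions, not a definitive recipe):
using System.Drawing;
using System.Windows.Forms;
// Two auto-sized columns; one row per label/value pair.
TableLayoutPanel panel = new TableLayoutPanel();
panel.ColumnCount = 2;
panel.AutoSize = true;
panel.Dock = DockStyle.Top;
string[,] stats = { { "Total Users:", "10123" }, { "Total Logins:", "234" } };
for (int i = 0; i < stats.GetLength(0); i++)
{
    panel.Controls.Add(new Label { Text = stats[i, 0], AutoSize = true }, 0, i);
    panel.Controls.Add(new Label { Text = stats[i, 1], AutoSize = true, TextAlign = ContentAlignment.MiddleRight }, 1, i);
}
this.Controls.Add(panel); // assumes this code runs inside a Form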
A: If you don't want to use a datagrid to display the data, I suggest using a textbox with at least a monospace font, like Courier New.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: What is the best way to store set data in Python? I have a list of data in the following form:
[(id_1, description, id_type), (id_2, description, id_type), ..., (id_n, description, id_type)]
The data are loaded from files that belong to the same group. In each group there could be multiples of the same id, each coming from different files. I don't care about the duplicates, so I thought that a nice way to store all of this would be to throw it into a Set type. But there's a problem.
Sometimes for the same id the descriptions can vary slightly, as follows:
IPI00110753
*
*Tubulin alpha-1A chain
*Tubulin alpha-1 chain
*Alpha-tubulin 1
*Alpha-tubulin isotype M-alpha-1
(Note that this example is taken from the uniprot protein database.)
I don't care if the descriptions vary. I cannot throw them away because there is a chance that the protein database I am using will not contain a listing for a certain identifier. If this happens I will want to be able to display the human readable description to the biologists so they know roughly what protein they are looking at.
I am currently solving this problem by using a dictionary type. However I don't really like this solution because it uses a lot of memory (I have a lot of these ID's). This is only an intermediary listing of them. There is some additional processing the ID's go through before they are placed in the database so I would like to keep my data-structure smaller.
I have two questions really. First, will I get a smaller memory footprint using the Set type (over the dictionary type) for this, or should I use a sorted list where I check every time I insert into the list to see if the ID exists, or is there a third solution that I haven't thought of? Second, if the Set type is the better answer how do I key it to look at just the first element of the tuple instead of the whole thing?
Thank you for reading my question,
Tim
Update
based on some of the comments I received let me clarify a little. Most of what I do with the data-structure is insert into it. I only read it twice, once to annotate it with additional information,* and once when it is inserted into the database. However down the line there may be additional annotation that is done before I insert into the database. Unfortunately I don't know if that will happen at this time.
Right now I am looking into storing this data in a structure that is not based on a hash-table (ie. a dictionary). I would like the new structure to be fairly quick on insertion, but reading it can be linear since I only really do it twice. I am trying to move away from the hash table to save space. Is there a better structure or is a hash-table about as good as it gets?
*The information is a list of Swiss-Prot protein identifiers that I get by querying uniprot.
A: Sets don't have keys. The element is the key.
If you think you want keys, you have a mapping. More-or-less by definition.
Sequential list lookup can be slow, even using a binary search. Mappings use hashes and are fast.
Are you talking about a dictionary like this?
{ 'id1': [ ('description1a', 'type1'), ('description1b','type1') ],
'id2': [ ('description2', 'type2') ],
...
}
This sure seems minimal. ID's are only represented once.
Perhaps you have something like this?
{ 'id1': ( ('description1a', 'description1b' ), 'type1' ),
'id2': ( ('description2',), 'type2' ),
...
}
I'm not sure you can find anything more compact unless you resort to using the struct module.
A: I'm assuming the problem you are trying to solve by cutting down on the memory you use is the address space limit of your process. Additionally, you are searching for a data structure that allows you fast insertion and reasonable sequential read-out.
Use few structures other than strings (str)
The question you ask is how to structure your data in one process to use less memory. The one canonical answer to this is (as long as you still need associative lookups) to use as few structures other than Python strings (str, not unicode) as possible. A Python hash (dictionary) stores the references to your strings fairly efficiently (it is not a b-tree implementation).
However I think that you will not get very far with that approach, since what you face are huge datasets that might eventually just exceed the process address space and the physical memory of the machine you're working with altogether.
Alternative Solution
I would propose a different solution that does not involve changing your data structure to something that is harder to insert or interpret.
*
*Split your information up into multiple processes, each holding whatever data structure is convenient for that.
*Implement inter process communication with sockets such that processes might reside on other machines altogether.
*Try to divide your data so as to minimize inter-process communication (I/O is glacially slow compared to CPU cycles).
The advantage of the approach I outline is that
*
*You get to use two or more cores on a machine fully for performance
*You are not limited by the address space of one process, or even the physical memory of one machine
There are numerous packages and approaches to distributed processing, some of which are
*
*linda
*processing
A: If you're doing an n-way merge with removing duplicates, the following may be what you're looking for.
This generator will merge any number of sources. Each source must be a sequence.
The key must be in position 0. It yields the merged sequence one item at a time.
def merge( *sources ):
keyPos= 0
for s in sources:
s.sort()
while any( [len(s)>0 for s in sources] ):
topEnum= enumerate([ s[0][keyPos] if len(s) > 0 else None for s in sources ])
top= [ t for t in topEnum if t[1] is not None ]
top.sort( key=lambda a:a[1] )
src, key = top[0]
#print src, key
yield sources[ src ].pop(0)
This generator removes duplicates from a sequence.
def unique( sequence ):
keyPos= 0
seqIter= iter(sequence)
curr= seqIter.next()
for next in seqIter:
if next[keyPos] == curr[keyPos]:
# might want to create a sub-list of matches
continue
yield curr
curr= next
yield curr
Here's a script which uses these functions to produce a resulting sequence which is the union of all the sources with duplicates removed.
for u in unique( merge( source1, source2, source3, ... ) ):
print u
The complete set of data in each sequence must exist in memory once because we're sorting in memory. However, the resulting sequence does not actually exist in memory. Indeed, it works by consuming the other sequences.
A: How about using {id: (description, id_type)} dictionary? Or {(id, id_type): description} dictionary if (id,id_type) is the key.
A: Sets in Python are implemented using hash tables. In earlier versions, they were actually implemented using dictionaries, but that has changed AFAIK. The only thing you save by using a set would then be the size of a pointer for each entry (the pointer to the value).
To use only a part of a tuple for the hashcode, you'd have to subclass tuple and override the hashcode method:
class ProteinTuple(tuple):
def __new__(cls, m1, m2, m3):
return tuple.__new__(cls, (m1, m2, m3))
def __hash__(self):
return hash(self[0])
Keep in mind that you pay for the extra function call to __hash__ in this case, because otherwise it would be a C method.
I'd go for Constantin's suggestions and take out the id from the tuple and see how much that helps.
A: It's still murky, but it sounds like you have several lists of [(id, description, type)...]
The id's are unique within a list and consistent between lists.
You want to create a UNION: a single list, where each id occurs once, with possibly multiple descriptions.
For some reason, you think a mapping might be too big. Do you have any evidence of this? Don't over-optimize without actual measurements.
This may be (if I'm guessing correctly) the standard "merge" operation from multiple sources.
source1.sort()
source2.sort()
result= []
while len(source1) > 0 or len(source2) > 0:
if len(source1) == 0:
result.append( source2.pop(0) )
elif len(source2) == 0:
result.append( source1.pop(0) )
elif source1[0][0] < source2[0][0]:
result.append( source1.pop(0) )
elif source2[0][0] < source1[0][0]:
result.append( source2.pop(0) )
else:
# keys are equal
result.append( source1.pop(0) )
# check for source2, to see if the description is different.
This assembles a union of two lists by sorting and merging. No mapping, no hash.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How do you determine a valid SoapAction? I'm calling a webservice using the NuSoap PHP library. The webservice appears to use .NET; every time I call it I get an error about using an invalid SoapAction header. The header being sent is an empty string. How can I find the SoapAction that the server is expecting?
A: You can see the SoapAction that the service operation you're calling expects by looking at the WSDL for the service. For .NET services, you can access the WSDL by opening a web browser to the url of the service and appending ?wsdl on the end.
Inside the WSDL document, you can see the SoapActions defined under the 'Operation' nodes (under 'Bindings'). For example:
<wsdl:operation name="Execute">
<soap:operation soapAction="http://tempuri.org/Execute" style="document" />
Find the operation node for the operation you're trying to invoke, and you'll find the Soap Action it expects there.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: Unconditionally execute a task in ant? I'm trying to define a task that emits (using echo) a message when a target completes execution, regardless of whether that target was successful or not. Specifically, the target executes a task to run some unit tests, and I want to emit a message indicating where the results are available:
<target name="mytarget">
<testng outputDir="${results}" ...>
...
</testng>
<echo>Tests complete. Results available in ${results}</echo>
</target>
Unfortunately, if the tests fail, the task fails and execution aborts. So the message is only output if the tests pass - the opposite of what I want. I know I can put the echo task before the testng task, but this will make it easier for users to miss this message. Is what I'm trying to do possible?
Update: It turns out I'm dumb. I had haltOnFailure="true" in my <testng> task, which explains the behaviour I was seeing. Now the issue is that setting this to false causes the overall ant build to succeed even if tests fail, which is not what I want. The answer below using the trycatch task looks like it might be what I want.
A: According to the Ant docs, there are two properties that control whether the build process is stopped or not if the testng task fails:
haltonfailure - Stop the build process
if a failure has occurred during the
test run. Defaults to false.
haltonskipped - Stop the build process
if there is at least on skipped test.
Default to false.
I can't tell from the snippet if you're setting this property or not. May be worth trying to explicitly set haltonfailure to false if it's currently set to true.
Also, assuming you're using the <exec> functionality in Ant, there are similar properties to control what happens if the executed command fails:
failonerror - Stop the buildprocess if the command exits with a return code
signaling failure. Defaults to false.
failifexecutionfails - Stop the build if we can't start the program.
Defaults to true.
Can't tell based on the partial code snippet in your post, but my guess is that the most likely culprit is failonerror or haltonfailure being set to true.
A: You can use a try-catch block like so:
<target name="myTarget">
<trycatch property="foo" reference="bar">
<try>
<testng outputDir="${results}" ...>
...
</testng>
</try>
<catch>
<echo>Test failed</echo>
</catch>
<finally>
<echo>Tests complete. Results available in ${results}</echo>
</finally>
</trycatch>
</target>
A: The solution to your problem is to use the failureProperty in conjunction with the haltOnFailure property of the testng task like this:
<target name="mytarget">
<testng outputDir="${results}" failureProperty="tests.failed" haltOnFailure="false" ...>
...
</testng>
<echo>Tests complete. Results available in ${results}</echo>
</target>
Then, elsewhere when you want the build to fail you add ant code like this:
<target name="doSomethingIfTestsWereSuccessful" unless="tests.failed">
...
</target>
<target name="doSomethingIfTestsFailed" if="tests.failed">
...
<fail message="Tests Failed" />
</target>
You can then call doSomethingIfTestsFailed where you want your ant build to fail.
A: Although you are showing a fake task called "testng" in your example I presume you are using the junit target.
In this case, it is strange you are seeing these results because the junit target by default does NOT abort execution on a test failure.
There is a way to actually tell ant to stop the build on a junit failure or error by using the halt attributes, eg. haltonfailure:
<target name="junit" depends="junitcompile">
<junit printsummary="withOutAndErr" fork="yes" haltonfailure="yes">
However, both haltonfailure and haltonerror are by default set to off. I suppose you could check your build file to see if either of these flags have been set. They can even be set globally, so one thing you could try is to explicitly set it to "no" on your task to make sure it is overridden in case it is set in the global scope.
http://ant.apache.org/manual/Tasks/junit.html
A: Can you fork the testng task? If yes, then you might want to use that feature so that the testng task will run on a different JVM.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Incorrectly set up APC for PHP? I decided to install APC to speed up the site that I work for. Sadly, I found out that it was already installed and enabled (the developer who first worked on the servers has moved on).
Then I decided to check the usage of it to see if it needs more memory allocated to it or not. This is when I discovered something weird. A simple file with this code:
<?php
print_r(apc_cache_info());
?>
It would not work when served from apache. I get Error 320 (net::ERR_INVALID_RESPONSE): Unknown error. And there is nothing in the error log. From the cli on the server, it works fine. But it only says that my check_apc.php file is cached (the name of the script that I was running).
So it looks like APC has not fully/correctly been set up. Any one know what the problem could be?
Contents of /etc/php.d/apc.ini:
; Enable apc extension module
extension = apc.so
; Options for the apc module
apc.enabled=1
apc.shm_segments=1
apc.optimization=0
apc.shm_size=32
apc.ttl=7200
apc.user_ttl=7200
apc.num_files_hint=1024
apc.mmap_file_mask=/tmp/apc.XXXXXX
apc.enable_cli=1
apc.cache_by_default=1
The server is running CentOS
A: Has anyone upgraded the version of php on the server since apc.so was created? It may be that apc.so was compiled against a different version of php.
If possible, try re-compiling apc.so against the current version of php. Or if you are using a package manager, try removing the apc package entirely and reinstall it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: LINQ: custom column names UPDATE
I'm basically binding the query to a WinForms DataGridView. I want the column headers to be appropriate and have spaces when needed. For example, I would want a column header to be First Name instead of FirstName.
How do you create your own custom column names in LINQ?
For example:
Dim query = From u In db.Users _
Select u.FirstName AS 'First Name'
A: You can also add an event handler to replace those underscores for you!
For those of you who love C#:
datagrid1.ItemDataBound +=
new DataGridItemEventHandler(datagrid1_HeaderItemDataBound);
And your handler should look like this:
private void datagrid1_HeaderItemDataBound(object sender, DataGridItemEventArgs e)
{
if (e.Item.ItemType == ListItemType.Header)
{
foreach(TableCell cell in e.Item.Cells)
cell.Text = cell.Text.Replace('_', ' ');
}
}
A: I would use:
var query = from u in db.Users
select new
{
FirstName = u.FirstName,
LastName = u.LastName,
FullName = u.FirstName + " " + u.LastName
};
(from Scott Nichols)
along with a function that reads a Camel Case string and inserts spaces before each new capital (you could add rules for ID etc.). I don't have the code for that function with me right now, but it's fairly simple to write.
A: As CQ states, you can't have a space in the field name; you can, however, return new columns.
var query = from u in db.Users
select new
{
FirstName = u.FirstName,
LastName = u.LastName,
FullName = u.FirstName + " " + u.LastName
};
Then you can bind to the variable query from above or loop through it whatever....
foreach (var u in query)
{
// Full name will be available now
Debug.Print(u.FullName);
}
If you wanted to rename the columns, you could, but spaces wouldn't be allowed.
var query = from u in db.Users
select new
{
First = u.FirstName,
Last = u.LastName
};
Would rename the FirstName to First and LastName to Last.
A: You can make your results have underscores in the column name and use a HeaderTemplate in a TemplateField to replace underscores with spaces. Or subclass the DataControlField for the GridView and override the HeaderText property:
namespace MyControls
{
    public class SpacedHeaderTextField : System.Web.UI.WebControls.BoundField
    {
        public override string HeaderText
        {
            get
            {
                string value = base.HeaderText;
                // Fall back to the data field name with underscores turned into spaces.
                return (value.Length > 0) ? value : DataField.Replace("_", " ");
            }
            set
            {
                base.HeaderText = value;
            }
        }
    }
}
ASPX:
<%@Register TagPrefix="my" Namespace="MyControls" %>
<asp:GridView DataSourceID="LinqDataSource1" runat='server'>
<Columns>
<my:SpacedHeaderTextField DataField="First_Name" />
</Columns>
</asp:GridView>
A: I don't see why you would have to do that. If you are trying to do it for a grid or something, why not just name the header in the HTML?
A: What you would actually be doing is setting a variable reference to the return, and there is no way to name a variable with a space. Is there an end-result reason you are doing this? Perhaps if we knew the ultimate goal we could help you come up with a solution that fits.
A: Using Linq Extension Method:
SomeDataSource.Select(i => new { NewColumnName = i.OldColumnName, NewColumnTwoName = i.OldColumnTwoName});
A: I solved my own problem but all of your answers were very helpful and pointed me in the right direction.
In my LINQ query, if a column name had more than one word I would separate the words with an underscore:
Dim query = From u In Users _
Select First_Name = u.FirstName
Then, within the Paint method of the DataGridView, I replaced all underscores within the header with a space:
Private Sub DataGridView1_Paint(ByVal sender As Object, ByVal e As System.Windows.Forms.PaintEventArgs) Handles DataGridView1.Paint
For Each c As DataGridViewColumn In DataGridView1.Columns
c.HeaderText = c.HeaderText.Replace("_", " ")
Next
End Sub
A: If you want to change the header text, you can set that in the GridView definition...
<asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="false">
<Columns>
<asp:BoundField DataField="FirstName" HeaderText="First Name" />
</Columns>
</asp:GridView>
In the code behind you can bind to the users and it will set the header to First Name.
protected void Page_Load(object sender, EventArgs e)
{
// initialize db datacontext
var query = from u in db.Users
select u;
GridView1.DataSource = query;
GridView1.DataBind();
}
A: As others have already pointed out, if the header title etc. is known at design time, turn off AutoGenerateColumns and just set the title etc. in the field definition instead of using auto generated columns. From your example it appears that the query is static and that the titles are known at design time, so that is probably your best choice.
However (although your question does not specify this requirement), if the header text (and formatting etc.) is not known at design time but will be determined at runtime, and if you need to auto generate columns (using AutoGenerateColumns="true"), there are workarounds for that.
One way to do that is to create a new control class that inherits the gridview. You can then set header, formatting etc for the auto generated fields by overriding the gridview's "CreateAutoGeneratedColumn". Example:
//gridview with more formatting options
namespace GridViewCF
{
[ToolboxData("<{0}:GridViewCF runat=server></{0}:GridViewCF>")]
public class GridViewCF : GridView
{
//public Dictionary<string, UserReportField> _fieldProperties = null;
public GridViewCF()
{
}
public List<FieldProperties> FieldProperties
{
get
{
return (List<FieldProperties>)ViewState["FieldProperties"];
}
set
{
ViewState["FieldProperties"] = value;
}
}
protected override AutoGeneratedField CreateAutoGeneratedColumn(AutoGeneratedFieldProperties fieldProperties)
{
AutoGeneratedField field = base.CreateAutoGeneratedColumn(fieldProperties);
StateBag sb = (StateBag)field.GetType()
.InvokeMember("ViewState",
BindingFlags.GetProperty |
BindingFlags.NonPublic |
BindingFlags.Instance,
null, field, new object[] {});
if (FieldProperties != null)
{
FieldProperties fps = FieldProperties.Where(fp => fp.Name == fieldProperties.Name).Single();
if (fps.FormatString != null && fps.FormatString != "")
{
//formatting
sb["DataFormatString"] = "{0:" + fps.FormatString + "}";
field.HtmlEncode = false;
}
//header caption
field.HeaderText = fps.HeaderText;
//alignment
field.ItemStyle.HorizontalAlign = fps.HorizontalAlign;
}
return field;
}
}
[Serializable()]
public class FieldProperties
{
public FieldProperties()
{ }
public FieldProperties(string name, string formatString, string headerText, HorizontalAlign horizontalAlign)
{
Name = name;
FormatString = formatString;
HeaderText = headerText;
HorizontalAlign = horizontalAlign;
}
public string Name { get; set; }
public string FormatString { get; set; }
public string HeaderText { get; set; }
public HorizontalAlign HorizontalAlign { get; set; }
}
}
A: I believe this can be achieved by giving the member an explicit name in the select:
system.Name,
sysentity.Name
// change this to
entity = sysentity.Name
A: My VS2008 is busted right now, so I can't check. In C#, you would use "=" - How about
Dim query = From u In db.Users _
Select 'First Name' = u.FirstName
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
}
|
Q: How best to define a custom action in WiX? I have a WiX installer and a single custom action (plus undo and rollback) for it which uses a property from the installer. The custom action has to happen after all the files are on the hard disk. It seems that you need 16 entries in the WXS file for this; eight within the root, like so:
<CustomAction Id="SetForRollbackDo" Execute="immediate" Property="RollbackDo" Value="[MYPROP]"/>
<CustomAction Id="RollbackDo" Execute="rollback" BinaryKey="MyDLL" DllEntry="UndoThing" Return="ignore"/>
<CustomAction Id="SetForDo" Execute="immediate" Property="Do" Value="[MYPROP]"/>
<CustomAction Id="Do" Execute="deferred" BinaryKey="MyDLL" DllEntry="DoThing" Return="check"/>
<CustomAction Id="SetForRollbackUndo" Execute="immediate" Property="RollbackUndo" Value="[MYPROP]"/>
<CustomAction Id="RollbackUndo" Execute="rollback" BinaryKey="MyDLL" DllEntry="DoThing" Return="ignore"/>
<CustomAction Id="SetForUndo" Execute="immediate" Property="Undo" Value="[MYPROP]"/>
<CustomAction Id="Undo" Execute="deferred" BinaryKey="MyDLL" DllEntry="UndoThing" Return="check"/>
And eight within the InstallExecuteSequence, like so:
<Custom Action="SetForRollbackDo" After="InstallFiles">REMOVE<>"ALL"</Custom>
<Custom Action="RollbackDo" After="SetForRollbackDo">REMOVE<>"ALL"</Custom>
<Custom Action="SetForDo" After="RollbackDo">REMOVE<>"ALL"</Custom>
<Custom Action="Do" After="SetForDo">REMOVE<>"ALL"</Custom>
<Custom Action="SetForRollbackUndo" After="InstallInitialize">REMOVE="ALL"</Custom>
<Custom Action="RollbackUndo" After="SetForRollbackUndo">REMOVE="ALL"</Custom>
<Custom Action="SetForUndo" After="RollbackUndo">REMOVE="ALL"</Custom>
<Custom Action="Undo" After="SetForUndo">REMOVE="ALL"</Custom>
Is there a better way?
A: The WiX custom actions are a great model to follow. In this case, you only declare, with CustomAction, the immediate action, the deferred action, and the rollback action. You only schedule, with Custom, the immediate action, where the immediate action is implemented as code in a native DLL.
Then, in the immediate action's code, you call MsiDoAction to schedule the rollback and deferred actions: as they are deferred, they are written into the script at the point you call MsiDoAction rather than executed immediately. You'll need to call MsiSetProperty as well to set the custom action data.
Download the WiX source code and study how the IISExtension works, for example. WiX actions generally parse a custom table and generate the data for the deferred action's property based on that table.
A: If you have complex custom actions that need to support rollback, you might consider writing a Wix extension. Extensions typically provide authoring support (i.e. new XML tags that get mapped to MSI table entries), plus automatic scheduling of custom actions.
It's more work than just writing a custom action, but once your CAs reach a certain level of complexity, the ease-of-authoring that extensions provide can be worth it.
A: I came across the same problem when writing WiX installers. My approach to the problem is mostly like what Mike suggested and I have a blog post Implementing WiX custom actions part 2: using custom tables.
In short, you can define a custom table for your data:
<CustomTable Id="LocalGroupPermissionTable">
<Column Id="GroupName" Category="Text" PrimaryKey="yes" Type="string"/>
<Column Id="ACL" Category="Text" PrimaryKey="no" Type="string"/>
<Row>
<Data Column="GroupName">GroupToCreate</Data>
<Data Column="ACL">SeIncreaseQuotaPrivilege</Data>
</Row>
</CustomTable>
Then write a single immediate custom action to schedule the deferred, rollback, and commit custom actions:
extern "C" UINT __stdcall ScheduleLocalGroupCreation(MSIHANDLE hInstall)
{
try {
ScheduleAction(hInstall,L"SELECT * FROM CreateLocalGroupTable", L"CA.LocalGroupCustomAction.deferred", L"create");
ScheduleAction(hInstall,L"SELECT * FROM CreateLocalGroupTable", L"CA.LocalGroupCustomAction.rollback", L"create");
}
catch( CMsiException & ) {
return ERROR_INSTALL_FAILURE;
}
return ERROR_SUCCESS;
}
The following code shows how to schedule a single custom action. Basically you just open the custom table, read the property you want (you can get the schema of any custom table by calling MsiViewGetColumnInfo()), then format the properties needed into the CustomActionData property (I use the form /propname:value, although you can use anything you want).
void ScheduleAction(MSIHANDLE hInstall,
const wchar_t *szQueryString,
const wchar_t *szCustomActionName,
const wchar_t *szAction)
{
CTableView view(hInstall,szQueryString);
PMSIHANDLE record;
//For each record in the custom action table
while( view.Fetch(record) ) {
//get the "GroupName" property
wchar_t recordBuf[2048] = {0};
DWORD dwBufSize(_countof(recordBuf));
MsiRecordGetString(record, view.GetPropIdx(L"GroupName"), recordBuf, &dwBufSize);
//Format two properties "GroupName" and "Operation" into
//the custom action data string.
CCustomActionDataUtil formatter;
formatter.addProp(L"GroupName", recordBuf);
formatter.addProp(L"Operation", szAction );
//Set the "CustomActionData" property".
MsiSetProperty(hInstall,szCustomActionName,formatter.GetCustomActionData());
//Add the custom action into installation script. Each
//MsiDoAction adds a distinct custom action into the
//script, so if we have multiple entries in the custom
//action table, the deferred custom action will be called
//multiple times.
MsiDoAction(hInstall, szCustomActionName);
}
}
As for implementing the deferred, rollback and commit custom actions, I prefer to use only one function and use MsiGetMode() to distinguish what should be done:
extern "C" UINT __stdcall LocalGroupCustomAction(MSIHANDLE hInstall)
{
try {
//Parse the properties from the "CustomActionData" property
std::map<std::wstring,std::wstring> mapProps;
{
wchar_t szBuf[2048]={0};
DWORD dwBufSize = _countof(szBuf);
MsiGetProperty(hInstall, L"CustomActionData", szBuf, &dwBufSize);
CCustomActionDataUtil::ParseCustomActionData(szBuf,mapProps);
}
//Find the "GroupName" and "Operation" property
std::wstring sGroupName;
bool bCreate = false;
std::map<std::wstring,std::wstring>::const_iterator it;
it = mapProps.find(L"GroupName");
if( mapProps.end() != it ) sGroupName = it->second;
it = mapProps.find(L"Operation");
if( mapProps.end() != it )
bCreate = wcscmp(it->second.c_str(),L"create") == 0 ? true : false ;
//Since we know what operation to perform, and we know whether it is
//running rollback, commit or deferred script by MsiGetMode, the
//implementation is straight forward
if( MsiGetMode(hInstall,MSIRUNMODE_SCHEDULED) ) {
if( bCreate )
CreateLocalGroup(sGroupName.c_str());
else
DeleteLocalGroup(sGroupName.c_str());
}
else if( MsiGetMode(hInstall,MSIRUNMODE_ROLLBACK) ) {
if( bCreate )
DeleteLocalGroup(sGroupName.c_str());
else
CreateLocalGroup(sGroupName.c_str());
}
}
catch( CMsiException & ) {
return ERROR_INSTALL_FAILURE;
}
return ERROR_SUCCESS;
}
By using the above technique, for a typical custom action set you can reduce the custom action table to five entries:
<CustomAction Id="CA.ScheduleLocalGroupCreation"
Return="check"
Execute="immediate"
BinaryKey="CustomActionDLL"
DllEntry="ScheduleLocalGroupCreation"
HideTarget="yes"/>
<CustomAction Id="CA.ScheduleLocalGroupDeletion"
Return="check"
Execute="immediate"
BinaryKey="CustomActionDLL"
DllEntry="ScheduleLocalGroupDeletion"
HideTarget="yes"/>
<CustomAction Id="CA.LocalGroupCustomAction.deferred"
Return="check"
Execute="deferred"
BinaryKey="CustomActionDLL"
DllEntry="LocalGroupCustomAction"
HideTarget="yes"/>
<CustomAction Id="CA.LocalGroupCustomAction.commit"
Return="check"
Execute="commit"
BinaryKey="CustomActionDLL"
DllEntry="LocalGroupCustomAction"
HideTarget="yes"/>
<CustomAction Id="CA.LocalGroupCustomAction.rollback"
Return="check"
Execute="rollback"
BinaryKey="CustomActionDLL"
DllEntry="LocalGroupCustomAction"
HideTarget="yes"/>
And the InstallExecuteSequence table to only two entries:
<InstallExecuteSequence>
<Custom Action="CA.ScheduleLocalGroupCreation"
After="InstallFiles">
Not Installed
</Custom>
<Custom Action="CA.ScheduleLocalGroupDeletion"
After="InstallFiles">
Installed
</Custom>
</InstallExecuteSequence>
In addition, with a little effort most of the code can be written to be reusable (such as reading from the custom table, getting the properties, and formatting them into the CustomActionData property). Since the entries in the custom action table are no longer application specific (the application-specific data is written in the custom table), we can put the custom action table in a file of its own and just include it in each WiX project.
As for the custom action DLL, since the application data is read from the custom table, we can keep application-specific details out of the DLL implementation, so the DLL can become a library and thus easier to reuse.
This is how I currently write my WiX custom actions; if anyone knows how to improve it further, I would very much appreciate it. :)
(You can also find the complete source code in my blog post, Implementing Wix custom actions part 2: using custom tables.).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: How do you call an Asynchronous Web Request in VB.NET? I am currently using the following code to create a web request:
Dim myRequest As WebRequest = WebRequest.Create("http://foo.com/bar")
Dim myResponse As WebResponse = myRequest.GetResponse()
The problem is that this "locks" up the program until the request is completed (and program will hang if the request never completes). How do you change something like this to execute asynchronously so that other tasks can be completed while the web request completes?
A: You'll use BeginGetResponse to add an AsyncCallback, which basically points to some other method in your code that will be called when the WebRequest returns. There is a good sample here.
http://www.sitepoint.com/forums/showpost.php?p=3753215
A: myRequest.BeginGetResponse()
You'll also need to call EndGetResponse() when the request is finished (determined via WaitHandle, callback, or polling).
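For illustration, here is a minimal C# sketch of the callback pattern (the VB.NET version is analogous; the URL is the placeholder from the question):
using System;
using System.Net;
class AsyncRequestExample
{
    static void Main()
    {
        WebRequest myRequest = WebRequest.Create("http://foo.com/bar");
        // Returns immediately; OnResponse runs when the request completes.
        myRequest.BeginGetResponse(new AsyncCallback(OnResponse), myRequest);
        Console.WriteLine("Request started; free to do other work...");
        Console.ReadLine(); // keep the process alive for the demo
    }
    static void OnResponse(IAsyncResult result)
    {
        WebRequest myRequest = (WebRequest)result.AsyncState;
        using (WebResponse myResponse = myRequest.EndGetResponse(result))
        {
            Console.WriteLine("Got response: " + myResponse.ContentLength + " bytes");
        }
    }
}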
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: What is the best framework for Unit Testing in JavaME? What is currently the best tool for JavaME unit testing? I´ve never really used unit testing before (shame on me!), so learning curve is important. I would appreciate some pros and cons with your answer. :)
A: I think it will depend on what kind of tests you are planning to do. Will you be using continuous integration? Is running tests on handsets a must?
If the tests are mostly logic/data-processing tests, then you can do fine with JUnit. But if you need to use some classes from javax.microedition.*, then things will become a bit tricky, but not impossible.
Some examples: a test for text wrapping on screen would need javax.microedition.lcdui.Font. You can't just create a Font instance using the jars shipped with the WTK, because at initialization it will call some native methods that are not available.
For networking tests I have used MicroEmulator's networking implementation, and it has also worked out well.
One more issue with unit tests - it is better to have your mobile project set up as a Java project using Java 4, 5 or 6, because writing tests in 1.3 is, at least for me, a pain in the...
I believe that starting with JUnit will be just fine to get up and running. And if some other requirements come up (running tests on handsets), then you can explore alternatives.
A: I'll be honest, the only unit tester I've used in Java is JUnit and a child project for it named DBUnit for database testing... which I'm assuming you won't need under J2ME.
JUnit's pretty easy to use for unit testing, as long as your IDE supports it (Eclipse has support for JUnit built in). You just mark tests with the @Test annotation (org.junit.Test I think). You can also specify methods that should be run @Before or @After each test, as well as before or after the entire class (with @BeforeClass and @AfterClass).
I've never used JUnit under J2ME, though... I work with J2EE at work.
A: Never found an outstanding one. You can try to read this document :
how to use it
and here the link to : download it
Made by sony ericsson but work for any J2ME development.
And I would recommend you spend some time learning unit testing in plain Java before attacking unit testing on the mobile platform. This may be a to big to swallow one shoot.
Good luck!
A: There's a framework called J2MEUnit that you could give a try, but it doesn't look like it's still being actively developed:
http://j2meunit.sourceforge.net
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: What do I need in order to create 64 bit .NET applications If I want to compile my .NET applications for a 64 bit environment, do I need
*
*64 bit OS version
or
*64 bit Visual Studio version
Or both?
A: You actually need neither of those for building the application. A pure .NET 2.0+ application will -- in the absence of specific compiler flags to the contrary -- run as a 64-bit application under a 64-bit OS and as a 32-bit application under a 32-bit OS.
Edit: Also, there's no such thing as a 64-bit version of Visual Studio.
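To illustrate the "compiler flags" caveat, here is a sketch of the relevant csc switch (the same setting is exposed as "Platform target" on the project's Build page; the file name is a placeholder):
csc /platform:anycpu Program.cs   (default: 64-bit process on a 64-bit OS, 32-bit on a 32-bit OS)
csc /platform:x64 Program.cs      (forces a 64-bit process; it won't load on a 32-bit OS)
csc /platform:x86 Program.cs      (forces a 32-bit process, which runs under WOW64 on a 64-bit OS)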
A: Actually you don't need anything, since .NET applications are compiled to CIL. The virtual machine compiles the CIL to native code at run-time. So if you run your application on 64-bit platform, it will generate native 64-bit code, but if you run it on a 32-bit platform, it'll generate 32-bit code.
However, if you want to debug / profile / test your application in a 64-bit environment, then you need:
*
*64-bit OS
*64-bit .NET VM
Visual Studio can debug applications running in 64-bit mode. For profiling you're likely to need a 64-bit profiler.
A: You also need a 64 bit CPU.
A: I'm running Visual Studio 2005 on a 32-bit machine at work and under the Build section in my Project Properties, I can select x64 as my platform target.
So I don't think you need either a 64-bit OS or a special version of VS.
A: This should have all you need:
http://msdn.microsoft.com/en-us/library/ms241066.aspx
I'd start though, by installing a 64-bit OS (which obviously must be running on a 64-bit CPU!).
A: Java started this and it was very good. .NET has taken it further. Platform independence that is.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Tag images in the image itself? HOW-TO How to tag images in the image itself in a web page?
I know Taggify, but... is there other options?
Orkut also does it to tag people faces... How is it done?
Anyone knows any public framework that is able to do it?
See a sample bellow from Taggify:
A: I know this isn't javascript but C# 3.0 has an API for doing this. The System.Windows.Media.Imaging namespace has a class called BitmapMetadata which can be used to read and write image metadata (which is stored in the image itself). Here is a method for retrieving the metadata for an image given a file path:
public static BitmapMetadata GetMetaData(string path)
{
using (Stream s = new System.IO.FileStream(path, FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite))
{
var decoder = BitmapDecoder.Create(s, BitmapCreateOptions.None, BitmapCacheOption.OnDemand);
var frame = decoder.Frames.FirstOrDefault();
if (frame != null)
{
return frame.Metadata as BitmapMetadata;
}
return null;
}
}
The BitmapMetadata class has a property for tags as well as other common image metadata. To save metadata back to the image, you can use the InPlaceBitmapMetadataWriter Class.
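For example, a quick sketch of reading the tags back out with the method above (the path is a placeholder, and Keywords may be null for untagged images):
// Requires references to PresentationCore and WindowsBase.
BitmapMetadata meta = GetMetaData(@"C:\photos\example.jpg");
if (meta != null && meta.Keywords != null)
{
    foreach (string tag in meta.Keywords)
    {
        Console.WriteLine(tag); // each keyword/tag stored in the image
    }
}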
A: There's a map tag in HTML that could be used in conjunction with Javascript to 'tag' different parts of an image.
You can see the details here.
A: I will re-activate this question and help a bit. Currently the only thing I have found is http://www.sanisoft.com/downloads/imgnotes-0.2/example.html , a jQuery tagging implementation. If anyone knows about another way, please tell us.
;)
A: You can check out Image.InfoCards (IIC) at http://www.imageinfocards.com . With the IIC meta-data utilities you can add meta-data in very user-friendly groups called "cards".
The supplied utilities (including a Java applet) allow you to tag GIF's, JPEG's and PNG's without changing them visually.
IIC is presently proprietary but there are plans to make it an open protocol in Q1 2009.
The difference between IIC and others like IPTC/DIG35/DublinCore/etc is that it is much more consumer-centric and doesn't require a CS degree to understand and use it...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Rich Text Editing in Silverlight 2 We've looked into Silverlight 2 recently and found no way to edit formatted text there. Is this really true, and are there any (maybe commercial) external rich text editors available?
A: Vectorlight has a rich text box.
A: I haven't tried it myself yet but this is one I know of.
http://www.codeplex.com/richtextedit
A: ComponentOne also has a RichTextBox control in the works:
http://www.componentone.com/SuperProducts/RichTextBoxSilverlight/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Display DIV at Cursor Position in Textarea For a project of mine I would love to provide auto completion for a specific textarea. Similar to how intellisense/omnicomplete works. For that however I have to find out the absolute cursor position so that I know where the DIV should appear.
Turns out: that's (nearly, I hope) impossible to achieve. Does anyone have some neat ideas on how to solve that problem?
A: I posted a topic related to this problem on a Russian JavaScript site.
If you don't understand Russian try translated by Google version: http://translate.google.ru/translate?js=y&prev=_t&hl=ru&ie=UTF-8&layout=1&eotf=1&u=http://javascript.ru/forum/events/7771-poluchit-koordinaty-kursora-v-tekstovom-pole-v-pikselyakh.html&sl=ru&tl=en
There are some markup issues in the code examples in the translated version, so you can read the code in the original Russian post.
The idea is simple. There is no easy, universal and cross-browser method to get cursor position in pixels. Frankly speaking there is, but only for Internet Explorer.
In other browsers if you do really need to calculate it you have to ...
*
*create an invisible DIV
*copy all styles and content of the text box into that DIV
*then insert an HTML element at exactly the same position in the text where the caret is in the text box
*get the coordinates of that HTML element
A: I won't explain the problems related to this stuff again because they are well explained in other posts. I'll just point out a possible solution; it has some bugs, but it's a starting point.
Fortunately there is a script on GitHub to calculate the caret position relative to its container, but it requires jQuery. GitHub page here: jquery-caret-position-getter. Thanks to Bevis.Zhao.
Based on it I have implemented the next code: check it in action here in jsFiddle.net
<html><head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<title>- jsFiddle demo by mjerez</title>
<script type="text/javascript" src="http://code.jquery.com/jquery-1.8.2.js"></script>
<link rel="stylesheet" type="text/css" href="http://jsfiddle.net/css/normalize.css">
<link rel="stylesheet" type="text/css" href="http://jsfiddle.net/css/result-light.css">
<script type="text/javascript" src="https://raw.github.com/beviz/jquery-caret-position-getter/master/jquery.caretposition.js"></script>
<style type="text/css">
body{position:relative;font:normal 100% Verdana, Geneva, sans-serif;padding:10px;}
.aux{background:#ccc;opacity: 0.5;width:50%;padding:5px;border:solid 1px #aaa;}
.hidden{display:none}
.show{display:block; position:absolute; top:0px; left:0px;}
</style>
<script type="text/javascript">//<![CDATA[
$(document).keypress(function(e) {
if ($(e.target).is('input, textarea')) {
var key = String.fromCharCode(e.which);
var ctrl = e.ctrlKey;
if (ctrl) {
var display = $("#autocomplete");
var editArea = $('#editArea');
var pos = editArea.getCaretPosition();
var offset = editArea.offset();
// now you can use left, top(they are relative position)
display.css({
left: offset.left + pos.left,
top: offset.top + pos.top,
color : "#449"
})
display.toggleClass("show");
return false;
}
}
});
window.onload = (function() {
$("#editArea").blur(function() {
if ($("#autocomplete").hasClass("show")) $("#autocomplete").toggleClass("show");
})
});
//]]>
</script>
</head>
<body>
<p>Click ctrl+space while you write to display the autocomplete panel.</p>
</br>
<textarea id="editArea" rows="4" cols="50"></textarea>
</br>
</br>
</br>
<div id="autocomplete" class="aux hidden ">
<ol>
<li>Option a</li>
<li>Option b</li>
<li>Option c</li>
<li>Option d</li>
</ol>
</div>
</body>
A: Note that this question is a duplicate of one asked a month earlier, and I've answered it here. I'll only maintain the answer at that link, since this question should have been closed as duplicate years ago.
Copy of the answer
I've looked for a textarea caret coordinates plugin for meteor-autocomplete, so I've evaluated all the 8 plugins on GitHub. The winner is, by far, textarea-caret-position from Component.
Features
*
*pixel precision
*no dependencies whatsoever
*browser compatibility: Chrome, Safari, Firefox (despite two bugs it has), IE9+; may work but not tested in Opera, IE8 or older
*supports any font family and size, as well as text-transforms
*the text area can have arbitrary padding or borders
*not confused by horizontal or vertical scrollbars in the textarea
*supports hard returns, tabs (except on IE) and consecutive spaces in the text
*correct position on lines longer than the columns in the text area
*no "ghost" position in the empty space at the end of a line when wrapping long words
Here's a demo - http://jsfiddle.net/dandv/aFPA7/
How it works
A mirror <div> is created off-screen and styled exactly like the <textarea>. Then, the text of the textarea up to the caret is copied into the div and a <span> is inserted right after it. Then, the text content of the span is set to the remainder of the text in the textarea, in order to faithfully reproduce the wrapping in the faux div.
This is the only method guaranteed to handle all the edge cases pertaining to wrapping long lines. It's also used by GitHub to determine the position of its @ user dropdown.
A: Version 2 of My Hacky Experiment
This new version works with any font, which can be adjusted on demand, and any textarea size.
After noticing that some of you are still trying to get this to work, I decided to try a new approach. My results are FAR better this time around - at least on google chrome on linux. I no longer have a windows PC available to me, so I can only test on chrome / firefox on Ubuntu. My results work 100% consistently on Chrome, and let's say somewhere around 70 - 80% on Firefox, but I don't imagine it would be incredibly difficult to find the inconsistencies.
This new version relies on a Canvas object. In my example, I actually show that very canvas - just so you can see it in action, but it could very easily be done with a hidden canvas object.
This is most certainly a hack, and I apologize ahead of time for my rather thrown together code. At the very least, in google chrome, it works consistently, no matter what font I set it to, or size of textarea. I used Sam Saffron's example to show cursor coordinates (a gray-background div). I also added a "Randomize" link, so you can see it work in different font / texarea sizes and styles, and watch the cursor position update on the fly. I recommend looking at the full page demo so you can better see the companion canvas play along.
I'll summarize how it works...
The underlying idea is that we're trying to redraw the textarea on a canvas, as closely as possible. Since the browser uses the same font engine for both and texarea, we can use canvas's font measurement functionality to figure out where things are. From there, we can use the canvas methods available to us to figure out our coordinates.
First and foremost, we adjust our canvas to match the dimensions of the textarea. This is entirely for visual purposes since the canvas size doesn't really make a difference in our outcome. Since Canvas doesn't actually provide a means of word wrap, I had to conjure (steal / borrow / munge together) a means of breaking up lines to as-best-as-possible match the textarea. This is where you'll likely find you need to do the most cross-browser tweaking.
After word wrap, everything else is basic math. We split the lines into an array to mimic the word wrap, and now we want to loop through those lines and go all the way down until the point where our current selection ends. In order to do that, we're just counting characters and once we surpass selection.end, we know we have gone down far enough. Multiply the line count up until that point with the line-height and you have a y coordinate.
The x coordinate is very similar, except we're using context.measureText. As long as we're printing out the right number of characters, that will give us the width of the line that's being drawn to Canvas, which happens to end after the last character written out, which is the character before the currentl selection.end position.
When trying to debug this for other browsers, the thing to look for is where the lines don't break properly. You'll see in some places that the last word on a line in canvas may have wrapped over on the textarea or vice-versa. This has to do with how the browser handles word wraps. As long as you get the wrapping in the canvas to match the textarea, your cursor should be correct.
I'll paste the source below. You should be able to copy and paste it, but if you do, I ask that you download your own copy of jquery-fieldselection instead of hitting the one on my server.
I've also upped a new demo as well as a fiddle.
Good luck!
<!DOCTYPE html>
<html lang="en-US">
<head>
<meta charset="utf-8" />
<title>Tooltip 2</title>
<script type="text/javascript" src="//ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
<script type="text/javascript" src="http://enobrev.info/cursor/js/jquery-fieldselection.js"></script>
<style type="text/css">
form {
float: left;
margin: 20px;
}
#textariffic {
height: 400px;
width: 300px;
font-size: 12px;
font-family: 'Arial';
line-height: 12px;
}
#tip {
width:5px;
height:30px;
background-color: #777;
position: absolute;
z-index:10000
}
#mock-text {
float: left;
margin: 20px;
border: 1px inset #ccc;
}
/* way the hell off screen */
.scrollbar-measure {
width: 100px;
height: 100px;
overflow: scroll;
position: absolute;
top: -9999px;
}
#randomize {
float: left;
display: block;
}
</style>
<script type="text/javascript">
var oCanvas;
var oTextArea;
var $oTextArea;
var iScrollWidth;
$(function() {
iScrollWidth = scrollMeasure();
oCanvas = document.getElementById('mock-text');
oTextArea = document.getElementById('textariffic');
$oTextArea = $(oTextArea);
$oTextArea
.keyup(update)
.mouseup(update)
.scroll(update);
$('#randomize').bind('click', randomize);
update();
});
function randomize() {
var aFonts = ['Arial', 'Arial Black', 'Comic Sans MS', 'Courier New', 'Impact', 'Times New Roman', 'Verdana', 'Webdings'];
var iFont = Math.floor(Math.random() * aFonts.length);
var iWidth = Math.floor(Math.random() * 500) + 300;
var iHeight = Math.floor(Math.random() * 500) + 300;
var iFontSize = Math.floor(Math.random() * 18) + 10;
var iLineHeight = Math.floor(Math.random() * 18) + 10;
var oCSS = {
'font-family': aFonts[iFont],
width: iWidth + 'px',
height: iHeight + 'px',
'font-size': iFontSize + 'px',
'line-height': iLineHeight + 'px'
};
console.log(oCSS);
$oTextArea.css(oCSS);
update();
return false;
}
function showTip(x, y) {
$('#tip').css({
left: x + 'px',
top: y + 'px'
});
}
// https://stackoverflow.com/a/11124580/14651
// https://stackoverflow.com/a/3960916/14651
function wordWrap(oContext, text, maxWidth) {
var aSplit = text.split(' ');
var aLines = [];
var sLine = "";
// Split words by newlines
var aWords = [];
for (var i in aSplit) {
var aWord = aSplit[i].split('\n');
if (aWord.length > 1) {
for (var j in aWord) {
aWords.push(aWord[j]);
aWords.push("\n");
}
aWords.pop();
} else {
aWords.push(aSplit[i]);
}
}
while (aWords.length > 0) {
var sWord = aWords[0];
if (sWord == "\n") {
aLines.push(sLine);
aWords.shift();
sLine = "";
} else {
// Break up words longer than max width
var iItemWidth = oContext.measureText(sWord).width;
if (iItemWidth > maxWidth) {
var sContinuous = '';
var iWidth = 0;
while (iWidth <= maxWidth) {
var sNextLetter = sWord.substring(0, 1);
var iNextWidth = oContext.measureText(sContinuous + sNextLetter).width;
if (iNextWidth <= maxWidth) {
sContinuous += sNextLetter;
sWord = sWord.substring(1);
}
iWidth = iNextWidth;
}
aWords.unshift(sContinuous);
}
// Extra space after word for mozilla and ie
var sWithSpace = (jQuery.browser.mozilla || jQuery.browser.msie) ? ' ' : '';
var iNewLineWidth = oContext.measureText(sLine + sWord + sWithSpace).width;
if (iNewLineWidth <= maxWidth) { // word fits on current line to add it and carry on
sLine += aWords.shift() + " ";
} else {
aLines.push(sLine);
sLine = "";
}
if (aWords.length === 0) {
aLines.push(sLine);
}
}
}
return aLines;
}
// http://davidwalsh.name/detect-scrollbar-width
function scrollMeasure() {
// Create the measurement node
var scrollDiv = document.createElement("div");
scrollDiv.className = "scrollbar-measure";
document.body.appendChild(scrollDiv);
// Get the scrollbar width
var scrollbarWidth = scrollDiv.offsetWidth - scrollDiv.clientWidth;
// Delete the DIV
document.body.removeChild(scrollDiv);
return scrollbarWidth;
}
function update() {
var oPosition = $oTextArea.position();
var sContent = $oTextArea.val();
var oSelection = $oTextArea.getSelection();
oCanvas.width = $oTextArea.width();
oCanvas.height = $oTextArea.height();
var oContext = oCanvas.getContext("2d");
var sFontSize = $oTextArea.css('font-size');
var sLineHeight = $oTextArea.css('line-height');
var fontSize = parseFloat(sFontSize.replace(/[^0-9.]/g, ''));
var lineHeight = parseFloat(sLineHeight.replace(/[^0-9.]/g, ''));
var sFont = [$oTextArea.css('font-weight'), sFontSize + '/' + sLineHeight, $oTextArea.css('font-family')].join(' ');
var iSubtractScrollWidth = oTextArea.clientHeight < oTextArea.scrollHeight ? iScrollWidth : 0;
oContext.save();
oContext.clearRect(0, 0, oCanvas.width, oCanvas.height);
oContext.font = sFont;
var aLines = wordWrap(oContext, sContent, oCanvas.width - iSubtractScrollWidth);
var x = 0;
var y = 0;
var iGoal = oSelection.end;
aLines.forEach(function(sLine, i) {
if (iGoal > 0) {
oContext.fillText(sLine.substring(0, iGoal), 0, (i + 1) * lineHeight);
x = oContext.measureText(sLine.substring(0, iGoal + 1)).width;
y = i * lineHeight - oTextArea.scrollTop;
var iLineLength = sLine.length;
if (iLineLength == 0) {
iLineLength = 1;
}
iGoal -= iLineLength;
} else {
// after
}
});
oContext.restore();
showTip(oPosition.left + x, oPosition.top + y);
}
</script>
</head>
<body>
<a href="#" id="randomize">Randomize</a>
<form id="tipper">
<textarea id="textariffic">Aliquam urna. Nullam augue dolor, tincidunt condimentum, malesuada quis, ultrices at, arcu. Aliquam nunc pede, convallis auctor, sodales eget, aliquam eget, ligula. Proin nisi lacus, scelerisque nec, aliquam vel, dictum mattis, eros. Curabitur et neque. Fusce sollicitudin. Quisque at risus. Suspendisse potenti. Mauris nisi. Sed sed enim nec dui viverra congue. Phasellus velit sapien, porttitor vitae, blandit volutpat, interdum vel, enim. Cras sagittis bibendum neque. Proin eu est. Fusce arcu. Aliquam elit nisi, malesuada eget, dignissim sed, ultricies vel, purus. Maecenas accumsan diam id nisi.
Phasellus et nunc. Vivamus sem felis, dignissim non, lacinia id, accumsan quis, ligula. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Sed scelerisque nulla sit amet mi. Nulla consequat, elit vitae tempus vulputate, sem libero rhoncus leo, vulputate viverra nulla purus nec turpis. Nam turpis sem, tincidunt non, congue lobortis, fermentum a, ipsum. Nulla facilisi. Aenean facilisis. Maecenas a quam eu nibh lacinia ultricies. Morbi malesuada orci quis tellus.
Sed eu leo. Donec in turpis. Donec non neque nec ante tincidunt posuere. Pellentesque blandit. Ut vehicula vestibulum risus. Maecenas commodo placerat est. Integer massa nunc, luctus at, accumsan non, pulvinar sed, odio. Pellentesque eget libero iaculis dui iaculis vehicula. Curabitur quis nulla vel felis ullamcorper varius. Sed suscipit pulvinar lectus.</textarea>
</form>
<div id="tip"></div>
<canvas id="mock-text"></canvas>
</body>
</html>
Bug
There's one bug I do recall. If you put the cursor before the first letter on a line, it shows the "position" as the last letter on the previous line. This has to do with how selection.end works. I don't think it should be too difficult to look for that case and fix it accordingly.
Version 1
Leaving this here so you can see the progress without having to dig through the edit history.
It's not perfect and it's most definitely a hack, but I got it to work pretty well on WinXP IE, FF, Safari, Chrome and Opera.
As far as I can tell there's no way to directly find out the x/y of a cursor on any browser. The IE method, mentioned by Adam Bellaire is interesting, but unfortunately not cross-browser. I figured the next best thing would be to use the characters as a grid.
Unfortunately there's no font metric information built into any of the browsers, which means a monospace font is the only font type that's going to have a consistent measurement. Also, there's no reliable means of figuring out a font-width from the font-height. At first I'd tried using a percentage of the height, which worked great. Then I changed the font-size and everything went to hell.
I tried one method to figure out character width, which was to create a temporary textarea and keep adding characters until the scrollHeight (or scrollWidth) changed. It seems plausible, but about halfway down that road, I realized I could just use the cols attribute on the textarea and figured there are enough hacks in this ordeal to add another one. This means you can't set the width of the textarea via css. You HAVE to use the cols for this to work.
The next problem I ran into is that, even when you set the font via css, the browsers report the font differently. When you don't set a font, mozilla uses monospace by default, IE uses Courier New, Opera "Courier New" (with quotes), Safari, 'Lucida Grand' (with single quotes). When you do set the font to monospace, mozilla and ie take what you give them, Safari comes out as -webkit-monospace and Opera stays with "Courier New".
So now we initialize some vars. Make sure to set your line height in the css as well. Firefox reports the correct line height, but IE was reporting "normal" and I didn't bother with the other browsers. I just set the line height in my css and that resolved the difference. I haven't tested with using ems instead of pixels. Char height is just font size. Should probably pre-set that in your css as well.
Also, one more pre-setting before we start placing characters - which really had me scratching my head. For IE and Mozilla, a textarea line holds fewer than cols characters; in the other browsers it holds at most cols characters. So Chrome can fit 50 chars across, but Mozilla and IE would break the last word off the line.
Now we're going to create an array of first-character positions for every line. We loop through every char in the textarea. If it's a newline, we add a new position to our line array. If it's a space, we try to figure out if the current "word" will fit on the line we're on or if it's going to get pushed to the next line. Punctuation counts as a part of the "word". I haven't tested with tabs, but there's a line there for adding 4 chars for a tab char.
Once we have an array of line positions, we loop through and try to find which line the cursor is on. We're using the "End" of the selection as our cursor.
x = (cursor position - first character position of cursor line) * character width
y = ((cursor line + 1) * line height) - scroll position
I'm using jquery 1.2.6, jquery-fieldselection, and jquery-dimensions
The Demo: http://enobrev.info/cursor/
And the code:
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Tooltip</title>
<script type="text/javascript" src="js/jquery-1.2.6.js"></script>
<script type="text/javascript" src="js/jquery-fieldselection.js"></script>
<script type="text/javascript" src="js/jquery.dimensions.js"></script>
<style type="text/css">
form {
margin: 20px auto;
width: 500px;
}
#textariffic {
height: 400px;
font-size: 12px;
font-family: monospace;
line-height: 15px;
}
#tip {
position: absolute;
z-index: 2;
padding: 20px;
border: 1px solid #000;
background-color: #FFF;
}
</style>
<script type="text/javascript">
$(function() {
$('textarea')
.keyup(update)
.mouseup(update)
.scroll(update);
});
function showTip(x, y) {
y = y + $('#tip').height();
$('#tip').css({
left: x + 'px',
top: y + 'px'
});
}
function update() {
var oPosition = $(this).position();
var sContent = $(this).val();
var bGTE = jQuery.browser.mozilla || jQuery.browser.msie;
if ($(this).css('font-family') == 'monospace' // mozilla
|| $(this).css('font-family') == '-webkit-monospace' // Safari
|| $(this).css('font-family') == '"Courier New"') { // Opera
var lineHeight = $(this).css('line-height').replace(/[^0-9]/g, '');
lineHeight = parseFloat(lineHeight);
var charsPerLine = this.cols;
var charWidth = parseFloat($(this).innerWidth() / charsPerLine);
var iChar = 0;
var iLines = 1;
var sWord = '';
var oSelection = $(this).getSelection();
var aLetters = sContent.split("");
var aLines = [];
for (var w in aLetters) {
if (aLetters[w] == "\n") {
iChar = 0;
aLines.push(w);
sWord = '';
} else if (aLetters[w] == " ") {
var wordLength = parseInt(sWord.length);
if ((bGTE && iChar + wordLength >= charsPerLine)
|| (!bGTE && iChar + wordLength > charsPerLine)) {
iChar = wordLength + 1;
aLines.push(w - wordLength);
} else {
iChar += wordLength + 1; // 1 more char for the space
}
sWord = '';
} else if (aLetters[w] == "\t") {
iChar += 4;
} else {
sWord += aLetters[w];
}
}
var iLine = 1;
for(var i in aLines) {
if (oSelection.end < aLines[i]) {
iLine = parseInt(i) - 1;
break;
}
}
if (iLine > -1) {
var x = parseInt(oSelection.end - aLines[iLine]) * charWidth;
} else {
var x = parseInt(oSelection.end) * charWidth;
}
var y = (iLine + 1) * lineHeight - this.scrollTop; // below line
showTip(oPosition.left + x, oPosition.top + y);
}
}
</script>
</head>
<body>
<form id="tipper">
<textarea id="textariffic" cols="50">
Aliquam urna. Nullam augue dolor, tincidunt condimentum, malesuada quis, ultrices at, arcu. Aliquam nunc pede, convallis auctor, sodales eget, aliquam eget, ligula. Proin nisi lacus, scelerisque nec, aliquam vel, dictum mattis, eros. Curabitur et neque. Fusce sollicitudin. Quisque at risus. Suspendisse potenti. Mauris nisi. Sed sed enim nec dui viverra congue. Phasellus velit sapien, porttitor vitae, blandit volutpat, interdum vel, enim. Cras sagittis bibendum neque. Proin eu est. Fusce arcu. Aliquam elit nisi, malesuada eget, dignissim sed, ultricies vel, purus. Maecenas accumsan diam id nisi.
Phasellus et nunc. Vivamus sem felis, dignissim non, lacinia id, accumsan quis, ligula. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Sed scelerisque nulla sit amet mi. Nulla consequat, elit vitae tempus vulputate, sem libero rhoncus leo, vulputate viverra nulla purus nec turpis. Nam turpis sem, tincidunt non, congue lobortis, fermentum a, ipsum. Nulla facilisi. Aenean facilisis. Maecenas a quam eu nibh lacinia ultricies. Morbi malesuada orci quis tellus.
Sed eu leo. Donec in turpis. Donec non neque nec ante tincidunt posuere. Pellentesque blandit. Ut vehicula vestibulum risus. Maecenas commodo placerat est. Integer massa nunc, luctus at, accumsan non, pulvinar sed, odio. Pellentesque eget libero iaculis dui iaculis vehicula. Curabitur quis nulla vel felis ullamcorper varius. Sed suscipit pulvinar lectus.
</textarea>
</form>
<p id="tip">Here I Am!!</p>
</body>
</html>
A: This blog appears to be close to answering the question. I haven't tried it myself, but the author says it's tested with FF3, Chrome, IE, Opera, Safari. Code is on GitHub
A: Fixed it here: http://jsfiddle.net/eMwKd/4/
The only downside is that the already provided getCaret() function resolves to the wrong position on key down. Therefore the red cursor seems to lag behind the real cursor unless you release the key.
I will have another look into it.
Update: hm, word-wrapping is not accurate if the lines are too long..
A: This blog post seems to address your question, but unfortunately the author admits he has only tested it in IE 6.
The DOM in IE does not provide information regarding relative position in terms of characters; however, it does provide bounding and offset values for browser-rendered controls. Thus, I used these values to determine the relative bounds of a character. Then, using the JavaScript TextRange, I created a mechanism for working with such measures to calculate the Line and Column position for fixed-width fonts within a given TextArea.
First, the relative bounds for the TextArea must be calculated based upon the size of the fixed-width font used. To do this, the original value of the TextArea must be stored in a local JavaScript variable and clear the value. Then, a TextRange is created to determine the Top and Left bounds of the TextArea.
A: I don't know a solution for textarea but it sure works for a div with contenteditable.
You can use the Range API. Like so: (yes, you really only need just these 3 lines of code)
// get active selection
var selection = window.getSelection();
// get the range (you might want to check selection.rangeCount
// to see if it's populated)
var range = selection.getRangeAt(0);
// will give you top, left, width, height
console.log(range.getBoundingClientRect());
I'm not sure about browser compatibility but I've found it works in the latest Chrome, Firefox and even IE7 (I think I tested 7, otherwise it was 9).
You can even do 'crazy' things like this: if you're typing "#hash" and the cursor is at the last h, you can look in the current range for the # character, move the range back by n characters and get the bounding-rect of that range, this will make the popup-div seem to 'stick' to the word.
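For illustration, a rough sketch of that backing-up trick (assuming a contenteditable element with a collapsed selection; the variable names are mine and it skips edge cases like the caret sitting at offset 0 of a text node):
var sel = window.getSelection();
if (sel.rangeCount > 0) {
    var range = sel.getRangeAt(0).cloneRange();
    // widen the collapsed range backwards by one character so
    // getBoundingClientRect() has something to measure
    if (range.startOffset > 0) {
        range.setStart(range.startContainer, range.startOffset - 1);
    }
    var rect = range.getBoundingClientRect();
    // rect.left / rect.top now approximate the caret position
}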
One minor drawback is that contenteditable can be a bit buggy sometimes. The cursor likes to go to impossible places, and you now have to deal with HTML input. But I'm sure browser vendors will address these problems as more sites start using them.
Another tip I can give is: look at the rangy library. It attempts to be a fully featured cross-compatible range library. You don't need it, but if you're dealing with old browsers it might be worth you while.
A: Maybe this will please you. It will tell the position of the selection and the position of the cursor: check the timer checkbox to get the position automatically, or uncheck it and get the position by clicking the Get Selection button.
<form>
<p>
<input type="button" onclick="evalOnce();" value="Get Selection">
timer:
<input id="eval_switch" type="checkbox" onclick="evalSwitchClicked(this)">
<input id="eval_time" type="text" value="200" size="6">
ms
</p>
<textarea id="code" cols="50" rows="20">01234567890123456789012345678901234567890123456789 01234567890123456789012345678901234567890123456789 01234567890123456789012345678901234567890123456789 01234567890123456789012345678901234567890123456789 01234567890123456789012345678901234567890123456789 Sample text area. Please select above text. </textarea>
<textarea id="out" cols="50" rows="20"></textarea>
</form>
<div id="test"></div>
<script>
function Selection(textareaElement) {
this.element = textareaElement;
}
Selection.prototype.create = function() {
if (document.selection != null && this.element.selectionStart == null) {
return this._ieGetSelection();
} else {
return this._mozillaGetSelection();
}
}
Selection.prototype._mozillaGetSelection = function() {
return {
start: this.element.selectionStart,
end: this.element.selectionEnd
};
}
Selection.prototype._ieGetSelection = function() {
this.element.focus();
var range = document.selection.createRange();
var bookmark = range.getBookmark();
var contents = this.element.value;
var originalContents = contents;
var marker = this._createSelectionMarker();
while(contents.indexOf(marker) != -1) {
marker = this._createSelectionMarker();
}
var parent = range.parentElement();
if (parent == null || parent.type != "textarea") {
return { start: 0, end: 0 };
}
range.text = marker + range.text + marker;
contents = this.element.value;
var result = {};
result.start = contents.indexOf(marker);
contents = contents.replace(marker, "");
result.end = contents.indexOf(marker);
this.element.value = originalContents;
range.moveToBookmark(bookmark);
range.select();
return result;
}
Selection.prototype._createSelectionMarker = function() {
return "##SELECTION_MARKER_" + Math.random() + "##";
}
var timer;
var buffer = "";
function evalSwitchClicked(e) {
if (e.checked) {
evalStart();
} else {
evalStop();
}
}
function evalStart() {
var o = document.getElementById("eval_time");
timer = setTimeout(timerHandler, o.value);
}
function evalStop() {
clearTimeout(timer);
}
function timerHandler() {
clearTimeout(timer);
var sw = document.getElementById("eval_switch");
if (sw.checked) {
evalOnce();
evalStart();
}
}
function evalOnce() {
try {
var selection = new Selection(document.getElementById("code"));
var s = selection.create();
var result = s.start + ":" + s.end;
buffer += result;
flush();
} catch (ex) {
buffer = ex;
flush();
}
}
function getCode() {
// var s.create()
// return document.getElementById("code").value;
}
function clear() {
var out = document.getElementById("out");
out.value = "";
}
function print(str) {
buffer += str + "\n";
}
function flush() {
var out = document.getElementById("out");
out.value = buffer;
buffer = "";
}
</script>
See the demo here: jsbin.com
A: There is a description of one hack for the caret offset:
Textarea X/Y caret coordinates - jQuery plugin
Also, it may be better to use a div element with the contenteditable attribute if you can use HTML5 features.
A: How about appending a span element to the cloning div and setting the fake cursor based on this span's offsets? I have updated your fiddle here. Also here's the JS bit only
// http://stackoverflow.com/questions/263743/how-to-get-caret-position-in-textarea
var map = [];
var pan = '<span>|</span>'
//found @ http://davidwalsh.name/detect-scrollbar-width
function getScrollbarWidth() {
var scrollDiv = document.createElement("div");
scrollDiv.className = "scrollbar-measure";
document.body.appendChild(scrollDiv);
// Get the scrollbar width
var scrollbarWidth = scrollDiv.offsetWidth - scrollDiv.clientWidth;
// Delete the DIV
document.body.removeChild(scrollDiv);
return scrollbarWidth;
}
function getCaret(el) {
if (el.selectionStart) {
return el.selectionStart;
} else if (document.selection) {
el.focus();
var r = document.selection.createRange();
if (r == null) {
return 0;
}
var re = el.createTextRange(),
rc = re.duplicate();
re.moveToBookmark(r.getBookmark());
rc.setEndPoint('EndToStart', re);
return rc.text.length;
}
return 0;
}
$(function() {
var span = $('#pos span');
var textarea = $('textarea');
var note = $('#note');
css = getComputedStyle(document.getElementById('textarea'));
try {
for (i in css) note.css(css[i]) && (css[i] != 'width' && css[i] != 'height') && note.css(css[i], css.getPropertyValue(css[i]));
} catch (e) {}
note.css('max-width', '300px');
document.getElementById('note').style.visibility = 'hidden';
var height = note.height();
var fakeCursor, hidePrompt;
textarea.on('keyup click', function(e) {
if (document.getElementById('textarea').scrollHeight > 100) {
note.css('max-width', 300 - getScrollbarWidth());
}
var pos = getCaret(textarea[0]);
note.text(textarea.val().substring(0, pos));
$(pan).appendTo(note);
span.text(pos);
if (hidePrompt) {
hidePrompt.remove();
}
if (fakeCursor) {
fakeCursor.remove();
}
fakeCursor = $("<div style='width:5px;height:30px;background-color: #777;position: absolute;z-index:10000'> </div>");
fakeCursor.css('opacity', 0.5);
fakeCursor.css('left', $('#note span').offset().left + 'px');
fakeCursor.css('top', textarea.offset().top + note.height() - (30 + textarea.scrollTop()) + 'px');
hidePrompt = fakeCursor.clone();
hidePrompt.css({
'width': '2px',
'background-color': 'white',
'z-index': '1000',
'opacity': '1'
});
hidePrompt.appendTo(textarea.parent());
fakeCursor.appendTo(textarea.parent());
return true;
});
});
UPDATE: I can see that there's an error if the first line contains no hard line-breaks but if it does it seems to work well.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "61"
}
|
Q: Is there a better way to initialize a Hashtable in .NET without using the Add method? I am currently initializing a Hashtable in the following way:
Hashtable filter = new Hashtable();
filter.Add("building", "A-51");
filter.Add("apartment", "210");
I am looking for a nicer way to do this.
I tried something like
Hashtable filter2 = new Hashtable() {
{"building", "A-51"},
{"apartment", "210"}
};
However the above code does not compile.
A: In C# 3 it should compile fine like this:
Hashtable table = new Hashtable {{1, 1}, {2, 2}};
A: The exact code you posted:
Hashtable filter2 = new Hashtable()
{
{"building", "A-51"},
{"apartment", "210"}
};
Compiles perfectly in C# 3. Given you reported compilation problems, I'm guessing you are using C# 2? In this case you can at least do this:
Hashtable filter2 = new Hashtable();
filter2["building"] = "A-51";
filter2["apartment"] = "210";
A: (Not a C# expert)
This is syntactic sugar, and if it's syntactic sugar I'd want to have the mapping (in a Hashtable) nice and obvious:
Hashtable filter2 = new Hashtable() { "building" => "A-51", "apartment" => "210"};
However I don't see a real need for this, there isn't much wrong with just having to call add after initialisation.
(I've known people to hack around with the Java compiler to achieve similar things in the past for Java (which caused major issues moving to Java 5 a few years later), I expect this isn't an option for C# though!)
A: I had no idea in C# 3.0 Hashtable table = new Hashtable {{1, 1}, {2, 2}}; would compile.
Anyway, poor man's implementation:
Meh, you could extend the Hashtable class:
class MyHashTable : System.Collections.Hashtable
{
public MyHashTable(string [,] values)
{
for (int i = 0; i < (values.Length)/2; i++)
{
this.Add(values[i,0], values[i,1]);
}
}
}
And then from a Console App:
class Program
{
static void Main(string[] args)
{
string[,] initialize = { { "building", "A-51" }, { "apartment", "210" }, {"wow", "nerf Druids"}};
MyHashTable myhashTable = new MyHashTable(initialize);
Console.WriteLine(myhashTable["building"].ToString());
Console.WriteLine(myhashTable["apartment"].ToString());
Console.WriteLine(myhashTable["wow"].ToString());
Console.ReadKey();
}
}
will result in:
A-51
210
nerf Druids
This was done quickly, so it may bomb in certain situations, but then again..
A: I think the real question being asked may be about imaginable constructors such as:
HashTable ht = new HashTable(MyArray) ; // fill from array
HashTable ht = new HashTable(MyDataTable) ; // fill from datatable
AFAIK, the answer is "no", but you could write it yourself. I assume the reason that such methods are not in the library is that the Array or DataTable has to be properly formed. It's not such a big loss since any implementation of these methods would probably be using the Add method in a loop anyway.
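For illustration, a rough sketch of what such a helper might look like (the method name, column parameters, and uniqueness assumption are mine, not part of any library):
using System.Collections;
using System.Data;

static Hashtable FromDataTable(DataTable table, string keyColumn, string valueColumn)
{
    Hashtable ht = new Hashtable();
    foreach (DataRow row in table.Rows)
    {
        // assumes the key column holds unique, non-null values
        ht.Add(row[keyColumn], row[valueColumn]);
    }
    return ht;
}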
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
}
|
Q: How do I find the center of a number of geographic points? If I have a series of points as longitude and latitude, how would I calculate the center of all of those points?
A: Several people have suggested taking the mean of the latitudes and longitudes. This is sort of the right idea, but means are more complicated on the sphere.
The latitude/longitude representation is essentially artificial and has discontinuities (at the poles, and opposite the prime meridian if you aren't careful), so taking means in it doesn't seem likely (to me) to have a sensible geometric interpretation. I think you need to do something like averaging vectors in earth-centered coordinates, and then normalizing the result to put it back on the sphere.
I hope someone with more experience in these matters can comment more concretely.
A: Don't just take averages.
You can convert to 3d coordinates, then take the average (of x,y, and z coords), then project it back onto the sphere and turn that back into lat/long.
The wikipedia page on spherical coordinates has conversion algorithms.
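A minimal sketch of that approach (Python; the function name is mine, and it assumes latitude/longitude in degrees with equal weighting of the points):
import math

def geographic_midpoint(points):
    # points: list of (lat, lon) pairs in degrees
    x = y = z = 0.0
    for lat, lon in points:
        la, lo = math.radians(lat), math.radians(lon)
        x += math.cos(la) * math.cos(lo)
        y += math.cos(la) * math.sin(lo)
        z += math.sin(la)
    n = len(points)
    x, y, z = x / n, y / n, z / n
    # projecting the mean vector back onto the sphere is implicit in atan2
    lon = math.atan2(y, x)
    lat = math.atan2(z, math.sqrt(x * x + y * y))
    return math.degrees(lat), math.degrees(lon)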
A: First off, you need to define which centre you're interested in. Take these two points:
A. .B
The centre is easy, it's halfway between them. Now add a third point:
A. C. .B
Is the centre still halfway between A and B or is it weighted towards A because of C? So is the centre the point nearest to all points or just the points on the enclosing polygon?
Also, since it's long/lat you're dealing with, the points are on the surface of a sphere, so the distance between long 0 and long 90 degrees is much greater at lat 0 than at lat 45 degrees.
A: Geomidpoint covers 3 different methods for calculating this.
A: You're probably looking for the centroid of the simple polygon defined by the points. There is information on how to calculate it for various geometries in that article.
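If the points cover only a small area, a planar approximation is often good enough; here is a rough shoelace-formula sketch (Python, illustrative only: it treats long/lat as flat x/y, which breaks down over large regions, near the poles, or across the date line):
def polygon_centroid(points):
    # points: list of (x, y) vertices of a simple, non-self-intersecting polygon
    area = cx = cy = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
        cross = x0 * y1 - x1 * y0
        area += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    area *= 0.5
    return cx / (6.0 * area), cy / (6.0 * area)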
A: Wolfram Alpha will do this for you if you ask the question in the following form:
centroid of polygon with vertices: (X, Y), (X, Y), (X, Y), (X, Y), (X, Y), etc.
Just remember to convert each "(X, Y)" into decimal form first.
Wolfram Alpha will return the answer in decimal form, which you can then copy and paste into Google Earth.
A: See Moe's answer, although if your points are distributed across the globe, you'll have to be satisfied that your center tends towards the Prime Meridian and not the International Date Line.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
}
|
Q: String Format Date - C# or VB.NET Date coming out of a database, need to format as "mm/dd/yy"
For Each dr as DataRow in ds.Tables(0).Rows
Response.Write(dr("CreateDate"))
Next
A: Convert.ToDateTime(dr("CreateDate")).ToShortDateString()
See the MSDN docs for other functions available from the DateTime datatype, including custom formats available through the 'ToString' function.
A: string.Format( "{0:MM/dd/yy}", dr("CreateDate") )
Edit: If dr("CreateDate") is DBNull, this returns "".
A: Response.Write(DateTime.Parse(dr("CreateDate").ToString()).ToString("MM/dd/yyyy"))
A: Easy:
((DateTime)dr["CreateDate"]).ToString("MM/dd/yyyy")
// I would also check that it isn't dbnull before doing it though
if (! DBNull.Value.Equals(dr["CreateDate"])) // blah blah
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: PHP: using preg_replace with htmlentities I'm writing an RSS to JSON parser and as a part of that, I need to use htmlentities() on any tag found inside the description tag. Currently, I'm trying to use preg_replace(), but I'm struggling a little with it. My current (non-working) code looks like:
$pattern[0] = "/\<description\>(.*?)\<\/description\>/is";
$replace[0] = '<description>'.htmlentities("$1").'</description>';
$rawFeed = preg_replace($pattern, $replace, $rawFeed);
If you have a more elegant solution to this as well, please share. Thanks.
A: Simple. Use preg_replace_callback:
function _handle_match($match)
{
return '<description>' . htmlentities($match[1]) . '</description>';
}
$pattern = "/\<description\>(.*?)\<\/description\>/is";
$rawFeed = preg_replace_callback($pattern, '_handle_match', $rawFeed);
It accepts any callback type, including methods on classes.
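On PHP 5.3+ you could also pass an anonymous function instead of a named callback; a sketch:
$rawFeed = preg_replace_callback($pattern, function ($match) {
    return '<description>' . htmlentities($match[1]) . '</description>';
}, $rawFeed);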
A: The more elegant solution would be to employ SimpleXML. Or a third party library such as XML_Feed_Parser or Zend_Feed to parse the feed.
Here is a SimpleXML example:
<?php
$rss = file_get_contents('http://rss.slashdot.org/Slashdot/slashdot');
$xml = simplexml_load_string($rss);
foreach ($xml->item as $item) {
echo "{$item->description}\n\n";
}
?>
Keep in mind that RSS and RDF and Atom look different, which is why it can make sense to employ one of the above libraries I mentioned.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: When developing, do you turn off UAC in Vista? I didn't upgrade to Vista until May or so and one of the things I've always heard developers I know in real life say is "first thing you should do is turn off that UAC crap"
Well, I've left it on this whole time for a few reasons. First, just as a failsafe in case I do something idiotic like have a momentary lapse of reason and run an attachment from an email, or in case I view a site which hits some unpatched exploit. Second, as a big of an experiment to see how good or bad it really is.
Finally, I figure that it enforces some better practices. I used to develop every website in Windows directly in inetpub\wwwroot (Visual Studio .NET 2003 more or less required this) but now I develop them elsewhere because the UAC clickfest is a nightmare. I figure this is Microsoft's way of saying "you should really be doing it this way".
By way of another analogy - if you wrote a web app which runs on XP and 2000 just fine but requires 50 different security features of Server 2003 to be turned off, the real solution might be instead to just fix the application such that it doesn't require the security features to be turned off.
But now I'm having to work with an app which is really really NOT designed to be developed outside of inetpub/wwwroot and so UAC is really a nuisance. It's beyond the scope of the project to rectify this. I want to stick to my guns and leave UAC on but I'm also worried about being so autopilot about clicking "Yes" or "Allow" three times every time I need to modify a file.
Am I just being hard headed? Do most developers on Vista leave the UAC on or off? And for the instance described above, is there a better/easier way?
A: I code in a standard user account, with UAC turned on.
A: No, I do not turn UAC off.
I program C# WinForms and web apps with IIS; the database is PostgreSQL. There's no need to bother with UAC. Some programs only require one authorization, which is not a big deal.
A: I keep UAC on. I find it useful to develop in an environment similar to my end user. That way if I write any code which is trying to read / write from restricted areas I will know about it quicker.
A: UAC is incredibly annoying at first when you get a new system. The problem is that when you first start out with a new install you have all kinds of programs to set up and settings to tweak. It seems like you see the UAC prompt every 5 minutes.
After a while, two things happen:
*
*You're not setting up as much new stuff.
*You've become a little more used to the prompt.
At this point UAC isn't so bad anymore. I have UAC on and I've only seen one or two prompts in the last couple weeks. That's right about perfect: if I see a prompt I wasn't expecting I know to make sure I really want to proceed.
I will argue that the 2nd effect kind of defeats the purpose. What they should do is have UAC disabled by default, but for the first month only. After the first month prompt you to turn UAC on, where the default option for someone who doesn't really read things is to turn it on. Then people aren't annoyed during their setup period, and it's easier to make an informed choice about what you want to do with UAC.
A: I think it is necessary to leave UAC on on a test machine, so you can see what a real user would see using your app. However, I turn it off on my development machine since I find it distracting, and I trust myself enough to not need it.
(Hopefully your test machine != your dev machine right?)
All this being said, I support UAC, and I am not recommending anyone else turn it off, especially 'common users'.
A: I leave it on
A: I leave it on, but have it set to automatically elevate privileges when necessary. It's a fine distinction, but a distinction nonetheless.
A: Services like Microsoft SQL Server runs with administrator privileges. Visual Studio on the other hand does not. Nor do most developer-tools.
I make heavy use of virtual machines to 1) make sure my development environment is safe at all times, and 2) to test out software with the potential of leaving my machine FUBAR. And 3) to limit down-time, restoring my development environment, "in case I do something idiotic like have a momentary lapse of reason and run an attachment from an email" :)
A: I have been using Windows 2008 on my workstation, following the advice on http://www.win2008workstation.com/wordpress/, and it has worked great for me. I don't remember turning off UAC, but I certainly haven't suffered from it, so I guess it's turned off.
As others have said, you do need to have test [virtual] machines that are configured as close as possible to the ones your users will have so you won't have any surprises deploying your app.
A: I think whether you do this or not should depend on the target audience for your application, although I can completely understand people disabling it.
If all your users run Vista with UAC disabled then I think you can get away with turning it off, but this probably isn't realistic--or advisable. At the other end of the spectrum, our applications are used by a vast number of people with every conceivable version and configuration of Windows from Win2k onwards, and obviously including Vista and Server 2008. Since we're an ISV with no control over our users' environments, or over policies governing their privileges and administration, I always leave UAC enabled--even though it annoys me beyond all reason at times--because then I know about any possible problems it might cause for people using our applications sooner rather than later.
Disclaimer: most of my actual coding time is spent on Windows XP, although I have a Vista 64-bit test machine under my desk which I use on a daily basis for testing. Generally I'll use this box around 20 - 30% of the time.
A: Developing or not developing - turning it off was the first thing I did after installing Vista. It just seemed an annoying nuisance at best.
A: Instead of running antivirus to suck away my CPU cycles (I need as many as I can with RDPs and VMs running all the time), I just leave UAC on as a safeguard to double check and make sure only certain things run. It does more than that, though: it also restricts programs' access to sensitive areas, so a program basically can't trash your system without you allowing it through UAC. I have not had a problem yet and my system runs only what I need it to run, quickly and smoothly.
A: It's too annoying for me, it gets turned off as soon as I install Vista.
A: I turn it off as soon as I install the OS. Security by endless modal dialogs is no security at all. Normal users just get used to clicking even more 'OK' buttons after a couple of weeks or so.
EDIT: Wow, down-voted huh? Must be some Microsoft employees around here...Of course it should remain on on a test machine, probably should have mentioned that.
A: I turn it off on computers that I am using.
When testing, I test in the target environment, which means I may have UAC on or off.
I see no benefit to developing with it on.
A: I find it extremely annoying and turn it off at all times, I trust myself enough to not have to have fail safes in place. If I screw up and run some dodgy application that's my bad and I'll live with the consequences. Meanwhile I'm not spending 5 minutes of my day clicking though some damn annoying popups.
A: I have it off, but that's because I trust myself entirely too much. Its funny though, it seems to make the average user (I live in Jourdanton TX, we have a lot of "average users" here in the middle of nowhere) afraid of the control panel, because it causes all these weird prompts to come up and wants their password every 5 minutes if they start to poke around.
That said, I think it depends on your level of expertise with the system. On your dev machine, yes, definitely turn the darn thing off. I haven't gone a day this week without needing to install or update some piece of software, and I don't like having to elevate myself to admin status to have to do that.
What I would really like is the ability to have it elevate for a period of time, or say automatically turn itself back on when I log off, so that I could do an entire session's worth of installing stuff without being bothered, and then be secure again when I was done and (inevitably) had to restart the machine as seems to be common practice with windows installers now.
And all that ranting aside, I think for your test machine, it should definitely be on. Not because I necessarily agree with the feature (any more than I agree that the Administrator account should be disabled permananty, I love that account way too much) but because the User is very likely to have it turned on, and you need to see your program through their eyes. This is especially true if your program is going to require elevation, say to change a setting or modify a certain directory, so that you can prompt your users to accept the UAC warning in your program, which adds an extra layer of comfort to the user I think.
Oh, and as for the one program, let me harp on you just slightly. Shouldn't the program have a define somewhere in the main header files that tells it where its "working directory" is? If this is already the case, then why is it so hard to change that working directory to somewhere else? If its not the case, shame on you, and you should go fix that. ^_^ That would have saved you a lot of trouble.
-Nicholas
A: I'm running into issues where our build scripts do things like manipulate registry entries or add things to the GAC. We're trying to get away from this stuff but until we do it's there and requires privilege escalation. So the build scripts get run from an Administrator command window. The problem comes in when I open Visual Studio 2008 and try to build part of the application - I can't as a normal user because the output files can't be overwritten because the build in the Admin console produced the same files at a higher privilege level. It's causing me a lot of frustration and I'm thinking the best way is to turn UAC off for now but I'm very reluctant to do so.
A: Because I've got post-build scripts to copy executables into the Program Files directory for testing I run Visual Studio with elevated privileges.
One tip I've found that makes life easier, is that to quickly start a command prompt with elevated privileges you can:
*
*press Window Key
*type "cmd"
*Press Ctrl+Shift+Enter
*Left cursor key (with right pinky) to move to "Continue" button on UAC dialog
*Enter
I always keep one open for launching my IDE and running build scripts.
The only downside I've found is that elevated windows don't interact with some of my window tweaking software like KatMouse and Switcher.
A: No, but I do change some settings:
*
*Do not prompt for elevation if not in the administrators group.
*Elevate automatically if you are the [machine]\administrator
I do not put myself in the administrators group.
Just a plain old user, with no elevation prompts.
Use Run As if developing/debugging web apps with development server
A: I code with UAC off. I found it annoying to see all those popups when I open Visual Studio or StarUML, or just want to change a setting on my machine. I have always installed a good internet security suite that kept me "virus free" on my machine for many years, and I don't see the point of an "are you sure" prompt on every task I do. I agree with Ed that everyone just clicks OK.
Example: install a firewall for some member of your family. When they are prompted whether app XYZ can connect to the internet, they will click yes. They will not make the distinction between a good app and spyware/a virus. It's the same thing with UAC.
A: I leave UAC on, but have VS set to always run as admin. The only real reason why I do that though is that I mostly work on software that requires admin permissions to run anyway. (And yes, I know that should be the minority, but my app happens to be one of those -- it's a soft-realtime hardware controller.)
For general purpose apps, you must at least test with UAC enabled; while you could do that on a separate machine, it's easier to test on your dev machine. And the prompt isn't that much of an imposition, especially if you disable the "secure desktop" option (which reacts very slowly with most graphics cards when enabled).
A: If you stay on Vista, turn off UAC and rely on Microsoft Security Essentials' real-time monitor to intercept anything that wants to alter your system. Or, upgrade to Win7, where you can leave UAC on and control the levels at which you want UAC to notify and interrupt the execution.
EDIT: It's very easy to exploit a Windows computer anyway, so what's the sense in having UAC turned on, if it really doesn't guarantee protection?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Count number of occurrences of token in a file I have a server access log, with timestamps of each http request, I'd like to obtain a count of the number of requests at each second. Using sed, and cut -c, so far I've managed to cut the file down to just the timestamps, such as:
22-Sep-2008 20:00:21 +0000
22-Sep-2008 20:00:22 +0000
22-Sep-2008 20:00:22 +0000
22-Sep-2008 20:00:22 +0000
22-Sep-2008 20:00:24 +0000
22-Sep-2008 20:00:24 +0000
What I'd love to get is the number of times each unique timestamp appears in the file. For example, with the above example, I'd like to get output that looks like:
22-Sep-2008 20:00:21 +0000: 1
22-Sep-2008 20:00:22 +0000: 3
22-Sep-2008 20:00:24 +0000: 2
I've used sort -u to filter the list of timestamps down to a list of unique tokens, hoping that I could use grep like
grep -c -f <file containing patterns> <file>
but this just produces a single line of a grand total of matching lines.
I know this can be done in a single line, stringing a few utilities together ... but I can't think of which. Anyone know?
A: I think you're looking for
uniq --count
-c, --count
prefix lines by the number of occurrences
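For example, assuming the extracted timestamps are in timestamps.txt (uniq only collapses adjacent duplicates, so sort first if the file isn't already in order):
sort timestamps.txt | uniq -c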
A: Using AWK with associative arrays might be another solution to something like this.
A: Just in case you want the output in the format you originally specified (with the number of occurrences at the end):
uniq -c logfile | sed 's/^ *\([0-9][0-9]*\) \(.*\)/\2: \1/'
A: Using awk:
cat file.txt | awk '{count[$1 " " $2]++;} \
END {for(w in count){print w ": " count[w]};}'
A: Tom's solution:
awk '{count[$1 " " $2]++;} END {for(w in count){print w ": " count[w]};}' file.txt
works more generally.
My file was not sorted:
name1
name2
name3
name2
name2
name3
name1
Therefore the occurrences weren't following each other, and uniq does not work, as it gives:
1 name1
1 name2
1 name3
2 name2
1 name3
1 name1
With the awk script however:
name1:2
name2:3
name3:2
A: Maybe use xargs? I can't put it all together in my head on the spot here, but use xargs on your sort -u output so that for each unique second you can grep the original file and do a wc -l to get the count.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: View SVG using Silverlight or Flash Is there a way to view an SVG from either a file or a webpage dynamically using Silverlight or Flash?
Edit: I am currently converting them on the server using inkscape. The only trouble with this is the time it takes to make all 60+ pages of the catalog is a little slow. It take 5 min to make it, and some customers (boss included) would like this process to be quicker.
A: Additionally Inkscape has support for exporting SVG images to XAML output. Neither of course is exactly what you are asking for as both "convert" in some manner, but to directly answer -- No, Silverlight does not interpret SVG directly. I'm not sure about Flash though.
A: XamlTune can convert SVG to XAML for viewing in a Silverlight control.
A: timheuer: Do you know if there is a command line option to make the XAML file?
EDIT: it seems that SVG does not directly translate to the XAML format, as my diagrams will crash IE on XP and Vista.
A: milhous: I'm not familiar with Inkscape's command-line interface (if any), but you can take an SVG and save as Microsoft XAML.
A: The SVG project at codeplex can read and render an SVG file to a Graphics object which you might be able to use in Silverlight. Alternatively you can just use the HttpHandler to render the SVG straight to the browser in PNG format.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128372",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: What is the 'best' way to do distributed transactions across multiple databases using Spring and Hibernate I have an application - more like a utility - that sits in a corner and updates two different databases periodically.
It is a little standalone app that has been built with a Spring Application Context. The context has two Hibernate Session Factories configured in it, in turn using Commons DBCP data sources configured in Spring.
Currently there is no transaction management, but I would like to add some. The update to one database depends on a successful update to the other.
The app does not sit in a Java EE container - it is bootstrapped by a static launcher class called from a shell script. The launcher class instantiates the Application Context and then invokes a method on one of its beans.
What is the 'best' way to put transactionality around the database updates?
I will leave the definition of 'best' to you, but I think it should be some function of 'easy to set up', 'easy to configure', 'inexpensive', and 'easy to package and redistribute'. Naturally FOSS would be good.
A: Setup a transaction manager in your context. Spring docs have examples, and it is very simple. Then when you want to execute a transaction:
try {
TransactionTemplate tt = new TransactionTemplate(txManager);
tt.execute(new TransactionCallbackWithoutResult(){
protected void doInTransactionWithoutResult(
TransactionStatus status) {
updateDb1();
updateDb2();
}
    });
} catch (TransactionException ex) {
// handle
}
For more examples, and information perhaps look at this:
XA transactions using Spring
A: When you say "two different databases", do you mean different database servers, or two different schemas within the same DB server?
If the former, then if you want full transactionality, then you need the XA transaction API, which provides full two-phase commit. But more importantly, you also need a transaction coordinator/monitor which manages transaction propagation between the different database systems. This is part of JavaEE spec, and a pretty rarefied part of it at that. The TX coordinator itself is a complex piece of software. Your application software (via Spring, if you so wish) talks to the coordinator.
If, however, you just mean two databases within the same DB server, then vanilla JDBC transactions should work just fine, just perform your operations against both databases within a single transaction.
A: The best way to distribute transactions over more than one database is: Don't.
Some people will point you to XA but XA (or Two Phase Commit) is a lie (or marketese).
Imagine: After the first phase has told the XA manager that it can send the final commit, the network connection to one of the databases fails. Now what? Timeout? That would leave the other database corrupt. Rollback? Two problems: you can't roll back a commit, and how do you know what happened to the second database? Maybe the network connection failed after it successfully committed the data and only the "success" message was lost?
The best way is to copy the data in a single place. Use a scheme which allows you to abort the copy and continue it at any time (for example, ignore data which you already have or order the select by ID and request only records > MAX(ID) of your copy). Protect this with a transaction. This is not a problem since you're only reading data from the source, so when the transaction fails for any reason, you can ignore the source database. Therefore, this is a plain old single source transaction.
After you have copied the data, process it locally.
A: In this case you would need a transaction monitor (a server supporting the XA protocol) and to make sure your databases support XA as well. Most (all?) J2EE servers come with a transaction monitor built in. If your code is not running in a J2EE server then there are a bunch of standalone alternatives - Atomikos, Bitronix, etc.
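For illustration, wiring one of those standalone managers into Spring typically boils down to something like this (a sketch only; the exact bean setup depends on which JTA implementation you put on the classpath):
<!-- delegates to the JTA implementation (e.g. Atomikos, Bitronix) -->
<bean id="transactionManager"
      class="org.springframework.transaction.jta.JtaTransactionManager"/>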
A: You could try Spring's ChainedTransactionManager - http://docs.spring.io/spring-data/commons/docs/1.6.2.RELEASE/api/org/springframework/data/transaction/ChainedTransactionManager.html - which supports spanning a transaction across multiple databases. This could be a better alternative to XA.
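A minimal sketch of how it is wired (assuming txManager1 and txManager2 are the two per-database PlatformTransactionManager beans you already have; the variable names are mine):
import org.springframework.data.transaction.ChainedTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;

// transactions are started in the order given and committed in reverse,
// so the resource most likely to fail should go last in the list
PlatformTransactionManager chained =
        new ChainedTransactionManager(txManager1, txManager2);

Note this chains commits best-effort; it is not a true two-phase commit.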
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36"
}
|
Q: Upload photo to arbitrary FTP with iPhone app I'd like to upload a photo from my iphone to an arbitrary ftp. How can I do this with Cocoa / Xcode ?
Thanks!
A: You'll want to look into CFFTPStream in regard to the iPhone
A: There's a nice framework out there called ConnectionKit. I haven't used it personally, but I've heard it's good.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Including many rewrite directives in lighttpd I have a bunch of projects in parallel subdirectories that all have etc/lighttpd.conf files. The files are very simple; they just include a directive that looks like this:
url.rewrite-once = ("^/project(.*)$"=>"project/router.php?args=$1")
Unfortunately, I just discovered that I can't simply loop through them, because I'll get a "duplicate config variable" error. I see that the way I'm supposed to use it is like this:
url.rewrite-once = (
"^/project1(.*)$"=>"project1/router.php?args=$1"
,"^/project2(.*)$"=>"project2/router.php?args=$1"
)
However, if I make my per-directory config files just include the rewrites, and have a shell script build them, I can't really put any OTHER lighty directives in the per-directory files. Then again, I'm new to lighty, so maybe I don't need to and just don't realize it.
What's "the right way" to do this?
A: try:
url.rewrite-once += ("^/project1(.*)$"=>"project1/router.php?args=$1")
to append your new config to the existing variable instead of defining it again.
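So one workable layout (paths hypothetical) is a master config that includes each project's fragment, with every fragment appending rather than redefining:
# in the main lighttpd.conf
url.rewrite-once = ()
include "/srv/project1/etc/lighttpd.conf"
include "/srv/project2/etc/lighttpd.conf"

# in each project's etc/lighttpd.conf (other directives can live here too)
url.rewrite-once += ("^/project1(.*)$" => "project1/router.php?args=$1")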
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: What are XML namespaces for? This is something that I always find a bit hard to explain to others:
Why do XML namespaces exist?
When should we use them and when should we not?
What are the common pitfalls when working with namespaces in XML?
Also, how do they relate to XML schemas? Should XSD schemas always be associated with a namespace?
A: Think of them as surnames for element types. If you've got two friends, both called Bob, and you are talking about one of them, somebody might ask which Bob you are talking about. Just saying "Bob" isn't very helpful, so you say "Bob Smith", or "Bob Jones".
It's the same with element types. Sometimes a short name isn't enough, because different people can pick the same name. So you include a URI as a "surname", to distinguish between the different Bobs out there.
A: XML is a super-language, meaning that it is the basis for any XML-based language (makes sense, right?). Think of XML as a pen that can write any sentence, in any language. It all depends on the writer, and preferably the language should be known to the reader.
An XML namespace is basically the name of the language, much like "English" or "עברית". It helps the recipient of the XML document to parse it and extract the information within.
Let's say that I have a furniture factory and you have a furniture store. Your storage application and my supply application are completely unrelated, but when they communicate through XML messages, the messages should be understandable and easily parsed by both sides.
Therefore, both systems need to know the Schema, which defines the language syntax and agreed restrictions. Think of the schema as the dictionary and grammar textbook. The schema is the document that both systems should know, that whomever writes the parsing code in each system must know, and that includes the declaration of the namespace.
Each namespace is named as a URI, which in most cases is the location of the schema document that defines it.
Of course, not every XML document needs a namespace, especially when it is not used to convey information to a remote system. For example, when you serialize objects into XML for persisting in your database.
A: They're for allowing multiple markup languages to be combined, without having to worry about conflicts of element and attribute names.
For example, look at any bit of XSLT code, and then think what would happen if you didn't use namespaces and were trying to write an XSLT where the output has to contain "template", "for-each", etc, elements. Syntax errors, is what.
I'll leave the advice and pitfalls to others with more experience than I.
A: Why do XML namespaces exist?
Because, back in 1997, some very influential persons in the W3C wanted them, and would not take no for an answer. Even when it was demonstrated, I dare say conclusively, that there were better ways to solve the "problem" they thought they had, they still wielded their influence to have their desires written up into a W3C Recommendation.
The biggest whopper in the by now extensive mythology surrounding XML Namespaces is that there is technical merit to them. (This is the downstream effect of a Recommendation simply existing and thus occupying mindspace - "gee, there's gotta be a (good) reason!" - as opposed to a forgetable footnote somewhere.)
Much pain, no gain.
When should we use them and when should we not?
You should never use them if you can help it. Unfortunately, the relentless promotion of this BAD[*] device by interested parties has fostered a clusterf*ck of specs today that make it practically impossible not to have to contend with XML namespaces at some point or another. So, even if you eschew XML namespaces yourself, you will find namespace-encrusted crud coming at you from all directions, or worse, toolsets that simply refuse to work unless you feed them such crud.
What are the common pitfalls when working with namespaces in XML?
One very common pitfall is in using Xpath expressions with documents where a namespace has been "defaulted": the namespace will have to be explicit in the expressions. Another issue is using them "correctly" when constructing documents: they create problems out of thin air.
Also, how do they relate to XML schemas? Should XSD schemas always be associated with a namespace?
There is no necessary relation, except that the XSD Schema spec was developed at a time when just about everyone on the committee had the XML Namespaces bit in their teeth. So they worked it in as deeply as they could. It's possible, nevertheless, to use XSD schemas without namespaces, but it's a steep uphill slog as just about every toolset supporting XSD schemas assumes that you will be "wanting" to use namespaces.
[*] BAD = Broken As Designed
UPDATE: An old essay on this non-solution to a non-problem.
A: We use namespaces because people keep wanting to use the same words to mean different things in their own private Idaho. Usually, you can determine from context what a person means. In a personnel database, the XML is personnel records. In a vehicle registry database, the XML is vehicle registry records.
Both have a tag named "location", but the tag means different things in each and contains different fields.
Now, that's cool: but what if you need or want to store XML from both in the same database? Or, more interestingly, what if both databases want to store XML chunks from some other, common database (eg: an Accounts database).
XML namespaces associate each XML tag with a URI, so that the tag name itself has a URI in front of it that's part of the tag name (of course, actual XML documents use a shorthand to do this). By carefully choosing the URI, it's easy to be confident that the tag names won't collide - it's as if the two location tags were named entirely differently, so there's no confusion. As a bonus, the two entirely different location tags can include stuff from the accounts database, and explicitly state that they are talking about the same thing.
The thing that makes all this useful is XPATH.
With the above, you can start to write XPATH expressions that say things like: find me any accounts:account overdue sections anywhere in this xml. Or: find me any accounts:warning message items anywhere in this particular chunk of XML, where the warning message is a child node (however deep) of either a personnel:payment node or a vehicle:status node.
That XPath expression might be used somewhere in an XSLT document, whose job it is to convert the XML into XHTML or XPDF, for display.
What's the payoff? Why do it? Because you can search the XML logfile, pull out all the accounts overdue messages wherever they appear, without confusing them with "message" tags produced by other systems, convert 'em to xhtml, and display them in bold red via a css tag: all without writing a scrap of procedural code.
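To make that concrete, here's a rough sketch of such a namespace-qualified search (the URIs, element names, and document are invented for the example), using Python's standard-library ElementTree:
import xml.etree.ElementTree as ET

# A hypothetical log mixing two vocabularies that reuse the local name "message".
doc = ET.fromstring(
    '<log xmlns:accounts="uri:accounts" xmlns:vehicle="uri:vehicle">'
    '  <accounts:message>Account 42 is overdue</accounts:message>'
    '  <vehicle:message>Registration renewed</vehicle:message>'
    '</log>'
)

# Binding the prefix to its URI lets the search pick out only the
# accounts vocabulary's messages, ignoring the vehicle ones.
ns = {"accounts": "uri:accounts"}
for msg in doc.findall("accounts:message", ns):
    print(msg.text)   # -> Account 42 is overdue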
A: It's nearly the same as asking "why do we use packages for Java/C#?":
*
*reusability: You can reuse a set of tags/attributes you define across different types of xml documents.
*modularity: If you need to add some "aspect" to your XML; adding a namespace to your xml document is simpler than changing your whole xml schema definition.
*Avoid polluting the "main" namespace: You don't force your parser to work with a huge schema definition; just use the namespaces you need.
A: The biggest pitfall IMHO is human interaction when interpreting documents, e.g. developing code to process an XML doc. It is too easy to focus on the literal expression of the document rather than the infoset that results from parsing it.
e.g. the following nodes
<a xmlns="uri:foo"/>
<foo:a xmlns:foo="uri:foo"/>
<bar:a xmlns:bar="uri:foo"/>
are all semantically identical - yet very different to the naive eye.
The 1st example yields a very common mistake when developing XPaths - missing the fact that "a" is in a namespace, so //a yields no matches (or, worse still, matches nodes in a different namespace!).
The 3rd example exposes another flaw in understanding - the assumption that the prefix text is semantically significant. When parsing documents with XPath I can declare any prefix I like for matching, as long as its URI matches the document's.
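A minimal sketch of that point, using Python's standard-library ElementTree on the three snippets above - the prefix disappears at parse time, and any prefix you like can be bound to uri:foo when searching:
import xml.etree.ElementTree as ET

for snippet in ('<a xmlns="uri:foo"/>',
                '<foo:a xmlns:foo="uri:foo"/>',
                '<bar:a xmlns:bar="uri:foo"/>'):
    node = ET.fromstring(snippet)
    print(node.tag)   # -> {uri:foo}a for all three: only URI + local name survive

# When searching, the prefix is whatever we choose to declare locally.
doc = ET.fromstring('<root><foo:a xmlns:foo="uri:foo"/></root>')
print(len(doc.findall("anything:a", {"anything": "uri:foo"})))   # -> 1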
A: For example: XML Namespaces by Example
In my words: if you must use some XML format for an external company (for example) and you need to provide some information in the XML document that uses the same name for different things, you need a namespace.
Example:
<sampleDoc>
<header title="Hello world!">
<items>
<item name="Volvo" color="Blue"/>
</items>
</header>
</sampleDoc>
and you want to merge some data into this document that has the same name but a different meaning (and so a different value), you should use a namespace:
<sampleDoc xmlns:my_unique_namespace="http://www.example.com/my_unique_namespace">
<header title="Hello world!">
<items>
<item name="Volvo" color="White" my_unique_namespace:color="#FFFFFF"/>
</items>
</header>
</sampleDoc>
Of course, you can change the name of the attribute - for example to "my_unique_color". But in another document there can be an attribute with the same name again. So if you have a unique namespace (your web domain, for example), you can always use the same names of elements and/or attributes without any problems.
A: From the W3 recommendation...
XML namespaces provide a simple method for qualifying element and attribute names used in Extensible Markup Language documents by associating them with namespaces identified by URI references.
A: Namespaces are used to disambiguate names that you use within the document. It also gives you the ability to bind a short name to a name space that can then be used to refer to a remote element or attribute. The name space itself refers to the location that defines the elements and attributes you use in the document. There is a lot more to know, but that is the heart of it. There is a lot more information here.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "78"
}
|
Q: SQL query - Select * from view or Select col1, col2, ... colN from view We are using SQL Server 2005, but this question can be for any RDBMS.
Which of the following is more efficient, when selecting all columns from a view?
Select * from view
or
Select col1, col2, ..., colN from view
A: NEVER, EVER USE "SELECT *"!!!!
This is the cardinal rule of query design!
There are multiple reasons for this. One of them is that if your table only has three fields and you use all three in the code that calls the query, there's a great possibility that you will be adding more fields to that table as the application grows; and if your select * query was only meant to return those 3 fields for the calling code, then you're pulling much more data from the database than you need.
Another reason is performance. In query design, don't think about reusability as much as this mantra:
TAKE ALL YOU CAN EAT, BUT EAT ALL YOU TAKE.
A: Just to clarify a point that several people have already made: the reason Select * is inefficient is that there has to be an initial call to the DB to find out exactly what fields are available, and then a second call where the query is made using explicit columns.
Feel free to use Select * when you are debugging, running casual queries or are in the early stages of developing a query, but as soon as you know your required columns, state them explicitly.
A: Select * is a poor programming practice. It is as likely to cause things to break as it is to save things from breaking. If you are only querying one table or view, then the efficiency gain may not be there (although it is possible if you are not intending to actually use every field).
If you have an inner join, then you have at least two fields returning the same data (the join fields), and thus you are wasting network resources to send redundant data back to the application. You won't notice this at first, but as the result sets get larger and larger, you will soon have a network pipeline that is full and doesn't need to be.
I can think of no instance where select * gains you anything. If a new column is added and you don't need to go to the code to do something with it, then the column shouldn't be returned by your query by definition. If someone drops and recreates the table with the columns in a different order, then all your queries will display information in the wrong place or give bad results, such as putting the price into the part number field in a new record.
Plus it is quick to drag the column names over from the object browser, so that is just pure laziness not efficiency in coding.
A: It is best practice to select each column by name. In the future your DB schema might change to add columns that you would then not need for a particular query. I would recommend selecting each column by name.
A: It depends. Inheritance of views can be a handy thing and easy to maintain (SQL Anywhere):
create view v_fruit as select F.id, S.strain from F key join S;
create view v_apples as select v_fruit.*, C.colour from v_fruit key join C;
A: If you're really selecting all columns, it shouldn't make any noticeable difference whether you ask for * or if you are explicit. The SQL server will parse the request the same way in pretty much the same amount of time.
A: Always do select col1, col2 etc from view. There's no efficiency difference between the two methods that I know of, but using "select *" can be dangerous. If you modify your view definition by adding new columns, you can break a program using "select *", whereas selecting a predefined set of columns (even all of them, named) will still work.
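A minimal sketch of that failure mode (using Python's sqlite3 purely for illustration; the table and column names are made up) - code that positionally unpacks a select * result breaks the moment the schema grows a column, while the named-column query keeps working:
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, name TEXT)")
con.execute("INSERT INTO t VALUES (1, 'widget')")

# The caller assumes exactly two columns come back.
id_, name = con.execute("SELECT * FROM t").fetchone()   # fine today

con.execute("ALTER TABLE t ADD COLUMN price REAL")

try:
    id_, name = con.execute("SELECT * FROM t").fetchone()
except ValueError as e:
    print("select * broke:", e)   # too many values to unpack

# Naming the columns keeps the contract stable after the schema change.
id_, name = con.execute("SELECT id, name FROM t").fetchone()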
A: I guess it all depends on what the query optimizer does.
If I want to get every column in the row, I will generally use the "SELECT *..." option, since I then don't have to worry should I change the underlying table structure. As well, for someone maintaining the code, seeing "SELECT *" tells them that this query is intended to return every column, whereas listing the columns individually does not convey the same intention.
A: For performance - look at the query plan (should be no difference).
For maintainability - always supply a field list (that goes for INSERT INTO too).
A: select
    column1,
    column2,
    column3,
    ...
from YourView
This is friendlier to the optimizer than using
select *
from YourView
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: Is DataGrid a necessity in WPF? I have seen a lot of discussion going on, with people asking about a DataGrid for WPF and complaining about Microsoft for not having shipped one with the WPF framework to date. We know that WPF is a great UI technology, with concepts like ItemsControl and DataTemplate for building great UX. WPF even has a closely matching control - ListView - which can easily be templated to give a better UX than a traditional DataGrid-like display. And I would say a ready-made DataGrid control will kill or hide a lot of creativity, and it will surely reduce innovation in the user experience field.
So what is your opinion about the need for a DataGrid in WPF as a framework component? If you feel it is necessary, is that just because the world has been so used to the DataGrid way of displaying data for many years?
Some other threads discussing the DataGrid are here and here
Link to WPF Toolkit - Latest WPF DataGrid
A: Can't think of a better control to display tabular data, especially in business apps where you don't want to reinvent the wheel by templating/developing a (Headered)ItemsControl to make it behave like the good old DGV. I'm sure you saw this.
A: DataGrids are excellent for displaying large amounts of tabular data bound to a backing store.
But what happened in the WinForms world was that people often used them for everything that required a multi-element scrolling list. Souped-up third-party DataGrids soon became available that allowed columns and fields to contain buttons and ComboBoxes and icons, etc.
The DataGrid became a workhorse because there was a need for something it could be coaxed into behaving like. Something similar happened to DataTables before generic collections came along - and when you're using lots of DataTables, presenting them in the UI with a DataGrid is the path of least resistance.
I think that when WPF came out, a lot of programmers like me were still thinking in this fashion, and sought out WPF ports of the DataGrid concept.
A: Nobody is disputing that you can make a DataGrid control in WPF yourself. The same can probably be said about WinForms, although it would be more difficult. I've implemented some functionality with ListView - presenting tabular data is easy, you could even say it's well supported. However, the amount of code, manually written code, needed to make an editing ListView is enormous.
The business applications usually require editing of many tables, and you don't want to be creative, you want to be quick. That's why DataGrid is needed in my opinion.
A: Yes DataGrids will never go away as essential business UI components. People love their spreadsheets and we want to share in that love!
Note that MS are shipping these extra controls - they have created the WPF Toolkit on CodePlex to provide a fast-turnaround, open-source style of deployment.
It already includes a DataGrid and Calendar.
A: Yes it is!
Among many other controls that MS failed to deliver (DatePicker, NumericControl).
MS should first give us the tools to get the job done; that is the least I expect from a programming environment with the hype of WPF.
A: It is essential, but you can achieve nearly the same effect with a ListView that is using a GridView, can't you?
A: After working with WPF for about 2 years now, I would say that a DataGrid is really just a glorified ListBox (since [almost] everything in WPF is styleless).
One could style a ListBox to take an Entity of some sort and show a "record" control for each entry. Depending on how flexible these are made, they could automatically adjust based on the entity passed.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
}
|
Q: Reverse-projection 2D points into 3D Suppose we have a 3D Space with a plane on it with an arbitrary equation: ax+by+cz+d=0
Now suppose that we pick 3 random points on that plane: (x0, y0, z0), (x1, y1, z1), (x2, y2, z2).
Now I have a different point of view (camera) for this plane. I mean, I have a different camera that will look at this plane from a different point of view. From that camera's point of view these points have different locations: for example (x0, y0, z0) will be (x0', y0')
and (x1, y1, z1) will be (x1', y1') and (x2, y2, z2) will be (x2', y2') from the new camera's point of view.
I want to pick a point, for example (X, Y), in the new camera view and tell where it will be on that plane. All I know is those 3 points: their locations in 3D space and their projected locations in the new camera view.
Do you know the coefficients of the plane-equation and the camera positions (along with the projection), or do you only have the six points?
I know the locations of the first 3 points, therefore we can calculate the coefficients of the plane, so we know exactly where the plane is relative to the origin (0,0,0). Then we have the camera, which can only see the points! So the only thing the camera sees is the 3 points, and it also knows their locations in 3D space (and, of course, their locations on the 2D camera view plane). After all that, I want to look at the camera view, pick a point (for example (x1, y1)) and tell where that point is on the plane. (Of course this (X,Y,Z) point should satisfy the plane equation.) Also, I know nothing about the camera location.
A: You are asking how to intersect a line and a plane?
See here http://paulbourke.net/geometry/pointlineplane/
ps. Your teacher knows this site!
A: It is not possible to give an unambiguous solution to this problem. However, here's how I would extract the different solutions:
1) Solve for the camera position and direction using the P3P (Perspective-3-Point) algorithm from the original RANSAC paper, which give up to four possible feasible solutions (with the points in front of the camera).
2) Project a ray with the camera position as origin having (X,Y) as projection in the camera and calculate its intersection with the plane.
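For step 2, here's a minimal sketch of the ray-plane intersection (the camera pose, focal length, and picked point below are made-up placeholders - in practice the ray origin and direction come from whichever P3P solution you selected in step 1):
import numpy as np

def ray_plane_intersection(origin, direction, plane):
    """Intersect the ray origin + t*direction (t >= 0) with the plane
    ax + by + cz + d = 0, given as plane = (a, b, c, d)."""
    n = np.asarray(plane[:3], dtype=float)       # plane normal (a, b, c)
    d = float(plane[3])
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    denom = n.dot(direction)
    if abs(denom) < 1e-12:
        return None                               # ray parallel to the plane
    t = -(n.dot(origin) + d) / denom
    if t < 0:
        return None                               # plane lies behind the camera
    return origin + t * direction

# Hypothetical pose recovered by P3P; a pixel (X, Y) back-projected through
# a pinhole model with focal length f gives the ray direction.
camera_pos = np.array([0.0, 0.0, 5.0])
f = 1.0
X, Y = 0.2, -0.1                                  # the picked 2D point
ray_dir = np.array([X, Y, -f])                    # this camera looks down -z

print(ray_plane_intersection(camera_pos, ray_dir, (0, 0, 1, 0)))
# -> [ 1.  -0.5  0. ], the point on the plane z = 0 that projects to (X, Y)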
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: Run Amazon EC2 AMI in Windows Is there a way to run an Amazon EC2 AMI image in Windows? I'd like to be able to do some testing and configuration locally. I'm looking for something like Virtual PC.
A: If you build your images from scratch you can do it with VMware (or insert your favorite VM software here).
Build and install your linux box as you'd like it, then run the AMI packaging/uploading tools in the guest. Then, just keep backup copies of your VM image in sync with the different AMI's you upload.
Some caveats: you'll need to make sure you're using compatible kernels, or at least have compatible kernel modules in the VM, or your instance won't boot on the EC2 network. You'll also have to make sure your system can autoconfigure itself, too (network, mounts, etc).
If you want to use an existing AMI, it's a little trickier. You need to download and unpack the AMI into a VM image, add a kernel and boot it. As far as I know, there's no 'one click' method to make it work. Also, the AMI's might be encrypted (I know they are at least signed).
You may be able to do this by having a 'bootstrap' VM set up to specifically extract the AMI's into a virtual disk using the AMI tools, then boot that virtual disk separately.
I know it's pretty vague, but those are the steps you'd have to go through. You could probably do some scripting to automate the process of converting AMI's to VMDKs.
The Amazon forum is also helpful. For example, see this article.
Oh, this article also talks about some of these processes in detail.
A: Amazon EC2 with Windows Server - announced this morning, very exciting
http://aws.amazon.com/windows/
A: It's a bit of a square peg in a round hole ... kind of like running MS-Office on Linux.
Depending on how you value your time, it's cheaper to just get another PC and install Linux and Xen.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/128431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24"
}
|