Q: SQL Server Management Studio 2005 - Change Default Directory for Backup Location Using MS SQL Server Management Studio 2005, to restore a database:
*Restore Database
*From Device (radio button)
*Click the "..." button
*Backup media: File
*Click the "Add" button
*Popup window: "Locate Backup File"
That window defaults to C:\Program Files\Microsoft SQL Server\MSSQL.1\Backup.
How do I configure MS SQL Server Management Studio to look in D:\data\databases\
instead of C:\Program Files\Microsoft SQL Server\MSSQL.1\Backup?
A: It's stored in the registry.
On my computer, it's at...
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer
There is a registry key named BackupDirectory
I suspect the registry key will be in a different location for you (I have 64-bit Vista). I did a search in my registry for 'MSSQL.1\Backup' to find it.
A: For convenience, here's a reg file entry to do this:
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer]
"BackupDirectory"="D:\\data\\databases\\"
A: In the registry, edit the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer\BackupDirectory value to point to d:\data\databases
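For reference, a one-line equivalent of the .reg file above (a sketch assuming the same key path; on 64-bit systems use the Wow6432Node path from the earlier answer, and run it from an elevated prompt):
reg add "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer" /v BackupDirectory /t REG_SZ /d "D:\data\databases" /f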
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: Are ruby command line switches -rubygems & -r incompatible? I recently converted a ruby library to a gem, which seemed to break the command line usability
Worked fine as a library
$ ruby -r foobar -e 'p FooBar.question' # => "answer"
And as a gem, irb knows how to require a gem from command-line switches
$ irb -rubygems -r foobar
irb(main):001:0> FooBar.question # => "answer"
But the same fails for ruby itself:
$ ruby -rubygems -r foobar -e 'p FooBar.question'
ruby: no such file to load -- foobar (LoadError)
must I now do this, which seems ugly:
ruby -rubygems -e 'require "foobar"; p FooBar.question' # => "answer"
Or is there a way to make the 2 switches work?
Note: I know the gem could add a bin/program for every useful method but I don't like to pollute the command line namespace unnecessarily
A: -rubygems is actually the same as -r ubygems.
It doesn't mess with your search path, as far as I understand, but I think it doesn't add anything to your -r search path either. I was able to do something like this:
ruby -rubygems -r /usr/lib/ruby/gems/myhelpfulclass-0.0.1/lib/MyHelpfulClass -e "puts MyHelpfulClass"
MyHelpfulClass.rb exists in the lib directory specified above.
That kind of sucks, but it at least demonstrates that you can have multiple -r equire directives.
As a slightly less ugly workaround, you can add additional items to the ruby library search path (colon delimited in *nix, semicolon delimited in windows).
export RUBYLIB=/usr/lib/ruby/gems/1.8/gems/myhelpfulclass-0.0.1/lib
ruby -rubygems -r MyHelpfulClass -e "puts MyHelpfulClass"
If you don't want to mess with the environment variable, you can add something to the load path yourself:
ruby -I /usr/lib/ruby/gems/1.8/gems/myhelpfulclass-0.0.1/lib \
-rubygems -r MyHelpfulClass -e "puts MyHelpfulClass"
A: Note: this problem exists for ruby 1.8, but is resolved in ruby 1.9.
On 1.8, if you specify both libs via -r, ruby will try to load each library without paying attention to changes in the $LOAD_PATH. But rubygems does change $LOAD_PATH so the gems can be found.
The reason it works with irb is that irb does pay attention to $LOAD_PATH changes.
Unfortunately, the best workaround I've found is to use the more verbose form:
ruby -rubygems -e 'require "foobar"; p FooBar.question'
The pain doesn't increase linearly with the number of libs though, if you use an iterator:
ruby -rubygems -e '%w(rake rspec).each{|r| require r }'
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Is MEF a replacement for System.Addin?
Possible Duplicate:
Choosing between MEF and MAF (System.AddIn)
Is the Managed Extensibility Framework a replacement for System.Addin? Or are they complementary?
A: .. just two links for further reference:
*
*SO question: Choosing between MEF and MAF (System.AddIn)
*Kent Boogaart blog: MAF and MEF
A: Short answer: no, it is not. System.AddIn allows you to isolate add-ins in a separate app-domain/process. It also provides facilities for versioning. These capabilities are critical for many customers, particularly large ISVs. MEF, on the other hand, is designed to be a simple programming model for extensibility. The two can work together and complement each other.
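To make the contrast concrete, here is a minimal MEF sketch (the type names are illustrative; the attributes and container types are from System.ComponentModel.Composition). Note that everything composes in-process, with none of System.AddIn's isolation or versioning pipeline:
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IPlugin { string Name { get; } }

[Export(typeof(IPlugin))]
public class HelloPlugin : IPlugin
{
    public string Name { get { return "Hello"; } }
}

public class Host
{
    [ImportMany]
    public IPlugin[] Plugins { get; set; }

    public void Compose()
    {
        // Discovers [Export] parts in this assembly and satisfies [ImportMany],
        // all within the current AppDomain
        var catalog = new AssemblyCatalog(typeof(Host).Assembly);
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);
    }
}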
A: It is touched on in the MSDN Forums here:
Comparison to the AddIn libraries?
And also by Krzysztof Cwalina in his blog on the release of MEF:
Managed Extensibility Framework
Summary: they live side by side.
A: Just as a side note, some time ago I developed the so-called IsolatingCatalog, which is a MEF catalog for providing part isolation. You can instruct it so that your parts are created in a separate app domain/process. WCF is used for IPC. Check it out here.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: What do you put in your webservice? I have a website (ASP.NET) and some WinForms (.NET 2.0) for a project (written in C#). I use the webservice (IIS6) for tasks that both require, like sending email inside the business.
I think the webservice is nice, but I would like to hear from your experience: what should and what should not be in a webservice?
A: Well it sounds like you have a limited Service Oriented Architecture (at least, that's what I think you're getting at), which according to Gartner means you'll be rich soon. :)
I find that the benefit of SOA for me really comes down to the heterogeneity of the systems involved (sounds like yours doesn't qualify there because it's all .NET), and the negative of SOA is primarily because of the verbose nature of XML. True, you don't need XML for SOA, but it's the current majority, IMHO.
But if you're not concerned about the bandwidth/parsing penalties, who cares? Maybe you're not piping through 10,000 service calls a minute. With this style of implementation, you're following DRY, just with a WS instead of a sub, and you're adhering to a standard that is by nature compatible with multiple systems.
There are worse approaches.
A: In my opinion, web services should be reserved for code that:
*you either can't or don't want to distribute; or
*needs to seriously scale up.
One example is custom business logic that multiple applications need access to.
Code you don't want to put into web services includes:
*code that is performance-critical;
*code that applies only to the application in question.
A: It seems like the new trend for web services/SOA is to more or less expose a light-weight middle tier that your host application can use. Instead of having individual method calls exposed through a service (as in your example), SOA-oriented applications have extensive Data/Operation contracts that act as the "traditional" middle tier assembly.
A: As little as possible, while still being useful.
By default, DON'T put every field of the return objects in the return data, and DON'T expose every method of an existing class.
read this too...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: SQL Server 2005 vs. ASP.net datetime format confusion I've found a similar question on stack overflow, but it didn't really answer the question I have. I need to make sure that my asp.net application is formatting the date dd/mm/yyyy the same as my SQL Server 2005.
How do I verify the date culture (if that's what it's called) of the server matches how I've programmed my app? Are there specific database settings and OS settings? Is it table-specific? I don't want to transpose my days and months.
thank you
A: When you get a DateTime out of the database, it should be in a non-cultured format (like the DateTime object, based on the number of ticks since a certain date). It is only when you are converting that value into a string that you need to be concerned with culture. In those cases, you can use yourDateTimeValue.ToString("dd/MM/yyyy", CultureInfo.InvariantCulture) to make sure that the information displays correctly.
A: I believe that if you use SqlParameters, ADO.NET will take care of the rest and you don't have to worry about it. Besides, it's good for defending against SQL injection attacks too! :)
A: Watch out: SQL Server datetime columns have a minimum value of 1/1/1753, while the .NET DateTime type is non-nullable with a minimum value of 1/1/0001.
If you're pulling data from a real DateTime column, by default it will always be in the same standard format. For saving the data to the column, you might want to specify the SqlDbType.DateTime in your parameter.
i ripped this off of http://bytes.com/forum/thread767920.html :
com.Parameters.Add("@adate", SqlDbType.DateTime).Value = DateTime.Now;
A: Well, if you keep datetime fields in the DB you shouldn't worry about it.
As long as you keep the dates in app strongly typed (DateTime variables) and send the dates through prepared statements with DBParameter/SqlParameter your DB will take them as is.
If you use strings to hold your dates in code, some casts will ensure you send the right values:
string sqlCmd = @"SELECT *
FROM MyTable
WHERE MyDateField = CONVERT(datetime, '{0}', 112)";
// assuming myDateString is a string with a date in the local format;
// style 112 matches the ISO yyyyMMdd string produced below
sqlCmd = string.Format(sqlCmd,
Convert.ToDateTime(myDateString).ToString("yyyyMMdd"));
(the code is ugly, but hopefully it gets the point across)
A: As others have mentioned, you should be OK as far as storing datetimes culturally. What I would recommend is that you store all of your times as standard UTC time. In SQL Server 2005 and older there is no way to store time zone information, but if everything is stored in universal time, you should be OK because the time can be converted to the local time later on.
SQL Server 2008 does have some datatypes that are aware of time zones, and if you're using .NET 3.5 there are tools to assist with time zone handling/conversions.
Definitely keep times in universal format. This will make a world of a difference if you have to work in multiple time zones.
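A minimal sketch of that convention (standard .NET APIs; the display format is just the dd/MM/yyyy example from the question, and CultureInfo lives in System.Globalization):
DateTime savedUtc = DateTime.UtcNow;       // store this value in the datetime column
// ...later, when presenting the value to a user:
DateTime local = savedUtc.ToLocalTime();
string display = local.ToString("dd/MM/yyyy", CultureInfo.InvariantCulture);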
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: php String Concatenation, Performance In languages like Java and C#, strings are immutable and it can be computationally expensive to build a string one character at a time. In said languages, there are library classes to reduce this cost such as C# System.Text.StringBuilder and Java java.lang.StringBuilder.
Does php (4 or 5; I'm interested in both) share this limitation? If so, are there similar solutions to the problem available?
A: No, there is no type of stringbuilder class in PHP, since strings are mutable.
That being said, there are different ways of building a string, depending on what you're doing.
echo, for example, will accept comma-separated tokens for output.
// This...
echo 'one', 'two';
// Is the same as this
echo 'one';
echo 'two';
What this means is that you can output a complex string without actually using concatenation, which would be slower
// This...
echo 'one', 'two';
// Is faster than this...
echo 'one' . 'two';
If you need to capture this output in a variable, you can do that with the output buffering functions.
Also, PHP's array performance is really good. If you want to do something like a comma-separated list of values, just use implode()
$values = array( 'one', 'two', 'three' );
$valueList = implode( ', ', $values );
Lastly, make sure you familiarize yourself with PHP's string type and its different delimiters, and the implications of each.
A: PHP strings are mutable. You can change specific characters like this:
$string = 'abc';
$string[2] = 'a'; // $string equals 'aba'
$string[3] = 'd'; // $string equals 'abad'
$string[5] = 'e'; // $string equals 'abad e' (fills character(s) in between with spaces)
And you can append characters to a string like this:
$string .= 'a';
A: I wrote the code at the end of this post to test the different forms of string concatenation and they really are all almost exactly equal in both memory and time footprints.
The two primary methods I used are concatenating strings onto each other, and filling an array with strings and then imploding them. I did 500 string additions with a 1MB string in php 5.6 (so the result is a 500MB string).
At every iteration of the test, all memory and time footprints were very close (at ~$IterationNumber*1MB). The runtimes of the two tests were 50.398 seconds and 50.843 seconds respectively, which is most likely within an acceptable margin of error.
Garbage collection of strings that are no longer referenced seems to be pretty immediate, even without ever leaving the scope. Since the strings are mutable, no extra memory is really required after the fact.
HOWEVER, the following tests showed that there is a difference in peak memory usage WHILE the strings are being concatenated.
$OneMB=str_repeat('x', 1024*1024);
$Final=$OneMB.$OneMB.$OneMB.$OneMB.$OneMB;
print memory_get_peak_usage();
Result=10,806,800 bytes (~10MB w/o the initial PHP memory footprint)
$OneMB=str_repeat('x', 1024*1024);
$Final=implode('', Array($OneMB, $OneMB, $OneMB, $OneMB, $OneMB));
print memory_get_peak_usage();
Result=6,613,320 bytes (~6MB w/o the initial PHP memory footprint)
So there is in fact a difference that could be significant in very very large string concatenations memory-wise (I have run into such examples when creating very large data sets or SQL queries).
But even this fact is disputable depending upon the data. For example, concatenating 1 character onto a string to get 50 million bytes (so 50 million iterations) took a maximum amount of 50,322,512 bytes (~48MB) in 5.97 seconds. While doing the array method ended up using 7,337,107,176 bytes (~6.8GB) to create the array in 12.1 seconds, and then took an extra 4.32 seconds to combine the strings from the array.
Anywho... the below is the benchmark code I mentioned at the beginning which shows the methods are pretty much equal. It outputs a pretty HTML table.
<?php
//Please note, for the recursion test to go beyond 256, xdebug.max_nesting_level needs to be raised. You also may need to update your memory_limit depending on the number of iterations
//Output the start memory
print 'Start: '.memory_get_usage()."B<br><br>Below test results are in MB<br>";
//Our 1MB string
global $OneMB, $NumIterations;
$OneMB=str_repeat('x', 1024*1024);
$NumIterations=500;
//Run the tests
$ConcatTest=RunTest('ConcatTest');
$ImplodeTest=RunTest('ImplodeTest');
$RecurseTest=RunTest('RecurseTest');
//Output the results in a table
OutputResults(
Array('ConcatTest', 'ImplodeTest', 'RecurseTest'),
Array($ConcatTest, $ImplodeTest, $RecurseTest)
);
//Start a test run by initializing the array that will hold the results and manipulating those results after the test is complete
function RunTest($TestName)
{
$CurrentTestNums=Array();
$TestStartMem=memory_get_usage();
$StartTime=microtime(true);
RunTestReal($TestName, $CurrentTestNums, $StrLen);
$CurrentTestNums[]=memory_get_usage();
//Subtract $TestStartMem from all other numbers
foreach($CurrentTestNums as &$Num)
$Num-=$TestStartMem;
unset($Num);
$CurrentTestNums[]=$StrLen;
$CurrentTestNums[]=microtime(true)-$StartTime;
return $CurrentTestNums;
}
//Initialize the test and store the memory allocated at the end of the test, with the result
function RunTestReal($TestName, &$CurrentTestNums, &$StrLen)
{
$R=$TestName($CurrentTestNums);
$CurrentTestNums[]=memory_get_usage();
$StrLen=strlen($R);
}
//Concatenate 1MB string over and over onto a single string
function ConcatTest(&$CurrentTestNums)
{
global $OneMB, $NumIterations;
$Result='';
for($i=0;$i<$NumIterations;$i++)
{
$Result.=$OneMB;
$CurrentTestNums[]=memory_get_usage();
}
return $Result;
}
//Create an array of 1MB strings and then join w/ an implode
function ImplodeTest(&$CurrentTestNums)
{
global $OneMB, $NumIterations;
$Result=Array();
for($i=0;$i<$NumIterations;$i++)
{
$Result[]=$OneMB;
$CurrentTestNums[]=memory_get_usage();
}
return implode('', $Result);
}
//Recursively add strings onto each other
function RecurseTest(&$CurrentTestNums, $TestNum=0)
{
Global $OneMB, $NumIterations;
if($TestNum==$NumIterations)
return '';
$NewStr=RecurseTest($CurrentTestNums, $TestNum+1).$OneMB;
$CurrentTestNums[]=memory_get_usage();
return $NewStr;
}
//Output the results in a table
function OutputResults($TestNames, $TestResults)
{
global $NumIterations;
print '<table border=1 cellspacing=0 cellpadding=2><tr><th>Test Name</th><th>'.implode('</th><th>', $TestNames).'</th></tr>';
$FinalNames=Array('Final Result', 'Clean');
for($i=0;$i<$NumIterations+2;$i++)
{
$TestName=($i<$NumIterations ? $i : $FinalNames[$i-$NumIterations]);
print "<tr><th>$TestName</th>";
foreach($TestResults as $TR)
printf('<td>%07.4f</td>', $TR[$i]/1024/1024);
print '</tr>';
}
//Other result numbers
print '<tr><th>Final String Size</th>';
foreach($TestResults as $TR)
printf('<td>%d</td>', $TR[$NumIterations+2]);
print '</tr><tr><th>Runtime</th>';
foreach($TestResults as $TR)
printf('<td>%s</td>', $TR[$NumIterations+3]);
print '</tr></table>';
}
?>
A: I just came across this problem:
$str .= 'String concatenation. ';
vs.
$str = $str . 'String concatenation. ';
Seems no one has compared this so far here.
And the results are quite crazy with 50,000 iterations and PHP 7.4:
String 1: 0.0013918876647949
String 2: 1.1183910369873
Faktor: 803 !!!
$currentTime = microtime(true);
$str = '';
for ($i = 50000; $i > 0; $i--) {
$str .= 'String concatenation. ';
}
$currentTime2 = microtime(true);
echo "String 1: " . ( $currentTime2 - $currentTime);
$str = '';
for ($i = 50000; $i > 0; $i--) {
$str = $str . 'String concatenation. ';
}
$currentTime3 = microtime(true);
echo "<br>String 2: " . ($currentTime3 - $currentTime2);
echo "<br><br>Faktor: " . (($currentTime3 - $currentTime2) / ( $currentTime2 - $currentTime));
Can someone confirm this? I ran into this because I was deleting some lines from a big file by reading it and appending only the wanted lines to a string again.
Using .= solved all my problems here. Before, I got a timeout!
A: I was curious about this, so I ran a test. I used the following code:
<?php
ini_set('memory_limit', '1024M');
define ('CORE_PATH', '/Users/foo');
define ('DS', DIRECTORY_SEPARATOR);
$numtests = 1000000;
function test1($numtests)
{
$CORE_PATH = '/Users/foo';
$DS = DIRECTORY_SEPARATOR;
$a = array();
$startmem = memory_get_usage();
$a_start = microtime(true);
for ($i = 0; $i < $numtests; $i++) {
$a[] = sprintf('%s%sDesktop%sjunk.php', $CORE_PATH, $DS, $DS);
}
$a_end = microtime(true);
$a_mem = memory_get_usage();
$timeused = $a_end - $a_start;
$memused = $a_mem - $startmem;
echo "TEST 1: sprintf()\n";
echo "TIME: {$timeused}\nMEMORY: $memused\n\n\n";
}
function test2($numtests)
{
$CORE_PATH = '/Users/shigh';
$DS = DIRECTORY_SEPARATOR;
$a = array();
$startmem = memory_get_usage();
$a_start = microtime(true);
for ($i = 0; $i < $numtests; $i++) {
$a[] = $CORE_PATH . $DS . 'Desktop' . $DS . 'junk.php';
}
$a_end = microtime(true);
$a_mem = memory_get_usage();
$timeused = $a_end - $a_start;
$memused = $a_mem - $startmem;
echo "TEST 2: Concatenation\n";
echo "TIME: {$timeused}\nMEMORY: $memused\n\n\n";
}
function test3($numtests)
{
$CORE_PATH = '/Users/shigh';
$DS = DIRECTORY_SEPARATOR;
$a = array();
$startmem = memory_get_usage();
$a_start = microtime(true);
for ($i = 0; $i < $numtests; $i++) {
ob_start();
echo $CORE_PATH,$DS,'Desktop',$DS,'junk.php';
$aa = ob_get_contents();
ob_end_clean();
$a[] = $aa;
}
$a_end = microtime(true);
$a_mem = memory_get_usage();
$timeused = $a_end - $a_start;
$memused = $a_mem - $startmem;
echo "TEST 3: Buffering Method\n";
echo "TIME: {$timeused}\nMEMORY: $memused\n\n\n";
}
function test4($numtests)
{
$CORE_PATH = '/Users/shigh';
$DS = DIRECTORY_SEPARATOR;
$a = array();
$startmem = memory_get_usage();
$a_start = microtime(true);
for ($i = 0; $i < $numtests; $i++) {
$a[] = "{$CORE_PATH}{$DS}Desktop{$DS}junk.php";
}
$a_end = microtime(true);
$a_mem = memory_get_usage();
$timeused = $a_end - $a_start;
$memused = $a_mem - $startmem;
echo "TEST 4: Braced in-line variables\n";
echo "TIME: {$timeused}\nMEMORY: $memused\n\n\n";
}
function test5($numtests)
{
$a = array();
$startmem = memory_get_usage();
$a_start = microtime(true);
for ($i = 0; $i < $numtests; $i++) {
$CORE_PATH = CORE_PATH;
$DS = DIRECTORY_SEPARATOR;
$a[] = "{$CORE_PATH}{$DS}Desktop{$DS}junk.php";
}
$a_end = microtime(true);
$a_mem = memory_get_usage();
$timeused = $a_end - $a_start;
$memused = $a_mem - $startmem;
echo "TEST 5: Braced inline variables with loop-level assignments\n";
echo "TIME: {$timeused}\nMEMORY: $memused\n\n\n";
}
test1($numtests);
test2($numtests);
test3($numtests);
test4($numtests);
test5($numtests);
...
And got the following results. Image attached. Clearly, sprintf is the least efficient way to do it, both in terms of time and memory consumption.
EDIT: view image in another tab unless you have eagle vision.
A: Yes, they do. For example, if you want to echo a couple of strings together, use
echo $str1, $str2, $str3;
instead of
echo $str1 . $str2 . $str3;
to get it a little faster.
A: A StringBuilder analog is not needed in PHP.
I made a couple of simple tests:
in PHP:
$iterations = 10000;
$stringToAppend = 'TESTSTR';
$timer = new Timer(); // based on microtime()
$s = '';
for($i = 0; $i < $iterations; $i++)
{
$s .= ($i . $stringToAppend);
}
$timer->VarDumpCurrentTimerValue();
$timer->Restart();
// Used purlogic's implementation.
// I tried other implementations, but they are not faster
$sb = new StringBuilder();
for($i = 0; $i < $iterations; $i++)
{
$sb->append($i);
$sb->append($stringToAppend);
}
$ss = $sb->toString();
$timer->VarDumpCurrentTimerValue();
in C# (.NET 4.0):
const int iterations = 10000;
const string stringToAppend = "TESTSTR";
string s = "";
var timer = new Timer(); // based on StopWatch
for(int i = 0; i < iterations; i++)
{
s += (i + stringToAppend);
}
timer.ShowCurrentTimerValue();
timer.Restart();
var sb = new StringBuilder();
for(int i = 0; i < iterations; i++)
{
sb.Append(i);
sb.Append(stringToAppend);
}
string ss = sb.ToString();
timer.ShowCurrentTimerValue();
Results:
10000 iterations:
1) PHP, ordinary concatenation: ~6ms
2) PHP, using StringBuilder: ~5 ms
3) C#, ordinary concatenation: ~520ms
4) C#, using StringBuilder: ~1ms
100000 iterations:
1) PHP, ordinary concatenation: ~63ms
2) PHP, using StringBuilder: ~555ms
3) C#, ordinary concatenation: ~91000ms // !!!
4) C#, using StringBuilder: ~17ms
A: When you do a timed comparison, the differences are so small that they aren't very relevant. It would make more sense to go for the choice that makes your code easier to read and understand.
A: I know what you're talking about. I just created this simple class to emulate the Java StringBuilder class.
class StringBuilder {
private $str = array();
public function __construct() { }
public function append($str) {
$this->str[] = $str;
}
public function toString() {
return implode($this->str);
}
}
A: Firstly, if you don't need the strings to be concatenated, don't do it: it will always be quicker to do
echo $a,$b,$c;
than
echo $a . $b . $c;
However, at least in PHP5, string concatenation is really quite fast, especially if there's only one reference to a given string. I guess the interpreter uses a StringBuilder-like technique internally.
A: If you're placing variable values within PHP strings, I understand that it's slightly quicker to use in-line variable inclusion (that's not its official name - I can't remember what it is):
$aString = 'oranges';
$compareString = "comparing apples to {$aString}!";
echo $compareString
comparing apples to oranges!
Must be inside double-quotes to work. Also works for array members (i.e.
echo "You requested page id {$_POST['id']}";
)
A: There is no such limitation in PHP; you can concatenate strings with the dot (.) operator:
$a="hello ";
$b="world";
echo $a.$b;
outputs "hello world"
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "74"
}
|
Q: What do these abbreviations in network hostnames mean? When I use traceroute, I often see abbreviations in the hostnames along the route, such as "ge", "so", "ic", "gw", "bb" etc. I can guess "bb" means backbone.
Does anyone know what any these strings abbreviate, or know any other common abbreviations?
A: The examples you provided make me think it's not about country codes.
I guess it's just what you thought: ISP network admins using shortcuts when naming their servers.
*bb = backbone
*gw = gateway
*ic = interconnect?
*ge = ?
*so = stackoverflow? :)
A: They're unlikely to be country codes. When you're in charge of a large scale network, you come up with naming schemes that make sense to you, mixing geographical and functional notations, but without being too verbose since it's too wasteful to type.
gw, for example, always stands for gateway. ge typically means "gateway external", i.e. a border gateway to a "friendly" network. ix stands for interchange.
A: These are ISO 3166-1 alpha-2 geographical domain IDs converted to lower case.
*ge - Georgia
*gw - Guinea-Bissau
*so - Somalia
*bb - Barbados
*ic - old code for Iceland?
Just look for ISO 3166 for the complete list of country codes, and RFC 1700 for the geo domain list.
Can you please provide the output from one of your traceroutes?
Hostnames using components such as bb for backbone and gw for gateway tend to put those towards the start of a hostname, e.g. bb1.toto.com.au or gw2.wtf.co.uk.
This follows a naming convention of more specific to less specific elements in the name as you traverse from left to right.
Geographical domains are, almost always, at the end of the hostname.
A: Unless they are the top-level domain name (e.g. "foo.bb" rather than "bb.example.net"), they are chosen by the organisation that owns that domain name; remember, if you own a domain name, you own all its subdomains. In that case, you can call it whatever you want. There's no specification, and people call it many different things.
There are many two-letter country-code top-level domains, one for each country, e.g. .fr for France. More info: http://en.wikipedia.org/wiki/CcTLD
A: Short version: country codes.
Likely not totally correct, but...
A: A complete listing of country codes is at
http://www.iso.org/iso/country_codes/iso_3166_code_lists/english_country_names_and_code_elements.htm
Other Top Level Domains (TLD) are at:
http://www.icann.org/en/registries/listing.html
A: Actually, "ge" most likely stands for "Gigabit Ethernet", and it's quite common for the ports on routers to be named after the interface name.
Hence the first Gig-E port on a router will quite often have a hostname that includes "ge0" or similar.
You'll also see:
*
*"fa" for "Fast Ethernet" (on Cisco routers)
*"s0" for "Serial" (i.e. E1 or T1 ports)
*"lo0" for "Loopback"
A: so = SONET, pos = Packet over SONET
xe = ten-gigabit Ethernet
ge = gigabit Ethernet
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How do I quickly slice and dice large data files? I'd like to slice and dice large datafiles, up to a gig, in a fairly quick and efficient manner. If I use something like UNIX's "CUT", it's extremely fast, even in a CYGWIN environment.
I've tried developing and benchmarking various Ruby scripts to process these files, and always end up with glacial results.
What would you do in Ruby to make this not so dog slow?
A: This question reminds me of Tim Bray's Wide Finder project. The fastest way he could read an Apache logfile using Ruby and figure out which articles have been fetched the most was with this script:
counts = {}
counts.default = 0
ARGF.each_line do |line|
if line =~ %r{GET /ongoing/When/\d\d\dx/(\d\d\d\d/\d\d/\d\d/[^ .]+) }
counts[$1] += 1
end
end
keys_by_count = counts.keys.sort { |a, b| counts[b] <=> counts[a] }
keys_by_count[0 .. 9].each do |key|
puts "#{counts[key]}: #{key}"
end
It took this code 7½ seconds of CPU, 13½ seconds elapsed, to process a million and change records, a quarter-gig or so, on last year’s 1.67Ghz PowerBook.
A: I'm guessing that your Ruby implementations are reading the entire file prior to processing. Unix's cut works by reading things one byte at a time and immediately dumping to an output file. There is of course some buffering involved, but not more than a few KB.
My suggestion: try doing the processing in-place with as little paging or backtracking as possible.
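For example, a minimal sketch that streams the file line by line instead of slurping it (the tab delimiter and field range here stand in for whatever your cut invocation selects):
File.foreach("big.dat") do |line|
  fields = line.chomp.split("\t")
  puts fields[0, 3].join("\t")   # roughly what `cut -f1-3` would emit
end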
A: Why not combine them - use cut to do what it does best, and Ruby to provide the glue/value-add with the results from cut? You can run shell commands by putting them in backticks, like this:
`cut -f1 somefile > foo.fil`  # the cut options here are illustrative
# process each line of the output from cut
f = File.new("foo.fil")
f.each { |line|
  # work with each line here
}
A: I doubt the problem is that ruby is reading the whole file in memory. Look at the memory and disk usage while running the command to verify.
I'd guess the main reason is that cut is written in C and only does one thing, so it has probably been compiled down to the very metal. It's probably not doing a lot more than making system calls.
However, the Ruby version is doing many things at once. Calling a method is much slower in Ruby than a C function call.
Remember, old age and treachery beat youth and skill in Unix: http://ridiculousfish.com/blog/archives/2006/05/30/old-age-and-treachery/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: What's the best way to implement clean UI functionality in WinForms while maintaining a decent decoupled architecture? I tend to implement UI functionality using fairly self-documenting void doSomething() methods, i.e. if the user presses this button then perform this action then enable this list box, disable that button, etc. Is this the best approach? Is there a better pattern for general UI management i.e. how to control when controls are enabled/disabled/etc. etc. depending on user input?
Often I feel like I'm veering towards the 'big class that does everything' anti-pattern as so much seems to interact with the 'main' form class. Often, even if I'm including private state variables in the class that have been implemented using a relatively modular design, I'm still finding it grows so quickly it's ridiculous.
So could people give me some good advice towards producing quality, testable, decoupled WinForms design without falling into these traps?
A: Using the MVP pattern is pretty good with WinForms.
Have a look at http://www.objectmentor.com/resources/articles/TheHumbleDialogBox.pdf
A: You can try MVP if you want to put the logic of the UI in a separate class.
In Model-View-Presenter, just as Martin Fowler or Michael Feathers say, the logic of the UI is separated into a class called the presenter, which handles all the input from the user and tells the "dumb" view what and when to display. The special testability of the pattern comes from the fact that the entire view can be replaced with a mock object; in this way the presenter, which is the most important part, can be easily unit tested in isolation.
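To make that concrete, here is a minimal sketch of the pattern (the view and presenter names are made up for illustration, not taken from the articles above):
public interface ICustomerView
{
    string CustomerName { get; }
    bool SaveEnabled { set; }
}
public class CustomerPresenter
{
    private readonly ICustomerView view;
    public CustomerPresenter(ICustomerView view) { this.view = view; }
    // The form calls this from its TextChanged handler; because the
    // presenter only sees ICustomerView, a mock view makes it unit-testable.
    public void NameChanged()
    {
        view.SaveEnabled = view.CustomerName.Length > 0;
    }
}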
A: I would only put UI logic in the Form class and put any application logic in its own class:
class Form1 : Form
{
    void Button1_Click(object sender, EventArgs e)
    {
        Program.DoCommand1();
    }
}
static class Program
{
    internal static void DoCommand1() { /* ... */ }
}
A: One thing I have been doing lately is leveraging the partial class feature of .NET for some of these larger forms. If I have a tab control with 5 different tabs on it, I'll create partial classes and name the files CardImportMethods.cs, ManageLookupTables.cs, etc., while leaving it all part of the CentralizedForm class.
Even with just the UI logic, having this breakdown has helped when it comes to managing those things.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Passing around urls between applications in the same project I am trying to mock-up an API and am using separate apps within Django to represent different web services. I would like App A to take in a link that corresponds to App B and parse the json response.
Is there a way to dynamically construct the URL to App B so that I can test the code in development and not change too much before going into production? The problem is that I can't use localhost as part of a link.
I am currently using urllib, but eventually I would like to do something less hacky and better fitting with the web services REST paradigm.
A: You could do something like
if settings.DEBUG:
other = "localhost"
else:
other = "somehost"
and use other to build the external URL. Generally you code in DEBUG mode and deploy in non-DEBUG mode. settings.DEBUG is a 'standard' Django thing.
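For example (a sketch only; the path and the simplejson import are illustrative, matching the Python 2-era urllib the question mentions):
import urllib
import simplejson as json  # the stdlib json module on newer Pythons

url = "http://%s/app_b/resource/" % other
data = json.loads(urllib.urlopen(url).read())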
A: By "separate apps within Django" do you mean separate applications with a common settings? That is to say, two applications within the same Django site (or project)?
If so, the {% url %} tag will generate a proper absolute URL to any of the apps listed in the settings file.
If there are separate Django servers with separate settings, you have the standard internet problem of URI design. Your URI's can be consistent with only the hostname changing.
- http://localhost/some/path - development
- http://123.45.67.78/some/path - someone's laptop who's running a server for testing
- http://qa.mysite.com/some/path - QA
- http://www.mysite.com/some/path - production
You never need to provide the host information, so all of your links are <A HREF="/some/path/">.
This generally works out best. You can have someone's random laptop be a test server, and you can get its IP address using ifconfig.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: .NET + Copying large amounts of memory tricks Back in the olden days, there were tricks (often used for blitting offscreen framebuffers) to copy large chunks of memory from one location to another.
Now that I'm working in C#, I've found the need to move an array of bytes (roughly 32k in size) from one memory location to another approximately 60 times per second.
Somehow, I don't think a byte-by-byte copy in a for loop is optimal here.
Does anyone know a good trick to do this kind of work while still staying in purely managed code?
If not, I'm willing to do some P/Invoking or go into unsafe mode, but I'd like to stay managed if I can for cross platform reasons.
EDIT:
Some benchmarking code I wrote up just for fun:
Byte by Byte: 15.6192
4 Bytes per loop: 15.6192
Block Copy: 0
Byte[] src = new byte[65535];
Byte[] dest = new byte[65535];
DateTime startTime, endTime;
startTime = DateTime.Now;
for (int k = 0; k < 60; k++)
{
for (int i = 0; i < src.Length; i++)
{
dest[i] = src[i];
}
}
endTime = DateTime.Now;
Console.WriteLine("Byte by Byte: " + endTime.Subtract(startTime).TotalMilliseconds);
startTime = DateTime.Now;
for (int k = 0; k < 60; k++)
{
int i = 0;
while (i < src.Length)
{
if (i + 4 > src.Length)
{
// Copy the remaining bytes one at a time.
while(i < src.Length)
{
dest[i] = src[i];
i++;
}
break;
}
dest[i] = src[i];
dest[i + 1] = src[i + 1];
dest[i + 2] = src[i + 2];
dest[i + 3] = src[i + 3];
i += 4;
}
}
endTime = DateTime.Now;
Console.WriteLine("4 Bytes per loop: " + endTime.Subtract(startTime).TotalMilliseconds);
startTime = DateTime.Now;
for (int k = 0; k < 60; k++)
{
Buffer.BlockCopy(src, 0, dest,0, src.Length);
}
endTime = DateTime.Now;
Console.WriteLine("Block Copy: " + endTime.Subtract(startTime).TotalMilliseconds);
A: I think you can count on Buffer.BlockCopy() to do the right thing
http://msdn.microsoft.com/en-us/library/system.buffer.blockcopy.aspx
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: IIS 6.0 on Enterprise Server - Memory Limit We want to switch a web server from Windows 2003 to Windows 2003 Enterprise (64-bit) to use 8GB of RAM. Will IIS 6.0 and an ASP.NET 1.1 application be able to benefit from the change?
A: Since ASP.NET 1.1 has no x64 support, you are limited to running IIS 6 using 32-bit worker processes. The /3GB switch doesn't do anything on x64, but x64 natively gives 32-bit processes 4 GB instead of 2 GB, so you will have more memory available for your worker process.
You will need to set the AppPools to 32 bit:
cscript %SystemDrive%\inetpub\AdminScripts\adsutil.vbs set w3svc/AppPools/Enable32bitAppOnWin64 1
You could consider tweaking the ASP.NET memory limit from the default 60% to 80%, with which we've had some success:
<system.web>
<processModel memoryLimit="80" />
</system.web>
This can stress the app pool when you get up into the 1.2 GB to 1.6 GB range.
Other things to consider is that most ASP.Net 1.1 applications have no issues when run in a 2.0 application pool, allowing you to easily convert your 1.1 32 bit application to a 2.0 64 bit application. This doesn't require any recompilation, just change the app pool to 2.0, then switch to x64 using the above ADSUTIL.VBS script (set to 0 rather than 1).
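That is, the same command as above with the flag flipped:
cscript %SystemDrive%\inetpub\AdminScripts\adsutil.vbs set w3svc/AppPools/Enable32bitAppOnWin64 0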
A: My understanding is that there was a virtual address space limitation of 3 GB in ASP.NET 1.1, and that it was never made 64 bit compatible, though 2.0 was.
You can get IIS 6.0 to run 32 bit (i.e. ASP.NET 1.1) on the 64 OS, but it will be in a 32 bit mode (along with anything else hosted, including ASP.NET 2.0 sites).
Microsoft article on switching between 32 bit and 64 bit
A: The memory limit is 2 GB unless you use the /3GB switch, which gives the process an extra gigabyte by reserving only 1 GB for the kernel. The only way to go beyond 3 GB with IIS is to run the 64-bit version.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to create Excel 2003 UDF with a C# Excel add-in using VSTO 2005 SE I saw an article on creating Excel UDFs in VSTO managed code, using VBA: http://blogs.msdn.com/pstubbs/archive/2004/12/31/344964.aspx.
However, I want to get this working in a C# Excel add-in using VSTO 2005 SE; can anyone help?
I tried the technique Romain pointed out, but when trying to load Excel I get the following exception:
The customization assembly could not be found or could not be loaded. You can still edit and save the document.....
Details:
Type mismatch. (Exception from HRESULT: 0x80020005 (DISP_E_TYPEMISMATCH))
************** Exception Text **************
System.Runtime.InteropServices.COMException (0x80020005): Type mismatch. (Exception from HRESULT: 0x80020005 (DISP_E_TYPEMISMATCH))
at Microsoft.Office.Interop.Excel._Application.Run(Object Macro, Object Arg1, Object Arg2, Object Arg3, Object Arg4, Object Arg5, Object Arg6, Object Arg7, Object Arg8, Object Arg9, Object Arg10, Object Arg11, Object Arg12, Object Arg13, Object Arg14, Object Arg15, Object Arg16, Object Arg17, Object Arg18, Object Arg19, Object Arg20, Object Arg21, Object Arg22, Object Arg23, Object Arg24, Object Arg25, Object Arg26, Object Arg27, Object Arg28, Object Arg29, Object Arg30)
at ExcelWorkbook4.ThisWorkbook.ThisWorkbook_Startup(Object sender, EventArgs e) in C:\projects\ExcelWorkbook4\ExcelWorkbook4\ThisWorkbook.cs:line 42
at Microsoft.Office.Tools.Excel.Workbook.OnStartup()
at ExcelWorkbook4.ThisWorkbook.FinishInitialization() in C:\projects\ExcelWorkbook4\ExcelWorkbook4\ThisWorkbook.Designer.cs:line 66
at Microsoft.VisualStudio.Tools.Applications.Runtime.AppDomainManagerInternal.ExecutePhase(String methodName)
at Microsoft.VisualStudio.Tools.Applications.Runtime.AppDomainManagerInternal.ExecuteCustomizationStartupCode()
at Microsoft.VisualStudio.Tools.Applications.Runtime.AppDomainManagerInternal.ExecuteCustomization(IHostServiceProvider serviceProvider)
************** Loaded Assemblies **************
A: You should also have a look at ExcelDna - http://www.codeplex.com/exceldna. ExcelDna allows managed assemblies to expose user-defined functions (UDFs) and macros to Excel through the native .xll interface. The project is open-source and freely allows commercial use.
Your user-defined functions can be written in C#, Visual Basic, F#, Java (using IKVM.NET), and can be compiled to a .dll or exposed through a text-based script file. Excel versions from Excel 97 to Excel 2007 are supported.
Some advantages of using the .xll interface rather than making automation add-ins include:
*
*older versions of Excel are supported,
*deployment is much easier since COM registration is not required and references to user-defined functions in worksheet formulae do not bind to the location of the add-in, and
*the performance of UDF functions exposed through ExcelDna is excellent.
A: Creating a UDF using a simple automation add-in is quite easy. You will have to create a dedicated assembly and make it visible from COM. Unfortunately, you can't define a UDF in a managed VSTO Excel add-in.
Anyway, there is a workaround, which I found very limiting. It is described in this discussion. Basically, your add-in needs to inject some VB code into each workbook to register the UDFs it contains.
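For the automation add-in route, a minimal sketch of such a COM-visible class (the names are illustrative; you still need to register the assembly for COM interop and add it under Excel's automation add-ins):
using System.Runtime.InteropServices;

[ComVisible(true)]
[ClassInterface(ClassInterfaceType.AutoDual)]
public class MyUdfLibrary
{
    // Once registered and added in Excel, callable from a cell as =AddTwo(1, 2)
    public double AddTwo(double x, double y)
    {
        return x + y;
    }
}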
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Moving the child nodes of an XML node upwards Imagine I have the following XML file:
<a>before<b>middle</b>after</a>
I want to convert it into something like this:
<a>beforemiddleafter</a>
In other words I want to get all the child nodes of a certain node, and move them to the parent node in order. This is like doing this command: "mv ./directory/* .", but for xml nodes.
I'd like to do this using Unix command-line tools. I've been trying with xmlstarlet, which is a powerful command-line XML manipulator. I tried doing something like this, but it doesn't work:
echo "<a>before<b>middle</b>after</a>" | xmlstarlet ed -m "//b/*" ".."
Update: XSLT templates are fine, since they can be called from the command line.
My goal here is 'remove the links from an XHTML page'; in other words, replace where the link was with the contents of the link tag.
A: Example input file (test.xml):
<?xml version="1.0" encoding="UTF-8"?>
<test>
<x>before<y>middle</y>after</x>
<a>before<b>middle</b>after</a>
<a>before<b>middle</b>after</a>
<x>before<y>middle</y>after</x>
<a>before<b>middle</b>after</a>
<embedded>foo<a>before<b>middle</b>after</a>bar</embedded>
</test>
XSLT stylesheet (collapse.xsl):
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>
<xsl:template match="a">
<xsl:copy>
<xsl:value-of select="."/>
</xsl:copy>
</xsl:template>
</xsl:stylesheet>
Run with XmlStarlet using
xml tr collapse.xsl test.xml
Produces:
<?xml version="1.0"?>
<test>
<x>before<y>middle</y>after</x>
<a>beforemiddleafter</a>
<a>beforemiddleafter</a>
<x>before<y>middle</y>after</x>
<a>beforemiddleafter</a>
<embedded>foo<a>beforemiddleafter</a>bar</embedded>
</test>
The first template in the stylesheet is the basic identity transformation (just copies the whole of your input XML document). The second template specifically matches the elements that you want to 'collapse' and just copies the tags and inserts the string value of the element (=concatenation of the string-value of descendant nodes).
A: In XSLT, you could just write:
<xsl:template match="a"><a><xsl:apply-templates /></a></xsl:template>
<xsl:template match="a/b"><xsl:value-of select="."/></xsl:template>
And you'd get:
<a>beforemiddleafter</a>
So if you wanted to do this the easy way you could just create an XSL stylesheet and run your XML file through that.
I realise this isn't what you said you'd like to do (using the Unix command line), however. I don't know anything about Unix, so maybe someone else can fill in the blanks, e.g. some sort of command-line calls that can execute the above.
A: If your actual goal is to remove the links from a web page, then you should use a stylesheet like this, which matches all XHTML <a> elements (I'm assuming you're using XHTML?) and simply applies templates to their content:
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:h="http://www.w3.org/1999/xhtml"
exclude-result-prefixes="h">
<!-- Don't copy the <a> elements, just process their content -->
<xsl:template match="h:a">
<xsl:apply-templates />
</xsl:template>
<!-- identity template; copies everything by default -->
<xsl:template match="node()|@*">
<xsl:copy>
<xsl:apply-templates select="@*|node()" />
</xsl:copy>
</xsl:template>
</xsl:stylesheet>
This stylesheet will deal with a situation where you have something nested within the <a> element that you want to retain, such as:
<p>Here is <a href="....">some <em>linked</em> text</a>.</p>
which you will want to come out as:
<p>Here is some <em>linked</em> text.</p>
And it will deal with the situation where you have the link nested within an unexpected element between the usual parent (the <p> element) and the <a> element, such as:
<p>Here is <em>some <a href="...">linked</a> text</em>.</p>
A: Using xmlstarlet:
xmlstr='<a>before<b>middle</b>after</a>'
updatestr="$(echo "$xmlstr" | xmlstarlet sel -T -t -m "/a/b" -v '../.' -n | sed -n '1{p;q;}')"
echo "$xmlstr" | xmlstarlet ed -u "/a" -v "$updatestr"
A: Have you tried this?
file.xml
<r>
<a>start<b>middle</b>end</a>
</r>
template.xsl
<xsl:template match="/">
<a><xsl:value-of select="r/a" /></a>
</xsl:template>
output
<a>startmiddleend</a>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124123",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Why are Exceptions not Checked in .NET? I know that by Googling I can find an appropriate answer, but I prefer listening to your personal (and maybe technical) opinions.
What is the main reason of the difference between Java and C# in throwing exceptions?
In Java, the signature of a method that throws an exception has to use the "throws" keyword, while in C# you don't know at compile time whether an exception could be thrown.
A: In the article The Trouble with Checked Exceptions and in Anders Hejlsberg's (designer of the C# language) own voice, there are three main reasons for C# not supporting checked exceptions as they are found and verified in Java:
*Neutral on Checked Exceptions
“C# is basically silent on the checked exceptions issue. Once a better solution is known—and trust me we continue to think about it—we can go back and actually put something in place.”
*Versioning with Checked Exceptions
“Adding a new exception to a throws clause in a new version breaks client code. It's like adding a method to an interface. After you publish an interface, it is for all practical purposes immutable, …”
“It is funny how people think that the important thing about exceptions is handling them. That is not the important thing about exceptions. In a well-written application there's a ratio of ten to one, in my opinion, of try finally to try catch. Or in C#, using statements, which are like try finally.”
*Scalability of Checked Exceptions
“In the small, checked exceptions are very enticing… The trouble begins when you start building big systems where you're talking to four or five different subsystems. Each subsystem throws four to ten exceptions. Now, each time you walk up the ladder of aggregation, you have this exponential hierarchy below you of exceptions you have to deal with. You end up having to declare 40 exceptions that you might throw.… It just balloons out of control.”
In his article, “Why doesn't C# have exception specifications?”, Anson Horton (Visual C# Program Manager) also lists the following reasons (see the article for details on each point):
*Versioning
*Productivity and code quality
*Impracticality of having class author differentiate between checked and unchecked exceptions
*Difficulty of determining the correct exceptions for interfaces.
It is interesting to note that C# does, nonetheless, support documentation of exceptions thrown by a given method via the <exception> tag and the compiler even takes the trouble to verify that the referenced exception type does indeed exist. There is, however, no check made at the call sites or usage of the method.
You may also want to look into the Exception Hunter, which is a commerical tool by Red Gate Software, that uses static analysis to determine and report exceptions thrown by a method and which may potentially go uncaught:
Exception Hunter is a new analysis tool that finds and reports the set of possible exceptions your functions might throw – before you even ship. With it, you can locate unhandled exceptions easily and quickly, down to the line of code that is throwing the exceptions. Once you have the results, you can decide which exceptions need to be handled (with some exception handling code) before you release your application into the wild.
Finally, Bruce Eckel, author of Thinking in Java, has an article called, “Does Java need Checked Exceptions?”, that may be worth reading up as well because the question of why checked exceptions are not there in C# usually takes root in comparisons to Java.
A: Fundamentally, whether an exception should be handled or not is a property of the caller, rather than of the function.
For example, in some programs there is no value in handling an IOException (consider ad hoc command-line utilities to perform data crunching; they're never going to be used by a "user", they're specialist tools used by specialist people). In some programs, there is value in handling an IOException at a point "near" to the call (perhaps if you get a FNFE for your config file you'll drop back to some defaults, or look in another location, or something of that nature). In other programs, you want it to bubble up a long way before it's handled (for example you might want it to abort until it reaches the UI, at which point it should alert the user that something has gone wrong.
Each of these cases is dependent on the application, and not the library. And yet, with checked exceptions, it is the library that makes the decision. The Java IO library makes the decision that it will use checked exceptions (which strongly encourage handling that's local to the call) when in some programs a better strategy may be non-local handling, or no handling at all.
This shows the real flaw with checked exceptions in practice, and it's far more fundamental than the superficial (although also important) flaw that too many people will write stupid exception handlers just to make the compiler shut up. The problem I describe is an issue even when experienced, conscientious developers are writing the program.
A: Because the response to checked exceptions is almost always:
try {
// exception throwing code
} catch(Exception e) {
// either
log.error("Error fooing bar",e);
// OR
throw new RuntimeException(e);
}
If you actually know that there is something you can do if a particular exception is thrown, then you can catch it and then handle it, but otherwise it's just incantations to appease the compiler.
A: Interestingly, the guys at Microsoft Research have added checked exceptions to Spec#, their superset of C#.
A: Anders himself answers that question in this episode of the Software engineering radio podcast
A: The basic design philosophy of C# is that actually catching exceptions is rarely useful, whereas cleaning up resources in exceptional situations is quite important. I think it's fair to say that using (the IDisposable pattern) is their answer to checked exceptions. See [1] for more.
*http://www.artima.com/intv/handcuffs.html
A: By the time .NET was designed, Java had had checked exceptions for quite some time, and the feature was viewed by Java developers as controversial at best. Thus the .NET designers chose not to include it in the C# language.
A: I went from Java to C# because of a job change. At first, I was a little concerned about the difference, but in practice, it hasn't made a difference.
Maybe, it's because I come from C++, which has the exception declaration, but it's not commonly used. I write every single line of code as if it could throw -- always use using around Disposable and think about cleanup I should do in finally.
In retrospect the propagation of the throws declaration in Java didn't really get me anything.
I would like a way to say that a function definitely never throws -- I think that would be more useful.
A: In addition to the responses already written, not having checked exceptions helps a lot in many situations. Checked exceptions make generics harder to implement, and if you have read the closure proposals you will notice that every single closure proposal has to work around checked exceptions in a rather ugly way.
A: I sometimes miss checked exceptions in C#/.NET.
I suppose besides Java no other notable platform has them. Maybe the .NET guys just went with the flow...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124143",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42"
}
|
Q: Should I never use primitive types again? Mixing the use of primitive data types and their respective wrapper classes, in Java, can lead to a lot of bugs. The following example illustrates the issue:
int i = 4;
...
if (i == 10)
doStuff();
Later on you figure that you want the variable i to be either defined or undefined, so you change the above instantiation to:
Integer i = null;
Now the equality check fails.
Is it good Java practise to always use the primitive wrapper classes? It obviously would get some bugs out of the way early, but what are the downsides to this? Does it impact performance or the application's memory footprint? Are there any sneaky gotchas?
A: Firstly, switching from using a primitive to using an object just to get the ability to set it to null is probably a bad design decision. I often have arguments with my coworkers about whether or not null is a sentinel value, and my opinion is usually that it is not (and thus shouldn't be prohibited like sentinel values should be), but in this particular case you're going out of your way to use it as a sentinel value. Please don't. Create a boolean that indicates whether or not your integer is valid, or create a new type that wraps the boolean and integer together.
Usually, when using newer versions of Java, I find I don't need to explicitly create or cast to the object versions of primitives because of the auto-boxing support that was added some time in 1.5 (maybe 1.5 itself).
A: I'd suggest using primitives all the time unless you really have the concept of "null".
Yes, the VM does autoboxing and all that now, but it can lead to some really weird cases where you'll get a null pointer exception at a line of code that you really don't expect, and you have to start doing null checks on every mathematical operation. You can also start getting some non-obvious behaviors if you start mixing types and getting weird autoboxing behaviors.
For float/doubles you can treat NaN as null, but remember that NaN != NaN so you still need special checks like !Float.isNaN(x).
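A quick illustration of that NaN quirk (plain Java, standard library only):
double x = Double.NaN;
System.out.println(x == x);          // false: NaN never compares equal to itself
System.out.println(Double.isNaN(x)); // true: the reliable check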
It would be really nice if there were collections that supported the primitive types instead of having to waste the time/overhead of boxing.
A: Using the boxed types does have both performance and memory issues.
When doing comparisons (eg (i == 10) ), java has to unbox the type before doing the comparison. Even using i.equals(TEN) uses a method call, which is costlier and (IMO) uglier than the == syntax.
Re memory, the object has to be stored on the heap (which also takes a hit on performance) as well as storing the value itself.
A sneaky gotcha? i.equals(j) when i is null.
I always use the primitives, except when it may be null, but always check for null before comparison in those cases.
A: In your example, the if statement will only stay reliable for small values — and only by accident: Integer autoboxing caches the values from -128 to 127 and returns the same instance for each number in that range, so == between two boxed Integers is a reference comparison that merely happens to work for cached values.
So it is worse than you present it...
if( i == Integer.valueOf( 10 ) )
will work as before, but
if( i == Integer.valueOf( 128 ) )
will fail, because both operands are boxed and == compares references. (Comparing a boxed Integer directly against an int literal unboxes it first, so the value comparison there stays correct — until i is null and you get a NullPointerException instead.) It is for reasons like this that I always explicitly create objects when I need them, and tend to stick to primitive variables if at all possible
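A small sketch of the cache behavior (the printed results are noted in the comments):
Integer a = 100, b = 100;        // both autoboxed to the same cached instance
Integer c = 200, d = 200;        // outside the cache: two distinct objects
System.out.println(a == b);      // true
System.out.println(c == d);      // false
System.out.println(c.equals(d)); // true - equals() compares values, not references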
A: The Java POD types are there for a reason. Besides the overhead, you can't do normal operations with objects. An Integer is an object, which needs to be allocated and garbage collected. An int isn't.
A: If that value can be empty, you may find that in your design you are in need of something else.
There are two possibilities--either the value is just data (the code won't act any differently if it's filled in or not), or it's actually indicating that you have two different types of object here (the code acts differently if there is a value than a null)
If it's just data for display/storage, you might consider using a real DTO--one that doesn't have it as a first-class member at all. Those will generally have a way to check to see if a value has been set or not.
If you check for the null at some point, you may want to be using a subclass, because when there is one difference, there are usually more. At the very least you want a better way to indicate the difference than "if primitiveIntValue == null" — that doesn't really mean anything.
A: Don't switch to non-primitives just to get this facility. Use a boolean to indicate whether the value was set or not. If you don't like that solution and you know that your integers will be in some reasonable limit (or don't care about the occasional failure) use a specific value to indicate 'uninitialized', such as Integer.MIN_VALUE. But that's a much less safe solution than the boolean.
A: When you got to that 'Later on' point, a little more work needed to be accomplished during the refactoring. Use primitives when possible. Period. Then make POJOs if more functionality is needed. The primitive wrapper classes, in my opinion, are best used for data that needs to travel across the wire, meaning networked apps. Allowing nulls as acceptable values causes headaches as a system grows: too much code gets wasted, or missed, guarding what should be simple comparisons.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
}
|
Q: Make a div fade away nicely after a given amount of time What is the best way to make a <div> fade away after a given amount of time (without using some of the JavaScript libraries available).
I'm looking for a very lightweight solution not requiring a huge JavaScript library to be sent to the browser.
A: Here's some javascript that does it. I found it on a javascript tutorial web site somewhere (which I was unable to find again) and modified it.
var TimeToFade = 200.0;
function fade(eid)
{
var element = document.getElementById(eid);
if(element == null) return;
if(element.FadeState == null)
{
if(element.style.opacity == null || element.style.opacity == ''
|| element.style.opacity == '1') {
element.FadeState = 2;
} else {
element.FadeState = -2;
}
}
if(element.FadeState == 1 || element.FadeState == -1) {
element.FadeState = element.FadeState == 1 ? -1 : 1;
element.FadeTimeLeft = TimeToFade - element.FadeTimeLeft;
} else {
element.FadeState = element.FadeState == 2 ? -1 : 1;
if(element.FadeState == 1) {
element.style.display = ""; // re-show a hidden element before fading it in
}
element.FadeTimeLeft = TimeToFade;
setTimeout("animateFade(" + new Date().getTime()
+ ",'" + eid + "')", 33);
}
}
function animateFade(lastTick, eid)
{
var curTick = new Date().getTime();
var elapsedTicks = curTick - lastTick;
var element = document.getElementById(eid);
if(element.FadeTimeLeft <= elapsedTicks) {
element.style.opacity = element.FadeState == 1 ? '1' : '0';
element.style.filter = 'alpha(opacity = '
+ (element.FadeState == 1 ? '100' : '0') + ')';
element.FadeState = element.FadeState == 1 ? 2 : -2;
// only hide the element once a fade *out* has completed
if(element.FadeState == -2) {
element.style.display = "none";
}
return;
}
element.FadeTimeLeft -= elapsedTicks;
var newOpVal = element.FadeTimeLeft/TimeToFade;
if(element.FadeState == 1) {
newOpVal = 1 - newOpVal;
}
element.style.opacity = newOpVal;
element.style.filter = 'alpha(opacity = ' + (newOpVal*100) + ')';
setTimeout("animateFade(" + curTick + ",'" + eid + "')", 33);
}
The following html shows how it works:
<html><head>
<script type="text/javascript" src="fade.js"></script>
</head><body>
<div id="fademe" onclick="fade( 'fademe' )">
<p>This will fade when you click it</p>
</div>
</body></html>
A: These days, I would always use a library for that -- the progress they've made has been phenomenal, and the cross-browser functionality alone is worth it. So this answer is a non-answer. I'd just like to point out that jQuery is all of 15kB.
A: Not sure why you'd be so against using something like jQuery, which would make accomplishing this effect all but trivial, but essentially, you need to wrap a series of changes to the -moz-opacity, opacity, and filter:alpha CSS rules in a setTimeout().
Or, use jQuery, and wrap a fadeOut() call in setTimeout. Your choice.
A: Use setTimeout with the initial time to trigger the fade routine and then use setTimeout with low timer to step through the opacity level of the image until it's gone.
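For instance, a minimal no-library sketch of that approach (the element id, delay and duration here are made up):
function fadeAfter(eid, delay, duration) {
    setTimeout(function() {
        var el = document.getElementById(eid);
        var start = new Date().getTime();
        var timer = setInterval(function() {
            var elapsed = new Date().getTime() - start;
            var opacity = Math.max(1 - elapsed / duration, 0);
            el.style.opacity = opacity;
            el.style.filter = 'alpha(opacity=' + (opacity * 100) + ')'; // old IE
            if (opacity === 0) {   // fade finished: stop the timer, hide the div
                clearInterval(timer);
                el.style.display = 'none';
            }
        }, 33);
    }, delay);
}
fadeAfter('fademe', 3000, 500); // start fading #fademe after 3s, over half a second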
However, jQuery can get down to about 15k and is a one-time download for the client so I wouldn't call it huge.
A: Try the YUI (Yahoo User Interface) Animation library: http://developer.yahoo.com/yui/animation/
Don't reinvent the wheel. Libraries are our friends. :-)
A: I know you're down on libraries, but I'd recommend taking a look at moo.fx: http://moofx.mad4milk.net/ - I think it's like 3k.
jQuery is pretty damn small too.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Bash variable scope Please explain to me why the very last echo statement is blank. I expect that XCODE is incremented in the while loop to a value of 1:
#!/bin/bash
OUTPUT="name1 ip ip status" # normally output of another command with multi line output
if [ -z "$OUTPUT" ]
then
echo "Status WARN: No messages from SMcli"
exit $STATE_WARNING
else
echo "$OUTPUT"|while read NAME IP1 IP2 STATUS
do
if [ "$STATUS" != "Optimal" ]
then
echo "CRIT: $NAME - $STATUS"
echo $((++XCODE))
else
echo "OK: $NAME - $STATUS"
fi
done
fi
echo $XCODE
I've tried using the following statement instead of the ++XCODE method
XCODE=`expr $XCODE + 1`
and it too won't print outside of the while statement. I think I'm missing something about variable scope here, but the ol' man page isn't showing it to me.
A: One more option:
#!/bin/bash
cat /some/file | while read line
do
var="abc"
echo $var | xsel -i -p # redirect stdin to the X primary selection
done
var=$(xsel -o -p) # redirect back to stdout
echo $var
EDIT:
Here, xsel is a requirement (install it).
Alternatively, you can use xclip:
xclip -i -selection clipboard
instead of
xsel -i -p
A: I got around this when I was making my own little du:
ls -l | sed '/total/d ; s/ */\t/g' | cut -f 5 |
( SUM=0; while read SIZE; do SUM=$(($SUM+$SIZE)); done; echo "$(($SUM/1024/1024/1024))GB" )
The point is that I make a subshell with ( ) containing my SUM variable and the while, but I pipe into the whole ( ) instead of into the while itself, which avoids the gotcha.
A: #!/bin/bash
OUTPUT="name1 ip ip status"
+export XCODE=0;
if [ -z "$OUTPUT" ]
----
echo "CRIT: $NAME - $STATUS"
- echo $((++XCODE))
+ export XCODE=$(( $XCODE + 1 ))
else
echo $XCODE
see if those changes help
A: Another option is to output the results into a file from the subshell and then read it in the parent shell. something like
#!/bin/bash
EXPORTFILE=/tmp/exportfile${RANDOM}
cat /tmp/randomFile | while read line
do
LINE="$LINE $line"
echo $LINE > $EXPORTFILE
done
LINE=$(cat $EXPORTFILE)
A: Because you're piping into the while loop, a sub-shell is created to run the while loop.
Now this child process has its own copy of the environment and can't pass any
variables back to its parent (as in any unix process).
Therefore you'll need to restructure so that you're not piping into the loop.
Alternatively you could run in a function, for example, and echo the value you
want returned from the sub-process.
http://tldp.org/LDP/abs/html/subshells.html#SUBSHELL
A: This should work as well (because echo and while are in same subshell):
#!/bin/bash
cat /tmp/randomFile | (while read line
do
LINE="$LINE $line"
done && echo $LINE )
A: The problem is that processes put together with a pipe are executed in subshells (and therefore have their own environment). Whatever happens within the while does not affect anything outside of the pipe.
Your specific example can be solved by rewriting the pipe to
while ... do ... done <<< "$OUTPUT"
or perhaps
while ... do ... done < <(echo "$OUTPUT")
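Applied to the script in the question, that rewrite would look roughly like this:
XCODE=0
while read NAME IP1 IP2 STATUS
do
    if [ "$STATUS" != "Optimal" ]
    then
        echo "CRIT: $NAME - $STATUS"
        XCODE=$((XCODE + 1))
    else
        echo "OK: $NAME - $STATUS"
    fi
done <<< "$OUTPUT"   # no pipe means no subshell, so XCODE survives the loop
echo $XCODE          # now prints 1 as expected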
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "120"
}
|
Q: A Delphi/FreePascal lib or function that emulates the PHP's function parse_url I'm doing a sitemap producer in Object Pascal and need a good function or lib to emulate the parse_url function on PHP.
Does anyone know of any good ones?
A: I am not familiar with the parse_url function on PHP, but you might try the TIdURI class that is included with Indy (which in turn is included with most recent Delphi releases). I think they ported it to FreePascal as well.
TIdURI is a TObject descendant that encapsulates a Universal Resource Identifier, as described in the Internet Standards document:
RFC 1630 - Universal Resource Identifiers in WWW
TIdURI provides methods and properties for assembly and disassembly of URIs using the component parts that make up the URI, including: Protocol, Host, Port, Path, Document, and Bookmark.
If that does not work, please give a specific example of what you are trying to accomplish - what are you trying to parse out of a URL.
A: Freepascal has the unit URIParser with the ParseURI function. An example of how to use it can be found in one of the examples in Freepascal's source. Or an old example which is somewhat easier to understand.
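A minimal sketch of what using it might look like (field names as declared in the URIParser unit's TURI record; the URL itself is made up):
program ParseUrlDemo;
uses URIParser;
var
  URI: TURI;
begin
  URI := ParseURI('http://user:pass@example.com:8080/path/doc.html?x=1#frag');
  WriteLn('Protocol: ', URI.Protocol);  // http
  WriteLn('Host:     ', URI.Host);      // example.com
  WriteLn('Port:     ', URI.Port);      // 8080
  WriteLn('Path:     ', URI.Path);      // /path/
  WriteLn('Document: ', URI.Document);  // doc.html
  WriteLn('Params:   ', URI.Params);    // x=1
  WriteLn('Bookmark: ', URI.Bookmark);  // frag
end.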
A: Be careful with Indy's TIdURI class. It was supposed to be a general-purpose parser, but it has a few bugs and design flaws in it that prevent it from being a fully compliant parser. I'm currently in the process of writing a new class from scratch for Indy 11 to replace TIdURI. It will be a fully compliant URI parser, and it will also support IRI (RFC 3987) parsing as well.
A: If you're using wininet.dll you can also use their InternetCrackUrl API.
A: The URI RFC lists this regular expression for URI parsing:
^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?
 12             3  4          5       6  7        8 9
Where the numbers are these groups:
$1 = http:
$2 = http
$3 = //www.ics.uci.edu
$4 = www.ics.uci.edu
$5 = /pub/ietf/uri/
$6 = <undefined>
$7 = <undefined>
$8 = #Related
$9 = Related
For this URI:
http://www.ics.uci.edu/pub/ietf/uri/#Related
The regular expression is pretty simple and uses no special features the regular expression lib has to provide, so grab one that is compatible with your pascal implementation and there you go.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Comparison of Python and Perl solutions to Wide Finder challenge I'd be very grateful if you could compare O'Rourke's winning Perl solution to Lundh's Python solution, as I don't know Perl well enough to understand what's going on there. More specifically, I'd like to know what gave the Perl version its 3x advantage: algorithmic superiority, quality of C extensions, other factors?
Wide Finder: Results
A: Perl is heavily optimized for text processing. There are so many factors that it's hard to say what's the exact difference. Text is represented completely differently internally (utf-8 versus utf-16/utf-32) and the regular expression engines are completely different too. Python's regular expression engine is a custom one and not as much used as the perl one. There are very few developers working on it (I think it's largely unmaintained) in contrast to the Perl one which is basically the "core of the language".
After all Perl is the text processing language.
A: Many-core Engine (MCE) has been released for Perl. MCE does quite well at this, even when reading directly from disk with 8 workers (cold cache). Compare to wf_mmap. MCE follows a bank queuing model when reading input data. Look under the images folder for slides on it.
The source code is hosted at http://code.google.com/p/many-core-engine-perl/
The perl documentation can be read at https://metacpan.org/module/MCE
An implementation of the Wide Finder with MCE is provided under examples/tbray/
https://metacpan.org/source/MARIOROY/MCE-1.514/examples/tbray/
Enjoy MCE.
Script....: baseline1 baseline2 wf_mce1 wf_mce2 wf_mce3 wf_mmap
Cold cache: 1.674 1.370 1.252 1.182 1.174 3.056
Warm cache: 1.236 0.923 0.277 0.106 0.098 0.092
A: The better regex implementation of Perl is one part of the story. That can't explain, however, why the Perl implementation scales better. The difference becomes bigger with more processors; for some reason the Python implementation has an issue there.
A: The Perl implementation uses the mmap system call. What that call does is establish a pointer which appears, to the program, to be a normal segment of memory or buffer: it maps the contents of a file to a region of memory. There are performance advantages of doing this vs. normal file IO (read) - one is that no user-space library calls are necessary to get access to the data, another is that fewer copy operations are usually necessary (eg: moving data between kernel and user space).
Perl's strings and regular expressions are 8-bit byte based (as opposed to utf16 for Java for example), so Perl's native 'character type' is the same encoding of the mmapped file.
When the regular expression engine then operates on the mmap-backed variable, it is directly accessing the file data via the mmapped memory region - without going through Perl's IO functions, or even libc's IO functions.
The mmap is probably largely responsible for the performance difference vs the Python version using the normal Python IO libraries - which additionally introduce the overhead of looking for line breaks.
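To give a flavor of the technique, here is a rough sketch of a memory-mapped read in Perl using the PerlIO :mmap layer (the file name and pattern are stand-ins, not the actual Wide Finder code):
#!/usr/bin/perl
use strict;
use warnings;

# The :mmap layer maps the file into memory instead of copying it
# through read() buffers.
open my $fh, '<:mmap', 'access.log' or die "open: $!";

my %count;
while (<$fh>) {
    $count{$1}++ if m{GET /some/path/(\S+) };   # placeholder pattern
}

my @top = (sort { $count{$b} <=> $count{$a} } keys %count)[0 .. 9];
for my $key (grep { defined } @top) {
    print "$count{$key}: $key\n";
}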
The Perl program also supports a -J option to parallelize the processing, where the open "-|" causes a fork() and the file handle in the parent is connected to the child's stdout. The child processes serialize their results to stdout and the parent de-serializes them to coordinate and summarize the results.
A:
The Perl implementation uses the mmap system call.
This. It avoids buffer copying and provides async I/O.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: How can I do the equivalent of "SHOW TABLES" in T-SQL? I would like to do a lookup of tables in my SQL Server 2005 Express database based on table name. In MySQL I would use SHOW TABLES LIKE "Datasheet%", but in T-SQL this throws an error (it tries to look for a SHOW stored procedure and fails).
Is this possible, and if so, how?
A: Someone who doesn't know the table name will not be able to get the result using the above answers.
TRY THIS
SELECT * FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA='dbo';
A: I know you've already accepted an answer, but why not just use the much simpler sp_tables?
sp_tables 'Database_Name'
A: Try this :
select * from information_schema.columns
where table_name = 'yourTableName'
also look for other information_schema views.
A: This will give you a list of the tables in the current database:
Select Table_name as "Table name"
From Information_schema.Tables
Where Table_type = 'BASE TABLE'
  And Objectproperty(Object_id(Table_name), 'IsMsShipped') = 0
Some other useful T-SQL bits can be found here: http://www.devx.com/tips/Tip/28529
A: And, since INFORMATION_SCHEMA is part of the SQL-92 standard, a good many databases support it - including MySQL.
A: Try following
SELECT table_name
FROM information_schema.tables
WHERE
table_name LIKE 'Datasheet%'
A: Try it :
SELECT * FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME LIKE '%'
A: MS is slowly phasing out methods other than information_schema views. so for forward compatibility always use those.
A: Try this:
USE your_database
go
Sp_tables
go
A: Try this
SELECT * FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME LIKE 'Datasheet%'
A: I know this is an old question but I've just come across it.
Normally I would say access the information_schema.tables view, but having found out that PDO cannot access that database from a different data object, I needed to find a different way. It looks like sp_tables 'Database_Name' is a better way when using a non-privileged user or PDO.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47"
}
|
Q: HTML Help Workshop Crashes on Compiling a CHM Trying to build a CHM using Microsoft HTML Help Workshop. A soon as I click Compile, HTML Help Workshop states:
An internal error has occurred. An error record has been saved to c:\os.err.
The only contents of the file are:
((Today's Date & Time))
Microsoft HTML Help Workshop Version 4.74.8702
HHA Version 4.74.8702
htmlproc.cpp(114) : Assertion failure: (pszTmp == m_pCompiler->m_pHtmlMem->psz)
The error only occurs for a few select, large projects, and happens from both the command line as well as the HTML Help Workshop GUI.
What causes this error to occur and how can I fix my project to run through the compiler?
A: The Microsoft HTML Help compiler has some unstated requirements for path name sizes.
Moving the project to a directory closer to the root drive (i.e. "C:\helpsystem\") and renaming folders inside the project to smaller names reduced the path name size enough so that the project would compile.
A: I found Microsoft HTML Help Workshop to be a bit delicate to work with. Do you have all the prerequisites installed? Try running the compiler, hhc.exe, from the command line.
A: Another thing to watch out for is an Error 413 - Request Entity Too Large error.
I'm not sure how big is too big for HTML Help Workshop, but my htm file is a touch over 2MB, a large table, and it causes HTML Help Workshop to crash when processing it.
While this isn't the same problem, it was the hint I needed - I'm not the first to find this on SO.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: IMAP: "immediate" delete, without going through Trash folder? I currently filter some message from my inbox with these steps:
select inbox
pick messages
set \Deleted tag
and then repeat the process after selecting Trash.
Is there a more direct way of disposing of these messages? Or is it just a feature of the mail server that deleting a message puts it in the trash, and deleting from the trash permanently disposes of it?
A: I believe you have to call EXPUNGE after setting the \Deleted flag.
RFC 3501
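In raw protocol terms the exchange looks something like this (tags and message numbers invented); STORE flags the messages and EXPUNGE then permanently removes everything flagged \Deleted from the selected mailbox, with no Trash folder involved:
a1 SELECT INBOX
a2 STORE 4:6 +FLAGS (\Deleted)
a3 EXPUNGE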
A: Not sure exactly where you're doing these operations. IMAP itself doesn't specify that you move things to a Trash folder. Typically IMAP will let you mark a message as deleted and keep it within your inbox but marked as deleted. You can then choose to "purge" the folder which will actually delete all items marked for deletion.
A: With my mail client (thunderbird), to direct delete instead of send to trash, I hold down the Shift key along with the Delete key.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Best Practices of Test Driven Development Using C# and RhinoMocks In order to help my team write testable code, I came up with this simple list of best practices for making our C# code base more testable. (Some of the points refer to limitations of Rhino Mocks, a mocking framework for C#, but the rules may apply more generally as well.) Does anyone have any best practices that they follow?
To maximize the testability of code, follow these rules:
*
*Write the test first, then the code. Reason: This ensures that you write testable code and that every line of code gets tests written for it.
*Design classes using dependency injection. Reason: You cannot mock or test what cannot be seen.
*Separate UI code from its behavior using Model-View-Controller or Model-View-Presenter. Reason: Allows the business logic to be tested while the parts that can't be tested (the UI) are minimized.
*Do not write static methods or classes. Reason: Static methods are difficult or impossible to isolate and Rhino Mocks is unable to mock them.
*Program off interfaces, not classes. Reason: Using interfaces clarifies the relationships between objects. An interface should define a service that an object needs from its environment. Also, interfaces can be easily mocked using Rhino Mocks and other mocking frameworks.
*Isolate external dependencies. Reason: Unresolved external dependencies cannot be tested.
*Mark as virtual the methods you intend to mock. Reason: Rhino Mocks is unable to mock non-virtual methods.
A: Know the difference between fakes, mocks and stubs and when to use each.
Avoid over specifying interactions using mocks. This makes tests brittle.
A: Definitely a good list. Here are a few thoughts on it:
Write the test first, then the code.
I agree, at a high level. But, I'd be more specific: "Write a test first, then write just enough code to pass the test, and repeat." Otherwise, I'd be afraid that my unit tests would look more like integration or acceptance tests.
Design classes using dependency injection.
Agreed. When an object creates its own dependencies, you have no control over them. Inversion of Control / Dependency Injection gives you that control, allowing you to isolate the object under test with mocks/stubs/etc. This is how you test objects in isolation.
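As a tiny illustration of the idea (all type names here are invented):
public interface IClock { DateTime Now { get; } }

public class Greeter
{
    private readonly IClock _clock;

    // The dependency comes in from outside, so a test can hand in a
    // stubbed or mocked IClock instead of the real system clock.
    public Greeter(IClock clock) { _clock = clock; }

    public string Greet()
    {
        return _clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
    }
}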
Separate UI code from its behavior using Model-View-Controller or Model-View-Presenter.
Agreed. Note that even the presenter/controller can be tested using DI/IoC, by handing it a stubbed/mocked view and model. Check out Presenter First TDD for more on that.
Do not write static methods or classes.
Not sure I agree with this one. It is possible to unit test a static method/class without using mocks. So, perhaps this is one of those Rhino Mock specific rules you mentioned.
Program off interfaces, not classes.
I agree, but for a slightly different reason. Interfaces provide a great deal of flexibility to the software developer - beyond just support for various mock object frameworks. For example, it is not possible to support DI properly without interfaces.
Isolate external dependencies.
Agreed. Hide external dependencies behind your own facade or adapter (as appropriate) with an interface. This will allow you to isolate your software from the external dependency, be it a web service, a queue, a database or something else. This is especially important when your team doesn't control the dependency (a.k.a. external).
Mark as virtual the methods you intend to mock.
That's a limitation of Rhino Mocks. In an environment that prefers hand coded stubs over a mock object framework, that wouldn't be necessary.
And, a couple of new points to consider:
Use creational design patterns. This will assist with DI, but it also allows you to isolate that code and test it independently of other logic.
Write tests using Bill Wake's Arrange/Act/Assert technique. This technique makes it very clear what configuration is necessary, what is actually being tested, and what is expected.
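For example, a hypothetical NUnit test laid out in that style (Account is a made-up class):
[Test]
public void Withdraw_ReducesBalance()
{
    // Arrange: set up the object under test
    var account = new Account(100m);

    // Act: perform the one behavior being tested
    account.Withdraw(25m);

    // Assert: verify the expected outcome
    Assert.AreEqual(75m, account.Balance);
}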
Don't be afraid to roll your own mocks/stubs. Often, you'll find that using mock object frameworks makes your tests incredibly hard to read. By rolling your own, you'll have complete control over your mocks/stubs, and you'll be able to keep your tests readable. (Refer back to previous point.)
Avoid the temptation to refactor duplication out of your unit tests into abstract base classes, or setup/teardown methods. Doing so hides configuration/clean-up code from the developer trying to grok the unit test. In this case, the clarity of each individual test is more important than refactoring out duplication.
Implement Continuous Integration. Check-in your code on every "green bar." Build your software and run your full suite of unit tests on every check-in. (Sure, this isn't a coding practice, per se; but it is an incredible tool for keeping your software clean and fully integrated.)
A: This is a very helpful post!
I would add that it is always important to understand the context and the System Under Test (SUT). Following TDD principles to the letter is much easier when you're writing new code in an environment where existing code follows the same principles. But when you're writing new code in a non-TDD legacy environment, you find that your TDD efforts can quickly balloon far beyond your estimates and expectations.
For some of you, who live in an entirely academic world, timelines and delivery may not be important, but in an environment where software is money, making effective use of your TDD effort is critical.
TDD is highly subject to the Law of Diminishing Marginal Return. In short, your efforts towards TDD are increasingly valuable until you hit a point of maximum return, after which, subsequent time invested into TDD has less and less value.
I tend to believe that TDD's primary value is in boundary (blackbox) as well as in occasional whitebox testing of mission-critical areas of the system.
A: The real reason for programming against interfaces is not to make life easier for Rhino, but to clarify the relationships between objects in the code. An interface should define a service that an object needs from its environment. A class provides a particular implementation of that service. Read Rebecca Wirfs-Brock's "Object Design" book on Roles, Responsibilities, and Collaborators.
A: If you are working with .Net 3.5, you may want to look into the Moq mocking library - it uses expression trees and lambdas to remove non-intuitive record-reply idiom of most other mocking libraries.
Check out this quickstart to see how much more intuitive your test cases become, here is a simple example:
// ShouldExpectMethodCallWithVariable
int value = 5;
var mock = new Mock<IFoo>();
mock.Expect(x => x.Duplicate(value)).Returns(() => value * 2);
Assert.AreEqual(value * 2, mock.Object.Duplicate(value));
A: Good list. One of the things that you might want to establish - and I can't give you much advice since I'm just starting to think about it myself - is when a class should be in a different library, namespace, nested namespaces. You might even want to figure out a list of libraries and namespaces beforehand and mandate that the team has to meet and decide to merge two/add a new one.
Oh, just thought of something that I do that you might want to also. I generally have a unit tests library with a test fixture per class policy where each test goes into a corresponding namespace. I also tend to have another library of tests (integration tests?) which is in a more BDD style. This allows me to write tests to spec out what the method should do as well as what the application should do overall.
A: Here's a another one that I thought of that I like to do.
If you plan to run tests from the unit test GUI as opposed to from TestDriven.Net or NAnt, then I've found it easier to set the unit testing project type to console application rather than library. This allows you to run tests manually and step through them in debug mode (which the aforementioned TestDriven.Net can actually do for you).
Also, I always like to have a Playground project open for testing bits of code and ideas I'm unfamiliar with. This should not be checked into source control. Even better, it should be in a separate source control repository on the developer's machine only.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124210",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "88"
}
|
Q: What are the advantages and disadvantages of GPGPU (general-purpose GPU) development? I am wondering what is the key thing that helps you in GPGPU development and of course what is the constraints that you find unacceptable.
Comes to mind for me:
*
*Key advantage: the raw power of these things
*Key constraint: the memory model
What's your view?
A: You have to be careful with how you interpret Tim Sweeney's statements in that Ars interview. He's saying that having two separate platforms (the CPU and GPU), one suitable for single-threaded performance and one suitable for throughput-oriented computing, will soon be a thing of the past, as our applications and hardware grow towards one another.
The GPU grew out of technology limitations with the CPU, which made the arguably more natural algorithms like ray-tracing and photon mapping nigh-undoable at reasonable resolutions and framerates. In came the GPU, with a wildly different and restrictive programming model, but maybe 2 or 3 orders of magnitude better throughput for applications painstakingly coded to that model. The two machine models had (and still have) essentially different coding styles, languages (OpenGL, DirectX, shader languages vs. traditional desktop languages), and workflows. This makes code reuse, and even algorithm/programming skill reuse, extremely difficult, and hamstrings any developer who wants to make use of a dense parallel compute substrate into this restrictive programming model.
Finally, we're coming to a point where this dense compute substrate is similarly programmable to a CPU. Although there is still a sizeable performance delta between one "core" of these massively-parallel accelerators (though the threads of execution within, for example, an SM on the G80, are not exactly cores in the traditional sense) and a modern x86 desktop core, two factors drive convergence of these two platforms:
*
*Intel and AMD are moving towards more, simpler cores on x86 chips, converging the hardware with the GPU, where units are becoming more coarse-grained and programmable over time).
*This and other forces are spawning many new applications that can take advantage of Data- or Thread-Level Parallelism (DLP/TLP), effectively utilizing this kind of substrate.
So, what Tim was saying is that the 2 distinct platforms will converge, to an even greater extent than, for instance, OpenCL affords. A salient quote from the interview:
TS: No, I see exactly where you're
heading. In the next console
generation you could have consoles
consist of a single non-commodity
chip. It could be a general processor,
whether it evolved from a past CPU
architecture or GPU architecture, and
it could potentially run
everything—the graphics, the AI,
sound, and all these systems in an
entirely homogeneous manner. That's a
very interesting prospect, because it
could dramatically simplify the
toolset and the processes for creating
software.
Right now, in the course of shipping
Unreal 3, we have to use multiple
programming languages. We use one
programming language for writing pixel
shaders, another for writing gameplay
code, and then on PlayStation 3 we use
yet another compiler to write code to
run on the Cell processor. So the
PlayStation 3 ends up being a
particular challenge, because there
you have three completely different
processors from different vendors with
different instruction sets and
different compilers and different
performance techniques. So, a lot of
the complexity is unnecessary and
makes load-balancing more difficult.
When you have, for example, three
different chips with different
programming capabilities, you often
have two of those chips sitting idle
for much of the time, while the other
is maxed out. But if the architecture
is completely uniform, then you can
run any task on any part of the chip
at any time, and get the best
performance tradeoff that way.
A: I found this article to be interesting about how GPU's won't be as necessary with the speed of CPU's and # of cores ever increasing.
http://arstechnica.com/articles/paedia/gpu-sweeney-interview.ars
A: GPUs used to be interesting for their parallel architectures and extra silicon that was mostly idle and hence could be used on the side for general-purpose programming tasks -
see - http://en.wikipedia.org/wiki/CUDA
but it might not be too relevant in the face of Lou's answer above.
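To make the programming-model point concrete, a trivial CUDA kernel looks something like this (purely illustrative):
// Scale n floats in place; each GPU thread handles one element.
__global__ void scale(float *x, float a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] *= a;
}

// Launched from host code as, for example:
//   scale<<<(n + 255) / 256, 256>>>(d_x, 2.0f, n);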
A: The key advantage is gigaflops - raw power. Disadvantages include a limited, non-orthogonal instruction set and programming model.
Here's a survey paper:
http://graphics.idav.ucdavis.edu/publications/print_pub?pub_id=907
The wikipedia article's a pretty good start.
Lou Franco points to an interview with Tim Sweeney; here's the slides of a talk he gave, which has more detail:
http://www.scribd.com/doc/5687/The-Next-Mainstream-Programming-Language-A-Game-Developers-Perspective-by-Tim-Sweeney
Might also nose around:
http://gpgpu.org
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Does Git work in Windows? I work on Linux all the time and I'm clueless about Windows, not even having a Windows box. Is Git nowadays working on Windows? Or am I making problems for my Windows pals by using it?
A: I have had no problems, even with the gui tools (gitk and git gui), using git from Cygwin. The Cygwin people are very conscientious and have a large community to boot.
A: Yes it does. Check out this screencast at GitCasts.
A: You should also checkout Git-Extensions which adds git commands as shell extensions - works great with msysgit.
A: There's a port of Tortoise for GIT, in version 0.4 so far:
*
*Tortoise GIT
A: I've heard good things about it, but a sticking point for me (and the Japanese company I work for) is lack of cross-platform Unicode filename support. It depends if that particular feature is important to you.
See Issue 80 in the msysgit bug tracker.
See the What DVCS support Unicode filenames? question I asked about this.
A: As far as I can tell msysgit works perfectly well under Windows Vista.
This after a whole 2-month experience checking out plugins and applications for Ruby on Rails :-)
Anyway, it was a breeze to install, no problem.
A: It works, but not well. If you Google around a bit, you'll find the port which uses MinGW. The main problems are instability and some very Linux-like tools (gitk). If you really need it though, you should be able to get by.
A: In the case you are primary using Eclipse as your IDE, there's a fine team provider called EGit, which is pretty easy to install. Check this: http://www.eclipse.org/egit/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124235",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
}
|
Q: What's the best way to manage a large number of tables in MS SQL Server? This question is related to another:
Will having multiple filegroups help speed up my database?
The software we're developing is an analytical tool that uses MS SQL Server 2005 to store relational data. Initial analysis can be slow (since we're processing millions or billions of rows of data), but there are performance requirements on recalling previous analyses quickly, so we "save" results of each analysis.
Our current approach is to save analysis results in a series of "run-specific" tables, and the analysis is complex enough that we might end up with as many as 100 tables per analysis. Usually these tables use up a couple hundred MB per analysis (which is small compared to our hundreds of GB, or sometimes multiple TB, of source data). But overall, disk space is not a problem for us. Each set of tables is specific to one analysis, and in many cases this provides us enormous performance improvements over referring back to the source data.
The approach starts to break down once we accumulate enough saved analysis results -- before we added more robust archive/cleanup capability, our testing database climbed to several million tables. But it's not a stretch for us to have more than 100,000 tables, even in production. Microsoft places a pretty enormous theoretical limit on the size of sysobjects (~2 billion), but once our database grows beyond 100,000 or so, simple queries like CREATE TABLE and DROP TABLE can slow down dramatically.
We have some room to debate our approach, but I think that might be tough to do without more context, so instead I want to ask the question more generally: if we're forced to create so many tables, what's the best approach for managing them? Multiple filegroups? Multiple schemas/owners? Multiple databases?
Another note: I'm not thrilled about the idea of "simply throwing hardware at the problem" (i.e. adding RAM, CPU power, disk speed). But we won't rule it out either, especially if (for example) someone can tell us definitively what effect adding RAM or using multiple filegroups will have on managing a large system catalog.
A: Without first seeing the entire system, my first recommendation would be to save the historical runs in combined tables with a RunID as part of the key - a dimensional model may also be relevant here. This table can be partitioned for improvement, which will also allow you to spread the table into other filegroups.
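A rough sketch of what run-keyed partitioning could look like in SQL Server 2005 (the table, filegroup names and boundary values are all invented):
-- Map ranges of RunID values to partitions...
CREATE PARTITION FUNCTION pfRunId (int)
    AS RANGE LEFT FOR VALUES (1000, 2000, 3000);

-- ...and spread those partitions across filegroups.
CREATE PARTITION SCHEME psRunId
    AS PARTITION pfRunId TO (fgRuns1, fgRuns2, fgRuns3, fgRuns4);

-- One combined results table instead of ~100 tables per analysis run.
CREATE TABLE dbo.AnalysisResults (
    RunID  int          NOT NULL,
    RowID  bigint       NOT NULL,
    Metric varchar(100) NOT NULL,
    Value  float        NULL,
    CONSTRAINT PK_AnalysisResults PRIMARY KEY (RunID, RowID)
) ON psRunId (RunID);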
Another possibility is to put each run in its own database and then detach them, only attaching them as needed (and in read-only form)
CREATE TABLE and DROP TABLE are probably performing poorly because the master or model databases are not optimized for this kind of behavior.
I also recommend talking to Microsoft about your choice of database design.
A: Are the tables all different structures? If they are the same structure you might get away with a single partitioned table.
If they are different structures, but just subsets of the same set of dimension columns, you could still store them in partitions in the same table with nulls in the non-applicable columns.
If this is analytic (derivative pricing computations perhaps?) you could dump the results of a computation run to flat files and reuse your computations by loading from the flat files.
A: This seems to be a very interesting problem/application that you are working with. I would love to work on something like this. :)
You have a very large problem surface area, and that makes it hard to start helping. There are several solution parameters that are not evident in your post. For example, how long do you plan to keep the run analysis tables? There's a LOT other questions that need to be asked.
You are going to need a combination of serious data warehousing, and data/table partitioning. Depending on how much data you want to keep and archive you may need to start de-normalizing and flattening the tables.
This would be pretty good case where contacting Microsoft directly can be mutually beneficial. Microsoft gets a good case to show other customers, and you get help directly from the vendor.
A: We ended up splitting our database into multiple databases. So the main database contains a "databases" table that refers to one or more "run" databases, each of which contains distinct sets of analysis results. Then the main "run" table contains a database ID, and the code that retrieves a saved result includes the relevant database prefix on all queries.
This approach allows the system catalog of each database to be more reasonable, it provides better separation between the core/permanent tables and the dynamic/run tables, and it also makes backups and archiving more manageable. It also allows us to split our data across multiple physical disks, although using multiple filegroups would have done that too. Overall, it's working well for us now given our current requirements, and based on expected growth we think it will scale well for us too.
We've also noticed that SQL 2008 tends to handle large system catalogs better than SQL 2000 and SQL 2005 did. (We hadn't upgraded to 2008 when I posted this question.)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Mysql results in PHP - arrays or objects? Been using PHP/MySQL for a little while now, and I'm wondering if there are any specific advantages (performance or otherwise) to using mysql_fetch_object() vs mysql_fetch_assoc() / mysql_fetch_array().
A: Something to keep in mind: arrays can easily be added to a memory cache (eaccelerator, XCache, ..), while objects cannot (they need to get serialized when storing and unserialized on every retrieval!).
You may switch to using arrays instead of objects when you want to add memory cache support - but by that time you may have to change a lot of code already, which uses the previously used object type return values.
A: Performance-wise it doesn't matter what you use. The difference is that mysql_fetch_object returns object:
while ($row = mysql_fetch_object($result)) {
echo $row->user_id;
echo $row->fullname;
}
mysql_fetch_assoc() returns associative array:
while ($row = mysql_fetch_assoc($result)) {
echo $row["userid"];
echo $row["fullname"];
}
and mysql_fetch_array() returns array:
while ($row = mysql_fetch_array($result)) {
echo $row[0];
echo $row[1] ;
}
A: Fetching an array with mysql_fetch_array() lets you loop through the result set via either a foreach loop or a for loop. mysql_fetch_object() cannot be traversed by a for loop.
Not sure if that even matters much, just thought I'd mention it.
A: mysql_fetch_array makes your code difficult to read - a maintenance nightmare. You can't see at a glance what data your object is dealing with. It's slightly faster, but if that is important to you, you are processing so much data that PHP is probably not the right way to go.
mysql_fetch_object has some drawbacks, especially if you base a db layer on it.
*
*Column names may not be valid PHP identifiers, e.g tax-allowance or user.id if your database driver gives you the column name as specified in the query. Then you have to start using {} all over the place.
*If you want to get a column based on its name stored in some variable, you also have to start using variable properties $row->{$column_name}, while the array syntax $row[$column_name] is straightforward
*Constructors don't get invoked when you might expect if you specify the classname.
*If you don't specify the class name you get a stdClass, which is hardly better than an array anyway.
mysql_fetch_assoc is the easiest of the three to work with, and I like the distinction this gives in the code between objects and database result rows...
$object->property=$row['column1'];
$object->property=$row[$column_name];
foreach($row as $column_name=>$column_value){...}
While many OOP fans (and I am an OOP fan) like the idea of turning everything into an object, I feel that the associative array is a better model of a row from a database than an object, as in my mind an object is a set of properties with methods to act upon them, whereas the row is just data and should be treated as such without further complication.
A: Additionally, if you eventually want to apply Memcaching to your MySQL results, you may want to opt for arrays. It seems it's safer to store array types, rather than object type results.
A: while ($Row = mysql_fetch_object($rs)) {
// ...do stuff...
}
...is how I've always done it. I prefer to use objects for collections of data instead of arrays, since it organizes the data a little better, and I know I'm a lot less likely to try to add arbitrary properties to an object than I am to try to add an index to an array (for the first few years I used PHP, I thought you couldn't just assign arbitrary properties to an object, so it's ingrained to not do that).
A: Speed-wise, mysql_fetch_object() is identical to mysql_fetch_array(), and almost as quick as mysql_fetch_row().
Also, with mysql_fetch_object() you will only be able to access field data by corresponding field names.
A: I think the difference between all these functions is insignificant, especially when compared to code readability.
If you're concerned about this kind of optimization, use mysql_fetch_row(). It's the fastest because it doesn't use associative arrays (e.g. $row[2]), but it's easiest to break your code with it.
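For completeness, the row-indexed version mirrors the earlier examples:
while ($row = mysql_fetch_row($result)) {
    echo $row[0]; // user_id
    echo $row[1]; // fullname
}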
A: I vote against mysql_fetch_array()
Because you get back both numerically indexed columns and column names, this creates an array that is twice as large. It's fine if you don't need to debug your code and view it's contents. But for the rest of us, it becomes harder to debug since you have to wade through twice as much data in an odd looking format.
I sometimes run into a project that uses this function then when I debug, I think that something has gone terribly wrong since I have numeric columns mixed in with my data.
So in the name of sanity, please don't use this function, it makes code maintenance more difficult
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
}
|
Q: Do standard asp.net validators work with Ajax and update panel? I am having issues with validators not firing (No expected error messages showing) when using Page.Validate() from code behind. The validators are placed inside an Ajax updatepanel.
It seems there are downloadable Ajax versions of the validators. I am not sure if I need these or if VS 2008 SP1 has them already. When the form is posted through a button, the validators work but they don't when I do a Page.Validate() on demand.
A: Yes, validators do work inside an UpdatePanel, but you need to use at least SP1 of ASP.NET 2.0. If you use SP1, you do not need and should not use the "ajax version" of the validators.
More details on this subject are available here:
StackOverflow: ASP.NET Validators inside an UpdatePanel
A: I don't want to force an update. In certain situations, I want to validate some form elements when a user changes the value of some form element. When a user makes a change to, say, a radio button or a dropdownlist, an automatic postback happens. When the postback occurs, I want the validation controls to fire as if I hit the submit button.
These controls that cause a postback have CausesValidation turned on. As another test, in the event handler of the control that caused the postback, I call Page.Validate().
The question is: why does a button postback fire the validation, but not a postback caused by another control?
A: Maybe we can take it from the top. Can you answer these?
*
*Are you using .NET 2.0 SP1 or greater?
*Are your validator controls inside the UpdatePanel or outside?
*Are you using your site with javascript disabled (very unlikely)?
Note that your validators MUST be inside an updated UpdatePanel for them to display the error messages. If they are not in an updated UpdatePanel, the validators cannot change their appearance on the browser.
A: Did you call Update on your updatepanel?
A: They were included in an update for the .NET Framework a while ago, so yes, you have them in VS2008 SP1. I've found a problem where the server side method for CustomValidators fires twice with no "evil" effect, but otherwise they work ok.
As for the specific problem you're having, maybe the validators aren't inside the updatepanel, or some other panel ends up being refreshed by whatever control posted instead of the one that you want? Or even some ValidationGroups are defined somewhere and only these end up being validated? It's very hard to say without seeing code.
But making sure your validators are shown is easy: MyUpdatePanel.Update() will force the refresh.
A: I ended up using a single custom validator and doing my own validations in code behind and setting the custom validator's error message. This way I had more flexibility and it worked.
Using Ajax, it feels like client side validation.
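For reference, a bare-bones sketch of that approach (the control names and the actual check are invented). The markup, inside the UpdatePanel:
<asp:CustomValidator ID="cvForm" runat="server" Display="Dynamic"
    OnServerValidate="cvForm_ServerValidate" />
And the code-behind handler:
protected void cvForm_ServerValidate(object source, ServerValidateEventArgs args)
{
    // Run whatever server-side checks the changed control requires.
    args.IsValid = ddlChoice.SelectedValue != "";
    if (!args.IsValid)
        cvForm.ErrorMessage = "Please pick a value before continuing.";
}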
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to do a rolling restart of a cluster of mongrels Anybody know a nice way to restart a mongrel cluster via capistrano in a "rolling" style, eg, one mongrel at a time. Would be great to have a bit of wait time in there as well for each, to let the mongrel load the rails app up as well.
I've done some searching, and haven't found too much, so looking for help before I dive into the mongrel_cluster gem myself.
Thanks!
A: I agree with the seesaw approach more than the rolling approach you are seeking. The problem is that you end up in situations where load balancing can throw users back and forth between different versions of the application while you are transitioning.
The solutions we came up with (before finding SeeSaw, which we don't use) was to take half of the mongrels off line from the load balancer. Shut them down. Update them. Start them up. Put those mongrels back online in the load balancer and take the other half off. Shut the second half down. Update the second half. Start them up. This greatly minimizes the time where you have two different versions of the application running simultaneously.
I wrote a windows bat file to do this. (Deploying on Windows is not recommended, btw)
It is very important to note that having database migrations can make the whole approach a little dangerous. If you have only additive migrations, you can run those at any time before the deployment. If you are removing columns, you need to do it after the deployment. If you are renaming columns, it is better to split it into a "create a new column and copy data into it" migration to run before deployment and a separate script to remove the old column after deployment. In fact, it may be dangerous to use your regular migrations on a production database in general if you don't make a specific effort to organize them. All of this points to making more frequent deliveries so each update is lower risk and less complex, but that's a subject for another response.
A: Seesaw is a gem found in the Rails Oceania Rubyforge Project that provides this kind of functionality to mongrel clusters. However, the project may be suffering from some bit-rot, not having had a release since 2007. Still worth a look, even just to pinch the ideas :)
A: #!/bin/bash
# Restart the cluster one mongrel at a time, sleeping between instances
# so each one gets a moment to load the Rails app before the next restart.
# (RUN_MONGREL_CMD is assumed to be whatever command restarts one instance.)
for PIDFILE in /tmp/mongrel.*; do
    PID=$(cat "${PIDFILE}")
    kill "${PID}"
    ${RUN_MONGREL_CMD} "${PID}"
    sleep 2
done
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Visual Studio 2008 Documentation I've installed the MSDN Library for Visual Studio 2008 SP1, however, dynamic help in Visual Studio still spawns the RTM Document Explorer. Anyone know how to change it to the SP1 version?
A: Sometimes you see someone else struggling with the same things, and that quickly reminds you that you are not alone.
So here is what you have to do:
*
*Open Registry Editor using regedit.exe
*Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\9.0\Help\0x0409 - 0x0409 is for US english.
*HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\9.0\Help\0x0409\Collection is a kind of pointer to a sub-key with the preferred collection.
*Check which of the subkeys is the help collection for the MSDN Library for Visual Studio 2008 SP1 you just installed
*Copy the sub-key name that you found on step 4.
*Update the value of the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\9.0\Help\0x0409\Collection to the value of the copied sub-key.
*Now Visual Studio opens your just installed help collection when using dynamic help.
Note: You may have to restart Visual Studio after executing these steps in order to make them effective.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How would you allow users to edit attachments in a web application? We have created a web application, using ASP.NET, that allows users to upload documents and attach them to business entities, like customers, contacts and so on.
The application runs on the intranet and all files are uploaded through the web application into a shared folder on the server.
I would like, right from the web page, for the user to open the actual file, edit it and then save the changes back to the original location. This is a piece of cake in a Windows environment, I'm just wondering what, if any, is the best way to handle this in a web environment?
The files are usually Word documents, Excel documents and images.
Clarification
We would display all the attachments in a list format. We would like it so that the user would click on an edit link and the file would be opened in the appropriate application, for example, Microsoft Word or Microsoft Excel. I think the file associations in Windows would already handle this. We are just trying to save our user the time to download the original file, make their changes, delete the old file, and then upload the new file.
A: SharePoint does this by exposing FrontPage extensions which Word and Excel know how to deal with.
If you want to look at a commercial product for ASP.NET that allows you to edit images with AJAX (no need for installed software), I work for a company that has one (Atalasoft)
A: WebDAV is probably what you want. (Free)
A: I'm trying to do something using file:// instead of http://, but the behavior is sporadic depending on the browser. It seems to work fine in IE, okay in Firefox, and goes nowhere in Chrome.
Looks like I may just be stuck with downloading, editing, and re-uploading the document.
A: If all your client computers are Windows, map a shared folder on the server to the same drive letter on every client and use the file:// format.
Let's say you share \\ServerName\ShareName as H: on every client's computer; then you can make the link as file://h:\path_to_the_file_under_your_share\fileName.doc
If not every one of the client's computers is running Windows, then you might try to make your links as follows (not sure if it works):
file://\\ServerName\ShareName\path_to_the_file_under_your_share\fileName.doc
A: It sounds like you want something similar to eRoom, where the browser works in conjunction with a component that intercepts a stream from http, stores it in a temp folder, then fires up Word or Excel and allows you to edit the stream.
You may have to create a component that will intervene and create a temporary local copy of the file.
A: This tool should do what you need.
http://www.dlitools.com/dlitools/dlitoolsHome.nsf/0FA6B8B31F831F468525736B0001C606/4BBD7E8684EA8DB78525754E006C63A3?OpenDocument
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Sort Object in PHP What is an elegant way to sort objects in PHP? I would love to accomplish something similar to this.
$sortedObjectArray = sort($unsortedObjectArray, $Object->weight);
Basically specify the array I want to sort as well as the field I want to sort on. I looked into multidimensional array sorting and there might be something useful there, but I don't see anything elegant or obvious.
A: Almost verbatim from the manual:
function compare_weights($a, $b) {
if($a->weight == $b->weight) {
return 0;
}
return ($a->weight < $b->weight) ? -1 : 1;
}
usort($unsortedObjectArray, 'compare_weights');
If you want objects to be able to sort themselves, see example 3 here: http://php.net/usort
A: You can even build the sorting behavior into the class you're sorting, if you want that level of control
class thingy
{
public $prop1;
public $prop2;
static $sortKey;
public function __construct( $prop1, $prop2 )
{
$this->prop1 = $prop1;
$this->prop2 = $prop2;
}
public static function sorter( $a, $b )
{
return strcasecmp( $a->{self::$sortKey}, $b->{self::$sortKey} );
}
public static function sortByProp( &$collection, $prop )
{
self::$sortKey = $prop;
usort( $collection, array( __CLASS__, 'sorter' ) );
}
}
$thingies = array(
new thingy( 'red', 'blue' )
, new thingy( 'apple', 'orange' )
, new thingy( 'black', 'white' )
, new thingy( 'democrat', 'republican' )
);
print_r( $thingies );
thingy::sortByProp( $thingies, 'prop1' );
print_r( $thingies );
thingy::sortByProp( $thingies, 'prop2' );
print_r( $thingies );
A: For that compare function, you can just do:
function cmp( $a, $b )
{
    // note: sorts by weight descending; this subtraction shortcut is only
    // safe when weight is an integer (usort expects an integer return value)
    return $b->weight - $a->weight;
}
A: For php >= 5.3
function osort(&$array, $prop)
{
usort($array, function($a, $b) use ($prop) {
return $a->$prop > $b->$prop ? 1 : -1;
});
}
Note that this uses Anonymous functions / closures. Might find reviewing the php docs on that useful.
A: The usort function (http://uk.php.net/manual/en/function.usort.php) is your friend. Something like...
function objectWeightSort($lhs, $rhs)
{
if ($lhs->weight == $rhs->weight)
return 0;
if ($lhs->weight > $rhs->weight)
return 1;
return -1;
}
usort($unsortedObjectArray, "objectWeightSort");
Note that any array keys will be lost.
A: You could use the usort() function and make your own comparison function.
usort($unsortedObjectArray, 'sort_by_weight'); // usort sorts the array in place and returns a boolean, so don't assign its return value
function sort_by_weight($a, $b) {
if ($a->weight == $b->weight) {
return 0;
} else if ($a->weight < $b->weight) {
return -1;
} else {
return 1;
}
}
A: Depending on the problem you are trying to solve, you may also find the SPL interfaces useful. For example, implementing the ArrayAccess interface would allow you to access your class like an array. Also, implementing the SeekableIterator interface would let you loop through your object just like an array. This way you could sort your object just as if it were a simple array, having full control over the values it returns for a given key.
For more details:
*
*Zend Article
*PHPriot Article
*PHP Manual
A: function PHPArrayObjectSorter($array,$sortBy,$direction='asc')
{
$sortedArray=array();
$tmpArray=array();
foreach($array as $obj)
{
$tmpArray[]=$obj->$sortBy;
}
if($direction=='asc'){
asort($tmpArray);
}else{
arsort($tmpArray);
}
foreach($tmpArray as $k=>$tmp){
$sortedArray[]=$array[$k];
}
return $sortedArray;
}
e.g.:
$myAscSortedArrayObject = PHPArrayObjectSorter($unsortedarray, 'totalMarks', 'asc');
$myDescSortedArrayObject = PHPArrayObjectSorter($unsortedarray, 'totalMarks', 'desc');
A: You can have almost the same code as you posted with sorted function from Nspl:
use function \nspl\a\sorted;
use function \nspl\op\propertyGetter;
use function \nspl\op\methodCaller;
// Sort by property value
$sortedByWeight = sorted($objects, propertyGetter('weight'));
// Or sort by result of method call
$sortedByWeight = sorted($objects, methodCaller('getWeight'));
A: Update from 2022 - sort array of objects:
usort($array, fn(object $a, object $b): int => $a->weight <=> $b->weight);
Full example:
$array = [
(object) ['weight' => 5],
(object) ['weight' => 10],
(object) ['weight' => 1],
];
usort($array, fn(object $a, object $b): int => $a->weight <=> $b->weight);
// Now, $array is sorted by objects' weight.
// display example :
echo json_encode($array);
Output:
[{"weight":1},{"weight":5},{"weight":10}]
Documentation links:
*
*usort
*spaceship operator (PHP 7.0)
*scalar type declaration (PHP 7.0)
*return type declaration (PHP 7.0)
*arrow function (PHP 7.4)
A: If you want to explore the full (terrifying) extent of lambda style functions in PHP, see:
http://docs.php.net/manual/en/function.create-function.php
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "43"
}
|
Q: Simplest SOAP example What is the simplest SOAP example using Javascript?
To be as useful as possible, the answer should:
*
*Be functional (in other words actually work)
*Send at least one parameter that can be set elsewhere in the code
*Process at least one result value that can be read elsewhere in the code
*Work with most modern browser versions
*Be as clear and as short as possible, without using an external library
A: Has anyone tried this? https://github.com/doedje/jquery.soap
Seems very easy to implement.
Example:
$.soap({
url: 'http://my.server.com/soapservices/',
method: 'helloWorld',
data: {
name: 'Remy Blom',
msg: 'Hi!'
},
success: function (soapResponse) {
// do stuff with soapResponse
// if you want to have the response as JSON use soapResponse.toJSON();
// or soapResponse.toString() to get XML string
// or soapResponse.toXML() to get XML DOM
},
error: function (SOAPResponse) {
// show error
}
});
will result in
<soap:Envelope
xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Body>
<helloWorld>
<name>Remy Blom</name>
<msg>Hi!</msg>
</helloWorld>
</soap:Body>
</soap:Envelope>
A: There are many quirks in the way browsers handle XMLHttpRequest; this JS code will work across all browsers:
https://github.com/ilinsky/xmlhttprequest
This JS code converts XML into easy to use JavaScript objects:
http://www.terracoder.com/index.php/xml-objectifier
The JS code above can be included in the page to meet your no external library requirement.
var symbol = "MSFT";
var xmlhttp = new XMLHttpRequest();
xmlhttp.open("POST", "http://www.webservicex.net/stockquote.asmx?op=GetQuote",true);
xmlhttp.onreadystatechange=function() {
if (xmlhttp.readyState == 4) {
alert(xmlhttp.responseText);
// http://www.terracoder.com convert XML to JSON
var json = XMLObjectifier.xmlToJSON(xmlhttp.responseXML);
var result = json.Body[0].GetQuoteResponse[0].GetQuoteResult[0].Text;
// Result text is escaped XML string, convert string to XML object then convert to JSON object
json = XMLObjectifier.xmlToJSON(XMLObjectifier.textToXML(result));
alert(symbol + ' Stock Quote: $' + json.Stock[0].Last[0].Text);
}
}
xmlhttp.setRequestHeader("SOAPAction", "http://www.webserviceX.NET/GetQuote");
xmlhttp.setRequestHeader("Content-Type", "text/xml");
var xml = '<?xml version="1.0" encoding="utf-8"?>' +
'<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ' +
'xmlns:xsd="http://www.w3.org/2001/XMLSchema" ' +
'xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
'<soap:Body> ' +
'<GetQuote xmlns="http://www.webserviceX.NET/"> ' +
'<symbol>' + symbol + '</symbol> ' +
'</GetQuote> ' +
'</soap:Body> ' +
'</soap:Envelope>';
xmlhttp.send(xml);
// ...Include Google and Terracoder JS code here...
Two other options:
*
*JavaScript SOAP client:
http://www.guru4.net/articoli/javascript-soap-client/en/
*Generate JavaScript from a WSDL:
https://cwiki.apache.org/confluence/display/CXF20DOC/WSDL+to+Javascript
A: Thomas:
JSON is preferred for front end use because we have easy lookups. Therefore you have no XML to deal with. SOAP is a pain without using a library because of this. Somebody mentioned SOAPClient, which is a good library, we started with it for our project. However it had some limitations and we had to rewrite large chunks of it. It's been released as SOAPjs and supports passing complex objects to the server, and includes some sample proxy code to consume services from other domains.
A: <html>
<head>
<title>Calling Web Service from jQuery</title>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.3/jquery.min.js"></script>
<script type="text/javascript">
$(document).ready(function () {
$("#btnCallWebService").click(function (event) {
var wsUrl = "http://abc.com/services/soap/server1.php";
var soapRequest ='<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> <soap:Body> <getQuote xmlns:impl="http://abc.com/services/soap/server1.php"> <symbol>' + $("#txtName").val() + '</symbol> </getQuote> </soap:Body></soap:Envelope>';
alert(soapRequest)
$.ajax({
type: "POST",
url: wsUrl,
contentType: "text/xml",
dataType: "xml",
data: soapRequest,
success: processSuccess,
error: processError
});
});
});
function processSuccess(data, status, req) {
    alert('success');
    if (status == "success")
        $("#response").text($(req.responseXML).find("Result").text());
    alert(req.responseXML);
}
function processError(data, status, req) {
alert('err'+data.state);
//alert(req.responseText + " " + status);
}
</script>
</head>
<body>
<h3>
Calling Web Services with jQuery/AJAX
</h3>
Enter your name:
<input id="txtName" type="text" />
<input id="btnCallWebService" value="Call web service" type="button" />
<div id="response" ></div>
</body>
</html>
Here is a good JavaScript SOAP tutorial with an example:
http://www.codeproject.com/Articles/12816/JavaScript-SOAP-Client
A: This cannot be done with straight JavaScript unless the web service is on the same domain as your page. Edit: In 2008 and in IE<10 this cannot be done with straight javascript unless the service is on the same domain as your page.
If the web service is on another domain [and you have to support IE<10] then you will have to use a proxy page on your own domain that will retrieve the results and return them to you. If you do not need old IE support then you need to add CORS support to your service. In either case, you should use something like the lib that timyates suggested because you do not want to have to parse the results yourself.
If the web service is on your own domain then don't use SOAP. There is no good reason to do so. If the web service is on your own domain then modify it so that it can return JSON and save yourself the trouble of dealing with all the hassles that come with SOAP.
Short answer is: Don't make SOAP requests from javascript. Use a web service to request data from another domain, and if you do that then parse the results on the server-side and return them in a js friendly form.
A: Easily consume SOAP Web services with JavaScript -> Listing B
function fncAddTwoIntegers(a, b)
{
var oXmlHttp = new XMLHttpRequest();
oXmlHttp.open("POST",
"http://localhost/Develop.NET/Home.Develop.WebServices/SimpleService.asmx'",
false);
oXmlHttp.setRequestHeader("Content-Type", "text/xml");
oXmlHttp.setRequestHeader("SOAPAction", "http://tempuri.org/AddTwoIntegers");
oXmlHttp.send(" \
<soap:Envelope xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' \
xmlns:xsd='http://www.w3.org/2001/XMLSchema' \
xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/'> \
<soap:Body> \
<AddTwoIntegers xmlns='http://tempuri.org/'> \
<IntegerOne>" + a + "</IntegerOne> \
<IntegerTwo>" + b + "</IntegerTwo> \
</AddTwoIntegers> \
</soap:Body> \
</soap:Envelope> \
");
return oXmlHttp.responseXML.selectSingleNode("//AddTwoIntegersResult").text;
}
This may not meet all your requirements but it is a start at actually answering your question. (I switched XMLHttpRequest() for ActiveXObject("MSXML2.XMLHTTP")).
A: Some great examples (and a ready-made JavaScript SOAP client!) here:
http://plugins.jquery.com/soap/
Check the readme, and beware the same-origin browser restriction.
A: This is the simplest JavaScript SOAP Client I can create.
<html>
<head>
<title>SOAP JavaScript Client Test</title>
<script type="text/javascript">
function soap() {
var xmlhttp = new XMLHttpRequest();
xmlhttp.open('POST', 'https://somesoapurl.com/', true);
// build SOAP request
var sr =
'<?xml version="1.0" encoding="utf-8"?>' +
'<soapenv:Envelope ' +
'xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ' +
'xmlns:api="http://127.0.0.1/Integrics/Enswitch/API" ' +
'xmlns:xsd="http://www.w3.org/2001/XMLSchema" ' +
'xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">' +
'<soapenv:Body>' +
'<api:some_api_call soapenv:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">' +
'<username xsi:type="xsd:string">login_username</username>' +
'<password xsi:type="xsd:string">password</password>' +
'</api:some_api_call>' +
'</soapenv:Body>' +
'</soapenv:Envelope>';
xmlhttp.onreadystatechange = function () {
if (xmlhttp.readyState == 4) {
if (xmlhttp.status == 200) {
alert(xmlhttp.responseText);
// alert('done. use firebug/console to see network response');
}
}
}
// Send the POST request
xmlhttp.setRequestHeader('Content-Type', 'text/xml');
xmlhttp.send(sr);
// send request
// ...
}
</script>
</head>
<body>
<form name="Demo" action="" method="post">
<div>
<input type="button" value="Soap" onclick="soap();" />
</div>
</form>
</body>
</html>
A: The question is 'What is the simplest SOAP example using Javascript?'
This answer is of an example in the Node.js environment, rather than a browser. (Let's name the script soap-node.js) And we will use the public SOAP web service from Europe PMC as an example to get the reference list of an article.
const XMLHttpRequest = require("xmlhttprequest").XMLHttpRequest;
const DOMParser = require('xmldom').DOMParser;
function parseXml(text) {
let parser = new DOMParser();
let xmlDoc = parser.parseFromString(text, "text/xml");
Array.from(xmlDoc.getElementsByTagName("reference")).forEach(function (item) {
console.log('Title: ', item.childNodes[3].childNodes[0].nodeValue);
});
}
function soapRequest(url, payload) {
let xmlhttp = new XMLHttpRequest();
xmlhttp.open('POST', url, true);
// build SOAP request
xmlhttp.onreadystatechange = function () {
if (xmlhttp.readyState == 4) {
if (xmlhttp.status == 200) {
parseXml(xmlhttp.responseText);
}
}
}
// Send the POST request
xmlhttp.setRequestHeader('Content-Type', 'text/xml');
xmlhttp.send(payload);
}
soapRequest('https://www.ebi.ac.uk/europepmc/webservices/soap',
`<?xml version="1.0" encoding="UTF-8"?>
<S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/">
<S:Header />
<S:Body>
<ns4:getReferences xmlns:ns4="http://webservice.cdb.ebi.ac.uk/"
xmlns:ns2="http://www.scholix.org"
xmlns:ns3="https://www.europepmc.org/data">
<id>C7886</id>
<source>CTX</source>
<offSet>0</offSet>
<pageSize>25</pageSize>
<email>ukpmc-phase3-wp2b---do-not-reply@europepmc.org</email>
</ns4:getReferences>
</S:Body>
</S:Envelope>`);
Before running the code, you need to install two packages:
npm install xmlhttprequest
npm install xmldom
Now you can run the code:
node soap-node.js
And you'll see the output as below:
Title: Perspective: Sustaining the big-data ecosystem.
Title: Making proteomics data accessible and reusable: current state of proteomics databases and repositories.
Title: ProteomeXchange provides globally coordinated proteomics data submission and dissemination.
Title: Toward effective software solutions for big biology.
Title: The NIH Big Data to Knowledge (BD2K) initiative.
Title: Database resources of the National Center for Biotechnology Information.
Title: Europe PMC: a full-text literature database for the life sciences and platform for innovation.
Title: Bio-ontologies-fast and furious.
Title: BioPortal: ontologies and integrated data resources at the click of a mouse.
Title: PubMed related articles: a probabilistic topic-based model for content similarity.
Title: High-Impact Articles-Citations, Downloads, and Altmetric Score.
A: You can use the jquery.soap plugin to do the work for you.
This script uses $.ajax to send a SOAPEnvelope. It can take XML DOM, XML string or JSON as input and the response can be returned as either XML DOM, XML string or JSON too.
Example usage from the site:
$.soap({
url: 'http://my.server.com/soapservices/',
method: 'helloWorld',
data: {
name: 'Remy Blom',
msg: 'Hi!'
},
success: function (soapResponse) {
// do stuff with soapResponse
// if you want to have the response as JSON use soapResponse.toJSON();
// or soapResponse.toString() to get XML string
// or soapResponse.toXML() to get XML DOM
},
error: function (SOAPResponse) {
// show error
}
});
A: The simplest example would consist of:
*
*Getting user input.
*Composing XML SOAP message similar to this
<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Body>
<GetInfoByZIP xmlns="http://www.webserviceX.NET">
<USZip>string</USZip>
</GetInfoByZIP>
</soap:Body>
</soap:Envelope>
*POSTing message to webservice url using XHR
*Parsing webservice's XML SOAP response similar to this
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<soap:Body>
<GetInfoByZIPResponse xmlns="http://www.webserviceX.NET">
<GetInfoByZIPResult>
<NewDataSet xmlns="">
<Table>
<CITY>...</CITY>
<STATE>...</STATE>
<ZIP>...</ZIP>
<AREA_CODE>...</AREA_CODE>
<TIME_ZONE>...</TIME_ZONE>
</Table>
</NewDataSet>
</GetInfoByZIPResult>
</GetInfoByZIPResponse>
</soap:Body>
</soap:Envelope>
*Presenting results to user.
But it's a lot of hassle without external JavaScript libraries.
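For completeness, here is a minimal sketch of those steps using the fetch API. It only covers modern browsers, the endpoint URL is a placeholder (the webserviceX service itself is long gone), and the same-origin/CORS rules discussed in the other answers still apply:
var envelope =
    '<?xml version="1.0" encoding="utf-8"?>' +
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
    '<soap:Body>' +
    '<GetInfoByZIP xmlns="http://www.webserviceX.NET">' +
    '<USZip>90210</USZip>' +
    '</GetInfoByZIP>' +
    '</soap:Body>' +
    '</soap:Envelope>';
fetch('https://example.com/uszip.asmx', {    // placeholder endpoint
    method: 'POST',
    headers: {
        'Content-Type': 'text/xml; charset=utf-8',
        'SOAPAction': 'http://www.webserviceX.NET/GetInfoByZIP'
    },
    body: envelope
})
.then(function (response) { return response.text(); })
.then(function (text) {
    // parse the reply XML and pull out one element
    var doc = new DOMParser().parseFromString(text, 'text/xml');
    var city = doc.getElementsByTagName('CITY')[0];
    console.log(city ? city.textContent : 'no result');
})
.catch(function (err) { console.error(err); });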
A: function SoapQuery(){
var namespace = "http://tempuri.org/";
var site = "http://server.com/Service.asmx";
var xmlhttp = new ActiveXObject("Msxml2.ServerXMLHTTP.6.0");
xmlhttp.setOption(2, 13056 ); /* if use standard proxy */
var args,fname = arguments.callee.caller.toString().match(/ ([^\(]+)/)[1]; /* name of the calling function */
try { args = arguments.callee.caller.arguments.callee.toString().match(/\(([^\)]+)/)[1].split(",");
} catch (e) { args = Array();};
xmlhttp.open('POST',site,true);
var i, ret = "", q = '<?xml version="1.0" encoding="utf-8"?>'+
'<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'+
'<soap:Body><'+fname+ ' xmlns="'+namespace+'">';
for (i=0;i<args.length;i++) q += "<" + args[i] + ">" + arguments.callee.caller.arguments[i] + "</" + args[i] + ">";
q += '</'+fname+'></soap:Body></soap:Envelope>';
// Send the POST request
xmlhttp.setRequestHeader("MessageType","CALL");
xmlhttp.setRequestHeader("SOAPAction",namespace + fname);
xmlhttp.setRequestHeader('Content-Type', 'text/xml');
//WScript.Echo("XML request: " + q);
xmlhttp.send(q);
if (xmlhttp.waitForResponse(5000)) ret = xmlhttp.responseText;
return ret;
};
function GetForm(prefix,post_vars){return SoapQuery();};
function SendOrder2(guid,order,fio,phone,mail){return SoapQuery();};
function SendOrder(guid,post_vars){return SoapQuery();};
A: AngularJS's $http service wraps XMLHttpRequest. As long as the content-type header is set as follows, the request will work.
"Content-Type": "text/xml; charset=utf-8"
For example:
function callSoap(){
var url = "http://www.webservicex.com/stockquote.asmx";
var soapXml = "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\" xmlns:web=\"http://www.webserviceX.NET/\"> "+
"<soapenv:Header/> "+
"<soapenv:Body> "+
"<web:GetQuote> "+
"<web:symbol></web:symbol> "+
"</web:GetQuote> "+
"</soapenv:Body> "+
"</soapenv:Envelope> ";
return $http({
url: url,
method: "POST",
data: soapXml,
headers: {
"Content-Type": "text/xml; charset=utf-8"
}
})
.then(callSoapComplete)
.catch(function(message){
return message;
});
function callSoapComplete(data, status, headers, config) {
// Convert to JSON Ojbect from xml
// var x2js = new X2JS();
// var str2json = x2js.xml_str2json(data.data);
// return str2json;
return data.data;
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "261"
}
|
Q: Does anyone know of any cross platform GUI log viewers for Ruby On Rails? I'm tired of using:
tail -f development.log
To keep track of my rails logs. Instead I would like something that displays the info in a grid and allows me to sort, filter and look at stack traces per log message.
Does anyone know of a GUI tool for displaying rails logs. Ideally I would like a standalone app (not something in Netbeans or Eclipse)
A: FWIW I started this project at GitHub to try and solve this problem; it's far from functional.
A: Splunk. There is a free version that is limited to 500 MB but has all the same functionality as the full version.
A: You might be able to use http://logging.apache.org/chainsaw/index.html . Haven't used it in a long time but I think its log parser should be configurable
A: I like using the Exception Logger plugin for live sites. I can visit http://domain.com/logged_exceptions, and read all of the unhandled exceptions that have been throw in production, along with full stack traces. From there, it's pretty easy to write tests to find and correct the problem. There's a whole railscast on the topic here.
A: I asked the same question a few days ago and got suggestions for Splunk, BareTail, and tail [been using Chainsaw until now].
Chainsaw didn't work so well for me. It was buggy and non-responsive. So I looked into Splunk which turned out to be a big overkill for just viewing logs. Tail seemed a bit too primitive for my taste. So if you're on Windows, I'd say BareTail is your best bet.
HTH
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Is there any way to use XmlSiteMapProvider within WinForm/Console/VSTest application? I wonder whether there is a workaround for using the standard XmlSiteMapProvider within a non-ASP.NET application, like a WinForms or console app or, in my case, a VS unit test.
The following code fails, because it cannot create a path to the .sitemap file inside a private GetConfigDocument method.
XmlSiteMapProvider provider = new XmlSiteMapProvider();
NameValueCollection providerAttributes = new NameValueCollection();
providerAttributes.Add("siteMapFile", "Web.sitemap");
provider.Initialize("XmlSiteMapReader", providerAttributes);
provider.BuildSiteMap();
I feel the right solution is to write another provider.
A: I do not see why not. It is just a provider that implements an interface. You may not need many of the features, but you can access the API for what it provides you. Your WinForms screens can simply use the Urls for identification so that you can determine your place in the hierarchy.
What you may have to do is create a custom implementation of the provider because it will use the HttpContext to get the Url of the current web request to identify current placement while you will need to get that value differently. That is what could be tricky because your WinForm application could be displaying multiple windows at a time. If you know there is only one window showing at a time you could use a static value which is set prior to accessing the SiteMap API.
Now you have to question the value of using an API if you have to do all of the work. There may not be enough benefit to make it worthwhile.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Intra-process coordination in mod_perl under the worker MPM I need to do some simple timezone calculation in mod_perl. DateTime isn't an option. What I need to do is easily accomplished by setting $ENV{TZ} and using localtime and POSIX::mktime, but under a threaded MPM, I'd need to make sure only one thread at a time was mucking with the environment. (I'm not concerned about other uses of localtime, etc.)
How can I use a mutex or other locking strategy to serialize (in the non-marshalling sense) access to the environment? The docs I've looked at don't explain well enough how I would create a mutex for just this use. Maybe there's something I'm just not getting about how you create mutexes in general.
Update: yes, I am aware of the need for using Env::C to set TZ.
A: (repeating what I said over at PerlMonks...)
BEGIN {
my $mutex;
sub that {
$mutex ||= APR::ThreadMutex->new( $r->pool() );
$mutex->lock();
$ENV{TZ}= ...;
...
$mutex->unlock();
}
}
But, of course, lock() should happen in a c'tor and unlock() should happen in a d'tor except for one-off hacks.
Update: Note that there is a race condition in how $mutex is initialized in the subroutine (two threads could call that() for the first time nearly simultaneously). You'd most likely want to initialize $mutex before (additional) threads are created but I'm unclear on the details on the 'worker' Apache MPM and how you would accomplish that easily. If there is some code that gets run "early", simply calling that() from there would eliminate the race.
Which all suggests a much safer interface to APR::ThreadMutex:
BEGIN {
my $mutex;
sub that {
my $autoLock= APR::ThreadMutex->autoLock( \$mutex );
...
# Mutex automatically released when $autoLock destroyed
}
}
Note that autoLock() getting a reference to undef would cause it to use a mutex to prevent a race when it initializes $mutex.
A: Because of this issue, mod_perl 2 actually deals with the %ENV hash differently than mod_perl 1. In mod_perl 1 %ENV was tied directly to the environ struct, so changing %ENV changed the environment. In mod_perl 2, the %ENV hash is populated from environ, but changes are not passed back.
This means you can no longer muck with $ENV{TZ} to adjust the timezone -- particularly in a threaded environment. The Apache2::Localtime module will make it work for the non-threaded case (by using Env::C) but when running in a threaded MPM that will be bad news.
There are some comments in the mod_perl source (src/modules/perl/modperl_env.c) regarding this issue:
/*
 * XXX: what we do here might change:
* - make it optional for %ENV to be tied to r->subprocess_env
* - make it possible to modify environ
* - we could allow modification of environ if mpm isn't threaded
* - we could allow modification of environ if variable isn't a CGI
* variable (still could cause problems)
*/
/*
* problems we are trying to solve:
* - environ is shared between threads
* + Perl does not serialize access to environ
* + even if it did, CGI variables cannot be shared between threads!
* problems we create by trying to solve above problems:
* - a forked process will not inherit the current %ENV
* - C libraries might rely on environ, e.g. DBD::Oracle
*/
A: If you're using apache 1.3, then you shouldn't need to resort to mutexes. Apache 1.3 spawns off a number of worker processes, and each worker executes a single thread. In this case, you can write:
{
local $ENV{TZ} = whatever_I_need_it_to_be();
# Do calculations here.
}
Changing the variable with local means that it reverts back to the previous value at the end of the block, but is still passed into any subroutine calls made from within that block. It's almost certainly what you want. Since each process has its own independent environment, you won't be changing the environment of other processes using this technique.
For apache 2, I don't know what model it uses with regards to forks and threads. If it keeps the same approach of forking off processes and having a single thread each, you're fine.
If apache 2 uses honest to goodness real threads, then that's outside my area of detailed knowledge, but I hope another lovely stackoverflow person can provide assistance.
All the very best,
Paul
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How do I deploy a managed stored procedure without using Visual Studio? Everything I have read says that when making a managed stored procedure, you right-click in Visual Studio and choose Deploy. That works fine, but what if I want to deploy it outside of Visual Studio to a number of different locations? I tried creating the assembly with the dll the project built in SQL, and while it did add the assembly, it did not create the procedures out of the assembly. Has anyone figured out how to do this in SQL directly, without using Visual Studio?
A: Copy your assembly DLL file to the local drive on your various servers. Then register your assembly with the database:
create assembly [YOUR_ASSEMBLY]
from '(PATH_TO_DLL)'
...then you create a function referencing the appropriate public method in the DLL:
create proc [YOUR_FUNCTION]
as
external name [YOUR_ASSEMBLY].[NAME_SPACE].[YOUR_METHOD]
Be sure to use the [ brackets, especially around the NAME_SPACE. Namespaces can have any number of dots in them, but SQL identifiers can't, unless the parts are explicitly set apart by square brackets. This was a source of many headaches when I was first using SQL CLR.
To be clear, [YOUR_ASSEMBLY] is the name you defined in SQL; [NAME_SPACE] is the .NET namespace inside the DLL where your method can be found; and [YOUR_METHOD] is simply the name of the method within that namespace.
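Putting the two statements together, a complete deployment script might look like this, where every name (including the file path) is a hypothetical placeholder, and [My.Name.Space.StoredProcedures] is the fully qualified class name as clarified in the next answer:
-- Register the assembly from a local path on the server
CREATE ASSEMBLY [MyClrProcs]
FROM 'C:\Deploy\MyClrProcs.dll'
WITH PERMISSION_SET = SAFE;
GO
-- Expose one public static method as a stored procedure
CREATE PROCEDURE [dbo].[usp_HelloWorld]
AS EXTERNAL NAME [MyClrProcs].[My.Name.Space.StoredProcedures].[HelloWorld];
GO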
A: To add some more detail/clarification to @kcrumley's answer above:
[NAME_SPACE] is the fully qualified type name and not just the namespace
- i.e. if your class is called StoredProcedures in a namespace of My.Name.Space, you must use [My.Name.Space.StoredProcedures] for the [NAME_SPACE] part.
If your managed stored procedures are in a class without a namespace defined, you just use the bare class name (e.g. [StoredProcedures]).
I also struggled for a bit trying to work out how to add a procedure with arguments/parameters, so here's a sample for anyone else trying to do so:
CREATE PROCEDURE [YOUR_FUNCTION]
(
@parameter1 int,
@parameter2 nvarchar
)
WITH EXECUTE AS CALLER
AS
EXTERNAL NAME [YOUR_ASSEMBLY].[StoredProcedures].[YOUR_FUNCTION]
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Widget Data Across Multiple Controllers Let's say that I have a widget that displays summary information about how many posts or comments that I have on a site.
What's the cleanest way to persist this information across controllers?
Including the instance variables in the application controller seems like a bad idea. Having a before filter that loads the data for each controller smells like code duplication.
Do I have to use a plugin like the Cells Plugin (http://cells.rubyforge.org/) or is there a simpler way of doing it?
A: Presumably, you have a single partial that displays this info. You can put the methods that fetch the data you need in ApplicationHelper or as class methods on whatever model(s) you're getting the data from. Then call that method in the partial when you need to display it.
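A minimal sketch of that approach (the Post model and the helper name here are just examples):
# app/helpers/application_helper.rb
module ApplicationHelper
  def post_count
    @post_count ||= Post.count   # memoized so repeated calls in one request hit the DB once
  end
end
Then the shared partial can simply call <%= post_count %> from any controller's views.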
A: I wound up doing something similar to this:
In controllers/application.rb
def load_sidebar
@posts = Post.find(:all)
end
To include the sidebar in various actions I did this:
before_filter :load_sidebar, :only => [ :index ] #load from application.rb file
I made the sidebar into a shared partial.
A: def load_sidebar
@posts = Post.find(:all)
end
You mentioned that you were displaying summary info. If you don't really want to load all of your posts into memory, you can do the following.
def load_sidebar
@post_count = Post.count(:id)
end
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: What is the best way to manage Time in a Java application? So I'm using hibernate and working with an application that manages time.
What is the best way to deal with times in a 24 hour clock?
I do not need to worry about TimeZone issues at the beginning of this application but it would be best to ensure that this functionality is built in at the beginning.
I'm using hibernate as well, just as an fyi
A: Store them as long ts = System.currentTimeMillis().
That format is actually TimeZone-safe as it returns time in UTC.
If you only need the time part, I'm not aware of a built-in type in Hibernate, but writing your own type Time24 is trivial -- just implement either org.hibernate.UserType or org.hibernate.CompositeUserType (the load=nullSafeGet and store=nullSafeSet methods in them).
See http://docs.jboss.org/hibernate/core/3.3/reference/en/html/mapping.html#mapping-types-custom
But I'd still save absolute time anyway. May help in future.
P.S. That's all presuming storing Date is out of question for some reason. TimeZone in Date sometimes gets in the way, really. ;)
A: I would suggest you look into using Joda, http://joda-time.sourceforge.net/, which offers much more intuitive and controllable time handling functionality than the core Date and Calendar implementations. JSR 310 is actually a proposition to include a new time API into java 7 that will be based largely on Joda. Joda also offers both timezone dependent Time handling and timezone independent time handling which eases difficulties when dealing with intervals.
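To give a flavour, here is a small untested sketch using classes from Joda's org.joda.time package:
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import org.joda.time.LocalTime;

public class TimeDemo {
    public static void main(String[] args) {
        // A timezone-independent 24-hour wall-clock time (14:30)
        LocalTime meeting = new LocalTime(14, 30);
        // The same instant viewed in two different zones
        DateTime nowUtc = new DateTime(DateTimeZone.UTC);
        DateTime nowTokyo = nowUtc.withZone(DateTimeZone.forID("Asia/Tokyo"));
        System.out.println(meeting);
        System.out.println(nowUtc + " / " + nowTokyo);
    }
}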
A: Is there something wrong with using java.util.Date?
A: java.util.Date should be used; not a long (and definitely not a Calendar).
If you are using annotations be sure to use @Temporal
A: I also found another library recently that seems to be a response to JODA.
http://www.date4j.net/
The advantages are listed on the project home page.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Removing the "Categories" field from an Issue Tracking list in SharePoint When I create an Issue Tracking list in SharePoint, I am unable to remove the Categories choice field that it adds by default as part of the Issue content type. I looked in the list definition but I can't find anything explicit about not allowing the column to be deleted. Does anybody know why the Delete button isn't showing up? Is there a way to delete this field?
A: Toni's solution did work, but be careful- this will also remove the category field from EVERY ISSUES TRACKING LIST being used currently and any future ones created.
A: In the interest of time and retaining functionality, I decided to take the less obtrusive approach and renamed the Category Column to "." and made the default dropdown choice ".". It's barely noticable and quick/easy to do.
A: I know that I have had a similar issue in a variety of lists where, once a field is added, it is not possible to remove it.
Sometimes it is possible to create code to delete the field, but in most of the situations I have come across we have had to hide the field to prevent it from appearing.
This requires moving to using Powershell and the SharePoint object model to make the changes.
In most of our implementations, we have found it much better to create a custom solution with a custom feature that adds the custom lists and fields using the XML format for doing this. A list template can then be created exactly for what you need.
Doing it this way gives us more control over the result in a repeatable manner.
A: *
*Go to: Portal > Site Settings > Site Content Type Gallery > Site Content Type
*Select Issue Content Type
*Select Categories
*Click Remove button on the new page
Worked for me, just tried it.
A: I haven't found a way to delete the categories field from an issue tracking list in SharePoint, but I did find that it is possible to re-purpose it. You can change the pick-list values to whatever you want, including "Active", "Resolved" and "Closed".
It seems ridiculous to re-purpose the category field as the status field, especially since a status field is included in the issue tracking list by default. But you can delete the status field and then use the category field as a status field--which is probably used more often.
A: Don't know if anyone is still viewing this. The options I've taken are:
*
*Create a custom list (not issues tracking list).
*Allow for management of content types on the list. Then make the column hidden in the list content type. That way you don't disrupt the site content type (just the list content type).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Handling undelivered emails in webapp We have a typical business web app that allows our users to send e-mails with offerings to their clients. We set the user's e-mail in the FROM field so the client can reply directly to the user. The problem is that, because of the SMTP protocol, the undelivered e-mail notification is returned to our e-mail address (the address of the account we send e-mails from).
Do you know an elegant way to handle these undelivered emails? I mean the easiest way to let the sender know that his mail was not delivered.
A: There are 3 "Headers" that emails have:
*
*From. This is what the user sees as the 'originator'.
*Reply-to. This is where an email gets sent if a reply is intended.
*Return-path. This is where an email gets routed in the event that the destination does not exist.
You probably want to be setting the 3rd :)
( Note, some servers don't reply to these lost messages at all, because recently spammers have been putting addresses there that are not their own, doing a third-party bounce attack that uses the automated reply system to turn email servers into an open relay! )
See Section 4.4 of this document for further details: http://www.faqs.org/rfcs/rfc822.html
A: First, it's important to understand the difference between the "From:" header (which the recipient sees in their email client) and the sender address (which is also called the envelope return path, or the argument to the SMTP "MAIL FROM" command). The sender address is where bounce messages go when the email can't be delivered, hence the other name return path.
SMTP doesn't restrict what address you use as the sender address (except that it must by syntactically valid), but whatever SMTP client library you use might, so you'll need to check that out.
Changing the sender address is where you can do clever things to help detect email bounces and report them back to the webapp or sender. The most common thing you'll see is to encode the recipient address in the sender address, e.g. with a sender address like this: sender+recipient=recipientdomain.com@senderdomain.com. The MTA responsible for senderdomain.com needs to know to deliver all emails for sender+foo@senderdomain.com to sender@senderdomain.com -- but that's a fairly common requirement. Then you take the email that is received, and instead of trying to work out from the bounce message in the contents (which could be in any format) who the recipient was, you can get it right from the recipient address.
You can do more complex things as well, like hashing the recipient address so it's not visible directly in the sender address, e.g. sender+e72fab38fb@senderdomain.com. And you could include some identifier for the email that was sent, in case you're sending multiple emails to the same address and want to know which one bounced.
These tricks are called Variable Envelope Return Path or VERP, and are commonly implemented by mailing list software.
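As a rough illustration of the encoding (plain JavaScript with made-up addresses; a real implementation would also need to escape '=' and '+' occurring inside the recipient address):
// Build the bounce (return path) address for one outgoing mail
function verpSender(sender, recipient) {
    var parts = sender.split('@');
    // '@' in the recipient becomes '=' so the result stays a valid address
    return parts[0] + '+' + recipient.replace('@', '=') + '@' + parts[1];
}
// Recover the original recipient from a bounced mail's To: address
function verpRecipient(bounceAddress) {
    var local = bounceAddress.split('@')[0];          // "sender+recipient=domain"
    var encoded = local.substring(local.indexOf('+') + 1);
    return encoded.replace('=', '@');
}
verpSender('offers@senderdomain.com', 'client@recipientdomain.com');
// -> "offers+client=recipientdomain.com@senderdomain.com"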
A: Exactly which routine are you using to send the Email?
We send emails via raw SMTP using HTTP put_lines and the replies bounce back to the address we nominate in the FROM: field.
See if your SMTP API wrapper has a Reply To: field
Some APIs might not provide that functionality because it increases the possibility of spamming.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: IsolationLevel.RepeatableRead to prevent duplicates I'm working on an application that is supposed to create products (like shipping insurance policies) when PayPal Instant Payment Notifications are received. Unfortunately, PayPal sometimes sends duplicate notifications. Furthermore, there is another third-party that is performing web-service updates simultaneously when they get updates from PayPal as well.
Here is a basic diagram of the database tables involved.
// table "package"
// columns packageID, policyID, other data...
//
// table "insurancepolicy"
// columns policyID, coverageAmount, other data...
Here is a basic diagram of what I want to do:
using (SqlConnection conn = new SqlConnection(...))
{
SqlTransaction sqlTrans = conn.BeginTransaction(IsolationLevel.RepeatableRead);
// Calls a stored procedure that checks if the foreign key in the transaction table has a value.
if (PackageDB.HasInsurancePolicy(packageID, conn))
{
sqlTrans.Commit();
return false;
}
// Insert row in foreign table.
int policyID = InsurancePolicyDB.Insert(coverageAmount, conn);
if (policyID <= 0)
{
sqlTrans.Rollback();
return false;
}
// Assign foreign key to parent table. If this fails, roll back everything.
bool assigned = PackageDB.AssignPolicyID(packageID, policyID, conn);
if (!assigned)
{
sqlTrans.Rollback();
return false;
}
// Everything succeeded, so commit before the using block disposes the connection.
sqlTrans.Commit();
return true;
}
If there are two (or more) threads (or processes or applications) doing this at the same time, I want the first thread to lock the "package" row while it has no policyID, until the policy is created and the policyID is assigned to the package table. Then the lock would be released after the policyID is assigned to the package table. It is my hope that the other thread which is calling this same code will pause when it reads the package row to make sure it doesn't have a policyID first. When the first transaction's lock is released, it is my hope that the second transaction will see the policyID is there and therefore return without inserting any rows into the policy table.
Note: Because of the CRUD database design, each of the stored procedures involved either Reads (select), Creates (insert), or Updates.
Is this the right use of RepeatableRead transaction isolation?
Thanks.
A: I believe you actually want the Serializable isolation level. The problem is that two threads can get past the HasInsurancePolicy check (though I have no idea what InsurancePolicyDB.Insert would do or why it would return 0)
You have many other options for this as well. One is using a message queue and processing these requests serially yourself. Another is to use sp_getapplock and lock on some key unique to that package. That way you don't lock any more rows or tables than you must.
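A rough sketch of the sp_getapplock approach (the resource name, timeout and error handling here are arbitrary choices, not a drop-in solution):
BEGIN TRAN
DECLARE @result int
-- Take an app lock keyed on the package, so only one caller
-- can create a policy for that package at a time
EXEC @result = sp_getapplock
    @Resource = 'package_12345',     -- e.g. 'package_' + the package ID
    @LockMode = 'Exclusive',
    @LockOwner = 'Transaction',
    @LockTimeout = 5000              -- milliseconds
IF @result >= 0
BEGIN
    -- check for an existing policy, insert it if missing ...
    COMMIT   -- the lock is released with the transaction
END
ELSE
    ROLLBACK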
A: It would be safer and cleaner if the insert into Policy simply hit a uniqueness constraint on any attempt to insert a duplicate. Raising the isolation level can lower concurrency and lead to other nasty issues like deadlocks.
Another way is to always insert Policy row, then roll it back if Package has been attached to a Policy already:
begin tran (read committed)
/* tentatively insert new Policy */
insert Policy
/* attach Package to Policy if it's still free */
update Package
set Package.policy_id = @policy_id
where Package.package_id = @package_id and Package.policy_id is null
if @@rowcount > 0
commit
else
rollback
This works best when conflicts are rare, which seems to be your case.
A: I agree with the "message queue" idea in aaronjensen's response. If you are concerned about multiple concurrent threads attempting to update the same row of data simultaneously, you should instead have the threads insert their data into a work queue, which is then processed sequentially by a single thread. This significantly reduces contention on the database, because the target table is updated by only one thread instead of "N", and the work queue operations are limited to inserts by the messaging threads, and a read/update by the data processing thread.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: To Update blindly or to Update Where? I have a table that holds information about cities in a game, you can build one building each turn and this is recorded with the value "usedBuilding".
Each turn I will run a script that alters usedBuilding to 0, the question is, which of the following two ways is faster and does it actually matter which way is used?
UPDATE cities SET usedBuilding = 0;
UPDATE cities SET usedBuilding = 0 WHERE usedBuilding = 1;
A: In general, the 2nd case (with the WHERE clause) would be faster - as it won't cause trigger evaluation, transaction logging, index updating, etc. on the unused rows.
Potentially - depending on the distribution of 0/1 values, it could actually be faster to update all rows rather than doing the comparison - but that's a pretty degenerate case.
Since ~95% of your query costs are I/O, using the WHERE clause will either make no difference (since the column is not indexed, and you're doing a table scan) or a huge difference (if the column is indexed, or the table partitioned, etc.). Either way, it doesn't hurt.
I'd suspect that for the amount of data you're talking, you won't notice a difference in either execution plans or speed - which makes it academic at best, premature optimization at worst. So, I'd advise to go with whatever logically makes sense for your app.
A: If usedBuilding is indexed, it will be quicker to use the where clause since it will only access/update rows where usedBuilding is true.
If it's not indexed, you'd be doing a full table scan anyway, so it wouldn't make much (any?) difference.
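For reference, adding such an index is a one-liner; just be aware that on a two-valued column the optimizer will typically only use it when the matching rows are a small fraction of the table:
CREATE INDEX idx_cities_usedBuilding ON cities (usedBuilding);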
A: Try both ways in a loop a few thousand times and time them!
It probably depends on: how many records are actually in this table, and whether they all fit in memory or have to be paged to disk. How many buildings are at value 1 before you run the update (I'm guessing this might be 1).
It doesn't matter which way is used, but the shortest one's probably got the least that can go wrong with it. Code that you don't write can't have bugs.
A: How often are these turns happening? How many rows do you expect to have in this table? If the answers are 'less than once a second' and 'less than 10000', just stop worrying.
Unless if you happen to have some sort of academic interest in this, of course.
A: It seems like it would take fewer transactions to execute "UPDATE cities SET usedBuilding = 0;" than the more specific query. The main reason I can think of against this would be if you had more than one state to your column. If it's merely a boolean then it would be fine, but you may want to spend some time thinking about whether that will always be the case.
Indexing could also cause the execution plan to be more efficient using the WHERE clause.
A: The best way to get a definitive answer would be to profile using a lot of sample data under differing scenarios.
A: Indexing won't help you at all unless only a small fraction (say 2%) of the rows have usedBuilding = 1.
However, these two statements are logically different and can mean totally different things.
But if in your case they are the same, then use the one without the WHERE clause.
A: How many rows exactly will you have? I suspect that for a smallish online game, you really don't care.
If you're doing several updates to the "cities" table, it might be a good idea to do them all in one UPDATE statement if possible.
Making any change to a row probably takes just as much I/O as writing the entire row (except of course updating indexed columns also requires index writes), so you lose out by making several UPDATEs which hit lots of rows.
But if you have, say, <1000 rows, you really don't care :)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Is it worth it to use Wap to have cellular webpage? Is it worth it to develop a WAP webpage for cell phones, now that phone browsers are much better than before?
I have a PHP website and I would like to increase its use from portable devices. Is it worth implementing?
A: Frankly, no. I've written WAP pages (it's painful), I've never seen a person use a WAP browser, and for the 4.25 people in the world who do, there are WAP Gateways.
Unless your site is geared towards providing WAP friendly content (e.g. bitesize) that people will reload regularly, it's probably more cost effective to provide a static page with a phone number users can call so that you'll read them the content they want, than spend any time on WAP.
A: It depends what devices you're targeting. If you want to have the broadest possible range of devices accessing your site, then you would do well to tailor specifically to different device capabilities and some of those devices will only support WAP.
However, if you only want to target higher-end, more modern devices (nokia smartphones, iPhone, G1, etc) then you can build a "traditional" HTML/CSS site and it will render just fine.
A: More than being about technology, mobile internet is about different use cases and design. I recommend reading:
Designing for the mobile web, and
Mobile Web Design: Tips & Tricks
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: game programming What libraries can I use for motion and blending in C++ game programming? I need libraries for sound, images, and expansion algorithms like 2xSaI and Super Eagle. I need libraries like FBlend, and also for motion. How do I compile FBlend's functions in Dev-C++? This is for 2D gaming, and the library should be compatible with Dev-C++.
A: I use mainly these two libraries for 3d gaming:
SDL
Simple DirectMedia Layer is a cross-platform multimedia library designed to provide low level access to audio, keyboard, mouse, joystick, 3D hardware via OpenGL, and 2D video framebuffer.
ODE
ODE is an open source, high performance library for simulating rigid body dynamics. It is fully featured, stable, mature and platform independent with an easy to use C/C++ API. It has advanced joint types and integrated collision detection with friction. ODE is useful for simulating vehicles, objects in virtual reality environments and virtual creatures.
A: DevMaster is a nice place to start with.
Check Irrlicht - one of the best, free engines. It is very easy to start and get going.
A: Gosu is a compact, thoughtful library for C++ (with Ruby bindings as well). I'm using Gosu right now for a project, and it lives up to its promise: it is indeed minimal, but it doesn't get in your way.
Alternatively, there is SDL. SDL is ultimately a fairly low-level API for doing 2D graphics with OpenGL.
I used to use ClanLib all the time. It was very feature-rich. However, its development seemed extraordinarily slow, and I eventually moved on. It's certainly worth looking at, though.
For basic physics in 2D, you may find Box2D useful. Its documentation is unfortunately somewhat poor and confusing, but overall it's a good library. Using a 3D physics engine if you're just going to do 2D work is definitely over-kill and will make your job much harder than it needs to be.
Using one of these libraries is not strictly necessary, although I would strongly recommend. It's entirely possible to build a game using OpenGL or Direct3D directly. This route is preferable if you have plans on implementing rather advanced graphical techniques.
As Ben said, gamedev.net is a phenomenal place for questions about game development. I've been viewing the forums there for years now.
Finally, I have an incomplete listing of free game development technologies here, including libraries for languages other than C++.
A: You definitely want to look at shaders. Shaders allow you to use world data or previous frame data to decorate the current scene. Doing so it's relatively easy to create motion blur and other effects using shaders.
I'd recommend reading up on http://gamedev.net and maybe checking out some of the books called Game Programming Gems.
A: Begin using GNU Emacs (gnu.org/software/emacs) and gcc (gcc.gnu.org, or mingw.org for a Windows version). Get comfortable with your OS's shell (command line), enough to create files, change directories and copy/move files.
Read up a bit on Emacs's features (almost endless), then get started (IDE's are there just to hide the command line from you).
SDL gives you access to the computer's video card in a cross platform way, it can also play sound, and there's many third party addons for it (SDL Image, SDL TTF to name a few). However, it does not offer 3d capability, use OpenGL for that (you can use SDL along with OpenGL). I recommend starting with SDL1.2 before SDL2.0, as it's simpler. www.libsdl.org/release/SDL-1.2.15/docs/html/ is a great place to start picking up SDL1.2.
SDL1.2 does not provide any primitive drawing routines. That's not a problem: if you do an internet search you will find many algorithms that can be implemented in a single simple file.
OpenAL is a library for 3d sound.
There exist open source versions of all the software I've referred to in this post.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124322",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: .NET Generic Method Question I'm trying to grasp the concept of .NET Generics and actually use them in my own code but I keep running into a problem.
Can someone try to explain to me why the following setup does not compile?
public class ClassA
{
ClassB b = new ClassB();
public void MethodA<T>(IRepo<T> repo) where T : ITypeEntity
{
b.MethodB(repo);
}
}
public class ClassB
{
IRepo<ITypeEntity> repo;
public void MethodB(IRepo<ITypeEntity> repo)
{
this.repo = repo;
}
}
I get the following error:
cannot convert from IRepo<T> to IRepo<ITypeEntity>
MethodA gets called with an IRepo<DetailType> object parameter where DetailType inherits from ITypeEntity.
I keep thinking that this should compile as I'm constraining T within MethodA to be of type ITypeEntity.
Any thoughts or feedback would be extremely helpful.
Thanks.
Edit: Nick R has a great suggestion but unfortunately in my context, I don't have the option of making ClassA Generic. ClassB could be though.
A: Well this compiles ok. I basically redefined the classes to take generic parameters. This may be ok in your context.
public interface IRepo<TRepo>
{
}
public interface ITypeEntity
{
}
public class ClassA<T> where T : ITypeEntity
{
ClassB<T> b = new ClassB<T>();
public void MethodA(IRepo<T> repo)
{
b.MethodB(repo);
}
}
public class ClassB<T> where T : ITypeEntity
{
IRepo<T> repo;
public void MethodB(IRepo<T> repo)
{
this.repo = repo;
}
}
A: Inheritance doesn't work the same when using generics. As Smashery points out, even if TypeA inherits from TypeB, myType<TypeA> doesn't inherit from myType<TypeB>.
As such, you can't make a call to a method defined as MethodA(myType<TypeB> b) expecting a myType<TypeB> and give it a myType<TypeA> instead. The types in question have to match exactly. Thus, the following won't compile:
myType<TypeA> a; // This should be a myType<TypeB>, even if it contains only TypeA's
public void MethodB(myType<TypeB> b){ /* do stuff */ }
public void Main()
{
MethodB(a);
}
So in your case, you would need to pass in an IRepo<ITypeEntity> to MethodB, even if it only contains DetailTypes. You'd need to do some conversion between the two. If you were using a generic IList, you might do the following:
public void MethodA<T>(IList<T> list) where T : ITypeEntity
{
IList<ITypeEntity> myIList = new List<ITypeEntity>(); // the constraint lets each T be added as an ITypeEntity
foreach(T item in list)
{
myIList.Add(item);
}
b.MethodB(myIList);
}
I hope this is helpful.
A: The problem is a tricky one to get your head around. DetailType may inherit from ITypeEntity, but isn't actually ITypeEntity. Your implementation of DetailType could introduce different functionality, so DetailType implements ITypeEntity but isn't equal to ITypeEntity. I hope that makes sense...
A:
I get the following error: cannot convert from IRepo<T> to IRepo<ITypeEntity>
You are getting this compilation error because IRepo<T> and IRepo<ITypeEntity> are not the same thing. The latter is a specialization of the former. IRepo<T> is a generic type definition, where the type parameter T is a placeholder, and IRepo<ITypeEntity> is a constructed generic type of the generic type definition, where the type parameter T is specified to be ITypeEntity.
I keep thinking that this should compile as I'm constraining T within MethodA to be of type ITypeEntity.
The where constraint does not help here because it only constrains the type you can provide for T at the call-sites for MethodA.
Here is the terminology from the MSDN documentation (see Generics in the .NET Framework) that may help:
*
*A generic type definition is a class, structure, or interface declaration that functions as a template, with placeholders for the types that it can contain or use. For example, the Dictionary<K, V> class can contain two types: keys and values. Because it is only a template, you cannot create instances of a class, structure, or interface that is a generic type definition.
*Generic type parameters, or type parameters, are the placeholders in a generic type or method definition. The Dictionary<K, V> generic type has two type parameters, K and V, that represent the types of its keys and values.
*A constructed generic type, or constructed type, is the result of specifying types for the generic type parameters of a generic type definition.
*A generic type argument is any type that is substituted for a generic type parameter.
*The general term generic type includes both constructed types and generic type definitions.
*Constraints are limits placed on generic type parameters. For example, you might limit a type parameter to types that implement the IComparer<T> generic interface, to ensure that instances of the type can be ordered. You can also constrain type parameters to types that have a particular base class, that have a default constructor, or that are reference types or value types. Users of the generic type cannot substitute type arguments that do not satisfy the constraints.
A: Please see @monoxide's question
And as I said there, checking out Eric Lippert's series of posts on contravariance and covariance for generics will make a lot of this clearer.
A: If B is a subclass of A, that does not mean that Class<B> is a subclass of Class<A>. So, for this same reason, if you say "T is an ITypeEntity", that does not mean that "IRepo<T> is an IRepo<ITypeEntity>". You might have to write your own conversion method if you want to get this working.
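Such a conversion might look like this rough sketch (using IEnumerable<T> instead of IRepo<T>, since the members of IRepo aren't shown in the question; requires System.Collections.Generic):
static IList<ITypeEntity> ToBaseList<T>(IEnumerable<T> items) where T : ITypeEntity
{
    IList<ITypeEntity> result = new List<ITypeEntity>();
    foreach (T item in items)
    {
        // Legal: the constraint guarantees every T is an ITypeEntity
        result.Add(item);
    }
    return result;
}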
A: T is a type variable that will be bound to a particular type in usage. The restriction ensures that that type will represent a subset of the types that implement ITypeEntity, excluding other types that implement the interface.
A: At compile time, even though you're constraining it, the compiler only knows that T in MethodA is some type that implements ITypeEntity; it doesn't know which concrete type it will be, so it won't treat IRepo<T> as an IRepo<ITypeEntity>.
A: This is a redundant use of generics, if T can only ever be an instance of ITypeEntity you shouldn't use generics.
Generics are for when you have multiple types which can be inside something.
A: In the context of wrapping your head around generic methods, allow me to give you a simple generic function. It's a generic equivalent of VB's IIf() (Immediate if), which is itself a poor imitation of the C-style ternary operator (?). It's not useful for anything since the real ternary operator is better, but maybe it will help you understand how generic functions are built and in what contexts they should be applied.
T IIF<T>(bool Expression, T TruePart, T FalsePart)
{
return Expression ? TruePart : FalsePart;
}
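For instance, calling it looks like this (a minimal usage sketch; the variables are made up):
int sign = IIF(x >= 0, 1, -1);
string label = IIF(isDone, "finished", "pending");
Here T is inferred as int in the first call and string in the second, which is the whole point of making the function generic.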
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How to convert an "object" into a function in JavaScript? JavaScript allows functions to be treated as objects--if you first define a variable as a function, you can subsequently add properties to that function. How do you do the reverse, and add a function to an "object"?
This works:
var foo = function() { return 1; };
foo.baz = "qqqq";
At this point, foo() calls the function, and foo.baz has the value "qqqq".
However, if you do the property assignment part first, how do you subsequently assign a function to the variable?
var bar = { baz: "qqqq" };
What can I do now to arrange for bar.baz to have the value "qqqq" and bar() to call the function?
A: Object types are functions and an object itself is a function instantiation.
alert([Array, Boolean, Date, Function, Number, Object, RegExp, String].join('\n\n'))
displays (in FireFox):
function Array() {
[native code]
}
function Boolean() {
[native code]
}
function Date() {
[native code]
}
function Function() {
[native code]
}
function Number() {
[native code]
}
function Object() {
[native code]
}
function RegExp() {
[native code]
}
function String() {
[native code]
}
In particular, note a Function object, function Function() { [native code] }, is defined as a recurrence relation (a recursive definition using itself).
Also, note that the answer 124402#124402 is incomplete regarding 1[50]=5. This DOES assign a property to a Number object and IS valid Javascript. Observe,
alert([
[].prop="a",
true.sna="fu",
(new Date()).tar="fu",
function(){}.fu="bar",
123[40]=4,
{}.forty=2,
/(?:)/.forty2="life",
"abc".def="ghi"
].join("\t"))
displays
a fu fu bar 4 2 life ghi
interpreting and executing correctly according to Javascript's "Rules of Engagement".
Of course there is always a wrinkle, and it manifests with =. An object is often "short-circuited" to its value instead of a full-fledged entity when assigned to a variable. This is an issue with Boolean objects and boolean values.
Explicit object identification resolves this issue.
x=new Number(1); x[50]=5; alert(x[50]);
"Overloading" is quite a legitimate Javascript exercise and explicitly endorsed with mechanisms like prototyping though code obfuscation can be a hazard.
Final note:
alert( 123 . x = "not" );
alert( (123). x = "Yes!" ); /* ()'s elevate to full object status */
A: It's easy to be confused here, but you can't (easily or clearly or as far as I know) do what you want. Hopefully this will help clear things up.
First, every object in Javascript inherits from the Object object.
//these do the same thing
var foo = new Object();
var bar = {};
Second, functions ARE objects in Javascript. Specifically, they're a Function object. The Function object inherits from the Object object. Checkout the Function constructor
var foo = new Function();
var bar = function(){};
function baz(){};
Once you declare a variable to be an "Object" you can't (easily or clearly or as far as I know) convert it to a Function object. You'd need to declare a new Object of type Function (with the function constructor, assigning a variable an anonymous function etc.), and copy over any properties of methods from your old object.
Finally, anticipating a possible question, even once something is declared as a function, you can't (as far as I know) change the functionBody/source.
A: There doesn't appear to be a standard way to do it, but this works.
WHY, however, is the question.
function functionize( obj , func )
{
    var out = func;
    for (var i in obj) { out[i] = obj[i]; }
    return out;
}
x = { a: 1, b: 2 };
x = functionize( x , function(){ return "hello world"; } );
x() ==> "hello world"
There is simply no other way to achieve this. Doing
x = {}
x()
WILL return a "type error", because "x" is an "object" and you can't change it. It's about as sensible as trying to do
x = 1
x[50] = 5
print x[50]
It won't work. 1 is an integer. Integers don't have array methods. You can't make it.
A: Use a temporary variable:
var xxx = function()...
then copy all the properties from the original object:
for (var p in bar) { xxx[p] = bar[p]; }
finally reassign the new function with the old properties to the original variable:
bar = xxx;
A: var A = function(foo) {
var B = function() {
return A.prototype.constructor.apply(B, arguments);
};
B.prototype = A.prototype;
return B;
};
A: NB: Post written in the style of how I solved the issue. I'm not 100% sure it is usable in the OP's case.
I found this post while looking for a way to convert objects created on the server and delivered to the client by JSON / ajax.
Which effectively left me in the same situation as the OP, an object that I wanted to be convert into a function so as to be able to create instances of it on the client.
In the end I came up with this, which is working (so far at least):
var parentObj = {}
parentObj.createFunc = function (model)
{
// allow it to be instantiated
parentObj[model._type] = function()
{
return (function (model)
{
// jQuery used to clone the model
var that = $.extend(true, null, model);
return that;
})(model);
}
}
Which can then be used like:
var data = { _type: "Example", foo: "bar" };
parentObj.createFunc(data);
var instance = new parentObj.Example();
In my case I actually wanted to have functions associated with the resulting object instances, and also be able to pass in parameters at the time of instantiating it.
So my code was:
var parentObj = {};
// base model contains client only stuff
parentObj.baseModel =
{
parameter1: null,
parameter2: null,
parameterN: null,
func1: function ()
{
return this.parameter2;
},
func2: function (inParams)
{
return this._variable2;
}
}
// create a troop type
parentObj.createModel = function (data)
{
var model = $.extend({}, parentObj.baseModel, data);
// allow it to be instantiated
parentObj[model._type] = function(parameter1, parameter2, parameterN)
{
return (function (model)
{
var that = $.extend(true, null, model);
that.parameter1 = parameter1;
that.parameter2 = parameter2;
that.parameterN = parameterN;
return that;
})(model);
}
}
And was called thus:
// models received from an AJAX call
var models = [
{ _type: "Foo", _variable1: "FooVal", _variable2: "FooVal" },
{ _type: "Bar", _variable1: "BarVal", _variable2: "BarVal" },
{ _type: "FooBar", _variable1: "FooBarVal", _variable2: "FooBarVal" }
];
for(var i = 0; i < models.length; i++)
{
parentObj.createModel(models[i]);
}
And then they can be used:
var test1 = new parentObj.Foo(1,2,3);
var test2 = new parentObj.Bar("a","b","c");
var test3 = new parentObj.FooBar("x","y","z");
// test1.parameter1 == 1
// test1._variable1 == "FooVal"
// test1.func1() == 2
// test2.parameter2 == "a"
// test2._variable2 == "BarVal"
// test2.func2() == "BarVal"
// etc
A: Here's the easiest way to do this that I've found:
let bar = { baz: "qqqq" };
bar = Object.assign(() => console.log("do something"), bar)
This uses Object.assign to concisely make copies of all the properties of bar onto a function.
Alternatively you could use some proxy magic.
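For example, a minimal Proxy sketch (the property-lookup strategy here is just one possible choice):
let obj = { baz: "qqqq" };
let callable = new Proxy(function () { return 1; }, {
    get(target, prop) {
        // read properties from the original object first, then the function
        return prop in obj ? obj[prop] : target[prop];
    }
});
callable();   // 1
callable.baz; // "qqqq"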
A: var bar = {
baz: "qqqq",
runFunc: function() {
return 1;
}
};
alert(bar.baz); // should produce qqqq
alert(bar.runFunc()); // should produce 1
I think you're looking for this.
It can also be written like this:
function Bar() {
this.baz = "qqqq";
this.runFunc = function() {
return 1;
}
}
nBar = new Bar();
alert(nBar.baz); // should produce qqqq
alert(nBar.runFunc()); // should produce 1
A:
JavaScript allows functions to be
treated as objects--you can add a
property to a function. How do you do
the reverse, and add a function to an
object?
You appear to be a bit confused. Functions, in JavaScript, are objects. And variables are variable. You wouldn't expect this to work:
var three = 3;
three = 4;
assert(three === 3);
...so why would you expect that assigning a function to your variable would somehow preserve its previous value? Perhaps some annotations will clarify things for you:
// assigns an anonymous function to the variable "foo"
var foo = function() { return 1; };
// assigns a string to the property "baz" on the object
// referenced by "foo" (which, in this case, happens to be a function)
foo.baz = "qqqq";
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50"
}
|
Q: C++ handling very large integers I am using the RSA Algorithm for encryption/decryption, and in order to decrypt the files you have to deal with some pretty big values. More specifically, things like
P = C^d % n
= 62^65 % 133
Now those are really the only calculations that I'll be doing. I have tried using Matt McCutchen's BigInteger Library, but I am getting a lot of compiler errors during linking, such as:
encryption.o(.text+0x187):encryption.cpp: undefined reference to `BigInteger::BigInteger(int)'
encryption.o(.text+0x302):encryption.cpp: undefined reference to `operator<<(std::ostream&, BigInteger const&)'
encryption.o(.text$_ZNK10BigIntegermlERKS_[BigInteger::operator*(BigInteger const&) const]+0x63):encryption.cpp: undefined reference to `BigInteger::multiply(BigInteger const&, BigInteger const&)'
So I was wondering what would be the best way to go about handling the really big integers that come out of the RSA Algorithm.
I heard that a possibility would be to declare your variables as a long long, so...
long long decryptedCharacter;
but I'm not sure exactly how big of an integer that can store.
Well for example, I try to compile and run the following program using dev C++:
#include <iostream>
#include "bigint/BigIntegerLibrary.hh"
using namespace std;
int main()
{
BigInteger a = 65536;
cout << (a * a * a * a * a * a * a * a);
return 0;
}
then I get those errors.
Derek, I thought that by including the BigIntegerLibrary.hh file, the compiler would go through and compile all the necessary files that it will use.
How should I try and compile the program above in order to resolve the linking errors?
A: To see the size of a long long try this:
#include <stdio.h>
int main(void) {
printf("%d\n", (int)sizeof(long long)); /* cast because sizeof yields a size_t */
return 0;
}
On my machine it returns 8 which means 8 bytes which can store 2^64 values.
A: I would try out the GMP library - it is robust, well tested, and commonly used for this type of code.
A: For RSA you need a bignum library. The numbers are way too big to fit into a 64-bit long long. I once had a colleague at university who got an assignment to implement RSA including building his own bignum library.
As it happens, Python has a bignum library. Writing bignum handlers is small enough to fit into a computer science assignment, but still has enough gotchas to make it a non-trivial task. His solution was to use the Python library to generate test data to validate his bignum library.
You should be able to get other bignum libraries.
Alternatively, try implementing a prototype in Python and see if it's fast enough.
A: If you're not implementing RSA as a school assignment or something, then I'd suggest looking at the crypto++ library http://www.cryptopp.com
It's just so easy to implement crypto stuff badly.
A: Here is my approach; it combines fast exponentiation by squaring with modular exponentiation, which reduces the space required.
long long mod_exp (long long n, long long e, long long mod)
{
    /* assumes e >= 1; note the (n % mod) * temp * temp product can still
       overflow long long once mod exceeds roughly 2^21, so use a wider
       intermediate type for big moduli */
    if(e == 1)
    {
        return (n % mod);
    }
    else
    {
        if((e % 2) == 1)
        {
            long long temp = mod_exp(n, (e-1)/2, mod);
            /* reduce n before multiplying to limit overflow */
            return (((n % mod) * temp * temp) % mod);
        }
        else
        {
            long long temp = mod_exp(n, e/2, mod);
            return ((temp*temp) % mod);
        }
    }
}
A: There is more to secure RSA implementation than just big numbers. A simple RSA implementation tends to leak private information through side channels, especially timing (in simple words: computation time depends on the processed data, which allows an attacker to recover some, possibly all, of the private key bits). Good RSA implementations implement countermeasures.
Also, beyond the modular exponentiation, there is the whole padding business, which is not conceptually hard, but, as all I/O and parsing code, has room for subtle bugs. The easiest code to write is the code which has already been written by somebody else.
Another point is that once you have your RSA code up and running, you may begin to envision extensions and other situations, e.g. "what if the private key I want to use is not in RAM but in a smartcard ?". Some existing RSA implementations are actually API which can handle that. In the Microsoft world, you want to lookup CryptoAPI, which is integrated in Windows. You may also want to look at NSS, which is what the Firefox browser uses for SSL.
To sum up: you can build up a RSA-compliant implementation from big integers, but this is more difficult to do correctly than what it usually seems, so my advice is to use an existing RSA implementation.
A: Openssl also has a Bignum type you can use. I've used it and it works well. Easy to wrap in an oo language like C++ or objective-C, if you want.
https://www.openssl.org/docs/crypto/bn.html
Also, in case you didn't know, to find the answer to the equation of this form x^y % z, look up an algorithm called modular exponentiation. Most crypto or bignum libraries will have a function specifically for this computation.
A: Tomek, it sounds like you aren't linking to the BigInteger code correctly. I think you should resolve this problem rather than looking for a new library. I took a look at the source, and BigInteger::BigInteger(int) is most definitely defined. A brief glance indicates that the others are as well.
The link errors you're getting imply that you are either neglecting to compile the BigInteger source, or neglecting to include the resulting object files when you link. Please note that the BigInteger source uses the "cc" extension rather than "cpp", so make sure you are compiling these files as well.
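For example, from the project root something along these lines should work (a sketch; adjust the path to wherever you unpacked the library):
g++ -o encryption encryption.cpp bigint/*.cc
That compiles every .cc file in the library directory and links the results together with your own code in one step.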
A: I'd suggest using gmp, it can handle arbitrarily long ints and has decent C++ bindings.
AFAIK on current hardware/software long longs are 64-bit, so an unsigned one can handle numbers up to (2**64)-1 == 18446744073709551615, which is quite a bit smaller than the numbers you'd have to deal with in RSA.
A: A long int is typically 64 bits which would probably not be enough to handle an integer that large. You'll probably need a bigint library of some kind.
See also this question on Stack Overflow
A: Check out your compiler documentation. Some compilers have types defined such as __int64 that give you their size. Maybe you've got some of them available.
A: Just to note: __int64 and long long are non-standard extensions. Neither one is guaranteed to be supported by all C++ compilers. C++ is based on C89 (it came out in 98, so it couldn't be based on C99)
(C has support for 'long long' since C99)
By the way, I don't think that 64bit integers solve this problem.
A: I have had a lot of success using the LibTomCrypt library for my crypto needs. It's fast, lean, and portable. It can do your RSA for you, or just handle the math if you want.
A: The fact that you have a problem using some biginteger library doesn't mean that it's a bad approach.
Using long long is definitely a bad approach.
As others said already, using a biginteger library is probably a good approach, but you have to post more detail on how you use the mentioned library for us to be able to help you resolve those errors.
A: I used GMP when I wrote the RSA implementation.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
}
|
Q: A way of casting a base type to a derived type I'm not sure if this is a strange thing to do or not, or if it is somehow a code smell...but I was wondering if there was a way (some sort of OOP pattern would be nice) to "cast" a base type to a form of its derived type. I know this makes little sense, as the derived type will have additional functionality that the parent doesn't offer, which is in itself not fundamentally sound. But is there some way to do this? Here is a code example so I can better explain what I'm asking.
public class SomeBaseClass {
public string GetBaseClassName {get;set;}
public bool BooleanEvaluator {get;set;}
}
public class SomeDerivedClass : SomeBaseClass {
public void Insert(SqlConnection connection) {
//...random connection stuff
cmd.Parameters["IsItTrue"].Value = this.BooleanEvaluator;
//...
}
}
public static void Main(object[] args) {
SomeBaseClass baseClass = new SomeBaseClass();
SomeDerivedClass derClass = (SomeDerivedClass)baseClass;
derClass.Insert(new SqlConnection());
}
I know this seems goofy but is there any way to accomplish something of this sort?
A: Downcasting makes sense if you have an object of the derived class but it's referenced by a reference of the base class type, and for some reason you want it back to be referenced by a derived class type reference. In other words, you can downcast to reverse the effect of a previous upcast. But you can't have an object of the base class referenced by a reference of a derived class type.
A: I'm not saying I recommend this, but you could turn the base class into a JSON string and then convert it to the derived class.
SomeDerivedClass layer = JsonConvert.DeserializeObject<SomeDerivedClass>(JsonConvert.SerializeObject(BaseClassObject));
A: No, this is not possible. In a managed language like C#, it just won't work. The runtime won't allow it, even if the compiler lets it through.
You said yourself that this seems goofy:
SomeBaseClass baseClass = new SomeBaseClass();
SomeDerivedClass derClass = (SomeDerivedClass)baseClass;
So ask yourself, is baseClass actually an instance of SomeDerivedClass? No, so the conversion makes no sense. If you need to convert SomeBaseClass to SomeDerivedClass, then you should provide some kind of conversion, either a constructor or a conversion method.
It sounds as if your class hierarchy needs some work, though. In general, it shouldn't be possible to convert a base class instance into a derived class instance. There should generally be data and/or functionality that do not apply to the base class. If the derived class functionality applies to all instances of the base class, then it should either be rolled up into the base class or pulled into a new class that is not part of the base class hierarchy.
A: C# language doesn't permit such operators, but you can still write them and they work:
[System.Runtime.CompilerServices.SpecialName]
public static Derived op_Implicit(Base a) { ... }
[System.Runtime.CompilerServices.SpecialName]
public static Derived op_Explicit(Base a) { ... }
A: Not soundly, in "managed" languages. This is downcasting, and there is no sane way to handle it, for exactly the reason you described (subclasses provide more than base classes - where does this "more" come from?). If you really want a similar behaviour for a particular hierarchy, you could use constructors for derived types that will take the base type as a prototype.
One could build something with reflection that handled the simple cases (more specific types that have no addition state). In general, just redesign to avoid the problem.
Edit: Woops, can't write conversion operators between base/derived types. An oddity of Microsoft trying to "protect you" against yourself. Ah well, at least they're nowhere near as bad as Sun.
A: Try composition instead of inheritance!
It seems to me like you'd be better off passing an instance of SomeBaseClass to the SomeDerivedClass (which will no longer derive base class, and should be renamed as such)
public class BooleanHolder{
public bool BooleanEvaluator {get;set;}
}
public class DatabaseInserter{
BooleanHolder holder;
public DatabaseInserter(BooleanHolder holder){
this.holder = holder;
}
public void Insert(SqlConnection connection) {
//...random connection stuff
cmd.Parameters["IsItTrue"].Value = holder.BooleanEvaluator;
//...
}
}
public static void Main(object[] args) {
BooleanHolder h = new BooleanHolder();
DatabaseInserter derClass = new DatabaseInserter(h);
derClass.Insert(new SqlConnection());
}
Check out http://www.javaworld.com/javaworld/jw-11-1998/jw-11-techniques.html (page 3):
Code reuse via composition
Composition provides an alternative way for Apple to reuse Fruit's implementation of peel(). Instead of extending Fruit, Apple can hold a reference to a Fruit instance and define its own peel() method that simply invokes peel() on the Fruit.
A: Personally I don't think it's worth the hassle of using Inheritance in this case. Instead just pass the base class instance in in the constructor and access it through a member variable.
private class ExtendedClass //: BaseClass - like to inherit but can't
{
public readonly BaseClass bc = null;
public ExtendedClass(BaseClass b)
{
this.bc = b;
}
public int ExtendedProperty
{
    get
    {
        return 0; // placeholder: compute whatever you need from this.bc
    }
}
}
A: Yes - this is a code smell, and pretty much nails down the fact that your inheritance chain is broken.
My guess (from the limited sample) is that you'd rather have DerivedClass operate on an instance of SomeBaseClass - so that "DerivedClass has a SomeBaseClass", rather than "DerivedClass is a SomeBaseClass". This is known as "favor composition over inheritance".
A: As others have noted, the casting you suggest is not really possible.
Would it maybe be a case where the Decorator pattern(Head First extract) can be introduced?
A: Have you thought about an interface that what is currently your base class and your derived class both would implement? I don't know the specifics of why you're implementing this way but it might work.
A: That cannot work. Go look at the help page linked by the compile error.
The best solution is to use factory methods here.
A: This is called downcasting and Seldaek's suggestion to use the "safe" version is sound.
Here's a pretty decent description with code samples.
A: This is not possible, because how would you get the "extra" that the derived class has? How would the compiler know that you mean derivedClass1 and not derivedClass2 when you instantiate it?
I think what you are really looking for is the factory pattern or similar so you can instantiate objects without really knowing the explicit type that's being instantiate. In your example, having the "Insert" method would be an interface that instance the factory returns implements.
A: I don't know why no one has said this, and I may have missed something, but you can use the as keyword, and if you need an explicit check, use is.
SomeDerivedClass derClass = baseClass as SomeDerivedClass; // derClass is null if it isn't SomeDerivedClass
if (baseClass is SomeDerivedClass)
{
    // safe to treat it as SomeDerivedClass here
}
-edit- I asked this question long ago
A: I've recently had the need to extend a simple DTO with a derived type in order to put some more properties on it. I then wanted to reuse some conversion logic I had, from internal database types to the DTOs.
The way I solved it was by enforcing an empty constructor on the DTO classes, using it like this:
class InternalDbType {
public string Name { get; set; }
public DateTime Date { get; set; }
// Many more properties here...
}
class SimpleDTO {
public string Name { get; set; }
// Many more properties here...
}
class ComplexDTO : SimpleDTO {
public string Date { get; set; }
}
static class InternalDbTypeExtensions {
public static TDto ToDto<TDto>(this InternalDbType obj) where TDto : SimpleDTO, new() {
var dto = new TDto {
Name = obj.Name
};
return dto;
}
}
I can then reuse the conversion logic from the simple DTO when converting to the complex one. Of course, I will have to fill in the properties of the complex type in some other way, but with many, many properties of the simple DTO, this really simplifies things IMO.
A: As many answers have pointed out, you can't downcast which makes total sense.
However, in your case, SomeDerivedClass doesn't have properties that will be 'missing'. So you could create an extension method like this:
public static T ToDerived<T>(this SomeBaseClass baseClass)
where T:SomeBaseClass, new()
{
return new T()
{
BooleanEvaluator = baseClass.BooleanEvaluator,
GetBaseClassName = baseClass.GetBaseClassName
};
}
So you aren't casting, just converting:
SomeBaseClass b = new SomeBaseClass();
SomeDerivedClass c = b.ToDerived<SomeDerivedClass>();
This only really works if all of the data in the base class is in the form of readable and writable properties.
A: C++ handles it using a constructor. (See C++ typecasting.) It seems like an oversight to me. Many of you have brought up the issue of what the process would do with the extra properties. I would answer: what does the compiler do when it creates the derived class and the programmer does not set the properties? I have handled this situation similarly to C++: I create a constructor that takes the base class and then manually set the properties in the constructor. This is definitely preferable to setting a variable in the derived class and breaking the inheritance. I would also choose it over a factory method because I think the resulting code would be cleaner looking.
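A minimal sketch of that constructor approach, using the question's types (the property copies are the part you write by hand):
public class SomeDerivedClass : SomeBaseClass {
    public SomeDerivedClass(SomeBaseClass source) {
        // copy state manually, like a C++ converting constructor would
        this.GetBaseClassName = source.GetBaseClassName;
        this.BooleanEvaluator = source.BooleanEvaluator;
    }
}
Then new SomeDerivedClass(baseClass).Insert(connection) replaces the impossible downcast.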
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "48"
}
|
Q: How do I publish multiple web site projects in the same solution to one folder? I want to keep multiple web site projects in one solution and I want to publish them all together in one folder. What are the steps to do that? Can I publish the whole solution? I am guessing the solution is only a container that is holding different web site projects. I want to publish all these small web site projects into one place and I want to use a single page for the authentication for the whole application. How can I publish these multiple web sites at one location? Do I have to manage web.config?
A: It sounds like what you really need is a single web project (and I'd recommend web application projects rather than web "site" projects... but that's another topic).
If they all share common authentication and root folder, why separate them?
A: You could use an MSBuild or NAnt build script to do that.
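For example, a bare-bones MSBuild sketch (the project names and output folder are placeholders, and the _CopyWebApplication target assumes web application projects rather than web site projects):
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="PublishAll">
    <MSBuild Projects="SiteA\SiteA.csproj;SiteB\SiteB.csproj"
             Targets="ResolveReferences;_CopyWebApplication"
             Properties="WebProjectOutputDir=C:\Publish\;OutDir=C:\Publish\bin\" />
  </Target>
</Project>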
A: I would seriously consider merging the projects into 1, especially if they will be using the same authentication system. Once merged, you would be able to use inherited MasterPages to easily give each section its own "look-and-feel", while still maintaining the site's style.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Reuse MVC architecture; Have two layers of UI: ASP.NET MVC and .NET Winforms Although my question might seem abstract, I hope it's not. Suppose I develop an application, an ASP.NET MVC site, and later I am tasked to build a Winforms client for this application: how much, and how, can I reuse from the existing application?
I defined the models, I defined controllers and views. They all work well.
Now the boss comes asking for a Winforms client, and I am hoping I can reuse the models and the controllers (provided I put them in different assemblies), rewriting only the views (the ASPX views).
Can this be done? How?
A: Have a look at the Northwind starter kit (don't be put off by the Northwind bit); it has various GUIs attached to a layered architecture, including both MVC and Winforms.
It does exactly what you want to achieve.
A: I have done this previously, not with asp.net MVC but with pure asp.net web forms. I used a home-grown MVP (Model-View-Presenter) pattern, and the absolute most important thing to allow the Presenter (== Controller in your case) to be used in a WinForms app was to not reference anything to do with system.web
So the first thing you need to do is introduce interface(s) to wrap any request, response, web etc. stuff, have every Presenter accept these interfaces via Dependency Injection (or make them available to the Presenters by some other technique), and then have the Presenter use those rather than the actual system.web stuff.
Example:
Imagine you want to transfer control from Page A to Page B (which in your winforms app you might want to close form A then open form B).
Interface:
public interface IRuntimeContext
{
void TransferTo(string destination);
}
web implementation:
public class AspNetRuntimeContext : IRuntimeContext
{
    public void TransferTo(string destination)
    {
        HttpContext.Current.Response.Redirect(destination);
    }
}
winforms implementation:
public class WinformsRuntimeContext : IRuntimeContext
{
    public void TransferTo(string destination)
    {
        var r = GetFormByName(destination); // GetFormByName is a helper you'd supply
        r.Show();
    }
}
Now the Presenter (Controller in your case):
public class SomePresenter
{
private readonly IRuntimeContext runtimeContext;
public SomePresenter(IRuntimeContext runtimeContext)
{
this.runtimeContext = runtimeContext;
}
public void SomeAction()
{
// do some work
// then transfer control to another page/form
runtimeContext.TransferTo("somewhereElse");
}
}
I haven't looked at the asp.net MVC implementation in detail but I hope this gives you some indication that it will probably be a lot of work to enable the scenario you are after.
You may instead want to consider accepting that you will have to re-code the View and Controller for the different platforms, and instead concentrate on keeping your controllers extremely thin and putting the bulk of your code in a service layer that can be shared.
Good Luck!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: What is the best way to run asynchronous jobs in a Rails application? I know there are several plugins that do asynchronous processing. Which one is the best one and why?
The ones I know about are:
*
*BackgrounDRb
A: I'll add DJ (Delayed Job) to the list - http://blog.leetsoft.com/2008/2/17/delayed-job-dj
The github guys recently gave it a great review: http://github.com/blog/197-the-new-queue
A: starling and workling seem pretty interesting (see the screencast) if you might have several such process, and you want to queue them.
you might also be interested by the previous screencast that use rake for background process, and by the future one that will probably be about another solution to the same question.
A: Whether something is the 'best' solution really depends on what the problem is you're trying to solve. In some cases the best solution will be the most lightweight solution, in other the most heavyweight.
BackgroundRb is probably the most fully-featured Rails background job processor, but it's also the most complicated so will require more investment to get to grips with it. BackgroundRb can probably handle most use cases, from the simple to the complex.
I have heard very good things about Ara T. Howard's Background Job (Bj) which, to quote the README is a brain dead simple zero admin background priority queue for Rails. This is a much more lightweight solution and may be preferable to BackgroundRb for a majority of scenarios as a result.
If all you want is a solution for infrequent offline batch-style processing then script/runner which comes with all Rails apps would be more than adequate.
For further reading you might want to look at HowToRunBackgroundJobsInRails from the Rails Wiki.
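For example, the script/runner route can be as simple as a cron entry (a sketch; Report.generate_all stands in for whatever batch method your app actually has):
# run every night at 2am
0 2 * * * cd /var/www/myapp && ruby script/runner -e production "Report.generate_all"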
A: Resque can help you, too.
It's a very nice tool for creating background jobs, placing those jobs on multiple queues, and processing them later.
The Github's guys have created and use it.
The article below may help you get started:
http://rubylearning.com/blog/2010/11/08/do-you-know-resque/
A: BackgrounDRb - Pros: Full featured, messaging, Cons: Threaded (eek - Rails isn't thread safe!), complex
Daemon Generator - Pros: Simple, runs jobs and thats it!, Cons: None of that fancy messaging stuff.
A: Starling + Workling plugin is dead simple. Plus, it uses Memcached which is simple, tested and scalable.
A: We use Cron. Easy to set up, easy to maintain, and it Always Works.
BackgroundRb will eat your brain.
A: BackgrounDRb is not threaded, it's completely process-based. It only has a thread-pool feature, which the user can use if he wants to handle IO-bound tasks concurrently.
Try the 1.1 release and let me know (on my blog) or on the mailing list about any issues.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: AD locks my account I developed an application in Visual Studio 2005. Whenever I run that application, Active Directory locks my domain account. I really can't understand what I am doing wrong.
A: I can't tell you exactly what's causing it, but here are some tips for troubleshooting. Install the Account Lockout and Management Tools from Microsoft so you can get a bit more detail about the why's and whens on a lockout.
Here's an excellent article on debugging that walks you through everything and is more useful than me repeating it here: Implementing and Troubleshooting Account Lockout
Using these tools you can see if it's stale credentials used in your app, bad password overloads, or some security elsewhere that's being tripped by your application that your AD policies don't like.
A: Is your application trying to access a network resource such as a database, network file or web server?
Did you accidentally hard-code some outdated network credentials (username and password)?
A: Are you using impersonation? Check your web.config for account info.
A: Are you using authentication to a database, SSAS cube or other item that uses user/password authentication against an AD account?
If this is the case and your application tries three logins with the wrong password, AD may lock out your account (depending on its settings). Analysis Services is quite bad for this.
Nigel.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Shell script to recursively browse a directory and replace a string I need to recursively search directories and replace a string (say http://development:port/URI) with another (say http://production:port/URI) in all the files where ever it's found. Can anyone help?
It would be much better if the script could print out the files that it modified and take the search/replace patterns as input parameters.
Regards.
A: find . -type f | xargs sed -i 's/pattern/replacement/g'
A: Try this:
find . -type f | xargs grep -l development | xargs perl -i.bak -p -e 's(http://development)(http://production)g'
Another approach with slightly more feedback:
find . -type f | while read file
do
grep -q development "$file" && echo "modifying $file" && perl -i.bak -p -e 's(http://development)(http://production)g' "$file"
done
Hope this helps.
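If you want the patterns as input parameters, here is a minimal sketch tying the two ideas together (assumes GNU grep/sed, and patterns that don't contain the | delimiter):
#!/bin/sh
# usage: ./replace.sh <dir> <search> <replace>
dir=$1; search=$2; replace=$3
grep -rl -- "$search" "$dir" | while read -r file
do
    echo "modifying $file"
    sed -i "s|$search|$replace|g" "$file"
done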
A: It sounds like you would benefit from a layer of indirection. (But then, who wouldn't?)
I'm thinking that you could have the special string in just one location. Either reference the configuration settings at runtime, or generate these files with the correct string at build time.
A: Don't try the above within a working SVN / CVS directory, since it will also patch the .svn/.cvs, which is definitely not what you want. To avoid .svn modifications, for example, use:
find . -type f | fgrep -v .svn | xargs sed -i 's/pattern/replacement/g'
A: Use zsh; with advanced globbing you can use only one command.
E.g.:
sed -i 's:pattern:target:g' ./**/*(.)
(The (.) glob qualifier restricts the match to regular files, so sed is not fed directories.)
HTH
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Editing Embedded PowerPoint from Excel VBA I have an embedded PowerPoint presentation in an Excel workbook. How can I edit this (open, copy slides, add data to slides, close) using VBA?
A: 1. Add a reference to the PowerPoint Object Model to your VBA application
From the VBA window, choose Tools | References
Look for Microsoft PowerPoint 12.0 Object Library and check it
2. Select and activate the PowerPoint presentation object
ActiveSheet.Shapes("Object 1").Select
Selection.Verb Verb:=xlOpen
Note: this code assumes that the PowerPoint object is named Object 1 (look in the top left corner to see what it's really named) and that it is on the active sheet.
3. Get a reference to the Presentation object
Dim p As PowerPoint.Presentation
Set p = Selection.Object
4. Manipulate it
All the methods and properties of a presentation object are available to you. Here's an example of adding a slide:
p.Slides.Add 1, ppLayoutBlank
5. Deselect it
The easiest way is just to select a cell.
[a1].Select
Hope that helps!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How would you generate default user profile pictures? I've been admiring StackOverflow's default quilt-like profile pictures (which I notice are also on the Fail Blog) and am curious what program both are using to generate them.
But what I really want to know is: If you were to design the system to create default profile pictures, how would you do it?
I'm looking for ideas on what algorithm you'd use, as well as things like how you would related the image to the user, be it related to their username, or some portrayal of their progress (ie the image gets more complex, or larger, as they gain reputation).
A: This is an editorial, not necessarily an answer.
Those auto-generated avatars on this site come from a service (Gravatar) that focuses exclusively on providing avatars and is therefore the core of their business. For apps that aren't specifically intended to generate and display avatars, I would just go with an empty placeholder (like Facebook). It's a neat feature, but is it worth your development time when a simple placeholder would be just as effective?
A: FWIW, the default pictures are generated by gravatar, which is why you'll see them on more than this site.
A: It's called an Identicon. On Stack Overflow, Gravatar uses your IP address to generate the image.
A: A very good source of images would be flame fractals. They are rather computationally expensive, so simply sourcing them from a project like electric sheep or having them be rendered by the user's computer should be considered to offload the work.
Who wouldn't want default profile pictures like these?
(Example flame fractal icons: http://sheepserver.net/v2d6/gen/202/124809/icon.jpg http://sheepserver.net/v2d6/gen/202/124805/icon.jpg http://sheepserver.net/v2d6/gen/202/125373/i77.jpg http://sheepserver.net/v2d6/gen/202/125431/i116.jpg )
A: Use a Julia set or something like that and set the initial conditions to a hash of the user's email address.
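A sketch of that idea in Python (how the hash bytes map to the Julia constant is an arbitrary choice):
import hashlib

def julia_icon(email, size=64, max_iter=32):
    # derive the Julia constant c from a hash of the email address
    h = hashlib.md5(email.strip().lower().encode()).digest()
    c = complex(h[0] / 255.0 - 0.5, h[1] / 255.0 - 0.5)
    pixels = []
    for y in range(size):
        row = []
        for x in range(size):
            z = complex(3.0 * x / size - 1.5, 3.0 * y / size - 1.5)
            n = 0
            while abs(z) < 2 and n < max_iter:
                z = z * z + c
                n += 1
            row.append(n)  # iteration count becomes a colour index
        pixels.append(row)
    return pixels
The same email always hashes to the same c, so each user gets a stable, distinctive image.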
A: I'd use a jpeg server tool (aspjpg or similar) to manipulate the image on load so it displays their badges within their profile pic.
In fact, using any tool to dynamically generate images is pretty cool. Applying some sort of 3d or flash technology to dynamically create images using random variables for eye spacing or facial structure would be pretty wicked as well.
But ya this is a weird question. hah!
A: I did something similar years back, I used POV-Ray to generate little 3D scenes with torusses (torii ?) and spheres. There were lots of parameters to tweak such as the position, size and colour of each object.
POV-Ray is a scriptable 3D render engine; you can find it here.
Unfortunately my images all looked too similar to each other. I love Gravatar's identicons as uses on this site. I think the symmetry helps and the shapes are unique enough that you can identify users fairly clearly.
A: In Ruby there is a library, http://github.com/swdyh/quilt, to generate them!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: How to share host VPN connection with VM instances in Hyper-V? I'm running my workstation on Server 2008 and a few servers in Hyper-V VM's on that server. I connect to my corporate LAN using VPN from the main OS (the host) but my VM's aren't seeing the servers in the corporate LAN. Internet and local access to my home network work fine. Each of the VMs has one virtual network adapter.
What should I try to make it work?
Maybe I need to provide more details, please ask if needed.
More details:
*
*cannot start multiple VPN connections
*not using NAT through the host
*VM gets IP address from the home network router (DHCP)
A: Like I said, you need to set up some routes. Add a route to your corp LAN via your host as the gateway. The fact alone that you're telling me it gets its IP from home DHCP tells me that is the issue. Your VMs only see one default gateway, and that is to the internet. The VMs have no idea whatsoever that the host has a VPN on it. Adding that route (on the VM machines) causes any requests that your VMs make to the subnet of your corp network to route through your host rather than the home router.
Adding something like this:
route ADD 10.0.0.0 MASK 255.0.0.0 192.168.1.30
on your VMs would do this: any requests made to the 10.*.*.* network would route through the computer with the IP address 192.168.1.30. So replace the 10.0.0.0 and subnet mask with your corp LAN's, and the 192 IP with your host's IP. That should take care of the issue.
A: What type of VPN are you using? Ar you using the built-in windows VPN client, or do you have to install the client ?
You could just set up the VPN client independently on every VM, providing you are allowed multiple simultaneous connections.
I don't think that setting up routes would work because then you will also need to set up routes on your company network.
A: Set up some routes in your routing table. It really depends on how it's set up, but if you can access your corp network fine on the host, then set up the routes in your VM machines.
Also, as I am not familiar with that VM, are the network adapters like VMWare's bridged adapters? If so, you need to set up the route to go through your host.
A: Let me make something clearer. Your servers act as if they are physically separated from your host. So with that in mind, they need to be set up the same way as if they were separated. That means that they need a route in their routing table. Why? Because right now their default route is to the internet via your gateway, NOT your host.
In short, approach the problem the way you would if they were not VMs and they were real servers on your network.
But as I asked in my initial response, are they like VMWare bridged adapters? If they are, what I say stands. If they are not, then that's a different story. For example, if they are set up behind NAT with your host, VPN should already work. Any other situation will require further investigation and more information.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Connection Interrupted. The connection to the server was reset while the page was loading I am calling a PHP-Script belonging to a MySQL/PHP web application using FF3. I run XAMPP on localhost. All I get is this:
Connection Interrupted
The connection to the server was reset while the page was loading.
The network link was interrupted while negotiating a connection. Please try again.
A: There are a number of possible solutions ... depends on the "why" ... so it ends up being a bit of trial and error. On a fresh install, that's tricky to determine. But, if you made a recent "major" change that's a place to start looking - like modifying virtual hosts or adding/enabling XDebug.
Here's a list of things I've used/done/tried in the past
*
*check for infinite loops ... in particular looping through a SQL fetch result, which works 99% of the time except the 1% it doesn't. In one case, I was using the results of two previous queries as the upper and lower bounds of a for loop ... and occasionally got an upper bound of UINT max ... har har har (vomit)
*copying the ./php/libmysql.dll to the windows/system32 directory (Particularly if you see Parent: child process exited with status 3221225477 -- Restarting in your log files ... check out: http://www.java-samples.com/showtutorial.php?tutorialid=1050)
*if you modify PHP's error_reporting at runtime ... in certain circumstances this can cause PHP to degenerate into an unstable state if, say, in your PHP code you modify the superglobals or fiddle around with other deep and personal background system variables (Nah, who would ever do such evil hackery? ahem)
*if you convert your MySQL to something other than MyISAM or mysqli
There is a known bug with MySQL related to MyISAM, the UTF8 character set and indexes (http://bugs.mysql.com/bug.php?id=4541)
The solution is to use the InnoDB engine (e.g. SET GLOBAL storage_engine='InnoDB';)
*
*Doing that changes how new tables are created ... which might slightly alter the way results are returned to a fetch statement ... leading to an infinite loop, a malformed dataset, etc. (although this change should not hang the database itself)
Other helpful items are to ramp up the debug reporting for PHP and apache in their config files and restart the servers. The log files sometimes give a clue as to at least where the problem might reside. If it happens after your page content was finished it's more likely in the php settings. If it's during page construction, check your PHP code. Etc. etc.
Hope the above laundry list helps somebody someday ... probably myself when I run into it again and come back here looking for "how the heck did I fix it last time?" ... :)
A: It's possible that your script could be caught in an infinite loop. If that doesn't apply, then I'd check the error logs like TimB suggested.
A: It sounds like the PHP script you're calling is failing without returning a valid response. Depending on the level of logging that you have set up, this should generate an error in the Apache logfile, which will give you some idea of the problem. I'm not familiar with XAMPP, but you should be able to find out where the logs are, and look for an error that occurred at the time you made your request to the PHP script.
A: copying libmysql.dll to apache\bin folder may help you overcome this strange error
A: I solved this problem by upgrading xampp\php\ext\xdebug\php_xdebug.dll
(changed to PHP XDebug v2.0.5-5.3-vc9)
A: I had the same problem, and this is what I did:
I issued the HTTP GET command through a PHP CLI script, and as it turns out I had declared one class twice somewhere.
By the way, I use AMPPS on a Mac.
Hope this helps someone!
A: Try doing the request with Firebug enabled and see what info you can get out of that; I always find that using wget is helpful for seeing the raw HTTP interaction without worrying about Firefox's UI elements interfering.
A: If you are using certificates for ssl in Windows 2008 Server(iis 7) from old selfssl tool(iis 6), that is the problem. Sometimes Microsoft releases patches which can destruct all these old certificates. The solution is to generate them again.
A: copying libmysql.dll to apache\bin folder may help you overcome this strange error
Indeed this helped me to solve this problem
The connection to the server was reset while the page was loading.
A: Incase the issue is not working this did the trick for me.
1. I got a new zip directory for PHP and connected it with apache
2. I searched for the libmysql in the new php and inserted this to the apache/bin
its this libmysql.dll that is needed there and not the one form mySQL/bin.
ok at least thats the one that worked.
A: I experienced a very similar issue - which doesn't apply to the person who asked this question - but may be of help to others who are reading this page...
I had an issue where in certain cases PHP 5.4 + eAccelerator = connection reset. There was no error output in any log files, and it only happened on certain URLs, which made it difficult to diagnose. Turns out it only happened for certain PHP code / certain PHP files, and was due to some incompatibilities with specific PHP code and eAccelerator. Easiest solution was to disable eAccelerator for that specific site, by adding the following to .htaccess file
php_flag eaccelerator.enable 0
php_flag eaccelerator.optimizer 0
(or equivalent lines in php.ini):
eaccelerator.enable="0"
eaccelerator.optimizer="0"
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: When do transactions start when using (restful) rails Is it the case that the entire restful verb is under a single all-encompassing transaction? That is to say, if I raise an Error in the validation or callbacks at any point in the handling of an UPDATE, DELETE, or CREATE operation, is every database operation that I may have performed in previous callbacks also rolled back? Succinctly, does raising an Error on any callback or validation make it such that no change at all occurs in the database for that verb action?
A: By default there is no database code written inside a transaction, you need to tell it to do that in the code.
def create
Model.transaction do
Model.create!(params[:model])
Model.association.create!(params[:association])
end
rescue ActiveRecord::RecordNotSaved, ActiveRecord::RecordInvalid
flash[:notice] = "That record could not be saved."
render :action => "new"
end
Using the #create! methods will attempt to save the record and if they fail they will raise an exception which will then rollback any code already performed inside the transaction block.
If you don't rescue the action you will be redirected to (I think) a 405.html in your public directory if one exists.
A:
Is it the case that the entire restful verb is under a single all-encompassing transaction?
No
if I raise an Error in the validation or callbacks at any point in the handling of an UPDATE, DELETE, or CREATE operation, is every database operation that I may have performed in previous callbacks also rolled back?
No.
does raising an Error on any callback or validation make it such that no change at all occurs in the database for that verb action?
No.
If you desire this behaviour you can either explicitly create transactions in your controller (see the examples provided by other users), or use an around_filter to attach the behaviour to all your restful actions.
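A minimal sketch of the around_filter idea (Rails 2.x-era syntax; the filter name is made up):
class ApplicationController < ActionController::Base
  around_filter :wrap_in_transaction, :only => [:create, :update, :destroy]

  private

  def wrap_in_transaction
    ActiveRecord::Base.transaction { yield }
  end
end
An exception raised inside the action then rolls back everything performed within the filter's transaction.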
A: Some methods (create, destroy) go to the database immediately. Transactions occur by using the transaction method on classes derived from ActiveRecord as follows:
Student.transaction do
Course.transaction do
course.enroll(student)
student.units += course.units
end
end
(This example is for multiple databases. For a single database, you only need one transaction.)
You can then rollback on these transactions, and exceptions thrown within the transaction are propagated after the rollback.
This depends upon the database having transactions.
NB: save and destroy are wrapped in transactions.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: How to get GCC to use more than two SIMD registers when using intrinsics? I am writing some code and trying to speed it up using SIMD intrinsics SSE2/3. My code is of such nature that I need to load some data into an XMM register and act on it many times. When I'm looking at the assembler code generated, it seems that GCC keeps flushing the data back to the memory, in order to reload something else in XMM0 and XMM1. I am compiling for x86-64 so I have 15 registers. Why is GCC using only two and what can I do to ask it to use more? Is there any way that I can "pin" some value in a register? I added the "register" keyword to my variable definition, but the generated assembly code is identical.
A: Yes, you can. Explicit Reg Vars talks about the syntax you need to pin a variable to a specific register.
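For example, a minimal sketch of that GCC extension (my own illustration rather than code from the linked docs; xmm7 is an arbitrary choice, and the loop assumes n is a multiple of 4):
#include <xmmintrin.h>

void scale(float *out, const float *in, int n)
{
    /* pin this value to a specific SIMD register */
    register __m128 factor asm("xmm7") = _mm_set1_ps(2.0f);
    for (int i = 0; i < n; i += 4)
        _mm_storeu_ps(out + i, _mm_mul_ps(_mm_loadu_ps(in + i), factor));
}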
A: If you're getting to the point where you're specifying individual registers for each intrinsic, you might as well just write the assembly directly, especially given gcc's nasty habit of pessimizing intrinsics unnecessarily in many cases.
A: It sounds like you compiled with optimization disabled, so no variables are kept in registers between C statements, not even int.
Compile with gcc -O3 -march=native to let the compiler make non-terrible asm, optimized for your machine. The default is -O0 with a "generic" target ISA and tuning.
See also Why does clang produce inefficient asm with -O0 (for this simple floating point sum)? for more about why "debug" builds in general are like that, and the fact that register int foo; or register __m128 bar; can stay in a register even in a debug build. But it's much better to actually have the compiler optimize, as well as using registers, if you want your code to run fast overall!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: Using .Net, how can I determine if a type is a Numeric ValueType? Here's an example:
Dim desiredType as Type
if IsNumeric(desiredType) then ...
EDIT: I only know the Type, not the Value as a string.
Ok, so unfortunately I have to cycle through the TypeCode.
But this is a nice way to do it:
if (desiredType.IsArray)
    return 0;
switch (Type.GetTypeCode(desiredType))
{
    case 3:  // TypeCode.Boolean
    case 6:  // TypeCode.Byte
    case 7:  // TypeCode.Int16
    case 9:  // TypeCode.Int32
    case 11: // TypeCode.Int64
    case 13: // TypeCode.Single
    case 14: // TypeCode.Double
    case 15: // TypeCode.Decimal
        return 1;
}
return 0;
A: Great article here Exploring IsNumeric for C#.
Option 1:
Reference Microsoft.VisualBasic.dll, then do the following:
if (Microsoft.VisualBasic.Information.IsNumeric("5"))
{
//Do Something
}
Option 2:
public static bool IsNumeric (object Expression)
{
bool f;
ufloat64 a;
long l;
IConvertible iConvertible = null;
if ( ((Expression is IConvertible)))
{
iConvertible = (IConvertible) Expression;
}
if (iConvertible == null)
{
if ( ((Expression is char[])))
{
Expression = new String ((char[]) Expression);
goto IL_002d; // hopefully inserted by optimizer
}
return 0;
}
IL_002d:
TypeCode typeCode = iConvertible.GetTypeCode ();
if ((typeCode == 18) || (typeCode == 4))
{
string str = iConvertible.ToString (null);
try
{
if ( (StringType.IsHexOrOctValue (str, l)))
{
f = true;
return f;
}
}
catch (Exception )
{
f = false;
return f;
};
return DoubleType.TryParse (str, a);
}
return Utils.IsNumericTypeCode (typeCode);
}
internal static bool IsNumericType (Type typ)
{
bool f;
TypeCode typeCode;
if ( (typ.IsArray))
{
return 0;
}
switch (Type.GetTypeCode (typ))
{
case 3:  // Boolean
case 6:  // Byte
case 7:  // Int16
case 9:  // Int32
case 11: // Int64
case 13: // Single
case 14: // Double
case 15: // Decimal
return 1;
};
return 0;
}
A: If you have a reference to an actual object, here's a simple solution for C# that's very straightforward:
/// <summary>
/// Determines whether the supplied object is a .NET numeric system type
/// </summary>
/// <param name="val">The object to test</param>
/// <returns>true=Is numeric; false=Not numeric</returns>
public static bool IsNumeric(ref object val)
{
if (val == null)
return false;
// Test for numeric type, returning true if match
if
(
val is double || val is float || val is int || val is long || val is decimal ||
val is short || val is uint || val is ushort || val is ulong || val is byte ||
val is sbyte
)
return true;
// Not numeric
return false;
}
A: With all due credit to @SFun28 and @nawfal (thanks!), I used both of their answers, tweaked slightly and came up with these extension methods:
public static class ReflectionExtensions
{
public static bool IsNullable(this Type type) {
return
type != null &&
type.IsGenericType &&
type.GetGenericTypeDefinition() == typeof(Nullable<>);
}
public static bool IsNumeric(this Type type) {
if (type == null || type.IsEnum)
return false;
if (IsNullable(type))
return IsNumeric(Nullable.GetUnderlyingType(type));
switch (Type.GetTypeCode(type)) {
case TypeCode.Byte:
case TypeCode.Decimal:
case TypeCode.Double:
case TypeCode.Int16:
case TypeCode.Int32:
case TypeCode.Int64:
case TypeCode.SByte:
case TypeCode.Single:
case TypeCode.UInt16:
case TypeCode.UInt32:
case TypeCode.UInt64:
return true;
default:
return false;
}
}
}
A: I know this is a VERY late answer, but here is the function I use:
public static bool IsNumeric(Type type)
{
var t = Nullable.GetUnderlyingType(type) ?? type;
return t.IsPrimitive || t == typeof(decimal);
}
If you wanted to exclude char as a numeric type then you can use this example:
return (t.IsPrimitive || t == typeof(decimal)) && t != typeof(char);
According to the MSDN:
The primitive types are Boolean, Byte, SByte, Int16, UInt16, Int32,
UInt32, Int64, UInt64, IntPtr, UIntPtr, Char, Double, and Single.
Note: This check includes IntPtr and UIntPtr.
Here is the same function as a generic extension method (I know this doesn't work for the OP's case, but someone else might find it useful):
public static bool IsNumeric<T>(this T value)
{
var t = Nullable.GetUnderlyingType(value.GetType()) ?? value.GetType();
return t.IsPrimitive || t == typeof(decimal);
}
A: This is how MS has implemented it in System.Dynamic.Utils.TypeUtils, which is an internal class. It turns out that they don't consider System.Decimal to be a numeric type (Decimal is omitted from the enumeration). And interestingly, MS considers System.Char to be numeric. Otherwise it's exactly the same as SFun28's answer. I suppose his answer is "more correct".
internal static bool IsNumeric(Type type)
{
type = type.GetNonNullableType();
if (!type.IsEnum)
{
switch (Type.GetTypeCode(type))
{
case TypeCode.Char:
case TypeCode.SByte:
case TypeCode.Byte:
case TypeCode.Int16:
case TypeCode.UInt16:
case TypeCode.Int32:
case TypeCode.UInt32:
case TypeCode.Int64:
case TypeCode.UInt64:
case TypeCode.Single:
case TypeCode.Double:
return true;
}
}
return false;
}
//where GetNonNullableType is defined as
internal static Type GetNonNullableType(this Type type)
{
if (type.IsNullableType())
{
return type.GetGenericArguments()[0];
}
return type;
}
//where IsNullableType is defined as
internal static bool IsNullableType(this Type type)
{
return type.IsGenericType && type.GetGenericTypeDefinition() == typeof(Nullable<>);
}
A: You can find out if a variable is numeric using the Type.GetTypeCode() method:
TypeCode typeCode = Type.GetTypeCode(desiredType);
if (typeCode == TypeCode.Double || typeCode == TypeCode.Int32 || ...)
return true;
You'll need to complete all the available numeric types in the "..." part ;)
More details here: TypeCode Enumeration
A: A few years late here, but here's my solution (you can choose whether to include boolean). Solves for the Nullable case. XUnit test included
/// <summary>
/// Determines if a type is numeric. Nullable numeric types are considered numeric.
/// </summary>
/// <remarks>
/// Boolean is not considered numeric.
/// </remarks>
public static bool IsNumericType( Type type )
{
if (type == null)
{
return false;
}
switch (Type.GetTypeCode(type))
{
case TypeCode.Byte:
case TypeCode.Decimal:
case TypeCode.Double:
case TypeCode.Int16:
case TypeCode.Int32:
case TypeCode.Int64:
case TypeCode.SByte:
case TypeCode.Single:
case TypeCode.UInt16:
case TypeCode.UInt32:
case TypeCode.UInt64:
return true;
case TypeCode.Object:
if ( type.IsGenericType && type.GetGenericTypeDefinition() == typeof(Nullable<>))
{
return IsNumericType(Nullable.GetUnderlyingType(type));
}
return false;
}
return false;
}
/// <summary>
/// Tests the IsNumericType method.
/// </summary>
[Fact]
public void IsNumericTypeTest()
{
// Non-numeric types
Assert.False(TypeHelper.IsNumericType(null));
Assert.False(TypeHelper.IsNumericType(typeof(object)));
Assert.False(TypeHelper.IsNumericType(typeof(DBNull)));
Assert.False(TypeHelper.IsNumericType(typeof(bool)));
Assert.False(TypeHelper.IsNumericType(typeof(char)));
Assert.False(TypeHelper.IsNumericType(typeof(DateTime)));
Assert.False(TypeHelper.IsNumericType(typeof(string)));
// Arrays of numeric and non-numeric types
Assert.False(TypeHelper.IsNumericType(typeof(object[])));
Assert.False(TypeHelper.IsNumericType(typeof(DBNull[])));
Assert.False(TypeHelper.IsNumericType(typeof(bool[])));
Assert.False(TypeHelper.IsNumericType(typeof(char[])));
Assert.False(TypeHelper.IsNumericType(typeof(DateTime[])));
Assert.False(TypeHelper.IsNumericType(typeof(string[])));
Assert.False(TypeHelper.IsNumericType(typeof(byte[])));
Assert.False(TypeHelper.IsNumericType(typeof(decimal[])));
Assert.False(TypeHelper.IsNumericType(typeof(double[])));
Assert.False(TypeHelper.IsNumericType(typeof(short[])));
Assert.False(TypeHelper.IsNumericType(typeof(int[])));
Assert.False(TypeHelper.IsNumericType(typeof(long[])));
Assert.False(TypeHelper.IsNumericType(typeof(sbyte[])));
Assert.False(TypeHelper.IsNumericType(typeof(float[])));
Assert.False(TypeHelper.IsNumericType(typeof(ushort[])));
Assert.False(TypeHelper.IsNumericType(typeof(uint[])));
Assert.False(TypeHelper.IsNumericType(typeof(ulong[])));
// numeric types
Assert.True(TypeHelper.IsNumericType(typeof(byte)));
Assert.True(TypeHelper.IsNumericType(typeof(decimal)));
Assert.True(TypeHelper.IsNumericType(typeof(double)));
Assert.True(TypeHelper.IsNumericType(typeof(short)));
Assert.True(TypeHelper.IsNumericType(typeof(int)));
Assert.True(TypeHelper.IsNumericType(typeof(long)));
Assert.True(TypeHelper.IsNumericType(typeof(sbyte)));
Assert.True(TypeHelper.IsNumericType(typeof(float)));
Assert.True(TypeHelper.IsNumericType(typeof(ushort)));
Assert.True(TypeHelper.IsNumericType(typeof(uint)));
Assert.True(TypeHelper.IsNumericType(typeof(ulong)));
// Nullable non-numeric types
Assert.False(TypeHelper.IsNumericType(typeof(bool?)));
Assert.False(TypeHelper.IsNumericType(typeof(char?)));
Assert.False(TypeHelper.IsNumericType(typeof(DateTime?)));
// Nullable numeric types
Assert.True(TypeHelper.IsNumericType(typeof(byte?)));
Assert.True(TypeHelper.IsNumericType(typeof(decimal?)));
Assert.True(TypeHelper.IsNumericType(typeof(double?)));
Assert.True(TypeHelper.IsNumericType(typeof(short?)));
Assert.True(TypeHelper.IsNumericType(typeof(int?)));
Assert.True(TypeHelper.IsNumericType(typeof(long?)));
Assert.True(TypeHelper.IsNumericType(typeof(sbyte?)));
Assert.True(TypeHelper.IsNumericType(typeof(float?)));
Assert.True(TypeHelper.IsNumericType(typeof(ushort?)));
Assert.True(TypeHelper.IsNumericType(typeof(uint?)));
Assert.True(TypeHelper.IsNumericType(typeof(ulong?)));
// Testing with GetType because of handling with non-numerics. See:
// http://msdn.microsoft.com/en-us/library/ms366789.aspx
// Using GetType - non-numeric
Assert.False(TypeHelper.IsNumericType((new object()).GetType()));
Assert.False(TypeHelper.IsNumericType(DBNull.Value.GetType()));
Assert.False(TypeHelper.IsNumericType(true.GetType()));
Assert.False(TypeHelper.IsNumericType('a'.GetType()));
Assert.False(TypeHelper.IsNumericType((new DateTime(2009, 1, 1)).GetType()));
Assert.False(TypeHelper.IsNumericType(string.Empty.GetType()));
// Using GetType - numeric types
// ReSharper disable RedundantCast
Assert.True(TypeHelper.IsNumericType((new byte()).GetType()));
Assert.True(TypeHelper.IsNumericType(43.2m.GetType()));
Assert.True(TypeHelper.IsNumericType(43.2d.GetType()));
Assert.True(TypeHelper.IsNumericType(((short)2).GetType()));
Assert.True(TypeHelper.IsNumericType(((int)2).GetType()));
Assert.True(TypeHelper.IsNumericType(((long)2).GetType()));
Assert.True(TypeHelper.IsNumericType(((sbyte)2).GetType()));
Assert.True(TypeHelper.IsNumericType(2f.GetType()));
Assert.True(TypeHelper.IsNumericType(((ushort)2).GetType()));
Assert.True(TypeHelper.IsNumericType(((uint)2).GetType()));
Assert.True(TypeHelper.IsNumericType(((ulong)2).GetType()));
// ReSharper restore RedundantCast
// Using GetType - nullable non-numeric types
bool? nullableBool = true;
Assert.False(TypeHelper.IsNumericType(nullableBool.GetType()));
char? nullableChar = ' ';
Assert.False(TypeHelper.IsNumericType(nullableChar.GetType()));
DateTime? nullableDateTime = new DateTime(2009, 1, 1);
Assert.False(TypeHelper.IsNumericType(nullableDateTime.GetType()));
// Using GetType - nullable numeric types
byte? nullableByte = 12;
Assert.True(TypeHelper.IsNumericType(nullableByte.GetType()));
decimal? nullableDecimal = 12.2m;
Assert.True(TypeHelper.IsNumericType(nullableDecimal.GetType()));
double? nullableDouble = 12.32;
Assert.True(TypeHelper.IsNumericType(nullableDouble.GetType()));
short? nullableInt16 = 12;
Assert.True(TypeHelper.IsNumericType(nullableInt16.GetType()));
int? nullableInt32 = 12;
Assert.True(TypeHelper.IsNumericType(nullableInt32.GetType()));
long? nullableInt64 = 12;
Assert.True(TypeHelper.IsNumericType(nullableInt64.GetType()));
sbyte? nullableSByte = 12;
Assert.True(TypeHelper.IsNumericType(nullableSByte.GetType()));
float? nullableSingle = 3.2f;
Assert.True(TypeHelper.IsNumericType(nullableSingle.GetType()));
ushort? nullableUInt16 = 12;
Assert.True(TypeHelper.IsNumericType(nullableUInt16.GetType()));
uint? nullableUInt32 = 12;
Assert.True(TypeHelper.IsNumericType(nullableUInt32.GetType()));
ulong? nullableUInt64 = 12;
Assert.True(TypeHelper.IsNumericType(nullableUInt64.GetType()));
}
A: ''// Return true if a type is a numeric type.
Private Function IsNumericType(ByVal this As Type) As Boolean
''// All the numeric TypeCode values have a bit within the &HC (1100 binary) mask set,
''// whereas the non-numeric ones do not. Note this includes the Char type, whose
''// TypeCode is 4 (decimal) = 100 (binary).
If this.IsArray Then Return False
If (Type.GetTypeCode(this) And &HC) > 0 Then Return True
Return False
End Function
A: With one of the extension methods defined above (assuming it is named IsNumericType), the call becomes a one-liner:
typeof(decimal?).IsNumericType()
A: Use Type.IsValueType() and TryParse():
public bool IsInteger(Type t)
{
int i;
return t.IsValueType && int.TryParse(t.ToString(), out i);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "49"
}
|
Q: Is there a Max function in SQL Server that takes two values like Math.Max in .NET? I want to write a query like this:
SELECT o.OrderId, MAX(o.NegotiatedPrice, o.SuggestedPrice)
FROM Order o
But this isn't how the MAX function works, right? It is an aggregate function so it expects a single parameter and then returns the MAX of all rows.
Does anyone know how to do it my way?
A: SELECT o.OrderId,
--MAX(o.NegotiatedPrice, o.SuggestedPrice)
(SELECT MAX(v) FROM (VALUES (o.NegotiatedPrice), (o.SuggestedPrice)) AS value(v)) as ChoosenPrice
FROM Order o
A: I would go with the solution provided by kcrumley
Just modify it slightly to handle NULLs
create function dbo.HigherArgumentOrNull(@val1 int, @val2 int)
returns int
as
begin
if @val1 >= @val2
return @val1
if @val1 < @val2
return @val2
return NULL
end
EDIT
Modified after a comment from Mark. As he correctly pointed out, in three-valued logic x > NULL or x < NULL always yields NULL, i.e. an unknown result.
A: SQL Server 2012 introduced IIF:
SELECT
o.OrderId,
IIF( ISNULL( o.NegotiatedPrice, 0 ) > ISNULL( o.SuggestedPrice, 0 ),
o.NegotiatedPrice,
o.SuggestedPrice
)
FROM
Order o
Handling NULLs is recommended when using IIF, because a NULL on either side of your boolean_expression will cause IIF to return the false_value (as opposed to NULL).
A: If you're using SQL Server 2008 (or above), then this is the better solution:
SELECT o.OrderId,
(SELECT MAX(Price)
FROM (VALUES (o.NegotiatedPrice),(o.SuggestedPrice)) AS AllPrices(Price))
FROM Order o
All credit and votes should go to Sven's answer to a related question, "SQL MAX of multiple columns?"
I say it's the "best answer" because:
*
*It doesn't require complicating your code with UNION's, PIVOT's,
UNPIVOT's, UDF's, and crazy-long CASE statments.
*It isn't plagued with the problem of handling nulls, it handles them just fine.
*It's easy to swap out the "MAX" with "MIN", "AVG", or "SUM". You can use any aggregate function to find the aggregate over many different columns.
*You're not limited to the names I used (i.e. "AllPrices" and "Price"). You can pick your own names to make it easier to read and understand for the next guy.
*You can find multiple aggregates using SQL Server 2008's derived_tables like so: SELECT MAX(a), MAX(b) FROM (VALUES (1, 2), (3, 4), (5, 6), (7, 8), (9, 10) ) AS MyTable(a, b)
A: I probably wouldn't do it this way, as it's less efficient than the already mentioned CASE constructs - unless, perhaps, you had covering indexes for both queries. Either way, it's a useful technique for similar problems:
SELECT OrderId, MAX(Price) as Price FROM (
SELECT o.OrderId, o.NegotiatedPrice as Price FROM Order o
UNION ALL
SELECT o.OrderId, o.SuggestedPrice as Price FROM Order o
) as A
GROUP BY OrderId
A: Oops, I just posted a dupe of this question...
The answer is, there is no built in function like Oracle's Greatest, but you can achieve a similar result for 2 columns with a UDF, note, the use of sql_variant is quite important here.
create table #t (a int, b int)
insert #t
select 1,2 union all
select 3,4 union all
select 5,2
-- option 1 - A case statement
select case when a > b then a else b end
from #t
-- option 2 - A union statement
select a from #t where a >= b
union all
select b from #t where b > a
-- option 3 - A udf
create function dbo.GREATEST
(
@a as sql_variant,
@b as sql_variant
)
returns sql_variant
begin
declare @max sql_variant
if @a is null or @b is null return null
if @b > @a return @b
return @a
end
select dbo.GREATEST(a,b)
from #t
kristof posted this answer:
create table #t (id int IDENTITY(1,1), a int, b int)
insert #t
select 1,2 union all
select 3,4 union all
select 5,2
select id, max(val)
from #t
unpivot (val for col in (a, b)) as unpvt
group by id
A: Its as simple as this:
CREATE FUNCTION InlineMax
(
@p1 sql_variant,
@p2 sql_variant
) RETURNS sql_variant
AS
BEGIN
RETURN CASE
WHEN @p1 IS NULL AND @p2 IS NOT NULL THEN @p2
WHEN @p2 IS NULL AND @p1 IS NOT NULL THEN @p1
WHEN @p1 > @p2 THEN @p1
ELSE @p2 END
END;
A: DECLARE @MAX INT
SET @MAX = (SELECT MAX(VALUE)
            FROM (SELECT 1 AS VALUE UNION
                  SELECT 2 AS VALUE) AS T1)
A: You can do something like this:
select case when o.NegotiatedPrice > o.SuggestedPrice
then o.NegotiatedPrice
else o.SuggestedPrice
end
A: SELECT o.OrderID,
CASE WHEN o.NegotiatedPrice > o.SuggestedPrice THEN
o.NegotiatedPrice
ELSE
o.SuggestedPrice
END AS Price
A: For the answer above regarding large numbers, you could do the multiplication before the addition/subtraction. It's a bit bulkier but requires no cast. (I can't speak for speed but I assume it's still pretty quick)
SELECT 0.5 * ((@val1 + @val2) +
ABS(@val1 - @val2))
Changes to
SELECT @val1*0.5+@val2*0.5 +
ABS(@val1*0.5 - @val2*0.5)
at least an alternative if you want to avoid casting.
A: Here's a case example that should handle nulls and will work with older versions of MSSQL. This is based on the inline function in one one of the popular examples:
case
when a >= b then a
else isnull(b,a)
end
A: -- Simple way without "functions" or "IF" or "CASE"
-- Query to select maximum value
SELECT o.OrderId
,(SELECT MAX(v)
FROM (VALUES (o.NegotiatedPrice), (o.SuggestedPrice)) AS value(v)) AS MaxValue
FROM Order o;
A: Can be done in one line:
-- the following expression calculates ==> max(@val1, @val2)
SELECT 0.5 * ((@val1 + @val2) + ABS(@val1 - @val2))
Edit: If you're dealing with very large numbers you'll have to convert the value variables into bigint in order to avoid an integer overflow.
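For example, a sketch of the same one-liner with explicit casts (assuming @val1 and @val2 are int variables):
-- bigint casts keep the intermediate sum/difference from overflowing int
SELECT 0.5 * ((CAST(@val1 AS bigint) + @val2) + ABS(CAST(@val1 AS bigint) - @val2))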
A: CREATE FUNCTION [dbo].[fnMax] (@p1 INT, @p2 INT)
RETURNS INT
AS BEGIN
DECLARE @Result INT
SET @p2 = COALESCE(@p2, @p1)
SELECT
@Result = (
SELECT
CASE WHEN @p1 > @p2 THEN @p1
ELSE @p2
END
)
RETURN @Result
END
A: Here is @Scott Langham's answer with simple NULL handling:
SELECT
o.OrderId,
CASE WHEN (o.NegotiatedPrice > o.SuggestedPrice OR o.SuggestedPrice IS NULL)
THEN o.NegotiatedPrice
ELSE o.SuggestedPrice
END As MaxPrice
FROM Order o
A: Here is an IIF version with NULL handling (based on of Xin's answer):
IIF(a IS NULL OR b IS NULL, ISNULL(a,b), IIF(a > b, a, b))
The logic is as follows, if either of the values is NULL, return the one that isn't NULL (if both are NULL, a NULL is returned). Otherwise return the greater one.
Same can be done for MIN.
IIF(a IS NULL OR b IS NULL, ISNULL(a,b), IIF(a < b, a, b))
A: select OrderId, (
select max([Price]) from (
select NegotiatedPrice [Price]
union all
select SuggestedPrice
) p
) from [Order]
A: In SQL Server 2012 or higher, you can use a combination of IIF and ISNULL (or COALESCE) to get the maximum of 2 values.
Even when 1 of them is NULL.
IIF(col1 >= col2, col1, ISNULL(col2, col1))
Or if you want it to return 0 when both are NULL
IIF(col1 >= col2, col1, COALESCE(col2, col1, 0))
Example snippet:
-- use table variable for testing purposes
declare @Order table
(
OrderId int primary key identity(1,1),
NegotiatedPrice decimal(10,2),
SuggestedPrice decimal(10,2)
);
-- Sample data
insert into @Order (NegotiatedPrice, SuggestedPrice) values
(0, 1),
(2, 1),
(3, null),
(null, 4);
-- Query
SELECT
o.OrderId, o.NegotiatedPrice, o.SuggestedPrice,
IIF(o.NegotiatedPrice >= o.SuggestedPrice, o.NegotiatedPrice, ISNULL(o.SuggestedPrice, o.NegotiatedPrice)) AS MaxPrice
FROM @Order o
Result:
OrderId NegotiatedPrice SuggestedPrice MaxPrice
1 0,00 1,00 1,00
2 2,00 1,00 2,00
3 3,00 NULL 3,00
4 NULL 4,00 4,00
But if one needs the maximum of multiple columns?
Then I suggest a CROSS APPLY on an aggregation of the VALUES.
Example:
SELECT t.*
, ca.[Maximum]
, ca.[Minimum], ca.[Total], ca.[Average]
FROM SomeTable t
CROSS APPLY (
SELECT
MAX(v.col) AS [Maximum],
MIN(v.col) AS [Minimum],
SUM(v.col) AS [Total],
AVG(v.col) AS [Average]
FROM (VALUES (t.Col1), (t.Col2), (t.Col3), (t.Col4)) v(col)
) ca
This has the extra benefit that this can calculate other things at the same time.
A: You'd need to make a User-Defined Function if you wanted to have syntax similar to your example, but could you do what you want to do, inline, fairly easily with a CASE statement, as the others have said.
The UDF could be something like this:
create function dbo.InlineMax(@val1 int, @val2 int)
returns int
as
begin
if @val1 > @val2
return @val1
return isnull(@val2,@val1)
end
... and you would call it like so ...
SELECT o.OrderId, dbo.InlineMax(o.NegotiatedPrice, o.SuggestedPrice)
FROM Order o
A: Try this. It can handle more than 2 values
SELECT Max(v) FROM (VALUES (1), (2), (3)) AS value(v)
A: I don't think so. I wanted this the other day. The closest I got was:
SELECT
o.OrderId,
CASE WHEN o.NegotiatedPrice > o.SuggestedPrice THEN o.NegotiatedPrice
ELSE o.SuggestedPrice
END
FROM Order o
A: Why not try IIF function (requires SQL Server 2012 and later)
IIF(a>b, a, b)
That's it.
(Extra hint: be careful when either a or b is NULL; in that case a>b does not evaluate to true, so b will be returned. Also, designing columns to allow NULL is generally not good practice.)
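A quick demonstration of that caveat, using two hypothetical variables:
DECLARE @a int = NULL, @b int = 1;
SELECT IIF(@a > @b, @a, @b); -- @a > @b is unknown (not true), so the false branch returns @b = 1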
A: The other answers are good, but if you have to worry about having NULL values, you may want this variant:
SELECT o.OrderId,
CASE WHEN ISNULL(o.NegotiatedPrice, o.SuggestedPrice) > ISNULL(o.SuggestedPrice, o.NegotiatedPrice)
THEN ISNULL(o.NegotiatedPrice, o.SuggestedPrice)
ELSE ISNULL(o.SuggestedPrice, o.NegotiatedPrice)
END
FROM Order o
A: Sub Queries can access the columns from the Outer query so you can use this approach to use aggregates such as MAX across columns. (Probably more useful when there is a greater number of columns involved though)
;WITH [Order] AS
(
SELECT 1 AS OrderId, 100 AS NegotiatedPrice, 110 AS SuggestedPrice UNION ALL
SELECT 2 AS OrderId, 1000 AS NegotiatedPrice, 50 AS SuggestedPrice
)
SELECT
o.OrderId,
(SELECT MAX(price)FROM
(SELECT o.NegotiatedPrice AS price
UNION ALL SELECT o.SuggestedPrice) d)
AS MaxPrice
FROM [Order] o
A: YES, THERE IS.
T-SQL (SQL Server 2022 (16.x)) now supports GREATEST/LEAST functions:
MAX/MIN as NON-aggregate function
This is now live for Azure SQL Database and SQL Managed Instance. It will roll into the next version of SQL Server.
Logical Functions - GREATEST (Transact-SQL)
This function returns the maximum value from a list of one or more expressions.
GREATEST ( expression1 [ ,...expressionN ] )
So in this case:
SELECT o.OrderId, GREATEST(o.NegotiatedPrice, o.SuggestedPrice)
FROM [Order] o;
db<>fiddle demo
A: In its simplest form...
CREATE FUNCTION fnGreatestInt (@Int1 int, @Int2 int )
RETURNS int
AS
BEGIN
IF @Int1 >= ISNULL(@Int2,@Int1)
RETURN @Int1
ELSE
RETURN @Int2
RETURN NULL --Never Hit
END
A: For SQL Server 2012:
SELECT
o.OrderId,
IIF( o.NegotiatedPrice >= o.SuggestedPrice,
o.NegotiatedPrice,
ISNULL(o.SuggestedPrice, o.NegotiatedPrice)
)
FROM
Order o
A: Expanding on Xin's answer and assuming the comparison value type is INT, this approach works too:
SELECT IIF(ISNULL(@A, -2147483648) > ISNULL(@B, -2147483648), @A, @B)
This is a full test with example values:
DECLARE @A AS INT
DECLARE @B AS INT
SELECT @A = 2, @B = 1
SELECT IIF(ISNULL(@A, -2147483648) > ISNULL(@B, -2147483648), @A, @B)
-- 2
SELECT @A = 2, @B = 3
SELECT IIF(ISNULL(@A, -2147483648) > ISNULL(@B, -2147483648), @A, @B)
-- 3
SELECT @A = 2, @B = NULL
SELECT IIF(ISNULL(@A, -2147483648) > ISNULL(@B, -2147483648), @A, @B)
-- 2
SELECT @A = NULL, @B = 1
SELECT IIF(ISNULL(@A, -2147483648) > ISNULL(@B, -2147483648), @A, @B)
-- 1
A: In Presto you can use
SELECT array_max(ARRAY[o.NegotiatedPrice, o.SuggestedPrice])
A: In MemSQL do the following:
-- DROP FUNCTION IF EXISTS InlineMax;
DELIMITER //
CREATE FUNCTION InlineMax(val1 INT, val2 INT) RETURNS INT AS
DECLARE
val3 INT = 0;
BEGIN
IF val1 > val2 THEN
RETURN val1;
ELSE
RETURN val2;
END IF;
END //
DELIMITER ;
SELECT InlineMax(1,2) as test;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "609"
}
|
Q: Why does Hibernate seem to be designed for short lived sessions? I know this is a subjective question, but why does Hibernate seem to be designed for short lived sessions? Generally in my apps I create DAOs to abstract my data layer, but since I can't predict how the entity objects are going to be used some of its collections are lazy loaded, or I should say fail to load once the session is closed.
Why did they not design it so that it would automatically re-open the session, or have sessions always stay open?
A: Because once you move out of your transaction boundary you can't hit the database again without starting a new transaction. Having long-running transactions 'just in case' is a bad thing (tm).
I guess you want to lazy load object from your view - take a look here for some options. I prefer to define exactly how much of the object map is going to be returned by my session facade methods. I find this makes it easier to unit test and to performance test my business tier.
A: I worked on a desktop app that used EJB and Hibernate. We had to set lazy=false everywhere, because when the objects get serialized, they lose their ability to be fetched from the backend. That's just how it goes, unfortunately.
If you are concerned with performance, you could use caching on the backend so that your non-lazy fetches are not as painful.
A: You're looking for the OpenSessionInView pattern, which is essentially a conceptual filter (and sometimes implemented as a servlet filter) that detects when a session needs to be transparently reopened. Several frameworks implement this so it handles it automagically.
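A minimal sketch of that filter idea follows; HibernateUtil is a hypothetical helper holding the SessionFactory, and the factory is assumed to be configured for managed session contexts (frameworks such as Spring ship a ready-made equivalent):
import java.io.IOException;
import javax.servlet.*;
import org.hibernate.Session;
import org.hibernate.context.ManagedSessionContext;

public class OpenSessionInViewFilter implements Filter {
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        Session session = HibernateUtil.getSessionFactory().openSession(); // hypothetical helper
        try {
            ManagedSessionContext.bind(session); // becomes the "current" session
            chain.doFilter(req, res);            // lazy loading now works while the view renders
        } finally {
            ManagedSessionContext.unbind(HibernateUtil.getSessionFactory());
            session.close();
        }
    }
    public void init(FilterConfig config) {}
    public void destroy() {}
}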
A: I'm writing a desktop application so using a filter isn't applicable.
A: Connections are a scarce resource that need to be recycled as soon as you are done using them. If you are also using connection pooling, getting another one when you need it should be quick. This is the architecture that you have to use to make websites scale -- even though you are a desktop app, their use-cases probably concentrate on scalable sites.
If you look at MS ADO.NET, you will see a similar focus on keeping connections open for a short time -- they have a whole offline model for updating data disconnected and then applying to a database when you are ready.
A: Hibernate is designed as a way to map objects to relational database tables. It accomplishes that job very well. But it can't please everybody all of the time. I think there is some complexity in learning how initialization works, but once you get the hang of it, it makes sense. I don't know if it was necessarily "designed" specifically to anger you; it's just the way it happened.
If it was going to magically reopen sessions in non-webapps I think the complexity of learning the framework would far outweight the benefits.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How do you pre-size an array in Lua? I've got a Lua program that seems to be slower than it ought to be. I suspect the issue is that I'm adding values to an associative array one at a time and the table has to allocate new memory each time.
There did seem to be a table.setn function, but it fails under Lua 5.1.3:
stdin:1: 'setn' is obsolete
stack traceback:
[C]: in function 'setn'
stdin:1: in main chunk
[C]: ?
I gather from the Google searching I've done that this function was deprecated in Lua 5.1, but I can't find what (if anything) replaced the functionality.
Do you know how to pre-size a table in Lua?
Alternatively, is there some other way to avoid memory allocation when you add an object to a table?
A: static int new_sized_table( lua_State *L )
{
int asize = lua_tointeger( L, 1 );
int hsize = lua_tointeger( L, 2 );
lua_createtable( L, asize, hsize );
return( 1 );
}
...
lua_pushcfunction( L, new_sized_table );
lua_setglobal( L, "sized_table" );
Then, in Lua,
array = function(size) return sized_table(size,0) end
a = array(10)
As a quick hack to get this running you can add the C to lua.c.
A: I don't think you can - it's not an array, it's an associative array, like a perl hash or an awk array.
http://www.lua.org/manual/5.1/manual.html#2.5.5
I don't think you can preset its size meaningfully from the Lua side.
If you're allocating the array on the C side, though, the
void lua_createtable (lua_State *L, int narr, int nrec);
may be what you need.
Creates a new empty table and pushes
it onto the stack. The new table has
space pre-allocated for narr array
elements and nrec non-array elements.
This pre-allocation is useful when you
know exactly how many elements the
table will have. Otherwise you can use
the function lua_newtable.
A: Let me focus more on your question:
adding values to an associative array
one at a time
Tables in Lua are associative, but using them in an array form (1..N) is optimized. They have double faces, internally.
So.. If you indeed are adding values associatively, follow the rules above.
If you are using indices 1..N, you can force a one-time size readjust by setting t[100000]= something. This should work until the limit of optimized array size, specified within Lua sources (2^26 = 67108864). After that, everything is associative.
p.s. The old 'setn' method handled the array part only, so it's no use for associative usage (ignore those answers).
p.p.s. Have you studied the general tips for keeping Lua performance high? I.e. be aware of table creation costs and reuse tables rather than creating new ones, and use 'local print=print' and the like to avoid global accesses.
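For instance, a minimal sketch of the 'localize globals' tip (the loop body is an arbitrary workload):
local floor = math.floor -- cache the global lookup in a local once
local t = {}
for i = 1, 1000000 do
    t[i] = floor(i / 2) -- a local is cheaper to access than math.floor's two table lookups
end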
A: There is still an internal luaL_setn and you can compile Lua so that
it is exposed as table.setn. But it looks like that it won't help
because the code doesn't seem to do any pre-extending.
(Also the setn as commented above the setn is related to the array part
of a Lua table, and you said that your are using the table as an associative
array)
The good part is that even if you add the elements one by one, Lua does not
increase the array that way. Instead it uses a more reasonable strategy. You still
get multiple allocations for a larger array but the performance is better than
getting a new allocation each time.
A: Although this doesn't answer your main question, it answers your second question:
Alternatively, is there some other way to avoid memory allocation when you add an object to a table?
If you're running Lua in a custom application, as I guess you are since you're doing C coding, I suggest you replace the allocator with Loki's small-object allocator; it reduced my memory allocations 100+ fold. This improved performance by avoiding round trips to the kernel, and made me a much happier programmer :)
Anyway, I tried other allocators, but they were more general and provide guarantees that don't benefit Lua applications (such as thread safety, large-object allocation, etc.). Also, writing your own small-object allocator can be a good week of programming and debugging to get just right, and after searching for an available solution, Loki's allocator was the easiest and fastest I found for this problem.
A: If you declare your table in code with a specific amount of items, like so:
local tab = { 0, 1, 2, 3, 4, 5, ... , n }
then Lua will create the table with memory already allocated for at least n items.
However, Lua uses the 2x incremental memory allocation technique, so adding an item to a table should rarely force a reallocation.
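As a rough sketch of that idea, you can grow the array part once up front and then overwrite the slots; the second loop below reuses the memory allocated by the first:
local n = 100000
local t = {}
for i = 1, n do t[i] = false end -- all growth/rehashing happens here
for i = 1, n do t[i] = i * 2 end -- these writes reuse the already-allocated slots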
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: .net - How do you Register a startup script? I have limited experience with .net. My app throws an error this.dateTimeFormat is undefined which I tracked down to a known ajax bug. The workaround posted said to:
"Register the following as a startup script:"
Sys.CultureInfo.prototype._getAbbrMonthIndex = function(value)
{
if (!this._upperAbbrMonths) {
this._upperAbbrMonths = this._toUpperArray(this.dateTimeFormat.AbbreviatedMonthNames);
}
return Array.indexOf(this._upperAbbrMonths, this._toUpper(value));
};
So how do I do this? Do I add the script to the bottom of my aspx file?
A: You would use ClientScriptManager.RegisterStartupScript()
string str = @"Sys.CultureInfo.prototype._getAbbrMonthIndex = function(value) {
if (!this._upperAbbrMonths) {
this._upperAbbrMonths = this._toUpperArray(this.dateTimeFormat.AbbreviatedMonthNames);
}
return Array.indexOf(this._upperAbbrMonths, this._toUpper(value));
};";
if(!ClientScriptManager.IsStartupScriptRegistered("MyScript"){
ClientScriptManager.RegisterStartupScript(this.GetType(), "MyScript", str, true)
}
A: I had the same problem in my web application (this.datetimeformat is undefined), indeed it is due to a bug in Microsoft Ajax and this function over-rides the error causing function in MS Ajax.
But there are some problems with the code above. Here's the correct version.
string str = @"Sys.CultureInfo.prototype._getAbbrMonthIndex = function(value) {
if (!this._upperAbbrMonths) {
this._upperAbbrMonths = this._toUpperArray(this.dateTimeFormat.AbbreviatedMonthNames);
}
return Array.indexOf(this._upperAbbrMonths, this._toUpper(value));
};";
ClientScriptManager cs = Page.ClientScript;
if(!cs.IsStartupScriptRegistered("MyScript"))
{
cs.RegisterStartupScript(this.GetType(), "MyScript", str, true);
}
Put in the Page_Load event of your web page in the codebehind file. If you're using Master Pages, put it in the your child page, and not the master page, because the code in the child pages will execute before the Master page and if this is in the codebehind of Master page, you will still get the error if you're using AJAX on the child pages.
A: Put it in the header portion of the page
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: How to make HTTP requests in PHP and not wait on the response Is there a way in PHP to make HTTP calls and not wait for a response? I don't care about the response, I just want to do something like file_get_contents(), but not wait for the request to finish before executing the rest of my code. This would be super useful for setting off "events" of a sort in my application, or triggering long processes.
Any ideas?
A: *
*Fake a request abortion using CURL setting a low CURLOPT_TIMEOUT_MS
*set ignore_user_abort(true) to keep processing after the connection closed.
With this method no need to implement connection handling via headers and buffer too dependent on OS, Browser and PHP version
Master process
function async_curl($background_process=''){
//-------------get curl contents----------------
$ch = curl_init($background_process);
curl_setopt_array($ch, array(
CURLOPT_RETURNTRANSFER => true,
CURLOPT_NOSIGNAL => 1, //to timeout immediately if the value is < 1000 ms
CURLOPT_TIMEOUT_MS => 50, //The maximum number of mseconds to allow cURL functions to execute
CURLOPT_VERBOSE => 1,
CURLOPT_HEADER => 1
));
$out = curl_exec($ch);
//-------------parse curl contents----------------
//$header_size = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
//$header = substr($out, 0, $header_size);
//$body = substr($out, $header_size);
curl_close($ch);
return true;
}
async_curl('http://example.com/background_process_1.php');
Background process
ignore_user_abort(true);
//do something...
NB
If you want cURL to timeout in less than one second, you can use
CURLOPT_TIMEOUT_MS, although there is a bug/"feature" on "Unix-like
systems" that causes libcurl to timeout immediately if the value is <
1000 ms with the error "cURL Error (28): Timeout was reached". The
explanation for this behavior is:
[...]
The solution is to disable signals using CURLOPT_NOSIGNAL
Resources
*
*curl timeout less than 1000ms always fails?
*http://www.php.net/manual/en/function.curl-setopt.php#104597
*http://php.net/manual/en/features.connection-handling.php
A: The swoole extension. https://github.com/matyhtf/swoole
Asynchronous & concurrent networking framework for PHP.
$client = new swoole_client(SWOOLE_SOCK_TCP, SWOOLE_SOCK_ASYNC);
$client->on("connect", function($cli) {
$cli->send("hello world\n");
});
$client->on("receive", function($cli, $data){
echo "Receive: $data\n";
});
$client->on("error", function($cli){
echo "connect fail\n";
});
$client->on("close", function($cli){
echo "close\n";
});
$client->connect('127.0.0.1', 9501, 0.5);
A: The answer I'd previously accepted didn't work. It still waited for responses. This does work though, taken from How do I make an asynchronous GET request in PHP?
function post_without_wait($url, $params)
{
foreach ($params as $key => &$val) {
if (is_array($val)) $val = implode(',', $val);
$post_params[] = $key.'='.urlencode($val);
}
$post_string = implode('&', $post_params);
$parts=parse_url($url);
$fp = fsockopen($parts['host'],
isset($parts['port'])?$parts['port']:80,
$errno, $errstr, 30);
$out = "POST ".$parts['path']." HTTP/1.1\r\n";
$out.= "Host: ".$parts['host']."\r\n";
$out.= "Content-Type: application/x-www-form-urlencoded\r\n";
$out.= "Content-Length: ".strlen($post_string)."\r\n";
$out.= "Connection: Close\r\n\r\n";
if (isset($post_string)) $out.= $post_string;
fwrite($fp, $out);
fclose($fp);
}
A: let me show you my way :)
needs nodejs installed on the server
(my server sends 1000 https get request takes only 2 seconds)
url.php :
<?
$urls = array_fill(0, 100, 'http://google.com/blank.html');
function execinbackground($cmd) {
if (substr(php_uname(), 0, 7) == "Windows"){
pclose(popen("start /B ". $cmd, "r"));
}
else {
exec($cmd . " > /dev/null &");
}
}
fwrite(fopen("urls.txt","w"), implode("\n",$urls));
execinbackground("nodejs urlscript.js urls.txt");
// do your work here while the GET requests are being executed...
?>
urlscript.js >
var https = require('https');
var url = require('url');
var http = require('http');
var fs = require('fs');
var dosya = process.argv[2];
var logdosya = 'log.txt';
var count=0;
http.globalAgent.maxSockets = 300;
https.globalAgent.maxSockets = 300;
setTimeout(timeout,100000); // maximum execution time (in ms)
function trim(string) {
return string.replace(/^\s*|\s*$/g, '')
}
fs.readFile(process.argv[2], 'utf8', function (err, data) {
if (err) {
throw err;
}
parcala(data);
});
function parcala(data) {
var data = data.split("\n");
count=''+data.length+'-'+data[1];
data.forEach(function (d) {
req(trim(d));
});
/*
fs.unlink(dosya, function d() {
console.log('<%s> file deleted', dosya);
});
*/
}
function req(link) {
var linkinfo = url.parse(link);
if (linkinfo.protocol == 'https:') {
var options = {
host: linkinfo.host,
port: 443,
path: linkinfo.path,
method: 'GET'
};
https.get(options, function(res) {res.on('data', function(d) {});}).on('error', function(e) {console.error(e);});
} else {
var options = {
host: linkinfo.host,
port: 80,
path: linkinfo.path,
method: 'GET'
};
http.get(options, function(res) {res.on('data', function(d) {});}).on('error', function(e) {console.error(e);});
}
}
process.on('exit', onExit);
function onExit() {
log();
}
function timeout()
{
console.log("i am too far gone");process.exit();
}
function log()
{
var fd = fs.openSync(logdosya, 'a+');
fs.writeSync(fd, dosya + '-'+count+'\n');
fs.closeSync(fd);
}
A: You can use non-blocking sockets and one of pecl extensions for PHP:
*
*http://php.net/event
*http://php.net/libevent
*http://php.net/ev
*https://github.com/m4rw3r/php-libev
You can use library which gives you an abstraction layer between your code and a pecl extension: https://github.com/reactphp/event-loop
You can also use async http-client, based on the previous library: https://github.com/reactphp/http-client
See others libraries of ReactPHP: http://reactphp.org
Be careful with an asynchronous model.
I recommend to see this video on youtube: http://www.youtube.com/watch?v=MWNcItWuKpI
A: If you control the target that you want to call asynchronously (e.g. your own "longtask.php"), you can close the connection from that end, and both scripts will run in parallel. It works like this:
*
*quick.php opens longtask.php via cURL (no magic here)
*longtask.php closes the connection and continues (magic!)
*cURL returns to quick.php when the connection is closed
*Both tasks continue in parallel
I have tried this, and it works just fine. But quick.php won't know anything about how longtask.php is doing, unless you create some means of communication between the processes.
Try this code in longtask.php, before you do anything else. It will close the connection, but still continue to run (and suppress any output):
while(ob_get_level()) ob_end_clean();
header('Connection: close');
ignore_user_abort();
ob_start();
echo('Connection Closed');
$size = ob_get_length();
header("Content-Length: $size");
ob_end_flush();
flush();
The code is copied from the PHP manual's user contributed notes and somewhat improved.
A: class async_file_get_contents extends Thread{
public $ret;
public $url;
public $finished;
public function __construct($url) {
$this->finished=false;
$this->url=$url;
}
public function run() {
$this->ret=file_get_contents($this->url);
$this->finished=true;
}
}
$afgc=new async_file_get_contents("http://example.org/file.ext");
A: Event Extension
Event extension is very appropriate. It is a port of Libevent library which is designed for event-driven I/O, mainly for networking.
I have written a sample HTTP client that allows to schedule a number of
HTTP requests and run them asynchronously.
This is a sample HTTP client class based on Event extension.
The class allows to schedule a number of HTTP requests, then run them asynchronously.
http-client.php
<?php
class MyHttpClient {
/// @var EventBase
protected $base;
/// @var array Instances of EventHttpConnection
protected $connections = [];
public function __construct() {
$this->base = new EventBase();
}
/**
* Dispatches all pending requests (events)
*
* @return void
*/
public function run() {
$this->base->dispatch();
}
public function __destruct() {
// Destroy connection objects explicitly, don't wait for GC.
// Otherwise, EventBase may be free'd earlier.
$this->connections = null;
}
/**
* @brief Adds a pending HTTP request
*
* @param string $address Hostname, or IP
* @param int $port Port number
* @param array $headers Extra HTTP headers
* @param int $cmd A EventHttpRequest::CMD_* constant
* @param string $resource HTTP request resource, e.g. '/page?a=b&c=d'
*
* @return EventHttpRequest|false
*/
public function addRequest($address, $port, array $headers,
$cmd = EventHttpRequest::CMD_GET, $resource = '/')
{
$conn = new EventHttpConnection($this->base, null, $address, $port);
$conn->setTimeout(5);
$req = new EventHttpRequest([$this, '_requestHandler'], $this->base);
foreach ($headers as $k => $v) {
$req->addHeader($k, $v, EventHttpRequest::OUTPUT_HEADER);
}
$req->addHeader('Host', $address, EventHttpRequest::OUTPUT_HEADER);
$req->addHeader('Connection', 'close', EventHttpRequest::OUTPUT_HEADER);
if ($conn->makeRequest($req, $cmd, $resource)) {
$this->connections []= $conn;
return $req;
}
return false;
}
/**
* @brief Handles an HTTP request
*
* @param EventHttpRequest $req
* @param mixed $unused
*
* @return void
*/
public function _requestHandler($req, $unused) {
if (is_null($req)) {
echo "Timed out\n";
} else {
$response_code = $req->getResponseCode();
if ($response_code == 0) {
echo "Connection refused\n";
} elseif ($response_code != 200) {
echo "Unexpected response: $response_code\n";
} else {
echo "Success: $response_code\n";
$buf = $req->getInputBuffer();
echo "Body:\n";
while ($s = $buf->readLine(EventBuffer::EOL_ANY)) {
echo $s, PHP_EOL;
}
}
}
}
}
$address = "my-host.local";
$port = 80;
$headers = [ 'User-Agent' => 'My-User-Agent/1.0', ];
$client = new MyHttpClient();
// Add pending requests
for ($i = 0; $i < 10; $i++) {
$client->addRequest($address, $port, $headers,
EventHttpRequest::CMD_GET, '/test.php?a=' . $i);
}
// Dispatch pending requests
$client->run();
test.php
This is a sample script on the server side.
<?php
echo 'GET: ', var_export($_GET, true), PHP_EOL;
echo 'User-Agent: ', $_SERVER['HTTP_USER_AGENT'] ?? '(none)', PHP_EOL;
Usage
php http-client.php
Sample Output
Success: 200
Body:
GET: array (
'a' => '1',
)
User-Agent: My-User-Agent/1.0
Success: 200
Body:
GET: array (
'a' => '0',
)
User-Agent: My-User-Agent/1.0
Success: 200
Body:
GET: array (
'a' => '3',
)
...
(Trimmed.)
Note, the code is designed for long-term processing in the CLI SAPI.
For custom protocols, consider using low-level API, i.e. buffer events, buffers. For SSL/TLS communications, I would recommend the low-level API in conjunction with Event's ssl context. Examples:
*
*SSL echo server
*SSL client
Although Libevent's HTTP API is simple, it is not as flexible as buffer events. For example, the HTTP API currently doesn't support custom HTTP methods. But it is possible to implement virtually any protocol using the low-level API.
Ev Extension
I have also written a sample of another HTTP client using Ev extension with sockets in non-blocking mode. The code is slightly more verbose than the sample based on Event, because Ev is a general purpose event loop. It doesn't provide network-specific functions, but its EvIo watcher is capable of listening to a file descriptor encapsulated into the socket resource, in particular.
This is a sample HTTP client based on Ev extension.
Ev extension implements a simple yet powerful general purpose event loop. It doesn't provide network-specific watchers, but its I/O watcher can be used for asynchronous processing of sockets.
The following code shows how HTTP requests can be scheduled for parallel processing.
http-client.php
<?php
class MyHttpRequest {
/// @var MyHttpClient
private $http_client;
/// @var string
private $address;
/// @var string HTTP resource such as /page?get=param
private $resource;
/// @var string HTTP method such as GET, POST etc.
private $method;
/// @var int
private $service_port;
/// @var resource Socket
private $socket;
/// @var double Connection timeout in seconds.
private $timeout = 10.;
/// @var int Chunk size in bytes for socket_recv()
private $chunk_size = 20;
/// @var EvTimer
private $timeout_watcher;
/// @var EvIo
private $write_watcher;
/// @var EvIo
private $read_watcher;
/// @var EvTimer
private $conn_watcher;
/// @var string buffer for incoming data
private $buffer;
/// @var array errors reported by sockets extension in non-blocking mode.
private static $e_nonblocking = [
11, // EAGAIN or EWOULDBLOCK
115, // EINPROGRESS
];
/**
* @param MyHttpClient $client
* @param string $host Hostname, e.g. google.co.uk
* @param string $resource HTTP resource, e.g. /page?a=b&c=d
* @param string $method HTTP method: GET, HEAD, POST, PUT etc.
* @throws RuntimeException
*/
public function __construct(MyHttpClient $client, $host, $resource, $method) {
$this->http_client = $client;
$this->host = $host;
$this->resource = $resource;
$this->method = $method;
// Get the port for the WWW service
$this->service_port = getservbyname('www', 'tcp');
// Get the IP address for the target host
$this->address = gethostbyname($this->host);
// Create a TCP/IP socket
$this->socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
if (!$this->socket) {
throw new RuntimeException("socket_create() failed: reason: " .
socket_strerror(socket_last_error()));
}
// Set O_NONBLOCK flag
socket_set_nonblock($this->socket);
$this->conn_watcher = $this->http_client->getLoop()
->timer(0, 0., [$this, 'connect']);
}
public function __destruct() {
$this->close();
}
private function freeWatcher(&$w) {
if ($w) {
$w->stop();
$w = null;
}
}
/**
* Deallocates all resources of the request
*/
private function close() {
if ($this->socket) {
socket_close($this->socket);
$this->socket = null;
}
$this->freeWatcher($this->timeout_watcher);
$this->freeWatcher($this->read_watcher);
$this->freeWatcher($this->write_watcher);
$this->freeWatcher($this->conn_watcher);
}
/**
* Initializes a connection on socket
* @return bool
*/
public function connect() {
$loop = $this->http_client->getLoop();
$this->timeout_watcher = $loop->timer($this->timeout, 0., [$this, '_onTimeout']);
$this->write_watcher = $loop->io($this->socket, Ev::WRITE, [$this, '_onWritable']);
return socket_connect($this->socket, $this->address, $this->service_port);
}
/**
* Callback for timeout (EvTimer) watcher
*/
public function _onTimeout(EvTimer $w) {
$w->stop();
$this->close();
}
/**
* Callback which is called when the socket becomes wriable
*/
public function _onWritable(EvIo $w) {
$this->timeout_watcher->stop();
$w->stop();
$in = implode("\r\n", [
"{$this->method} {$this->resource} HTTP/1.1",
"Host: {$this->host}",
'Connection: Close',
]) . "\r\n\r\n";
if (!socket_write($this->socket, $in, strlen($in))) {
trigger_error("Failed writing $in to socket", E_USER_ERROR);
return;
}
$loop = $this->http_client->getLoop();
$this->read_watcher = $loop->io($this->socket,
Ev::READ, [$this, '_onReadable']);
// Continue running the loop
$loop->run();
}
/**
* Callback which is called when the socket becomes readable
*/
public function _onReadable(EvIo $w) {
// recv() up to $chunk_size bytes in non-blocking mode
$ret = socket_recv($this->socket, $out, $this->chunk_size, MSG_DONTWAIT);
if ($ret) {
// Still have data to read. Append the read chunk to the buffer.
$this->buffer .= $out;
} elseif ($ret === 0) {
// All is read
printf("\n<<<<\n%s\n>>>>", rtrim($this->buffer));
fflush(STDOUT);
$w->stop();
$this->close();
return;
}
// Caught EINPROGRESS, EAGAIN, or EWOULDBLOCK
if (in_array(socket_last_error(), static::$e_nonblocking)) {
return;
}
$w->stop();
$this->close();
}
}
/////////////////////////////////////
class MyHttpClient {
/// @var array Instances of MyHttpRequest
private $requests = [];
/// @var EvLoop
private $loop;
public function __construct() {
// Each HTTP client runs its own event loop
$this->loop = new EvLoop();
}
public function __destruct() {
$this->loop->stop();
}
/**
* @return EvLoop
*/
public function getLoop() {
return $this->loop;
}
/**
* Adds a pending request
*/
public function addRequest(MyHttpRequest $r) {
$this->requests []= $r;
}
/**
* Dispatches all pending requests
*/
public function run() {
$this->loop->run();
}
}
/////////////////////////////////////
// Usage
$client = new MyHttpClient();
foreach (range(1, 10) as $i) {
$client->addRequest(new MyHttpRequest($client, 'my-host.local', '/test.php?a=' . $i, 'GET'));
}
$client->run();
Testing
Suppose http://my-host.local/test.php script is printing the dump of $_GET:
<?php
echo 'GET: ', var_export($_GET, true), PHP_EOL;
Then the output of php http-client.php command will be similar to the following:
<<<<
HTTP/1.1 200 OK
Server: nginx/1.10.1
Date: Fri, 02 Dec 2016 12:39:54 GMT
Content-Type: text/html; charset=UTF-8
Transfer-Encoding: chunked
Connection: close
X-Powered-By: PHP/7.0.13-pl0-gentoo
1d
GET: array (
'a' => '3',
)
0
>>>>
<<<<
HTTP/1.1 200 OK
Server: nginx/1.10.1
Date: Fri, 02 Dec 2016 12:39:54 GMT
Content-Type: text/html; charset=UTF-8
Transfer-Encoding: chunked
Connection: close
X-Powered-By: PHP/7.0.13-pl0-gentoo
1d
GET: array (
'a' => '2',
)
0
>>>>
...
(trimmed)
Note, in PHP 5 the sockets extension may log warnings for EINPROGRESS, EAGAIN, and EWOULDBLOCK errno values. It is possible to turn off the logs with
error_reporting(E_ERROR);
Concerning "the Rest" of the Code
I just want to do something like file_get_contents(), but not wait for the request to finish before executing the rest of my code.
The code that is supposed to run in parallel with the network requests can be executed within the callback of an Event timer, or Ev's idle watcher, for instance. You can easily figure it out by watching the samples mentioned above. Otherwise, I'll add another example :)
A: You can do trickery by using exec() to invoke something that can do HTTP requests, like wget, but you must direct all output from the program to somewhere, like a file or /dev/null, otherwise the PHP process will wait for that output.
If you want to separate the process from the apache thread entirely, try something like (I'm not sure about this, but I hope you get the idea):
exec('bash -c "wget -O (url goes here) > /dev/null 2>&1 &"');
It's not a nice business, and you'll probably want something like a cron job invoking a heartbeat script which polls an actual database event queue to do real asynchronous events.
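For illustration, here is a minimal sketch of such a heartbeat script; the events table and its columns are hypothetical:
<?php
// heartbeat.php - run from cron; polls a hypothetical event queue table.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
foreach ($pdo->query('SELECT id, url FROM events WHERE done = 0 LIMIT 10') as $row) {
    file_get_contents($row['url']); // the slow work happens here, outside any web request
    $stmt = $pdo->prepare('UPDATE events SET done = 1 WHERE id = ?');
    $stmt->execute([$row['id']]);
}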
A: I find this package quite useful and very simple: https://github.com/amphp/parallel-functions
<?php
use function Amp\ParallelFunctions\parallelMap;
use function Amp\Promise\wait;
$responses = wait(parallelMap([
'https://google.com/',
'https://github.com/',
'https://stackoverflow.com/',
], function ($url) {
return file_get_contents($url);
}));
It will load all 3 urls in parallel.
You can also use class instance methods in the closure.
For example I use Laravel extension based on this package https://github.com/spatie/laravel-collection-macros#parallelmap
Here is my code:
/**
* Get domains with all needed data
*/
protected function getDomainsWithdata(): Collection
{
return $this->opensrs->getDomains()->parallelMap(function ($domain) {
$contact = $this->opensrs->getDomainContact($domain);
$contact['domain'] = $domain;
return $contact;
}, 10);
}
It loads all needed data in 10 parallel threads and instead of 50 secs without async it finished in just 8 secs.
A: You can use this library: https://github.com/stil/curl-easy
It's pretty straightforward then:
<?php
$request = new cURL\Request('http://yahoo.com/');
$request->getOptions()->set(CURLOPT_RETURNTRANSFER, true);
// Specify function to be called when your request is complete
$request->addListener('complete', function (cURL\Event $event) {
$response = $event->response;
$httpCode = $response->getInfo(CURLINFO_HTTP_CODE);
$html = $response->getContent();
echo "\nDone.\n";
});
// Loop below will run as long as request is processed
$timeStart = microtime(true);
while ($request->socketPerform()) {
printf("Running time: %dms \r", (microtime(true) - $timeStart)*1000);
// Here you can do anything else, while your request is in progress
}
When run in a console, the loop above displays a simple live clock indicating how long the request has been running.
A: As of 2018, Guzzle has become the defacto standard library for HTTP requests, used in several modern frameworks. It's written in pure PHP and does not require installing any custom extensions.
It can do asynchronous HTTP calls very nicely, and even pool them such as when you need to make 100 HTTP calls, but don't want to run more than 5 at a time.
Concurrent request example
use GuzzleHttp\Client;
use GuzzleHttp\Promise;
$client = new Client(['base_uri' => 'http://httpbin.org/']);
// Initiate each request but do not block
$promises = [
'image' => $client->getAsync('/image'),
'png' => $client->getAsync('/image/png'),
'jpeg' => $client->getAsync('/image/jpeg'),
'webp' => $client->getAsync('/image/webp')
];
// Wait on all of the requests to complete. Throws a ConnectException
// if any of the requests fail
$results = Promise\unwrap($promises);
// Wait for the requests to complete, even if some of them fail
$results = Promise\settle($promises)->wait();
// You can access each result using the key provided to the unwrap
// function.
echo $results['image']['value']->getHeader('Content-Length')[0]
echo $results['png']['value']->getHeader('Content-Length')[0]
See http://docs.guzzlephp.org/en/stable/quickstart.html#concurrent-requests
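For the throttling case mentioned above (many calls, at most 5 in flight), Guzzle also ships a Pool class. A minimal sketch, with httpbin.org as a placeholder target:
use GuzzleHttp\Client;
use GuzzleHttp\Pool;
use GuzzleHttp\Psr7\Request;

$client = new Client();

// Lazily generate 100 requests; the pool keeps at most 5 running at once.
$requests = function () {
    for ($i = 0; $i < 100; $i++) {
        yield new Request('GET', 'http://httpbin.org/get?i=' . $i);
    }
};

$pool = new Pool($client, $requests(), [
    'concurrency' => 5,
    'fulfilled'   => function ($response, $index) { /* handle a successful response */ },
    'rejected'    => function ($reason, $index) { /* handle a failed request */ },
]);

$pool->promise()->wait(); // block until every request has completed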
A: /**
* Asynchronously execute/include a PHP file. Does not record the output of the file anywhere.
*
* @param string $filename file to execute, relative to calling script
* @param string $options (optional) arguments to pass to file via the command line
*/
function asyncInclude($filename, $options = '') {
exec("/path/to/php -f {$filename} {$options} >> /dev/null &");
}
A: Here is my own PHP function when I do POST to a specific URL of any page....
Sample usage of my function:
<?php
parse_str("email=myemail@ehehehahaha.com&subject=this is just a test", $vars);
$_POST['email']=$vars['email'];
$_POST['subject']=$vars['subject'];
echo HTTP_Post("http://example.com/mail.php",$_POST);
exit;
?>
<?php
/*********HTTP POST using FSOCKOPEN **************/
// by ArbZ
function HTTP_Post($URL,$data, $referrer="") {
// parsing the given URL
$URL_Info=parse_url($URL);
// Building referrer
if($referrer=="") // if not given use this script as referrer
$referrer=$_SERVER["SCRIPT_URI"];
// making string from $data
foreach($data as $key=>$value)
$values[]="$key=".urlencode($value);
$data_string=implode("&",$values);
// Find out which port is needed - if not given use standard (=80)
if(!isset($URL_Info["port"]))
$URL_Info["port"]=80;
// building POST-request: HTTP_HEADERs
$request.="POST ".$URL_Info["path"]." HTTP/1.1\n";
$request.="Host: ".$URL_Info["host"]."\n";
$request.="Referer: $referer\n";
$request.="Content-type: application/x-www-form-urlencoded\n";
$request.="Content-length: ".strlen($data_string)."\n";
$request.="Connection: close\n";
$request.="\n";
$request.=$data_string."\n";
$fp = fsockopen($URL_Info["host"],$URL_Info["port"]);
fputs($fp, $request);
$result = '';
while(!feof($fp)) {
$result .= fgets($fp, 128);
}
fclose($fp); //$eco = nl2br();
function getTextBetweenTags($string, $tagname) {
$pattern = "/<$tagname ?.*>(.*)<\/$tagname>/";
preg_match($pattern, $string, $matches);
return $matches[1];
}
//STORE THE FETCHED CONTENTS to a VARIABLE, because its way better and fast...
$str = $result;
$txt = getTextBetweenTags($str, "span"); $eco = $txt; $result = explode("&",$result);
return $result[1];
}
A: Here is a working example, just run it and open storage.txt afterwards, to check the magical result
<?php
function curlGet($target){
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $target);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$result = curl_exec ($ch);
curl_close ($ch);
return $result;
}
// Its the next 3 lines that do the magic
ignore_user_abort(true);
header("Connection: close"); header("Content-Length: 0");
echo str_repeat("s", 100000); flush();
$i = $_GET['i'];
if(!is_numeric($i)) $i = 1;
if($i > 4) exit;
if($i == 1) file_put_contents('storage.txt', '');
file_put_contents('storage.txt', file_get_contents('storage.txt') . time() . "\n");
sleep(5);
curlGet($_SERVER['HTTP_HOST'] . $_SERVER['SCRIPT_NAME'] . '?i=' . ($i + 1));
curlGet($_SERVER['HTTP_HOST'] . $_SERVER['SCRIPT_NAME'] . '?i=' . ($i + 1));
A: ReactPHP async http client
https://github.com/shuchkin/react-http-client
Install via Composer
$ composer require shuchkin/react-http-client
Async HTTP GET
// get.php
$loop = \React\EventLoop\Factory::create();
$http = new \Shuchkin\ReactHTTP\Client( $loop );
$http->get( 'https://tools.ietf.org/rfc/rfc2068.txt' )->then(
function( $content ) {
echo $content;
},
function ( \Exception $ex ) {
echo 'HTTP error '.$ex->getCode().' '.$ex->getMessage();
}
);
$loop->run();
Run php in CLI-mode
$ php get.php
A: Symfony HttpClient is asynchronous https://symfony.com/doc/current/components/http_client.html.
For example you can
use Symfony\Component\HttpClient\HttpClient;
$client = HttpClient::create();
$response1 = $client->request('GET', 'https://website1');
$response2 = $client->request('GET', 'https://website1');
$response3 = $client->request('GET', 'https://website1');
//these 3 calls will return immediately
//but the requests will fire to the website1 webserver
$response1->getContent(); //this will block until content is fetched
$response2->getContent(); //same
$response3->getContent(); //same
A: Well, the timeout can be set in milliseconds,
see "CURLOPT_CONNECTTIMEOUT_MS" in http://www.php.net/manual/en/function.curl-setopt
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "245"
}
|
Q: What are some great online database modeling tools? What's your favorite open source database design/modeling tool?
I'm looking for one that supports several databases, especially Firebird SQL but I can't find one on Google.
A: Do you mean design as in 'graphic representation of tables' or just plain old 'engineering kind of design'. If it's the latter, use FlameRobin, version 0.9.0 has just been released.
If it's the former, then use DBDesigner. Yup, that uses Java.
Or maybe you meant something more like MS Access. Then Kexi should be right for you.
A: I've used DBDesigner before. It is an open source tool. You might check that out. Not sure if it fits your needs.
Best of luck!
A: S.Lott inserted a comment, but it should be an answer: see the same question.
EDIT
Since it wasn't as obvious as I intended it to be, here follows a verbatim copy of S.Lott's answer in the other question:
I'm a big fan of ARGO UML from Tigris.org. Draws nice pictures
using standard UML notation. It does some code generation, but mostly
Java classes, which isn't SQL DDL, so that may not be close enough to
what you want to do.
You can look at the Data Modelling Tools list and see if anything
there is better than Argo UML. Many of the items on this list are
free or cheap.
Also, if you're using Eclipse or NetBeans, there are many
design plug-ins, some of which may have the features you're looking
for.
A: The DB Designer Fork project claims that it can generate FireBird sql scripts.
A: I like Clay Eclipse plugin. I've only used it with MySQL, but it claims Firebird support.
A: You may want to look at IBExpert Personal Edition. While not open source, this is a very good tool for designing, building, and administering Firebird and InterBase databases.
The Personal Edition is free, but some of the more advanced features are not available. Still, even without the slick extras, the free version is very powerful.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "48"
}
|
Q: SQL Server Adapter for Rails Trying to find the sqlserver adapter for rails on windows.
I have tried getting it from (without luck):
gem install activerecord-sqlserver-adapter --source=http://gems.rubyonrails.org
Where else can I get this gem?
UPDATE:
Make sure to run the command prompt as the administrator. Right click on the command prompt and click "Run as administrator".
A: I just ran the exact command line you did, and the gem installs fine.
Questions:
*
*Are you running Vista?
*
*If so, make sure you run your command prompt with administrative access, so it can write to the gems folder
*Do you have the latest version of gems?
*
*Run gem --version to find out what you have, if it's not 1.2.0, then run gem update --system to get the latest
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: C# HttpWebRequest command to get directory listing I need a short code snippet to get a directory listing from an HTTP server.
Thanks
A: Basic understanding:
Directory listings are just HTML pages generated by a web server.
Each web server generates these HTML pages in its own way because there is no standard way for a web server to list these directories.
The best way to get a directory listing, is to simply do an HTTP request to the URL you'd like the directory listing for and to try to parse and extract all of the links from the HTML returned to you.
To parse the HTML links please try to use the HTML Agility Pack.
Directory Browsing:
The web server you'd like to list directories from must have directory browsing turned on to get this HTML representation of the files in its directories. So you can only get the directory listing if the HTTP server wants you to be able to.
A quick example of the HTML Agility Pack:
HtmlWeb web = new HtmlWeb();
HtmlDocument doc = web.Load(strURL);
foreach (HtmlNode link in doc.DocumentNode.SelectNodes("//a[@href]"))
{
    HtmlAttribute att = link.Attributes["href"];
    // do something with att.Value
}
Cleaner alternative:
If it is possible in your situation, a cleaner method is to use an intended protocol for directory listings, like the File Transfer Protocol (FTP), SFTP (FTP like over SSH) or FTPS (FTP over SSL).
What if directory browsing is not turned on:
If the web server does not have directory browsing turned on, then there is no easy way to get the directory listing.
The best you could do in this case is to start at a given URL, follow all HTML links on the same page, and try to build a virtual listing of directories yourself based on the relative paths of the resources on these HTML pages. This will not give you a complete listing of what files are actually on the web server though.
A: I just modified the above and found this works best:
public static class GetallFilesFromHttp
{
public static string GetDirectoryListingRegexForUrl(string url)
{
if (url.Equals("http://ServerDirPath/"))
{
return "\\\"([^\"]*)\\\"";
}
throw new NotSupportedException();
}
public static void ListDiractory()
{
string url = "http://ServerDirPath/";
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
string html = reader.ReadToEnd();
Regex regex = new Regex(GetDirectoryListingRegexForUrl(url));
MatchCollection matches = regex.Matches(html);
if (matches.Count > 0)
{
foreach (Match match in matches)
{
if (match.Success)
{
Console.WriteLine(match.ToString());
}
}
}
}
Console.ReadLine();
}
}
}
A: The following code works well for me when I do not have access to the ftp server:
public static string[] GetFiles(string url)
{
List<string> files = new List<string>(500);
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
string html = reader.ReadToEnd();
Regex regex = new Regex("<a href=\".*\">(?<name>.*)</a>");
MatchCollection matches = regex.Matches(html);
if (matches.Count > 0)
{
foreach (Match match in matches)
{
if (match.Success)
{
string[] matchData = match.Groups[0].ToString().Split('\"');
files.Add(matchData[1]);
}
}
}
}
}
return files.ToArray();
}
However, when I do have access to the ftp server, the following code works much faster:
public static string[] getFtpFolderItems(string ftpURL)
{
FtpWebRequest request = (FtpWebRequest)WebRequest.Create(ftpURL);
request.Method = WebRequestMethods.Ftp.ListDirectory;
//You could add Credentials, if needed
//request.Credentials = new NetworkCredential("anonymous", "password");
FtpWebResponse response = (FtpWebResponse)request.GetResponse();
Stream responseStream = response.GetResponseStream();
StreamReader reader = new StreamReader(responseStream);
return reader.ReadToEnd().Split("\r\n".ToCharArray(), StringSplitOptions.RemoveEmptyEntries);
}
A: A few important considerations before the code:
*
*The HTTP Server has to be configured to allow directories listing for the directories you want;
*Because directory listings are normal HTML pages there is no standard that defines the format of a directory listing;
*Due to consideration 2 you are in the land where you have to put specific code for each server.
My choice is to use regular expressions. This allows for rapid parsing and customization. You can get specific regular expressions pattern per site and that way you have a very modular approach. Use an external source for mapping URL to regular expression patterns if you plan to enhance the parsing module with new sites support without changing the source code.
Example to print directory listing from http://www.ibiblio.org/pub/
namespace Example
{
using System;
using System.Net;
using System.IO;
using System.Text.RegularExpressions;
public class MyExample
{
public static string GetDirectoryListingRegexForUrl(string url)
{
if (url.Equals("http://www.ibiblio.org/pub/"))
{
return "<a href=\".*\">(?<name>.*)</a>";
}
throw new NotSupportedException();
}
public static void Main(String[] args)
{
string url = "http://www.ibiblio.org/pub/";
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
string html = reader.ReadToEnd();
Regex regex = new Regex(GetDirectoryListingRegexForUrl(url));
MatchCollection matches = regex.Matches(html);
if (matches.Count > 0)
{
foreach (Match match in matches)
{
if (match.Success)
{
Console.WriteLine(match.Groups["name"]);
}
}
}
}
}
Console.ReadLine();
}
}
}
A: Thanks for the great post. For me the pattern below worked better:
<A HREF="\S+">(?<name>\S+)</A>
I also tested it at http://regexhero.net/tester.
To use it in your C# code, you have to add more backslashes (\) before any backslash and escape the double quotes in the pattern. For instance, in the GetDirectoryListingRegexForUrl method you should use something like this:
return "<A HREF=\"\\S+\">(?<name>\\S+)</A>";
Cheers!
A: You can't, unless the particular directory you want has directory listing enabled and no default file (usually index.htm, index.html or default.html but always configurable). Only then will you be presented with a directory listing, which will usually be marked up with HTML and require parsing.
A: You can alternatively set the server up for WebDAV.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: .net COM+ object that returns IDataReader I've created a .net class derived from ServicedComponent and registered it with COM+. The interface that this component implements has a method that returns an IDataReader. When I call the serviced component from my client application everything works: I can call the method that returns the IDataReader no problem, but as soon as I call a method on the returned object I get the exception:
"System.Runtime.Remoting.RemotingException : This remoting proxy has no channel sink which means either the server has no registered server channels that are listening, or this application has no suitable client channel to talk to the server."
I hacked my code a fair bit and realized that it would work if I created my own implementation of IDataReader that was serializable (has the Serializable attribute). If the implementation derives from MarshalByRefObject it fails.
So, is it possible to return standard .net objects by reference from COM+ ServicedComponents and if so what do I need to do to achieve it?
A: When your COM+ client and COM+ component are both managed, the CLR tries to be "smart" and attempts to switch to using .Net remoting as a communication channel.
To make this scenario work, you can register a remoting channel for your object that implements IDataReader.
Unfortunately, I have no access to the code where I did this a couple of years ago, so I can't post a sample. :-(
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: WPF Diagramming Library What free or open source WPF diagramming libraries have you used before? I'm working on my thesis and have no money to pay for the commercial alternatives.
Valid answers should support undo/redo, exporting to XML and hopefully good documentation.
I'm building an open source UML / database diagramming tool.
A: This is a nifty diagramming control for WPF:
http://www.codeproject.com/KB/WPF/SpiderControl.aspx
A: sukram has an excellent series on CodeProject... it's a MUST READ!
*
*Part 1
*Part 2
*Part 3
*Part 4
A: What kind of diagram drawing are you looking for?
WPF has a great set of basic controls which support most common drawing primitives like spline curves, lines, polylines, arcs, etc.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: What exactly are DLL files, and how do they work? How exactly do DLL files work? There seems to be an awful lot of them, but I don't know what they are or how they work.
So, what's the deal with them?
A: http://support.microsoft.com/kb/815065
A DLL is a library that contains code and data that can be used by more than one program at the same time. For example, in Windows operating systems, the Comdlg32 DLL performs common dialog box related functions. Therefore, each program can use the functionality that is contained in this DLL to implement an Open dialog box. This helps promote code reuse and efficient memory usage.
By using a DLL, a program can be modularized into separate components. For example, an accounting program may be sold by module. Each module can be loaded into the main program at run time if that module is installed. Because the modules are separate, the load time of the program is faster, and a module is only loaded when that functionality is requested.
Additionally, updates are easier to apply to each module without affecting other parts of the program. For example, you may have a payroll program, and the tax rates change each year. When these changes are isolated to a DLL, you can apply an update without needing to build or install the whole program again.
http://en.wikipedia.org/wiki/Dynamic-link_library
A: DLL is a file extension known as the "dynamic link library" file format, used for holding multiple code routines and procedures for Windows programs. Software and games run on the basis of DLL files; DLL files were created so that multiple applications could use their information at the same time.
If you want more information about DLL files or are facing any errors, read the following post.
https://www.bouncegeek.com/fix-dll-errors-windows-586985/
A: What is a DLL?
DLL files are binary files that can contain executable code and resources like images, etc. Unlike applications, these cannot be directly executed, but an application will load them as and when they are required (or all at once during startup).
Are they important?
Most applications will load the DLL files they require at startup. If any of these are not found the system will not be able to start the process at all.
DLL files might require other DLL files
In the same way that an application requires a DLL file, a DLL file might be dependent on other DLL files itself. If one of these DLL files in the chain of dependency is not found, the application will not load. This is debugged easily using any dependency walker tools, like Dependency Walker.
There are so many of them in the system folders
Most of the system functionality is exposed to a user program in the form of DLL files as they are a standard form of sharing code / resources. Each functionality is kept separately in different DLL files so that only the required DLL files will be loaded and thus reduce the memory constraints on the system.
Installed applications also use DLL files
DLL files also becomes a form of separating functionalities physically as explained above. Good applications also try to not load the DLL files until they are absolutely required, which reduces the memory requirements. This too causes applications to ship with a lot of DLL files.
DLL Hell
However, system upgrades can break other programs when there is a version mismatch between the shared DLL files and the program that requires them. System checkpoints and DLL cache, etc. have been the initiatives from M$ to solve this problem. The .NET platform might not face this issue at all.
How do we know what's inside a DLL file?
You have to use an external tool like DUMPBIN or Dependency Walker, which will not only show what publicly visible functions (known as exports) are contained inside the DLL file, but also what other DLL files it requires and which exports from those DLL files this DLL file is dependent upon.
How do we create / use them?
Refer the programming documentation from your vendor. For C++, refer to LoadLibrary in MSDN.
A: What is a DLL?
Dynamic Link Libraries (DLL)s are like EXEs but they are not directly executable. They are similar to .so files in Linux/Unix. That is to say, DLLs are MS's implementation of shared libraries.
DLLs are so much like an EXE that the file format itself is the same. Both EXE and DLLs are based on the Portable Executable (PE) file format. DLLs can also contain COM components and .NET libraries.
What does a DLL contain?
A DLL contains functions, classes, variables, UIs and resources (such as icons, images, files, ...) that an EXE, or other DLL uses.
Types of libraries:
On virtually all operating systems, there are 2 types of libraries. Static libraries and dynamic libraries. In windows the file extensions are as follows: Static libraries (.lib) and dynamic libraries (.dll). The main difference is that static libraries are linked to the executable at compile time; whereas dynamic linked libraries are not linked until run-time.
More on static and dynamic libraries:
You don't normally see static libraries though on your computer, because a static library is embedded directly inside of a module (EXE or DLL). A dynamic library is a stand-alone file.
A DLL can be changed at any time and is only loaded at runtime when an EXE explicitly loads the DLL. A static library cannot be changed once it is compiled within the EXE.
A DLL can be updated individually without updating the EXE itself.
Loading a DLL:
A program loads a DLL at startup, via the Win32 API LoadLibrary, or when it is a dependency of another DLL. A program uses the GetProcAddress to load a function or LoadResource to load a resource.
Further reading:
Please check MSDN or Wikipedia for further reading. Also the sources of this answer.
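To make the run-time loading concrete, here is a minimal C sketch of LoadLibrary plus GetProcAddress (user32.dll and MessageBoxA are just well-known examples chosen for illustration, not part of the answer above):
#include <windows.h>
/* Function-pointer type matching the MessageBoxA export */
typedef int (WINAPI *MessageBoxAFn)(HWND, LPCSTR, LPCSTR, UINT);
int main(void)
{
    /* Load the DLL at run time */
    HMODULE hLib = LoadLibraryA("user32.dll");
    if (hLib == NULL)
        return 1;
    /* Look up an exported function by name */
    MessageBoxAFn pMessageBoxA = (MessageBoxAFn)GetProcAddress(hLib, "MessageBoxA");
    if (pMessageBoxA != NULL)
        pMessageBoxA(NULL, "Hello from a dynamically loaded DLL!", "Demo", MB_OK);
    FreeLibrary(hLib); /* unload when no longer needed */
    return 0;
}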
A: DLLs (Dynamic Link Libraries) contain resources used by one or more applications or services. They can contain classes, icons, strings, objects, interfaces, and pretty much anything a developer would need to store except a UI.
A: According to Microsoft
(DLL) Dynamic link libraries are files that contain data, code, or resources needed for the running of applications. These are files that are created by the windows ecosystem and can be shared between two or more applications.
When a program or software runs on Windows, much of how the application works depends on the DLL files of the program. For instance, if a particular application had several modules, then how each module interacts with each other is determined by the Windows DLL files.
If you want detailed explanation, check these useful resources
What are DLL files, About DLL files
A: Let’s say you are making an executable that uses some functions found in a library.
If the library you are using is static, the linker will copy the object code for these functions directly from the library and insert them into the executable.
Now if this executable is run it has everything it needs, so the executable loader just loads it into memory and runs it.
If the library is dynamic the linker will not insert object code but rather it will insert a stub which basically says this function is located in this DLL at this location.
Now if this executable is run, bits of the executable are missing (i.e. the stubs), so the loader goes through the executable fixing up the missing stubs. Only after all the stubs have been resolved will the executable be allowed to run.
To see this in action delete or rename the DLL and watch how the loader will report a missing DLL error when you try to run the executable.
Hence the name Dynamic Link Library: part of the linking process is done dynamically at run time by the executable loader.
On a final note, if you don't link to the DLL then no stubs will be inserted by the linker, but Windows still provides the GetProcAddress API that allows you to load and execute the DLL function entry point long after the executable has started.
A: DLLs (dynamic link libraries) and SLs (shared libraries, equivalent under UNIX) are just libraries of executable code which can be dynamically linked into an executable at load time.
Static libraries are inserted into an executable at compile time and are fixed from that point. They increase the size of the executable and cannot be shared.
Dynamic libraries have the following advantages:
1/ They are loaded at run time rather than compile time so they can be updated independently of the executable (all those fancy windows and dialog boxes you see in Windows come from DLLs so the look-and-feel of your application can change without you having to rewrite it).
2/ Because they're independent, the code can be shared across multiple executables - this saves memory since, if you're running 100 apps with a single DLL, there may only be one copy of the DLL in memory.
Their main disadvantage is advantage #1 - having DLLs change independently of your application may cause your application to stop working or start behaving in a bizarre manner. DLL versioning tends not to be managed very well under Windows and this leads to the quaintly-named "DLL Hell".
A: DLL files contain an Export Table which is a list of symbols which can be looked up by the calling program. The symbols are typically functions with the stdcall calling convention (__stdcall). The export table also contains the address of the function.
With this information, the calling program can then call the functions within the DLL even though it did not have access to the DLL at compile time.
Introducing Dynamic Link Libraries has some more information.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "303"
}
|
Q: Grails with YUI table example Does anyone have an example using the table object in YUI library. More specifically, I'd like to dynamically load it from JSON or SQL?
http://www.grails.org/YUI+Plugin
A: I just found this example. Will be trying it out this weekend. Looks like exactly what I was looking for.
http://marceloverdijk.blogspot.com/2008/06/grails-yui-datatable-example.html
A: From the YUI documentation: DataTable examples. In particular, here's an example using JSON and XmlHttpRequest.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124571",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Java equals(): to reflect or not to reflect This question is specifically related to overriding the equals() method for objects with a large number of fields. First off, let me say that this large object cannot be broken down into multiple components without violating OO principles, so telling me "no class should have more than x fields" won't help.
Moving on, the problem arose when I forgot to check one of the fields for equality. Therefore, my equals method was incorrect. Then I thought to use reflection:
--code removed because it was too distracting--
The purpose of this post isn't necessarily to refactor the code (this isn't even the code I am using), but instead to get input on whether or not this is a good idea.
Pros:
*
*If a new field is added, it is automatically included
*The method is much more terse than 30 if statements
Cons:
*
*If a new field is added, it is automatically included, sometimes this is undesirable
*Performance: This has to be slower, I don't feel the need to break out a profiler
*Whitelisting certain fields to ignore in the comparison is a little ugly
Any thoughts?
A: Use Eclipse, FFS!
Delete the hashCode and equals methods you have.
Right click on the file.
Select Source->Generate hashcode and equals...
Done! No more worries about reflection.
Repeat for each field added, you just use the outline view to delete your two methods, and then let Eclipse autogenerate them.
A: If you do go the reflection approach, EqualsBuilder is still your friend:
public boolean equals(Object obj) {
return EqualsBuilder.reflectionEquals(this, obj);
}
A: Here's a thought if you're worried about:
1/ Forgetting to update your big series of if-statements for checking equality when you add/remove a field.
2/ The performance of doing this in the equals() method.
Try the following:
a/ Revert back to using the long sequence of if-statements in your equals() method.
b/ Have a single function which contains a list of the fields (in a String array) and which will check that list against reality (i.e., the reflected fields). It will throw an exception if they don't match.
c/ In your constructor for this object, have a synchronized run-once call to this function (similar to a singleton pattern). In other words, if this is the first object constructed by this class, call the checking function described in (b) above.
The exception will make it immediately obvious when you run your program if you haven't updated your if-statements to match the reflected fields; then you fix the if-statements and update the field list from (b) above.
Subsequent construction of objects will not do this check and your equals() method will run at its maximum possible speed.
Try as I might, I haven't been able to find any real problems with this approach (greater minds may exist on StackOverflow) - there's an extra condition check on each object construction for the run-once behaviour but that seems fairly minor.
If you try hard enough, you could still get your if-statements out of step with your field-list and reflected fields but the exception will ensure your field list matches the reflected fields and you just make sure you update the if-statements and field list at the same time.
A: If you did want to whitelist for performance reasons, consider using an annotation to indicate which fields to compare. Also, this implementation won't work if your fields don't have good implementations for equals().
P.S. If you go this route for equals(), don't forget to do something similar for hashCode().
P.P.S. I trust you already considered HashCodeBuilder and EqualsBuilder.
A: You can always annotate the fields you do/do not want in your equals method, that should be a straightforward and simple change to it.
Performance is obviously related to how often the object is actually compared, but a lot of frameworks use hash maps, so your equals may be being used more than you think.
Also, speaking of hash maps, you have the same issue with the hashCode method.
Finally, do you really need to compare all of the fields for equality?
A: You have a few bugs in your code.
*
*You cannot assume that this and obj are the same class. Indeed, it's explicitly allowed for obj to be any other class. You could start with if ( ! obj instanceof myClass ) return false; however this is still not correct because obj could be a subclass of this with additional fields that might matter.
*You have to support null values for obj with a simple if ( obj == null ) return false;
*You can't treat null and empty string as equal. Instead treat null specially. Simplest way here is to start by comparing Field.get(obj) == Field.get(this). If they are both equal or both happen to point to the same object, this is fast. (Note: This is also an optimization, which you need since this is a slow routine.) If this fails, you can use the fast if ( Field.get(obj) == null || Field.get(this) == null ) return false; to handle cases where exactly one is null. Finally you can use the usual equals().
*You're not using foundMismatch
I agree with Hank that HashCodeBuilder and EqualsBuilder are a better way to go. It's easy to maintain, not a lot of boilerplate code, and you avoid all these issues.
A: You could use Annotations to exclude fields from the check
e.g.
@IgnoreEquals
String fieldThatShouldNotBeCompared;
And then of course you check the presence of the annotation in your generic equals method.
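A minimal sketch of such a generic equals method (the @IgnoreEquals annotation and this code are illustrative, not from any library; the annotation would need @Retention(RetentionPolicy.RUNTIME) for isAnnotationPresent to see it):
import java.lang.reflect.Field;
@Override
public boolean equals(Object obj) {
    if (this == obj) return true;
    if (obj == null || getClass() != obj.getClass()) return false;
    try {
        for (Field f : getClass().getDeclaredFields()) {
            if (f.isAnnotationPresent(IgnoreEquals.class)) continue; // skip excluded fields
            f.setAccessible(true);
            Object mine = f.get(this);
            Object theirs = f.get(obj);
            if (mine == null ? theirs != null : !mine.equals(theirs)) return false;
        }
    } catch (IllegalAccessException e) {
        throw new RuntimeException(e);
    }
    return true;
}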
A: If you have access to the names of the fields, why don't you make it a standard that fields you don't want to include always start with "local" or "nochk" or something like that.
Then you blacklist all fields that begin with this (code is not so ugly then).
I don't doubt it's a little slower. You need to decide whether you want to swap ease-of-updates against execution speed.
A: Take a look at org.apache.commons.EqualsBuilder:
http://commons.apache.org/proper/commons-lang/javadocs/api-3.2/org/apache/commons/lang3/builder/EqualsBuilder.html
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: compression library for c and php To save network traffic I'd like to compress my data. The only trick is that the client is a C application and the server is PHP. I'm looking for an open source compression library that's available for both C and PHP.
I guess I could write an external C application to decompress my data, but I'm trying to avoid spawning extra processes on the server.
If you know of any, please post it!
A: gzip is one of the most popular (if not the most popular) compression schemes. PHP has supported it since version 4. If you need even better compression, consider bzip2.
A: Zlib provides C APIs, and is part of the PHP functional API as well.
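As a rough sketch of how the two ends could fit together (the transport itself is up to you): the C client can use zlib's one-shot compress() call, and PHP's gzuncompress() understands the resulting zlib-format data directly.
#include <zlib.h>
#include <stdio.h>
#include <string.h>
int main(void)
{
    const char *msg = "payload to send to the PHP server";
    uLong srcLen = (uLong)strlen(msg);
    uLongf dstLen = compressBound(srcLen); /* worst-case compressed size */
    unsigned char out[256];
    /* one-shot zlib compression */
    if (compress(out, &dstLen, (const unsigned char *)msg, srcLen) != Z_OK)
        return 1;
    fwrite(out, 1, dstLen, stdout); /* e.g. send this as the HTTP request body */
    return 0;
}
On the PHP side, decompression is then a one-liner:
$data = gzuncompress(file_get_contents('php://input'));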
A: Php supports zlib compression and for the c compression you could use zlib, but you should think again if you want to compress network communication - the load will probably be too much for your servers.
A: ZLIB
Here's the page on accessing zlib from PHP.
A: You can probably instruct your web server to compress the data for you at the HTTP level, and then you won't have to worry about it on either end. For Apache, have a look at mod_deflate.
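For example, a minimal mod_deflate setup in httpd.conf looks something like this (the module path varies by install):
LoadModule deflate_module modules/mod_deflate.so
AddOutputFilterByType DEFLATE text/html text/plain text/xml application/json
The client only needs to send an Accept-Encoding: gzip header and decompress the response body.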
A: It depends on what data you are transferring. If it is text, use mod_gzip on Apache (I am assuming you are using it). I have seen around 70% text compression with this. But if you are dealing with binary data, like images and videos, use media formats that are already compressed.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: I know Perl 5. What are the advantages of learning Perl 6, rather than moving to Python? Coming from a Perl 5 background, what are the advantages of moving to Perl 6 or Python?
A: Perl is generally better than Python for quick one-liners, especially those involving text/regular expressions.
http://novosial.org/perl/one-liner/
A: Python has one huge advantage: it's implemented, there's a rather stable compiler for it.
Perl 6 (renamed Raku in 2019) is a rather visionary language, with a stable compiler and test specification released in 2015. It has a set of very cool features, among them: junctions, grammars (yes, you can write full parsers with Raku "regexes"), unicode handling at the grapheme level, and lazy lists.
In your particular case when you know Perl 5 you'll get familiar with the Raku (née Perl 6) syntax very quickly.
For a more comprehensive list of what cool features Raku has, see https://raku.org/ or alternatively, the FAQ.
A: Python has a major advantage of being available in a production-ready format today.
Python has Jython and IronPython, if you need to work closely with Java or the .net clr.
Perl 6 has the advantages of being based on the same principles as Perl (1-5); If you like Perl, you'll like Perl 6 for the same reasons. (There's more than one way to do it, etc.)
Perl 6 also has an advantage of being only partially implemented: If you want to hack on language internals or help to define the standard libraries, this is a great time to get started in Perl 6.
Edit: (2011) It's still a great time to hack on the Perl6 internals, but there is now a much more mature, usable Perl6 distribution, Rakudo Star. If you want to use Perl6 today, that's a great choice.
A: You have not said why you want to move away from Perl*. If my crystal ball is functioning today then it is because you do not fully know the language and so it frustrates you.
Stick with Perl and study the language well. If you do then one day you will be a guru and know why your question is irrelevant. Enlightenment comes to those who seek it.
*
*You called it "Perl5" but there is no such language. :P
A: IMO Python's regexing, especially when you try to represent something like Perl's /e operator as in s/whatever/somethingelse/e, becomes quite slow. So if in doubt, you may need to stay with Perl 5 :-)
A: There is no advantage to be gained by switching from Perl to Python. There is also no advantage to be gained by switching from Python to Perl. They are both equally capable. Choose your tools based on what you know and the problem you are trying to solve rather than on some sort of notion that one is somehow inherently better than the other.
The only real advantage is if you are switching from a language you don't know to a language you do know, in which case your productivity will likely go up.
A: Python does not have Junctions. In fact I think only Perl has Junctions so far. :-)
A: In my opinion, Python's syntax is much cleaner, simpler, and consistent. You can define nested data structures the same everywhere, whether you plan to pass them to a function (or return them from one) or use them directly. I like Perl a lot, but as soon as I learned enough Python to "get" it, I never turned back.
In my experience, random snippets of Python tend to be more readable than random snippets of Perl. The difference really comes down to the culture around each language, where Perl users often appreciate cleverness while Python users more often prefer clarity. That's not to say you can't have clear Perl or devious Python, but those are much less common.
Both are fine languages and solve many of the same problems. I personally lean toward Python, if for no other reason in that it seems to be gaining momentum while Perl seems to be losing users to Python and Ruby.
Note the abundance of weasel words in the above. Honestly, it's really going to come down to personal preference.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
}
|
Q: "SELECT * FROM users WHERE id IN ( )" == FAIL I have a function that I use called sqlf(), it emulates prepared statements. For instance I can do things like:
$sql = sqlf("SELECT * FROM Users WHERE name= :1 AND email= :2",'Big "John"','bj@example.com') ;
For various reasons, I cannot use prepared statements, but I would like to emulate them. The problem that I run into is with queries like
$sql = sqlf("SELECT * FROM Users WHERE id IN (:1)",array(1,2,3) );
My code works, but it fails with empty arrays, e.g. the following throws a mysql error:
SELECT * FROM Users WHERE id IN ();
Does anyone have any suggestions? How should I translate and empty array into sql that can be injected into an IN clause? Substituting NULL will not work.
A: Null is the only value that you can guarantee is not in the set. How come it is not an option? Anything else can be seen as part of the potential set, they are all values.
A: I would say that passing an empty array as argument for an IN() clause is an error. You have control over the syntax of the query when calling this function, so you should also be responsible for the inputs. I suggest checking for emptiness of the argument before calling the function.
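A minimal sketch of such a check, using the sqlf() helper from the question:
$ids = array(); // may legitimately come back empty
if (count($ids) === 0) {
    $rows = array(); // an empty IN() list can never match, so skip the query entirely
} else {
    $rows = sqlf("SELECT * FROM Users WHERE id IN (:1)", $ids);
}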
A: Is there a possibility that you could detect empty arrays within sqlf and change the SQL to not have the IN clause?
Alternatively, you could postprocess the SQL before passing it to the "real" SQL executor so that "IN ()" sections are removed, although you'd have to do all sorts of trickery to see what other elements had to be removed so that:
SELECT * FROM Users WHERE id IN ();
SELECT * FROM Users WHERE a = 7 AND id IN ();
SELECT * FROM Users WHERE id IN () OR a = 9;
would become:
SELECT * FROM Users;
SELECT * FROM Users WHERE a = 7;
SELECT * FROM Users WHERE a = 9;
That could get tricky depending on the complexity of your SQL - you'd basically need a full SQL language interpreter.
A: If your prepare-like function simply replaces :1 with the equivalent argument, you might try having your query contain something like (':1'), so that if :1 is empty, it resolves to (''), which will not cause a parse error (however it may cause undesirable behavior, if that field can have blank values -- although if it's an int, this isn't a problem). It's not a very clean solution, however, and you're better off detecting whether the array is empty and simply using an alternate version of the query that lacks the "IN (:1)" component. (If that's the only logic in the WHERE clause, then presumably you don't want to select everything, so you would simply not execute the query.)
A: I would use zero, assuming your "id" column is a pseudokey that is assigned numbers automatically.
As far as I know, automatic key generators in most brands of database begin at 1. This is a convention, not a requirement (auto-numbered fields are not defined in standard SQL). But this convention is common enough that you can probably rely on it.
Since zero probably never appears in your "id" column, you can use this value in the IN() predicate when your input array is empty, and it'll never match.
A: The only way I can think to do it would be to make your sqlf() function scan to see if a particular substitution comes soon after an "IN (" and then if the passed variable is an empty array, put in something which you know for certain won't be in that column: "m,znmzcb~~1", for example. It's a hack, for sure but it would work.
If you wanted to take it even further, could you change your function so that there are different types of substitutions? It looks like your function scans for a colon followed by a number. Why not add another type, like an @ followed by a number, which will be smart to empty arrays (this saves you from having to scan and guess if the variable is supposed to be an array).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124606",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: iterate over classes ie. beans for asp.net Let's say I have a class that stores user information, complete with getters and setters, and it is populated with data from an XML file. How would I iterate over all of the instances of that class like you would do with Java beans and tag libraries?
A: For outputting formatted HTML, you have a few choices. What I would probably do is make a property on the code-behind that accesses the collection of objects you want to iterate over. Then, I'd write the logic for iterating and formatting them on the .aspx page itself. For example, the .aspx page:
[snip]
<body>
<form id="form1" runat="server">
<% Somethings.ForEach(s => { %>
<h1><%=s.Name %></h1>
<h2><%=s.Id %></h2>
<% }); %>
</form>
</body>
</html>
And then the code-behind:
[snip]
public partial class _Default : System.Web.UI.Page
{
protected List<Something> Somethings { get; private set; }
protected void Page_Load(object sender, EventArgs e)
{
Somethings = GetSomethings(); // Or whatever populates the collection
}
[snip]
You could also look at using a repeater control and set the DataSource to your collection. It's pretty much the same idea as the code above, but I think this way is clearer (in my opinion).
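A rough sketch of the Repeater approach (the control ID and bound property names here are illustrative):
<asp:Repeater ID="SomethingsRepeater" runat="server">
    <ItemTemplate>
        <h1><%# Eval("Name") %></h1>
        <h2><%# Eval("Id") %></h2>
    </ItemTemplate>
</asp:Repeater>
And in the code-behind, after populating the collection:
SomethingsRepeater.DataSource = Somethings;
SomethingsRepeater.DataBind();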
A: This assumes you can acquire all instances of your class and add them to a Generic List.
List<YourClass> myObjects = SomeMagicMethodThatGetsAllInstancesOfThatClassAndAddsThemtoTheCollection();
foreach (YourClass instance in myObjects)
{
    Response.Write(instance.PropertyName.ToString());
}
If you don't want to specify each property name you could use Reflection, (see PropertyInfo) and do it that way. Again, not sure if this is what your intent was.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Computing pseudo-inverse of a matrix in C++ I'm looking to compute the Moore-Penrose pseudo-inverse of a matrix in C++, can someone point me to a library implementation or a numerical recipe?
Thanks!
A: You need 'Singular Value Decomposition', from which you can find a C implementation here from Numerical Recipes in C.
This other site describes how to use singular value decomposition to calculate the pseudo-inverse.
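If an external library is acceptable, here is a minimal C++ sketch using Eigen (an assumption on my part, not mentioned above) that builds the pseudo-inverse from the SVD, inverting only singular values above a small tolerance:
#include <Eigen/Dense>
// Moore-Penrose pseudo-inverse: A+ = V * S+ * U^T
Eigen::MatrixXd pseudoInverse(const Eigen::MatrixXd &A, double tol = 1e-10)
{
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeThinU | Eigen::ComputeThinV);
    Eigen::VectorXd s = svd.singularValues();
    for (int i = 0; i < s.size(); ++i)
        s(i) = (s(i) > tol) ? 1.0 / s(i) : 0.0; // invert significant singular values only
    return svd.matrixV() * s.asDiagonal() * svd.matrixU().transpose();
}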
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Turn an array of pixels into an Image object with Java's ImageIO? I'm currently turning an array of pixel values (originally created with a java.awt.image.PixelGrabber object) into an Image object using the following code:
public Image getImageFromArray(int[] pixels, int width, int height) {
MemoryImageSource mis = new MemoryImageSource(width, height, pixels, 0, width);
Toolkit tk = Toolkit.getDefaultToolkit();
return tk.createImage(mis);
}
Is it possible to achieve the same result using classes from the ImageIO package(s) so I don't have to use the AWT Toolkit?
Toolkit.getDefaultToolkit() does not seem to be 100% reliable and will sometimes throw an AWTError, whereas the ImageIO classes should always be available, which is why I'm interested in changing my method.
A: Using the raster I got an ArrayIndexOutOfBoundsException even when I created the BufferedImage with TYPE_INT_ARGB. However, using the setRGB(...) method of BufferedImage worked for me.
A: JavaDoc on BufferedImage.getData() says: "a Raster that is a copy of the image data."
This code works for me, but I doubt its efficiency:
// Build the image from a pixel array.
int[] pixels = new int[width*height];
// Draw a diagonal line.
for (int j = 0; j < height; j++) {
for (int i = 0; i < width; i++) {
if (i == j) {
pixels[j*width + i] = Color.RED.getRGB();
}
else {
pixels[j*width + i] = Color.BLUE.getRGB();
//pixels[j*width + i] = 0x00000000;
}
}
}
BufferedImage pixelImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
pixelImage.setRGB(0, 0, width, height, pixels, 0, width);
A: You can create the image without using ImageIO. Just create a BufferedImage using an image type matching the contents of the pixel array.
public static Image getImageFromArray(int[] pixels, int width, int height) {
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
WritableRaster raster = (WritableRaster) image.getData();
raster.setPixels(0,0,width,height,pixels);
return image;
}
When working with the PixelGrabber, don't forget to extract the RGBA info from the pixel array before calling getImageFromArray. There's an example of this in the handlepixel method in the PixelGrabber javadoc. Once you do that, make sure the image type in the BufferedImage constructor is set to BufferedImage.TYPE_INT_ARGB.
A: I've had good success using java.awt.Robot to grab a screen shot (or a segment of the screen), but to work with ImageIO, you'll need to store it in a BufferedImage instead of the memory image source. Then you can call one static method of ImageIO and save the file. Try something like:
// Capture whole screen
Rectangle region = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
BufferedImage capturedImage = new Robot().createScreenCapture(region);
// Save as PNG
File imageFile = new File("capturedImage.png");
ImageIO.write(capturedImage, "png", imageFile);
A: As this is one of the highest voted question tagged with ImageIO on SO, I think there's still room for a better solution, even if the question is old. :-)
Have a look at the BufferedImageFactory.java class from my open source imageio project at GitHub.
With it, you can simply write:
BufferedImage image = new BufferedImageFactory(image).getBufferedImage();
The other good thing is that this approach, as a worst case, has about the same performance (time) as the PixelGrabber-based examples already in this thread. For most of the common cases (typically JPEG), it's about twice as fast. In any case, it uses less memory.
As a side bonus, the color model and pixel layout of the original image is kept, instead of translated to int ARGB with default color model. This might save additional memory.
(PS: The factory also supports subsampling, region-of-interest and progress listeners if anyone's interested. :-)
A: I had the same problem as everyone else trying to apply the accepted answer of this question: my int array was getting an ArrayIndexOutOfBoundsException. I fixed it by enlarging the array, because its length has to be width*height*3. After this I still could not get the image, so I fixed that by setting the raster back onto the image:
public static Image getImageFromArray(int[] pixels, int width, int height) {
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
WritableRaster raster = (WritableRaster) image.getData();
raster.setPixels(0,0,width,height,pixels);
image.setData(raster);
return image;
}
And you can see the image if you show it on a label on a JFrame like this:
JFrame frame = new JFrame();
frame.getContentPane().setLayout(new FlowLayout());
frame.getContentPane().add(new JLabel(new ImageIcon(image)));
frame.pack();
frame.setVisible(true);
setting the image on the ImageIcon().
One last piece of advice: you can try changing BufferedImage.TYPE_INT_ARGB to something else that matches the image you got the array from. This type is very important; I had an array of 0 and -1, so I used BufferedImage.TYPE_3BYTE_BGR.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
}
|
Q: Checking IP Port State Remotely I found an article on getting active tcp/udp connections on a machine.
http://www.codeproject.com/KB/IP/iphlpapi.aspx
My issue, however, is that I need to be able to determine active connections remotely - to see if a particular port is open or listening without tampering with the machine.
Is this possible?
Doesn't seem like it natively, otherwise it could pose a security issue. The alternative would be to query a remoting service which could then make the necessary calls on the local machine.
Any thoughts?
A: Nmap is what you are looking for.
A: There is no way to know which ports are open without the remote computer knowing it. But you can determine the information without the program running on the port knowing it (i.e. without interfering with the program).
Use SYN scanning:
To establish a connection, TCP uses a three-way handshake. This can be exploited to find out if a port is open or not without the program knowing.
The handshake works as follows:
*
*The client performs an active open by sending a SYN to the server.
*The server replies with a SYN-ACK.
*Normally, the client sends an ACK back to the server. But this step is skipped.
SYN scan is the most popular form of TCP scanning. Rather than use the operating system's network functions, the port scanner generates raw IP packets itself, and monitors for responses. This scan type is also known as "half-open scanning", because it never actually opens a full TCP connection. The port scanner generates a SYN packet. If the target port is open, it will respond with a SYN-ACK packet. The scanner host responds with a RST packet, closing the connection before the handshake is completed.
The use of raw networking has several advantages, giving the scanner full control of the packets sent and the timeout for responses, and allowing detailed reporting of the responses. There is debate over which scan is less intrusive on the target host. SYN scan has the advantage that the individual services never actually receive a connection while some services can be crashed with a connect scan. However, the RST during the handshake can cause problems for some network stacks, particularly simple devices like printers. There are no conclusive arguments either way.
As is mentioned below, I think nmap can do SYN scanning.
Using sockets for TCP port scanning:
One way to determine which ports are open is to open a socket to that port. Or to a different port which finds out the information for you like you mentioned.
For example from command prompt or a terminal:
telnet google.com 80
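The same connect-style check can be done programmatically. Here is a minimal C# sketch (host, port and timeout are example values; note that the remote service will see the connection attempt):
using System;
using System.Net.Sockets;
class PortCheck
{
    static bool IsPortOpen(string host, int port, int timeoutMs)
    {
        try
        {
            using (var client = new TcpClient())
            {
                // start the connect and give up after the timeout
                IAsyncResult ar = client.BeginConnect(host, port, null, null);
                if (!ar.AsyncWaitHandle.WaitOne(timeoutMs))
                    return false;      // timed out: filtered or unreachable
                client.EndConnect(ar); // throws if the connection was refused
                return true;
            }
        }
        catch (SocketException)
        {
            return false;              // closed or refused
        }
    }
    static void Main()
    {
        Console.WriteLine(IsPortOpen("google.com", 80, 2000) ? "open" : "closed/filtered");
    }
}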
UDP Port scanning:
If a UDP packet is sent to a port that is not open, the system will respond with an ICMP port unreachable message. You can use this method to determine if a port is open or closed, but the receiving program will know.
A: neouser99 (et al) has suggested NMAP. NMAP is very good if all you're trying to do is to detect ports that are open on the remote machine.
But from the sounds of your question you're actually trying to determine which ports are both open and connected on your remote machine. If you're after a general monitoring solution, including the connected ports, then you could install an SNMP agent on your remote machine. There are two MIBs that let you check for port status, which are TCP-MIB::tcpConnectionTable and UDP-MIB::udpEndpointTable.
The daemon (server) supplied in net-snmp most likely has support for these MIBs.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do I generate an array of pairwise distances in Ruby? Say I have an array that represents a set of points:
x = [2, 5, 8, 33, 58]
How do I generate an array of all the pairwise distances?
A: x = [2, 5, 8, 33, 58]
print x.collect {|n| x.collect {|i| (n-i).abs}}.flatten
I think that would do it.
A: If you really do want a flat array instead of a matrix, this computes only n(n-1)/2 distances instead of n^2.
result=[]
x.each_index{|i| (i+1).upto(x.size-1){|j| result<<(x[i]-x[j]).abs}}
A: x.map{|i| x.map{|j| (i-j).abs } }
gives
[[0, 3, 6, 31, 56],
[3, 0, 3, 28, 53],
[6, 3, 0, 25, 50],
[31, 28, 25, 0, 25],
[56, 53, 50, 25, 0]]
(format it like this by printing it with 'pp' instead of puts)
and
x.map{|i| x.map{|j| (i-j).abs } }.flatten
gives
[0, 3, 6, 31, 56, 3, 0, 3, 28, 53, 6, 3, 0, 25, 50, 31, 28, 25, 0, 25, 56, 53, 50, 25, 0]
if you really want an array
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How Do I Give a Textbox Focus in Silverlight? In my Silverlight application, I can't seem to bring focus to a TextBox control. On the recommendation of various posts, I've set the IsTabStop property to True and I'm using TextBox.Focus(). Though the UserControl_Loaded event is firing, the TextBox control isn't getting focus. I've included my very simple code below. What am I missing? Thanks.
Page.xaml
<UserControl x:Class="TextboxFocusTest.Page"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Loaded="UserControl_Loaded"
Width="400" Height="300">
<Grid x:Name="LayoutRoot" Background="White">
<StackPanel Width="150" VerticalAlignment="Center">
<TextBox x:Name="RegularTextBox" IsTabStop="True" />
</StackPanel>
</Grid>
</UserControl>
Page.xaml.cs
using System.Windows;
using System.Windows.Controls;
namespace TextboxFocusTest
{
public partial class Page : UserControl
{
public Page()
{
InitializeComponent();
}
private void UserControl_Loaded(object sender, RoutedEventArgs e)
{
RegularTextBox.Focus();
}
}
}
A: I found this on silverlight.net, and was able to get it to work for me by adding a call to System.Windows.Browser.HtmlPage.Plugin.Focus() prior to calling RegularTextBox.Focus():
private void UserControl_Loaded(object sender, RoutedEventArgs e)
{
System.Windows.Browser.HtmlPage.Plugin.Focus();
RegularTextBox.Focus();
}
A: I solved it by putting this in the control constructor:
this.TargetTextBox.Loaded += (o, e) => { this.TargetTextBox.Focus(); };
A: Plugin.Focus();
didn't work for me.
Calling
Dispatcher.BeginInvoke(() => { tbNewText.Focus();});
From the Load event worked.
A: Are you sure you're not really getting focus? There's a known bug in Beta 2 where you'll get focus and be able to type but you won't get the caret or the border. The workaround is to call UpdateLayout() on the textbox right before you call Focus().
A: I would try adding a DispatcherTimer on the UserLoaded event that executes the Focus method a few milliseconds after the whole control has loaded; maybe the problem is there.
A: I also needed to call
Deployment.Current.Dispatcher.BeginInvoke(() => myTextbox.Focus());
interestingly this call is happening inside an event handler when I mouseclick on a TextBlock, collapse the TextBlock and make the TextBox Visible. If I don't follow it by a dispatcher.BeginInvoke it won't get focus.
-Mike
A: Thanks Santiago Palladino; the Dispatcher worked for me perfectly. What I am doing is:
this.Focus();
then
Dispatcher.BeginInvoke(() => { tbNewText.Focus();});
A: Your code to set the focus is correct, since if you add a button that calls the same code it works perfectly:
<StackPanel Width="150" VerticalAlignment="Center">
<TextBox x:Name="RegularTextBox" IsTabStop="True" />
<Button Click="UserControl_Loaded">
<TextBlock Text="Test"/>
</Button>
</StackPanel>
So I'm assuming this is something to do with Focus() requiring some kind of user interaction. I couldn't get it to work with a MouseMove event on the UserControl, but putting a KeyDown event to set the focus works (although the template doesn't update to the focused template).
Width="400" Height="300" Loaded="UserControl_Loaded" KeyDown="UserControl_KeyDown">
Seems like a bug to me....
A: For out-of-browser apps the System.Windows.Browser.HtmlPage.Plugin.Focus(); doesn't exist.
See my question here for other ideas.
A: It works for me in SL4 and IE7 and Firefox 3.6.12
Final missing "piece" which made focus to work (for me) was setting .TabIndex property
System.Windows.Browser.HtmlPage.Plugin.Focus();
txtUserName.IsTabStop = true;
txtPassword.IsTabStop = true;
if (txtUserName.Text.Trim().Length != 0)
{
txtPassword.UpdateLayout();
txtPassword.Focus();
txtPassword.TabIndex = 0;
}
else
{
txtUserName.UpdateLayout();
txtUserName.Focus();
txtUserName.TabIndex = 0;
}
A: My profile is not good enough to comment on @Jim B-G's answer but what worked for me was to add a handler for the Loaded event on the RichTextBox and inside that handler add
System.Windows.Browser.HtmlPage.Plugin.Focus();
<YourTextBox>.UpdateLayout()
<YourTextBox>.Focus();
However, it only worked on IE and FF. To get it work on Chrome and Safari, scroll to the bottom of this
A: I forgot one thing...I haven't found a way to force focus to your Silverlight application on the page reliably (it will work on some browsers and not on others).
So it may be that the Silverlight app itself doesn't have focus. I usually trick the user into clicking a button or something similar before I start expecting keyboard input to make sure that the silverlight app has focus.
A: I also ran into this problem, but it had arisen from a different case than what has been answered here already.
If you have a BusyIndicator control being displayed and hidden at all during your view, controls will not get focus if you have lines like
Dispatcher.BeginInvoke(() => { myControl.Focus();});
in the load event.
Instead, you will need to call that line of code after your BusyIndicator display has been set to false.
I have a related question here, as well as a solution for this scenario.
A: Indeed an annoying behaviour. I found a simple straightforward solution:
(VB code)
Me.Focus()
Me.UpdateLayout()
Me.tbx_user_num.Focus()
Me.tbx_user_num.UpdateLayout()
Each element here is essential, as per my project at least (VB2010 SL4 OutOfBrowser).
Credit to : http://www.dotnetspark.com/kb/1792-set-focus-to-textbox-silverlight-3.aspx
A: None of the above answers worked for me directly; what I did is add this event handler registration in the MainPage() constructor:
this.Loaded += new RoutedEventHandler(MainPage_Loaded);
And handled it as follows:
void MainPage_Loaded(object sender, RoutedEventArgs e)
{
System.Windows.Browser.HtmlPage.Plugin.Focus();
RegularTextBox.Focus();
}
My Silverlight version is 4.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
}
|
Q: Have you integrated Mantis and Subversion? I do mostly Windows development. We use Mantis and Subversion for our development but they aren't integrated together, in fact they are on different servers.
I did a little googling about integrating the two together and came across this post. It looked interesting.
I was wondering if anyone is doing this or has done this and what your experience has been. If you've got a different solution, I'd be interested in knowing that too!
Thanks!
A: I use Mantis with SVN, pretty much as that link says, though I put the regexp in the post-commit so it doesn't try to update the bug if the commit message is not relevant; that makes non-bug-updating commits respond slightly faster.
My Mantis install is on a different server too. I use curl to call the php method in Mantis 1.1.6.
Put this in your post-commit.cmd hook (you'll need to download strawberry perl and grab perl.exe and perl510.dll from it, you don't need the rest)
c:\tools\perl c:\tools\mantis_urlencode.pl %1 %2 > c:\temp\postcommit_mantis.txt
if %ERRORLEVEL% NEQ 0 exit /b 0
c:\tools\curl -s -d user=svn -d @c:\temp\postcommit_mantis.txt http://swi-sgi-l-web1.ingrnet.com/mantis/core/checkincurl.php
and put this in mantis_urlencode.pl
$url = `svnlook log -r $ARGV[1] $ARGV[0]`;
# check the string contains the matching regexp,
# quit if it doesn't so we don't waste time contacting the webserver
# this is the g_source_control_regexp value in mantis.
exit 1 if not $url =~ /\b(?:bug|issue|mantis)\s*[#]{0,1}(\d+)\b/i;
$url = $url . "\n" . `svnlook dirs-changed -r $ARGV[1] $ARGV[0]`;
#urlencode the string
$url =~ s/([^\w\-\.\@])/$1 eq " "?"+": sprintf("%%%2.2x",ord($1))/eg;
print "log=$url";
exit 0;
If you want to migrate from VSS, there are a load of scripts, including one I wrote on codeplex.
It all works well, we use it all the time, and it's quick enough that you don't notice it's there. Just type "Fixed Mantis #1234" and it resolves the bug and adds a bugnote to it. The script also adds the directories that were modified to the bugnote (I tried showing changed files, but too many detract from easy understanding).
A: We've used scmbug for quite some time to link SVN to Bugzilla. Worked very well until we upgraded to Bugzilla 3.2 recently, which broke the integration. It takes a little while for the scmbug team to catch up when new releases of the SCM tools come out, which is understandable.
A: Here's the Subversion post-commit script we use. It uses PHP to run the Mantis checkin PHP script as suggested in this link in the original post.
A: I came across scmbug. Looks like it will hook up things like Mantis to things like Subversion.
A: We followed the steps in your link - the only difference is that on Windows you have post-commit.bat instead. If you scroll down, someone posts a sample. We modified that so it logs the files changed and who changed them - a fairly easy hack to the batch file. We tried including the diffs at one point, but it was obvious pretty quickly that doing so is a bad idea because of the size of some checkins.
It works really well and I'm really happy - now I have to move all our Sourcesafe stuff across...
A: I am personally using a private SVN repository on my local development environment using VisualSVN Server and a public Mantis bug tracker. I had to change the checkin.php file a bit to handle calls from a web server (with help of this web page: http://www.mantisbt.org/bugs/view.php?id=8847)
I have made a short C# console application to handle this instead of a batch file, so it is more configurable and supports remote or local checkin.php files.
I have posted an article about this on my blog with the source code if you are interested: http://mp4m.org/blog/svn-and-mantis-bug-tracker-integration/
Hope that helps!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124650",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Learning how to use Subversion This is probably a really stupid newbie-sounding question to you developer type people, but I'm at a loss :( I've been trying to learn how to use Subversion for keeping the history of my code, but I'm finding it pretty confusing. I read the 'book' that comes with Subversion, but I didn't find it all that helpful. I'm using Windows, and I downloaded the TortoiseSVN GUI for it.
All I really want to know how to do is to create a new project, put a file in it (any old file), and then update that file, just so I can see how it works. I created a 'repository' (in svn_repository/test), and if anyone could tell me how I'm supposed to go about creating a new file/putting a file in it, and then updating that file I'd be really happy :) Knowing my luck it'll be something as simple as "drag and drop the file into the directory". Apologies for asking such a stupid question!
Also if anyone could tell me how to go about making it work with Zend Studio, that would be extra awesome-points. Thanks!
A: Often when I create a new project I have to refer to the SVN Quickstart guide.
It takes you through creating a new repository, the initial import, and how to check your files out and back in (on the command line).
The book is very helpful, but you'll get the best value out of it after you've been using version control for a little while and understand the concepts better.
(Note the terminology in bold below)
If you're using TortoiseSVN, you'll have to create the repository and then import your files (if you have any) when starting up. After that you check out the project to a working folder; you can then create files in the working folder and add them easily. Once the repository is created, you only interact with it via your Subversion client.
A: You asked for a one-file project, so here it is. I'm not familiar enough with Tortoise to run you though it that way, but I'll list the commands and hopefully you can figure out for yourself how to do each step by right-clicking in File Explorer. There are only actually five things you need to be able to do: create a repository, check out, "add" a file to make it version-controlled, check in, and log. The rest will come later.
Also, someone might search on learning Subversion later who isn't using Tortoise, and they'll find this question.
# create an empty repository
svnadmin create myrepos
# check out a working copy of the empty repository
svn co file://full/path/to/myrepos workingcopy
# create an empty file in workingcopy (nothing to do with SVN - use
# File > New > Text Document if you like)
cd workingcopy
touch mycode
# place it under version control, then tell the repository what you've done.
svn add mycode
svn ci -m "My first ever checkin comment! File created."
# Now we're developing. Go edit the file. Come back when you're done.
# Check it back in
svn ci -m "First version of project"
# Go edit it again
# Check it in again
svn ci -m "Made my project better"
# See what we've done so far
svn log mycode
That's it. That's the bare minimum you have to do to version-control a single file. Now go re-read the start of the SVN book, delete myrepos, and start over, because you'll probably want to structure your first proper repository in the way it tells you to.
A: Have a look at this question - it's got some good pointers on starting with SVN.
A: I really like using AnkhSvn in conjunction with Tortoise. It works from Visual Studio. When I set up my own repository, I used VisualSVN, which took 2 secs to run, and didn't involve any apache or LAMP stuff. Just worked out of the box. As far as using it, try the free book online to get a feel for what source control is all about. Then go to a website, like http://blog.taragana.com/index.php/archive/5-minutes-guide-to-subversion/ for a quick tutorial of how to use it.
A: The recommended directory structure for a subversion repo contains three folders: "branches", "tags" and "trunk". So, create these folders somewhere convenient, in a new folder.
Right click in the parent folder of these folders, go to TortoiseSVN and select Import. Enter the URL to the repository you created here (i.e. https://JUNK:8443/svn/Test/ is one I just made, on my local machine). Hit the OK button and the folders will be imported.
Now browse to where you want the repo to live on your local machine (I've gone to C:\workspace\test). Right-click and go to SVN Checkout.
Now, you want to check out from the trunk of your repo, so change the repository URL to reflect this (https://JUNK:8443/svn/Test/trunk/). Hit the ok button.
Create a new file in this directory. Right click on it and go to TortoiseSVN, then Add. Hit ok, and the file is now marked as a new file for the repo. Right click in the parent folder of the file and you should see SVN Update and SVN Commit. SVN Update will refresh the local files with files from the repository. SVN Commit will send local files that have been changed back into the repository.
Have fun :)
A: The repository is a place where Subversion itself manages the files - you will not access the files in the repository directly. If you've created a repository, then the next step is to do a Checkout from the repository to some working directory. (This working directory should not be a subdirectory of the repository.)
Once you have a checkout, drop a file in there and right click on it to Add it. The other operations should make more sense from that point.
A: The SVN Book has an appendix called "Subversion Quick Start Guide" that goes through the very basics quickly. Here is a quick overview.
For the initial setup, I create a temporary folder on the SVN server where I'll set up the structure of my site. This is just a temp folder and I delete it once I've done the initial setup. I usually call it something like C:\tmpRepository. I then create a new folder in there for my project name. So let's say your project name is test: I would create c:\tmpRepository\test. Inside that folder create three folders: branches, tags, trunk. Then copy your project files into the trunk directory.
Now open the command prompt and type the following to create the new repository.
svnadmin create c:\AppRepositories\test. I just keep all my source code in the AppRepositories folder and then just setup each project with a new folder.
Next we need to load our new repository with the files in our temp directory. So with the command prompt open we run:
svn import c:\tmpRepository\test file:///c:/AppRepositories/test -m "initial import"
That's it! Then on your development computer you should install TortoiseSVN. You will want to setup a location on your computer where you will store the working copy of your files. I typically just create a folder on the C: drive called "WorkingCode." Open that folder, right click and choose SVN Checkout. Under URL of repository type in svn://servername/test. Make sure checkout directory is correct.
BAM! You should now see all your code files in the trunk directory (c:\workingcode\test\trunk).
A: The prags wrote a good book on using Subversion: http://www.pragprog.com/titles/svn2/pragmatic-version-control-using-subversion
A: I found TortoiseSVN to be terribly confusing, especially in conjunction with the SVN Book. But then again, I'm not a very GUI oriented person.
Work through the book using the command line SVN client, until you understand the basic concepts. Don't skip any chapters!
Then you can evaluate GUIs, if you even need one by then.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
}
|
Q: Picking a random element from a set How do I pick a random element from a set?
I'm particularly interested in picking a random element from a
HashSet or a LinkedHashSet, in Java.
Solutions for other languages are also welcome.
A: In Java:
Set<Integer> set = new LinkedHashSet<Integer>(3);
set.add(1);
set.add(2);
set.add(3);
Random rand = new Random(System.currentTimeMillis());
Integer[] setArray = set.toArray(new Integer[0]);
for (int i = 0; i < 10; ++i) {
System.out.println(setArray[rand.nextInt(set.size())]);
}
A: List asList = new ArrayList(mySet);
Collections.shuffle(asList);
return asList.get(0);
A: A somewhat related Did You Know:
There are useful methods in java.util.Collections for shuffling whole collections: Collections.shuffle(List<?>) and Collections.shuffle(List<?> list, Random rnd).
A: This is identical to accepted answer (Khoth), but with the unnecessary size and i variables removed.
int random = new Random().nextInt(myhashSet.size());
for(Object obj : myhashSet) {
if (random-- == 0) {
return obj;
}
}
Though doing away with the two aforementioned variables, the above solution still remains random because we are relying upon random (starting at a randomly selected index) to decrement itself toward 0 over each iteration.
A: In Java 8:
static <E> E getRandomSetElement(Set<E> set) {
return set.stream().skip(new Random().nextInt(set.size())).findFirst().orElse(null);
}
A: Fast solution for Java using an ArrayList and a HashMap: [element -> index].
Motivation: I needed a set of items with RandomAccess properties, especially to pick a random item from the set (see pollRandom method). Random navigation in a binary tree is not accurate: trees are not perfectly balanced, which would not lead to a uniform distribution.
public class RandomSet<E> extends AbstractSet<E> {
List<E> dta = new ArrayList<E>();
Map<E, Integer> idx = new HashMap<E, Integer>();
public RandomSet() {
}
public RandomSet(Collection<E> items) {
for (E item : items) {
idx.put(item, dta.size());
dta.add(item);
}
}
@Override
public boolean add(E item) {
if (idx.containsKey(item)) {
return false;
}
idx.put(item, dta.size());
dta.add(item);
return true;
}
/**
* Override element at position <code>id</code> with last element.
* @param id
*/
public E removeAt(int id) {
if (id >= dta.size()) {
return null;
}
E res = dta.get(id);
idx.remove(res);
E last = dta.remove(dta.size() - 1);
// skip filling the hole if last is removed
if (id < dta.size()) {
idx.put(last, id);
dta.set(id, last);
}
return res;
}
@Override
public boolean remove(Object item) {
@SuppressWarnings(value = "element-type-mismatch")
Integer id = idx.get(item);
if (id == null) {
return false;
}
removeAt(id);
return true;
}
public E get(int i) {
return dta.get(i);
}
public E pollRandom(Random rnd) {
if (dta.isEmpty()) {
return null;
}
int id = rnd.nextInt(dta.size());
return removeAt(id);
}
@Override
public int size() {
return dta.size();
}
@Override
public Iterator<E> iterator() {
return dta.iterator();
}
}
A: This is faster than the for-each loop in the accepted answer:
int index = rand.nextInt(set.size());
Iterator<Object> iter = set.iterator();
for (int i = 0; i < index; i++) {
iter.next();
}
return iter.next();
The for-each construct calls Iterator.hasNext() on every loop, but since index < set.size(), that check is unnecessary overhead. I saw a 10-20% boost in speed, but YMMV. (Also, this compiles without having to add an extra return statement.)
Note that this code (and most other answers) can be applied to any Collection, not just Set. In generic method form:
public static <E> E choice(Collection<? extends E> coll, Random rand) {
if (coll.size() == 0) {
return null; // or throw IAE, if you prefer
}
int index = rand.nextInt(coll.size());
if (coll instanceof List) { // optimization
return ((List<? extends E>) coll).get(index);
} else {
Iterator<? extends E> iter = coll.iterator();
for (int i = 0; i < index; i++) {
iter.next();
}
return iter.next();
}
}
A: Clojure solution:
(defn pick-random [set] (let [sq (seq set)] (nth sq (rand-int (count sq)))))
A: Java 8+ Stream:
static <E> Optional<E> getRandomElement(Collection<E> collection) {
return collection
.stream()
.skip(ThreadLocalRandom.current()
.nextInt(collection.size()))
.findAny();
}
Based on the answer of Joshua Bone but with slight changes:
*
*Ignores the Streams element order for a slight performance increase in parallel operations
*Uses the current thread's ThreadLocalRandom
*Accepts any Collection type as input
*Returns the provided Optional instead of null
A: Perl 5
@hash_keys = (keys %hash);
$rand = int(rand(@hash_keys));
print $hash{$hash_keys[$rand]};
Here is one way to do it.
A: C++. This should be reasonably quick, as it doesn't require iterating over the whole set, or sorting it. This should work out of the box with most modern compilers, assuming they support tr1. If not, you may need to use Boost.
The Boost docs are helpful here to explain this, even if you don't use Boost.
The trick is to make use of the fact that the data has been divided into buckets, and to quickly identify a randomly chosen bucket (with the appropriate probability).
//#include <boost/unordered_set.hpp>
//using namespace boost;
#include <tr1/unordered_set>
using namespace std::tr1;
#include <iostream>
#include <stdlib.h>
#include <assert.h>
using namespace std;
int main() {
unordered_set<int> u;
u.max_load_factor(40);
for (int i=0; i<40; i++) {
u.insert(i);
cout << ' ' << i;
}
cout << endl;
cout << "Number of buckets: " << u.bucket_count() << endl;
for(size_t b=0; b<u.bucket_count(); b++)
cout << "Bucket " << b << " has " << u.bucket_size(b) << " elements. " << endl;
for(size_t i=0; i<20; i++) {
size_t x = rand() % u.size();
cout << "we'll quickly get the " << x << "th item in the unordered set. ";
size_t b;
for(b=0; b<u.bucket_count(); b++) {
if(x < u.bucket_size(b)) {
break;
} else
x -= u.bucket_size(b);
}
cout << "it'll be in the " << b << "th bucket at offset " << x << ". ";
unordered_set<int>::const_local_iterator l = u.begin(b);
while(x>0) {
l++;
assert(l!=u.end(b));
x--;
}
cout << "random item is " << *l << ". ";
cout << endl;
}
}
A: Solution above speak in terms of latency but doesn't guarantee equal probability of each index being selected.
If that needs to be considered, try reservoir sampling. http://en.wikipedia.org/wiki/Reservoir_sampling. Collections.shuffle() (as suggested by few) uses one such algorithm.
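For illustration, here is a minimal sketch of single-pass reservoir sampling with a reservoir of one element, assuming java.util.Random; after seeing n items, each one has been kept with probability 1/n:
import java.util.Random;

// Sketch: single-element reservoir sampling in one pass over any Iterable.
static <E> E reservoirPick(Iterable<E> items, Random rnd) {
    E chosen = null;
    int seen = 0;
    for (E item : items) {
        seen++;
        // replace the current choice with probability 1/seen
        if (rnd.nextInt(seen) == 0) {
            chosen = item;
        }
    }
    return chosen; // null if the iterable was empty
}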
A: If you want to do it in Java, you should consider copying the elements into some kind of random-access collection (such as an ArrayList), because, unless your set is small, accessing the selected element will be expensive (O(n) instead of O(1)). [ed: the list copy is also O(n)]
Alternatively, you could look for another Set implementation that more closely matches your requirements. The ListOrderedSet from Commons Collections looks promising.
A: int size = myHashSet.size();
int item = new Random().nextInt(size); // In real life, the Random object should be rather more shared than this
int i = 0;
for(Object obj : myhashSet)
{
if (i == item)
return obj;
i++;
}
A: Since you said "Solutions for other languages are also welcome", here's the version for Python:
>>> import random
>>> random.choice([1,2,3,4,5,6])
3
>>> random.choice([1,2,3,4,5,6])
4
A: Can't you just get the size/length of the set/array, generate a random number between 0 and the size/length, then call the element whose index matches that number? HashSet has a .size() method, I'm pretty sure.
In pseudocode -
function randFromSet(target){
var targetLength:uint = target.length()
var randomIndex:uint = random(0,targetLength);
return target[randomIndex];
}
A: PHP, assuming "set" is an array:
$foo = array("alpha", "bravo", "charlie");
$index = array_rand($foo);
$val = $foo[$index];
The Mersenne Twister functions are better but there's no MT equivalent of array_rand in PHP.
A: Javascript solution ;)
function choose (set) {
return set[Math.floor(Math.random() * set.length)];
}
var set = [1, 2, 3, 4], rand = choose (set);
Or alternatively:
Array.prototype.choose = function () {
return this[Math.floor(Math.random() * this.length)];
};
[1, 2, 3, 4].choose();
A: Icon has a set type and a random-element operator, unary "?", so the expression
? set( [1, 2, 3, 4, 5] )
will produce a random number between 1 and 5.
The random seed is initialized to 0 when a program is run, so to produce different results on each run use randomize()
A: In C#
Random random = new Random((int)DateTime.Now.Ticks);
OrderedDictionary od = new OrderedDictionary();
od.Add("abc", 1);
od.Add("def", 2);
od.Add("ghi", 3);
od.Add("jkl", 4);
int randomIndex = random.Next(od.Count);
Console.WriteLine(od[randomIndex]);
// Can access via index or key value:
Console.WriteLine(od[1]);
Console.WriteLine(od["def"]);
A: In lisp
(defun pick-random (set)
(nth (random (length set)) set))
A: In Mathematica:
a = {1, 2, 3, 4, 5}
a[[ ⌈ Length[a] Random[] ⌉ ]]
Or, in recent versions, simply:
RandomChoice[a]
Random[] generates a pseudorandom float between 0 and 1. This is multiplied by the length of the list and then the ceiling function is used to round up to the next integer. This index is then extracted from a.
Since hash table functionality is frequently done with rules in Mathematica, and rules are stored in lists, one might use:
a = {"Badger" -> 5, "Bird" -> 1, "Fox" -> 3, "Frog" -> 2, "Wolf" -> 4};
A: How about just
public static <A> A getRandomElement(Collection<A> c, Random r) {
return new ArrayList<A>(c).get(r.nextInt(c.size()));
}
A: For fun I wrote a RandomHashSet based on rejection sampling. It's a bit hacky, since HashMap doesn't let us access its table directly, but it should work just fine.
It doesn't use any extra memory, and lookup time is O(1) amortized (because the Java hash table is dense).
class RandomHashSet<V> extends AbstractSet<V> {
private Map<Object,V> map = new HashMap<>();
public boolean add(V v) {
return map.put(new WrapKey<V>(v),v) == null;
}
@Override
public Iterator<V> iterator() {
return new Iterator<V>() {
RandKey key = new RandKey();
@Override public boolean hasNext() {
return true;
}
@Override public V next() {
while (true) {
key.next();
V v = map.get(key);
if (v != null)
return v;
}
}
@Override public void remove() {
throw new UnsupportedOperationException();
}
};
}
@Override
public int size() {
return map.size();
}
static class WrapKey<V> {
private V v;
WrapKey(V v) {
this.v = v;
}
@Override public int hashCode() {
return v.hashCode();
}
@Override public boolean equals(Object o) {
if (o instanceof RandKey)
return true;
return v.equals(o);
}
}
static class RandKey {
private Random rand = new Random();
int key = rand.nextInt();
public void next() {
key = rand.nextInt();
}
@Override public int hashCode() {
return key;
}
@Override public boolean equals(Object o) {
return true;
}
}
}
A: The easiest with Java 8 is:
outbound.stream().skip(n % outbound.size()).findFirst().get()
where n is a random non-negative integer. Of course it performs worse than iterating with for (elem : coll),
A: With Guava we can do a little better than Khoth's answer:
public static <E> E random(Set<E> set) {
int index = random.nextInt(set.size());
if (set instanceof ImmutableSet) {
// ImmutableSet.asList() is O(1), as is .get() on the returned list
return ((ImmutableSet<E>) set).asList().get(index);
}
return Iterables.get(set, index);
}
A: PHP, using MT:
$items_array = array("alpha", "bravo", "charlie");
$last_pos = count($items_array) - 1;
$random_pos = mt_rand(0, $last_pos);
$random_item = $items_array[$random_pos];
A: Unfortunately, this cannot be done efficiently (better than O(n)) in any of the Standard Library set containers.
This is odd, since it is very easy to add a randomized pick function to hash sets as well as binary tree sets. In a not-too-sparse hash set, you can try random entries until you get a hit. For a binary tree, you can choose randomly between the left and right subtree, with a maximum of O(log2 n) steps. I've implemented a demo of the latter below:
import random
class Node:
def __init__(self, object):
self.object = object
self.value = hash(object)
self.size = 1
self.a = self.b = None
class RandomSet:
def __init__(self):
self.top = None
def add(self, object):
""" Add any hashable object to the set.
Notice: In this simple implementation you shouldn't add two
identical items. """
new = Node(object)
if not self.top: self.top = new
else: self._recursiveAdd(self.top, new)
def _recursiveAdd(self, top, new):
top.size += 1
if new.value < top.value:
if not top.a: top.a = new
else: self._recursiveAdd(top.a, new)
else:
if not top.b: top.b = new
else: self._recursiveAdd(top.b, new)
def pickRandom(self):
""" Pick a random item in O(log2) time.
Does a maximum of O(log2) calls to random as well. """
return self._recursivePickRandom(self.top)
def _recursivePickRandom(self, top):
r = random.randrange(top.size)
if r == 0: return top.object
elif top.a and r <= top.a.size: return self._recursivePickRandom(top.a)
return self._recursivePickRandom(top.b)
if __name__ == '__main__':
s = RandomSet()
for i in [5,3,7,1,4,6,9,2,8,0]:
s.add(i)
dists = [0]*10
for i in xrange(10000):
dists[s.pickRandom()] += 1
print dists
I got [995, 975, 971, 995, 1057, 1004, 966, 1052, 984, 1001] as output, so the distribution seems good.
I've struggled with the same problem myself, and I haven't yet decided whether the performance gain of this more efficient pick is worth the overhead of using a Python-based collection. I could of course refine it and translate it to C, but that is too much work for me today :)
A: You can also copy the set to an array and index into it. This will probably work fine at a small scale - and the for loop in the most voted answer is O(n) anyway.
Object[] arr = set.toArray();
int v = (Integer) arr[rnd.nextInt(arr.length)];
A: If you really just want to pick "any" object from the Set, without any guarantees on the randomness, the easiest is taking the first returned by the iterator.
Set<Integer> s = ...
Iterator<Integer> it = s.iterator();
if(it.hasNext()){
Integer i = it.next();
// i is a "random" object from set
}
A: A generic solution using Khoth's answer as a starting point.
/**
* @param set a Set in which to look for a random element
* @param <T> generic type of the Set elements
* @return a random element in the Set or null if the set is empty
*/
public <T> T randomElement(Set<T> set) {
int size = set.size();
int item = random.nextInt(size);
int i = 0;
for (T obj : set) {
if (i == item) {
return obj;
}
i++;
}
return null;
}
A: If you don't mind a 3rd party library, the Utils library has a IterableUtils that has a randomFrom(Iterable iterable) method that will take a Set and return a random element from it
Set<Object> set = new HashSet<>();
set.add(...);
...
Object random = IterableUtils.randomFrom(set);
It is in the Maven Central Repository at:
<dependency>
<groupId>com.github.rkumsher</groupId>
<artifactId>utils</artifactId>
<version>1.3</version>
</dependency>
A: After reading this thread, the best I could write is:
static Random random = new Random(System.currentTimeMillis());
public static <T> T randomChoice(T[] choices)
{
int index = random.nextInt(choices.length);
return choices[index];
}
A: If the set is not large, this can be done using arrays:
HashSet<Type> someSet;
int random = new Random(System.currentTimeMillis()).nextInt(someSet.size());
Object[] randData = someSet.toArray();
Type sResult = (Type) randData[random];
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "211"
}
|
Q: How can you find available ldap servers from a computer in the same network, but different domain? My company has code that integrates with activedirectory/LDAP for centralized userid/password login. Currently, the configuration page can only show the LDAP server linked to the Exchange domain the current computer is on. I'd like to list all available LDAP servers, similar to when you go to Windows Explorer and view 'Microsoft Windows Network'. As of now, I've been unable to get this information through LDAP or through other means.
A: There are a few things you can attempt:
*
*You can look for SRV records in DNS for the domain you're on. These look like _protoname._transportname.domain.tld - I suspect this might be what you're already doing (see the sketch after this list).
*You can attempt to use Service Location Protocol as documented in RFC 2608.
*There might be some MS-specific way to look for these services that I'm not aware of.
*You could attempt to brute-force port scan. (poor form)
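For the first option, here is a sketch of the SRV lookup in Java via JNDI's DNS provider (the domain name below is a placeholder; Active Directory publishes its LDAP servers under _ldap._tcp.<domain>):
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.Attribute;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class LdapSrvLookup {
    public static void main(String[] args) throws NamingException {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
        DirContext ctx = new InitialDirContext(env);
        // Query the SRV records for LDAP over TCP in the given domain.
        Attribute srv = ctx.getAttributes("_ldap._tcp.example.com",
                new String[] { "SRV" }).get("SRV");
        for (int i = 0; srv != null && i < srv.size(); i++) {
            // Each value is "priority weight port target", e.g. "0 100 389 dc1.example.com."
            System.out.println(srv.get(i));
        }
    }
}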
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Can you have custom client-side javascript Validation for standard ASP.NET Web Form Validators? Can you have custom client-side javascript Validation for standard ASP.NET Web Form Validators?
For instance use a asp:RequiredFieldValidator leave the server side code alone but implement your own client notification using jQuery to highlight the field or background color for example.
A: Yes, I have done so. I used Firebug to find the .NET JS functions and then hijacked the validator functions.
The following will be applied to all validators and is purely client side. I use it to change the way the ASP.Net validation is displayed, not the way the validation is actually performed. It must be wrapped in a $(document).ready() to ensure that it overwrites the original ASP.net validation.
/**
* Re-assigns a couple of the ASP.NET validation JS functions to
* provide a more flexible approach
*/
function UpgradeASPNETValidation(){
// Hi-jack the ASP.NET error display only if required
if (typeof(Page_ClientValidate) != "undefined") {
ValidatorUpdateDisplay = NicerValidatorUpdateDisplay;
AspPage_ClientValidate = Page_ClientValidate;
Page_ClientValidate = NicerPage_ClientValidate;
}
}
/**
* Extends the classic ASP.NET validation to add a class to the parent span when invalid
*/
function NicerValidatorUpdateDisplay(val){
if (val.isvalid){
// do custom removing
$(val).fadeOut('slow');
} else {
// do custom show
$(val).fadeIn('slow');
}
}
/**
* Extends classic ASP.NET validation to include parent element styling
*/
function NicerPage_ClientValidate(validationGroup){
var valid = AspPage_ClientValidate(validationGroup);
if (!valid){
// do custom styling etc
// I added a background colour to the parent object
$(this).parent().addClass('invalidField');
}
}
A: The standard CustomValidator has a ClientValidationFunction property for that:
<asp:CustomValidator ControlToValidate="Text1"
ClientValidationFunction="onValidate" />
<script type='text/javascript'>
function onValidate(validatorSpan, eventArgs)
{ eventArgs.IsValid = (eventArgs.Value.length > 0);
if (!eventArgs.IsValid) highlight(validatorSpan);
}
</script>
A: What you can do is hook into the validator and assign a new evaluate method, like this:
<script type="text/javascript">
rfv.evaluationfunction = validator;
function validator(sender, e) {
alert('rawr');
}
</script>
rfv is the ID of my required field validator. You have to do this at the bottom of your page so that it assigns it after the javascript for the validator is registered.
It's much easier just to use the CustomValidator and assign its client-side validation property.
<asp:CustomValidator ControlToValidate="txtBox" ClientValidationFunction="onValidate" />
<script type='text/javascript'>
function onValidate(sender, e)
{
alert('do validation');
}
</script>
Check out the documentation here and here.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
}
|
Q: How to map a Servlet to get extra path information with getPathInfo() I am having an issue where Tomcat is treating extra path information as part of the servlet name. This is breaking a bunch of RESTful functionality in our webapp (we use extra path info rather than ?name=value pairs for crawler-friendly links).
It was working correctly before, but it broke after adding explicit mappings and removing the Invoker servlet that we previously used to serve our servlets. For example consider the following link:
http://mydomain.com/servlet/MyServlet/param1/param2/param3
MyServlet used to be called correctly, and "/param1/param2/param3" was returned by getPathInfo() on the HttpServletRequest.
Now, it appears that Tomcat is trying to load MyServlet/param1/param2/param3 as the servlet:
[23/Sep/2008:16:44:23 -0700] "GET /servlet/MyServlet/param1/param2/param3 HTTP/1.0" 404
Here is the way they are defined and mapped in the web.xml, and just hitting
"http://mydomain.com/servlet/MyServlet" works fine.
<servlet>
<servlet-name>MyServlet</servlet-name>
<servlet-class>com.myclass.etcetera.MyServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>MyServlet</servlet-name>
<url-pattern>/servlet/MyServlet</url-pattern>
</servlet-mapping>
A: You need to map it to /servlet/MyServlet/*
You are missing the trailing "/*".
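Applied to the web.xml from the question, the mapping would become:
<servlet-mapping>
    <servlet-name>MyServlet</servlet-name>
    <url-pattern>/servlet/MyServlet/*</url-pattern>
</servlet-mapping>
With this pattern, a request for /servlet/MyServlet/param1/param2/param3 is routed to MyServlet and getPathInfo() returns "/param1/param2/param3".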
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: What is the intended use of the DEFAULT section in config files used by ConfigParser? I've used ConfigParser for quite a while for simple configs. One thing that's bugged me for a long time is the DEFAULT section. I'm not really sure what's an appropriate use. I've read the documentation, but I would really like to see some clever examples of its use and how it affects other sections in the file (something that really illustrates the kind of things that are possible).
A: I found an explanation here by googling for "windows ini" "default section". Summary: whatever you put in the [DEFAULT] section gets propagated to every other section. Using the example from the linked website, let's say I have a config file called test1.ini:
[host 1]
lh_server=192.168.0.1
vh_hosts = PloneSite1:8080
lh_root = PloneSite1
[host 2]
lh_server=192.168.0.1
vh_hosts = PloneSite2:8080
lh_root = PloneSite2
I can read this using ConfigParser:
>>> cp = ConfigParser.ConfigParser()
>>> cp.read('test1.ini')
['test1.ini']
>>> cp.get('host 1', 'lh_server')
'192.168.0.1'
But I notice that lh_server is the same in both sections; and, indeed, I realise that it will be the same for most hosts I might add. So I can do this, as test2.ini:
[DEFAULT]
lh_server=192.168.0.1
[host 1]
vh_root = PloneSite1
lh_root = PloneSite1
[host 2]
vh_root = PloneSite2
lh_root = PloneSite2
Despite the sections not having lh_server keys, I can still access them:
>>> cp.read('test2.ini')
['test2.ini']
>>> cp.get('host 1', 'lh_server')
'192.168.0.1'
Read the linked page for a further example of using variable substitution in the DEFAULT section to simplify the INI file even more.
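As a sketch of that idea (my own example, not taken from the linked page), values defined in [DEFAULT] can also be referenced via %(name)s interpolation, so test2.ini could be reduced further:
[DEFAULT]
lh_server=192.168.0.1
site=PloneSite

[host 1]
vh_root = %(site)s1
lh_root = %(site)s1

[host 2]
vh_root = %(site)s2
lh_root = %(site)s2
With this, cp.get('host 1', 'lh_root') returns 'PloneSite1', because ConfigParser.ConfigParser performs the interpolation on get() (RawConfigParser does not).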
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41"
}
|
Q: Web Services or Custom Protocol? I have no experience with web services. Historically I've built client-server systems using proprietary communication protocols (even they happen to be XML). I just spent a few hours looking over Axis2 and it sent a shudder down my spine. The learning curve of WS scares me, and seeing all that XML surround so little functionality makes me wonder if it's worth the trouble.
How do you decide whether you need to use Web Services or a custom communication protocol? What are the advantages/disadvantages of each approach and what use-cases are they best suited for?
Please post a clear guideline, not an opinion piece :)
A: Build RESTful web APIs; then you get a lot of automatic caching and etc benefits that you don't get if you use other methods (SOAP, XML-RPC, etc)
See this post for more details
Another benefit is that if you build a RESTful API for your code to use, you can potentially let your users take advantage of it too - they often have uses for your product that you never dreamed of.
A: "Web Services" as defined by the W3C means using SOAP over HTTP. SOAP is severe overkill in most cases; it's only really appropriate (IMO) when you're making a public service available to the world, like an API for interacting with your website, for example.
Anything else (especially internal, private communications) rarely need anything more complex than XML-RPC. Only if performance is an issue should you consider a more condensed protocol; XML-RPC is so simple and widely-supported that the ease of development and debugging more than makes up for the performance loss of using bloaty ol' XML.
A: Remember that there are a number of frameworks out there that make programming web services very trivial stuff. In the VB / C# world .Net makes it a joy. I'm not really sure about specific frameworks for other languages but I am sure most have at least one.
The standardisation and simplicity of implementation and reuse of web services make them very attractive. As previously pointed out - yes, they make communications very verbose. If you are worried about this, why not calculate how much data you actually will be transmitting; chances are, with current network and internet speeds, it will be trivial - even with the XML overhead.
A: I would always use custom data formats as a last resort, not a first. Which widely used method you choose is up to you, but it's unlikely you would go wrong with the Web Services model.
Maintainability and extensibility are the main benefits. By using widely used technology, your solution will be easier for someone else to understand, plus you can use ready-to-roll libraries as consumers and providers.
A: I have recently broken my custom protocol habit. I am now using Apache on the server side and libCurl plus libxml2 to load and parse the XML on the client which is written in C++.
The server side can be either PHP or a CGI written in a more serious language. Depends what you want to do.
A: Webservices have the advantage of being somewhat standard, so it's possible for programs you've never heard of to use a webservice you wrote. Using HTTP can help them communicate over proxies and other network obstacles without any extra work from you. The XML, although rather verbose and ugly, is rather easier to read when debugging than binary data.
When you're transferring stuff over the network, it's unlikely that serialisation/deserialisation to xml will be the limiting factor in performance. It can be a bit of hassle, although a library to do it for you will help a lot.
A: In my personal (old cranky dude) opinion, web services should only be used as a way to make some of your internal information available to third parties (i.e. other companies, people outside your organization etc.). Of course, that is also the originally intended purpose of XML. :-)
If you have access to a direct connection with the databases containing the information your application needs - that is the way to go. It's faster and simpler - which in application development means "better" and "less buggy".
A: SOAP and XML -- "all that XML surround so little functionality makes me wonder if it's worth the trouble."
Totally. SOAP is heavy-weight, and -- to a large extent -- a workaround to the need for static binding throughout the Java technology stack.
REST, on the other hand, is much lighter weight. Further, REST with JSON or REST with YAML is very lightweight, and very easy to implement. It builds right on top of the off-the shelf HTTP protocol.
REST requires you to define resources (named via URIs) and transactions based on the canonical CRUD verbs (GET, POST, PUT and DELETE). Very simple and canonical.
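With hypothetical URIs, that resource/verb mapping looks like:
GET    /articles/42    read article 42
POST   /articles       create a new article
PUT    /articles/42    replace article 42
DELETE /articles/42    delete article 42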
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Query to retrieve names of group nodes If I had some XML such as this loaded into an XDocument object:
<Root>
<GroupA>
<Item attrib1="aaa" attrib2="000" />
</GroupA>
<GroupB>
<Item attrib1="bbb" attrib2="111" />
<Item attrib1="ccc" attrib2="222" />
<Item attrib1="ddd" attrib2="333" />
</GroupB>
<GroupC>
<Item attrib1="eee" attrib2="444" />
<Item attrib1="fff" attrib2="555" />
</GroupC>
</Root>
What would a query look like to retrieve the names of the group nodes?
For example, I'd like a query to return:
GroupA
GroupB
GroupC
A: Something like this:
XDocument doc; // populate somehow
// this will give the names as XName
var names = from child in doc.Root.Elements()
select child.Name;
// if you want just the local (no-namespaces) name as a string, use this
var simpleNames = from child in doc.Root.Elements()
select child.Name.LocalName;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Breaking event cycles in GUIs When writing GUIs, I've frequently come over the following problem: Assume you have a model and a controller. The controller has a widget W that is used to show a property X of the model.
Because the model might be changed from outside the controller (there might be other controllers using the same model, undo operations etc), the controller listens to changes on the model. The controller also listens to events on the widget W and updates the property X accordingly.
Now, the following happens:
*
*the value in W is changed
*an event is generated, the handler in the controller is invoked
*the controller sets the new value for X in the model
*the model emits events because it has been changed
*the controller receives a change event from the model
*the controller gets the value of X and sets it in the widget
*goto 1.
There are several possible solutions for that:
*
*Modify the controller to set a flag when the model is updated, and not react to any events from the model if this flag is set.
*Disconnect the controller temporarily (or tell the model not to send any events for some time)
*Freeze any updates from the widget
In the past, I usually went for option 1., because it's the simplest thing. It has the drawback of cluttering your classes with flags, but the other methods have their drawbacks, too.
Just for the record, I've had this problem with several GUI toolkits, including GTK+, Qt and SWT, so I think it's pretty toolkit-agnostic.
Any best practices? Or is the architecture I use simply wrong?
@Shy: That's a solution for some cases, but you still get a round of superfluous events if X is changed from outside the controller (for instance, when using the command pattern for undo/redo), because then the value has changed, W is updated and fires an event. In order to prevent another (useless) update to the model, the event generated by the widget has to be swallowed.
In other cases, the model might be more complex and a simple check on what exactly has changed might not be feasible, e.g. a complex tree view.
A: The standard Qt way of dealing with this - and also the one suggested in their very useful tutorial - is to change the value in the controller only if the new value is different from the current value.
This is why signals have the semantics of valueChanged()
see this tutorial
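In other words, the model's setter swallows no-op writes, which breaks the cycle before any event is emitted. A minimal sketch (names assumed; shown in Java, but the pattern is toolkit-agnostic):
// Guarded setter: no change event is fired for a no-op write.
class Model {
    private String x = "";

    public void setX(String newValue) {
        if (newValue.equals(x)) {
            return; // value unchanged -> no event -> no cycle
        }
        x = newValue;
        fireValueChanged(newValue); // notify listeners only on real changes
    }

    private void fireValueChanged(String newValue) {
        // dispatch to registered listeners (omitted here)
    }
}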
A: Usually you should respond to input events in the widget and not to change events. This prevents this type of loop from occuring.
*
*User changes input in the widget
*Widget emits change event (scroll done / enter clicked / mouse leave, etc.)
*Controller responds, translates to change in the model
*Model emits event
*Controller responds, changes value in widget
*Value change event emitted, but not listened to by controller
A: Flags to indicate updating work. You can wrap them in methods like BeginUpdate and EndUpdate.
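A minimal sketch of that wrapping (all names assumed; Java here, but the pattern is toolkit-agnostic):
// The controller ignores model events that it caused itself.
class Controller {
    interface Model  { String getX(); void setX(String v); }
    interface Widget { void setValue(String v); }

    private final Model model;
    private final Widget widget;
    private boolean updating = false;

    Controller(Model model, Widget widget) {
        this.model = model;
        this.widget = widget;
    }

    private void beginUpdate() { updating = true; }
    private void endUpdate()   { updating = false; }

    // Handler for widget change events.
    void onWidgetChanged(String newValue) {
        beginUpdate();
        try {
            model.setX(newValue); // triggers onModelChanged, ignored below
        } finally {
            endUpdate();
        }
    }

    // Handler for model change events.
    void onModelChanged() {
        if (updating) return; // the change originated here; the widget already shows it
        widget.setValue(model.getX());
    }
}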
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: Max length of send() data param on XMLHttpRequest Post Is there a documented max to the length of the string data you can use in the send method of an XMLHttpRequest for the major browser implementations?
I am running into an issue with a JavaScript XMLHttpRequest POST failing in Firefox 3 when the data is over approximately 3k. I was assuming the POST would behave the same as a conventional form post.
The W3C docs mention the data param of the send method is a DOMString but I am not sure how the major browsers implement that.
Here is a simplified version of my JavaScript, if bigText is over about 3k it fails, otherwise it works...
var xhReq = createXMLHttpRequest();
function createXMLHttpRequest() {
try { return new ActiveXObject("Msxml2.XMLHTTP"); } catch (e) {}
try { return new ActiveXObject("Microsoft.XMLHTTP"); } catch (e) {}
try { return new XMLHttpRequest(); } catch(e) {}
alert("XMLHttpRequest not supported");
return null;
}
function mySubmit(id, bigText) {
var url = "SubmitPost.cfm";
var params = "id=" + id + "&bigtext=" + encodeURI(bigText);
xhReq.open("POST", url, true);
//Send the header information along with the request
xhReq.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
xhReq.setRequestHeader("Content-length", params.length);
xhReq.setRequestHeader("Connection", "close");
xhReq.onreadystatechange = onPostSubmit;
xhReq.send(params);
}
function onPostSubmit() {
if (xhReq.readyState==4 || xhReq.readyState=="complete")
{
if (xhReq.status != 200)
{
alert('BadStatus');
return;
}
}
}
A: I believe the maximum length depends not only on the browser, but also on the web server. For example, the Apache HTTP server has a LimitRequestBody directive which allows anywhere from 0 bytes to 2GB worth of data.
A: According to the XMLRPC spec, the only real limits are on the size of integers and doubles.
A: You don't specify how it fails; however, your encoding is incorrect. You should use encodeURIComponent, not encodeURI.
By default, the maximum size of the request entity body on the client is likely limited only by available memory. The server, as has already been pointed out, may reject entity bodies over a certain size. IIS 6 and 7, for example, have a 200KB default limit.
A: For Nginx, this is configured with the client_max_body_size directive, which can be set to any value, e.g. 20m for 20MiB, or to 0 to disable the check.
vim /etc/nginx/nginx.conf
Save and close the file, then test the configuration with -t and send the reload signal with -s:
/usr/sbin/nginx -t
/usr/sbin/nginx -s reload
Syntax: client_max_body_size size;
Default: client_max_body_size 1m;
Context: http, server, location
Sets the maximum allowed size of the client request body, specified in the “Content-Length” request header field. If the size in a request exceeds the configured value, the 413 (Request Entity Too Large) error is returned to the client. Please be aware that browsers cannot correctly display this error. Setting size to 0 disables checking of client request body size.
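For example, in the http (or server/location) block of nginx.conf:
http {
    # allow request bodies up to 20 MiB
    client_max_body_size 20m;
}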
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Sniffing network traffic for signs of viruses/spyware How can I connect a system to a network and sniff for virus/spyware-related traffic? I'd like to plug in a network cable, fire up an appropriate tool and have it scan the data for any signs of problems. I don't expect this to find everything, and this is not to prevent initial infection but to help determine if there is anything trying to actively infect other systems or causing network problems.
Running a regular network sniffer and manually looking through the results is no good unless the traffic is really obvious, but I haven't been able to find any tool to scan a network data stream automatically.
A: You can make Snort scan traffic for viruses. I think this will be the best solution for you.
A: For watching local network traffic your best bet (with a decent switch) is to set your switch to mirror all packets out a specific interface (as well as to whatever interface it would normally send them). This lets you monitor the entire network by dumping traffic down a specific port.
On a 100 megabit network, however, you'll want a gigabit port on your switch to plug it into, or to filter on protocol (e.g. trim out HTTP, FTP, printing, traffic from the fileserver, etc.), or your switch's buffers are going to fill up pretty much instantly and it'll start dropping whatever packets it needs to (and your network performance will die).
A: I highly recommend running Snort on a machine somewhere near the core of your network, and span (mirror) one (or more) ports from somewhere along your core network path to the machine in question.
Snort has the ability to scan network traffic it sees, and automatically notify you via various methods if it sees something suspicious. This could even be taken further, if desired, to automatically disconnect devices, et cetera, if it finds something.
A: *
*Use snort: An open source network intrusion prevention and detection system.
*Wireshark, formerly ethereal is a great tool, but will not notify you or scan for viruses. Wireshark is a free packet sniffer and protocol analyzer.
*Use the netstat -b command to see which processes have which ports open.
*Use CPorts to see a list of ports and the associated programs, and have the ability to close those ports.
*Download a free anti-virus program such as free AVG.
*Setup your firewall more tightly.
*Setup a gateway computer to let all network traffic go through. Take the above recommendations to the gateway computer instead. You will be checking your whole network instead of just your one computer.
A: The problem with that approach is that most networks today are on switches, not hubs. So, if you plug a machine with a packet sniffer into the switch, it will only be able to see traffic to and from the sniffing machine; and network broadcasts.
A: As a followup to Ferruccio's comment you will need to find some method of getting around your switches.
A number of network switches have the option of setting up port mirrors, so that all traffic (regardless of the destination) will be copied, or "mirrored", to a nominated port. If you could configure your switch to do this then you would be able to attach your network sniffer here.
A: Network Magic, if you don't mind something that's not open source.
A: You can use an IDS, hardware or software
http://en.wikipedia.org/wiki/Intrusion-detection_system
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: Are Mutexes needed in javascript? I have seen this link: Implementing Mutual Exclusion in JavaScript.
On the other hand, I have read that there are no threads in javascript, but what exactly does that mean?
When events occur, where in the code can they interrupt?
And if there are no threads in JS, do I need to use mutexes in JS or not?
Specifically, I am wondering about the effects of using functions called by setTimeout() and XmlHttpRequest's onreadystatechange on globally accessible variables.
A: As @william points out,
you may need a mutex if your code does something where it expects a
value not to change between when the asynchronous event was fired and
when the callback was called.
This can be generalised further - if your code does something where it expects exclusive control of a resource until an asynchronous request resolves, you may need a mutex.
A simple example is where you have a button that fires an ajax call to create a record in the back end. You might need a bit of code to protect you from trigger happy users clicking away and thereby creating multiple records. there are a number of approaches to this problem (e.g. disable the button, enable on ajax success). You could also use a simple lock:
var save_lock = false;
$('#save_button').click(function(){
    if(!save_lock){
        //lock
        save_lock = true;
        $.ajax({
            success: function(){
                //unlock
                save_lock = false;
            }
        });
    }
});
I'm not sure if that's the best approach and I would be interested to see how others handle mutual exclusion in JavaScript, but as far as I'm aware that's a simple mutex and it is handy.
A: JavaScript is single threaded... though Chrome may be a new beast (I think it is also single threaded, but each tab has its own JavaScript thread... I haven't looked into it in detail, so don't quote me there).
However, one thing you DO need to worry about is how your JavaScript will handle multiple ajax requests coming back in a different order than you sent them. So, all you really need to worry about is making sure your ajax calls are handled in a way that they won't step on each other's feet if the results come back in a different order than you sent them.
This goes for timeouts too...
When JavaScript grows multithreading, then maybe worry about mutexes and the like....
A: The answers to this question are a bit outdated though correct at the time they were given. And still correct if looking at a client-side javascript application that does NOT use webworkers.
Articles on web-workers:
multithreading in javascript using webworkers
Mozilla on webworkers
This clearly shows that javascript via web-workers has multithreading capabilities. As concerning to the question are mutexes needed in javascript? I am unsure of this. But this stackoverflow post seems relevant:
Mutual Exclusion for N Asynchronous Threads
A: JavaScript, the language, can be as multithreaded as you want, but browser embeddings of the javascript engine only runs one callback (onload, onfocus, <script>, etc...) at a time (per tab, presumably). William's suggestion of using a Mutex for changes between registering and receiving a callback should not be taken too literally because of this, as you wouldn't want to block in the intervening callback since the callback that will unlock it will be blocked behind the current callback! (Wow, English sucks for talking about threading.) In this case, you probably want to do something along the lines of redispatching the current event if a flag is set, either literally or with the likes of setTimeout().
If you are using a different embedding of JS, and that executes multiple threads at once, it can get a bit more dicey, but due to the way JS can use callbacks so easily and locks objects on property access explicit locking is not nearly as necessary. However, I would be surprised if an embedding designed for general code (eg, game scripting) that used multi threading didn't also give some explicit locking primitives as well.
Sorry for the wall of text!
A: Javascript is defined as a reentrant language, which means there is no threading exposed to the user, though there may be threads in the implementation. Functions like setTimeout() and asynchronous callbacks need to wait for the script engine to go idle before they're able to run.
That means that everything that happens in an event must be finished before the next event will be processed.
That being said, you may need a mutex if your code does something where it expects a value not to change between when the asynchronous event was fired and when the callback was called.
For example if you have a data structure where you click one button and it sends an XmlHttpRequest which calls a callback the changes the data structure in a destructive way, and you have another button that changes the same data structure directly, between when the event was fired and when the call back was executed the user could have clicked and updated the data structure before the callback which could then lose the value.
While you could create a race condition like that it's very easy to prevent that in your code since each function will be atomic. It would be a lot of work and take some odd coding patterns to create the race condition in fact.
A: Yes, mutexes can be required in Javascript when accessing resources that are shared between tabs/windows, like localStorage.
For example, if a user has two tabs open, simple code like the following is unsafe:
function appendToList(item) {
var list = localStorage["myKey"];
if (list) {
list += "," + item;
}
else {
list = item;
}
localStorage["myKey"] = list;
}
Between the time that the localStorage item is 'got' and 'set', another tab could have modified the value. It's generally unlikely, but possible - you'd need to judge for yourself the likelihood and risk associated with any contention in your particular circumstances.
See the following articles for a more detail:
*
*Wait, Don't Touch That: Mutual Exclusion Locks & JavaScript - Medium Engineering
*JavaScript concurrency and locking the HTML5 localStorage - Benjamin Dumke-von der Eh, Stackoverflow
A: Events are signaled, but JavaScript execution is still single-threaded.
My understanding is that when an event is signaled, the engine stops what it is executing at the moment to run the event handler. After the handler is finished, script execution is resumed. If the event handler changed some shared variables, then the resumed code will see these changes appearing "out of the blue".
If you want to "protect" shared data, simple boolean flag should be sufficient.
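A minimal sketch of such a flag under this answer's model, combined with the setTimeout() redispatch idea mentioned earlier (processing, handleUpdate and applyChanges are illustrative names, not from the original):
var processing = false;

function handleUpdate(data) {
    if (processing) {
        // Shared data is mid-update; redispatch and try again later.
        setTimeout(function () { handleUpdate(data); }, 0);
        return;
    }
    processing = true;
    try {
        applyChanges(data); // assumed application-specific function
    } finally {
        processing = false;
    }
}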
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "117"
}
|
Q: What do you use to deploy your Web Applications? We're looking to automate our deployment of Web Applications, particularly when going from local development to a remote server.
Our current stack is LAMP remotely and MAMP locally, but I'm interested in general in what people are using for this task, regardless of their environment.
I'm not just talking about moving files around; I also mean other tasks such as:
*
*Setting up Database schema
*Managing configurations
*Misc tasks required for deployment (creating log files etc.)
A: When and where possible, I prefer an automated deployment, such as with Ant; even FTP deployment can be handled fairly easily. Automating the deployment, much like an automated build, takes the guesswork and error out of the process, and by definition provides at least the bare minimum documentation necessary (i.e. the build script) for a new programmer to understand the process.
A: One of the things used in a previous company was - believe it or not - RPM files. When we built our software, all the various parts of it would be packaged into RPM files, which were then deployed to the server.
*
*Master servers in a cluster had a list of all servers and their roles, which would be used to determine what packages each server needed.
*The deploy phase would check versions on each server and determine which servers needed upgrades. Each server would get a copy of any new packages it needed.
*Each server would have its packages installed by the deploy script, which would manage pre-installation and post-installation checks and tasks.
*The deploy script would trigger a separate process, the configuration management system, to read the configuration templates to generate configuration files for any services a server needed (based on its list of roles), and farm those out to the servers.
*The deploy system would generate a list of actions that needed to be taken (services to be restarted) for each system, and present those to the operator managing the update. The operator would then either perform the restarts (if the update was occurring during the client's scheduled maintenance window, or we had a work-order for mid-day service restarts), or create a ticket for the night staff with a list of tasks to be done.
RPM is a horrific hack, but as our clients were all running Red Hat Linux (by our requirement), it made perfect sense. If I had a choice, I'd go with a system like Debian or Ubuntu, and set up a repository that the systems could all pull from. Still, it worked well for hundreds of clients, with thousands of servers total. Pretty neat.
A: We use "svn export" when it needs to go live. Keeps our code under revision control, and lets us actively develop it on test boxes or our local computer.
A: I haven't tried it yet but I'm looking at using Fabric in future:
Fabric is a simple pythonic remote deployment tool.
It is designed to upload files to, and run shell commands on, a number of servers in parallel or serially. These commands are grouped in tasks (regular python functions) and specified in a 'fabfile'.
It is a bit like a dumbed down Capistrano, except it's in Python, doesn't expect you to be deploying Rails applications, and the 'put' command works.
Unlike Capistrano, Fabric wants to stay small, light, easy to change and not bound to any specific framework.
A: Capistrano works very well for this kind of thing. It came out of the Ruby on Rails ecosystem, and was initially very strongly tied to deploying Rails apps. Since a lot of people had noticed that it was handy for remote server control, it's become a bit more general-purpose.
With no extra setup, Capistrano:
*
*Uses SSH to connect to the application servers
*Checks out the latest source code from Subversion to a new, dated folder
*Activates the new release by updating a symbolic link or two
*Reloads the application server
And all this with rollback functionality.
Another good option would be to use your operating system's packaging system (RPM, deb/apt, etc). This tends to require a good level of familiarity with your operating system and its policies, but fits in great with other tools if you know what you're doing.
A: rsync -> great tool.
But the answer depends on your environment. What do you use for source control? What do you use for a build system? Etc.
Deployment for a web server is nothing more than a "cp" command for whichever files changed. You need to build a process that tracks the files that change, pulls those files from source control and then pushes those changes. When you are dealing with PHP files, how do you know which files to push? That's the problem. Solve that and you'll be fine. The tool to cp the files and "deploy" them is the easy part.
A: I'm a .NET guy, so for us it is CruiseControl + NAnt.
I've heard great things about JetBrains TeamCity though.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: GetPrivateProfileString Oddity I was just tinkering around with calling GetPrivateProfileString and GetPrivateProfileSection in kernel32 from .NET and came across something odd I don't understand.
Let's start with this incantation:
Private Declare Unicode Function GetPrivateProfileString Lib "kernel32" Alias "GetPrivateProfileStringW" ( _
ByVal lpApplicationName As String, _
ByVal lpKeyName As String, _
ByVal lpDefault As String, _
ByVal lpReturnedString() As Char, _
ByVal nSize As Int32, _
ByVal lpFileName As String) As Int32
If I pass an lpApplicationName (section), no lpKeyName and no lpDefault, I should get all of the keys for that section, and indeed I do: 50% of the time.
If the ini file has the lpApplicationName starting on the first line, the buffer returns nothing. If lpApplicationName starts on the second line in the file, it returns the expected values.
At first I thought it was a matter of using the W version and Unicode in the Declare, but changing those seems to have no effect.
What am I missing?
A: Check to see if the file you are opening has a byte order mark (a few bytes marking the type of text encoding).
These Windows API calls don't seem to grok byte order marks, which causes them to miss the first section (hence everything works fine if there is a blank line).
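If you want to check a file programmatically, here is a quick sketch in Node.js, outside the VB.NET context of the question ("settings.ini" is a placeholder path):
var fs = require("fs");
var buf = fs.readFileSync("settings.ini");
// A UTF-8 BOM is the byte sequence EF BB BF at the start of the file.
var hasBom = buf.length >= 3 && buf[0] === 0xEF && buf[1] === 0xBB && buf[2] === 0xBF;
console.log(hasBom ? "UTF-8 BOM present" : "no UTF-8 BOM");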
A: Good call. Editing the ini file in VS.NET is of course (Duh) adding a UTF-8 BOM. Grrr.
Opening it in Notepad and doing a Save As with ASCII encoding yields the expected results.
So obvious. So obtuse. Another hour down the crapper. :-)
Thanks!
-=Chris
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Source Code Control for Consultants? Can anyone recommend a good source code control (SCC) system that works for the mostly disconnected consultant? I'd prefer something that allows one to save into a local repository and then, once connected, 'sync' to the server; however, I've never seen such a feature. Suggestions? [A Windows solution that integrates with standard IDEs via the SCCI API is preferred.]
A: You may want to look at git.
It lets you commit things locally, and then resync back to another copy. It's very decentralized, and it appears to work on Windows.
A: Git may be a good alternative in this case.
From wikipedia: "Git gives each developer a local copy of the entire development history, and changes are copied from one such repository to another. These changes are imported as additional development branches, and can be merged in the same way as a locally developed branch. "
http://git.or.cz/
A: Assuming you're working with Windows, I'd suggest syncing a local folder using TortoiseSVN on the client side (http://tortoisesvn.tigris.org/) to a VisualSVN Server-based repository on the server side (http://www.visualsvn.com/).
All available free.
A: If you want a full local repository, it sounds like you might want to look at Mercurial. I haven't used it other than a quick look-see, but it looks very interesting and powerful, and to the best of my knowledge provides a distributed source control process that replicates the repository, allowing a disconnected user to still access things that they didn't "check out" while connected.
A: For Windows you'll be better off with Mercurial. Especially if you're familiar with Subversion.
A: Any distributed version control service is going to do that. Check out Mercurial or Git.
A: Git may be the closest thing to what you want.
A: Bazaar supports this workflow. It is also doable in git, but it's more intuitive in bzr, imho.
A: Sorry, this doesn't solve your wish to have it merge to multiple repositories, but SVN + TortoiseSVN on Windows is a good combination. There are also Visual Studio plug-ins for SVN if you use VS.
If you need a third-party SVN provider, I use SVNRepository.com.
A: You need a distributed source code control.
Wikipedia has a good comparison table of source code controls. I'd say pick the distributed source code control that best suits your needs.
A: You can do this with most systems (CVS, SVN, Git, and so on).
As for an application that will automatically sync when connected, I would not recommend this. It is better to commit intentionally. Depending on how you use a repository, auto check-in could break an active copy in your repository, or pass in code that hasn't been tested yet.
A decent client for Subversion on Windows is TortoiseSVN. http://tortoisesvn.net/
A: If you like GUIs, then check out Plastic. It works both disconnected (well, actually distributed, using your own server on your laptop) and centralized.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/124809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|