Q: Source code control policy I'm looking for an overview over different source code control policies. I only came across the Main-Line policy and would like to better know others before committing to one with the team.
Can someone provide a link to an overview or even give me some names of policies so I can launch google on it?
A: No empty commit messages.
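For example, a minimal SVN pre-commit hook enforcing this rule could look like the sketch below (Python; Subversion passes the repository path and transaction name to the hook, and svnlook prints the pending log message). The exact error text and setup are of course up to you.
#!/usr/bin/env python3
# Minimal sketch of an SVN pre-commit hook that rejects empty log messages.
import subprocess
import sys

def main():
    repos, txn = sys.argv[1], sys.argv[2]
    # "svnlook log -t TXN REPOS" prints the log message of the pending transaction
    msg = subprocess.check_output(["svnlook", "log", "-t", txn, repos], text=True).strip()
    if not msg:
        sys.stderr.write("Empty commit messages are not allowed.\n")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())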
A: The paper "streamed lines: branching patterns for parallel software development" is an excellent discussion on branching patterns such as the "main line" pattern you mention - it lists the options in the form of patterns together with discussion of anti-patterns. One of the authors is Robert Orenstein of Perforce.
A: We use several practical rules as commit policy in our project. These rules help us to keep every revision in ready-to-deployment state. Our rules are similar to KDE commit policy, posted here: http://techbase.kde.org/Policies/SVN_Commit_Policy.
Every commit should be (from higher to lower priority):
* Successfully checked (compiled, tested, reviewed, FxCop'ed, etc.)
* Atomic (should contain only one logical change, e.g. a single bugfix, a refactoring, etc.)
* Non-redundant (no unused code should be added; do not commit commented-out code, delete it; do not commit accidental formatting changes, etc.)
* Correctly and fully commented
* Matching the current development phase (for example, no refactoring should be allowed in version support branches)
* As small as possible while still satisfying the previous rules.
We developed a simple tool, SvnCommitChecker, which helps us to check some of these rules before committing to svn. I plan to put it on SourceForge in the near future, together with an article about the benefits of keeping a good svn change history.
A: I have had great use of the book Practical Perforce. Though you might not be working with Perforce I think that chapter 7 (How Software Evolves) and chapter 8 (Basic Codeline Management) might be very useful. You might be able to skim them on Google Books.
Perforce also has many great articles on the subject. Software Life-Cycle Modeling writes about policies.
Perforce complete technical documentation.
And no, I'm not working for nor with Perforce.
Good luck,
Thomas
A: These two are basically the same:
Version Control for Multiple Agile Teams
Configuration Management Branching Strategy
We are using this strategy to keep the trunk stable and let developers do whatever they need on their branches.
There is a problem with Subversion since it can't handle cyclic merges, but it can be worked around by deleting the development branch after each reintegration back to the trunk (irrelevant for other version control systems).
A: My favorite policy is "No subversion commits that do not reference tickets + Auto Trac comments for each commit": http://trac.edgewall.org/browser/trunk/contrib/trac-post-commit-hook
A: Commit per-change instead of per-file.
This has following advantages:
* You can later see why this single line was changed in this exact file (aha, this was the bugfix for bug #123). If you commit per-file then commit messages tend to describe the changes made to the file - which you can see with diff anyway. If you commit per-change then commit messages tend to explain why the change was made in the first place.
* It is much easier to revert or merge changes/bugfixes.
* It helps you organize your work better as you clearly focus on the single bug/feature/change you are working on. You commit when you are done.
Some people think this policy produces more commits, but in my experience you end up with fewer commits after all. For example, you are doing a refactoring which affects 50 files. After the refactoring you have a single commit with the message "Refactored xyz subsystem."
For bigger changes you should consider dev-branch-per-change policy.
A: Don't check-in/commit any changes that break a build.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: How well does Python's whitespace dependency interact with source control with regards to merging? I'm wondering if the need to alter the indentation of code to adjust the nesting has any adverse effects on merging changes in a system like SVN.
A: I've used python with SVN and Mercurial, and have no hassles merging.
It all depends on how the diffing is done - and I suspect that it is character-by-character, which would notice the difference between one level of indent and another.
A: Generally source control systems merge on a line-by-line basis by default. I have found that merging Python code is no different from merging any other source code that is reasonably indented. If one programmer wraps a block of code in an if statement (indenting the whole block), and another programmer modifies something inside the block, then there will be a merge conflict. Fortunately, the conflict in this case is super easy to resolve.
If you use an external merge tool, then your tool may support more detailed textual merging algorithms that take the above scenario into account automatically.
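To make that scenario concrete, here is a constructed illustration (the function and helper names are hypothetical). Developer A wraps the body of a small function in an if statement, re-indenting every line, while developer B changes one of those lines; a line-based merge then leaves a conflict like this:
def load(path):
<<<<<<< developer-a
    # A wrapped the whole body in a guard, re-indenting both lines
    if path is not None:
        data = read(path)
        return parse(data)
=======
    # B only touched the second line
    data = read(path)
    return parse(data, strict=True)
>>>>>>> developer-b
The resolution is straightforward: keep A's guard and indentation, and take B's modified call.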
A: It works fine so long as everyone on the project has agreed to use the same whitespace style (spaces or tabs).
But I've seen cases where a developer has converted an entire file from spaces to tabs (I think Eclipse had that as a feature, bound to Ctrl+Tab!), which makes spotting diffs near impossible.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: time length of an mp3 file What is the simplest way to determine the length (in seconds) of a given mp3 file, without using outside libraries? (python source highly appreciated)
A:
Simple, parse MP3 binary blob to calculate something, in Python
That sounds like a pretty tall order. I don't know Python, but here's some code I've refactored from another program I once tried to write.
Note: It's in C++ (sorry, it's what I've got). Also, as-is, it'll only handle constant bit rate MPEG 1 Audio Layer 3 files. That should cover most, but I can't make any guarantee as to this working in all situations. Hopefully this does what you want, and hopefully refactoring it into Python is easier than doing it from scratch.
// determines the duration, in seconds, of an MP3;
// assumes MPEG 1 (not 2 or 2.5) Audio Layer 3 (not 1 or 2)
// constant bit rate (not variable)
#include <iostream>
#include <fstream>
#include <cstdlib>
using namespace std;
//Bitrates, assuming MPEG 1 Audio Layer 3
const int bitrates[16] = {
0, 32000, 40000, 48000, 56000, 64000, 80000, 96000,
112000, 128000, 160000, 192000, 224000, 256000, 320000, 0
};
//Intel processors are little-endian;
//search Google or see: http://en.wikipedia.org/wiki/Endian
int reverse(int i)
{
int toReturn = 0;
toReturn |= ((i & 0x000000FF) << 24);
toReturn |= ((i & 0x0000FF00) << 8);
toReturn |= ((i & 0x00FF0000) >> 8);
toReturn |= ((i & 0xFF000000) >> 24);
return toReturn;
}
//In short, data in ID3v2 tags are stored as
//"syncsafe integers". This is so the tag info
//isn't mistaken for audio data, and attempted to
//be "played". For more info, have fun Googling it.
int syncsafe(int i)
{
int toReturn = 0;
toReturn |= ((i & 0x7F000000) >> 24);
toReturn |= ((i & 0x007F0000) >> 9);
toReturn |= ((i & 0x00007F00) << 6);
toReturn |= ((i & 0x0000007F) << 21);
return toReturn;
}
//How much room does ID3 version 1 tag info
//take up at the end of this file (if any)?
int id3v1size(ifstream& infile)
{
streampos savePos = infile.tellg();
//get to 128 bytes from file end
infile.seekg(0, ios::end);
streampos length = infile.tellg() - (streampos)128;
infile.seekg(length);
int size;
char buffer[3] = {0};
infile.read(buffer, 3);
if( buffer[0] == 'T' && buffer[1] == 'A' && buffer[2] == 'G' )
size = 128; //found tag data
else
size = 0; //nothing there
infile.seekg(savePos);
return size;
}
//how much room does ID3 version 2 tag info
//take up at the beginning of this file (if any)
int id3v2size(ifstream& infile)
{
streampos savePos = infile.tellg();
infile.seekg(0, ios::beg);
char buffer[6] = {0};
infile.read(buffer, 6);
if( buffer[0] != 'I' || buffer[1] != 'D' || buffer[2] != '3' )
{
//no tag data
infile.seekg(savePos);
return 0;
}
int size = 0;
infile.read(reinterpret_cast<char*>(&size), sizeof(size));
size = syncsafe(size);
infile.seekg(savePos);
//"size" doesn't include the 10 byte ID3v2 header
return size + 10;
}
int main(int argCount, char* argValues[])
{
//you'll have to change this
ifstream infile("C:/Music/Bush - Comedown.mp3", ios::binary);
if(!infile.is_open())
{
infile.close();
cout << "Error opening file" << endl;
system("PAUSE");
return 0;
}
//determine beginning and end of primary frame data (not ID3 tags)
infile.seekg(0, ios::end);
streampos dataEnd = infile.tellg();
infile.seekg(0, ios::beg);
streampos dataBegin = 0;
dataEnd -= id3v1size(infile);
dataBegin += id3v2size(infile);
infile.seekg(dataBegin,ios::beg);
//determine bitrate based on header for first frame of audio data
int headerBytes = 0;
infile.read(reinterpret_cast<char*>(&headerBytes),sizeof(headerBytes));
headerBytes = reverse(headerBytes);
int bitrate = bitrates[(int)((headerBytes >> 12) & 0xF)];
//calculate duration, in seconds
int duration = (dataEnd - dataBegin)/(bitrate/8);
infile.close();
//print duration in minutes : seconds
cout << duration/60 << ":" << duration%60 << endl;
system("PAUSE");
return 0;
}
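For anyone who does want this in Python, here is a rough, untested Python 3 sketch of the same approach. Like the C++ code it assumes a constant-bit-rate MPEG-1 Audio Layer 3 file, and additionally that the first audio frame starts right after any ID3v2 tag; the file name is just a placeholder.
import struct

BITRATES = [0, 32000, 40000, 48000, 56000, 64000, 80000, 96000,
            112000, 128000, 160000, 192000, 224000, 256000, 320000, 0]

def mp3_duration_seconds(path):
    with open(path, 'rb') as f:
        data = f.read()
    start, end = 0, len(data)
    # ID3v2 tag at the beginning: 10-byte header plus a syncsafe size
    if data[:3] == b'ID3':
        size = ((data[6] & 0x7F) << 21) | ((data[7] & 0x7F) << 14) | \
               ((data[8] & 0x7F) << 7) | (data[9] & 0x7F)
        start = size + 10
    # ID3v1 tag: the last 128 bytes start with "TAG"
    if data[-128:-125] == b'TAG':
        end -= 128
    # The bitrate index lives in bits 12..15 of the first frame header
    header = struct.unpack('>I', data[start:start + 4])[0]
    bitrate = BITRATES[(header >> 12) & 0xF]
    return (end - start) // (bitrate // 8)

print(mp3_duration_seconds('foo.mp3'))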
A: simply use mutagen
$pip install mutagen
use it in python shell:
from mutagen.mp3 import MP3
audio = MP3(file_path)
print audio.info.length
A: Also take a look at audioread (some linux distros including ubuntu have packages), https://github.com/sampsyo/audioread
audio = audioread.audio_open('/path/to/mp3')
print audio.channels, audio.samplerate, audio.duration
A: You can use pymad. It's an external library, but don't fall for the Not Invented Here trap. Any particular reason you don't want any external libraries?
import mad
mf = mad.MadFile("foo.mp3")
track_length_in_milliseconds = mf.total_time()
Spotted here.
--
If you really don't want to use an external library, have a look here and check out how he's done it. Warning: it's complicated.
A: For google followers' sake, here are a few more external libs:
mpg321 -t
ffmpeg -i
midentify (mplayer basically) see Using mplayer to determine length of audio/video file
mencoder (pass it invalid params, it will spit out an error message but also give you info on the file in question, ex $ mencoder inputfile.mp3 -o fake)
mediainfo program http://mediainfo.sourceforge.net/en
exiftool
the linux "file" command
mp3info
sox
refs:
https://superuser.com/questions/36871/linux-command-line-utility-to-determine-mp3-bitrate
http://www.ruby-forum.com/topic/139468
mp3 length in milliseconds
(making this a wiki for others to add to).
and libs: .net: naudio, java: jlayer, c: libmad
Cheers!
A: You might count the number of frames in the file. Each frame has a start code, although I can't recollect the exact value of the start code and I don't have MPEG specs laying around. Each frame has a certain length, around 40ms for MPEG1 layer II.
This method works for CBR-files (Constant Bit Rate), how VBR-files work is a completely different story.
From the document below:
For Layer I files use this formula:
FrameLengthInBytes = (12 * BitRate / SampleRate + Padding) * 4
For Layer II & III files use this formula:
FrameLengthInBytes = 144 * BitRate / SampleRate + Padding
Information about MPEG Audio Frame Header
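To illustrate the frame-counting idea with the Layer III formula above, here is an untested Python sketch. It assumes an MPEG-1 Layer III stream, does not skip ID3 tags (so a false sync inside tag data is possible), and simply walks from header to header:
BITRATES_KBPS = [0, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320]
SAMPLE_RATES = [44100, 48000, 32000]  # MPEG-1

def duration_by_counting_frames(data):
    frames, offset, samplerate = 0, 0, 44100
    while offset + 4 <= len(data):
        # Frame sync is eleven set bits at the start of the header
        if data[offset] != 0xFF or (data[offset + 1] & 0xE0) != 0xE0:
            offset += 1
            continue
        bitrate_idx = (data[offset + 2] >> 4) & 0xF
        sr_idx = (data[offset + 2] >> 2) & 0x3
        padding = (data[offset + 2] >> 1) & 0x1
        if bitrate_idx in (0, 15) or sr_idx == 3:
            offset += 1
            continue
        samplerate = SAMPLE_RATES[sr_idx]
        # FrameLengthInBytes = 144 * BitRate / SampleRate + Padding
        offset += 144 * BITRATES_KBPS[bitrate_idx] * 1000 // samplerate + padding
        frames += 1
    return frames * 1152 / samplerate  # 1152 samples per MPEG-1 Layer III frame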
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
}
|
Q: How would you unittest a memory allocator? There's a lot of people today who sell unittesting as bread-and-butter of development. That might even work for strongly algorithmically-oriented routines. However, how would you unit-test, for example, a memory allocator (think malloc()/realloc()/free()). It's not hard to produce a working (but absolutely useless) memory allocator that satisfies the specified interface. But how to provide the proper context for unit-testing functionality that is absolutely desired, yet not part of the contract: coalescing free blocks, reusing free blocks on next allocations, returning excess free memory to the system, asserting that the allocation policy (e.g. first-fit) really is respected, etc.
My experience is that assertions, even if complex and time-consuming (e.g. traversing the whole free list to check invariants) are much less work and are more reliable than unit-testing, esp. when coding complex, time-dependent algorithms.
Any thoughts?
A: Highly testable code tends to be structured differently than other code.
You describe several tasks that you want an allocator to do:
* coalescing free blocks
* reusing free blocks on next allocations
* returning excess free memory to the system
* asserting that the allocation policy (e.g. first-fit) really is respected
While you might write your allocation code to be very coupled, as in doing several of those things inside one function body, you could also break each task out into code that is a testable chunk. This is almost an inversion of what you may be used to. I find that testable code tends to be very transparent and built from more small pieces.
Next, I would say that, within reason, automated testing of any sort is better than no automated testing. I would definitely focus more on making sure your tests do something useful than on worrying whether you've properly used mocks, whether you've ensured it's properly isolated, and whether it's a true unit test. Those are all admirable goals that will hopefully make 99% of tests better. On the other hand, please use common sense and your best engineering judgment to get the job done.
Without code samples I don't think I can be more specific.
A: If there is any logic in there, it can be unit-tested.
If your logic involves making decisions and calling OS/hardware/system APIs, fake/mock out the device dependent calls and unit test your logic to verify if the right decisions are made under a given set of pre-conditions. Follow the Arrange-Act-Assert triad in your unit test.
Assertions are no replacement for automated unit tests. They don't tell you which scenario failed, they don't provide feedback during development, and they can't be used to prove that all specs are met by the code, among other things.
Non-vague Update:
I don't know the exact method calls.. I think I'll 'roll my own'
Let's say your code examines current conditions, makes a decision and makes calls to the OS as required. Let's say your OS calls are (you may have many more):
void* AllocateMemory(int size);
bool FreeMemory(void* handle);
int MemoryAvailable();
First turn this into an interface, I_OS_MemoryFacade. Create an implementation of this interface to make the actual calls to the OS. Now make your code use this interface - you have now decoupled your code/logic from the device/OS. Next, in your unit test, you use a mock framework (its purpose is to give you a mock implementation of a specified interface). You can then tell the mock framework to expect certain calls to be made with certain params and what to return when they are made. At the end of the test, you can ask the mock framework to verify that all the expectations were met. (e.g. In this test, AllocateMemory should be called three times with 10, 30, 50 as params, followed by 3 FreeMemory calls. Check that MemoryAvailable returns the initial value.)
Since your code depends on an interface, it doesn't know the difference between the real implementation and a fake/mock implementation that you use for testing.
Google out 'mock frameworks' for more information on this.
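To make the Arrange-Act-Assert flow above concrete, here is a small illustrative sketch using Python's unittest.mock. The original context is C/C++, where a mock framework such as Google Mock would play the same role; the facade and policy function here are made up for the example.
from unittest import mock

def shrink_pool(os_facade, handles, keep_free_bytes):
    # Toy policy under test: release blocks while too much memory sits idle.
    while os_facade.MemoryAvailable() > keep_free_bytes and handles:
        os_facade.FreeMemory(handles.pop())

def test_shrink_pool_frees_until_threshold():
    # Arrange: a mock facade that reports progressively less idle memory
    os_facade = mock.Mock()
    os_facade.MemoryAvailable.side_effect = [300, 200, 100]
    handles = ["h1", "h2", "h3", "h4"]
    # Act
    shrink_pool(os_facade, handles, keep_free_bytes=100)
    # Assert: the policy decided to free exactly two blocks, newest first
    assert os_facade.FreeMemory.call_args_list == [mock.call("h4"), mock.call("h3")]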
A: Both things have their place. Use unit tests to check the interfaces behave as expected and assertions to check that the contract is respected.
A: You may also want to include performance testing, stress testing, etc. They wouldn't be unit tests, because they would test the entire thing, but they're very valuable in the case of a memory allocator.
Unit testing does not exclude these kinds of tests. It's best to have both of them.
A: I also think unit tests are overrated. They have their usefulness, but what really increases the quality of a program is to review it. On the other hand I'm really fond of assertions, but they don't replace unit testing.
I'm not talking about peer-review, but simply reread what you wrote, possibly while stepping through it with a debugger and checking that each line does what it's supposed to do will sky rocket software quality.
I'd recommend "high level" unit tests that test a chunk of functionality rather than a tiny method call. The latter tend to make any code change extremely painful and expensive.
A: Personally I find most of the unit tests like someone else's desire rather than mine. I think that any unit test should be written just like a normal program except the fact that it doesn't do anything other than testing a library/algorithm or any part of code.
My unit tests usually don't use tools like CUnit, CppUnit and similar software.
I create my own tests. For example, not long ago I needed to test a fresh implementation of a container for memory leaks in the usual cases. A unit test was not helpful enough to provide a nice test. Instead I created my own allocator and made it fail to allocate memory after a certain (adjustable) number of allocations, to see if my application has memory leaks in that case (and it had :) ).
How could this be done with a unit test? Only with more effort to make your code fit into the unit test "pattern".
So I strongly recommend not using unit tests every time just because they are "trendy", but only when it is really easy to integrate them with the code you want to test.
A: Unit testing isn't just to make sure your code works. It is also a very good design methodology. For tests to be useful, as mentioned previously, the code needs to be as decoupled as possible, such as using interfaces where needed.
I don't always write tests first, but very often if I am having trouble getting started on something, I will write a simple test, experiment with the design and go from there. Also, good unit tests serve as good documentation. At work when I need to see how to use a specific class or similar, I look at the unit tests for it.
Just remember that Unit Testing is not integration testing. Unit testing does have its limits but overall I think a very good tool to know how to use appropriately.
A: One problem you run into is that your allocator gets used by the testing framework itself, which can disturb your allocator's state while you're trying to test it. Consider prefixing your allocator functions (see dlmalloc). You write
prefix_malloc();
prefix_free();
and then
#ifndef USE_PREFIX
#define prefix_malloc malloc
#define prefix_free free
#endif
Now, set your build system to compile a version of the library using -DUSE_PREFIX. Write your unit tests to call prefix_malloc and prefix_free. This lets you separate your allocator's state from the state of the system allocator.
If you're using sbrk and the system allocator uses sbrk, it's possible you could have a bad time if either allocator assumes it has complete control over the program break. In this case you would want to link in yet another allocator which you can configure to only use mmap, so your allocator can have the program break to itself.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Jar file naming conventions Are there any industry standard conventions for naming jar files?
A: I have been using
Informative-name-M.m.b.jar
Where:
M = major version number (changed when backward compatibility is not necessarily maintained)
m = minor version number (feature additions etc)
b = build number (for releases containing bug fixes)
A: If your jar is used for JEE then these guidelines apply:
The module name is used as the EJB archive name. By default, the
module name should be based on the package name and should be written
in all-lowercase ASCII letters. In case of conflict, the module name
should be a more specific version (including other part of the package
name):
EJB archive: <module-name>-ejb.jar
EJB client archive: <module-name>-ejb-client.jar
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
}
|
Q: Logging Clientside JavaScript Errors on Server I'm running an ASP.NET site where I have problems finding some JavaScript errors just with manual testing.
Is there a possibility to catch all JavaScript errors on the client side and log them on the server, i.e. in the EventLog (via a web service or something like that)?
A: You could try setting up your own handler for the onerror event and use XMLHttpRequest to tell the server what went wrong, however since it's not part of any specification, support is somewhat flaky.
Here's an example from Using XMLHttpRequest to log JavaScript errors:
window.onerror = function(msg, url, line)
{
var req = new XMLHttpRequest();
var params = "msg=" + encodeURIComponent(msg) + '&url=' + encodeURIComponent(url) + "&line=" + line;
req.open("POST", "/scripts/logerror.php");
req.send(params);
};
A: Short answer: Yes, it is possible.
Longer answer: People have already written about how you can (at least partially) solve this issue by writing your own code. However, do note that there are services out there that seem to have made sure the JS code needed works in many browsers. I've found the following:
* http://trackjs.com
* https://www.atatus.com
* http://jserrlog.appspot.com
* http://muscula.com
* https://sentry.io
* https://rollbar.com
* https://catchjs.com
I can't speak for any of these services as I haven't tried them yet.
A: If you use Google Analytics, you can log javascript errors into Google Analytics Events.
See this app: http://siteapps.com/app/log_javascript_errors_with_ga-181
Hope it helps.
A: I have just implemented server side error logging on javascript errors on a project at work. There is a mixture of legacy code and new code using jQuery.
I use a combination of window.onerror and wrapping the jQuery event handlers and onready function with an error handling function (see: JavaScript Error Tracking: Why window.onerror Is Not Enough).
* window.onerror: catches all errors in IE (and most errors in Firefox), but does nothing in Safari and Opera.
* jQuery event handlers: catch jQuery event errors in all browsers.
* jQuery ready function: catches initialisation errors in all browsers.
Once I have caught the error, I add some extra properties to it (url, browser, etc) and then post it back to the server using an ajax call.
On the server I have a small page which just takes the posted arguments and outputs them to our normal server logging framework.
I would like to open source the code for this (as a jQuery plugin). If anyone is interested let me know, it would help to convince the bosses!
A: You could potentially make an Ajax call to the server from a try/catch, but that's probably about the best you can do.
May I suggest JavaScript unit testing instead? Possibly with JSUnit?
A: Also, I recommend the TraceTool utility; it comes with JavaScript support and is very handy for JS monitoring.
A: If you're wanting to log the client-side errors back to the server you're going to have to do some kind of server processing. Best bet would be to have a web service which you can access via JavaScript (AJAX) and you pass your error log info to it.
It doesn't 100% solve the problem, because if the problem is with the web server hosting the web service you're in trouble. Your other option would be to send the info to a standard page via a query string. One method of doing that is dynamically generating image tags (which are then removed), as the browser will try to load the source of an image; it gets around cross-domain JavaScript calls nicely. Keep in mind that you're in trouble if someone has images turned off ;)
A: I've been using Appfail recently, which captures both asp.net and JavaScript errors
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "104"
}
|
Q: Highlight a word with jQuery I basically need to highlight a particular word in a block of text. For example, pretend I wanted to highlight the word "dolor" in this text:
<p>
Lorem ipsum dolor sit amet, consectetuer adipiscing elit.
</p>
<p>
Quisque bibendum sem ut lacus. Integer dolor ullamcorper libero.
Aliquam rhoncus eros at augue. Suspendisse vitae mauris.
</p>
How do I convert the above to something like this:
<p>
Lorem ipsum <span class="myClass">dolor</span> sit amet, consectetuer adipiscing elit.
</p>
<p>
Quisque bibendum sem ut lacus. Integer <span class="myClass">dolor</span> ullamcorper
libero. Aliquam rhoncus eros at augue. Suspendisse vitae mauris.
</p>
Is this possible with jQuery?
Edit: As Sebastian pointed out, this is quite possible without jQuery - but I was hoping there might be a special method of jQuery which would let you do selectors on the text itself. I'm already using jQuery heavily on this site, so keeping everything wrapped up in jQuery would make things perhaps a bit more tidy.
A: Try highlight: JavaScript text highlighting jQuery plugin.
Warning: The source code available on this page contains a cryptocurrency mining script, either use the code below or remove the mining script from the script downloaded from the website.
/*
highlight v4
Highlights arbitrary terms.
<http://johannburkard.de/blog/programming/javascript/highlight-javascript-text-higlighting-jquery-plugin.html>
MIT license.
Johann Burkard
<http://johannburkard.de>
<mailto:jb@eaio.com>
*/
jQuery.fn.highlight = function(pat) {
function innerHighlight(node, pat) {
var skip = 0;
if (node.nodeType == 3) {
var pos = node.data.toUpperCase().indexOf(pat);
if (pos >= 0) {
var spannode = document.createElement('span');
spannode.className = 'highlight';
var middlebit = node.splitText(pos);
var endbit = middlebit.splitText(pat.length);
var middleclone = middlebit.cloneNode(true);
spannode.appendChild(middleclone);
middlebit.parentNode.replaceChild(spannode, middlebit);
skip = 1;
}
}
else if (node.nodeType == 1 && node.childNodes && !/(script|style)/i.test(node.tagName)) {
for (var i = 0; i < node.childNodes.length; ++i) {
i += innerHighlight(node.childNodes[i], pat);
}
}
return skip;
}
return this.length && pat && pat.length ? this.each(function() {
innerHighlight(this, pat.toUpperCase());
}) : this;
};
jQuery.fn.removeHighlight = function() {
return this.find("span.highlight").each(function() {
this.parentNode.firstChild.nodeName;
with (this.parentNode) {
replaceChild(this.firstChild, this);
normalize();
}
}).end();
};
Also try the "updated" version of the original script.
/*
* jQuery Highlight plugin
*
* Based on highlight v3 by Johann Burkard
* http://johannburkard.de/blog/programming/javascript/highlight-javascript-text-higlighting-jquery-plugin.html
*
* Code a little bit refactored and cleaned (in my humble opinion).
* Most important changes:
* - has an option to highlight only entire words (wordsOnly - false by default),
* - has an option to be case sensitive (caseSensitive - false by default)
* - highlight element tag and class names can be specified in options
*
* Usage:
* // wrap every occurrance of text 'lorem' in content
* // with <span class='highlight'> (default options)
* $('#content').highlight('lorem');
*
* // search for and highlight more terms at once
* // so you can save some time on traversing DOM
* $('#content').highlight(['lorem', 'ipsum']);
* $('#content').highlight('lorem ipsum');
*
* // search only for entire word 'lorem'
* $('#content').highlight('lorem', { wordsOnly: true });
*
* // don't ignore case during search of term 'lorem'
* $('#content').highlight('lorem', { caseSensitive: true });
*
* // wrap every occurrance of term 'ipsum' in content
* // with <em class='important'>
* $('#content').highlight('ipsum', { element: 'em', className: 'important' });
*
* // remove default highlight
* $('#content').unhighlight();
*
* // remove custom highlight
* $('#content').unhighlight({ element: 'em', className: 'important' });
*
*
* Copyright (c) 2009 Bartek Szopka
*
* Licensed under MIT license.
*
*/
jQuery.extend({
highlight: function (node, re, nodeName, className) {
if (node.nodeType === 3) {
var match = node.data.match(re);
if (match) {
var highlight = document.createElement(nodeName || 'span');
highlight.className = className || 'highlight';
var wordNode = node.splitText(match.index);
wordNode.splitText(match[0].length);
var wordClone = wordNode.cloneNode(true);
highlight.appendChild(wordClone);
wordNode.parentNode.replaceChild(highlight, wordNode);
return 1; //skip added node in parent
}
} else if ((node.nodeType === 1 && node.childNodes) && // only element nodes that have children
!/(script|style)/i.test(node.tagName) && // ignore script and style nodes
!(node.tagName === nodeName.toUpperCase() && node.className === className)) { // skip if already highlighted
for (var i = 0; i < node.childNodes.length; i++) {
i += jQuery.highlight(node.childNodes[i], re, nodeName, className);
}
}
return 0;
}
});
jQuery.fn.unhighlight = function (options) {
var settings = { className: 'highlight', element: 'span' };
jQuery.extend(settings, options);
return this.find(settings.element + "." + settings.className).each(function () {
var parent = this.parentNode;
parent.replaceChild(this.firstChild, this);
parent.normalize();
}).end();
};
jQuery.fn.highlight = function (words, options) {
var settings = { className: 'highlight', element: 'span', caseSensitive: false, wordsOnly: false };
jQuery.extend(settings, options);
if (words.constructor === String) {
words = [words];
}
words = jQuery.grep(words, function(word, i){
return word != '';
});
words = jQuery.map(words, function(word, i) {
return word.replace(/[-[\]{}()*+?.,\\^$|#\s]/g, "\\$&");
});
if (words.length == 0) { return this; };
var flag = settings.caseSensitive ? "" : "i";
var pattern = "(" + words.join("|") + ")";
if (settings.wordsOnly) {
pattern = "\\b" + pattern + "\\b";
}
var re = new RegExp(pattern, flag);
return this.each(function () {
jQuery.highlight(this, re, settings.element, settings.className);
});
};
A: function hiliter(word, element) {
var rgxp = new RegExp(word, 'g');
var repl = '<span class="myClass">' + word + '</span>';
element.innerHTML = element.innerHTML.replace(rgxp, repl);
}
hiliter('dolor', document.body);
A: Why using a selfmade highlighting function is a bad idea
The reason why it's probably a bad idea to start building your own highlighting function from scratch is because you will certainly run into issues that others have already solved. Challenges:
* You would need to replace text nodes with HTML elements to highlight your matches without destroying DOM events and triggering DOM regeneration over and over again (which would be the case with e.g. innerHTML)
* If you want to remove highlighted elements you would have to remove the HTML elements along with their content and also combine the split text nodes for further searches. This is necessary because every highlighter plugin searches inside text nodes for matches, and if your keywords are split across several text nodes they will not be found.
* You would also need to build tests to make sure your plugin works in situations you have not thought about. And I'm talking about cross-browser tests!
Sounds complicated? If you want some features like ignoring some elements from highlighting, diacritics mapping, synonyms mapping, search inside iframes, separated word search, etc. this becomes more and more complicated.
Use an existing plugin
When using an existing, well implemented plugin, you don't have to worry about above named things. The article 10 jQuery text highlighter plugins on Sitepoint compares popular highlighter plugins. This includes plugins of answers from this question.
Have a look at mark.js
mark.js is such a plugin that is written in pure JavaScript, but is also available as jQuery plugin. It was developed to offer more opportunities than the other plugins with options to:
* search for keywords separately instead of the complete term
* map diacritics (for example if "justo" should also match "justò")
* ignore matches inside custom elements
* use a custom highlighting element
* use a custom highlighting class
* map custom synonyms
* search also inside iframes
* receive not-found terms
DEMO
Alternatively you can see this fiddle.
Usage example:
// Highlight "keyword" in the specified context
$(".context").mark("keyword");
// Highlight the custom regular expression in the specified context
$(".context").markRegExp(/Lorem/gmi);
It's free and developed open-source on GitHub (project reference).
A: JSFiddle
Uses .each(), .replace(), .html(). Tested with jQuery 1.11 and 3.2.
In the above example, the script reads the 'keyword' to be highlighted and wraps it in a span tag with the 'highlight' class. The text 'keyword' is highlighted for all the elements selected in the .each().
HTML
<body>
<label name="lblKeyword" id="lblKeyword" class="highlight">keyword</label>
<p class="filename">keyword</p>
<p class="content">keyword</p>
<p class="system"><i>keyword</i></p>
</body>
JS
$(document).ready(function() {
var keyWord = $("#lblKeyword").text();
var replaceD = "<span class='highlight'>" + keyWord + "</span>";
$(".system, .filename, .content").each(function() {
var text = $(this).text();
text = text.replace(keyWord, replaceD);
$(this).html(text);
});
});
CSS
.highlight {
background-color: yellow;
}
A: You can use my highlight plugin jQuiteLight, that can also work with regular expressions.
To install using npm type:
npm install jquitelight --save
To install using bower type:
bower install jquitelight
Usage:
// for strings
$(".element").mark("query here");
// for RegExp
$(".element").mark(new RegExp(/query h[a-z]+/));
More advanced usage here
A: You can use the following function to highlight any word in your text.
function color_word(text_id, word, color) {
words = $('#' + text_id).text().split(' ');
words = words.map(function(item) { return item == word ? "<span style='color: " + color + "'>" + word + '</span>' : item });
new_words = words.join(' ');
$('#' + text_id).html(new_words);
}
Simply target the element that contains the text, choosing the word to colorize and the color of choice.
Here is an example:
<div id='my_words'>
This is some text to show that it is possible to color a specific word inside a body of text. The idea is to convert the text into an array using the split function, then iterate over each word until the word of interest is identified. Once found, the word of interest can be colored by replacing that element with a span around the word. Finally, replacing the text with jQuery's html() function will produce the desired result.
</div>
Usage,
color_word('my_words', 'possible', 'hotpink')
A: Here's a variation that ignores and preserves case:
jQuery.fn.highlight = function (str, className) {
var regex = new RegExp("\\b"+str+"\\b", "gi");
return this.each(function () {
this.innerHTML = this.innerHTML.replace(regex, function(matched) {return "<span class=\"" + className + "\">" + matched + "</span>";});
});
};
A: You need to get the content of the p tag and replace all the dolors in it with the highlighted version.
You don't even need to have jQuery for this. :-)
A: I wrote a very simple function that uses jQuery to iterate the elements wrapping each keyword with a .highlight class.
function highlight_words(word, element) {
if(word) {
var textNodes;
word = word.replace(/\W/g, '');
var str = word.split(" ");
$(str).each(function() {
var term = this;
var textNodes = $(element).contents().filter(function() { return this.nodeType === 3 });
textNodes.each(function() {
var content = $(this).text();
var regex = new RegExp(term, "gi");
content = content.replace(regex, '<span class="highlight">' + term + '</span>');
$(this).replaceWith(content);
});
});
}
}
More info:
http://www.hawkee.com/snippet/9854/
A: This is a modified version from @bjarlestam.
This will only search text.
jQuery.fn.highlight = function(str) {
var regex = new RegExp(str, "gi");
return this.each(function() {
this.innerHTML = this.innerText.replace(regex, function(matched) {
return "<span class='mark'>" + matched + "</span>";
});
});
};
// Mark
jQuery('table tr td').highlight('desh')
.mark {
background: #fde293;
color: #222;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<h2>HTML Table</h2>
<table>
<tr>
<th>Company</th>
<th>Contact</th>
<th>Country</th>
</tr>
<tr>
<td>Sodeshi</td>
<td>Francisco Chang</td>
<td>Mexico</td>
</tr>
<tr>
<td>Ernst Handel</td>
<td>Roland Mendel</td>
<td>Austria</td>
</tr>
<tr>
<td>Island Trading</td>
<td>Helen Bennett</td>
<td>Bangladesh</td>
</tr>
</table>
Usages: jQuery('.selector').highlight('sample text')
A: Is it possible to get the above example:
jQuery.fn.highlight = function (str, className)
{
var regex = new RegExp(str, "g");
return this.each(function ()
{
this.innerHTML = this.innerHTML.replace(
regex,
"<span class=\"" + className + "\">" + str + "</span>"
);
});
};
not to replace text inside HTML tags? Otherwise this breaks the page.
A: $(function () {
$("#txtSearch").keyup(function (event) {
var txt = $("#txtSearch").val()
if (txt.length > 3) {
$("span.hilightable").each(function (i, v) {
v.innerHTML = v.innerText.replace(txt, "<hilight>" + txt + "</hilight>");
});
}
});
});
Jfiddle here
A: I have created a repository based on a similar concept that changes the colors of texts to colors recognised by HTML5 (we don't have to use actual #rrggbb values and can just use the names, as HTML5 standardised about 140 of them)
colors.js
$( document ).ready(function() {
function hiliter(word, element) {
var rgxp = new RegExp("\\b" + word + "\\b" , 'gi'); // g modifier for global and i for case insensitive
var repl = '<span class="myClass">' + word + '</span>';
element.innerHTML = element.innerHTML.replace(rgxp, repl);
};
hiliter('dolor', document.getElementById('dolor'));
});
.myClass{
background-color:red;
}
<!DOCTYPE html>
<html>
<head>
<title>highlight</title>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.1.0/jquery.min.js"></script>
<link href="main.css" type="text/css" rel="stylesheet"/>
</head>
<body id='dolor'>
<p >
Lorem ipsum dolor sit amet, consectetuer adipiscing elit.
</p>
<p>
Quisque bibendum sem ut lacus. Integer dolor ullamcorper libero.
Aliquam rhoncus eros at augue. Suspendisse vitae mauris.
</p>
<script type="text/javascript" src="main.js" charset="utf-8"></script>
</body>
</html>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119441",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "104"
}
|
Q: Locking binary files using git version control system For one and a half years, I have been keeping my eyes on the git community in hopes of making the switch away from SVN. One particular issue holding me back is the inability to lock binary files. Throughout the past year I have yet to see developments on this issue. I understand that locking files goes against the fundamental principles of distributed source control, but I don't see how a web development company can take advantage of git to track source code and image file changes when there is the potential for binary file conflicts.
To achieve the effects of locking, a "central" repository must be identified. Regardless of the distributed nature of git, most companies will have a "central" repository for a software project. We should be able to mark a file as requiring a lock from the governing git repository at a specified address. Perhaps this is made difficult because git tracks file contents not files?
Do any of you have experience in dealing with git and binary files that should be locked before modification?
NOTE: It looks like Source Gear's new open source distributed version control project, Veracity, has locking as one of its goals.
A: Git LFS 2.0 has added support for file locking.
With Git LFS 2.0.0 you can now lock files that you're actively working on, preventing others from pushing to the Git LFS server until you unlock the files again.
This will prevent merge conflicts as well as lost work on non-mergeable files at the filesystem level. While it may seem to contradict the distributed and parallel nature of Git, file locking is an important part of many software development workflows—particularly for larger teams working with binary assets.
A: We've just recently started using Git (used Subversion previously) and I have found a change to workflow that might help with your problem, without the need for locks. It takes advantage of how git is designed and how easy branches are.
Basically, it boils down to pushing to a non-master branch, doing a review of that branch, and then merging into the master branch (or whichever the target branch is).
The way git is "intended" to be used, each developer publishes their own public repository, which they request others to pull from. I've found that Subversion users have trouble with that. So, instead, we push to branch trees in the central repository, with each user having their own branch tree. For instance, a hierarchy like this might work:
users/a/feature1
users/a/feature2
users/b/feature3
teams/d/featurey
Feel free to use your own structure. Note I'm also showing topic branches, another common git idiom.
Then in a local repo for user a:
feature1
feature2
And to get it to central server (origin):
git push origin feature1:users/a/feature1
(this can probably be simplified with configuration changes)
Anyway, once feature1 is reviewed, whoever is responsible (in our case, it's the developer of the feature; you could have a single user responsible for merges to master) does the following:
git checkout master
git pull
git merge users/name/feature1
git push
The pull does a fetch (pulling any new master changes and the feature branch) and then updates master to what the central repository has. If user a did their job and tracked master properly, there should be no problems with the merge.
All this means that, even if a user or remote team makes a change to a binary resource, it gets reviewed before it gets incorporated into the master branch. And there is a clear delineation (based on process) as to when something goes into the master branch.
You can also programmatically enforce aspects of this using git hooks, but again, I've not worked with these yet, so can't speak on them.
A: Subversion has locks, and they aren't just advisory. They can be enforced using the svn:needs-lock attribute (but can also be deliberately broken if necessary). It's the right solution for managing non-mergeable files. The company I work for stores just about everything in Subversion, and uses svn:needs-lock for all non-mergeable files.
I disagree with "locks are just a communication method". They are a much more effective method than push-notifications such as phone or e-mail. Subversion locks are self-documenting (who has the lock). On the other hand, if you have to communicate by other traditional push-notification channels, such as e-mail, who do you send the notification to? You don't know in advance who might want to edit the file, especially on open-source projects, unless you have a complete list of your entire development team. So those traditional communication methods aren't nearly as effective.
A central lock server, while against the principles of DVCS, is the only feasible method for non-mergeable files. As long as DVCS don't have a central lock feature, I think it will keep the company I work for using Subversion.
The better solution would be to make a merge tool for all your binary file formats, but that's a longer-term and ongoing goal that will never be "finished".
Here's an interesting read on the topic.
A: It's worth examining your current workflow to see if locking images is really necessary. It's relatively unusual for two people to independently edit an image, and a bit of communication can go a long way.
A: When I was using Subversion, I religiously set the svn:needs-lock property on all binary and even the hard-to-edit text files. I never actually experienced any conflicts.
Now, in Git, I don't worry about such things. Remember: locks in Subversion aren't actually mandatory locks, they are merely communication tools. And guess what: I don't need Subversion to communicate, I can manage just fine with E-Mail, Phone and IM.
Another thing I did, is to replace many binary formats with plain text formats. I use reStructuredText or LaTeX instead of Word, CSV instead of Excel, ASCII-Art instead of Visio, YAML instead of databases, SVG instead of OO Draw, abc instead of MIDI, and so on.
A: I have discussed this issue on git discussion groups and have concluded that at this time, there is no agreed upon method of centralized file locking for git.
A: I would not expect file locking to ever make it as a feature in git. What kind of binary files are you primarily interested in? Are you actually interested in locking the files, or just in preventing conflicts caused by not being able to merge them?
I seem to remember someone talking about (or even implementing) support for merging OpenOffice documents in git.
A: This is not a solution but rather a comment on why locking mechanisms are needed. There are some tools used in some fields that use binary-only formats, are flat-out mission critical, and for which "use better/different tools" is just not an option. There are no viable alternative tools; the ones I'm familiar with really wouldn't be candidates for merging even if you stored the same information in an ASCII format. One objection I've heard is that you want to be able to work offline. The particular tool I'm thinking of really doesn't work offline anyway, because it needs to pull licenses, so if I have data on a laptop it isn't as if I can run the tool on a train anyway. That said, what git does provide is that with a slow connection I can get licenses and also pull down changes, but still have a fast local copy for looking at different versions. That is a nice thing the DVCS gives you even in this case.
One view point is that git is simply not the tool to use but it is nice for all the text files which are also managed with it and it is annoying to need different version control tools for different files.
The sort-of-advisory-locking-via-email approach really stinks. I've seen it, have grown tired of an endless stream of "I'm editing it" / "I'm done editing" emails, and have seen changes lost because of it. The particular case I'm thinking of was one where a collection of smaller ASCII files would have been much nicer, but that is an aside.
A: I agree that locking binary files is a necessary feature for some environments. I just had a thought about how to implement this, though:
* Have a way of marking a file as "needs-lock" (like the "svn:needs-lock" property).
* On checkout, git would mark such a file as read-only.
* A new command git-lock would contact a central lock server running somewhere to ask permission to lock.
* If the lock server grants permission, mark the file read-write.
* git-add would inform the lock server of the content hash of the locked file.
* The lock server would watch for that content hash to appear in a commit on the master repository.
* When the hash appears, release the lock.
Within a particular organisation, this sort of thing could perhaps be built using a suitable combination of script wrappers and commit hooks.
A: In response to Mario's additional concern with changes happening in multiple places on the binaries. So the scenario is Alice and Bob are both making changes to the same binary resource at the same time. They each have their own local repo, cloned from one central remote.
This is indeed a potential problem. So Alice finishes first and pushes to the central alice/update branch. Normally when this happens, Alice would make an announcement that it should be reviewed. Bob sees that and reviews it. He can either (1) incorporate those changes himself into his version (branching from alice/update and making his changes to that) or (2) publish his own changes to bob/update. Again, he makes an announcement.
Now, if Alice pushes to master instead, Bob has a dilemma when he pulls master and tries to merge into his local branch: his changes conflict with Alice's. But again, the same procedure can apply, just on different branches. And even if Bob ignores all the warnings and commits over Alice's changes, it's always possible to pull out Alice's commit to fix things. This becomes simply a communication issue.
Since (AFAIK) the Subversion locks are just advisory, an e-mail or instant message could serve the same purpose. But even if you don't do that, Git lets you fix it.
No, there's no locking mechanism per se. But a locking mechanism tends to just be a substitute for good communication. I believe that's why the Git developers haven't added a locking mechanism.
A: What about CAD files? If the files aren't locked, and thereby kept read-only as well, most CAD programs will just open them and change arbitrary bits, which any VCS then sees as a new file. So in my view, locking is an ideal means of communicating your intent to change some particular file. It also prevents some software from gaining write access in the first place, which allows the local files to be updated without having to close the software, or at least all of its files, entirely.
A: TortoiseGit supports full git workflow for Office documents delegating diff to Office itself. It works also delegating to OpenOffice for OpenDocument formats.
A: I'm not suggesting we use git at my company, because of this same problem. We use EA for all our designs and Microsoft Word for documentation; we don't know in advance who may edit a particular file, so exclusive locking is our only option.
A: git will work very well in a non-team environment where each developer is solely responsible for a piece of code or file, because in that case communication about locks is not needed.
If your organization requires a team environment (usually to strip developers of job security), then use svn; git is not for you. Svn provides both: source control and communication between developers about locks.
A: Just put a text file in cc with the file that you want to lock and then have the update hook reject it.
A: It might be true that reorganising a project can help avoid locks, but:
* Teams are also organised by other priorities (location, customers, ...)
* Tools are also selected by other criteria (compatibility, price, ease of use by most of the employees)
* Some tools (and therefore their binary files) cannot be avoided, as there is simply no replacement that can do the same job, fitting the company's needs equally well at the same price.
To ask that a whole company reorganise its workflow and replace all the tools that produce binaries, just to be able to work with git despite its lack of locks, sounds quite inefficient.
Locks do not fit into the git philosophy (which was never made for binaries), but there are non-negligible situations where locks are the most efficient way to solve such a problem.
A: Git does not provide any command to lock files, but I've found a way to achieve that functionality using git hooks.
An auxiliary server is needed to store the lock information. We can use a pre-commit hook to check whether any of the committed files is locked, and when someone locks a file, a program should tell the auxiliary server who holds the lock and which file is locked.
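As an illustration, a pre-commit hook along those lines might look like the sketch below. Only git diff --cached --name-only and git config user.email are standard git; the lock server, its URL and its JSON response format are assumptions.
#!/usr/bin/env python3
# Hypothetical pre-commit hook: refuse to commit files locked by someone else.
import json
import subprocess
import sys
import urllib.request

LOCK_SERVER = "http://locks.example.com/locks"  # assumed auxiliary service

def staged_files():
    out = subprocess.check_output(["git", "diff", "--cached", "--name-only"], text=True)
    return [line for line in out.splitlines() if line]

def current_locks():
    with urllib.request.urlopen(LOCK_SERVER) as resp:
        return json.load(resp)  # assumed format: {"path/to/file.bin": "alice@example.com", ...}

def main():
    locks = current_locks()
    me = subprocess.check_output(["git", "config", "user.email"], text=True).strip()
    for path in staged_files():
        owner = locks.get(path)
        if owner and owner != me:
            sys.stderr.write(path + " is locked by " + owner + "; commit rejected.\n")
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())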
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "76"
}
|
Q: How do you make the background of a video or picture clear in Quartz Composer? I'd like to remove all of the black from a picture attached to a sprite so that it becomes transparent.
A: I'll copy and paste in case that link dies:
" I used a 'Color Matrix' patch, setting 'Alpha Vector (W)' and 'Bias Vector(X,Y,Z)' to 1 and all other to 0.
You will then find the alpha channel from the input image at the output."
I found this before, but I can't figure out exactly how to do it.
I found another solution using core image filter:
kernel vec4 darkToTransparent(sampler image)
{
vec4 color = sample(image, samplerCoord(image));
color.a = (color.r+color.g+color.b) > 0.005 ? 1.0:0.;
return color;
}
A: This looks like it'll do the trick:
http://www.quartzcompositions.com/phpBB2/viewtopic.php?t=281
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do you structure your URL routes? Is there a specific pattern that developers generally follow? I never really gave it much thought before in my web applications, but the ASP.NET MVC routing engine pretty much forces you to at least take it into consideration.
So far I've liked the controller/action/index structure (e.g. Products/Edit/1), but I'm struggling with more complex urls.
For instance, let's say you have a page that lists all the products a user has in their account. How would you do it? Off the top of my head I can think of the following possibilities for a listing page and an edit page:
* User/{user id}/Products/List, User/{user id}/Products/Edit/{product id}
* User/{user id}/Products, User/{user id}/Products/{product id}
* Products?UserID={user id}, Products/Edit/{product id}
I'm sure there are plenty of others that I'm missing. Any advice?
A: I like RESTful, user friendly and hackable URLs.
What does this mean? Let's start with user friendly URLs. To me a user friendly URL is something easy to type and easy to remember; /Default.aspx?action=show&userID=140 doesn't meet either of these requirements. A URL like /users/troethom seems logical though.
This leads to the next point. A hackable URL is a URL that the user can modify and still get presented with a result. If the URL is hackable and the URL for my profile is /users/troethom, it would be safe to remove my user name to get a list of users (/users).
Using RESTful URLs is pretty similar to the ideas behind my other suggestions. You are designing URLs for a user and not for a machine, and therefore the URL has to relate to the content and not to the technical back-end of your site. A URL such as /users makes more sense than /users/list, and a URL such as /category/programming/javascript (representing the subcategory 'javascript' in the category 'programming') is better than /category/show/12.
It is indeed more difficult to omit IDs, but in my world it is worth the effort.
Also consult the Understanding URIs section in W3C's Common HTTP Implementation Problems. It has a list of common pitfalls when designing URIs. Another good resource is Resourceful Vs Hackable Search URLs.
A: You may want to take a look at the question "Friendly url scheme?".
Particularly, Larry.Smithmier's answer provided a list of common URL schemes when using MVC in ASP.NET.
A: Also, you may consider using different verbs to reuse the same routes for different actions. For example, a GET request to "Products/Edit/45" would display the product editor, whereas a POST to the same url would update the product. You can use the AcceptVerb attribute to accomplish this:
[AcceptVerb("GET")]
public ActionResult Edit(int id)
{
ViewData["Product"] = _products.Get(id);
return View();
}
[AcceptVerb("POST")]
public ActionResult Edit(int id, string title, string description)
{
_products.Update(id, title, description);
TempData["Message"] = "Changes saved successfully!";
return RedirectToAction("Edit", new { id });
}
A: Bill de hÓra wrote a very good essay entitled Web resource mapping criteria for frameworks that is well worth a read.
A: To add to troethom's comments, RESTful generally also means that, for example, to create a new user you would PUT a representation to /users/newusername
RESTful basically uses the 5 standard HTTP Methods (GET, PUT, POST, DELETE, HEAD) for controlling/accessing content.
Ok, this isn't easy for a web browser, but you can always use overloaded POST (post to /users/username with a representation of a user to change some of the details, etc.).
It's a good way of doing things. I'd recommend reading RESTful Web Services to get a better understanding :D (and it's a darn good book!)
A: I've seen two main accepted ways to approach this topic...
One is described in the MvcContrib project documentation
and the other one is described in a blog post by Stephen Walther (which I personally prefer).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Is it possible to create full screen color overlay effects in windows? I remember my old Radeon graphics drivers which had a number of overlay effects or color filters (whatever they are called) that would render the screen in e.g. sepia tones or negative colors. My current NVIDIA card does not seem to have such a function so I wondered if it is possible to make my own for Vista.
I don't know if there is some way to hook into window's rendering engine or, alternatively, into NVIDIA's drivers to achieve this effect. While it would be cool to just be able to modify the color, it would be even better to modify the color based on its screen coordinates or perform other more varied functions. An example would be colors which are more desaturated the longer they are from the center of the screen.
I don't have a specific use scenario so I cannot provide much more information. Basically, I'm just curious if there is anything to work with in this area.
A: You could have a full-screen layered window on top of everything that passes click events through. However, that's hacky and slow compared to what could be done by getting a hook into the DWM renderer's DirectX context, and so far that isn't possible, as Microsoft does not provide any public interface into it.
The Flip 3D utility does something like this, but even there the functionality is not in the program itself; it's in the DWM DLL, called by ordinal (a hidden/undocumented function, obviously, since it doesn't serve any other purpose). So it's pretty much another dead end, and I haven't bothered to dig deeper.
On that front, the best we can do is wait for some kind of official API.
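For what it's worth, the layered-window hack mentioned above can be sketched in C#/WinForms roughly like this; the colour, opacity and window styles are only assumptions, and it merely tints the whole screen rather than applying per-pixel or coordinate-dependent effects:
using System;
using System.Drawing;
using System.Windows.Forms;

// Sketch of a borderless, topmost, semi-transparent overlay that lets
// mouse clicks fall through to the windows underneath it.
class OverlayForm : Form
{
    const int WS_EX_LAYERED = 0x80000;
    const int WS_EX_TRANSPARENT = 0x20;   // makes the window click-through

    public OverlayForm()
    {
        FormBorderStyle = FormBorderStyle.None;
        ShowInTaskbar = false;
        TopMost = true;
        Bounds = Screen.PrimaryScreen.Bounds;
        BackColor = Color.SandyBrown;     // crude "sepia" tint (assumption)
        Opacity = 0.25;
    }

    protected override CreateParams CreateParams
    {
        get
        {
            CreateParams cp = base.CreateParams;
            cp.ExStyle |= WS_EX_LAYERED | WS_EX_TRANSPARENT;
            return cp;
        }
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new OverlayForm());
    }
}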
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119471",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: SQL Server 2005 / XML Stored Proc - Unicode to ascii? (Exception 0xc00ce508) I have an MSSQL 2005 stored procedure here, which is supposed to take an XML message as input and store its content in a table.
The table fields are varchars, because our Delphi backend application could not handle Unicode.
Now, the messages that come in are encoded ISO-8859-1. All is fine until characters outside the standard 128-character set are included (in this case ÄÖäö, which are an integral part of Finnish). This causes the DB server to raise exception 0xc00ce508.
The default collation of the database, as well as of the table and its fields, is set to latin1, which should be the same as ISO-8859-1.
The XML message is parsed using the XML subsystem, like so:
ALTER PROCEDURE [dbo].[parse] @XmlIn NVARCHAR(1000) AS
SET NOCOUNT ON
DECLARE @XmlDocumentHandle INT
DECLARE @XmlDocument VARCHAR(1000)
BEGIN
SET @XmlDocument = @XmlIn
EXECUTE sp_xml_preparedocument @XmlDocumentHandle OUTPUT, @XmlDocument
BEGIN TRANSACTION
-- the xml message's fields are looped through here, and rows added or modified in two tables accordingly
-- like ...
DECLARE TempCursor CURSOR FOR
SELECT AM_WORK_ID,CUSTNO,STYPE,REFE,VIN_NUMBER,REG_NO,VEHICLE_CONNO,READY_FOR_INVOICE,IS_SP,SMANID,INVOICENO,SUB_STATUS,TOTAL,TOTAL0,VAT,WRKORDNO
FROM OPENXML (@XmlDocumentHandle, '/ORDER_NEW_CP_REQ/ORDER_NEW_CUSTOMER_REQ',8)
WITH (AM_WORK_ID int '@EXIDNO',CUSTNO int '@CUSTNO',STYPE VARCHAR(1) '@STYPE',REFE VARCHAR(50) '@REFE',VIN_NUMBER VARCHAR(30) '@VEHICLE_VINNO',
REG_NO VARCHAR(20) '@VEHICLE_LICNO',VEHICLE_CONNO VARCHAR(30) '@VEHICLE_CONNO',READY_FOR_INVOICE INT '@READY_FOR_INVOICE',IS_SP INT '@IS_SP',
SMANID INT '@SMANID',INVOICENO INT '@INVOICENO',SUB_STATUS VARCHAR(1) '@SUB_STATUS',TOTAL NUMERIC(12,2) '@TOTAL',TOTAL0 NUMERIC(12,2) '@TOTAL0',VAT NUMERIC(12,2) '@VAT',WRKORDNO INT '@WRKORDNO')
OPEN TempCursor
FETCH NEXT FROM TempCursor
INTO @wAmWork,@wCustNo,@wType,@wRefe,@wVIN,@wReg,@wConNo,@wRdy,@wIsSp,@wSMan,@wInvoNo,@wSubStatus,@wTot,@wTot0,@wVat,@wWrkOrdNo
-- ... etc
COMMIT TRANSACTION
EXECUTE sp_xml_removedocument @XmlDocumentHandle
END
Previously, the stored procedure used to use nvarchar for input, but since that caused problems with the ancient backend application (Delphi 5 + ODBC), we had to switch the fields to varchars, at which point everything broke.
I also tried taking in nvarchar and converting that to varchar at the start, but the result is the same.
A: I'll answer my own question, since I managed to resolve this rather cryptic problem...
1) The stored procedure must reflect the correct code page for the transformation:
@XmlIn NVARCHAR(2000)
@XmlDocument VARCHAR(2000)
SELECT @XmlDocument = @XmlIn COLLATE SQL_Latin1_General_CP1_CI_AS
2) The XML input must specify the same charset:
<?xml version="1.0" encoding="ISO-8859-1" ?>
A: I don't know if anybody with enough rights to edit the answer will see this, but while the answer is correct, I would like to add that without specifying the collation explicitly, the default collation of the database is used, since it is implicitly assigned to every varchar variable that lacks a COLLATE clause.
So
DECLARE @XmlDocument VARCHAR(2000) COLLATE SQL_Latin1_General_CP1_CI_AS
should do the trick, too.
A: The error code you mention seems to come from the MSXML library. How is that involved here? From your question I would assume that you pass a varchar parameter to a stored procedure, then insert or update a varchar column with that parameter.
However, that does not match your exception code, so it must happen outside of the actual stored procedure, or you are doing additional things based on XML inside the stored procedure.
Please check that and modify your question accordingly.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Getting Security Headers into a WCF service with custom message/formatter in .NET 3.0 We've inherited a WCF web service that has a custom MessageFormatter that constructs a custom Message subclass in the SerializeReply Method.
class OurMessageFormatter : MessageFormatter
{
public Message SerializeReply(MessageVersion messageVersion, object[] parameters, object result)
{
OurResponse ourResponse = (OurResponse) result;
// some validation here...
OurMessage reply = new OurMessage(ourResponse, MessageVersion.Soap11);
return reply;
}
}
The problem we're facing is that the custom Message subclass wouldn't have any headers populated. We tried to see if WCF would populate the generic ones (MessageID, ResponseTo, Action and such) out of the box, but no luck. Then we realized that the custom Message subclass had implemented the Headers Property like so...
class OurMessage : Message
{
public override MessageHeaders Headers
{
get { return new MessageHeaders(MessageVersion.Soap11WSAddressing10); }
}
}
... lotta help that turned out to be! So we rewrote it as so...
class OurMessage : Message
{
MessageHeaders headers;
public OurMessage()
{
// ...
headers = new MessageHeaders(MessageVersion.Soap11WSAddressing10);
}
public override MessageHeaders Headers
{
get { return headers; }
}
}
... and still no luck.
So we went on to hand code the headers; first in the Formatter...
class OurMessageFormatter : MessageFormatter
{
public Message SerializeReply(MessageVersion messageVersion, object[] parameters, object result)
{
//...
OurMessage reply = new OurMessage(ourResponse, MessageVersion.Soap11);
reply.Headers.MessageId = ...;
reply.Headers.RelatesTo = ...;
reply.Headers.Action = ...;
// more headers set ...
return reply;
}
}
... and then in the Message itself...
class OurMessage : Message
{
public override MessageHeaders Headers
{
get
{
MessageHeaders headers = new MessageHeaders(MessageVersion.Soap11WSAddressing10);
headers.MessageId = ...;
headers.RelatesTo = ...;
headers.Action = ...;
// more headers set ...
return headers;
}
}
}
Every way we tried, we managed to get the WS-Addressing headers into the actual response, but could never get the WS-Security header in (actually we were just trying to put in a Security header with TimestampID and Created/Expires elements). Every time we added the Security header, the service just dropped the connection unexpectedly during serialization (after the SerializeReply call had completed).
So here's my question. Does anyone know how to get the WS-Security headers into a WCF service with custom Formatter and custom Message implementation?
Update [26 Nov 2008]: We have an outstanding MS incident for this and the latest update we got from them was that WCF's current MessageVersion's don't seem to support those headers and need a custom binding implementation. The investigation continues for better approaches.
A: I noticed that the MessageHeaders class has a constructor that takes a collection of MessageHeaders as a parameter. Maybe you could pass the complete collection of headers you need to see if it works. I haven't worked with WS-Security headers before so I'm not sure this is feasible for them. I know that they will be in their own namespace (wsse:http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd) that will need to be defined for the message header.
I found this article that gives a good overview of Messaging Fundamentals. It has an example on creating headers.
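If you want to experiment along those lines, a rough sketch of appending a custom header to the reply message might look like the following; the header name and namespace are placeholders, and this does not by itself produce a spec-compliant WS-Security header:
using System;
using System.ServiceModel.Channels;

static class ReplyHeaderSketch
{
    // Illustrative only: append one extra header to an existing reply.
    // The name and namespace below are assumptions for the example.
    public static Message AddTimestampHeader(Message reply)
    {
        MessageHeader header = MessageHeader.CreateHeader(
            "Timestamp",
            "http://example.org/our-headers",
            DateTime.UtcNow.ToString("o"));

        reply.Headers.Add(header);
        return reply;
    }
}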
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Difference between Visual C++ 2008 and 2005 I couldn't find any useful information on Microsoft's site, so here is the question: has the compiler in Visual C++ 2008 been improved significantly since the 2005 version? I'm especially looking for better optimization.
A: Straight from the horse's mouth....
http://msdn.microsoft.com/en-us/library/bb384632.aspx
A: Somasegar has some notes in this blog post.
Mainly about incremental build improvements and multi core improvements.
A: According to one of our senior developers, VS2008 features extended support for multicore compilation (file-wise instead of project-wise, I'm told), so there might be a reasonable performance gain for your project.
A: Have you looked here, here or here ?
If yes, and no information was there, you could start by checking the compiler version (cl.exe) and the linker version (link.exe), and then run some performance/optimization tests and see which is the winner.
Usually a newer version of cl.exe will be better. The same cannot be said about the user interface of Visual Studio (at least in my experience).
A: In my experience, compiler optimizations rarely improve more than a few percent between versions at most; if you really need more performance, that few percent just isn't going to cut it--you're going to have to get down and dirty in the code if you want more.
Remember, compilers are extremely dumb, and can usually be outwitted by a smart programmer; the only question is whether it's worth your time and effort to do so. If you have a single core function that makes up 90% of your CPU time, it might definitely be so. If runtime is spread equally over ten thousand lines of code, probably not.
Of course, if your speed problem is due to slow algorithms, no compiler can save you.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Virtual member call in a constructor I'm getting a warning from ReSharper about a call to a virtual member from my objects constructor.
Why would this be something not to do?
A: Reasons of the warning are already described, but how would you fix the warning? You have to seal either class or virtual member.
class B
{
protected virtual void Foo() { }
}
class A : B
{
public A()
{
Foo(); // warning here
}
}
You can seal class A:
sealed class A : B
{
public A()
{
Foo(); // no warning
}
}
Or you can seal method Foo:
class A : B
{
public A()
{
Foo(); // no warning
}
protected sealed override void Foo()
{
base.Foo();
}
}
A: Yes, it's generally bad to call a virtual method in the constructor.
At this point, the object may not be fully constructed yet, and the invariants expected by the methods may not hold yet.
A: In order to answer your question, consider this question: what will the below code print out when the Child object is instantiated?
class Parent
{
public Parent()
{
DoSomething();
}
protected virtual void DoSomething()
{
}
}
class Child : Parent
{
private string foo;
public Child()
{
foo = "HELLO";
}
protected override void DoSomething()
{
Console.WriteLine(foo.ToLower()); //NullReferenceException!?!
}
}
The answer is that in fact a NullReferenceException will be thrown, because foo is null. An object's base constructor is called before its own constructor. By having a virtual call in an object's constructor you are introducing the possibility that inheriting objects will execute code before they have been fully initialized.
A: Because until the constructor has completed executing, the object is not fully instantiated. Any members referenced by the virtual function may not be initialised. In C++, when you are in a constructor, this only refers to the static type of the constructor you are in, and not the actual dynamic type of the object that is being created. This means that the virtual function call might not even go where you expect it to.
A: Your constructor may (later, in an extension of your software) be called from the constructor of a subclass that overrides the virtual method. The subclass's override will then run even though the subclass's constructor body has not executed yet, so it may operate on uninitialized state. So it doesn't really make sense to call a virtual function here.
However, if your design satisfies the Liskov Substitution principle, no harm will be done. Probably that's why it's tolerated - a warning, not an error.
A: One important aspect of this question which other answers have not yet addressed is that it is safe for a base-class to call virtual members from within its constructor if that is what the derived classes are expecting it to do. In such cases, the designer of the derived class is responsible for ensuring that any methods which are run before construction is complete will behave as sensibly as they can under the circumstances. For example, in C++/CLI, constructors are wrapped in code which will call Dispose on the partially-constructed object if construction fails. Calling Dispose in such cases is often necessary to prevent resource leaks, but Dispose methods must be prepared for the possibility that the object upon which they are run may not have been fully constructed.
A: One important missing bit is, what is the correct way to resolve this issue?
As Greg explained, the root problem here is that a base class constructor would invoke the virtual member before the derived class has been constructed.
The following code, taken from MSDN's constructor design guidelines, demonstrates this issue.
public class BadBaseClass
{
protected string state;
public BadBaseClass()
{
this.state = "BadBaseClass";
this.DisplayState();
}
public virtual void DisplayState()
{
}
}
public class DerivedFromBad : BadBaseClass
{
public DerivedFromBad()
{
this.state = "DerivedFromBad";
}
public override void DisplayState()
{
Console.WriteLine(this.state);
}
}
When a new instance of DerivedFromBad is created, the base class constructor calls DisplayState and shows BadBaseClass because the field has not yet been updated by the derived constructor.
public class Tester
{
public static void Main()
{
var bad = new DerivedFromBad();
}
}
An improved implementation removes the virtual method from the base class constructor, and uses an Initialize method. Creating a new instance of DerivedFromBetter displays the expected "DerivedFromBetter"
public class BetterBaseClass
{
protected string state;
public BetterBaseClass()
{
this.state = "BetterBaseClass";
this.Initialize();
}
public void Initialize()
{
this.DisplayState();
}
public virtual void DisplayState()
{
}
}
public class DerivedFromBetter : BetterBaseClass
{
public DerivedFromBetter()
{
this.state = "DerivedFromBetter";
}
public override void DisplayState()
{
Console.WriteLine(this.state);
}
}
A: The warning is a reminder that virtual members are likely to be overridden in a derived class. In that case whatever the parent class did to a virtual member will be undone or changed by the overriding child class. Look at the small example below for clarity.
The parent class below attempts to set a value to a virtual member in its constructor. This will trigger the ReSharper warning; let's look at the code:
public class Parent
{
public virtual object Obj{get;set;}
public Parent()
{
// Re-sharper warning: this is open to change from
// inheriting class overriding virtual member
this.Obj = new Object();
}
}
The child class here overrides the parent property. If this property were not marked virtual, the compiler would warn that the property hides the property on the parent class and suggest that you add the 'new' keyword if it is intentional.
public class Child: Parent
{
public Child():base()
{
this.Obj = "Something";
}
public override object Obj{get;set;}
}
Finally, the impact in use: the output of the example below discards the initial value set by the parent class constructor.
This is what ReSharper attempts to warn you about: values set in the parent class constructor are open to being overwritten by the child class constructor, which is called right after the parent class constructor.
public class Program
{
public static void Main()
{
var child = new Child();
// anything that is done on parent virtual member is destroyed
Console.WriteLine(child.Obj);
// Output: "Something"
}
}
A: Beware of blindly following Resharper's advice and making the class sealed!
If it's a model in EF Code First, it will remove the virtual keyword, and that would disable lazy loading of its relationships.
public virtual User User { get; set; }
A: In C#, a base class' constructor runs before the derived class' constructor, so any instance fields that a derived class might use in the possibly-overridden virtual member are not initialized yet.
Do note that this is just a warning to make you pay attention and make sure everything is all right. There are actual use cases for this scenario; you just have to document that the virtual member must not use any instance fields declared in a derived class below the class whose constructor calls it.
A: The rules of C# are very different from that of Java and C++.
When you are in the constructor for some object in C#, that object exists in a fully initialized (just not "constructed") form, as its fully derived type.
namespace Demo
{
class A
{
public A()
{
System.Console.WriteLine("This is a {0},", this.GetType());
}
}
class B : A
{
}
// . . .
B b = new B(); // Output: "This is a Demo.B"
}
This means that if you call a virtual function from the constructor of A, it will resolve to any override in B, if one is provided.
Even if you intentionally set up A and B like this, fully understanding the behavior of the system, you could be in for a shock later. Say you called virtual functions in B's constructor, "knowing" they would be handled by B or A as appropriate. Then time passes, and someone else decides they need to define C, and override some of the virtual functions there. All of a sudden B's constructor ends up calling code in C, which could lead to quite surprising behavior.
It is probably a good idea to avoid virtual functions in constructors anyway, since the rules are so different between C#, C++, and Java. Your programmers may not know what to expect!
A: There are well-written answers above for why you wouldn't want to do that. Here's a counter-example where perhaps you would want to do that (translated into C# from Practical Object-Oriented Design in Ruby by Sandi Metz, p. 126).
Note that GetDependency() isn't touching any instance variables. It would be static if static methods could be virtual.
(To be fair, there are probably smarter ways of doing this via dependency injection containers or object initializers...)
public class MyClass
{
private IDependency _myDependency;
public MyClass(IDependency someValue = null)
{
_myDependency = someValue ?? GetDependency();
}
// If this were static, it could not be overridden
// as static methods cannot be virtual in C#.
protected virtual IDependency GetDependency()
{
return new SomeDependency();
}
}
public class MySubClass : MyClass
{
protected override IDependency GetDependency()
{
return new SomeOtherDependency();
}
}
public interface IDependency { }
public class SomeDependency : IDependency { }
public class SomeOtherDependency : IDependency { }
A: When an object written in C# is constructed, what happens is that the initializers run in order from the most derived class to the base class, and then constructors run in order from the base class to the most derived class (see Eric Lippert's blog for details as to why this is).
Also in .NET objects do not change type as they are constructed, but start out as the most derived type, with the method table being for the most derived type. This means that virtual method calls always run on the most derived type.
When you combine these two facts you are left with the problem that if you make a virtual method call in a constructor, and it is not the most derived type in its inheritance hierarchy, that it will be called on a class whose constructor has not been run, and therefore may not be in a suitable state to have that method called.
This problem is, of course, mitigated if you mark your class as sealed to ensure that it is the most derived type in the inheritance hierarchy - in which case it is perfectly safe to call the virtual method.
A: There's a difference between C++ and C# in this specific case.
In C++ the object is not initialized yet, and therefore it is unsafe to call a virtual function inside a constructor.
In C#, when a class object is created, all its members are zero-initialized. It is possible to call a virtual function in the constructor, but it might access members that are still zero (or null). If you don't need to access members, it is quite safe to call a virtual function in C#.
A: Just to add my thoughts. If you always initialize the private field when you define it, this problem should be avoided. At least the code below works like a charm:
class Parent
{
public Parent()
{
DoSomething();
}
protected virtual void DoSomething()
{
}
}
class Child : Parent
{
private string foo = "HELLO";
public Child() { /*Originally foo initialized here. Removed.*/ }
protected override void DoSomething()
{
Console.WriteLine(foo.ToLower());
}
}
A: I think that ignoring the warning might be legitimate if you want to give the child class the ability to set or override a property that the parent constructor will use right away:
internal class Parent
{
public Parent()
{
Console.WriteLine("Parent ctor");
Console.WriteLine(Something);
}
protected virtual string Something { get; } = "Parent";
}
internal class Child : Parent
{
public Child()
{
Console.WriteLine("Child ctor");
Console.WriteLine(Something);
}
protected override string Something { get; } = "Child";
}
The risk here would be for the child class to set the property from its constructor in which case the change in the value would occur after the base class constructor has been called.
My use case is that I want the child class to provide a specific value or a utility class such as a converter and I don't want to have to call an initialization method on the base.
The output of the above when instantiating the child class is:
Parent ctor
Child
Child ctor
Child
A: Another interesting thing I found is that the ReSharper error can be 'satisfied' by doing something like the code below, which seems pointless to me. However, as mentioned by many earlier, it is still not a good idea to call virtual properties/methods in the constructor.
public class ConfigManager
{
public virtual int MyPropOne { get; private set; }
public virtual string MyPropTwo { get; private set; }
public ConfigManager()
{
Setup();
}
private void Setup()
{
MyPropOne = 1;
MyPropTwo = "test";
}
}
A: I would just add an Initialize() method to the base class and then call that from derived constructors. That method will call any virtual/abstract methods/properties AFTER all of the constructors have been executed :)
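A rough sketch of that idea is below; the names are placeholders, and it is only safe as long as the most-derived constructor is the one that calls Initialize():
using System;

abstract class SafeBase
{
    // No virtual calls in the constructor; the most-derived constructor
    // calls Initialize() once its own construction work is done.
    protected void Initialize()
    {
        DoSomething();
    }

    protected abstract void DoSomething();
}

class SafeDerived : SafeBase
{
    private readonly string foo = "HELLO";

    public SafeDerived()
    {
        Initialize();   // foo is already assigned at this point
    }

    protected override void DoSomething()
    {
        Console.WriteLine(foo.ToLower());
    }
}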
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1451"
}
|
Q: Is it possible to determine the current user from within a global keyboard hook in .NET I want to create a keyboard and mouse hook which will be started as a windows service. I want to monitor the activity of the various users who use the system throughout the day. i.e. which users are active at what times.
Is it possible to determine which user will be receiving the events? (The service will be running as a separate user, so getCurrentUser is not appropriate.)
A: No, Environment.UserName does not work - the hook procedure is not called under the context of the input receiver.
Indeed, I think this is not possible - the _LL hooks, which you are no doubt using if using .NET, are low-level hooks. It seems to me that they are executed well before Windows even determines which desktop/application will receive the event. I may be wrong, though - I have never used the _LL hooks myself.
A: @TcKs - Um, what about Fast User Switching?
A: Another way:
The WTSGetActiveConsoleSessionId function allows you to get the ID of the active session. Specifically:
The WTSGetActiveConsoleSessionId function retrieves the Terminal Services session that is currently attached to the physical console. The physical console is the monitor, keyboard, and mouse. Note that it is not necessary that Terminal Services be running for this function to succeed.
Then you can use WTSQueryUserToken to get the user's token, and with that token you should be able to get information about the user.
These functions are from Terminal Services, but the documentation says:
Note that it is not necessary that Terminal Services be running for this function to succeed.
So I think, this can be way.
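A rough sketch of that approach is below; error handling is minimal, and it assumes the service account has the privileges that WTSQueryUserToken requires (typically LocalSystem):
using System;
using System.Runtime.InteropServices;
using System.Security.Principal;

static class ConsoleUser
{
    [DllImport("kernel32.dll")]
    static extern uint WTSGetActiveConsoleSessionId();

    [DllImport("wtsapi32.dll", SetLastError = true)]
    static extern bool WTSQueryUserToken(uint sessionId, out IntPtr token);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool CloseHandle(IntPtr handle);

    // Returns the name of the user attached to the physical console,
    // or null if nobody is logged on or the call fails.
    public static string GetActiveConsoleUserName()
    {
        uint sessionId = WTSGetActiveConsoleSessionId();
        IntPtr token;
        if (!WTSQueryUserToken(sessionId, out token))
            return null;
        try
        {
            WindowsIdentity identity = new WindowsIdentity(token);
            return identity.Name;
        }
        finally
        {
            CloseHandle(token);
        }
    }
}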
A: I don't know about these hooks - do they receive events from Remote Desktop keyboards? If they only get the local keyboard, then I think you need to find the owner of WinSta0.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Business Logic: Database or Application Layer The age old question. Where should you put your business logic, in the database as stored procedures ( or packages ), or in the application/middle tier? And more importantly, Why?
Assume database independence is not a goal.
A: While there is no one right answer - it depends on the project in question, I would recommend the approach advocated in "Domain Driven Design" by Eric Evans. In this approach the business logic is isolated in its own layer - the domain layer - which sits on top of the infrastructure layer(s) - which could include your database code, and below the application layer, which sends the requests into the domain layer for fulfilment and listens for confirmation of their completion, effectively driving the application.
This way, the business logic is captured in a model which can be discussed with those who understand the business aside from technical issues, and it should make it easier to isolate changes in the business rules themselves, the technical implementation issues, and the flow of the application which interacts with the business (domain) model.
I recommend reading the above book if you get the chance as it is quite good at explaining how this pure ideal can actually be approximated in the real world of real code and projects.
A: Anything that affects data integrity must be put at the database level. Other things besides the user interface often put data into, update or delete data from the database including imports, mass updates to change a pricing scheme, hot fixes, etc. If you need to ensure the rules are always followed, put the logic in defaults and triggers.
This is not to say that it isn't a good idea to also have it in the user interface (why bother sending information that the database won't accept), but to ignore these things in the database is to court disaster.
A: While there are certainly benefits to have the business logic on the application layer, I'd like to point out that the languages/frameworks seem to change more frequently then the databases.
Some of the systems that I support, went through the following UIs in the last 10-15 years: Oracle Forms/Visual Basic/Perl CGI/ ASP/Java Servlet. The one thing that didn't change - the relational database and stored procedures.
A: The only thing that goes in a database is data.
Stored procedures are a maintenance nightmare. They aren't data and they don't belong in the database. The endless coordination between developers and DBA's is little more than organizational friction.
It's hard to keep good version control over stored procedures. The code outside the database is really easy to install -- when you think you've got the wrong version you just do an SVN UP (maybe an install) and your application's back to a known state. You have environment variables, directory links, and lots of environment control over the application.
You can, with simple PATH manipulations, have variant software available for different situations (training, test, QA, production, customer-specific enhancements, etc., etc.)
The code inside the database, however, is much harder to manage. There's no proper environment -- no "PATH", directory links or other environment variables -- to provide any usable control over what software's being used; you have a permanent, globally bound set of application software stuck in the database, married to the data.
Triggers are even worse. They're both a maintenance and a debugging nightmare. I don't see what problem they solve; they seem to be a way of working around badly-designed applications where someone couldn't be bothered to use the available classes (or function libraries) correctly.
While some folks find the performance argument compelling, I still haven't seen enough benchmark data to convince me that stored procedures are all that fast. Everyone has an anecdote, but no one has side-by-side code where the algorithms are more-or-less the same.
[In the examples I've seen, the old application was a poorly designed mess; when the stored procedures were written, the application was re-architected. I think the design change had more impact than the platform change.]
A: Database independence, which the questioner rules out as a consideration in this case, is the strongest argument for taking logic out of the database. The strongest argument for database independence is for the ability to sell software to companies with their own preference for a database backend.
Therefore, I'd consider the major argument for taking stored procedures out of the database to be a commercial one only, not a technical one. There may be technical reasons but there are also technical reasons for keeping it in there -- performance, integrity, and the ability to allow multiple applications to use the same API for example.
Whether or not to use SP's is also strongly influenced by the database that you are going to use. If you take database independence out of consideration then you're going to have very different experiences using T-SQL or using PL/SQL.
If you are using Oracle to develop an application then PL/SQL is an obvious choice as a language. It is very tightly coupled with the data, continually improved in every release, and any decent development tool will integrate PL/SQL development with CVS, Subversion, or the like.
Oracle's web-based Application Express development environment is even built 100% with PL/SQL.
A: If you need database independence, you'll probably want to put all your business logic in the application layer since the standards available in the application tier are far more prevalent than those available to the database tier.
However, if database independence isn't the #1 factor and the skill-set of your team includes strong database skills, then putting the business logic in the database may prove to be the best solution. You can have your application folks doing application-specific things and your database folks making sure all the queries fly.
Of course, there's a big difference between being able to throw a SQL statement together and having "strong database skills" - if your team is closer to the former than the latter then put the logic in the application using one of the Hibernates of this world (or change your team!).
In my experience, in an Enterprise environment you'll have a single target database and skills in this area - in this case put everything you can in the database. If you're in the business of selling software, the database license costs will make database independence the biggest factor and you'll be implementing everything you can in the application tier.
Hope that helps.
A: It is nowadays possible to put your stored proc code under Subversion and to debug it with good tool support.
If you use stored procs that combine SQL statements, you can reduce the amount of data traffic between the application and the database, reduce the number of database calls, and gain big performance improvements.
Once we started building in C# we made the decision not to use stored procs, but now we are moving more and more code to stored procs, especially batch processing.
However, don't use triggers; use stored procs or, better, packages. Triggers do decrease maintainability.
A: Putting the code in the application layer will result in a DB independent application.
Sometimes it is better to use stored procedures for performance reasons.
It (as usual) depends on the application requirements.
A: Maintainability of your code is always a big concern when determining where business logic should go.
Integrated debugging tools and more powerful IDEs generally make maintaining middle tier code easier than the same code in a stored procedure. Unless there is a real reason otherwise, you should start with business logic in your middle tier/application and not in stored procedures.
However, when it comes to reporting and data mining/searching, stored procedures can often be a better choice. This is thanks to the power of the database's aggregation/filtering capabilities and the fact that you are keeping processing very close to the source of the data. But this may not be what most consider classic business logic anyway.
A: Put enough of the business logic in the database to ensure that the data is consistent and correct.
But don't fear having to duplicate some of this logic at another level to enhance the user experience.
A: For very simple cases you can put your business logic in stored procedures. Usually even the simple cases tend to get complicated over time. Here are the reasons I don't put business logic in the database:
Putting the business logic in the database tightly couples it to the technical implementation of the database. Changing a table will cause you to change a lot of the stored procedures again causing a lot of extra bugs and extra testing.
Usually the UI depends on business logic for things like validation. Putting these things in the database will cause tight coupling between the database and the UI or in different cases duplicates the validation logic between those two.
It will get hard to have multiple applications work on the same database. Changes for one application will cause others to break. This can quickly turn into a maintenance nightmare, so it doesn't really scale.
More practically SQL isn't a good language to implement business logic in an understandable way. SQL is great for set based operations but it misses constructs for "programming in the large" it's hard to maintain big amounts of stored procedures. Modern OO languages are better suited and more flexible for this.
This doesn't mean you can't use stored procs and views. I think it sometimes is a good idea to put an extra layer of stored procedures and views between the tables and application(s) to decouple the two. That way you can change the layout of the database without changing the external interface, allowing you to refactor the database independently.
A: The business logic should be placed in the application/middle tier as a first choice. That way it can be expressed in the form of a domain model, be placed in source control, be split or combined with related code (refactored), etc. It also gives you some database vendor independence.
Object Oriented languages are also much more expressive than stored procedures, allowing you to better and more easily describe in code what should be happening.
The only good reasons to place code in stored procedures are: if doing so produces a significant and necessary performance benefit or if the same business code needs to be executed by multiple platforms (Java, C#, PHP). Even when using multiple platforms, there are alternatives such as web-services that might be better suited to sharing functionality.
A: The answer in my experience lies somewhere on a spectrum of values usually determined by where your organization's skills lie.
The DBMS is a very powerful beast, which means proper or improper treatment will bring great benefit or great danger. Sadly, in too many organizations, primary attention is paid to programming staff; dbms skills, especially query development skills (as opposed to administrative) are neglected. Which is exacerbated by the fact that the ability to evaluate dbms skills is also probably missing.
And there are few programmers who sufficiently understand what they don't understand about databases.
Hence the popularity of suboptimal concepts, such as Active Records and LINQ (to throw in some obvious bias). But they are probably the best answer for such organizations.
However, note that highly scaled organizations tend to pay a lot more attention to effective use of the datastore.
A: There is no standalone right answer to this question. It depends on the requirements of your app, the preferences and skills of your developers, and the phase of the moon.
A: Business logic is to be put in the application tier and not in the database.
The reason is that a database stored procedure is always dependen on the database product you use. This break one of the advantages of the three tier model. You cannot easily change to an other database unless you provide an extra stored procedure for this database product.
on the other hand sometimes, it makes sense to put logic into a stored procedure for performance optimization.
What I want to say is business logic is to be put into the application tier, but there are exceptions (mainly performance reasons)
A: Business application 'layers' are:
1. User Interface
This implements the business user's view of their job. It uses terms that the user is familiar with.
2. Processing
This is where calculations and data manipulation happen. Any business logic that involves changing data are implemented here.
3. Database
This could be: a normalized relational database (the standard SQL-based DBMSs); an OO database, storing objects wrapping the business data; etc.
What goes Where
In getting to the above layers you need to do the necessary analysis and design. This will indicate where business logic is best implemented: data-integrity rules and concurrency/real-time issues around data updates would normally be implemented as close to the data as possible, as would calculated fields, which is a good pointer to stored procedures/triggers where data integrity and transaction control are absolutely necessary.
The business rules involving the meaning and use of the data would for the most part be implemented in the Processing layer, but would also appear in the User Interface as the user's workflow - linking the various processes in some sequence that reflects the user's job.
A: Imho. there are two conflicting concerns with deciding where business logic goes in a relational database-driven app:
*
*maintainability
*reliability
Re. maintainability: To allow for efficient future development, business logic belongs in the part of your application that's easiest to debug and version control.
Re. reliability: When there's significant risk of inconsistency, business logic belongs in the database layer. Relational databases can be designed to check for constraints on data, e.g. not allowing NULL values in specific columns, etc. When a scenario arises in your application design where some data needs to be in a specific state which is too complex to express with these simple constraints, it can make sense to use a trigger or something similar in the database layer.
Triggers are a pain to keep up to date, especially when your app is supposed to run on client systems you don't even have access to. But that doesn't mean it's impossible to keep track of them or update them. S.Lott's arguments in his answer that it's a pain and a hassle are completely valid; I'll second that, and I have been there too. But if you keep those limitations in mind when you first design your data layer, and refrain from using triggers and functions for anything but the absolute necessities, it's manageable.
In our application, most business logic is contained in the application's model layer, e.g. an invoice knows how to initialize itself from a given sales order. When a bunch of different things are modified sequentially for a complex set of changes like this, we roll them up in a transaction to maintain consistency, instead of opting for a stored procedure. Calculation of totals etc. are all done with methods in the model layer. But when we need to denormalize something for performance or insert data into a 'changes' table used by all clients to figure out which objects they need to expire in their session cache, we use triggers/functions in the database layer to insert a new row and send out a notification (Postgres listen/notify stuff) from this trigger.
After having our app in the field for about a year, used by hundreds of customers every day, the only thing I would change if we were to start from scratch would be to design our system for creating database functions (or stored procedures, however you want to call them) with versioning and updates to them in mind from the get-go.
Thankfully, we do have some system in place to keep track of schema versions, so we built something on top of that to take care of replacing database functions. It would've saved us some time now if we'd considered the need to replace them from the beginning though.
Of course, everything changes when you step outside of the realm of RDBMS's into tuple-storage systems like Amazon SimpleDB and Google's BigTable. But that's a different story :)
A: It's really up to you, as long as you're consistent.
One good reason to put it in your database layer: if you are fairly sure that your clients will never ever change their database back-end.
One good reason to put it in the application layer: if you are targeting multiple persistence technologies for your application.
You should also take into account core competencies. Are your developers mainly application layer developers, or are they primarily DBA-types?
A: We put a lot of business logic in stored procedures - it's not ideal, but quite often it's a good balance between performance and reliability.
And we know where it is without having to search through acres of solutions and codebase!
A: Scalability is also a very important factor for pushing business logic to the middle/app layer rather than the database layer. It should be understood that the database layer is only for interacting with the database, not for manipulating what is returned to or from the database.
A: I remember reading an article somewhere that pointed out that pretty well everything can be, at some level, part of the business logic, and so the question is meaningless.
I think the example given was the display of an invoice onscreen. The decision to mark an overdue one in red is a business decision...
A: It's a continuum. IMHO the biggest factor is speed. How can you get this sucker up and running as quickly as possible while still adhering to good tenets of programming such as maintainability, performance, scalability, security, reliability, etc.? Often SQL is the most concise way to express something, and it also happens to be the most performant much of the time, except for string operations and the like, but that's where your CLR procs can help. My belief is to liberally sprinkle business logic around wherever you feel it is best for the undertaking at hand. If you have a bunch of application developers who shit their pants when looking at SQL, then let them use their app logic. If you really want to create a high-performance application with large datasets, put as much logic in the DB as you can. Fire your DBAs and give developers ultimate freedom over their dev databases. There is no one answer or best tool for the job. You have multiple tools, so become expert at all levels of the application, and you'll soon find that you're spending a lot more time writing nice, concise, expressive SQL where warranted and using the application layer other times. To me, ultimately, reducing the number of lines of code is what leads to simplicity. We have just converted a SQL-rich application with a mere 2500 lines of app code and 1000 lines of SQL to a domain model which now has 15500 lines of app code and 2500 lines of SQL to achieve what the former SQL-rich app did. If you can justify a 6-fold increase in code as "simplified" then go right ahead.
A: This is a great question! I found this after I had already asked a similar question, but this is more specific. It came up as a result of a design change decision that I wasn't involved in making.
Basically, what I was told was that if you have millions of rows of data in your database tables, then look at putting business logic into stored procedures and triggers. That is what we are doing right now, converting a Java app into stored procedures for maintainability, as the Java code had become convoluted.
I found this article: The Business Logic Wars. The author also made the millions-of-rows-in-a-table argument, which I found interesting. He also put business logic in JavaScript, which is client-side and outside of the business logic tier. I hadn't thought about this before, even though I've used JavaScript for validation for years, along with server-side validation.
My opinion is that you want the business logic in the application/middle tier as a rule of thumb, but don't discount cases where it makes sense to put it into the database.
One last point, there is another group where I'm working presently that is doing massive database work for research and the amount of data they are dealing with is immense. Still, for them they don't have any business logic in the database itself, but keep it in the application/middle tier. For their design, the application/middle tier was the correct place for it, so I wouldn't use the size of tables as the only design consideration.
A: Business logic is usually embodied by objects, and the various language constructs of encapsulation, inheritance, and polymorphism. For example, if a banking application is passing around money, there may be a Money type that defines the business elements of what "money" is, as opposed to using a primitive decimal to represent money. For this reason, well-designed OOP is where the "business logic" lives - not strictly in any layer.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42"
}
|
Q: Issue writing to single file in Web service in .NET I have created a webservice in .net 2.0, C#. I need to log some information to a file whenever different methods are called by the web service clients.
The problem comes when one user process is writing to a file and another process tries to write to it. I get the following error:
The process cannot access the file because it is being used by another process.
The solutions that I have tried to implement in C# and failed are as below.
*
*Implemented singleton class that contains code that writes to a file.
*Used lock statement to wrap the code that writes to the file.
*I have also tried to use open source logger log4net but it also is not a perfect solution.
*I know about logging to system event logger, but I do not have that choice.
I want to know if there exists a perfect and complete solution to such a problem?
A: The locking is probably failing because your webservice is being run by more than one worker process.
You could protect the access with a named mutex, which is shared across processes, unlike the locks you get by using lock(someobject) {...}:
Mutex mutex = new Mutex(false, "mymutex");
mutex.WaitOne();
// access file
mutex.ReleaseMutex();
A: You don't say how your web service is hosted, so I'll assume it's in IIS. I don't think the file should be accessed by multiple processes unless your service runs in multiple application pools. Nevertheless, I guess you could get this error when multiple threads in one process are trying to write.
I think I'd go for the solution you suggest yourself, Pradeep, build a single object that does all the writing to the log file. Inside that object I'd have a Queue into which all data to be logged gets written. I'd have a separate thread reading from this queue and writing to the log file. In a thread-pooled hosting environment like IIS, it doesn't seem too nice to create another thread, but it's only one... Bear in mind that the in-memory queue will not survive IIS resets; you might lose some entries that are "in-flight" when the IIS process goes down.
Other alternatives certainly include using a separate process (such as a Service) to write to the file, but that has extra deployment overhead and IPC costs. If that doesn't work for you, go with the singleton.
A: Maybe write a "queue line" of sorts for writing to the file, so when you try to write to the file it keeps checking to see if the file is locked, if it is - it keeps waiting, if it isn't locked - then write to it.
A: You could push the results onto an MSMQ Queue and have a windows service pick the items off of the queue and log them. It's a little heavy, but it should work.
A: Joel and charles. That was quick! :)
Joel: When you say "queue line" do you mean creating a separate thread that runs in a loop to keep checking the queue as well as write to a file when it is not locked?
Charles: I know about MSMQ and windows service combination, but like I said I have no choice other than writing to a file from within the web service :)
thanks
pradeep_tp
A: Trouble with all the approaches tried so far is that multiple threads can enter the code.
That is, multiple threads try to acquire and use the file handle - hence the errors. You need a single thread, outside of the worker threads, to do the work, with a single file handle held open.
Probably the easiest thing to do would be to create a thread during application start in Global.asax and have that listen to a synchronized in-memory queue (System.Collections.Generic.Queue). Have the thread open and own the lifetime of the file handle; only that thread can write to the file.
Client requests in ASP will lock the queue momentarily, push the new logging message onto the queue, then unlock.
The logger thread will poll the queue periodically for new messages - when messages arrive on the queue, the thread will read and dispatch the data into the file.
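A minimal sketch of that design in .NET 2.0 terms is below; the file name, class name and queue type are placeholders:
using System.Collections.Generic;
using System.IO;
using System.Threading;

// One background thread owns the file handle; request threads only
// enqueue strings, so no two threads ever write to the file at once.
public sealed class LogWriter
{
    static readonly LogWriter instance = new LogWriter();
    readonly Queue<string> queue = new Queue<string>();
    readonly object sync = new object();

    LogWriter()
    {
        Thread worker = new Thread(WriteLoop);
        worker.IsBackground = true;
        worker.Start();
    }

    public static LogWriter Instance { get { return instance; } }

    public void Log(string message)
    {
        lock (sync)
        {
            queue.Enqueue(message);
            Monitor.Pulse(sync);              // wake the writer thread
        }
    }

    void WriteLoop()
    {
        StreamWriter writer = new StreamWriter("service.log", true);
        while (true)
        {
            string message;
            lock (sync)
            {
                while (queue.Count == 0)
                    Monitor.Wait(sync);       // sleep instead of spinning
                message = queue.Dequeue();
            }
            writer.WriteLine(message);
            writer.Flush();
        }
    }
}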
A: To know what I am trying to do in my code, following is the singletone class I have implemented in C#
public sealed class FileWriteTest
{
private static volatile FileWriteTest instance;
private static object syncRoot = new Object();
private static Queue logMessages = new Queue();
private static ErrorLogger oNetLogger = new ErrorLogger();
private FileWriteTest() { }
public static FileWriteTest Instance
{
get
{
if (instance == null)
{
lock (syncRoot)
{
if (instance == null)
{
instance = new FileWriteTest();
Thread MyThread = new Thread(new ThreadStart(StartCollectingLogs));
MyThread.Start();
}
}
}
return instance;
}
}
private static void StartCollectingLogs()
{
//Infinite loop
while (true)
{
cdoLogMessage objMessage = new cdoLogMessage();
if (logMessages.Count != 0)
{
objMessage = (cdoLogMessage)logMessages.Dequeue();
oNetLogger.WriteLog(objMessage.LogText, objMessage.SeverityLevel);
}
}
}
public void WriteLog(string logText, SeverityLevel errorSeverity)
{
cdoLogMessage objMessage = new cdoLogMessage();
objMessage.LogText = logText;
objMessage.SeverityLevel = errorSeverity;
logMessages.Enqueue(objMessage);
}
}
When I run this code in debug mode (simulating just one user access), I get a "stack overflow" error at the line where the queue is dequeued.
Note: In the above code ErrorLogger is a class that has code to write to the File. objMessage is an entity class to carry the log message.
A: Alternatively, you might want to do error logging into the database (if you're using one)
A: Koth,
I have implemented the Mutex lock, which has removed the "stack overflow" error. I still have to do load testing before I can conclude whether it works fine in all cases.
I was reading about Mutex objects on one of the websites, which says that a Mutex affects performance. I want to know one thing about locking with a Mutex.
Suppose User Process1 is writing to a file and at the same time User Process2 tries to write to the same file. Since Process1 has put a lock on the code block, will Process2 keep trying or just give up after the first attempt?
thanks
pradeep_tp
A: It will wait until the mutex is released....
A:
Joel: When you say "queue line" do you
mean creating a separate thread that
runs in a loop to keep checking the
queue as well as write to a file when
it is not locked?
Yeah, that's basically what I was thinking. Have another thread that has a while loop until it can get access to the file and save, then end.
But you would have to do it in a way where the first thread to start looking gets access first. Which is why I say queue.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: I want to log the Poison message that my WCF service is dropping using MSMQ 3.0 and Windows 2003
A: You can implement a custom IErrorHandler and associate it with your service using a custom behavior. In your implementation, check if the exception raised is of type MsmqPoisonMessageException, and if so, go out and grab the message from the queue using System.Messaging.MessageQueue and log it.
There's a sample that shows how most of this stuff is done: it moves the message to another queue, but should be trivial to modify it so that it just logs the message somewhere instead.
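A rough sketch of such an IErrorHandler is below; the queue path and the logging call are placeholders, and in a real service you would register it through a custom behavior as the linked sample does:
using System;
using System.Messaging;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// Sketch only: when a poison-message fault occurs, read the offending
// message back from the queue by its lookup id and log it.
public class PoisonMessageLogger : IErrorHandler
{
    public bool HandleError(Exception error)
    {
        MsmqPoisonMessageException poison = error as MsmqPoisonMessageException;
        if (poison == null)
            return false;

        using (MessageQueue queue =
            new MessageQueue(@".\private$\YourServiceQueue"))   // assumed path
        {
            queue.MessageReadPropertyFilter.SetAll();
            using (System.Messaging.Message msg =
                queue.ReceiveByLookupId(poison.MessageLookupId))
            {
                // Replace with your real logging; Id and Label are always readable.
                Console.WriteLine("Poison message {0}: {1}", msg.Id, msg.Label);
            }
        }
        return true;
    }

    public void ProvideFault(Exception error, MessageVersion version,
        ref System.ServiceModel.Channels.Message fault)
    {
        // Nothing to do for a one-way MSMQ contract.
    }
}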
A: You could probably add a service like the following that reads messages from your poison queue and logs them.
<service name="YourPosionMessageHandler"
<endpoint
address="net.msq://localhost/private/YourServiceQueue;poison"
binding="netMsmqBinding"
/>
</service>
A: There is a perfect example for this on MSDN.
http://msdn.microsoft.com/en-us/library/ms751472.aspx
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do you start building an ASP.NET web app? Say for example you're getting a web app project that interacts with a database.
How do you start your development? Do you start by designing your database, drawing a simple ERD and writing create scripts? Do you start by designing the look of your web app, maybe using Photoshop, and then create a master page for it? Or do you start by designing your domain models, with minimal looks, and apply a prettier design later on?
Please share you thoughts on this... Cheers...
A: I tend to do the last of those ideas, "start by designing your domain models, with minimal looks, and apply a prettier design later on". I like to make my application, of any kind, do what I want it to do before I spend time on making it look pretty.
A: You start by deciding which way you start. No, but really, it depends on too many factors to have a general answer: do you develop using concepts of agile development, are there specified functional designs, did the client give you strict requirements, what is your own experience, etc.
Generally we start by developing our business objects first, then creating views for them using sample data / fake databases or sometimes even plain text files. From there, we start filling in the bits and pieces. If not all requirements are set, it's best to keep the database out of your development as long as possible. That way you prevent yourself from having to change your db, sprocs and interaction with your db every time.
A: Figure out how the users need to interact with your site first. What are they needing to achieve?
Let this define your ERD and the database model will quickly follow.
Then, when you actually start coding you'll be heading in the right direction.
Many will also say, write your Unit Tests first. It's hard to do but often worth it.
A: UI and DB, but it depends which one is really first. The UI is a very important thing because your customer has to work with it in the end (some say there might be developers who sometimes forget...). The database design is a very good way to put (some) structure in all the business needs which aren't always specified in a strictly and well structured way.
This is junior's experience, I've been working in development since 2004, beginning with a 4-year-apprenticeship in a development company.
Cheers,
Matthias
A: I start with the functional UI, moving from there to the business layer and db (usually in tandem to start with). Design is normally provided in some respect by the client, so I try to apply that early on, without letting it get in the way. I like to get the domain sorted out in one step (minor changes are acceptable later), and I create my scripts as I get to them in my code.
It sounds like a bit of a runaround, but it works for me.
A: I definitely start with a UI Prototype.
Clients never know what they really want untill they see it.
A simple change on a UI can translate to a dramatic change in the core components of your system. So rather let the user play with a pretty prototype until they are confident it's what they're looking for, and then dive into system objects and database design.
With regards to Database and System objects, I find it difficult to decide which way to go. Going database first definitely influences my class design, so I try go object first as much as possible. It turns into a more human design IMO
A: Depends on the Project Id' say.
Usually it's good to have a Photoshop mockup to show your client what they're getting.
On small no-maintenance projects I try to start by modeling the database first to get a better overview on the structure. Then it's usually quite easy to create a Web Application around it.
On bigger projects I usually start by creating (basic) prototypes of critical pieces of the software. I then show those to clients and afterwards throw them away. They're just there to help me understand the upcoming challenges better.
But as said, it's a matter of taste and project.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Determine the number of lines within a text file Is there an easy way to programmatically determine the number of lines within a text file?
A: This would use less memory, but probably take longer
int count = 0;
string line;
TextReader reader = new StreamReader("file.txt");
while ((line = reader.ReadLine()) != null)
{
count++;
}
reader.Close();
A: Reading a file in and of itself takes some time, and garbage collecting the result is another problem, as you read the whole file just to count the newline character(s).
At some point, someone is going to have to read the characters in the file, regardless of whether that is the framework or your code. This means opening the file and reading it into memory; if the file is large, this is potentially a problem, as that memory then needs to be garbage collected.
Nima Ara made a nice analysis that you might take into consideration
Here is the solution proposed, as it reads 4 characters at a time, counts the line feed character and re-uses the same memory address again for the next character comparison.
private const char CR = '\r';
private const char LF = '\n';
private const char NULL = (char)0;
public static long CountLinesMaybe(Stream stream)
{
Ensure.NotNull(stream, nameof(stream));
var lineCount = 0L;
var byteBuffer = new byte[1024 * 1024];
const int BytesAtTheTime = 4;
var detectedEOL = NULL;
var currentChar = NULL;
int bytesRead;
while ((bytesRead = stream.Read(byteBuffer, 0, byteBuffer.Length)) > 0)
{
var i = 0;
for (; i <= bytesRead - BytesAtTheTime; i += BytesAtTheTime)
{
currentChar = (char)byteBuffer[i];
if (detectedEOL != NULL)
{
if (currentChar == detectedEOL) { lineCount++; }
currentChar = (char)byteBuffer[i + 1];
if (currentChar == detectedEOL) { lineCount++; }
currentChar = (char)byteBuffer[i + 2];
if (currentChar == detectedEOL) { lineCount++; }
currentChar = (char)byteBuffer[i + 3];
if (currentChar == detectedEOL) { lineCount++; }
}
else
{
if (currentChar == LF || currentChar == CR)
{
detectedEOL = currentChar;
lineCount++;
}
i -= BytesAtTheTime - 1;
}
}
for (; i < bytesRead; i++)
{
currentChar = (char)byteBuffer[i];
if (detectedEOL != NULL)
{
if (currentChar == detectedEOL) { lineCount++; }
}
else
{
if (currentChar == LF || currentChar == CR)
{
detectedEOL = currentChar;
lineCount++;
}
}
}
}
if (currentChar != LF && currentChar != CR && currentChar != NULL)
{
lineCount++;
}
return lineCount;
}
Above you can see that a line is read one character at a time by the underlying framework as well, since you need to read all characters to see the line feed.
If you profile it, as done by Nima, you would see that this is a rather fast and efficient way of doing it.
A: If by easy you mean lines of code that are easy to decipher but perchance inefficient?
string[] lines = System.IO.File.ReadAllLines(filename);
int cnt = lines.Length;
That's probably the quickest way to know how many lines.
You could also do (depending on if you are buffering it in)
// for large files
while (...reads into buffer) {
    string[] lines = Regex.Split(buffer, System.Environment.NewLine);
}
There are other numerous ways but one of the above is probably what you'll go with.
A: Seriously belated edit: If you're using .NET 4.0 or later
The File class has a new ReadLines method which lazily enumerates lines rather than greedily reading them all into an array like ReadAllLines. So now you can have both efficiency and conciseness with:
var lineCount = File.ReadLines(@"C:\file.txt").Count();
Original Answer
If you're not too bothered about efficiency, you can simply write:
var lineCount = File.ReadAllLines(@"C:\file.txt").Length;
For a more efficient method you could do:
var lineCount = 0;
using (var reader = File.OpenText(@"C:\file.txt"))
{
while (reader.ReadLine() != null)
{
lineCount++;
}
}
Edit: In response to questions about efficiency
The reason I said the second was more efficient was regarding memory usage, not necessarily speed. The first one loads the entire contents of the file into an array which means it must allocate at least as much memory as the size of the file. The second merely loops one line at a time so it never has to allocate more than one line's worth of memory at a time. This isn't that important for small files, but for larger files it could be an issue (if you try and find the number of lines in a 4GB file on a 32-bit system, for example, where there simply isn't enough user-mode address space to allocate an array this large).
In terms of speed I wouldn't expect there to be a lot in it. It's possible that ReadAllLines has some internal optimisations, but on the other hand it may have to allocate a massive chunk of memory. I'd guess that ReadAllLines might be faster for small files, but significantly slower for large files; though the only way to tell would be to measure it with a Stopwatch or code profiler.
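As a rough sketch of such a measurement (the file path is a placeholder; ReadLines needs .NET 4 or later):
using System;
using System.Diagnostics;
using System.IO;
using System.Linq;

class LineCountTiming
{
    static void Main()
    {
        string path = @"C:\file.txt";                    // point at a suitably large test file

        Stopwatch sw = Stopwatch.StartNew();
        int eager = File.ReadAllLines(path).Length;      // loads every line into an array first
        sw.Stop();
        Console.WriteLine("ReadAllLines: {0} lines in {1} ms", eager, sw.ElapsedMilliseconds);

        sw.Restart();
        int lazy = File.ReadLines(path).Count();         // streams the file one line at a time
        sw.Stop();
        Console.WriteLine("ReadLines:    {0} lines in {1} ms", lazy, sw.ElapsedMilliseconds);
    }
}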
A: You could quickly read it in, and increment a counter, just use a loop to increment, doing nothing with the text.
A: The easiest:
int lines = File.ReadAllLines("myfile").Length;
A: Count the carriage returns/line feeds. I believe in Unicode they are still 0x000D and 0x000A respectively. That way you can be as efficient or as inefficient as you want, and decide if you have to deal with both characters or not.
A: A viable option, and one that I have personally used, would be to add your own header to the first line of the file. I did this for a custom model format for my game. Basically, I have a tool that optimizes my .obj files, getting rid of the crap I don't need, converts them to a better layout, and then writes the total number of lines, faces, normals, vertices, and texture UVs on the very first line. That data is then used by various array buffers when the model is loaded.
This is also useful because you only need to loop through the file once to load it in, instead of once to count the lines, and again to read the data into your created buffers.
A: Use this:
int get_lines(string file)
{
var lineCount = 0;
using (var stream = new StreamReader(file))
{
while (stream.ReadLine() != null)
{
lineCount++;
}
}
return lineCount;
}
A: You can launch the "wc.exe" executable (it comes with UnixUtils and does not need installation) as an external process. It supports different line count methods (like unix vs mac vs windows).
A: try {
string path = args[0];
FileStream fh = new FileStream(path, FileMode.Open, FileAccess.Read);
int i;
string s = "";
while ((i = fh.ReadByte()) != -1)
s = s + (char)i;
//its for reading number of paragraphs
int count = 0;
for (int j = 0; j < s.Length - 1; j++) {
if (s.Substring(j, 1) == "\n")
count++;
}
Console.WriteLine("The total searches were :" + count);
fh.Close();
} catch(Exception ex) {
Console.WriteLine(ex.Message);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "231"
}
|
Q: Best practice to make a multi language application in C#/WinForms? I've been looking into making applications suitable for multiple languages in C# since I need to work on a small project where this is the case. I have found basically two ways to do this:
Set a form's Localizable property to true, set the Language property, fill all the labels and such, and you're 'done'. The major drawback I see in this is: how to make other stuff which is not part of a form ready for multiple languages (e.g. pop-up windows, log files or windows, etc).
Create a resource file, for example 'Lang.en-us.resx' and one for every language, for example 'Lang.nl-nl.resx' and fill it up with Strings. The IDE seems to generate a class for me automatically, so in the code I can just use Lang.SomeText. The biggest drawback I see in this is: for every form I need to set all the labels and other captions myself in the code (and it doesn't seem data binding works with these resources).
I'm sure, however, that there are other methods to do this as well.
So, what is the best practice? What's the easiest for small applications (a few forms, database connection etc) and what scales best for larger applications?
A: For the benefit of others who may come across this (1+ years after the last post), I'm the author of a professional localization product that makes the entire translation process extremely easy. It's a Visual Studio add-in that will extract all ".resx" strings from any arbitrary solution and load them into a single file that can be translated using a free standalone application (translators can download this from my site). The same add-in will then import the translated strings back into your solution. Extremely easy to use with many built-in safeguards, lots of bells and whistles, and online help (you won't need it much). See http://www.hexadigm.com
A: I have always used resource files for multi-language applications.
There are many articles on the web explaining how to use them.
I have used two different ways:
*
*A resource file per form
*A global resource file
The resource file per form is easier to implement: you only need to enter the values in the resource file. But I find this approach harder to maintain, since the labels are dispersed throughout the application.
The global resource file allows you to centralise all the labels (images etc.) in one file (per language), but it means manually setting the labels in the form load. This file can also be used for error messages etc.
A question of taste...
One last point: I write programs in English and French, so I use "en" and "fr" and not "en-US" and "fr-FR". Do not complicate things; the different dialects of English (American, British, Australian etc.) have few enough differences to use only one (the same goes for French).
A: I recently wrote a program with both German and English language support. I was surprised to find out that if I simply named my english resources LanguageResources.resx and my German resources LanguageResources.de.resx, it automatically selected the correct language. The ResXFileCodeGenerator took care of it all for me.
Note that the fields in the two files were the same, and any German fields not yet entered would show up in the application in English, since the least specific file, language-wise, is the default file. When looking for a string it goes from most specific (e.g. .de-DE.resx) to least specific (e.g. .resx).
To get at your strings use the ResourceManager.GetString or ResourceManager.GetObject calls. The application should give you the ResourceManager for free.
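As a minimal sketch of that pattern (the resource base name, key and culture below are assumptions, not part of the original answer):
using System;
using System.Globalization;
using System.Resources;
using System.Threading;

class LocalizationDemo
{
    static void Main()
    {
        // Pick German; lookups fall back to the default LanguageResources.resx for missing keys.
        Thread.CurrentThread.CurrentUICulture = new CultureInfo("de");

        ResourceManager resources = new ResourceManager(
            "MyApp.LanguageResources",                   // default namespace + resx base name
            typeof(LocalizationDemo).Assembly);

        Console.WriteLine(resources.GetString("GreetingLabel"));   // hypothetical key
    }
}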
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "95"
}
|
Q: Disabling Warnings generated via _CRT_SECURE_NO_DEPRECATE What is the best way to disable the warnings generated via _CRT_SECURE_NO_DEPRECATE that allows them to be reinstated with ease and will work across Visual Studio versions?
A: I work on a multi-platform project, so I can't use the _s functions and I don't want to pollute my code with Visual Studio specific code.
My solution is to disable warning 4996 on the Visual Studio project. Go to Project -> Properties -> Configuration Properties -> C/C++ -> Advanced -> Disable Specific Warnings and add the value 4996.
If you also use the MFC and/or ATL library (not my case), define _AFX_SECURE_NO_DEPRECATE before including MFC and _ATL_SECURE_NO_DEPRECATE before including ATL.
I use this solution across Visual Studio 2003 and 2005.
P.S. If you use only Visual Studio, the secure template overloads could be a good solution.
A: You can also use the Secure Template Overloads, they will help you replace the unsecure calls with secure ones anywhere it is possible to easily deduce buffer size (static arrays).
Just add the following:
#define _CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES 1
Then fix the remaining warnings by hand, by using the _s functions.
A: A combination of @[macbirdie]'s and @[Adrian Borchardt]'s answers, which proves to be very useful in a production environment (not messing up previously existing warnings, especially during cross-platform compilation).
#if (_MSC_VER >= 1400) // Check MSC version
#pragma warning(push)
#pragma warning(disable: 4996) // Disable deprecation
#endif
//... // ...
strcat(base, cat); // Sample deprecated code
//... // ...
#if (_MSC_VER >= 1400) // Check MSC version
#pragma warning(pop) // Re-enable previous deprecation warnings
#endif
A: You could disable the warnings temporarily in places where they appear by using
#pragma warning(push)
#pragma warning(disable: warning-code) //4996 for _CRT_SECURE_NO_WARNINGS equivalent
// deprecated code here
#pragma warning(pop)
so you don't disable all warnings, which can be harmful at times.
A: For the warning-by-warning case, it's wise to restore it to the default at some point, since you are doing it on a case-by-case basis.
#pragma warning(disable: 4996) /* Disable deprecation */
// Code that causes it goes here
#pragma warning(default: 4996) /* Restore default */
A: The best way to do this is with a simple check and a guard. I usually do something like this:
#ifndef _DEPRECATION_DISABLE /* One time only */
#define _DEPRECATION_DISABLE /* Disable deprecation true */
#if (_MSC_VER >= 1400) /* Check version */
#pragma warning(disable: 4996) /* Disable deprecation */
#endif /* #if (_MSC_VER >= 1400) */
#endif /* #ifndef _DEPRECATION_DISABLE */
All that is really required is the following:
#pragma warning(disable: 4996)
Hasn't failed me yet; Hope this helps
A: You can disable the security check. Go to
Project -> Properties -> Configuration properties -> C/C++ -> Code Generation -> Security Check
and select Disable Security Check (/GS-)
A: If you don't want to pollute your source code (after all, this warning appears only with the Microsoft compiler), add the _CRT_SECURE_NO_WARNINGS symbol to your project settings via "Project"->"Properties"->"Configuration properties"->"C/C++"->"Preprocessor"->"Preprocessor definitions".
Also you can define it just before you include a header file which generates this warning.
You should add something like this
#ifdef _MSC_VER
#define _CRT_SECURE_NO_WARNINGS
#endif
And just a small remark: make sure you understand what this warning stands for, and maybe, if you don't intend to use compilers other than MSVC, consider using the safer versions of the functions, i.e. strcpy_s instead of strcpy.
A: You can define the _CRT_SECURE_NO_WARNINGS symbol to suppress them and undefine it to reinstate them back.
A: Another late answer... Here's how Microsoft uses it in their wchar.h. Notice they also disable Warning C6386:
__inline _CRT_INSECURE_DEPRECATE_MEMORY(wmemcpy_s) wchar_t * __CRTDECL
wmemcpy(_Out_opt_cap_(_N) wchar_t *_S1, _In_opt_count_(_N) const wchar_t *_S2, _In_ size_t _N)
{
#pragma warning( push )
#pragma warning( disable : 4996 6386 )
return (wchar_t *)memcpy(_S1, _S2, _N*sizeof(wchar_t));
#pragma warning( pop )
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "76"
}
|
Q: ASP MVC Preview 5 and IIS 6 Windows Authentication I've just built a basic ASP MVC web site for deployment on our intranet. It expects users to be on the same domain as the IIS box and if you're not an authenticated Windows User, you should not get access.
I've just deployed this to IIS6 running on Server 2003 R2 SP2. The web app is configured with it's own pool with it's own pool user account. The IIS Directory Security options for the web app are set to "Windows Integrated Security" only and the web.config file has:
<authentication mode="Windows" />
From a Remote Desktop session on the IIS6 server itself, an IE7 browser window can successfully authenticate and navigate the web app if accessed via http://localhost/myapp.
However, also from the server, if accessed via the server's name (ie http://myserver/myapp) then IE7 presents a credentials dialog which after three attempts entering the correct credentials eventually returns "HTTP Error 401.1 - Unauthorized: Access is denied due to invalid credentials".
The same problem occurs when a workstation browses to the web app url (naturally using the server's name and not "localhost").
The IIS6 server is a member of the only domain we have and has no firewall enabled.
Is there something I have failed to configure correctly for this to work?
Thanks,
I have tried the suggestions from Matt Ryan, Graphain, and Mike Dimmick to date without success. I have just built a virtual machine test lab with a Server 2003 DC and a separate server 2003 IIS6 server and I am able to replicate the problem.
I am seeing an entry in the IIS6 server's System Event Log the first time I try to access the site via the non-localhost url (ie http://iis/myapp). FQDN urls fail too.
Source: Kerberos, Event ID: 4
The kerberos client received a KRB_AP_ERR_MODIFIED error from the server host/iis.test.local. The target name used was HTTP/iis.test.local. This indicates that the password used to encrypt the kerberos service ticket is different than that on the target server. Commonly, this is due to identically named machine accounts in the target realm (TEST.LOCAL), and the client realm.
A: After extensive Googling I managed to find a solution on the following MSDN article:
How To: Create a Service Account for an ASP.NET 2.0 Application
Specifically the Additional Considerations section which describes "Creating Service Principal Names (SPNs) for Domain Accounts" using the setspn tool from the Windows Support Tools:
setspn -A HTTP/myserver MYDOMAIN\MyPoolUser
setspn -A HTTP/myserver.fqdn.com MYDOMAIN\MyPoolUser
This solved my problem on both my virtual test lab and my original problem server.
There is also an important note in the article that using Windows Authentication with custom pool users constrains the associated DNS name to be used by that pool only. That is, another pool with another identity would need to be associated with a different DNS name.
A: Sounds like the new Loopback check security feature of Windows Server 2003 SP1. As I understand it, is designed to prevent a particular type of interception attack.
From http://support.microsoft.com/kb/896861
SYMPTOMS
When you use the fully qualified domain name (FQDN) or a custom host header to browse a local Web site that is hosted on a computer that is running Microsoft Internet Information Services (IIS) 5.1 or IIS 6, you may receive an error message that resembles the following:
HTTP 401.1 - Unauthorized: Logon Failed
This issue occurs when the Web site uses Integrated Authentication and has a name that is mapped to the local loopback address.
Note You only receive this error message if you try to browse the Web site directly on the server. If you browse the Web site from a client computer, the Web site works as expected.
CAUSE
This issue occurs if you install Microsoft Windows XP Service Pack 2 (SP2) or Microsoft Windows Server 2003 Service Pack 1 (SP1). Windows XP SP2 and Windows Server 2003 SP1 include a loopback check security feature that is designed to help prevent reflection attacks on your computer. Therefore, authentication fails if the FQDN or the custom host header that you use does not match the local computer name.
Workaround
*
*Method 1: Disable the loopback check
*Method 2: Specify host names
See http://support.microsoft.com/kb/896861 for details.
Edit - just noticed that you said you were seeing this from Client PCs as well... that's more unusual. But I'd still look to test one of these workarounds, to see if it corrected the problem (and if so, might indicate a problem with your DNS config).
A: It sounds to me as though you've done everything right.
I'm sure you are but have you made sure you are using 'DOMAIN\user' as the user account and not just 'user'?
A: IE7 only sends Windows credentials (NTLM, Kerberos) if it identifies the server as being on the Intranet. IE7 also added an Intranet zone lockdown feature - if you're not on a domain, by default no servers are in the Intranet zone. This was done to prevent zone-migration attacks.
To change this, go to Tools/Internet Options, Security tab, then click Local Intranet. You can then manually add servers that should be treated as Intranet, by clicking the Sites button, then Advanced, or tell IE not to automatically detect your Intranet and selecting the other checkboxes as appropriate.
A: I just encountered the opposite problem - my site authenticates externally but not locally.
I compared it to the sites we have working and the difference was that the site that failed to authenticate was using Windows Authentication.
However, other sites I work with (this is a dev server) tend to have Basic Authentication.
Not sure why exactly but this fixed it.
However, at the same time I noticed "Default Domain" and "Realm" settings.
I know it's very unlikely but could these perhaps help at all?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How to route in linux I have 20 IPs from my ISP. I have them bound to a router box running CentOS. What commands, and in what order, do I run so that the other boxes on my LAN (identified either by their MAC addresses or their 192.168.x.x IPs) route out of my box via specific public IPs? For example I want MAC address xxx:xxx:xxx:0400 to go out 72.049.12.157 and xxx:xxx:xxx:0500 to go out 72.049.12.158.
A: Use iptables to setup NAT.
iptables -t nat -I POSTROUTING -s 192.168.0.0/24 -j SNAT --to-source 72.049.12.157
iptables -t nat -I POSTROUTING -s 192.168.1.0/24 -j SNAT --to-source 72.049.12.158
This should cause any ips on the 192.168.0.0 subnet to have an 'external' ip of 72.049.12.157 and those on the 192.168.1.0 subnet to have an 'external' ip of 72.049.12.158. For MAC address matching, use -m mac --mac-source MAC-ADDRESS in place of the -s 192.168.0.0/24 argument
Don't forget to activate ip forwarding:
cat /proc/sys/net/ipv4/ip_forward
If the above returns a 0 then it won't work, you'll have to enable it. Unfortunately this is distro-specific and I don't know CentOS.
For a quick hack, do this:
echo 1 > /proc/sys/net/ipv4/ip_forward
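On CentOS (and most other distributions), the setting can be made permanent, assuming the stock /etc/sysctl.conf layout, by adding the line below to /etc/sysctl.conf and then applying it with sysctl -p so it survives a reboot:
# /etc/sysctl.conf
net.ipv4.ip_forward = 1

# apply without rebooting:
sysctl -p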
A: What's the router hardware and software version?
Are you trying to do this with a linux box? Stop now and go get a router. It will save you money long-term.
A: Answering this question with the little information you gave amounts to rewriting a routing Howto here. You could either
*
*read about routing and IP in general (e.g. Linux System Administrator's Guide) or
*give us more info on the exact IP addresses you got.
The above answer using NAT is definitely not what you intend to use when you have public IP addresses. This solution is not going to scale well.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Where is a good place to brush up on some math? Math skills are becoming more and more essential, and I wonder where is a good place to brush up on some basics before moving on to some more CompSci specific stuff?
A site with lots of video's as well as practice exercises would be a double win but I can't seem to find one.
A: It depends on your math level. You should start by revising what you should know up to that point and then go further into algorithm mathematics, geometry (transforms etc.), statistics and more.
There are tons of places on the internet where you can learn:
http://www.math.cornell.edu/Courses/courses.html
http://ocw.mit.edu/OcwWeb/web/courses/courses/index.htm
http://mathworld.wolfram.com/
and the list is open.
A: I recommend Project Euler if you want to train number theory and discrete maths. Lots of fun exercises, though you need to know a bit of programming.
A: Steve Yegge had a good blog post Math for programmers
Quoting some of it:
"But a few things I've learned recently might surprise you:
*
*Math is a lot easier to pick up after you know how to program. In fact, if you're a halfway decent programmer, you'll find it's almost a snap.
*They teach math all wrong in school. Way, WAY wrong. If you teach yourself math the right way, you'll learn faster, remember it longer, and it'll be much more valuable to you as a programmer.
*Knowing even a little of the right kinds of math can enable you do write some pretty interesting programs that would otherwise be too hard. In other words, math is something you can pick up a little at a time, whenever you have free time.
*Nobody knows all of math, not even the best mathematicians. The field is constantly expanding, as people invent new formalisms to solve their own problems. And with any given math problem, just like in programming, there's more than one way to do it. You can pick the one you like best.
*Math is... ummm, please don't tell anyone I said this; I'll never get invited to another party as long as I live. But math, well... I'd better whisper this, so listen up: (it's actually kinda fun.)"
A: I will be boring and recommend actually taking university courses in math.
Without lectures and lessons with an assistant I know I would never be able to learn as much as I have. I just need some kind of motivation, since higher math is really hard.
That is, if you are looking for quite advanced stuff and actually want to get a deep understanding and don't want to crunch numbers. Crunching numbers is why we have MATLAB ;)
It would be good to know what level of math you have, and what you want to do with it. But I guess calculus, linear algebra and discrete math are the most useful courses to take.
A: I suggest books with good tutorials throughout if you're unable to partake in a maths course. For computer science-related maths Don Knuth's Concrete Mathematics is meant to be very good.
Obviously nothing can replace a good teacher, but good tutorials can come pretty damn close. You really get to learn the subject in the tutorials I think.
A: Get some videos from www.aduni.org
Math courses
A: It's a couple of years since this question has been asked, but there are a number of new sites and resources available now:
*
*Khan Academy was originally intended for schoolkids, but it has since expanded to include material that would not be out of place in first-year university courses. It serves as a great way to review and fix fundamentals. It has videos and practice exercises, and keeps track of your progress.
*EdX is an evolution of initiatives like MIT Open Courseware. It's now an alliance of universities like MIT, Berkeley and Stanford that offer free online university level courses, with video instruction and learning materials. My only complaint is that some of their courses have prerequisites (like single-variable calculus) that you need to pick up elsewhere, like Coursera, or the original MIT OpenCourseWare site.
*Coursera offers more courses than EdX, and many of them are more basic, covering topics like pre-algebra and pre-calculus. The learning interface is not quite as cool as EdX's (which offers a scrollable captioning interface alongside most of it's videos), but the broader range of topics and courses covering fundamentals offers learning you just won't find on EdX.
A: A lot of the universities will actually publish their lecture materials online. So all you really need to do is find a suitable subject and then read the lecture materials and do the associated work. If you were really sneaky you could probably also go to the tutorials to get help :P
A: BetterExplained.com has some great math lectures. Its not video lectures but the author gives easy-to-understand explanations on math concepts.
A: Don't forget that iTunes now has available a load of maths lectures (and other subjects) from various mainstream universities - and all for free.
A: Since you want to brush up your math,
I would suggest you do a Google search for UCCS math online,
or follow this link; after registering for free you can browse the archives.
I must say that it's common to find people recommending course X,
but rarely will you find people completing their recommended course.
So in the case of number theory you should go for the latest course; the previous offering does not have high-quality video.
Also, for discrete math there are no lecture notes on this site,
so you have to figure out how to establish a correspondence between two online courses (6.042 has good problem sets and notes) and the above math course for discrete math.
I would discourage you from using YouTube (x-minute) tutorials, because most of them cover math like history.
A good course can be found by Googling Harvard OLI;
it has probability (non-continuous), with problem sets but without solutions.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Storing xml data in a cookie I'm trying to store an xml serialized object in a cookie, but i get an error like this:
A potentially dangerous Request.Cookies value was detected from the client (KundeContextCookie="<?xml version="1.0" ...")
I know the problem from similar cases when you try to store something that looks like javascript code in a form input field.
What is the best practise here? Is there a way (like the form problem I described) to suppress this warning from the asp.net framework, or should I JSON serialize instead, or perhaps should I binary serialize it? What is common practise when storing serialized data in a cookie?
EDIT:
Thanks for the feedback. The reason I want to store more data in the cookie than the ID is because the object I really need takes about 2 seconds to retrieve from a service I have no control over. I made a lightweight object 'KundeContext' to hold a few of the properties from the full object, but these are used 90% of the time. This way I only have to call the slow service on 10% of my pages. If I only stored the Id I would still have to call the service on almost all my pages.
I could store all the strings and ints separately but the object has other lightweight objects like 'contactinformation' and 'address' that would be tedious to manually store for each of their properties.
A: Storing serialized data in a cookie is a very, very bad idea. Since users have complete control over cookie data, it's just too easy for them to use this mechanism to feed you malicious data. In other words: any weakness in your deserialization code becomes instantly exploitable (or at least a way to crash something).
Instead, only keep the simplest identifier possible in your cookies, of a type of which the format can easily be validated (for example, a GUID). Then, store your serialized data server-side (in a database, XML file on the filesystem, or whatever) and retrieve it using that identifier.
Edit: also, in this scenario, make sure that your identifier is random enough to make it infeasible for users to guess each other's identifiers, and impersonate each other by simply changing their own identifier a bit. Again, GUIDs (or ASP.NET session identifiers) work very well for this purpose.
Second edit after scenario clarification by question owner: why use your own cookies at all in this case? If you keep a reference to either the original object or your lightweight object in the session state (Session object), ASP.NET will take care of all implementation details for you in a pretty efficient way.
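A minimal sketch of that approach inside a page or login handler (KundeContext, kundeId and the service call are placeholders taken from the question's own description):
// After the one slow service call, park the lightweight object in session state:
KundeContext kunde = slowService.GetKundeContext(kundeId);   // hypothetical slow call
Session["KundeContext"] = kunde;

// On later pages, reuse it and only fall back to the service when the session is empty:
KundeContext cached = Session["KundeContext"] as KundeContext;
if (cached == null)
{
    cached = slowService.GetKundeContext(kundeId);
    Session["KundeContext"] = cached;
}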
A: I wouldn't store data in XML in the cookie - there is a limit on cookie size for starters (used to be 4K for all headers including the cookie). Pick a less verbose encoding strategy such as delimiters instead e.g. a|b|c or separate cookie values. Delimited encoding makes it especially easy and fast to decode the values.
The error you see is ASP.NET complaining that the headers look like an XSS attack.
A: Look into the View State. Perhaps you'd like to persist the data across post-backs in the ViewState instead of using cookies. Otherwise, you should probably store the XML on the server and a unique identifier to that data in the cookie, instead.
A: You might look into using Session State to store the value. You can configure it to use a cookie to store the session id. This is also more secure, because the value is neither visible or changeable by the user-side.
Another alternative is to use a distributed caching mechanism to store the value. My current favorite is Memcached.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Running scripts inside C# I want to run javascript/Python/Ruby inside my application.
I have an application that creates processes automatically based on the user's definition. The process is generated in C#. I want to enable advanced users to inject script at predefined locations. Those scripts should be run from the C# process.
For example, the created process will have some activities and in the middle the script should be run and then the process continues as before.
Is there a mechanism to run those scripts from C#?
A: Basically, you have two problems: how to define point of injections in your generated code, and how to run python / ruby / whatev scripts from there.
Depending on how you generate the process, one possible solution would be to add a function to each possible point of injection. The function would check whether the user has associated any scripts with the given point, and if so, run the script by invoking IronPython / IronRuby (with optionally given parameters).
Disadvantages include: limited accessibility from the scripts to the created process (basically, only variables passed as parameters could be accessed); as well as implementation limits (IronPython's current version omits several basic system functions).
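As a rough sketch (not a full implementation) of invoking IronPython at such an injection point, passing part of the process state into the script:
using IronPython.Hosting;
using Microsoft.Scripting.Hosting;

class ScriptInjectionPoint
{
    // Runs a user-supplied Python snippet, exposing one object to it as 'state'.
    public static void RunUserScript(string pythonSource, object processState)
    {
        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();
        scope.SetVariable("state", processState);     // accessible inside the script as 'state'
        engine.Execute(pythonSource, scope);
    }
}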
A: Look into IronPython and IronRuby -- these will allow you to easily interoperate with C#.
A: You can compile C# code from within a C# application using the CSharpCodeProvider class.
If the compile succeeds you can run the resulting assembly as returned via the CompiledAssembly property of the CompilerResults class.
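A minimal sketch of that approach; the entry-point convention (a static UserScript.Main method) is an assumption made here for illustration:
using System;
using System.CodeDom.Compiler;
using System.Reflection;
using Microsoft.CSharp;

class CSharpScriptRunner
{
    public static void Run(string source)
    {
        CSharpCodeProvider provider = new CSharpCodeProvider();
        CompilerParameters options = new CompilerParameters();
        options.GenerateInMemory = true;                     // keep the assembly out of the file system
        options.ReferencedAssemblies.Add("System.dll");

        CompilerResults results = provider.CompileAssemblyFromSource(options, source);
        if (results.Errors.HasErrors)
            throw new InvalidOperationException("The script failed to compile.");

        // Convention assumed here: the script defines a static UserScript.Main() method.
        MethodInfo entryPoint = results.CompiledAssembly.GetType("UserScript").GetMethod("Main");
        entryPoint.Invoke(null, null);
    }
}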
A: Awesome C# scripting language - Script.Net
A: .NET has a scripting language including runtime engine in PowerShell which can be embedded in any .NET application.
A: You can compile C# code "on the fly" into an in-memory assembly. I think this is possible with IronPython and IronRuby as well. Look at the CodeDomProvider.CreateProvider method.
If you need to run scripts a lot, or if your process runs for a long time, you might want to load these assemblies into another AppDomain, and unload the AppDomain after you're done with the script; otherwise you are unable to remove them from memory. This has some consequences for the other classes in your project, because you have to marshal all calls.
A: Have you thought about Visual Studio for Applications? I haven't heard much about it since .NET 1.1, but it might be worth a look.
http://msdn.microsoft.com/en-us/library/ms974548.aspx
A: I've done exactly this just recently - allowed run-time addition of C# scripting.
It's not hard at all, and this article:
http://www.divil.co.uk/net/articles/plugins/scripting.asp
is a very useful summary of the details.
A: One of Microsoft's solutions to JavaScript in C# is ClearScript,
which uses V8, Chrom browser's JavaScript engine. Check its short FAQtorial for code samples.
It has excellent two-way integration - iterator/enumerator, output parameters, optional parameters, parameter arrays, delegate, task/promise/async/await, bigint, and more.
Apart from that, I think the most distinguishing feature is that it does not depend on Rosyln or Dynamic Language Runtime. This can be good or bad - good because there may be a lot less dependencies (depending on your project's target), bad because you need to bundle the native, platform-dependent V8 dll.
If that is ok, you get to enjoy cutting edge JavaScript / ECMAScript. Everything you get on Chrome, or 98% ES6 as of 2022 Feb, plus several extensions. Speed is as fast as Chrome, obviously, so you get the best of both Google and Microsoft.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: British English to American English (and vice versa) Converter Does anyone know of a library or bit of code that converts British English to American English and vice versa?
I don't imagine there's too many differences (some examples that come to mind are doughnut/donut, colour/color, grey/gray, localised/localized) but it would be nice to be able to provide localised site content.
A: I've been working on one to convert US English to UK English. As I've discovered it's actually a lot harder to write something to convert the other way but I hope to get around to providing a reverse conversion one day.
This isn't perfect, but it's not a bad effort (even if I do say so myself). It'll convert most US spellings to UK ones but there are some words where UK English retains the US spelling (e.g. "program" where this refers to computer software). It won't convert words like pants to trousers because my main goal was simply to make the spelling uniform across the whole document.
There are also words such as practice and license where UK English uses either those or practise & licence, depending on whether the word's being used as a verb or a noun. For those two examples the conversion tool will highlight them and an explanatory note pops up on the lower left hand of your screen when you hover your mouse over them. All word patterns which are converted are underlined in red, and the output is shown in a side by side comparison with your original input.
It'll do quite large blocks of text quite quickly, but I prefer to go use it just for a couple of paragraphs at a time - copying them in from a Word doc.
It's still a work in progress so if anyone has any comments or suggestions then I'd appreciate feedback I can use to improve it.
http://www.us2uk.eu/
A: The difference between UK and US English is far greater than just a difference in spelling. There is also the hood/bonnet, sidewalk/pavement, pants/trousers idea.
Guess it depends how far you need to take it.
A: I looked forever to find a solution to this, but couldn't find one, so, I wrote my own bit of code for it, using a master list of ~20,000 different spellings that were freely available from the varcon project and the language experts at wordsworldwide:
https://github.com/HoldOffHunger/convert-british-to-american-spellings
Since I had two source lists, I used them each to crosscheck each other, and I found numerous errors and typos (varcon lists "preexistent"'s british equivalent as "preaexistent"). It is possible that I may have accidentally made typos, too, but, since I didn't do any wordsmithing here, I don't believe that to be the case.
Example:
require('AmericanBritishSpellings.php');
$american_british_spellings = new AmericanBritishSpellings();
$text = "Axiomatically ax that door, would you, my neighbour?";
$text = $american_british_spellings->SwapBritishSpellingsForAmericanSpellings(['text'=>$text]);
print($text); // output: Axiomatically axe that door, would you, my neighbor?
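The same word-list idea in C# would look roughly like this (the three entries are just a tiny illustrative excerpt; case handling and the full VarCon-style list are left out):
using System.Collections.Generic;
using System.Text.RegularExpressions;

class SpellingConverter
{
    // A real mapping (e.g. built from the VarCon lists) would hold thousands of entries.
    private static readonly Dictionary<string, string> BritishToAmerican =
        new Dictionary<string, string>
        {
            { "colour", "color" },
            { "localised", "localized" },
            { "doughnut", "donut" }
        };

    public static string Convert(string text)
    {
        foreach (KeyValuePair<string, string> pair in BritishToAmerican)
        {
            // \b limits the replacement to whole words.
            text = Regex.Replace(text, @"\b" + Regex.Escape(pair.Key) + @"\b", pair.Value);
        }
        return text;
    }
}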
A: I think if you're thinking of converting from American English to British English, I personally wouldn't bother. Britain is very Americanised anyway, we accept silly yank spellings on the net :)
A: I had a similar problem recently. I discovered the following tool, called VarCon. I haven't tested it out, but I needed a rough converter for some text data. Here's an example.
echo "I apologise for my colourful tongue ." | ./translate british american
# >> I apologize for my colorful tongue .
It looks like it works for various dialects. Be sure to read the README and proceed with caution.
*note: This will only correct spelling variations.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: Find matching sequences in two binary files Let me start off with a bit of background.
This morning one of our users reported that Testuff's setup file has been reported as infected with a virus by the CA antivirus. Confident that this was a false positive, I looked on the web and found that users of another program (SpyBot) have reported the same problem.
And now, for the actual question.
Assuming the antivirus is looking for a specific binary signature in the file, I'd like to find the matching sequences in both files and hopefully find a way to tweak the setup script to prevent that sequence from appearing.
I tried the following in Python, but it's been running for a long time now and I was wondering if there was a better or faster way.
from difflib import SequenceMatcher
spybot = open("spybotsd160.exe", "rb").read()
testuff = open("TestuffSetup.exe", "rb").read()
s = SequenceMatcher(None, spybot, testuff)
print s.find_longest_match(0, len(spybot), 0, len(testuff))
Is there a better library for Python or for another language that can do this?
A completely different way to tackle the problem is welcome as well.
A: See the longest common substring problem. I guess difflib uses the DP solution, which is certainly too slow to compare executables. You can do much better with suffix trees/arrays.
Using perl Tree::Suffix might be easiest solution. Apparently it gives all common substrings in a specified length range:
@lcs = $tree->lcs;
@lcs = $tree->lcs($min_len, $max_len);
@lcs = $tree->longest_common_substrings;
A: Note that even if you did find it this way, there's no guarantee that the longest match is actually the one being looked for. Instead, you may find common initialisation code or string tables added by the same compiler for instance.
A: Why don't you contact CA and ask them to tell you what they're searching for, for that virus?
Or, you could copy the file and change each individual byte until the warning disappeared (may take a while depending on the size).
It's possible the virus detection may be a lot more complicated than simply looking for a fixed string.
A: Better not to wonder about the complexity and time these kinds of algorithms need.
If you are interested in this, the .ps document linked here gives a good introduction to the topic.
Whether a good implementation of these algorithms exists, I cannot tell.
A: I suspect that looking for binary strings isn't going to help you. An install program is likely to be doing some 'suspicious' things.
You probably need to talk to CA and spybot about white-listing your installer, or about what is triggering the alert.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Connectivity of Winform Applications/ASP.NET application with SAP databases How can I fetch data in a Winforms application or ASP.NET form from a SAP database? The .NET Framework used is 2.0, the language is C# and the SAP version is 7.10.
A: Not sure if this will work for you, but there's a C library, which can probably be used from your C# application and which provides a quite easy API for calling BAPIs in SAP. (Accessing the underlying database directly via SQL is not to be recommended... Better use BAPIs or a custom-tailored RFC-enabled function module.)
See http://service.sap.com/rfc-library
You may also be able to use the "SAP connector for Microsoft .NET" (from the same link above), but it was developed with .NET 1.1 and may have compatibility problems with .NET 2.0?!
Update (2011): Since Dec. 2010 there is a new version of the "SAP connector for Microsoft .NET" available, which works with .NET Frameworks 2.0, 3.5 and 4.0. This would now be the perfect solution for your question! See http://service.sap.com/connectors ---> SAP connector for Microsoft .NET
A: Apologies for the plug.... I work for ERP-Link, and we have a product, iNet.BPS, which is a VS2005 plug-in that helps you create proxy objects that can be used by your .NET code to call BAPI's on an SAP system. iNet.BPS lets you customize the BAPI method calls, for instance it lets you elide optional parameters your application is not using, thus simplifying your code by not having to pass over a dozen parameters to the BAPI. This product is not dependent on SAP AG's SAP Connector for .NET.
Here's a link to the marketing page, http://www.erp-link.com/html/product/product-overview-iNetBPS_Overview.asp
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: List of Stored Procedure from Table I have a huge database with 100's of tables and stored procedures. Using SQL Server 2005, how can I get a list of stored procedures that are doing an insert or update operation on a given table.
A: Use sys.dm_sql_referencing_entities
Note that sp_depends is obsoleted.
MSDN Reference
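For example (note that sys.dm_sql_referencing_entities only exists from SQL Server 2008 onwards, so it will not help on 2005; the table name below is a placeholder, and the result does not distinguish reads from writes):
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities('dbo.MyTable', 'OBJECT');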
A: sys.sql_dependencies has a list of entities with dependencies, including tables and columns that a sproc includes in queries. See this post for an example of a query that gets out dependencies. The code snippet below will get a list of table/column dependencies by stored procedure
select sp.name as sproc_name
,t.name as table_name
,c.name as column_name
from sys.sql_dependencies d
join sys.objects t
on t.object_id = d.referenced_major_id
join sys.objects sp
on sp.object_id = d.object_id
join sys.columns c
on c.object_id = t.object_id
and c.column_id = d.referenced_minor_id
where sp.type = 'P'
A: select
so.name,
sc.text
from
sysobjects so inner join syscomments sc on so.id = sc.id
where
sc.text like '%INSERT INTO xyz%'
or sc.text like '%UPDATE xyz%'
This will give you a list of all stored procedure contents with INSERT or UPDATE in them for a particular table (you can obviously tweak the query to suit). Also longer procedures will be broken across multiple rows in the returned recordset so you may need to do a bit of manual sifting through the results.
Edit: Tweaked query to return SP name as well. Also, note the above query will return any UDFs as well as SPs.
A: You could try exporting all of your stored procedures into a text file and then use a simple search.
A more advanced technique would be to use a regexp search to find all SELECT FROM and INSERT FROM entries.
A: This seems to work:
select
so.name as [proc],
so2.name as [table],
sd.is_updated
from sysobjects so
inner join sys.sql_dependencies sd on so.id = sd.object_id
inner join sysobjects so2 on sd.referenced_major_id = so2.id
where so.xtype = 'p' -- procedure
and is_updated = 1 -- proc updates table, or at least, I think that's what this means
A: If you download sp_search_code from Vyaskn's website it will allow you to find any text within your database objects.
http://vyaskn.tripod.com/sql_server_search_stored_procedure_code.htm
A: SELECT Distinct SO.Name
FROM sysobjects SO (NOLOCK)
INNER JOIN syscomments SC (NOLOCK) on SO.Id = SC.ID
AND SO.Type = 'P'
AND (SC.Text LIKE '%UPDATE%' OR SC.Text LIKE '%INSERT%')
ORDER BY SO.Name
This link was used as a resource for the SP search.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Converting audio to code and vice-versa Having just witnessed Sound Load technology on the Nintendo DS game Bangai-O Spirits, I was curious as to how this technology works. Does anyone have any links, documentation or sample code on implementing such a feature, that would allow the state of an application to be saved and loaded via audio?
A: It's the same old thing used in the ZX Spectrum era. You load programs/games from tape. Only the sound quality and the filters are probably better.
In my opinion something like Bluetooth or WiFi is better. You can also send files that can be put on some storage and then load them. I find these methods much easier than sound because if there is a lot of noise around you cannot do much.
It is just a conversion of data to audio and then back from audio to data.
Search for Zotyocopy and Copy86M on google - these are the utilities used for saving a game to tape after loading it into memory on zx spectrum.
A: If you want to pass data as audio through the air there are a few things you need to be aware of though, such as how the speaker and microphone interact for example. It is important that they don't distort or alter the sound too much as what you are sending are in fact the raw bytes.
Some audio software will let you open any file as audio so that you may listen to it. If you record audio as data do not use lossy compression such as mp3 on the audio file!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Replacing strings inside SWF We've got dozens of versions of an SWF modified for different customers of a big Flash project, and now would have to replace some strings embedded in scripts in each copy. The FLA file for some of these is very difficult to locate or even missing (I inherited this mess and refactoring it is currently not an option).
Is there a (free) tool to replace strings used inside ActionScript? I tried swfmill to convert the files to XML and back but it can't handle international characters contained in the strings so I could get them only partially converted. Most of the strings were correctly extracted so another tool might do the job.
A: You could try Burak's URL Action Editor -- it says URL, but I'm pretty sure it lets you edit any text in a SWF. I haven't used it, but I have used his ActionScript Viewer, which works wonderfully.
A: You can use Apparat for this kind of task. It allows you to alter ActionScript 3 bytecode in SWF and SWC files.
I would prefer the Scala source branch for your task. Basically the code would look like this:
val swf = Swf from "in.swf"
for(tag <- swf.tags) {
(Abc fromTag tag) match {
case Some(abc) => {
val strings = abc.cpool.strings
for(i <- 1 until strings.length) {
if(strings(i) == 'search) {
strings(i) = 'replacement
}
}
abc write tag
}
case None =>
}
swf write "out.swf"
A: Well the only advice I can come up with is to fix swfmill to support international characters. You might want to ask in swfmill mailing list (swfmill@swfmill.org as far as I know) for a best way how to do it, shouldn't be too difficult if you know C/C++ a bit.
A: If the files are using actionscript 2 maybe you can disassemble and reassemble using http://flasm.sourceforge.net/ (And of course: modify the strings before reassembling). For as3 adobe provides a decompiler that might be usable to achieve the same, but I don't think your flash will be as3 if you've inherited it.
A: This one works really well too and it is free:
http://www.free-decompiler.com/flash/
A: Tricky - it might not be any easier, but you could load the 'locked' swf into one you control, then spider through its objects until you hit TextBox, using some for...in loops - it'd be a long, arduous process to map them out then change them, especially if the previous developer didn't name things in a helpful way, but if it's a fairly simple .swf then it might not be too bad...
Also, there's a mac-only utility for decompiling swfs that I remember a coworker swearing by, but I don't recall the name... anybody?
A: Tough one - have you tried the Sothink decompiler? If that doesn't work, I'd say try loading the the swf into another swf, then drill down and change the content of the textfield - something like _root.loadedswf.clip1.box2.textField.text = "New text";
Obviously, this might not work if the application is complex.
A: How about this one?
http://code.google.com/p/swfreplacer/wiki/Intro
A: It seems to me that swfmill has updated their tool to support international characters, at least for Latvian language (which requires UTF-8). Strings are encoded as HTML entities.
Solved some similar problems with legacy swf file.
A: You could try Swiffotron which includes the ability to replace text both on the stage and in actionscript.
Here is a unit test input file from the project which shows how it could be done:
A: Apparat ended up working great for this (I have some legacy files that needed to be modified). I couldn't get Joa's Scala code to work, so I ended up implementing his solution in Java.
*
*Download binaries.
*Make sure Scala is installed (the libraries are still required, even though the solution is in pure Java).
*Compile and run the code below using the appropriate classpath (pointing to the folder with Apparat binaries as well as your Scala libraries).
*
*javac -classpath ".;apparat/*;C:/path/to/scala/lib/*" SwfEditor.java
*java -classpath ".;apparat/*;C:/path/to/scala/lib/*" SwfEditor
import apparat.swf.Swf;
import apparat.abc.Abc;
import apparat.swf.SwfTag;
import scala.collection.Iterator;
import scala.Symbol;
import apparat.swf.DoABC;
class SwfEditor {
public static void main(String[] args){
Swf input = Swf.fromFile("in.swf");
Iterator<SwfTag> iter = input.tags().iterator();
while (iter.hasNext()) {
SwfTag tag = iter.next();
if(tag instanceof DoABC) {
DoABC doABCTag = (DoABC) tag;
Abc abc = Abc.fromTag(doABCTag).getOrElse(null);
Symbol[] strings = abc.cpool().strings();
for(int i=0; i<strings.length; i++) {
String string = strings[i].toString();
                    if(string.equals("'search")) {
strings[i] = new Symbol("replacement");
}
}
abc.write(doABCTag);
}
}
input.write("out.swf");
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Vista skin look and feel Is there anywhere on the web free vista look and feel theme pack for java?
A: I'm guessing that what you want is to use the system look and feel regardless on whatever platform your application is started. This can be done with
UIManager.setLookAndFeel( UIManager.getSystemLookAndFeelClassName() );
in the main() method (you have to handle possible exceptions, of course ;-).
As I don't have a vista installation here I can't check whether the jvm natively supports the vista laf...
edit: Seems to be possible with the above statement. See this thread in the java forums:
http://forums.sun.com/thread.jspa?threadID=5287193&tstart=345
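For reference, a minimal sketch of that call with the exception handling mentioned above (the class and label text are purely illustrative):
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;
import javax.swing.UIManager;

public class SystemLafDemo {
    public static void main(String[] args) {
        try {
            // Use whatever the host OS provides (the Windows/Vista LAF on Vista)
            UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
        } catch (Exception e) {
            // Fall back to the default (Metal) look and feel
            e.printStackTrace();
        }
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                JFrame frame = new JFrame("System look and feel");
                frame.add(new JLabel("Hello, native-looking Swing"));
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.pack();
                frame.setVisible(true);
            }
        });
    }
}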
A: If you use SWT it has a native vista look and feel built in. However, if you are using swing I honestly do not know.
A: Using Java6u13 on Vista shows a Windows LookAndFeel that is pretty decent. It's as free as Java is.
Sun has a tutorial on LookAndFeel usage here and if you want to play with the SwingSet demo, it shows you the System LookAndFeel and lets you change it. Then you can get an idea of what the LookAndFeel looks like. Calling UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
as dhiller suggests, will set the look and feel based on your system. If you wanted to see what the Vista look and feel looks like, you'll need Vista though.
Another option is to use Substance or Nimbus if you want something different from the Metal look and feel. Substance is a separate look and feel collection and Nimbus is available as of the 6u10 release.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How to get my own code's module handle?
Possible Duplicate:
How do I get the HMODULE for the currently executing code?
I'm trying to find a resource in my own module. If this module is an executable, that's trivial - GetModuleHandle(NULL) returns the handle of the "main" module.
My module, however, is a DLL that is loaded by another executable. So GetModuleHandle(NULL) will return the module handle to that executable, which is obviously not what I want.
Is there any way to determine the module handle of the module that contains the currently running code? Using the DLL's name in a call to GetModuleHandle() seems like a hack to me (and is not easily maintainable in case the code in question is transplanted into a different DLL).
A: If DLL is linked with MFC then there is a way to get instance of the DLL in which some function was called:
void dll_function()
{
    AFX_MANAGE_STATE(AfxGetStaticModuleState());
    HINSTANCE dll_instance = AfxGetInstanceHandle();
}
A: Store the module handle away when it is given to you in DllMain and then use it later when you actually need it. A lot of frameworks (e.g., MFC) do this automatically.
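A rough sketch of that approach in plain Win32 (the variable and function names are illustrative):
#include <windows.h>

// Saved when the DLL is loaded so other code in this module can use it later.
static HMODULE g_hThisModule = NULL;

BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
{
    if (fdwReason == DLL_PROCESS_ATTACH)
    {
        g_hThisModule = hinstDLL;   // remember our own module handle
        DisableThreadLibraryCalls(hinstDLL);
    }
    return TRUE;
}

// Elsewhere in the DLL: load a resource from this module, not from the host EXE.
HRSRC FindMyResource(LPCWSTR name, LPCWSTR type)
{
    return FindResourceW(g_hThisModule, name, type);
}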
A: As has been already stated this can be done by saving the module handle passed in to the DllMain function.
But there are other reasons why you should save the handle.
For example if you decide to bind resources to the DLL using the resource linker, you will need this module handle to get at these resources via the LoadResource function API.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119706",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: What is the biggest drawback of ? We all have our favourite database. If you look objectively at your chosen database, what drawbacks does it have and what could be improved?
The rules:
*
*One reply per drawback with;
*a short description of the limitation, followed by;
*a more detailed description, an explanation of how it could be done better or an example of another technology that does not have the same limitation.
*Do not diss any database that you haven't used extensively. It is easy to take potshots at other technologies but we want to learn from your experience, not your prejudice.
A: Oracle databases are quite expensive
Oracle does what it does well but the licensing costs are horrendous. That has been improved by the release of Oracle XE but the limitations of that mean that it is a growth constraint on you solution.
A: Database Microsoft SQL Server 2005
Defect Lack of "INSERT OR UPDATE"
Description
Often you need to either insert or update a record in a table, depending on whether the record is present or not. Not having an atomic operation to do so leads to unnecessary transactions.
This does not happen with MySQL or SQLServer 2008.
A: Database PostgreSQL
Defect No SQL Profiler
We asked the developers about this at a recent conference and I understand it's now something they're looking to implement.
A: I love the flexibility of sequences in Oracle as compared with other databases autoincrements, but the inability to set seq.nextval as a default for a pk column is somewhat annoying, and must be trivial to fix.
A: Database Microsoft SQL Server
Defect Huge licensing cost
Description
SQL Server has great features and it integrates very well with .NET development. The issue is that when you have to scale up from a shared database to a dedicated database, licensing costs are really high. This, in effect, leads to databases which should really run on a dedicated server, being hosted on shared servers with performance and security issues.
This does not happen with MySQL or PostgreSQL.
A: Database Microsoft SQL Server 2005
Defect Badly implemented UI
Description
SQL Server management studio does not offer a great user experience:
*
*Tabbing behaviour is weird: you are always looking for the right tab
*Keeps on crashing on 64-bit versions
*Missing some features of preceding version, like overview of grants of stored procedures
This does not happen with version 2000.
A: Database MySQL
Defect Foreign Keys supported only on some table types
Description
Enough said. It has obvious maintenance implications.
From the MySQL manual
Foreign keys definitions are subject to the following conditions:
*
*Both tables must be InnoDB tables and they must not be TEMPORARY tables.
And here:
For storage engines other than InnoDB, MySQL Server parses the FOREIGN KEY syntax in CREATE TABLE statements, but does not use or store it.
This does not happen with any other major DB.
A: Database MySQL
Defect Server will start up with damaged tables
Description
If MySQL has a damaged table - from either being killed during a write or some other failure - it will quite happily start up and allow the user to carry on as if the problem does not exist. Granted it will produce some error messages in the log, but from my experience this doesn't help when you're trying to figure out why an application is behaving oddly.
Most other databases will detect and repair the error on startup or simply refuse to start with any sort of corruption.
A: Database MySQL 5.0.x and above
Defect Ring replication errors lead to inconsistent data on different nodes
Description
The most serious problem in production we face at the moment is that in a MySQL ring the ring itself produces an error and stops replicating.
Building a ring (or Master-Master-replication) is possible since 5.x.x: You chain the databases in a "ring" so that they replicate data to each other. Every database node gets all the changes from all other nodes.
We assume that the error lies in autoincrement failures. This is known from normal replication, too, but in the new version there are no sufficient error messages in the error log. I highly recommend not using this feature in MySQL as long as these problems are not fixed.
A: Database Oracle
Defect Did not handle long datatype well for too long
Description
Oracle only had the long datatype until 9i (I believe) at which point it was deprecated in favor of the LOBs. There is a ton of code out there, however, which still has longs and all of the related restrictions. The biggest of which was that each table could only have one long column and it had to be at the end of the columns. See here for a more exhaustive list of restricitons on the long.
A: Database Oracle
Problem Temp table definitions are not private
Description Many databases (eg Postgres and Sybase) allow you to create temp tables on the fly, insert into them, add indexes if you want, then query from them. Oracle has temp tables, but the temp table definitions exist in a global name space. Therefore the temp table has to be created by a DBA, you need to synchronize between the table definition they used and your code, and if two pieces of code want similar (but not identical) table definitions, they need to use different names. These differences make temp tables far less convenient for developers.
Yes, I understand the benefits for the query optimizer of having global definitions. However for me the lack of convenience makes Oracle's temp tables virtually useless for me, while I use them very intensively in Postgres.
A: Database: Oracle
Problem: The names of tables, procedures, columns, etc cannot exceed 30 characters. This is infuriating.
Problem: Its slapdash JDBC compliance. For example, stored procedures do not return result sets in a JDBC-compliant way, but instead via a proprietary OUT parameter type. This means you can't use higher-level JDBC abstractions.
A: PostgreSQL doesn't have a good failover solution, but I understand they're working on it.
A: Database : Sql Compact Edition
Drawback : Stored procedures are not supported.
Regardless of this limitation, this DB has its' uses especially as a client cache for application that can be smart client or distributed to mobile platforms.
A: Database Oracle
Defect Granularity of grants on packages
Description
You can only grant permissions on packages and not on stored procedures inside packages. Or alternatively, you can grant permissions on single stored procedures but then you put them outside of packages. This requires you to know up front who will use which stored procedure and it is really hard to refactor.
This does not happen with SQL Server.
A: Database Microsoft SQL Server 2005
Defect Lack of array type parameters
Description
Useful in searches, a lot of times you need to pass a series of values to be matched against. In SQL 2005 you can do a workaround by using CLR inside SQLServer. Given the usefulness it would make more sense to have this feature out of the box.
This does not happen with SQL Server 2008 or Oracle.
A: Database Postgres
Defect No analytic queries
Description
Analytic queries, introduced by Oracle, are part of the SQL 2003 standard. Unfortunately Postgres hasn't implemented them yet.
A: Database : PostgreSQL
Problem: the connector for C#, for example, is not really kept up to date and crashes with advanced features.
A: Database: All
Drawback - Poor design by people who didn't think it was important to know what they were doing when they designed a database. Far more problems are caused in all databases by bad design than by any missing feature. So I suppose they are all missing the "read my mind and figure out the best solution without me having to think" feature.
A: Any SQL DBMS
Defect: Duplicate rows
One of the virtues of the relational model is that it represents everything without duplicate tuples, i.e. using relations, which have keys and no duplicates. Unfortunately SQL isn't built that way. This makes the database developer's life needlessly difficult. SQL developers have to deal with tables without keys and debug queries that return duplicate rows.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Is it worth the development time to output valid HTML? Developing websites are time-consuming. To improve productivity, I would code a prototype to show to our clients. I don't worry about making the prototype comform to the standard. Most of the time, our clients would approve the prototype and give an unreasonable deadline. I usually end up using the prototype in production (hey, the prototype works. No need to make my job harder.)
I could refactor the code to output valid HTML. But is it worth the effort to output valid HTML?
A: Producing compliant HTML is similar to ensuring that you have no warnings during a compilation - the warnings are there for a reason, you may not realise what that reason is, but ignore the warnings and, before you know where you are, there as so many, you can't spot the one that's relevant to the problem that you're trying to fix.
If you use Firefox to view your web pages, you'll get a helpful green tick or red cross in the bottom right hand corner, quickly showing you whether you've complied or not. Clicking on a red cross will show you all of the places where you goofed.
Some of the warnings/errors may seem a bit pedantic, but fix them and you'll benefit in many ways.
*
*Your page is much more likely to work with a wider range of browsers.
*Accessibility compliance will be easier (You'll have 'alt' attributes on your images, for example)
*If you choose XHTML as a standard, your markup will be more likely to be useful in an AJAX environment.
Failure to do this results in unpredictability.
One of the biggest problems with web browsers is that they have perpetuated bad habits (And still do, in some cases) by silently correcting certain markup problems, such as failure to close table cells and/or rows. This single fact has resulted in thousands of web pages that are not compliant but 'work', lulling their developers into a false sense of security.
When you consider how many things there are that can go wrong with a website, being lazy when it comes to compliance is just adding more problems to your workload.
EDIT: having read your original post again, I notice that you say you don't bother with compliance when working on a prototype, then you go on to say that you usually use the prototype in production - this means that it's not strictly a prototype, but a candidate.
The normal situation in such circumstances is that once the customer accepts a candidate, no time is allocated for bug fixing or tidying up, thus strengthening the argument for making the markup compliant in the first place.
If you won't be given time later, do it now.
If you are given time later, then you had the time to do it anyway.
A: If you want your sight to be accessible to people with and without disabilities, as well as external systems, then yes, you should definitely make sure you output valid HTML.
It's easy to test your HTML with automatic validators.
I'll add to what Mike Edwards said about legal ramifications and remind you that you have a moral obligation too :)
A: Why not write the prototype in valid (X)HTML in the first place? I've never found that to be more of an effort than using invalid HTML. Producing valid XHTML should be a trivial task. (On the other hand, producing semantically meaningful XHTML might be more taxing.)
In short, I see no advantage whatsoever in using invalid HTML for prototypes.
A: I honestly don't know why it is extra effort to do standards-based HTML. It's not as if it's hard, and you should be doing it as a matter of professionalism.
If you paid someone to build you a house and he cut corners out of laziness that you didn't notice at the time, but in 10 years cracks appeared in your walls, would you be happy?
A: It is only worth the effort if it gives you a practical benefit. Sticking to standards might make it easier to build a website that works across most browsers. Then again, if you're happy with how a website displays on the browsers you care about (maybe one, maybe all), then going through hoops to make it pass validation is a waste of time.
Also, the difference in SEO between an all-valid html website and a mostly-valid html website is negligible.
So always look for the practical benefit, there are some in some situations, but don't do it just for the sake of it.
A: Absolutely. Invalid code can cause all sorts of weird behaviors, and errors which don't obscure those that do when you get a validation report.
Case in point:
A yellow background was spilling out of a list of messages and over the heading for the next list of messages - but only in Internet Explorer.
Why? The background was applied to a list item, but the person who wrote the page had written it as a single list with a heading in the middle. Headings are not allowed between list items and different browsers attempted to recover from it in different ways. Internet Explorer ended the list item (with the background colour) when it saw the start of the following item (after the heading), while other browsers ended it when they saw the end tag for the first list item.
It was the only validity error on the page, so it took only a couple of minutes to track down the problem and fix it.
A: Valid HTML just to be able to have a badge on your site - no.
Having "valid HTML" in the sense of "HTML that works on every major browser or browser engine" - yes.
A: Yes. It's hard enough trying to deal with how different browsers will render valid HTML, never mind trying to predict what they'll do with invalid code. Same goes for search engines - enough problems in the HTML may lead to the site not being indexed properly or at all.
I guess the real answer is "it depends on what is invalid about the HTML". If the invalid parts relate to accessibility issues, you might even find your customer has legal problems if they use the site on a commercial basis.
A: Probably not if you have a non-complying site to begin with and are short on time.
However, and you won't believe me because I didn't believe others to begin with, but it is easier to make a site compliant from the start - it saves you headaches in terms of browser compatibility, CSS behaviour and even JavaScript behavior and it is typically less markup to maintain.
Site compliance (at least to Transitional) is pretty easy.
A: Because, if you stick to standards, your work will be compatible in the future. User Agents will strive for standard compliance and their quirks non-compliance mode will always be subject to change. This is the way is supposed to be.
Unless you're into that whole IE8 broken standards perpetuation thing that they want to enable by default. -- that's another argument.
Webkit, Gecko, Presto? (is that opera's engine?), and the others will always become more compliant with every release.
Unless your html work is in a IE embedded browser control, then there's really no reason to output valid html as long as it renders.
A: In my opinion the key criterion is "fit for purpose" - If your clients want something for a small/internal market (and don't care if that alienates potential customers who have disabilities or use less-common browsers) then that's their choice.
At the same time I think it's our (as developers) responsibility to make sure they know the implications of their decisions - Some organisations will be bound by legislative requirements that websites be useable by screen readers, which typically means standards-compliant HTML.
A: I believe making valid HTML output won't hurt your development time that much if you've trained yourself to code valid HTML from the start. For one, it's not that hard to know which tags are not allowed within an element, and the required attributes in a tag are sometimes the ones you'd really need anyway - I believe these are the main errors that make your HTML invalid, so why not just learn them as early as now if you plan to stay on the web for long? Plus, outputting valid HTML can help boost your site's ranking.
A: There are two rules for writing websites:
*
*The site must work for your users.
*The site must work for your users.
To meet the first rule, you have to code such that your site renders correctly when using Internet Explorer. Unless you have the freedom to alter your site design to use only those features that IE renders correctly, this means writing invalid HTML.
To meet the second rule, you have to code such that your site renders correctly when using screen-readers and braille screens. Although some newer screen readers can work with IE-targeted sites, in general this means writing valid HTML.
If you're working on a small project, or you're part of a large team, you can code a site that outputs IE-targeted HTML for IE, and valid HTML otherwise. But if you're taking on a medium-to-large project on your own, you have to decide which rule you're going to follow and which one you're going to ignore.
UPDATE:
This is getting voted down by users who think you can always get away with valid HTML in IE. That may be true if you have the flexibility to change your design to get around IE's shortcomings, but if a client has given you a design and you have to get it working, you may have to resort to invalid HTML. It's sad, but it's true, whatever they might think.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: In continuous integration what is the best way to deal with external application dependencies In using our TeamCity Continuous Integration server we have uncovered some issues that we are unsure as to the best way to handle. Namely how to reference external applications that our application requires on the CI server.
This was initially uncovered with a dependency on Crystal Reports, so we went and installed Crystal Reports on the Server fixing the immediate problem. However as we move more applications over to the CI server we are finding more dependencies.
What is the best strategy here? Is it to continue installing the required applications on the Server?
Thanks
A: Where possible make the external dependencies part of your build system.
For instance check the installer in to your version control system and have a step that checks it out and runs it in silent mode (many installers support a mode with no user action sometimes using the commandline /s).
This way if you need to set up another build machine for a branch or just for new hardware everything is repeatable.
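For example, a rough sketch of such a build step on Windows (the repository URL, installer name, and the /s switch are all assumptions - check your installer's documentation for its real silent-mode flag):
rem Fetch the installer that was checked in alongside the build tools
svn checkout http://svnserver/tools/installers installers

rem Run it unattended so the build agent never waits for user input
installers\SomeDependencySetup.exe /s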
A: If your builds require the actual application to complete the build, then you should probably continue to install the application on your build server.
If you just need references to dlls or assemblies from the application, then what we've done at my company is to create installable 'SDKs' of the references required for a particular applicatoin and install them on our development and build machines in well-known library directories that our solutions reference.
On the build machine, our pre-build steps install the correct version of the dependencies and then clean them up when we are finished.
Recently, we've moved to using virtual machines for our build machines that our build process activates. These VMs get the SDKs installed on them as a pre-build, and then are restored to their snap-shot state after the build. We had some dependencies that were almost impossible to uninstall, so this made for a clean starting point each time.
A: If you use Maven to build, you can define your dependencies in the pom.xml file. They will then be automatically downloaded if necessary.
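For instance, a hedged sketch of declaring such a dependency in pom.xml (the coordinates shown are purely illustrative; use the real groupId/artifactId/version of your dependency):
<dependencies>
  <!-- Illustrative coordinates only -->
  <dependency>
    <groupId>com.example</groupId>
    <artifactId>reporting-engine</artifactId>
    <version>1.2.3</version>
  </dependency>
</dependencies>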
A: I am not sure if I followed correctly...
I am assuming your application is dependent on this external app, while building? In that case it should be on the machine doing CI...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do I sort a VARCHAR column in SQL server that contains numbers? I have a VARCHAR column in a SQL Server 2000 database that can contain either letters or numbers. It depends on how the application is configured on the front-end for the customer.
When it does contain numbers, I want it to be sorted numerically, e.g. as "1", "2", "10" instead of "1", "10", "2". Fields containing just letters, or letters and numbers (such as 'A1') can be sorted alphabetically as normal. For example, this would be an acceptable sort order.
1
2
10
A
B
B1
What is the best way to achieve this?
A: One possible solution is to pad the numeric values with a character in front so that all are of the same string length.
Here is an example using that approach:
select MyColumn
from MyTable
order by
case IsNumeric(MyColumn)
when 1 then Replicate('0', 100 - Len(MyColumn)) + MyColumn
else MyColumn
end
The 100 should be replaced with the actual length of that column.
A: select
Field1, Field2...
from
Table1
order by
isnumeric(Field1) desc,
case when isnumeric(Field1) = 1 then cast(Field1 as int) else null end,
Field1
This will return values in the order you gave in your question.
Performance won't be too great with all that casting going on, so another approach is to add another column to the table in which you store an integer copy of the data and then sort by that first and then the column in question. This will obviously require some changes to the logic that inserts or updates data in the table, to populate both columns. Either that, or put a trigger on the table to populate the second column whenever data is inserted or updated.
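A hedged sketch of that extra-column approach, reusing the table and column names from the query above (in practice the shadow column would be kept current by your insert/update logic or a trigger):
-- Add a numeric shadow column and backfill it once
ALTER TABLE Table1 ADD Field1_int int NULL;

-- Note: ISNUMERIC accepts some values (e.g. '.', '$') that CAST will reject;
-- tighten the check for production data.
UPDATE Table1
SET Field1_int = CASE WHEN ISNUMERIC(Field1) = 1 THEN CAST(Field1 AS int) ELSE NULL END;

-- Sorting can then lean on the integer column first
SELECT Field1, Field2
FROM Table1
ORDER BY Field1_int, Field1;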
A: SELECT *, CONVERT(int, your_column) AS your_column_int
FROM your_table
ORDER BY your_column_int
OR
SELECT *, CAST(your_column AS int) AS your_column_int
FROM your_table
ORDER BY your_column_int
Both are fairly portable I think.
A: you can always convert your varchar-column to bigint as integer might be too short...
select cast([yourvarchar] as BIGINT)
but you should always care for alpha characters
where ISNUMERIC([yourvarchar] +'e0') = 1
the +'e0' comes from http://blogs.lessthandot.com/index.php/DataMgmt/DataDesign/isnumeric-isint-isnumber
this would lead to your statement
SELECT
*
FROM
Table
ORDER BY
ISNUMERIC([yourvarchar] +'e0') DESC
, LEN([yourvarchar]) ASC
the first sorting column will put numeric on top.
the second sorts by length, so 10 will precede 0001 (which is stupid?!)
this leads to the second version:
SELECT
*
FROM
Table
ORDER BY
ISNUMERIC([yourvarchar] +'e0') DESC
, RIGHT('00000000000000000000'+[yourvarchar], 20) ASC
the second column now gets right padded with '0', so natural sorting puts integers with leading zeros (0,01,10,0100...) in correct order (correct!) - but all alphas would also get padded with '0'-chars (a performance cost)
so third version:
SELECT
*
FROM
Table
ORDER BY
ISNUMERIC([yourvarchar] +'e0') DESC
, CASE WHEN ISNUMERIC([yourvarchar] +'e0') = 1
THEN RIGHT('00000000000000000000' + [yourvarchar], 20)
ELSE LTRIM(RTRIM([yourvarchar]))
END ASC
now numbers first get padded with '0'-chars (of course, the length 20 could be increased) - which sorts numbers right - and alphas only get trimmed
A: I solved it in a very simple way writing this in the "order" part
ORDER BY (
sr.codice +0
)
ASC
This seems to work very well, in fact I had the following sorting:
16079 Customer X
016082 Customer Y
16413 Customer Z
So the 0 in front of 16082 is considered correctly.
A: This seems to work:
select your_column
from your_table
order by
case when isnumeric(your_column) = 1 then your_column else 999999999 end,
your_column
A: There are a few possible ways to do this.
One would be
SELECT
...
ORDER BY
CASE
WHEN ISNUMERIC(value) = 1 THEN CONVERT(INT, value)
ELSE 9999999 -- or something huge
END,
value
the first part of the ORDER BY converts everything to an int (with a huge value for non-numerics, to sort last) then the last part takes care of alphabetics.
Note that the performance of this query is probably at least moderately ghastly on large amounts of data.
A: This query may be helpful for you. In this query, a column with data type varchar is arranged in a sensible order. For example, if the column contains G1, G34, G10, G3, then after running this query you see the results G1, G3, G10, G34.
SELECT *,
(CASE WHEN ISNUMERIC(column_name) = 1 THEN 0 ELSE 1 END) IsNum
FROM table_name
ORDER BY IsNum, LEN(column_name), column_name;
A: This may help you; I tried it when I had the same issue.
SELECT *
FROM tab
ORDER BY IIF(TRY_CAST(val AS INT) IS NULL, 1, 0),TRY_CAST(val AS INT);
A: The easiest and efficient way to get the job done is using TRY_CAST
SELECT my_column
FROM my_table
WHERE <condition>
ORDER BY TRY_CAST(my_column AS NUMERIC) DESC
This will sort all numbers in descending order and push down all non numeric values
A: SELECT FIELD FROM TABLE
ORDER BY
isnumeric(FIELD) desc,
CASE ISNUMERIC(FIELD)
WHEN 1 THEN CAST(CAST(FIELD AS MONEY) AS INT)
ELSE NULL
END,
FIELD
As per this link you need to cast to MONEY then INT to avoid ordering '$' as a number.
A: SELECT *,
ROW_NUMBER()OVER(ORDER BY CASE WHEN ISNUMERIC (ID)=1 THEN CONVERT(NUMERIC(20,2),SUBSTRING(Id, PATINDEX('%[0-9]%', Id), LEN(Id)))END DESC)Rn ---- numerical
FROM
(
SELECT '1'Id UNION ALL
SELECT '25.20' Id UNION ALL
SELECT 'A115' Id UNION ALL
SELECT '2541' Id UNION ALL
SELECT '571.50' Id UNION ALL
SELECT '67' Id UNION ALL
SELECT 'B48' Id UNION ALL
SELECT '500' Id UNION ALL
SELECT '147.54' Id UNION ALL
SELECT 'A-100' Id
)A
ORDER BY
CASE WHEN ISNUMERIC (ID)=0 /* alphabetical sort */
THEN CASE WHEN PATINDEX('%[0-9]%', Id)=0
THEN LEFT(Id,PATINDEX('%[0-9]%',Id))
ELSE LEFT(Id,PATINDEX('%[0-9]%',Id)-1)
END
END DESC
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119730",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "60"
}
|
Q: How to initialize Hibernate entities fetched by a remote method call? When calling a remote service (e.g. over RMI) to load a list of entities from a database using Hibernate, how do you manage it to initialize all the fields and references the client needs?
Example: The client calls a remote method to load all customers. With each customer the client wants the reference to the customer's list of bought articles to be initialized.
I can imagine the following solutions:
*
*Write a remote method for each special query, which initializes the required fields
(e.g. Hibernate.initialize()) and returns the domain objects to the client.
*Like 1. but create DTOs
*Split the query up into multiple queries, e.g. one for the customers, a second for the customers' articles, and let the client manage the results
*The remote method takes a DetachedCriteria, which is created by the client and executed by the server
*Develop a custom "Preload-Pattern", i.e. a way for the client to specify explicitly which properties to preload.
A: I have used 1 in the past and it worked well.
A: I think number 5 is why there is a "fetch" clause in HQL. Could you use that or is the problem more complex?
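For what it's worth, a minimal sketch of that idea (the entity and collection names are assumptions based on the question):
// Eagerly initialize the articles collection in the same query,
// so the detached Customer objects can be returned fully populated.
List customers = session
        .createQuery("select distinct c from Customer c left join fetch c.boughtArticles")
        .list();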
A: I've been at a customer who standardised its' projects on #5 and it worked really well.
The final argument of a service call was a comma-separated list of all properties to be loaded, for example:
CustomerService.getCustomerById(id, "parent, address, address.city")
I believe they used the fetch clause for this.
I implemented the same idea once for jpa using PropertyUtils to trigger the lazy loading.
A: This remote service... is it another part of the same application? if so, there's no problem sharing classes and sending instances back and forth (same goes for RMI, using stubs).
Then there's the option of accessing a web-service or something similar, where the two applications communicating share no classes, and this makes things even trickier. I'm about to face this problem in the near future, and I'll be very interested to know the answer.
Anyway, my vote is for DTOs. I think they provide the best and easiest solution, where applicable, and help keep the code simple and maintainable.
Yuval =8-)
A: If your remote service only exists to provide your client with data, then switching off lazy loading on all the Hibernate entities might help.
Personally, however, I think DTOs are the right way to go. By expressing your remote interface in terms of DTOs you are sure that you get everything you need, and nothing you did not expect.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Manualy choose an interface on TCP to send data I have a TCP server that is listening on a particular interface only. I want that after the accept() call if the incoming connection was from xxx.xxx.xxx.0 subnet then the data should go through intf1 otherwise the data should be sent through intf2 which is where my server is listening.
Both the interfaces have the same route settings and priority. All the networks are accessible from both the interfaces.
One way out could be to add a new route through intf1 with higher priority, but I want this to be a code based decision.
The code is in pure C written on Linux.
A: While it's not exactly the pure C option that you're looking for perhaps you could use an iptables rule upon receipt of the accept().
(although a quick look at /lib/iptables shows that you might be in luck)
I'm imagining a rule that would redirect all tcp traffic to your nominated device. You could possibly even make the rule aware of the socket state so that you didn't need to nominate the interface after the accept().
All of this applies until I spot the obvious flaw in the scheme, slap my head and say "D'oh!"
A: Unfortunately your options are limited, as the TCP/IP stack is implemented in the kernel and your interface is the routing table. Your best bet is to manually assign a route.
You could also use the netlink library to add/remove routes on-the-fly, but some portion of the TCP packets are going to go across the 'wrong' interface until the call is made. You will probably come up against routing issues when your packets with the original interface's address are emitted via another interface.
A: You can change the route in your program with rtnetlink.
You can modify all the parameters you want. See also netdevice.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Problems with IronPython Studio and PictureBox Right, so I'm having a go at Iron Python Studio as I'm quite familiar with VS2005 and want to try Python with an integrated GUI designer. But as soon as I add a PictureBox I'm in trouble. When running the project I get complaints about BeginInit and soon enough the form designer stops working.
Is this because I'm running .NET 3.5 or some other compatibility issue? Couldn't find anything at the Iron Python Studio site
A: I'm having the same problem. What you can do is manually remove the BeginInit() and EndInit() calls, it should work fine then.
A: Have you checked out IronPython Blog on MSDN? You could probably drill down from there and find a definitive answer. If you do, be sure you update your question here!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Is Approximate Nearest Neighbour the fastest feature matching in Computer Vision? When using feature descriptors [like SIFT, SURF] - is Approximate Nearest Neighbour the fastest method to do matching between images?
A: You should check out pyramid match kernel, which is one of the most successful algorithms for image matching with local features so far. It has a linear time complexity, as opposed to comparing every feature in image A to every feature in image B, which is O(n^2). There is also a free implementation.
A: I'd say that Euclidean distance based nearest neighbor would be the easiest to implement, but not necessarily the fastest.
I'd agree that approximate nearest neighbor or 'best bin first' would be the quickest at identifying which image in your background set most closely resembles the probe image.
If you're trying to identify a single object in the image, things will be a little more difficult.
A: You can also see FLANN - Fast Library for Approximate Nearest Neighbors
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Is there any IIS equivalent to Tomcat? I want to test ASP.NET applications to get the feel for the MVC extension and compare that to what I can do today with Grails or Rails.
The trouble is that being in a corporate environment, I can't install IIS on my workstation, nor on my DEV server. And - you guessed it - Visual Studio is not to be considered at the moment (I guess for my investigations I'll stick with SharpDevelop and the .NET SDK for the time being).
On the Java side, I could unzip some Tomcat distribution in any folder and hit go.
Is there any equivalent in the IIS world, like a lightweight ASP.NET host?
Thanks,
Rollo
A: UltiDev Cassini Web Server
A: cassini runs locally. I'll get a link..
Edit: Here's the link to the Cassini Web Server
A: UltiDev recently started shipping test builds of the Cassini replacement - UltiDev Web Server Pro. It requires elevated/admin privileges to be installed, but it can be downloaded for free. It's quite advanced, it's closer to IIS than Cassini. See screenshots.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: I am losing periods in an email sent using Java Mail I am sending newsletters from a Java server and one of the hyperlinks is arriving missing a period, rendering it useless:
Please print your <a href=3D"http://xxxxxxx.xxx.xx.edu=
au//newsletter2/3/InnovExpoInviteVIP.pdf"> VIP invitation</a> for future re=
ference and check the Innovation Expo website <a href=3D"http://xxxxxxx.xx=
xx.xx.edu.au/2008/"> xxxxxxx.xxxx.xx.edu.au</a> for updates.
In the example above the period was lost between edu and au on the first hyperlink.
We have determined that the mail body is being line wrapped and the wrapping splits the line at the period, and that it is illegal to start a line with a period in an SMTP email:
https://www.rfc-editor.org/rfc/rfc2821#section-4.5.2
My question is this - what settings should I be using to ensure that the wrapping is period friendly and/or not performed in the first place?
UPDATE: After a lot of testing and debugging it turned out that our code was fine - the client's Linux server had shipped with a very old Java version and the old Mail classes were still in one of the lib folders and getting picked up in preference to ours.
JDKs prior to 1.2 have this bug.
A: From an SMTP perspective, you can start a line with a period but you have to send two periods instead. If the SMTP client you're using doesn't do this, you may encounter the problem you describe.
It might be worth trying an IP sniffer to see where the problem really is. There are likely at least two separate SMTP transactions involved in sending that email.
A: I had a similar problem in HTML emails: mysterious missing periods, and in one case a strangely truncated message. JavaMail sends HTML email using the quoted-printable encoding which wraps lines at any point (i.e. not only on whitespace) so that no line exceeds 76 characters. (It uses an '=' at the end of the line as a soft carriage return, so the receiver can reassemble the lines.) This can easily result in a line beginning with a period, which should be doubled. (This is called 'dot-stuffing') If not, the period will be eaten by the receiving SMTP server or worse, if the period is the only character on a line, it will be interpreted by the SMTP server as the end of the message.
I tracked it down to the GNU JavaMail 1.1.2 implementation (aka classpathx javamail). There is no newer version of this implementation and it hasn't been updated for 4 or 5 years. Looking at the source, it partially implements dot-stuffing -- it tries to handle the period on a line by itself case, but there's a bug that prevents even that case from working.
Unfortunately, this was the default implementation on our platform (Centos 5), so I imagine it is also the default on RedHat.
The fix on Centos is to install Sun's (or should I now say Oracle's?) JavaMail implementation (I used 1.4.4), and use the Centos alternatives command to install it in place of the default implementation. (Using alternatives ensures that installing Centos patches won't cause a reversion to the GNU implmentation.)
A: Make sure all your content is RFC2045 friendly by virtue of quoted-printable.
Use the MimeUtility class in a method like this.
private String mimeEncode(String input)
{
    ByteArrayOutputStream bOut = new ByteArrayOutputStream();
    OutputStream out;
    try
    {
        out = MimeUtility.encode(bOut, "quoted-printable");
        out.write(input.getBytes());
        out.flush();
        out.close();
        bOut.close();
    } catch (MessagingException e)
    {
        log.error("Encoding error occured:", e);
        return input;
    } catch (IOException e)
    {
        log.error("Encoding error occured:", e);
        return input;
    }
    return bOut.toString();
}
A: I am having a similar problem, but using ASP.NET 2.0. Per application logs, the link in the email is correct 'http://www.3rdmilclassrooms.com/', however when the email is received by the client the link is missing a period 'http://www3rdmilclassrooms.com'
I have done all I can to prove that the email is being sent with the correct link. My suspicion is that it is the email client or spam filter software that is modifying the hyperlink. Is it possible that an email spam filtering software would do that?
A: I am not sure, but it looks a bit as if your email is getting encoded. 0x3D is hexadecimal for decimal 61, which is the equals character ('=').
What classes/libary are you using to send the emails? Check the settings regarding encoding.
A: Are you setting the Mime type to "text/html"? You should have something like this:
BodyPart bp = new MimeBodyPart();
bp.setContent(message,"text/html");
A: I had a similar problem sending email programmatically over to a yahoo account. They would get one very long line of text and add their own linebreaks in the HTML email, thinking that that wouldnt cause a problem, but of course it would.
the trick was not to try to send such a long line. Because HTML emails don't care about linebreaks, you should add your own every few blocks, or just before the offending line, to ensure that your URL doesn't get split at a period like that.
I had to change my ASP VB from
var html;
html = "Blah Blah Blah Blah ";
html = html & " More Text Here....";
to
var html;
html = "Blah Blah Blah Blah " & VbCrLf;
html = html & " More Text Here....";
And that's all it took to clean up the output as was being processed on their end.
A: As Greg pointed out, the problem is with your SMTP client, which does not do dot-stuffing (doubling the leading dot).
It appears that the e-mail is being encoded in quoted-printable. Switching to base64 (I assume you can do it with the current java mime implementation) will fix the problem.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: NHibernate transaction and race condition I've got an ASP.NET app using NHibernate to transactionally update a few tables upon a user action. There is a date range involved whereby only one entry to a table 'Booking' can be made such that exclusive dates are specified.
My problem is how to prevent a race condition whereby two user actions occur almost simultaneously and cause multiple entries in 'Booking' for >1 date. I can't check just prior to calling .Commit() because I think that will still leave me with a race condition?
All I can see is to do a check AFTER the commit and roll the change back manually, but that leaves me with a very bad taste in my mouth! :)
booking_ref (INT) PRIMARY_KEY AUTOINCREMENT
booking_start (DATETIME)
booking_end (DATETIME)
A: *
*make the isolation level of your transaction SERIALIZABLE (session.BeginTransaction(IsolationLevel.Serializable)) and check and insert in the same transaction (a sketch of this appears after this answer). You should not in general set the isolation level to serializable, just in situations like this.
or
*
*lock the table before you check and eventually insert. You can do this by firing a SQL query through nhibernate:
session.CreateSQLQuery("SELECT null as dummy FROM Booking WITH (tablockx, holdlock)").AddScalar("dummy", NHibernateUtil.Int32);
This will lock only that table for selects / inserts for the duration of that transaction.
Hope it helped
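A minimal C# sketch of the first option (the Booking entity, its BookingStart/BookingEnd properties, and the sessionFactory variable are assumptions based on the table in the question):
// Requires: using System.Data; using NHibernate;
using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction(IsolationLevel.Serializable))
{
    // Check for an overlapping booking and insert inside the same transaction
    long overlapping = session.CreateQuery(
            "select count(b) from Booking b where b.BookingStart < :end and b.BookingEnd > :start")
        .SetDateTime("start", requestedStart)
        .SetDateTime("end", requestedEnd)
        .UniqueResult<long>();

    if (overlapping == 0)
    {
        session.Save(new Booking { BookingStart = requestedStart, BookingEnd = requestedEnd });
    }

    // Serializable isolation prevents a concurrent insert sneaking in between the check and the save
    tx.Commit();
}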
A: The above solutions can be used as an option. If I send 100 requests using Parallel.For while the transaction level is serializable, yes, there are no duplicated request ids, but 25 of the transactions fail. That was not acceptable for my client, so we fixed the problem by storing only the request id and adding a unique index on another table used as a temporary store.
A: Your database should manage your data integrity.
You could make your 'date' column unique. Then, if 2 threads try to insert the same date, one will throw a unique key violation and the other will succeed.
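A minimal sketch of that constraint against the table from the question (the constraint name is illustrative, and it only covers the start date - full range exclusivity needs more than a single-column constraint):
-- A second booking starting on the same date will now fail with a unique key violation
ALTER TABLE Booking
    ADD CONSTRAINT UQ_Booking_Start UNIQUE (booking_start);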
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Schema reference for IIS programmatic administration Where can I find the IIS object schema? All I found from MSDN was a picture of the class hierarchy.
To be clear, I want to configure IIS through either WMI or ADSI and I'm looking for something like the Active Directory schema, only for IIS. I want a list of all the objects I can configure, which objects they can be contained in and what their properties are.
A: While I can't point you to a definitive document, you might find this even better: WMI code creator:
http://www.microsoft.com/downloads/details.aspx?FamilyID=2cc30a64-ea15-4661-8da4-55bbc145c30e&DisplayLang=en
I've used this tool to query IIS WMI objects and it will even script the queries or WMI method calls for you in C# (I used them as examples).
If you run it on a machine with IIS installed, you'll see the root\MicrosoftIISv2 namesspace, which has tons of queryable objects in it, and all their properties and methods. You are probably looking for something like the IIsWebVirtualDirSetting object - try starting with that as it's got most of the properties you'll see on the IIS config panel.
A: Can't you find some information here : http://msdn.microsoft.com/en-us/library/aa737439.aspx ?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to compare files with same names in two different directories using a shell script Before moving on to use SVN, I used to manage my project by simply keeping a /develop/ directory and editing and testing files there, then moving them to the /main/ directory. When I decided to move to SVN, I needed to be sure that the directories were indeed in sync.
So, what is a good way to write a shell script [ bash ] to recursively compare files with the same name in two different directories?
Note: The directory names used above are for sample only. I do not recommend storing your code in the top level :).
A: diff -rqu /develop /main
It will only give you a summary of changes that way :)
If you want to see only new/missing files
diff -rqu /develop /main | grep "^Only"
If you want to get them bare:
diff -rqu /develop /main | sed -rn "/^Only/s/^Only in (.+?): /\1/p"
A: The diff I have available allows recursive differences:
diff -r main develop
But with a shell script:
( cd main ; find . -type f -exec diff {} ../develop/{} ';' )
A: The diff command has a -r option to recursively compare directories:
diff -r /develop /main
A: [I read somewhere that answering your own questions is OK, so here goes :) ]
I tried this, and it worked pretty well
[/]$ cd /develop/
[/develop/]$ find | while read line; do diff -ruN "/main/$line" $line; done |less
You can choose to compare only specific files [e.g., only the .php ones] by editing the above line as
[/]$ cd /develop/
[/develop/]$ find -name "*.php" | while read line; do diff -ruN "/main/$line" $line; done |less
Any other ideas?
A: here is an example of a (somewhat messy) script of mine, dircompare.sh, which will:
*
*sort files and directories in arrays depending on which directory they occur in (or both), in two recursive passes
*The files that occur in both directories, are sorted again in two arrays, depending on if diff -q determines if they differ or not
*for those files that diff claims are equal, show and compare timestamps
Hope it can be found useful - Cheers!
EDIT2: (Actually, it works fine with remote files - the problem was unhandled Ctrl-C signal during a diff operation between local and remote file, which can take a while; script now updated with a trap to handle that - however, leaving the previous edit below for reference):
EDIT: ... except it seems to crash my server for a remote ssh directory (which I tried using over ~/.gvfs)... So this is not bash anymore, but an alternative I guess is to use rsync, here's an example:
$ # get example revision 4527 as testdir1
$ svn co https://openbabel.svn.sf.net/svnroot/openbabel/openbabel/trunk/data@4527 testdir1
$ # get earlier example revision 2729 as testdir2
$ svn co https://openbabel.svn.sf.net/svnroot/openbabel/openbabel/trunk/data@2729 testdir2
$ # use rsync to generate a list
$ rsync -ivr --times --cvs-exclude --dry-run testdir1/ testdir2/
sending incremental file list
.d..t...... ./
>f.st...... CMakeLists.txt
>f.st...... MACCS.txt
>f..t...... SMARTS_InteLigand.txt
...
>f.st...... atomtyp.txt
>f+++++++++ babel_povray3.inc
>f.st...... bin2hex.pl
>f.st...... bondtyp.h
>f..t...... bondtyp.txt
...
Note that:
*
*To get the above, you mustn't forget trailing slashes / at the end of directory names in rsync
*--dry-run - simulate only, don't update/transfer files
*-r - recurse into directories
*-v - verbose (but not related to file changes info)
*--cvs-exclude - ignore .svn files
*-i - "--itemize-changes: output a change-summary for all updates"
Here is a brief excerpt of man rsync that explains the information shown by -i (for instance, the >f.st...... strings above):
The "%i" escape has a cryptic output that is 11 letters long.
The general format is like the string YXcstpoguax, where Y is
replaced by the type of update being done, X is replaced by the
file-type, and the other letters represent attributes that may
be output if they are being modified.
The update types that replace the Y are as follows:
o A < means that a file is being transferred to the remote
host (sent).
o A > means that a file is being transferred to the local
host (received).
o A c means that a local change/creation is occurring for
the item (such as the creation of a directory or the
changing of a symlink, etc.).
...
The file-types that replace the X are: f for a file, a d for a
directory, an L for a symlink, a D for a device, and a S for a
special file (e.g. named sockets and fifos).
The other letters in the string above are the actual letters
that will be output if the associated attribute for the item is
being updated or a "." for no change. Three exceptions to this
are: (1) a newly created item replaces each letter with a "+",
(2) an identical item replaces the dots with spaces, and (3) an
....
A bit cryptic, indeed - but at least it shows basic directory comparison over ssh. Cheers!
A: The classic (System V Unix) answer would be dircmp dir1 dir2, which was a shell script that would list files found in either dir1 but not dir2 or in dir2 but not dir1 at the start (first page of output, from the pr command, so paginated with headings), followed by a comparison of each common file with an analysis (same, different, directory were the most common results).
This seems to be in the process of vanishing - I have an independent reimplementation of it available if you need it. It's not rocket science (cmp is your friend).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
}
|
Q: Subversion Berkeley DB broken, recovery failed I've got a Subversion repository, backed by the berkeley DB. Occasionally it breaks down due to some locks and such not being released, but this morning it was impossible to recover it using the 'svnadmin recover' command. Instead it failed with the following error:
svnadmin: Berkeley DB error for filesystem 'db' while opening 'nodes' table:
Invalid argument
svnadmin: bdb: file nodes (meta pgno = 0) has LSN [1083][429767].
svnadmin: bdb: end of log is [1083][354707]
svnadmin: bdb: db/nodes: unexpected file type or format
I'm going to restore the repository from the last known good backup, but it would be good to know if there is a way this repository could be fixed.
edit: even the db_recover utility does not make a difference. It shows recovery is completed, but the same error persists when verifying the repository using svnadmin.
A:
I've got a Subversion repository, backed by the berkeley DB.
Sorry to hear that. I would suggest that at your earliest convenience, you dump that repository (svnadmin dump) and reload it into a new one backed by FSFS (svnadmin load).
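A rough sketch of that migration (the paths are illustrative):
# Dump the old BDB-backed repository to a portable dump file
svnadmin dump /path/to/old-repo > repo.dump

# Create a new repository using the FSFS backend and load the dump into it
svnadmin create --fs-type fsfs /path/to/new-repo
svnadmin load /path/to/new-repo < repo.dump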
A: have you tried db_recover? the latter tends to be able to correct more issues than svnadmin
A: For those wanting to try the db_recover function, you first need to find the right berkeley DB version, and then use the proper version of the berkeley DB software. Then run the recover utility:
db_recover -c -v -h <path to subversion db dir>
A: I know this question is very old, but there is another alternative which worked for me:
svnadmin recover <svn path>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Using **kwargs with SimpleXMLRPCServer in python I have a class that I wish to expose as a remote service using pythons SimpleXMLRPCServer. The server startup looks like this:
server = SimpleXMLRPCServer((serverSettings.LISTEN_IP,serverSettings.LISTEN_PORT))
service = Service()
server.register_instance(service)
server.serve_forever()
I then have a ServiceRemote class that looks like this:
class ServiceRemote(object):
    def __init__(self, ip, port):
        self.rpcClient = xmlrpclib.Server('http://%s:%d' % (ip, port))

    def __getattr__(self, name):
        # forward all calls to the rpc client
        return getattr(self.rpcClient, name)
So all calls on the ServiceRemote object will be forwarded to xmlrpclib.Server, which then forwards it to the remote server. The problem is a method in the service that takes named varargs:
@useDb
def select(self, db, fields, **kwargs):
pass
The @useDb decorator wraps the function, creating the db before the call and opening it, then closing it after the call is done before returning the result.
When I call this method, I get the error "call() got an unexpected keyword argument 'name'". So, is it possible to call methods taking variable named arguments remotely? Or will I have to create an override for each method variation I need.
Thanks for the responses. I changed my code around a bit so the question is no longer an issue. However now I know this for future reference if I indeed do need to implement positional arguments and support remote invocation. I think a combination of Thomas and praptaks approaches would be good. Turning kwargs into positional args on the client through xmlrpclient, and having a wrapper on methods serverside to unpack positional arguments.
A: XML-RPC doesn't really have a concept of 'keyword arguments', so xmlrpclib doesn't try to support them. You would need to pick a convention, then modify xmlrpclib._Method to accept keyword arguments and pass them along using that convention.
For instance, I used to work with an XML-RPC server that passed keyword arguments as two arguments, '-KEYWORD' followed by the actual argument, in a flat list. I no longer have access to the code I wrote to access that XML-RPC server from Python, but it was fairly simple, along the lines of:
import xmlrpclib
_orig_Method = xmlrpclib._Method
class KeywordArgMethod(_orig_Method):
    def __call__(self, *args, **kwargs):
        if args and kwargs:
            raise TypeError, "Can't pass both positional and keyword args"
        args = list(args)
        for key in kwargs:
            args.append('-%s' % key.upper())
            args.append(kwargs[key])
        return _orig_Method.__call__(self, *args)
xmlrpclib._Method = KeywordArgMethod
It uses monkeypatching because that's by far the easiest method to do this, because of some clunky uses of module globals and name-mangled attributes (__request, for instance) in the ServerProxy class.
A: You can't do this with plain xmlrpc since it has no notion of keyword arguments. However, you can superimpose this as a protocol on top of xmlrpc that would always pass a list as first argument, and a dictionary as a second, and then provide the proper support code so this becomes transparent for your usage, example below:
Server
from SimpleXMLRPCServer import SimpleXMLRPCServer
class Server(object):
def __init__(self, hostport):
self.server = SimpleXMLRPCServer(hostport)
def register_function(self, function, name=None):
def _function(args, kwargs):
return function(*args, **kwargs)
_function.__name__ = function.__name__
self.server.register_function(_function, name)
def serve_forever(self):
self.server.serve_forever()
#example usage
server = Server(('localhost', 8000))
def test(arg1, arg2):
print 'arg1: %s arg2: %s' % (arg1, arg2)
return 0
server.register_function(test)
server.serve_forever()
Client
import xmlrpclib
class ServerProxy(object):
def __init__(self, url):
self._xmlrpc_server_proxy = xmlrpclib.ServerProxy(url)
def __getattr__(self, name):
call_proxy = getattr(self._xmlrpc_server_proxy, name)
def _call(*args, **kwargs):
return call_proxy(args, kwargs)
return _call
#example usage
server = ServerProxy('http://localhost:8000')
server.test(1, 2)
server.test(arg2=2, arg1=1)
server.test(1, arg2=2)
server.test(*[1,2])
server.test(**{'arg1':1, 'arg2':2})
A: As far as I know, the underlying protocol doesn't support named varargs (or any named args for that matter). The workaround for this is to create a wrapper that will take the **kwargs and pass it as an ordinary dictionary to the method you want to call. Something like this
Server side:
def select_wrapper(self, db, fields, kwargs):
"""accepts an ordinary dict which can pass through xmlrpc"""
return select(self,db,fields, **kwargs)
On the client side:
def select(self, db, fields, **kwargs):
"""you can call it with keyword arguments and they will be packed into a dict"""
return self.rpcClient.select_wrapper(self,db,fields,kwargs)
Disclaimer: the code shows the general idea; you could do it a bit more cleanly, for example by writing a decorator to do the wrapping.
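For instance, such a decorator might look roughly like this (a sketch of the idea above, assuming the convention of passing kwargs as a trailing dict; it is not taken from the original code):
from functools import wraps
def expand_kwargs_dict(func):
    # If the last positional argument arriving over XML-RPC is a dict,
    # unpack it into keyword arguments before calling the real method.
    @wraps(func)
    def wrapper(*args):
        if args and isinstance(args[-1], dict):
            return func(*args[:-1], **args[-1])
        return func(*args)
    return wrapper
You would then register the decorated method on the server, while the client-side select shown above keeps packing its keyword arguments into a dict.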
A: Using the above advice, I created some working code.
Server method wrapper:
def unwrap_kwargs(func):
def wrapper(*args, **kwargs):
print args
if args and isinstance(args[-1], list) and len(args[-1]) == 2 and "kwargs" == args[-1][0]:
return func(*args[:-1], **args[-1][1])
else:
return func(*args, **kwargs)
return wrapper
Client setup (do once):
_orig_Method = xmlrpclib._Method
class KeywordArgMethod(_orig_Method):
def __call__(self, *args, **kwargs):
args = list(args)
if kwargs:
args.append(("kwargs", kwargs))
return _orig_Method.__call__(self, *args)
xmlrpclib._Method = KeywordArgMethod
I tested this, and it supports methods with fixed, positional and keyword arguments.
A: As Thomas Wouters said, XML-RPC does not have keyword arguments. Only the order of arguments matters as far as the protocol is concerned and they can be called anything in XML: arg0, arg1, arg2 is perfectly fine, as is cheese, candy and bacon for the same arguments.
Perhaps you should simply rethink your use of the protocol? Using something like document/literal SOAP would be much better than a workaround such as the ones presented in other answers here. Of course, this may not be feasible.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119802",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: Transform .HBM model to annotated pojos We have our domain model declared in rusty old hbm files, we wish to move to POJOs annotated with the javax.persistence.* annotations.
Has anyone had experience doing so?
Are there tools that we could employ?
A: You could use the hbm2java Ant task from hibernate-tools.jar, part of the toolset known as Hibernate Tools. hbm2java will generate JPA-annotated POJOs from hbm files.
See http://www.hibernate.org/hib_docs/tools/reference/en/html/ant.html
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Antivirus and file access conflict: good programming practices? Sometimes we experience "access denied" errors because the antivirus is handling a file at the same moment our program wants to write/rename/copy it.
This happens rarely but is frustrating because I haven't found a good way to deal with it: technically our response is to change our source code to implement a kind of retry mechanism... but we are not satisfied... it smells a little... and we can't afford to tell our customers "please turn off your antivirus so our software can work properly"...
So if you have already experienced such issues, please let me know how you dealt with them.
Thanks!
A: There is really very little scope for saying "turn avs off". That just won't fly in a lot of offices so we've done exactly what you've said: build a retry-queue.
Files that are locked are added to the queue. When the original operation ends, we pause for 1 second and sequentially pop through the queue. Files that fail the second time are added to a second queue and after the first completes, we wait 3 seconds and pop through the second.
Files that fail the second queue (the third attempt) are reported.
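To illustrate the idea only (a Python sketch, not the poster's actual code; the pauses mirror the 1-second and 3-second waits described above):
import shutil
import time
def copy_with_retry_queue(jobs, delays=(0, 1, 3)):
    # jobs is a list of (source, destination) pairs; failures are requeued
    # and retried after each pause, and whatever still fails at the end is
    # returned so the caller can report it.
    failed = list(jobs)
    for delay in delays:
        if not failed:
            break
        time.sleep(delay)
        still_failed = []
        for src, dst in failed:
            try:
                shutil.copy2(src, dst)
            except (IOError, OSError):
                still_failed.append((src, dst))
        failed = still_failed
    return failed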
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Dynamic Client Script I need to write some JavaScript. It is supposed to validate whether a checkbox is selected on the page or not. The problem here is that the checkbox is inside a grid and is generated dynamically, the reason being that the number of checkboxes that need to be rendered is not known at design time. So the id is known only on the server side.
A: You have to generate your JavaScript too, or at least a JavaScript data structure (array) which must contain the checkboxes you need to control.
Alternatively you can create a containing element and iterate with JS over every child input element of type checkbox.
A: Here is a thought:
As indicated by Anonymous, you can generate the JavaScript; if you are in ASP.NET you get some help from the RegisterClientScriptBlock() method. See MSDN on Injecting Client-Side Script.
Also, you could write, or generate, a JavaScript function that takes a checkbox as a parameter, and add an onClick attribute to your checkbox definition that calls your function and passes the checkbox itself as the parameter:
function TrackMyCheckbox(ck)
{
//keep track of state
}
<input type="checkbox" onClick="TrackMyCheckbox(this);".... />
A: If it's your only checkbox you can do a getElementsByTagName() call to get all inputs and then iterate through the returned array looking for the appropriate type value (i.e. checkbox).
A: There is not much detail in the question, but assuming the HTML grid is generated on the server side (not in JavaScript):
Add a class to the checkboxes you want to ensure are checked, and loop through the DOM looking for all checkboxes with that class. In jQuery:
HTML:
<html>
...
<div id="grid">
<input type="checkbox" id="checkbox1" class="must-be-checked" />
<input type="checkbox" id="checkbox2" class="not-validated" />
<input type="checkbox" id="checkbox3" class="must-be-checked" />
...
<input type="checkbox" id="checkboxN" class="must-be-checked" />
</div>
...
</html>
Javascript:
<script type="text/javascript">
// This will show an alert if any checkboxes with the class 'must-be-checked'
// are not checked.
// Checkboxes with any other class (or no class) are ignored
if ($('#grid .must-be-checked:not(:checked)').length > 0) {
alert('some checkboxes not checked!');
}
</script>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119818",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: how to override Ctrl+V in TinyMCE I need to clean up the HTML of text pasted into TinyMCE by passing it to a web service and then getting it back into the textarea.
So I need to override the Ctrl+V in TinyMCE to capture the text, do a background request, and on return continue with whatever the paste handler was for TinyMCE.
First off, where is TinyMCE's Ctrl+V handler, and is there a non-destructive way to override it? (instead of changing the source code)
A: You could write a plug-in that handles the ctrl+v event and passes it through or modify the paste plug-in. The following code is found at plugins/paste/editor_plugin.js and it handles the ctrl+v event.
handleEvent : function(e) {
// Force paste dialog if non IE browser
if (!tinyMCE.isRealIE && tinyMCE.getParam("paste_auto_cleanup_on_paste", false) && e.ctrlKey && e.keyCode == 86 && e.type == "keydown") {
window.setTimeout('tinyMCE.selectedInstance.execCommand("mcePasteText",true)', 1);
return tinyMCE.cancelEvent(e);
}
return true;
},
Here is some more information about creating plug-ins for tinyMCE.
A: Tiny Editor has a plugin called "paste".
When using it, you can define two functions in the init-section
/**
* This option enables you to modify the pasted content BEFORE it gets
* inserted into the editor.
*/
paste_preprocess : function(plugin, args)
{
//Replace empty styles
args.content = args.content.replace(/<style><\/style>/gi, "");
}
and
/**
* This option enables you to modify the pasted content before it gets inserted
* into the editor ,but after it's been parsed into a DOM structure.
*
* @param plugin
* @param args
*/
paste_postprocess : function(plugin, args) {
var paste_content= args.node.innerHTML;
console.log('Node:');
console.log(args.node);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: GUI apps in javascript without a browser? I would like to use javascript to develop general-purpose GUI applications. Initially these are to run on Windows, but I would like them to ultimately be cross-platform.
Is there a way to do this without having to make the application run in a browser?
A: Check out Adobe AIR.
From Wikipedia:
Adobe AIR is a cross-platform runtime environment for building rich Internet applications using Adobe Flash, Adobe Flex, HTML, or Ajax, that can be deployed as a desktop application.
Also check out Mozilla Prism (in beta).
A: JsLibs
Today I came across this: http://code.google.com/p/jslibs/
(from DZone)
JS Libs seems to meet my requirement. I'll have a look, and if I find that it's interesting, I'll post back here.
A: You could try to combine something like SUN's Lively Kernel with Mozilla's Prism.
*
*Lively Kernel is a GUI Stack written entirely in JavaScript using SVG for display purposes.
*Prism is a way to launch web applications without showing the browser in which they run.
Very bleeding edge though, use at your own risk. :-)
A: XUL Runner might be an answer, but I'm afraid I can't speak from experience.
A: You can use nodeGUI . By which you can use all modules of nodejs. And you can style your app with css without any html file. Check it out :- https://docs.nodegui.org/
A: JScript .NET might be able to do it. It was intended for ASP .NET and .NET may not be cross platform the way you want it. However, more interest might create more development.
JScript .NET:
http://msdn.microsoft.com/en-us/library/ms974588.aspx
A: Try Titanium. It's a development platform that allows you to build apps for mobile and desktop using the common web languages (HTML, JavaScript, PHP, etc).
It's open source!
A: You can use the rhino JavaScript interpreter from Mozilla. It allows JavaScript to access any of the Java libraries, including Swing for GUIs.
http://www.mozilla.org/rhino/
A: Try AIR, you can even use your JS toolkit of choice
Using it with dojo look at this: http://dojocampus.org/content/2008/04/02/dojo-on-air-a-fancy-file-uploader/
A: Well, if you didn't want your software to be cross-platform, then you had two options on Windows:
*
*HTML Application (HTA) .hta (tutorial)
*Windows Script/Scripting Host (WSH) and the XML like Windows Script File (WSF) .wsf format, to mix JScript and VBScript and run them with cscript.exe/wscript.exe (example)
Notes:
*
*Option one actually uses Internet Explorer and mshta.exe under the hood, so I don't think it meets your requirements.
*With option two, JScript doesn't have all of VBScript's GUI functionality, so in the example provided above you actually have to define VBScript Subs to get GUI elements.
*Microsoft's JScript and mainstream dialects of JavaScript are not fully compatible.
P.S. A great tutorial about WSH, WSF, VBScript, JScript ... here
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: How to default the working directory for JUnit launch configurations in Eclipse? Our Java code (not the test code) reads files from the current directory, which means the working directory needs to be set properly whenever we run the code.
When launching a JUnit test from within Eclipse, a launch configuration automatically gets created. The problem is, that the working directory in that launch configuration is always by default the root project directory which is always wrong, the test fails, I have to open the launch configuration dialog, change the working directory and relaunch the test. This is very annoying. The same thing happens when I run a single test method.
I've already considered these:
*
*Changing the current directory from the test code - not possible by design.
*When opening a file, pass a parent directory parameter - too difficult, since this would affect lots of places.
*Use the Copy launch configuration feature of Eclipse to create new launch configurations from existing ones that already have a correct working directory set. This doesn't really make sense here, since I would like to launch a test or a test method quickly, just by invoking "run this test / method as JUnit test".
All in all, it looks like it's responsibility of Eclipse, not the code.
Is there a way to set the default working directory for all future, newly created JUnit launch configurations?
A: With Eclipse 3 you can set your working directory. Go to your run/debug configuration -> Arguments tab. Under "Working directory" select "Other" and enter the root of your test.
A: This is a subjective answer:
I believe you're doing your tests wrong: you shouldn't be loading the files from JUnit using relative or absolute paths. Instead, have them as resources in the project (add them to the build path) and load them as resources in the JUnit tests. This way, if something changes on the filesystem, or someone uses a different filesystem or IDE but still has them in the build path (as source folders), you're not going to have a problem.
I'm unsure if this is what you mean but if you really want to change it go to the Run Configuration options -> Your JUnit -> Arguments tab, at the bottom (Eclipse 3.4) you see Working Directory, click 'other' and change it from there.
A: As far as I can tell, there's no way of changing the default working directory for future JUnit launch configurations in Eclipse 3.4. This issue has also been reported as a bug in the Eclipse bug database.
However, it is possible in IDEA. There's the Edit Defaults button for setting all kinds of defaults for each launch config type separately.
A: I haven't found a possibility to do this, but what you can do is to use:
getClass().getResourceAsStream(filename);
getClass().getClassLoader().getResourceAsStream(filename);
These methods locate a resource on the classpath.
The first one is relative to the location of the class, the second one is relative to any classpath "root entry". You can then for example add the project root directory to the classpath.
This does not work however if you want to write to a file as well.
A: If your tests depend on the current working directory, I think it is the responsibility of your tests to set up that working directory correctly, and to configure the classes under test to point to that directory.
If you have a superclass for most of your tests, write a constant within it.
Or: if you have a superclass for most of your tests, write a @Before setup method.
Or: if you have not a superclass for most of your tests, write a constant in some class of the testing codebase.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
}
|
Q: How to learn Java Webservices Please suggest some good resources to start writing Java Web services.
A: If you're using the Spring Framework, I suggest Spring-WS.
There is a very helpful reference guide which should get you started.
A: The standard way in Java to write a web service is to use Apache Axis.
If you are generating a web service client, then you need the WSDL (.xsd, .wsdl, etc) of the foreign web service, and then you can use wsdl2java (or preferably, the ANT task provided by axis-ant) to simply generate the code to do the communications and a model.
If you are generating a web service on the server side, then you can use Java2WSDL to turn a Java model into a web service implementation, although you will have to code the server side within the Impl class it generates. You can then easily deploy on Tomcat, etc, using the axis.war and the generated deploy.wsdd script.
There's plenty of documentation out there that will help.
A: This is a good starting point for REST and JAX-RS:
http://www.lunatech-research.com/archives/2008/03/20/restful-web-sevices-resteasy-jax-rs
A: A great place to start is Sang Shin's online course. There's an active online group as well as good slides, examples and exercises to complete. The great thing about this course is that there are timelines set for each component of the course, to help you figure out how much time to spend on a particular concept.
A: Apache Axis (http://ws.apache.org/axis) is easy to use and highly effective for basic web services in my experience.
The user guide should get you started: http://ws.apache.org/axis/java/user-guide.html
A: I highly recommend you start with the new specification, JAX-WS 2.0. It's a good idea to stick to the standards.
Sun provides a reference implementation that you can use.
Try the JAX-WS web site and then you can watch the Metro web site to see all the standard ws-* stack.
I'm using this tool to consume and provide services. It's fast, easy to use, customizable and the standard.
Enjoy it!
A: Check out Java Enterprise in a Nutshell; it has a good section on web services, describing both the J2EE framework specification and the Apache Axis implementation. Bear in mind that, while it may be popular, Axis is not the standard method, but something that was developed while standards were being finalised/refined.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: Which Design Pattern is best for Iterative development? Is there such a thing as having the most prefered design pattern for building applications in TDD or the iterative mode?
A: I think the question could be rewritten so it makes more sense in these words:
"Which architectural patterns and strategies are useful in order to achieve flexibility when using a Test-Driven and incremental development strategy?"
My answer would be: patterns that help you decouple your classes and components, like:
*
*Inversion of Control and Dependency Injection - Help you keep the dependencies between your classes and components detached from specific implementations, which are resolved only at runtime (or startup time), allowing you to use stubs both for not-yet-implemented functionality and for unit tests.
*Facades - Helps you isolate components providing well defined interfaces for interaction between them, reducing coupling.
*Factories and other creational patterns - They give you flexibility in the sections of your code responsible for instantiating objects.
Also remember that one of the mantras of incremental and iterative development is 'Do the simplest thing that could possibly work'. Don't over-engineer.
Does that make sense given what you asked?
A: Please don't mix different things. You use a pattern when it's applicable and saves you time, effort, and makes your code look more standard. It has nothing to do with your development methodology!
However, you may want to stress some things in your application architecture:
*
*Make things extremely modular. Embrace loose coupling.
*Define clear conceptual boundaries between modules. By conceptual I mean they should be clear from the start and feel natural. A random programmer asked about it should respond "Wow, it's obvious how you did it!".
*Start small. Don't try to produce ZOMG-this-will-be-the-best-and-most-universal-class-library-and-program-and-whatever. Make things work, and then extend, but only if necessary.
*Convince yourself of YAGNI (You ain't gonna need it). Don't do things you are not sure you have to do. It doesn't mean procrastination or something. It means don't do things because "I don't know, it may be useful in the future", "it's technically fancy", "I will include it just in case".
*DRY - don't repeat yourself. Make sure you don't run into code duplication problems. Think about code generators, good abstractions, and productive communication across the team.
A: I'm not sure that's a meaningful question. But I won't let that stop me...
It's entirely likely that specific patterns may become apparent as being appropriate to aspects of the design of your application as it evolves under your chosen agile process, but to (hopefully not mis-) quote Ron Jeffries, "the code will tell you".
Edit: But if you want a definitive answer, then Bridge. That's a good one. Or Visitor, I like that one too. Or most of the ones starting with "F". :)
A: Use a dynamic language like Python or Ruby to develop: You don't have to fight with many of the problems other languages have which are the reason for "design patterns" in the first place.
Dynamic languages in combination with automated testing will give results really quick so you know which direction to take. If you realize then that you should use a static language for performance reasons or whatever you can translate the dynamic software you have already built.
A: Design patterns are tools to help solve particular types of problems. The use of patterns is governed by the problems defined by the scope of requirements, not by a development methodology.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How do you read an image in Java when Toolkit.getDefaultToolkit() throws an AWTError? I am reading image files in Java using
java.awt.Image img = Toolkit.getDefaultToolkit().createImage(filePath);
On some systems this doesn't work, it instead throws an AWTError complaining about sun/awt/motif/MToolkit.
How else can you create a java.awt.Image object from an image file?
A: I read images using ImageIO.
Image img = ImageIO.read(in); // where 'in' is an InputStream (File and URL overloads also exist)
The javadoc will offer more info as well.
A: There are several static methods in ImageIO that allow you to read images from different sources. The most interesting in your case are:
BufferedImage read(ImageInputStream stream)
BufferedImage read(File input)
BufferedImage read(InputStream input)
I checked inside the code. It uses the ImageReader abstract class, and there are three implementors: JPEGReader, PNGReader and GIFReader. These classes and BufferedImage apparently do not use any native methods, so it should always work.
It seems that the AWTError you have is because you are running Java in a headless configuration, or because the windowing toolkit has some kind of problem. Without looking at the specific error it is hard to say, though. This solution will (probably) allow you to read the image, but depending on what you want to do with it, the AWTError might be thrown later as you try to display it.
A: On some systems adding "-Djava.awt.headless=true" as java parameter may help.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Shortcut to switch between markup and code Using Visual Studio 2008 Team Edition, is it possible to assign a shortcut key that switches between markup and code? If not, is it possible to assign a shortcut key that goes from code to markup?
A: The following is a macro taken from a comment by Lozza on https://blog.codinghorror.com/visual-studio-net-2003-and-2005-keyboard-shortcuts/. You just need to bind it to a shortcut of your choice:
Sub SwitchToMarkup()
Dim FileName
If (DTE.ActiveWindow.Caption().EndsWith(".cs")) Then
' switch from .aspx.cs to .aspx
FileName = DTE.ActiveWindow.Document.FullName.Replace(".cs", "")
If System.IO.File.Exists(FileName) Then
DTE.ItemOperations.OpenFile(FileName)
End If
ElseIf (DTE.ActiveWindow.Caption().EndsWith(".aspx")) Then
' switch from .aspx to .aspx.cs
FileName = DTE.ActiveWindow.Document.FullName.Replace(".aspx", ".aspx.cs")
If System.IO.File.Exists(FileName) Then
DTE.ItemOperations.OpenFile(FileName)
End If
ElseIf (DTE.ActiveWindow.Caption().EndsWith(".ascx")) Then
FileName = DTE.ActiveWindow.Document.FullName.Replace(".ascx", ".ascx.cs")
If System.IO.File.Exists(FileName) Then
DTE.ItemOperations.OpenFile(FileName)
End If
End If
End Sub
A: Not sure if this is what you mean, as I don't do ASPX development myself, but don't the F7 (show code) and Shift-F7 (show designer) default key bindings switch between code and design? They do in my VS2008 (on WinForms designable items) with the largely default C# key bindings I use.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How to improve the program coding I am a C# developer, still learning. I haven't learned all the features of C# 2.0 yet, and now the new version of C# is being released. How do we cope with this? What is the best way to keep up with the latest programming skills?
A: As Steve M said: Read. But don't stop there. You also have to write.
First: Write Code. Try out the stuff you read about. Look at open source software and how things are done by others. Try those new techniques out.
Second: Write text. Write a blog post or whatever on how to do something. You had a problem and you solved it, now write down what the problem was, what ideas for solutions you had and what solution you picked for which reasons. Get people to comment, get peer review of your own thinking that way.
A: *
*Read good quality code. Locate other projects (open source or proprietary projects within your organizations) and look for how other engineers have approached particular issues. Look for idioms, design patterns, styles that you find particularly good and adopt them in your coding practices.
*Concentrate on the basics. Sure knowing how to perform a particular operation best in C# is good, but knowing how and when to abstract, avoiding duplication, following style rules, and giving your identifiers appropriate names are more important skills. These are also more valuable because you can apply them to any language.
*Improve your code. When you find in code something complicated or suboptimal try to think of a better way to write it. For instance, if you write a lot of boilerplate code, examine how you can use abstraction mechanisms, like subroutines, methods, or classes, to avoid the code duplication. If an expression is particularly long, think whether putting some of it into a separate function can increase its readability.
*Use tools. There are tools, like FindBugs, that can locate suboptimal or downright wrong code constructs. Make it a habit to have your code pass cleanly through these tools, and also from your compiler's highest warning setting.
*Have your code reviewed. Find a mentor and have him or her review your code. Be ready to accept criticism and learn from this experience. Later repay this favor to the community by acting as a mentor.
A: A good method to learn is to see what has changed in the language specifications and try them out yourself with small programs. Search some examples, try them, change them and see the results. There will be a time when you do some "real" work when you'll remember that stuff and think "that might actually help here"
A: There are no magic tricks or secret ninja methods. If you want to be a good programmer, work. Work a lot and hard.
Reading books will not make you a professional if you don't use the new knowledge in practice. Don't worry if you don't know all the nifty features of .NET X.Y.Z. Work hard, try to solve different problems, ask your boss to give you different tasks, and you will succeed. It's hard, but it's the only way to go. Work, plus learning in your free time, will make you a professional.
But don't rush; remember that professionalism comes at a price - you can't be proficient in many different fields of work at once. Choose a technology that you like and that can earn you money, and go along with it. You will feel it when the time for change comes.
A: Read, read and when you're done reading, read some more. Reading also helps.
But seriously, sign up to relevant mailing lists and RSS feeds so that you can be updated as things happen.
A: 1) I try to get involved with my local user groups; for C# that would be a Microsoft Technical User Group:
http://www.microsoft.com/communities/usergroups/default.mspx
They are usually a bunch of like minded individuals who want to learn about the new features in certain tools.
Microsoft are generally very good at helping to fund these groups and talks and seminars are held frequently. Often with the developer who created the tools you want to learn more about.
2) Get some RSS feeds/newsletters from C# sites such as C# Corner or Channel 9.
They are usually the first places where new features are announced and discussed.
3) Oh and as mentioned by others, read a LOT and try stuff out. It's not easy to keep up with the new features but read about them, try them out on small little stand alone projects and have fun with them.
I don't know about you but I derive great satisfaction from getting something new and cool to work.
As the Pragmatic programmers would say, improve your tool belt all the time.
A: Read good code.
Pick an open source project you support. Start going through it on a regular basis, learning how it works by actually reading the code.
A: The only way to learn to code is to code... you become a better coder by observing people better than you.
Don't worry too much about new features in a language; be aware of them, sure, but concentrate on the core language.
A: It is useful to keep up with technologies, but even more useful to learn timeless skills that will apply whatever development tools you use.
To that end, I recommend reading Code Complete, and then some of the other classic programming books.
The other thing is just to keep on coding. My experience is that you'll pick up specific technologies as and when you need them. Sometimes you'll do this by looking at other people's code, sometimes by reading an interesting article or book, sometimes by going on a course. But however you do it, you'll find the tools you need when you need them.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: MSMQ samples in C++? Can someone give me some working examples of how you can create, add messages, read from, and destroy a private message queue from C++ APIs? I tried the MSDN pieces of code but i can't make them work properly.
Thanks
A: Actually this is the code I was interested in:
#include "windows.h"
#include "mq.h"
#include "tchar.h"
HRESULT CreateMSMQQueue(
LPWSTR wszPathName,
PSECURITY_DESCRIPTOR pSecurityDescriptor,
LPWSTR wszOutFormatName,
DWORD *pdwOutFormatNameLength
)
{
// Define the maximum number of queue properties.
const int NUMBEROFPROPERTIES = 2;
// Define a queue property structure and the structures needed to initialize it.
MQQUEUEPROPS QueueProps;
MQPROPVARIANT aQueuePropVar[NUMBEROFPROPERTIES];
QUEUEPROPID aQueuePropId[NUMBEROFPROPERTIES];
HRESULT aQueueStatus[NUMBEROFPROPERTIES];
HRESULT hr = MQ_OK;
// Validate the input parameters.
if (wszPathName == NULL || wszOutFormatName == NULL || pdwOutFormatNameLength == NULL)
{
return MQ_ERROR_INVALID_PARAMETER;
}
DWORD cPropId = 0;
aQueuePropId[cPropId] = PROPID_Q_PATHNAME;
aQueuePropVar[cPropId].vt = VT_LPWSTR;
aQueuePropVar[cPropId].pwszVal = wszPathName;
cPropId++;
WCHAR wszLabel[MQ_MAX_Q_LABEL_LEN] = L"Test Queue";
aQueuePropId[cPropId] = PROPID_Q_LABEL;
aQueuePropVar[cPropId].vt = VT_LPWSTR;
aQueuePropVar[cPropId].pwszVal = wszLabel;
cPropId++;
QueueProps.cProp = cPropId; // Number of properties
QueueProps.aPropID = aQueuePropId; // IDs of the queue properties
QueueProps.aPropVar = aQueuePropVar; // Values of the queue properties
QueueProps.aStatus = aQueueStatus; // Pointer to the return status
WCHAR wszFormatNameBuffer[256];
DWORD dwFormatNameBufferLength = sizeof(wszFormatNameBuffer)/sizeof(wszFormatNameBuffer[0]);
hr = MQCreateQueue(pSecurityDescriptor, // Security descriptor
&QueueProps, // Address of queue property structure
wszFormatNameBuffer, // Pointer to format name buffer
&dwFormatNameBufferLength); // Pointer to receive the queue's format name length
if (hr == MQ_OK || hr == MQ_INFORMATION_PROPERTY)
{
if (*pdwOutFormatNameLength >= dwFormatNameBufferLength)
{
wcsncpy_s(wszOutFormatName, *pdwOutFormatNameLength - 1, wszFormatNameBuffer, _TRUNCATE);
wszOutFormatName[*pdwOutFormatNameLength - 1] = L'\0';
*pdwOutFormatNameLength = dwFormatNameBufferLength;
}
else
{
wprintf(L"The queue was created, but its format name cannot be returned.\n");
}
}
return hr;
}
This presumably creates a queue... but there are some parts missing for this to work, which is why I need a simple example that works.
A: Not quite sure how you'd go about creating or destroying message queues. Windows should create one per thread.
If you're using MFC, any CWinThread and CWnd derived class has a message queue that's trivial to access (using PostMessage or PostThreadMessage and the ON_COMMAND macro). To do something similar with the windows API, I think you need to write your own message pump, something like CWinApp's run method.
MSG msg;
BOOL bRet;
while( (bRet = GetMessage( &msg, NULL, 0, 0 )) != 0)
{
if (bRet == -1)
{
// handle the error and possibly exit
}
else
{
TranslateMessage(&msg);
DispatchMessage(&msg);
}
}
...is the example from the MSDN documentation. Is this what you're using? What doesn't work?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119869",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Are the values of pi and e available in the .Net framework? Without calculating them, I mean?
A: System.Math.PI;
System.Math.E;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Performance improvements moving from g++/gcc 3.2.3 to 4.2.4 We have been looking at g++ versions 3.2.3 and 4.2.4. With 4.2.4, the performance improvements on some of our code base is significant.
I've tried searching the gcc buzilla database to find hints as to what bugs may have had such a dramatic improvement but I didn't find any individual bug that stood out as being a candidate.
Are the improvements the result of many small changes that have slowly had an effect? Or was there, say, a top-5 set of improvements that made the difference?
For some background, our code base does make good use of STL containers and algorithms, as well as C++ features such as the 'inline' keyword.
A: In my experience, 3.4 is where the performance basically peaked; 4.2 is actually slower than 3.4 on my project, with 4.3 being the first to roughly equal 3.4's performance. 4.4 is slightly faster than 3.4.
There are a few specific cases I've found where older versions of gcc did some unbelievably bad things in code--there was a particular function that went from 128 to 21 clocks from 3.4 to 4.3, but that was obviously a special case (it was a short loop where the addition of just a few unnecessary instructions massively hurt performance).
I personally use 3.4 just because it compiles so much faster, making testing much quicker. I also try to avoid the latest versions because they seem to have nasty habits of miscompiling code; -march=core2 on recent gcc versions causes segfaults in my program, for example, because it emits autovectorized code that tries to perform aligned accesses on unaligned addresses.
Overall though the differences are rarely large; 3-5% is the absolute most I've seen in terms of performance change.
Now, note this is C; things may be different in C++.
A: I believe the optimizer was completely reworked in the gcc4 series. See this page, for instance, about vectorization:
http://gcc.gnu.org/projects/tree-ssa/vectorization.html
For info, I once did a benchmark of c[i] = a[i] + b[i] with dynamic arrays, static arrays and std::vector and it was the std::vector that was the fastest (w/ gcc 4.1). 30% difference in performance.
A: Streams were very slow in 3.3 and got much faster in 3.4. (message on gcc mailing list)
I bet other things improved too.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Duplicate result I am writing a query in SQL Server 2005. It is returning duplicate rows in the result. Can I eliminate this duplication using a particular column as the key?
A: You can eliminate complete duplicate rows using the DISTINCT keyword. If there is some key column that is a duplicate but the rest of the columns are not, then you would have to use aggregate functions and a GROUP BY clause to explain to SQL Server what data you do want returned.
A: SELECT DISTINCT will eliminate duplicate rows.
A: You can use SELECT DISTINCT to eliminate duplicates, as has been advised in other comments, and it may work well enough for now, but you may be begging for future trouble. All too frequently, if you cannot get a unique result without SELECT DISTINCT, your database model has been denormalized too far, and your queries can get bogged down by retrieving and then eliminating a large number of duplicates. (However, of course, normalizing an existing database schema is rarely trivial...)
Without more information on the schema and the query it's impossible to tell whether SELECT DISTINCT is an acceptable workaround, or whether there simply is a better join statement.
A: It's possible that your result contains both an associative and a numeric array.
A: If you just want one column:
SELECT DISTINCT MyColumn FROM MyTable;
If you want a variety of columns, possibly returning a maximum or sum, try grouping it:
SELECT MyFirstColumn, MySecondColumn, MAX(SomeDate) AS MaxDate, SUM(Amount) AS TotalAmount
FROM MyTable
GROUP BY MyFirstColumn, MySecondColumn;
A: As you have pointed out in the comments, you are using a stored procedure. One way to handle this case is to create a temporary table which you populate using the stored procedure. Then select the results from the temporary table and prune the duplicates with either SELECT DISTINCT or GROUP BY.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: j_security_check and JAAS I have been given the task of implementing a login handler. The only details the handler captures are username and password. Initially I was going to use a JSP that posted to a servlet, where the servlet carried out a DB lookup and validated the user credentials. Upon successful login you are redirected, but an unsuccessful attempt takes you back to the JSP and displays appropriate error messages.
However upon doing some research I discovered j_security_check and JAAS and am not sure which to use or whether to at all.
What benefits will I get from either, and which would be most suitable for my task?
A: Security is composed of following aspects:
*
*Authentication
*Authorization
*Transport layer security - Encryption
Authentication: - this consists of checking the credentials of the user; most of the time this is implemented through a login mechanism. Your task of creating a login page is part of authentication.
Authorization: - application resources need to be protected from unauthorized access. That means whenever a user requests a protected resource, the application needs to ensure that the user has the appropriate access rights. This is generally done by assigning roles to the user and putting in place request filters that verify the user's access rights. This part is more critical and requires detailed design analysis. Just authenticating the user is not enough; you need to ensure that protected resources are not accessed by users who are not authorized for them.
Transport layer security: - the system architecture needs to ensure that data being transferred over the network does not fall into the hands of hackers or sniffers. SSL/TLS is used for achieving this.
J2EE containers and frameworks like Spring Security provide common functionality for each of these security aspects.
What you are trying to develop is a simple authentication mechanism. Application security is more demanding when it comes to access control, i.e. authorization.
Also, security needs to be scalable, i.e. as business needs change towards integrating systems and security, your system should be able to adapt to things like Single Sign-On (SSO), LDAP authentication, etc.
Though JAAS and container security are good enough for scaling, there are a few restrictions. For example, you would need to depend on vendor-specific configurations and adapters: your application declares its security needs in deployment descriptors, and server administrators need to configure security realms on the server end.
I would recommend that you evaluate the Spring Security (previously Acegi Security) framework. We have been using it in many of our projects and found it to be robust, customizable and easy to implement. It comes with a set of filters that intercept your requests and provide access control. The framework can be used to validate users against various user repositories such as databases, LDAP servers, OS security, etc. It is extensible and can be integrated with SSO servers. It also provides useful tag libraries for controlling access to parts of JSP pages.
Not only that, this framework also provides method-level security that can be imposed at the class level through the Spring AOP framework.
A: Use what your container provides and don't implement your own database lookup to do this. When the container knows who is logged in, you can use roles to restrict access to certain pages. There are also different types of authentication.
Using JAAS will give you the flexibility to use another way of verifying the password (for example in active directory). Also single-sign-on could be implemented with this.
A: The simpler method should suffice unless you are doing really really sensitive stuff. Just remember the most important (and simple) bit: keep a password hash in the database, not the real password.
A: You may as well check out Spring Security framework.
A: JAAS takes the load off you and allows you (or the client) to change authentication methods just by dropping in another module. For example from DB auth to LDAP to Kerberos to NT Domain - you get the point.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Different framerate for loaded SWF file in Flex? Using the Loader class of Adobe Flex, I load an external SWF file. However, the loaded SWF file has a different frameRate than my application.
Is it possible to have both the parent app and the loaded child app playing at different framerates? If so, how?
A: It's not possible.
Flash Player or Adobe AIR only uses a single frame rate for all loaded SWF files at any one time, and this frame rate is determined by the nominal frame rate of the main SWF file
There are two ways around this: change the frame rate of the main SWF to match the loaded one (this can be done at runtime), or decouple the animation from actual frames and use events to step it forward.
A: If you decide to use events to drive your swf in order to approximate different frame rates I'd recommend using a tween engine like TweenLite/TweenMax.
It's free (as in beer) and I've used it very successfully for frame based tweening in the past.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Storing SAS data (including table structure) in a single flat file I need to convert SAS data tables into flat files (or "ASCII files" as they were called once, as opposed to binary files). And only one flat file for each original SAS table.
The challenging thing is that I want the flat file to contain some structural information of the original SAS table also, specifically:
*
*Variable/Column name
*Variable/Column label
*Variable/Column type
*Variable/Column length
*Variable/Column format
*Variable/Column informat
Additional information:
*
*I will only need to convert small data (< 100 obs).
*Performance is not an issue (within reasonable limits).
*The flat file should form a basis for recreating the original SAS table, I don't need to be able to use the file directly as a table in DATA or PROC steps.
The standard SAS tables, transport files, XPORT files, etc are all binary format files, and the standard XML table format in SAS and CSV-files don't preserve table structure. So obviously these options don't help.
What is my best option?
A: I'm not aware of any easy solutions.
Possibly:
*
*Use PROC EXPORT to produce CSV file with the data in it.
*Use PROC DATASETS with ODS to produce a dataset with the names, types, etc.
*Produce another CSV file for this dataset.
Now you've got your ASCII description of the table (spread over two CSV files). Reversing the process would be more tricky. Basically you'd have to read in the description data set, then use CALL SYMPUT in a loop to create a bunch of macro variables with the information in them, then use your macro variables to build a PROC IMPORT for the CSV file...
A: *
*Create the code to export the table to text (this is straightforward, just google it or look at 'The Little SAS Book' if you have a copy).
*Then append the 'meta' info from sashelp.vcolumn, which is where sas stores information (meta data) about sas datasets. It's a sas table itself, so you could do a proc sql union operation to join it with the actual columns that this table describes (though you will need to do a transpose type operation because the meta data about the columns is in rows, not columns).
You're not being completely specific about how you want to see the meta data in the text file, so that's as far as I can go.
A: proc sql's describe syntax might be handy to get the metadata portion, including lengths, types, formats, indexes etc...
Code:
proc sql;
describe table sashelp.class;
quit;
Log:
NOTE: SQL table SASHELP.CLASS was created like:
create table SASHELP.CLASS( bufsize=4096 )
(
Name char(8),
Sex char(1),
Age num,
Height num,
Weight num
);
A: With SAS 9.2, you can create an XML file from a data set and the XML contains variable/column metadata, like format, label, etc... See the section of the SAS 9.2 XML LIBNAME Engine: User's Guide titled "Using the XML Engine to Transport SAS Data Sets across Operating Environments". A link to it is here:
http://support.sas.com/documentation/cdl/en/engxml/61740/HTML/default/a002594382.htm
Here's a section of code from the manual that shows using the XML92 libname engine and PROC COPY to create the XML:
libname myfiles 'SAS-library';
libname trans xml92 'XML-document' xmltype=export;
proc copy in=myfiles out=trans;
select class;
run;
In SAS 9.1.3, you may have to create a custom tagset to get the same operation. SAS Technical Support (support@sas.com) may be able to offer some help.
A: BTW - you haven't said why you need to do this. In this case, there is no good reason (there might be a compelling reason, such as somebody with power
saying 'do it, or be fired', but there's no good reason).
I'd give up the idea of merging the metadata and data in each file, unless there's some incredibly strong reason to do so. Go with exporting the metadata for data set A into a file called metadata_A; this will result in paired files. Anybody looking to use those files in a database program or statistical program would have a clearly-labeled metadata file to work with.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Dynamically create variables inside function I want to create variables inside a function from a dictionary.
Let's say I have a dictionary, bar:
bar = {
'a': 1,
'b': 2,
'c': 3
}
That goes to:
def foo():
a = 1
b = 2
c = 3
I want to make new variables with the variable names as bar's keys (a, b, c), then set the values of the variables to the value of the corresponding key.
So, in the end, it should be similar to:
bar = {
k: v
}
# --->
def foo():
k = v
Is it possible? And if so, how?
A: Your question is not clear.
If you want to "set" said variables when foo is not running, no, you can't. There is no frame object yet to "set" the local variables in.
If you want to do that in the function body, you shouldn't (check the python documentation for locals()).
However, you could do a foo.__dict__.update(bar), and then you could access those variables even from inside the function as foo.a, foo.b and foo.c. The question is: why do you want to do that, and why isn't a class more suitable for your purposes?
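A minimal sketch of that attribute approach, reusing the bar dict from the question:
bar = {'a': 1, 'b': 2, 'c': 3}
def foo():
    # the function reads attributes of itself instead of local variables
    return foo.a + foo.b + foo.c
foo.__dict__.update(bar)
assert foo() == 6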
A: From your comment, perhaps what you're really looking for is something like a bunch object:
class Bunch(object):
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
b=Bunch(**form.cleaned_data)
print b.first_name, b.last_name
(The ** syntax is because Bunch-type objects are usually used like Bunch(foo=12, bar='blah') - not used in your case but I've left it for consistency with normal usage)
This does require a "b." prefix to access your variables, but if you think about it, this is no bad thing. Consider what would happen if someone crafted a POST request to overwrite variables you aren't expecting to be overwritten - it makes it easy to produce crashes and DOS attacks, and could easily introduce more serious security vulnerabilities.
A: Why would you want to do such a thing? Unless you actually do anything with the variables inside the function, a function that just assigns several variables and then discards them is indistinguishable from def foo(): pass (an optimiser would be justified in generating exactly the same bytecode).
If you also want to dynamically append code that uses the values, then you could do this by using exec (though unless this is really user-input code, there are almost certainly better ways to do what you want). eg:
some_code = ' return a+b+c'
exec "def foo():\n " + '\n '.join('%s = %s' for k,v in bar.items()) + '\n' + some_code
(Note that your code must be indented to the same level.)
On the other hand, if you want to actually assign these values to the function object (so you can do foo.a and get 1 - note that your sample code doesn't do this), you can do this by:
for key, val in bar.items():
setattr(foo, key, val)
A: Thanks guys, I got the point. I should not do such thing. But if your curios what I tried to do is to somehow short number of lines in my view function in django. I have form with many fields, and instead of receive every field in form of:
first_name = form.cleaned_data['first_name']
last_name = form.cleaned_data['last_name'] ..
I was thinking of taking every attribute name of my form class and looping over it. Like so:
for name in ProfileRegistration.base_fields.__dict__['keyOrder']:
# and here the variables that i tried to assign
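In the end, something along these lines is probably what I should do instead (a rough sketch; Profile here is just a placeholder model):
def register(request):
    form = ProfileRegistration(request.POST)
    if form.is_valid():
        data = form.cleaned_data      # already a plain dict of field -> value
        profile = Profile(**data)     # placeholder model; ** unpacks the dict
        profile.save()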
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119941",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Using Flash Component SWC file in Flex I am accessing custom UIComponent via SWC file from Flex 3.
This component works OK in Flash CS3, but using it from Flex gives a weird error in draw().
I have added the SWC component inside a Sprite (with addChild) and it is in the library path.
TypeError: Error #1010: A term is undefined and has no properties.
at com.xxxx.highscores::HighScores/draw()
at fl.core::UIComponent/callLaterDispatcher()
Here is the draw() function of this UI Component:
override protected function draw():void {
isInitializing = false;
page.Text.x = width / 2;
page.Text.y = height / 2;
drawBackground();
}
A: With only that code, it must be that either page or page.Text is null.
Going by the names, I would guess page is a Flash library object you create with AS? If so, I would guess a previous error is firing before it is created and being swallowed by the player (this can happen if the debugger has not attached yet, or there are problems loading shared libraries). 'stage' not being set for a new display object until it's added to the display list is a common one.
EDIT: It's a bug in the component: draw() always uses the highScoresModuleText property on page, which is only set when the page is a HighScoresTextPage and not any of the other pages, e.g. HighScoresTablePage, which showHighScores() sets it to. This works in Flash presumably because the object is on the stage, or at least gets created before showHighScores() is called, so draw() gets called first, and since the component does not invalidate, it is not called again afterwards.
The correct method in this case is to have show*() just set some properties, then invalidate() to have draw() figure it out later, but a quick fix is to just add 'if (page.highScoresModuleText)' around the offending lines in draw(). An even quicker fix is to create and addChild() the component early (like startup), and call showHighScores() much later.
This works for me:
package
{
import flash.display.Sprite;
import com.novelgames.flashgames.highscores.HighScores;
import flash.events.MouseEvent;
public class As3_scratch extends Sprite
{
private var highscore : HighScores;
public function As3_scratch()
{
highscore = new HighScores();
addChild(highscore);
stage.addEventListener(MouseEvent.CLICK, onClick);
}
private function onClick(event : MouseEvent) : void
{
highscore.showEnterHighScore(50);
}
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: When to use test scripts over unit testing? I am currently working on a project that has been in production for over two years. The project makes extensive use of unit testing and scripted UI tests. Initially, unit tests covered the system framework, business rules and state transitions (or workflow). Test scripts are used for black-box testing. However, over time the cost of maintaining our full set of unit tests has grown considerably, especially for those relating to state.
After a bit of investigation we have found that test scripts are more effective (that is, provide better coverage) and are cheaper to maintain than the unit tests relating to workflow. This isn't to say the value of unit tests has been completely negated, but it does raise the question of whether some classes of unit tests can be dropped in favour of test scripts.
Our project is run on an iterative incremental model.
A: One of the answers on SO to the question of 'the limitations of unit testing' was that unit testing becomes convoluted when it's used to test anything to do with INTEGRATION rather than function. Connecting to and using external services (a database, SSH'ing to another server, etc.) and user interfaces were two of the examples used.
It's not that you CAN'T use unit testing for these things, it's just that the difficulty involved in covering all the bases makes this method of testing not worth it except in cases where reliability is paramount.
I use "script tests" for all my custom JavaScript UI code (template engine, effects and animations, etc.) and I find it quick and reliable if done right.
A: You normally use Unit Tests to do exactly this: test units. More exactly, to test if a unit complies to its interface specification/contract. If a unit test fails, you know exactly where the problem is: it is within the tested unit. This makes it easier to debug, especially since the result of a unit test can be processed automatically. Automatic regression tests come to mind here.
You start using scripted UI tests if you either want to leave the scope of unit tests or want to test things that cannot be tested well with unit tests. E.g. when you test code that interfaces with lots of external APIs that you cannot mock. Now you can provoke certain errors but tracking down where exactly in the code the failure is buried is much harder.
A: Actually, there are four levels of testing; three of them might involve scripts, the first one does not.
*
*unit testing: test a class or method in complete isolation of the rest of the system
*assembly testing: test a scenario within the system in complete isolation from other components external to the system (that is from a different functional domain)
*integration testing: test the system including inputs coming from external part, and outputs going to external other system (that is from other functional domains).
*acceptance testing: final validations, as Gishu says, to check that the right code (that is, the right features) is there.
Example of functional domain: Service Layer Bus, that is all projects (usually encapsulating some core referential databases) able to expose their services on a bus.
You may do:
*
*unit tests for your publisher class
*assembly tests for your publisher mechanism in collaboration with other components of your SLB
*integration tests for your SLB service and other components developed outside of the SLB that are clients of your services
*acceptance tests for the whole system.
As said, the last 3 kinds of tests can involve heavy scripting and can quickly cover more of your code. Depending on the sheer number of classes/methods to unit-test, a good assembly test can be a better approach.
A: In more ways than one I have experienced the same kind of pain that you have vis-a-vis unit tests, especially in projects wherein the members are not at all enthusiastic about unit testing and many of them simply ignore or comment-out tests to be able to cheat source control, save time, etc. One former colleague even coined the term "Deadline Driven Development" for this.
In my opinion when facing this kind of challenge, the following are some guidelines vis-a-vis unit testing:
*
*Discard obsolete tests - Sometimes it is pointless to try to update hundreds to thousands of lines of tests if they are, in essence, inaccurate or irrelevant. Discard such tests immediately. Do not "Ignore" them, do not comment them out. Delete them completely.
*Write tests for new functionality - Any new functionality still needs to be unit tested and written in a unit-testable manner.
*Write tests for bug fixes - When regression testing the application, it might be relevant to ensure that bug fixes have unit tests that ensure that the bug has been fixed.
*To hell with code coverage - This might earn a few downvotes, I'm sure, but there is a fine line between ensuring functionality and using tests as an excuse to delay work. The focus should be on ensuring core functionality rather than following any arbitrary code coverage percentage.
That being said, I still think that unit testing should not be discarded completely. Test scripts and unit tests have their own purposes. But a balance should be struck between an over-zealous attempt to maintain TDD and facing the realities of enterprise application development.
A: Two different things
*
*Unit Tests - Developer - verify if the code is right
*Acceptance Tests - Customer/QA/BA - verify that the right code is developed.
The two categories should be distinct and both play an equally important role.. Dropping one doesn't bode well. Test scripts as you have mentioned fall into the second category. I would recommend something like FIT / Fitnesse for this purpose. If that is not feasible, then test-scripts / record-replay style tools. But don't throw away good unit tests.. what do you mean by 'cost of maintaining tests has become expensive'?
A: My assumption is that the maintenance effort for your unit tests increases because the architecture of your application was allowed to fall apart. Since nobody but you really knows what's in your code, you might want to apply the five whys method to decide what your real, essential, root problem is. IME unit tests should never be costly to maintain, as long as you employ a highly decoupled, interfaces-based architecture.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: How to export fonts and colors from VS2008 to VS2005? I have been forced to work in Visual Studio 2005 and would like to export my fonts and colors from Visual Studio 2008. However, VS2005 complains about wrong export document version. Is there any good way to do this besides manually changing each color and font?
A: I think there's a tag in the exported file called "applicationIdentity" that is set to a value of "9.0" by 2008, change it to "8.0" and the file should import. I don't recall it causing any problems from settings that are 2k8 specific, but take a backup of your settings first!
UPDATE: Just looked in an export file and there is indeed a '<ApplicationIdentity version="8.0"/>' in a 2k5 file so it should be "9.0" in the 2k8 file (I haven't VS2k8 on my PC here to verify this beyond a doubt).
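For reference, here is roughly what that element looks like in the exported .vssettings file (everything around it can stay exactly as exported):
<!-- change this... -->
<ApplicationIdentity version="9.0"/>
<!-- ...to this so that VS2005 will accept the import -->
<ApplicationIdentity version="8.0"/>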
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Set the select tag's size attribute through css? Normally you can do this:
<select size="3">
<option>blah</option>
<option>blah</option>
<option>blah</option>
</select>
And it would render as a selection box where all three options are visible (without dropping down).
I'm looking for a way to set this size attribute from css.
A: There isn't an option for setting the size, but if you do set the size attribute, some browsers will let you set the width/height properties to whatever you want via CSS.
Some = Firefox, Chrome, Safari, Opera.
Not much works in IE though (no surprise)
You could though, if you wanted, use CSS expressions in IE, to check if the size attribute is set, and if so, run JS to (re)set it to the size you want... e.g.
size = options.length;
A: I don't think that this is possible.
CSS properties are very generic (applicable to any element) and are unable to alter the functionality of an element in any way (only the looks).
The size attribute changes the functionality of the element, at least in the case of size = 1 versus size != 1.
A: I disagree, I have been able to set select width and height using CSS with something like this:
select { width: 5em; height: 5em; }
Works in IE(7) as well as I just tested it.
A: Try this:
select[attr=size] {
width:auto;
height:auto;
}
A: You can use following CSS selector to style select items with the size attribute:
select[size]{
/*your style here */
}
A: In general you can't add attributes set within html via CSS. You can add to the html using the pseudo selectors :before and :after, but that's about it, plus they still aren't compatible across all browsers. If you really need to do this, I think you'd have to resort to Javascript.
A: I do not believe this is possible no
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119961",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
}
|
Q: JavaScript Chart Library Would anyone recommend a particular JavaScript charting library - specifically one that doesn't use flash at all?
A: If you're using jQuery I've found flot to be very good - try out the examples to see if they suit your needs, but I've found them to do most of what I need for my current project.
Additionally ExtJS 4.0 has introduced a great set of charts - very powerful, and is designed to work with live data.
A: Check out http://www.highcharts.com !
Highcharts is a charting library written in pure JavaScript, offering an easy way of adding interactive charts to your web site or web application. Highcharts currently supports line, spline, area, areaspline, column, bar, pie and scatter chart types.
A: As some kind of late answer, try d3.js
http://mbostock.github.com/d3/
It's the continuation of protovis.
The big difference to flot is in the number of features supported.
Though flot may be simpler, d3.js is definitely more powerful.
A: It may not be exactly what you are looking for, but
Google's Chart API is pretty cool and easy to use.
A: Flotr is another, pure Javascript chart-library based on Prototype and inspired by Flot
A: Try PlotKit
A: I'd recommend gRaphaël for pure JavaScript charting along with the pure JavaScript vector graphics library it's built on (Raphaël).
gRaphaël currently supports Firefox 3.0+, Safari 3.0+, Opera 9.5+ and Internet Explorer 6.0+.
A: *
*a framework: http://www.simile-widgets.org/
*a basic one: http://www.filamentgroup.com/examples/charting_v2/index_2.php
*a good-looking one: http://www.highcharts.com/
A: Another is RGraph: Javascript charts and graph library:
http://www.rgraph.net
Canvas based, so it's fast, and there are roughly 20 different chart types. It's free for non-commercial use too!
A: Try the MIT simile timeline which could be made into a chart - http://simile.mit.edu/timeline/
or the final one, http://code.google.com/p/gchart/
A: My favourite (flot) has already been mentioned.
But be sure to investigate Ortho.
It is excellent for tree charts and timelines.
A: There is a lot of activity in the dojo charting library, and what is great is that I am using it inside an AIR application without problems too - pretty cool!
See for example there http://www.sitepen.com/blog/2008/05/27/dojo-charting-event-support-has-landed/
A: Check out Google Visualization API, which is kind of a generalization of the simpler Chart API
A: http://code.google.com/apis/visualization/documentation/gallery.html
Has very cool interactive options including maps, gauges, and charts.
A: We just bought a license of TechOctave Charts Suite for our new startup. I highly recommend them. Licensing is simple. Charts look great! It was easy to get started and has a powerful API for when we need it. I was shocked by how clean and extensible the code is. Really happy with our choice.
A: There is a growing number of Open Source and commercial solutions for pure JavaScript charting that do not require Flash. In this response I will only present Open Source options.
There are 2 main classes of JavaScript solutions for graphics that do not require Flash:
*
*Canvas-based, rendered in IE using ExplorerCanvas that in turns relies on VML
*SVG on standard-based browsers, rendered as VML in IE
There are pros and cons to both approaches, but for a charting library I would recommend the latter because it is well integrated with the DOM, allowing you to manipulate chart elements with the DOM and, most importantly, to set DOM events. By contrast, Canvas charting libraries must reinvent the DOM wheel to manage events. So unless you intend to build static graphs with no event handling, SVG/VML solutions should be better.
For SVG/VML solutions there are many options, including:
*
*Dojox Charting, good if you use the Dojo toolkit already
*Raphael-based solutions
Raphael is a very active, well maintained, and mature, open-source graphic library with very good cross-browser support including IE 6 to 8, Firefox, Opera, Safari, Chrome, and Konqueror. Raphael does not depend on any JavaScript framework and therefore can be used with Prototype, jQuery, Dojo, Mootools, etc...
There are a number of charting libraries based on Raphael, including (but not limited to):
*
*gRaphael, an extension of the Raphael graphic library
*Ico, with an intuitive API based on a single function call to create complex charts
Disclosure: I am the developer of one of the Ico forks on github.
A: There is another javascript library based on SVG. It is called Protovis and it comes from Stanford Visualization Group
It also allows making nice interactive graphics and visualizations.
http://vis.stanford.edu/protovis/ex/
Although it is only for modern web browsers
UPDATE: The protovis team has moved to another library called d3.js (Data Driven Documents) as they said:
"The Protovis team is now developing a new visualization library, D3.js, with improved support for animation and interaction. D3 builds on many of the concepts in Protovis"
The new library can now be found in:
http://mbostock.github.com/d3/
UPDATE 2:
"Rickshaw" is a JavaScript toolkit for creating interactive time series graphs. Based on d3.js that simplifies a lot the work with d3.js although is a little bit less powerful.
http://code.shutterstock.com/rickshaw/
A: jqPlot is great. If your requirements are fairly "normal" and you just want to draw some charts, you're probably overwhelmed by the quantity of js charting options. Assuming you don't want to do hours of research, just go with jqPlot as it's probably your best bet. It covers most use cases for most people well. Some of the alternatives are specialised on a certain type of chart or built with a certain use case in mind.
A: I was recently looking for a javascript charting library and I evaluated a whole bunch before finally settling on jqplot, which fit my requirements very well. As Jean Vincent's answer mentioned, you are really choosing between a canvas-based and an SVG-based solution.
To my mind the major pros and cons were as follows. The SVG-based solutions like Raphael (and offshoots) are great if you want to construct highly dynamic/interactive charts, or if your charting requirements are very much outside the norm (e.g. you want to create some sort of hybrid chart or you've come up with a new visualization that no-one else has thought of yet). The downside is the learning curve and the amount of code you will have to write. You won't be banging out charts in a few minutes; be prepared to invest some real learning time and then to write a goodly amount of code to produce a relatively simple chart.
If your charting requirements are reasonably standard, e.g. you want some line or bar graphs or perhaps a pie chart or two, with limited interactivity, then it is worth looking at canvas based solutions. There will be hardly any learning curve, you'll be able to get basic charts going within a few minutes, you won't need to write a lot of code, a few lines of basic javascript/jquery will be all you need. Of course you will only be able to produce the specific types of charts that the library supports, usually limited to various flavors of line, bar, pie. The interactivity choices will be extremely limited, that is to say non-existent for many of the libraries out there, although some limited hover effects are possible with the better ones.
I went with JQplot which is a canvas based solution since I only really needed some standard types of charts. From my research and playing around with the various choices I found it to be reasonably full-featured (if you're only after the standard charts) and extremely easy to use, so I would recommend it if your requirements are similar.
To summarize, simple and want charts now, then go with JQplot. Complex/different and not pressed for time then go with Raphael and friends.
A: Not a Javascript library but it may be a suitable alternative - check out Google Charts where you can generate charts by passing querystring data to their web service.
A: Take a look at Bluff. It's a JavaScript port of the Gruff graphing library for Ruby.
A: Protochart is all you need
A: Sencha acquired Raphael and now their charts are pure javascript as of version 4. Emprise and HighCharts mentioned above are my two favorites.
http://www.sencha.com/
A: For the more unusual charts: http://thejit.org/
A: Check out ZingChart HTML5 Canvas, SVG, VML and Flash Charts. Very powerful and compatible library. I'm on the Zing team - mention us on twitter @zingchart or shoot any questions to support@zingchart.com.
A: I can recommend ArcadiaCharts. A brand-new professional charting library for JavaScript and GWT. Runs in all browsers without plugins. Easy and fast to use: creates great looking charts with just a few lines of code.
Free for non-commercial use.
A: Fusion charts has a new javascript/jquery library that looks promising.
A: In case what you need is bar charts only: I published some code I've been using in an old project. Someone told me the VML implementation is broken on recent versions of IE, but the SVG should work just fine. I might get back to the project and release some server-side renderers I already have, and maybe a WebGL rendering layer. Here's the link: http://blog.conquex.com/?p=64
A: Probably not what the OP is looking for, but since this question has become a list of JS charting library options: jQuery Sparklines is really cool.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "223"
}
|
Q: Selenium RC against a Cassini webserver I'm trying to run Selenium RC against my ASP.NET code running on a Cassini webserver.
The web application works when I browse it directly, but when running through Selenium I get
HTTP ERROR: 403
Forbidden for Proxy
Running Selenium in interactive mode, I start a new session with:
cmd=getNewBrowserSession&1=*iexplore&2=http://localhost:81/
cmd=open&1=http://localhost:81/default.aspx&sessionId=199578
I get the above error in the Selenium browser; the command window tells me OK.
Any input?
A: I think the problem is that both Selenium and the webserver are running on localhost.
It works if I run with the "iehta" instead of "iexplore".
A: Your Selenium server and web server should run of different ports.
A: I am not sure if this is part of the issue, but Cassini cannot be accessed from another machine. It is meant for local development only. I ran into this problem today and am trying UltiDev (a Cassini wrapper) to get around it: http://www.ultidev.com/products/Cassini/index.htm
A: Have you tried running RC with the -proxyInjection flag?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Detect via javascript whether Silverlight is installed Is there a javascript function I can use to detect whether a specific silverlight version is installed in the current browser?
I'm particularly interested in the Silverlight 2 Beta 2 version. I don't want to use the default method of having an image behind the silverlight control which is just shown if the Silverlight plugin doesn't load.
Edit: From link provided in accepted answer:
Include Silverlight.js (from Silverlight SDK)
Silverlight.isInstalled("2.0");
A: Please actually use the latest script available at http://code.msdn.microsoft.com/silverlightjs for the latest updates. This has several fixes in it.
A: Include Silverlight.js (from Silverlight SDK)
Silverlight.isInstalled("4.0")
Resource:
http://msdn.microsoft.com/en-us/library/cc265155(vs.95).aspx
A: var hasSilverlight = Boolean(window.Silverlight);
var hasSilverlight2 = hasSilverlight && Silverlight.isInstalled('2.0');
Etc....
A: Download this script: http://code.msdn.microsoft.com/silverlightjs
And then you can use it like so:
if (Silverlight.isInstalled)
{
alert ("Congrats. Your web browser is enabled with Silverlight Runtime");
}
A: if (Silverlight.isInstalled("1.0")) {
try {
alert("Silverlight Version 1.0 or above is installed");
}
catch (err) {
alert(err.Description);
}
}
else {
alert("No Silverlight is installed");
}
from this video.
Silverlight.isInstalled is always true, so a version string such as "1.0" must be provided to make it useful.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Should an index be optimised after incremental indexes in Lucene? We run full re-indexes every 7 days (i.e. creating the index from scratch) on our Lucene index and incremental indexes every 2 hours or so. Our index has around 700,000 documents and a full index takes around 17 hours (which isn't a problem).
When we do incremental indexes, we only index content that has changed in the past two hours, so it takes much less time - around half an hour. However, we've noticed that a lot of this time (maybe 10 minutes) is spent running the IndexWriter.optimize() method.
The LuceneFAQ mentions that:
The IndexWriter class supports an optimize() method that compacts the index database and speeds up queries. You may want to use this method after performing a complete indexing of your document set or after incremental updates of the index. If your incremental update adds documents frequently, you want to perform the optimization only once in a while to avoid the extra overhead of the optimization.
...but this doesn't seem to give any definition for what "frequently" means. Optimizing is CPU intensive and VERY IO-intensive, so we'd rather not be doing it if we can get away with it. How much is the hit of running queries on an un-optimized index (I'm thinking especially in terms of query performance after a full re-index compared to after 20 incremental indexes where, say, 50,000 documents have changed)? Should we be optimising after every incremental index or is the performance hit not worth it?
A: An optimize operation reads and writes the entire index, which is why it's so IO intensive!
The idea behind optimize operations is to re-combine all the various segments in the Lucene index into one single segment, which can greatly reduce query times as you don't have to open and search several files per query. If you're using the normal Lucene index file structure (rather than the combined structure), you get a new segment per commit operation; the same as your re-indexes I assume?
I think Matt has great advice and I'd second everything he says - be driven by the data you have. I would actually go a step further and only optmize a) when you need to and b) when you have low query volume.
As query performance is intimately tied to the number of segments in your index, a simple ls -1 index/segments_* | wc -l could be a useful indicator of when an optimization is really needed.
Alternatively, tracking the query performance and volume and kicking off an optimize when you reach unacceptably low performance with acceptably low volume would be a nicer solution.
A: In this mail, Otis Gospodnetic advises against using optimize if your index is seeing constant updates. It's from 2007, but calling optimize() is by its very nature an IO-heavy operation. You could consider using a more stepwise approach: a MergeScheduler.
A: Mat, since you seem to have a good idea how long your current process takes, I suggest that you remove the optimize() and measure the impact.
Do many of the documents change in those 2 hour windows? If only a small fraction (50,000/700,000 is about 7%) are incrementally re-indexed, then I don't think you are getting much value out of an optimize().
Some ideas:
*
*Don't do an incremental optimize() at all. My experience says you are not seeing a huge query improvement anyway.
*Do the optimize() daily instead of 2-hourly.
*Do the optimize() during low-volume times (which is what the javadoc says).
And make sure you take measurements. These kinds of changes can be a shot in the dark without them.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
}
|
Q: How to setup a shared ccache How can I setup a shared ccache without falling into a permissions problem?
I would like to run a nightly or CI build with latest changes and share all created binaries throughout the R&D using a large ccache repository.
A: The easiest solution: create a new group (e.g. "devel"), and make all developers members of it. Give read/write permissions to that group on the directory hierarchy where the cache is maintained. The developers will also need to fix their umask.
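A rough sketch of that setup (the group name and cache path below are just examples):
sudo groupadd devel
sudo usermod -a -G devel alice                             # repeat for each developer
sudo chgrp -R devel /var/cache/ccache
sudo chmod -R g+rwX /var/cache/ccache
sudo find /var/cache/ccache -type d -exec chmod g+s {} \;  # new files stay owned by the group
umask 002                                                  # in each developer's shell profile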
A: You might also take a look at Mozilla's sccache, which is a ccache-like tool that can store build artifacts in cloud storage (GCS/S3/Azure or redis/memcached).
A: See the newly written Sharing a cache section in ccache's manual. In essence, use the same CCACHE_DIR setting, set CCACHE_UMASK appropriately and consider using CCACHE_BASEDIR.
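In environment-variable terms that boils down to something like this in every developer's shell profile (the cache path is an example):
export CCACHE_DIR=/var/cache/ccache    # everyone points at the same shared cache
export CCACHE_UMASK=002                # keep newly written cache files group-writable
export CCACHE_BASEDIR=$HOME/src        # so differing absolute paths still produce cache hits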
A: If you also use the related distcc, then the permission problems would largely go away, as the compilations would be run under its control on whatever compile-farm hosts you set up.
You could also include the developers' desktop machines among the distcc hosts, though at the expense of some duplicated work where a file would potentially be compiled on more than one machine - it would never return an out-of-date compiled object file, though. It would also speed up day-to-day recompilations.
A: Please see xcache.
It's a cloud version of ccache that has been used at Alibaba with high efficiency.
If you are using ccache, it would be very easy to switch to xcache.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/119999",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: How can you get more information about processes when Linux runs out of memory? I had recently a problem with oom-killer starting to kill processes after some time. I could see that the memory was consumed, but by the time I got to the server it wasn't clear anymore what consumed it. Is there a good non-obvious place to get more info about oom-killer? E.g. detailed info about processes at the time of activation, detailed info about killed processes and reasons for the choice?
I'm looking for a specific place to find this information, a specific tool to gather it or some configuration to improve oom-killer reporting. I'm not looking for generic info about oom-killer. /var/log/messages by default will only contain a detailed report on the free/allocated memory, but not the specific processes it was allocated to.
A: You can check the messages log file to see which process got killed and some related information. As for the reasons:
... the ideal candidate for liquidation is a recently started, non privileged process which together with it's children uses lots of memory, has been nice'd, and does no raw I/O. Something like a nohup'd parallel kernel build (which is not a bad choice since all results are saved to disk and very little work is lost when a 'make' is terminated).
From here.
You can define some processes to be immune to the killer, adjust the swappiness parameter in case you have it set too low (which makes the killer trigger-happy) and check for the things listed here.
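For the "immune" part, a minimal sketch (the PID is an example; oom_adj is the older interface, oom_score_adj the newer one):
echo -17 > /proc/1234/oom_adj            # -17 disables OOM killing for this process on older kernels
echo -1000 > /proc/1234/oom_score_adj    # equivalent knob on newer kernels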
A: Typically you should get a message in /var/log/messages, with quite a large amount of detail relating to the process that was killed by the oom-killer.
A: This is not the exact answer to your question, but the malloc(3) man page on Linux has some information on how to turn off memory overcommit
echo 2 > /proc/sys/vm/overcommit_memory
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/120000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Load Excel data sheet to Oracle database I am looking for a free tool to load Excel data sheet into an Oracle database. I tried the Oracle SQL developer, but it keeps throwing a NullPointerException. Any ideas?
A: Another way to do Excel -> CSV -> Oracle is using External Tables, first introduced in 9i. External tables let you query a flat file as if it's a table. Behind the scenes Oracle is still using SQL*Loader. There's a solid tutorial here:
http://www.orafaq.com/node/848
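As a rough sketch of what that tutorial walks through (directory, table and column names here are placeholders):
CREATE DIRECTORY ext_dir AS '/path/to/csv/files';

CREATE TABLE sales_ext (
  quantity   NUMBER,
  item_name  VARCHAR2(100),
  price      NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('theFile.csv')
);

-- then query it like any other table
SELECT * FROM sales_ext;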
A: Oracle Application Express, which comes free with Oracle, includes a "Load Spreadsheet Data" utility under:
Utilities > Data Load/Unload > Load > Load Spreadsheet Data
You need to save the spreadsheet as a CSV file first.
A: Excel -> CSV -> Oracle
Save the Excel spreadsheet as file type 'CSV' (Comma-Separated Values).
Transfer the .csv file to the Oracle server.
Create the Oracle table, using the SQL CREATE TABLE statement to define the table's column lengths and types.
Use sqlload to load the .csv file into the Oracle table. Create a sqlload control file like this:
load data
infile theFile.csv
replace
into table theTable
fields terminated by ','
(x,y,z)
Invoke sqlload to read the .csv file into the new table, creating one row in the table for each line in the .csv file. This is done as a Unix command:
% sqlload userid=username/password control=<filename.ctl> log=<filename>.log
OR
If you just want a tool, use QuickLoad
A: If this is a one time process you may just want to copy and paste the data into a Microsoft access table and do an append query to the oracle table that you have setup through your odbc manager.
A:
There are different ways to load Excel/CSV data into an Oracle database. I list them below:
1. Use Toad. Toad gives a very flexible option to upload Excel files, and it provides a column mapping window as well. You will normally find the option under Tools -> Import. For details I can provide a full instruction manual.
2. Load into Microsoft Access first and then pass it on to Oracle from there.
Step 1: There is a tab in Access named "External Data" which lets you upload an Excel file into an Access database.
Step 2: Once the table is created, just right-click on the table and choose Export to ODBC DATABASE. It will ask for the Oracle database connection details.
It is free.
3. Use Oracle SQL*Loader. It's a utility which works with a data file and a control file; you need to write the configuration. It is used to load text (or any other) files that follow a consistent pattern.
Hope it helps. If required, I can share more details.
A: As you mention you are looking for a tool - you might like to check out this Oracle specific video - you can load data from any source -
http://youtu.be/shYiN2pnPbA
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/120001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Multiple Session Factories under Spring/Hibernate I have been given a requirement where I need to support multiple databases in the same instance, to support multi-tenancy. Each DB has an identical schema. The user logs into a specific database by choosing from a list, and all subsequent calls will go to that DB until they log out.
I want to hot swap the session factory inside a single HibernateDaoTemplate based on a parameter supplied by the client.
I can find lots of stuff on hot-swapping data sources (and all the transaction issues associated with that) but I want to hot swap session factories - retaining all the caching for each.
What's the easiest way to do this? Configure a HotSwappableTarget for the DaoTemplate? Can anyone point me to samples on how to do this?
A: If all the databases are identical, then I can suggest using a single SessionFactory and providing your own implementations for the DataSource and Cache that are actually "tenant-aware". (Implementing these is fairly trivial: just maintain a map of tenant id -> real cache/real datasource and then delegate all calls to the appropriate one). Configure the single SessionFactory to use your tenant-aware Cache and DataSource. A ThreadLocal can be used to make the tenant ID of the current request available to any code that needs to know about it.
I have used this approach before successfully to support multi-tenancy.
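If you are on Spring, the DataSource half of that delegation pattern can be sketched with Spring's AbstractRoutingDataSource (the Hibernate second-level cache half still needs the same treatment separately); the tenant id lives in a ThreadLocal set when the user picks a database:
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class TenantRoutingDataSource extends AbstractRoutingDataSource {

    // tenant chosen at login, bound to the current request's thread
    private static final ThreadLocal<String> CURRENT_TENANT = new ThreadLocal<String>();

    public static void setCurrentTenant(String tenantId) { CURRENT_TENANT.set(tenantId); }
    public static void clear() { CURRENT_TENANT.remove(); }

    @Override
    protected Object determineCurrentLookupKey() {
        // key into the targetDataSources map configured on this bean
        return CURRENT_TENANT.get();
    }
}
The per-tenant DataSources are registered in the bean's targetDataSources map, and a servlet filter (or interceptor) calls setCurrentTenant() once the user has chosen a database and clear() when the request ends.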
A: Where I used to work we did this via ThreadLocal following this guide. We just used one SessionFactory and swapped its datasource based on a session variable the user could change while logged in. I don't remember the exact details, but if you're interested I can dig up some more information on our implementation.
That being said though, the guys at my former workplace are now moving away from this approach and towards a sharded database. Definitely a more elegant solution that I'd recommend you take a look at.
A: Extend your DAO class from HibernateDaoSupport, then invoke the setSessionFactory() method to do the hot swap of the databases.
A: You could also take a look at the Hibernate Shards project:
http://www.hibernate.org/414.html
... which is focused on adding support for horizontal partitioning to the Hibernate Core. It does not yet cover the full Hibernate API, but does support a large portion of it (which may or may not be sufficient for your needs). Of course, they are working towards complete coverage.
A: I also tried the cache provider via ThreadLocal and the difficult part was doing the hot swap on the cache: you have to make sure the SessionFactory does not have any active sessions associated with it. Now, I think there is a much better solution: by using Spring 3 Java configuration, you can create your tenant-aware SessionFactory dynamically and let Spring do the cache management for you.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/120007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Retrieving CDATA contents from XML using PHP and simplexml I have the following XML structure:
<?xml version="1.0" ?>
<course xml:lang="nl">
<body>
<item id="787900813228567" view="12000" title="0x|Beschrijving" engtitle="0x|Description"><![CDATA[Dit college leert studenten hoe ze een onderzoek kunn$
<item id="5453116633894965" view="12000" title="0x|Onderwijsvorm" engtitle="0x|Method of instruction"><![CDATA[instructiecollege]]></item>
<item id="7433550075448316" view="12000" title="0x|Toetsing" engtitle="0x|Examination"><![CDATA[Opdrachten/werkstuk]]></item>
<item id="015071401858970545" view="12000" title="0x|Literatuur" engtitle="0x|Required reading"><![CDATA[Wayne C. Booth, Gregory G. Colomb, Joseph M. Wi$
<item id="5960589172957031" view="12000" title="0x|Uitbreiding" engtitle="0x|Expansion"><![CDATA[]]></item>
<item id="3610066867901779" view="12000" title="0x|Aansluiting" engtitle="0x|Place in study program"><![CDATA[]]></item>
<item id="19232369892482925" view="12000" title="0x|Toegangseisen" engtitle="0x|Course requirements"><![CDATA[]]></item>
<item id="3332396346891524" view="12000" title="0x|Doelgroep" engtitle="0x|Target audience"><![CDATA[]]></item>
<item id="6606851872934866" view="12000" title="0x|Aanmelden bij" engtitle="0x|Enrollment at"><![CDATA[]]></item>
<item id="1478643580820973" view="12000" title="0x|Informatie bij" engtitle="0x|Information at"><![CDATA[Docent]]></item>
<item id="9710608434763993" view="12000" title="0x|Rooster" engtitle="0x|Schedule"><![CDATA[1e semester, maandag 15.00-17.00, zaal 1175/030]]></item>
</body>
</course>
I want to get the data from one of the item tags. To get to this tag, I use the following xpath:
$description = $xml->xpath("//item[@title='0x|Beschrijving']");
This does indeed return an array in the form of:
Array
(
[0] => SimpleXMLElement Object
(
[@attributes] => Array
(
[id] => 787900813228567
[view] => 12000
[title] => 0x|Beschrijving
[engtitle] => 0x|Description
)
)
)
But where is the actual information (that is stored between the item tags) located? I must be doing something wrong, but I can't figure out what that might be... Probably something really simple... Help would be appreciated.
A: I believe it's equivalent to the __toString() method on the object, so
echo $description[0];
Should display it, or you can cast it;
$str = (string) $description[0];
A: Take a look at the PHP.net documentation for "SimpleXMLElement" (http://uk.php.net/manual/en/function.simplexml-element-children.php) it looks like converting the node to a string "(string)$value;" does the trick.
Failing that, there's plenty of examples on that page that should point you in the right direction!
A: When you load the XML file, you'll need to handle the CDATA.. This example works:
<?php
$xml = simplexml_load_file('file.xml', NULL, LIBXML_NOCDATA);
$description = $xml->xpath("//item[@title='0x|Beschrijving']");
var_dump($description);
?>
Here's the output:
array(1) {
[0]=>
object(SimpleXMLElement)#2 (2) {
["@attributes"]=>
array(4) {
["id"]=>
string(15) "787900813228567"
["view"]=>
string(5) "12000"
["title"]=>
string(15) "0x|Beschrijving"
["engtitle"]=>
string(14) "0x|Description"
}
[0]=>
string(41) "Dit college leert studenten hoe ze een on"
}
}
A: $description = $xml->xpath("//item[@title='0x|Beschrijving']");
while(list( , $node) = each($description)) {
echo($node);
}
dreamwerx's solution is better
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/120016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Tool to monitor HTTP, TCP, etc. Web Service traffic What's the best tool that you use to monitor Web Service, SOAP, WCF, etc. traffic that's coming and going on the wire? I have seen some tools that made with Java but they seem to be a little crappy. What I want is a tool that sits in the middle as a proxy and does port redirection (which should have configurable listen/redirect ports). Are there any tools work on Windows to do this?
A: You might find Microsoft Network Monitor helpful if you're on Windows.
A: Wireshark (or Tshark) is probably the defacto standard traffic inspection tool. It is unobtrusive and works without fiddling with port redirecting and proxying. It is very generic, though, as does not (AFAIK) provide any tooling specifically to monitor web service traffic - it's all tcp/ip and http.
You have probably already looked at tcpmon but I don't know of any other tool that does the sit-in-between thing.
A: I tried Fiddler with its reverse proxy ability, which is mentioned by @marxidad, and it seems to be working fine. Since Fiddler is a familiar UI for me and has the ability to show requests/responses in various formats (e.g. Raw, XML, Hex), I accept it as the answer to this question. One thing though: I use WCF and I got the following exception with the reverse proxy thing:
The message with To 'http://localhost:8000/path/to/service' cannot be processed at the receiver, due to an AddressFilter mismatch at the EndpointDispatcher. Check that the sender and receiver's EndpointAddresses agree
I have figured out (thanks Google, erm.. I mean Live Search :p) that this is because my endpoint addresses on the server and client differ by port number. If you get the same exception, consult the following MSDN forum message:
http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2302537&SiteID=1
which recommends using the clientVia endpoint behavior explained in the following MSDN article:
http://msdn.microsoft.com/en-us/magazine/cc163412.aspx
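For reference, a hedged sketch of what that clientVia configuration can look like (addresses, binding and contract names below are placeholders): the logical endpoint address stays the one the service expects, while the physical request goes to the proxy port.
<system.serviceModel>
  <behaviors>
    <endpointBehaviors>
      <behavior name="viaFiddler">
        <clientVia viaUri="http://localhost:8888/path/to/service" />
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <client>
    <endpoint address="http://localhost:8000/path/to/service"
              binding="basicHttpBinding"
              contract="IMyService"
              behaviorConfiguration="viaFiddler" />
  </client>
</system.serviceModel>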
A: For Windows HTTP, you can't beat Fiddler. You can use it as a reverse proxy for port-forwarding on a web server. It doesn't necessarily need IE, either. It can use other clients.
A: I've been using Charles for the last couple of years. Very pleased with it.
A: I second Wireshark. It is very powerful and versatile.
And since this tool will work not only on Windows but also on Linux or Mac OSX, investing your time to learn it (quite easy actually) makes sense. Whatever the platform or the language you use, it makes sense.
Regards,
Richard
Just Programmer
http://sili.co.nz/blog
A: Wireshark does not do port redirection, but sniffs and interprets a lot of protocols.
A: I find WebScarab very powerful
A: Check out Paros Proxy.
A: JMeter's built-in proxy may be used to record all HTTP request/response information.
Firefox "Live HTTP headers" plugin may be used to see what is happening on the browser side when sending/receiving request.
Firefox "Tamper data" plugin may be useful when you need to intercept and modify request.
A: I use LogParser to generate graphs and look for elements in IIS logs.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/120022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44"
}
|
Q: Does WPF have equivalent controls for all Winforms controls? Just found this out the hard way. I wanted to pop up a FontDialog to allow the user to choose a font.. one of those familiar dialogs..
A: Not all of them have equivalents.
The FontDialog, for instance, doesn't.. (grumble grumble). This page has the complete lowdown.. posting since it may be useful just as a mental note.
http://msdn.microsoft.com/en-us/library/ms750559.aspx
Update:
The Programming WPF book had this covered. Apparently some of the dialogs didn't make the RTM bus. The FontDialog that will be included in the next update is available here.. as is the ColorPicker dialog. Also you shouldn't blindly use Win32 dialogs, because the corresponding types in WPF (e.g. Font and Color) are "bigger and better" now.
http://blogs.msdn.com/wpfsdk/archive/2006/10/26/Uncommon-Dialogs--Font-Chooser-and-Color-Picker-Dialogs.aspx
A: VistaBridge samples have wrappers for some of the Vista dialog boxes!
Also check out the wrappers provided by the Microsoft.Win32 namespace.
[UPDATE] Microsoft.Win32.FileDialog
A: Embedding Windows Forms controls using the WindowsFormsHost can cause a lot of problems - especially when dealing with rendering, visibility, etc.
Some controls are already implemented by others and can be found on the web, such as:
NumericUpDown
DateTimePicker
SplitButton
and of course the new WPFDataGrid
A: I know a team that has been working on a WPF application for a couple of years now, and their feedback is that WPF is still no match for WinForms when it comes to complicated controls (advanced data grids, tree views and the like). Basic controls are OK though.
A: Unfortunately, it does not. However, you can "borrow" some of the Windows Forms dialogs by using the Microsoft.Win32 namespace, or you can integrate Windows Forms controls by using the System.Windows.Forms.Integration.WindowsFormsHost WPF element. Also, you can display Windows Forms dialogs directly by calling the constructor and then invoking the ShowDialog() method on them.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/120029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Any workarounds for non-static member array initialization? In C++, it's not possible to initialize array members in the initialization list, thus member objects should have default constructors and they should be properly initialized in the constructor. Is there any (reasonable) workaround for this apart from not using arrays?
[Anything that can be initialized using only the initialization list is in our application far preferable to using the constructor, as that data can be allocated and initialized by the compiler and linker, and every CPU clock cycle counts, even before main. However, it is not always possible to have a default constructor for every class, and besides, reinitializing the data again in the constructor rather defeats the purpose anyway.]
E.g. I'd like to have something like this (but this one doesn't work):
class OtherClass {
private:
    int data;
public:
    OtherClass(int i) : data(i) {}; // No default constructor!
};

class Foo {
private:
    OtherClass inst[3]; // Array size fixed and known ahead of time.
public:
    Foo(...)
        : inst[0](0), inst[1](1), inst[2](2)
    {};
};
The only workaround I'm aware of is the non-array one:
class Foo {
private:
    OtherClass inst0;
    OtherClass inst1;
    OtherClass inst2;
    OtherClass *inst[3];
public:
    Foo(...)
        : inst0(0), inst1(1), inst2(2) {
        inst[0] = &inst0;
        inst[1] = &inst1;
        inst[2] = &inst2;
    };
};
Edit: It should be stressed that OtherClass has no default constructor, and that it is very desirable to have the linker be able to allocate any memory needed (one or more static instances of Foo will be created), using the heap is essentially verboten. I've updated the examples above to highlight the first point.
A: One possible workaround is to avoid the compiler calling the OtherClass constructor at all, and to call it on your own using placement new to initialize it whichever way you need. Example:
class Foo
{
private:
    char inst[3*sizeof(OtherClass)]; // Array size fixed. OtherClass has no default ctor.

    // use Inst to access, not inst; returns a pointer so it works with placement new
    OtherClass *Inst(int i) {return (OtherClass *)inst + i;}
    const OtherClass *Inst(int i) const {return (const OtherClass *)inst + i;}

public:
    Foo(...)
    {
        new (Inst(0)) OtherClass(...);
        new (Inst(1)) OtherClass(...);
        new (Inst(2)) OtherClass(...);
    }

    ~Foo()
    {
        Inst(0)->~OtherClass();
        Inst(1)->~OtherClass();
        Inst(2)->~OtherClass();
    }
};
To cater for possible alignment requirements of the OtherClass, you may need to use __declspec(align(x)) if working in VisualC++, or to use a type other than char like:
Type inst[3*(sizeof(OtherClass)+sizeof(Type)-1)/sizeof(Type)];
... where Type is int, double, long long, or whatever describes the alignment requirements.
A: What data members are in OtherClass? Will value-initialization be enough for that class?
If value-initialization is enough, then you can value-initialize an array in the member initialization list:
class A {
public:
A ()
: m_a() // All elements are value-initialized (which for int means zero'd)
{
}
private:
int m_a[3];
};
If your array element types are class types, then the default constructor will be called.
EDIT: Just to clarify the comment from Drealmer.
Where the element type is non-POD, then it should have an "accessible default constructor" (as was stated above). If the compiler cannot call the default constructor, then this solution will not work.
The following example, would not work with this approach:
class Elem {
public:
Elem (int); // User declared ctor stops generation of implicit default ctor
};
class A {
public:
A ()
: m_a () // Compile error: No default constructor
{}
private:
Elem m_a[10];
};
A: One method I typically use to make a class member "appear" to be on the stack (although actually stored on the heap):
class Foo {
private:
    int const (&array)[3];

    int const (&InitArray() const)[3] {
        int (*const rval)[3] = new int[1][3];
        (*rval)[0] = 2;
        (*rval)[1] = 3;
        (*rval)[2] = 5;
        return *rval;
    }

public:
    explicit Foo() : array(InitArray()) { }
    virtual ~Foo() { delete[] &array[0]; }
};

To clients of your class, array appears to be of type "int const [3]". Combine this code with placement new and you can also truly initialize the values at your discretion using any constructor you desire. Hope this helps.
A: Array members are not initialized by default. So you could use a static helper function that does the initialization, and store the result of the helper function in a member.
#include "stdafx.h"
#include <algorithm>
#include <cassert>
class C {
public: // for the sake of demonstration...
typedef int t_is[4] ;
t_is is;
bool initialized;
C() : initialized( false )
{
}
C( int deflt )
: initialized( sf_bInit( is, deflt ) )
{}
static bool sf_bInit( t_is& av_is, const int i ){
std::fill( av_is, av_is + sizeof( av_is )/sizeof( av_is[0] ), i );
return true;
}
};
int _tmain(int argc, _TCHAR* argv[])
{
C c(1), d;
assert( c.is[0] == 1 );
return 0;
}
Worth noting is that in the next standard, they're going to support array initializers.
A: Use inheritance for creating proxy object
class ProxyOtherClass : public OtherClass {
public:
ProxyOtherClass() : OtherClass(0) {}
};
class Foo {
private:
ProxyOtherClass inst[3]; // Array size fixed and known ahead of time.
public:
Foo(...) {}
};
A: And what about using an array of pointers instead of an array of objects?
For example:
class Foo {
private:
    OtherClass *inst[3];
public:
    Foo(...) {
        inst[0] = new OtherClass(1);
        inst[1] = new OtherClass(2);
        inst[2] = new OtherClass(3);
    };
    ~Foo() {
        // each element was allocated individually, so delete each one
        delete inst[0];
        delete inst[1];
        delete inst[2];
    }
};
A: You say "Anything that can be initialized using only the initialization list is in our application far preferable to using the constructor, as that data can be allocated and initialized by the compiler and linker, and every CPU clock cycle counts".
So, don't use constructors. That is, don't use conventional "instances". Declare everything statically. When you need a new "instance", create a new static declaration, potentially outside of any classes. Use structs with public members if you have to. Use C if you have to.
You answered your own question. Constructors and destructors are only useful in environments with a lot of allocation and deallocation. What good is destruction if the goal is for as much data as possible to be allocated statically, and so what good is construction without destruction? To hell with both of them.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/120033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: SQLplus script to format text as numeric I am selecting rows from a table, but some of the columns are a text type, but they always have numeric data in them. How can I format them as numbers?
e.g. column quantity heading 'Quantity' format 999,999
However, since the column in the table is text, the numeric formatting is ignored.
A: You will need to TO_NUMBER the column in your query.
A: To render with thousand separators, you'll need to...
to_char(to_number(quantity), '999,999')
A: Thanks Steve,
I can now have:
column quantity heading 'Quantity' format 999,999
select TO_NUMBER(quantity) as quantity from Sales
And I get a right justified report.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/120054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Releasing a database build consistently via isql I am releasing a database build to SQL Server 2000 via a batch file using isql. The batch file is used so multiple files are released consistently to different SQL Servers (development, test, live).
The SQL Server uses ANSI code page 1252 (from sp_helpsort) but isql is an OEM client using code page 437. This means that all extended characters (with ASCII code > 128) are converted when the scripts are run, leading to inconsistent results when characters like “£” are included in the script. Differences are explained in this Microsoft knowledgebase article.
Possible solutions are:
*
*Save the script using Unicode and use osql.
*Turn off the AutoAnsiToOem setting using the SQL Server Client Network Utility (that writes a registry key).
Both these options rely on various people doing things consistently. All have to select the same code page option when saving a file OR all people performing the builds have to have the same option set for AutoAnsiToOem.
Is there a way to force the use of a code page either in the SQL script OR in the batch file that calls it, so that the build is always released consistently, regardless of how the file is saved or the various settings of whoever performs the release?
A: isql is obsolete. It isn't included in SQL Server 2005 or later, because it uses the DB-Library connections, which are also obsolete. For the reasons why, and the effects this has, see Connecting Early Version Clients to SQL Server 2000.
osql uses ODBC connections to connect to SQL Server. For completeness, this has been supplemented by sqlcmd in SQL Server 2005, which uses OLE DB with the SQL Native Client provider.
A: Further to the accepted answer, I have tested using sqlcmd against a SQL Server 2000 database and it works. You obviously have to run sqlcmd from a machine with the SQL Server 2005 client tools installed.
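For example, the release batch file line might look like this (server, database and file names are placeholders); sqlcmd reads the script without the ANSI-to-OEM conversion that isql applies:
sqlcmd -S MYSERVER -d MyDatabase -E -i release.sql -o release.log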
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/120059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Fetch a Wikipedia article with Python I try to fetch a Wikipedia article with Python's urllib:
f = urllib.urlopen("http://en.wikipedia.org/w/index.php?title=Albert_Einstein&printable=yes")
s = f.read()
f.close()
However instead of the html page I get the following response: Error - Wikimedia Foundation:
Request: GET http://en.wikipedia.org/w/index.php?title=Albert_Einstein&printable=yes, from 192.35.17.11 via knsq1.knams.wikimedia.org (squid/2.6.STABLE21) to ()
Error: ERR_ACCESS_DENIED, errno [No Error] at Tue, 23 Sep 2008 09:09:08 GMT
Wikipedia seems to block requests which are not from a standard browser.
Anybody know how to work around this?
A: You need to use urllib2, which supersedes urllib in the Python std library, in order to change the user agent.
Straight from the examples
import urllib2
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
infile = opener.open('http://en.wikipedia.org/w/index.php?title=Albert_Einstein&printable=yes')
page = infile.read()
A: It is not a solution to the specific problem. But it might be interesting for you to use the mwclient library (http://botwiki.sno.cc/wiki/Python:Mwclient) instead. That would be so much easier, especially since you will directly get the article contents, which removes the need for you to parse the html.
I have used it myself for two projects, and it works very well.
A: In case you are trying to access Wikipedia content (and don't need any specific information about the page itself), instead of using the api you should just call index.php with 'action=raw' in order to get the wikitext, like in:
'http://en.wikipedia.org/w/index.php?action=raw&title=Main_Page'
Or, if you want the HTML code, use 'action=render' like in:
'http://en.wikipedia.org/w/index.php?action=render&title=Main_Page'
You can also define a section to get just part of the content with something like 'section=3'.
You could then access it using the urllib2 module (as suggested in the chosen answer).
However, if you need information about the page itself (such as revisions), you'll be better off using mwclient as suggested above.
Refer to MediaWiki's FAQ if you need more information.
A: The general solution I use for any site is to access the page using Firefox and, using an extension such as Firebug, record all details of the HTTP request including any cookies.
In your program (in this case in Python) you should try to send a HTTP request as similar as necessary to the one that worked from Firefox. This often includes setting the User-Agent, Referer and Cookie fields, but there may be others.
A: requests is awesome!
Here is how you can get the html content with requests:
import requests
html = requests.get('http://en.wikipedia.org/w/index.php?title=Albert_Einstein&printable=yes').text
Done!
A: Rather than trying to trick Wikipedia, you should consider using their High-Level API.
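A small sketch of what that looks like against the standard api.php query interface, here fetching the raw wikitext of a page (the User-Agent string is just an example):
import json
import urllib2

opener = urllib2.build_opener()
opener.addheaders = [('User-Agent', 'MyWikiFetcher/0.1 (me@example.com)')]

url = ('https://en.wikipedia.org/w/api.php?action=query&prop=revisions'
       '&rvprop=content&format=json&titles=Albert%20Einstein')
data = json.loads(opener.open(url).read())
page = data['query']['pages'].values()[0]    # the API keys pages by page id
print page['revisions'][0]['*'][:200]        # '*' holds the wikitext in this response format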
A: Try changing the user agent header you are sending in your request to something like:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008072820 Ubuntu/8.04 (hardy) Firefox/3.0.1 (Linux Mint)
A: You don't need to impersonate a browser user-agent; any user-agent at all will work, just not a blank one.
A: Requesting the page with ?printable=yes gives you an entire relatively clean HTML document. ?action=render gives you just the body HTML. Requesting to parse the page through the MediaWiki action API with action=parse likewise gives you just the body HTML but would be good if you want finer control, see parse API help.
If you just want the page HTML so you can render it, it's faster and better to use the new RESTBase API, which returns a cached HTML representation of the page. In this case, https://en.wikipedia.org/api/rest_v1/page/html/Albert_Einstein.
As of November 2015, you don't have to set your user-agent, but it's strongly encouraged. Also, nearly all Wikimedia wikis require HTTPS, so avoid a 301 redirect and make https requests.
A: import urllib
s = urllib.urlopen('http://en.wikipedia.org/w/index.php?action=raw&title=Albert_Einstein').read()
This seems to work for me without changing the user agent. Without the "action=raw" it does not work for me.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/120061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
}
|
Q: Mnesia table replication/sharing Assume that we have N Erlang nodes running the same application. I want to share an mnesia table T1 with all N nodes, which I see no problem with. However, I want to share another mnesia table T2 with pairs of nodes. I mean the contents of T2 will be identical and replicated only within each sharing pair. In other words, I want N/2 different contents for the T2 table. Is this possible with mnesia, without renaming T2 for each distinct pair of nodes?
A: It's possible to do this with mnesia's table fragmentation, if one makes use of the mnesia_frag_hash callback behaviour. This allows you to control the distribution of keys, and it would be possible to construct the keys such that the callback is able to determine which node pair (and thus, which fragment) should be used.
Whether or not this works in your particular case depends on your access patterns and data set. Chances are that it's a pretty convoluted approach, and that you'd be better served by simply using different table names instead.
A: One table is always one table, no matter how many nodes you share it with. If you want pairs of nodes sharing a table, you would have to create a unique table for each pair of nodes.
You can use the same settings (records etc) for all those tables though, so there shouldn't be so much more work to get it done.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/120065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Doing readback from Direct3D textures and surfaces I need to figure out how to get the data from D3D textures and surfaces back to system memory. What's the fastest way to do such things and how?
Also if I only need one subrect, how can one read back only that portion without having to read back the entire thing to system memory?
In short I'm looking for concise descriptions of how to copy the following to system memory:
*
*a texture
*a subset of a texture
*a surface
*a subset of a surface
*a D3DUSAGE_RENDERTARGET texture
*a subset of a D3DUSAGE_RENDERTARGET texture
This is Direct3D 9, but answers about newer versions of D3D would be appreciated too.
A: The most involved part is reading from some surface that is in video memory ("default pool"). This is most often render targets.
Let's get the easy parts first:
*
*reading from a texture is the same as reading from 0-level surface of that texture. See below.
*the same for subset of a texture.
*reading from a surface that is in non-default memory pool ("system" or "managed") is just locking it and reading bytes.
*the same for subset of surface. Just lock relevant portion and read it.
So now we have left surfaces that are in video memory ("default pool"). This would be any surface/texture marked as render target, or any regular surface/texture that you have created in default pool, or the backbuffer itself. The complex part here is that you can't lock it.
Short answer is: GetRenderTargetData method on D3D device.
Longer answer (a rough outline of the code that will be below):
*
*rt = get render target surface (this can be surface of the texture, or backbuffer, etc.)
*if rt is multisampled (GetDesc, check D3DSURFACE_DESC.MultiSampleType), then: a) create another render target surface of same size, same format but without multisampling; b) StretchRect from rt into this new surface; c) rt = this new surface (i.e. proceed on this new surface).
*off = create offscreen plain surface (CreateOffscreenPlainSurface, D3DPOOL_SYSTEMMEM pool)
*device->GetRenderTargetData( rt, off )
*now off contains render target data. LockRect(), read data, UnlockRect() on it.
*cleanup
Even longer answer (paste from the codebase I'm working on) follows. This will not compile out of the box, because it uses some classes, functions, macros and utilities from the rest of the codebase; but it should get you started. I also omitted most of the error checking (e.g. whether the given width/height is out of bounds). I also omitted the part that reads the actual pixels and possibly converts them into a suitable destination format (that is quite easy, but can get long, depending on the number of format conversions you want to support).
bool GfxDeviceD3D9::ReadbackImage( /* params */ )
{
HRESULT hr;
IDirect3DDevice9* dev = GetD3DDevice();
SurfacePointer renderTarget;
hr = dev->GetRenderTarget( 0, &renderTarget );
if( !renderTarget || FAILED(hr) )
return false;
D3DSURFACE_DESC rtDesc;
renderTarget->GetDesc( &rtDesc );
SurfacePointer resolvedSurface;
if( rtDesc.MultiSampleType != D3DMULTISAMPLE_NONE )
{
hr = dev->CreateRenderTarget( rtDesc.Width, rtDesc.Height, rtDesc.Format, D3DMULTISAMPLE_NONE, 0, FALSE, &resolvedSurface, NULL );
if( FAILED(hr) )
return false;
hr = dev->StretchRect( renderTarget, NULL, resolvedSurface, NULL, D3DTEXF_NONE );
if( FAILED(hr) )
return false;
renderTarget = resolvedSurface;
}
SurfacePointer offscreenSurface;
hr = dev->CreateOffscreenPlainSurface( rtDesc.Width, rtDesc.Height, rtDesc.Format, D3DPOOL_SYSTEMMEM, &offscreenSurface, NULL );
if( FAILED(hr) )
return false;
hr = dev->GetRenderTargetData( renderTarget, offscreenSurface );
bool ok = SUCCEEDED(hr);
if( ok )
{
// Here we have data in offscreenSurface.
D3DLOCKED_RECT lr;
RECT rect;
rect.left = 0;
rect.right = rtDesc.Width;
rect.top = 0;
rect.bottom = rtDesc.Height;
// Lock the surface to read pixels
hr = offscreenSurface->LockRect( &lr, &rect, D3DLOCK_READONLY );
if( SUCCEEDED(hr) )
{
// Pointer to data is lr.pBits, each row is
// lr.Pitch bytes apart (often it is the same as width*bpp, but
// can be larger if driver uses padding)
// Read the data here!
offscreenSurface->UnlockRect();
}
else
{
ok = false;
}
}
return ok;
}
SurfacePointer in the code above is a smart pointer to a COM object (it releases the object on assignment or destructor). It simplifies error handling a lot. This is very similar to _com_ptr_t in Visual C++.
The code above reads back whole surface. If you want to read just a part of it efficiently, then I believe fastest way is roughly:
*
*create a default pool surface that is of the needed size.
*StretchRect from part of original surface to that smaller one.
*proceed as normal with the smaller one.
In fact this is quite similar to what code above does to handle multi-sampled surfaces. If you want to get just a part of a multi-sampled surface, you can do a multisample resolve and get part of it in one StretchRect, I think.
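A rough sketch of that sub-rectangle readback, reusing the same D3D9 calls as above (the function and variable names are mine, and it assumes the source surface is not multisampled; resolve it first as shown earlier otherwise):
#include <d3d9.h>

bool ReadbackSubRect( IDirect3DDevice9* dev, IDirect3DSurface9* src,
                      const RECT& srcRect, D3DFORMAT format )
{
    const UINT w = srcRect.right - srcRect.left;
    const UINT h = srcRect.bottom - srcRect.top;

    IDirect3DSurface9* smallRT = NULL;
    if( FAILED( dev->CreateRenderTarget( w, h, format, D3DMULTISAMPLE_NONE,
                                         0, FALSE, &smallRT, NULL ) ) )
        return false;

    // Copy only the region of interest into the small default-pool surface.
    if( FAILED( dev->StretchRect( src, &srcRect, smallRT, NULL, D3DTEXF_NONE ) ) )
    { smallRT->Release(); return false; }

    IDirect3DSurface9* sysmem = NULL;
    if( FAILED( dev->CreateOffscreenPlainSurface( w, h, format,
                                                  D3DPOOL_SYSTEMMEM, &sysmem, NULL ) ) )
    { smallRT->Release(); return false; }

    bool ok = SUCCEEDED( dev->GetRenderTargetData( smallRT, sysmem ) );
    if( ok )
    {
        D3DLOCKED_RECT lr;
        if( SUCCEEDED( sysmem->LockRect( &lr, NULL, D3DLOCK_READONLY ) ) )
        {
            // Read w*h pixels from lr.pBits, rows lr.Pitch bytes apart.
            sysmem->UnlockRect();
        }
        else
            ok = false;
    }
    sysmem->Release();
    smallRT->Release();
    return ok;
}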
Edit: removed piece of code that does actual read of pixels and format conversions. Was not directly related to question, and the code was long.
Edit: updated to match edited question.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/120066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: How can I find the first occurrence of a pattern in a string from some starting position? I have a string of arbitrary length, and starting at position p0, I need to find the first occurrence of one of three 3-letter patterns.
Assume the string contains only letters. I need to find the count of triplets starting at position p0 and jumping forward in triplets until the first occurrence of either 'aaa' or 'bbb' or 'ccc'.
Is this even possible using just a regex?
A: You can't really count with regexes, but you can do something like this:
pos $string = $start_from;
$string =~ m/\G # anchor to previous pos()
((?:...)*?) # capture everything up to the match
(aaa|bbb|ccc)
/xs or die "No match"
my $result = length($1) / 3;
But I think it's a bit faster to use substr() and unpack() to split into triples and walk the triples in a for-loop.
(edit: it's length(), not lenght() ;-)
A: $string=~/^ # from the start of the string
(?:.{$p0}) # skip (don't capture) "$p0" occurrences of any character
(?:...)*? # skip 3 characters at a time,
# as few times as possible (non-greedy)
(aaa|bbb|ccc) # capture aaa or bbb or ccc as $1
/x;
(Assuming p0 is 0-based).
Of course, it's probably more efficient to use substr on the string to skip forward:
substr($string, $p0)=~/^(?:...)*?(aaa|bbb|ccc)/;
A: Moritz says this might be faster than a regex. Even if it's a little slower, it's easier to understand at 5 am. :)
#0123456789.123456789.123456789.
my $string = "alsdhfaaasccclaaaagalkfgblkgbklfs";
my $pos = 9;
my $length = 3;
my $regex = qr/^(aaa|bbb|ccc)/;
while( $pos < length $string ) {
print "Checking $pos\n";
if( substr( $string, $pos, $length ) =~ /$regex/ ) {
print "Found $1 at $pos\n";
last;
}
$pos += $length;
}
A: The main part of this is split /(...)/. But at the end of this, you'll have your positions and occurrence data.
my @expected_triplets = qw<aaa bbb ccc>;
my $data_string
= 'fjeidoaaaivtrxxcccfznaaauitbbbfzjasdjfncccftjtjqznnjgjaaajeitjgbbblafjan'
;
my $place = 0;
my @triplets = grep { length } split /(...)/, $data_string;
my %occurrence_for = map { $_, [] } @expected_triplets;
foreach my $i ( 0..$#triplets ) {
my $triplet = $triplets[$i];
push( @{$occurrence_for{$triplet}}, $i ) if exists $occurrence_for{$triplet};
}
Or for simple counting by regex (it uses Experimental (??{}))
my ( $count, %count );
my $data_string
= 'fjeidoaaaivtrxxcccfznaaauitbbbfzjasdjfncccftjtjqznnjgjaaajeitjgbbblafjan'
;
$data_string =~ m/(aaa|bbb|ccc)(??{ $count++; $count{$^N}++ })/g;
A: If speed is a serious concern, you can, depending on what the 3 strings are, get really fancy by creating a tree (e.g. Aho-Corasick algorithm or similar).
A map for every possible state is possible, e.g. state[0]['a'] = 0 if no strings begin with 'a'.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/120071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Problem with Asp.Net RequireFieldValidator and Javascript WYSIWYG I am using the open source Javascript WYSIWYG from OpenWebWare and Asp.Net RequiredFieldValidator on the TextBox which I am calling the WYSIWYG for. Everything works fine, but the first time I try to submit the form, I get the server-side RFV ErrorMessage "Required", but if I submit a second time, it goes through.
Am I missing something? I would like to have the client-side validation... how can I get the text to register as not empty?
A: I think the reason for this behavior is that validation code runs earlier than the code that updates underlying TextBox from value of WYSIWYG. So the first time you get the error, then the field is updated and the second time you don't get it. Try removing all the content the second time and I bet you wont get validation error (since the value for validator at the moment is what you actually submitted the first time).
The solution would be to find a JavaScript API call for your WYSIWYG which would force the update of the underlying text box field and call it onclick (client-side) of your submit button or whatever you use for that.
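A minimal sketch of that idea in plain JavaScript; the editor handle and its getContent() call are hypothetical placeholders for whatever sync API your WYSIWYG actually exposes:
// Hypothetical API names; check the OpenWebWare docs for the real ones.
document.getElementById('btnSubmit').onclick = function () {
    var editor = window.myEditorInstance;              // hypothetical editor handle
    var textbox = document.getElementById('txtBody');  // the TextBox rendered as a textarea
    textbox.value = editor.getContent();               // hypothetical "sync to textarea" call
    return true; // let the ASP.NET client-side validation run against the updated value
};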
A: the textarea HTML tag is one of the most unpleasant tags to work with and I'm not 100% sure if the client-side validator will support it, regardless of whether it's a WYSIWYG or not.
I think you'd be best off using a CustomValidator and writing the JavaScript which does the checking manually.
Alternatively you can debug though the JavaScript which is used with FireBug or VS 2008.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/120075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Stopping looped notifications with VetoableChangeListener I have a problem with the design of a VetoableChangeListener. I implement the VetoableChangeListener interface to listen for changes of a property in a model class, so when the model fires
vetoableChange(PropertyChangeEvent evt) throws PropertyVetoException
…I try to save the change in a DB, which could fail (by an SQLException, for example). If it fails I throw a PropertyVetoException to revert changes in the model.
The model is delegating in a VetoableChangeSupport (JDK class), which when it receives a PropertyVetoException catches it and notifies the revert to ALL the VetoableChangeListener, with the oldValue/newValue interchanged (later it rethrows the exception), so that the event comes to my class again and I try to save in DB again, etc...
I have a workaround, which is that the model does NOT change until nobody throws a PropertyVetoException, so in the VetoableChangeListener I FIRST check whether the data I'm going to save in the database is NOT equal to the data in the model; if it's equal I simply ignore the change.
Is there another, better workaround?
A: Your "workaround" is not really a workaround but in fact sounds like the proper solution to me: confirming that there is in fact a change for the current state of the object prior to attempting to "change" the persisted version. This will also be much more efficient (database access is expensive).
A: You should check the Vetoable change before you change the model, not after...
ie: if there is a problem, the model is not changed, not revert the model if the change was wrong
A: To: timyates
That's exactly what I do: I receive the event, try to update the DB, and if it fails I throw the exception vetoing the change, so that the model is not updated. But the problem is that the VetoableChangeSupport notifies me of my own veto, entering a loop if I don't do the workaround that I explained in the question.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/120082",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do I alter the precision of a decimal column in Microsoft SQL Server? Is there a way to alter the precision of an existing decimal column in Microsoft SQL Server?
A: ALTER TABLE Testing ALTER COLUMN TestDec decimal(16,1)
Just put decimal(precision, scale), replacing the precision and scale with your desired values.
I haven't done any testing with this with data in the table, but if you alter the precision, you would be subject to losing data if the new precision is lower.
A: There may be a better way, but you can always copy the column into a new column, drop it and rename the new column back to the name of the first column.
to wit:
ALTER TABLE MyTable ADD NewColumnName DECIMAL(16, 2);
GO
UPDATE MyTable
SET NewColumnName = OldColumnName;
GO
ALTER TABLE MyTable DROP COLUMN OldColumnName;
GO
EXEC sp_rename
@objname = 'MyTable.NewColumnName',
@newname = 'OldColumnName',
@objtype = 'COLUMN'
GO
This was tested on SQL Server 2008 R2, but should work on SQL Server 2000+.
A: ALTER TABLE (Your_Table_Name) MODIFY (Your_Column_Name) DATA_TYPE();
For your problem:
ALTER TABLE (Your_Table_Name) MODIFY (Your_Column_Name) DECIMAL(Precision, Scale);
Note that MODIFY is MySQL/Oracle syntax; on SQL Server use ALTER COLUMN as shown above.
A: In Oracle 10G and later following statement will work.
ALTER TABLE <TABLE_NAME> MODIFY <COLUMN_NAME> <DATA_TYPE>
If the current data type is NUMBER(5,2) and you want to change it to NUMBER(10,2), following is the statement
ALTER TABLE <TABLE_NAME> MODIFY <COLUMN_NAME> NUMBER(10,2)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/120083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "113"
}
|
Q: performance of accessing a mono server application via remoting This is my setting: I have written a .NET application for local client machines, which implements a feature that could also be used on a webpage. To keep this example simple, assume that the client installs a software into which he can enter some data and gets some data back.
The idea is to create a webpage that holds a form into which the user enters the same data and gets the same results back as above. Due to the company's available web servers, the first idea was to create a mono webservice, but this was dismissed for reasons unknown. The "service" is not to be run as a webservice, but should be called by a PHP script. This is currently realized by calling the mono application via shell_exec from PHP.
So now I am stuck with a mono port of my application, which works fine, but takes way too long to execute. I have already stripped out all unnecessary dlls, methods etc, but calling the application via the command line - submitting the desired data via commandline parameters - takes approximately 700ms. We expect about 10 hits per second, so this could only work when setting up a lot of servers for this task.
I assume the 700ms are related to the cost of starting the application every time, because it does not make much difference in terms of time if I handle the request only once or five hundred times (I take the original input, vary it slightly and do 500 iterations with "new" data every time; starting from the second iteration, the processing time drops down to approximately 1ms per iteration).
My next idea was to setup the mono application as a remoting server, so that it only has to be started once and can then handle incoming requests. I therefore wrote another mono application that serves as the client. Calling the client, letting the client pass the data to the server and retrieving the result now takes 344ms. This is better, but still way slower than I would expect and want it to be.
I have then implemented a new project from scratch based on this blog post and get stuck with the same performance issues.
The question is: am I missing something related to the mono-projects that could improve the speed of the client/server? Although the idea of creating a webservice for this task was dismissed, would a webservice perform better under these circumstances (as I would not need the client application to call the service), although it is said that remoting is faster than webservices?
I could have made that clearer, but implementing a webservice is currently not an option (and please don't ask why, I didn't write the requirements ;))
Meanwhile I have checked that it's indeed the startup of the client, which takes most of the time in the remoting scenario.
I could imagine accessing the server via pipes from the command line, which would be perfectly suitable in my scenario. I guess this would be done using sockets?
A: You can try to use AOT to reduce the startup time. On .NET you would use ngen for that purpose; on mono just do a mono --aot on all assemblies used by your application.
AOT'ed code is slower than JIT'ed code, but has the advantage of reducing startup time.
You can even try to AOT framework assemblies such as mscorlib and System.
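For example (assembly names and framework paths are illustrative and vary by distribution/version):
mono --aot MyService.exe
mono --aot /usr/lib/mono/2.0/mscorlib.dll
mono --aot /usr/lib/mono/2.0/System.dll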
A: I believe that remoting is not an ideal thing to use in this scenario. However your idea of having mono on server instead of starting it every time is indeed solid.
Did you consider using SOAP webservices over HTTP? This would also help you with your 'web page' scenario.
Even if it is a little too slow for you, in my experience a custom RESTful services implementation would be easier to work with than remoting.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/120091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Stored Procedure Syntax My stored procedure is called as below from an SQL integration package within SQL Server 2005
EXEC ? = Validation.PopulateFaultsFileDetails ? , 0
Though I'm not sure what the ? means
A: The ? stands for a variable, to be precise, a parameter. The first ? is the return value of the stored procedure and the second one is the first parameter of the stored procedure.
A: When this SQL statment is called, both question marks (?) will be replaced. The first will be replaced by a variable which will receive the return value of the stored procedure. The second will be replaced by a value which will be passed into the stored procedure. The code to use this statement will look something like this (pseudocode):
dim result
SQL = "EXEC ? = Validation.PopulateFaultsFileDetails ? , 0"
SQL.execute(result, 99) // pass in 99 to the stored proc
debug.print result
This gives you 3 advantages:
*
*you can re-use the same bit of SQL with different values
*you can pick up the return value and test for success/error
*if the value you are passing in is a string, it should be correctly escaped for you, reducing the risk of SQL injection vulnerabilities in your app.
A: Thanks I appreciate the answer.
I was able to successfully execute the stored procedure using
DECLARE @FaultsFileName varchar
DECLARE @FaultsFileID int
EXEC @FaultsFileID = Validation.PopulateFaultsFileDetails 'SameMonth Test.txt' , @FaultsFileID
SELECT @FaultsFileID
But when I pass the input parameter as 'SameMonth Test.txt' in the Integration Package I get an error which says:
Parameter names cannot be a mixture of ordinal and named types.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/120102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Guidelines for email newsletter service I'm implementing a email newsletter sender service using .NET and Windows Server technologies. Are there comprehensive guidelines which could help avoiding emails being trapped by spam filters and other mechanisms?
They should cover all aspects of (legal) bulk mail sending: SMTP configuration, DNS, HTML content, images, links within content etc. A simple example: is it better to embed images or load them from a server?
It would be great if you could provide some empirical data to show the efficiency of some measures taken.
A: Although I don't have a definitive answer, I think this is a very important question.
Here are few tidbits I know about it
*
*Choose a clean hosting/smtp server. IP addresses of spamming SMTP servers are often black-listed by other ISPs.
*Send a simple introductory email to every subscriber, asking them to add your sender address to their safe list.
*Be very prudent in sending to only those people who are actually expecting it. You wouldn't want pattern recognizers of spam filters learning the smell of your content.
*If you don't know your smtp servers in advance, it's a good practice to provide configuration options in your application for controlling batch sizes and delay between batches. Some servers don't like large batches or continuous activity.
A: Unless you have a very specific reason to host the newsletter yourself, I think you'd be much better off using a third party service. There are lots out there, and some are very cheaply priced.
*
*It'll save you on development work (no point in re-inventing the wheel).
*Their system will handle all the unsubscribe link stuff that you need to include in email newsletters to comply with CAN SPAM laws or whatever.
*They handle the spam reports that you will inevitably get if you have a list of any non-trivial size. They keep records of who signed up, how they signed up, and their IP address, and can present those on receipt of a spam report to prove that their service wasn't sending out spam.
*You can use double opt-in (or confirmed opt-in), for extra evidence to prove that the people you're sending emails to actually signed up to receive them.
If you really do need to host it yourself I'd suggest you search the web for "email deliverability". Things that are known to help include properly set up SPF records, DomainKeys/DKIM, correct DNS settings (reverse DNS especially - best to just use an online service to check your DNS settings). You can test a lot of these things by sending an email to check-auth@verifier.port25.com.
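As an illustration, a bare-bones SPF record published as a DNS TXT entry might look like this (domain and IP are placeholders):
example.com.  IN  TXT  "v=spf1 mx ip4:203.0.113.10 -all"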
It's best to avoid using spammy words in your email - always a bit of guesswork, but some words can trip filters.
But I'd guess that by far the most important thing is to be sending your email from a trusted server that maintains good relationships with ISPs (i.e. ensuring that ISPs don't think that the server is sending out spam). This is a big reason why it's much much easier to get a third party to handle everything for you.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/120107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Retrieve catalog, metadata or schema information from MS Access database when connecting with PHP I am using following PHP code to connect to MS Access database:
$odb_conn = new COM("ADODB.Connection");
$connstr = "DRIVER={Microsoft Access Driver (*.mdb)}; DBQ=". $db_path.";";
$odb_conn->open($connstr);
How can I retrieve database catalog/metadata from the mdb file?
FOUND THE SOLUTION
$rs_meta = $odb_conn->OpenSchema(20, array(Null, Null, Null, "TABLE"));
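For reference, a small sketch of iterating that schema recordset (20 is ADO's adSchemaTables constant, and TABLE_NAME comes from the standard ADO TABLES schema rowset):
$rs_meta = $odb_conn->OpenSchema(20, array(Null, Null, Null, "TABLE"));
while (!$rs_meta->EOF) {
    // Each row describes one user table in the .mdb file
    echo $rs_meta->Fields->Item("TABLE_NAME")->Value . "\n";
    $rs_meta->MoveNext();
}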
A: You will find information on ADO here :
*
*http://msdn.microsoft.com/en-us/library/ms675532(VS.85).aspx
*http://www.w3schools.com/ado/default.asp
The connection object has an OpenSchema method to get database schema information.
I don't know how to use an MS Access DB with PHP or how your new COM() object works, but I think it's better to use an OleDB connection instead of an ADO object: http://msdn.microsoft.com/en-us/library/ms722784(VS.85).aspx
A: The MSysObjects table can be used to query metadata in Access:
SELECT NAME
FROM MSysObjects
WHERE Type In (1,4,6) AND Left([Name],4)<>"MSYS"
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/120114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Encryption libraries? I have been tasked with implementing a PKI library in C# for a company project, and have been unable to find a good implementation of it. There appear to be multiple libraries, and many broken links pointing to MSDN libraries that have been removed. I've seen people using Crypt32.dll, people building their own libraries, people using P/Invoke to access system certificate stores, people extending the built-in libraries, examples that simply don't apply to C# (e.g. Java examples), and commercial libraries.
My question is, which implementation/library is most recommended for simple encryption/decryption of data?
As some background for what I plan to do with it, I simply need to encrypt messages using a private key (.pfx), and decrypt with public keys (.cer). Message signing and authentication isn't required at this level of the project, although it may be in future.
I have seen reference to encryption lengths which make me uneasy. We need to be able to encrypt any length message (within reason, of course!). Is this something I need to worry about, and if so, is there a way to deal with it?
I would prefer not to store public/private keys in the windows certificate manager if at all possible, but if it makes implementation significantly simpler, so be it.
I realize PKI and encryption is a large and complex subject, but I'm hoping for a relatively simple library anyway... (one can hope, right?)
Thanks!
A: Well, you did not mention that the built-in class doesn't cover your need, so how about System.Security.Cryptography.RSACryptoServiceProvider?
It has a large set of ways to asymmetrically encrypt/decrypt streams.
There are several tutorial/guides to take you along the way:
*
*Public Key RSA Encryption in C# .NET - Code Project
*RSA Encryption in .NET -- Demystified! - By Peter A. Bromberg
There are countless more to be found through Google.
Update: About the length restrictions, it should not be a problem if you just implement the same buffering/chunking scheme on both sides, encryption and decryption (in practice, most designs instead encrypt the message with a symmetric key and use RSA only to protect that key).
Update2: Yes, my example was RSACryptoServiceProvider, but you can use any class that derives from System.Security.Cryptography.AsymmetricAlgorithm if you want a public/private key solution. Or build your own... or maybe not :)
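A minimal sketch with RSACryptoServiceProvider, using a freshly generated key pair; note that raw RSA can only encrypt a payload somewhat smaller than the key size, which is why longer messages are usually handled by encrypting a random symmetric key with RSA instead:
using System;
using System.Security.Cryptography;
using System.Text;

class RsaExample
{
    static void Main()
    {
        using (var rsa = new RSACryptoServiceProvider(2048))
        {
            byte[] plain = Encoding.UTF8.GetBytes("short secret message");

            // true = OAEP padding, preferred over PKCS#1 v1.5 where available
            byte[] cipher = rsa.Encrypt(plain, true);
            byte[] roundTrip = rsa.Decrypt(cipher, true);

            Console.WriteLine(Encoding.UTF8.GetString(roundTrip));
        }
    }
}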
A: Yes, what's wrong with built-in classes?
And if you don't want to use Windows certificate store you can use something like this
RSACryptoServiceProvider rscp = new RSACryptoServiceProvider();
rscp.FromXmlString("<RSAKeyValue><Modulus>key data gere</Modulus><Exponent></Exponent></RSAKeyValue>");
Not sure that this is a good idea for private keys, though.
There's a good tutorial on the subject here
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/120116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|