Q: Unit Testing Guidelines Does anyone know of where to find unit testing guidelines and recommendations? I'd like to have something which addresses the following types of topics (for example):
* Should tests be in the same project as application logic?
* Should I have test classes to mirror my logic classes, or should I have only as many test classes as I feel I need to have?
* How should I name my test classes, methods, and projects (if they go in different projects)?
* Should private, protected, and internal methods be tested, or just those that are publicly accessible?
* Should unit and integration tests be separated?
* Is there a good reason not to have 100% test coverage?
What am I not asking about that I should be?
An online resource would be best.
A: It's a good question. We grew our own organically, and I suspect the best way is just that. There's a bit of "It depends..." in there.
We put our tests in the same project, in a sub-namespace called "UnitTests".
Our test classes mirror the logic classes, to simplify keeping track of where the tests are in relation to what they are testing.
Classes are named like the logic class they are testing; methods are named for the scenario they are testing.
We only write tests for the public and internal methods (the tests are in the same project), and aim for 95% coverage of the class.
I prefer not to distinguish between "unit" and "integration". Too much time will be spent trying to figure out which is which... bag that! A test is a test.
100% is too difficult to achieve all the time. We aim for 95%. There are also diminishing returns on how much time it takes to get that final 5% versus what it will actually catch.
That's us and what suited our environment and pace. Your mileage may vary. Think about your environment and the personalities that are involved.
I look forward to seeing what others have to say on this one!
A: Josh's answer is right on - just one point of clarification:
The reason I separate unit tests from integration and acceptance tests is speed. I use TDD. I need close to instant feedback about the line of code I just created/modified. I cannot get that if I'm running full suites of integration and/or acceptance tests - tests that hit real disks, real networks, and really slow and unpredictable external systems.
Don't cross the beams. Bad things will happen if you do.
A: I would recommend Kent Beck's book on TDD.
Also, you need to go to Martin Fowler's site. He has a lot of good information about testing as well.
We are pretty big on TDD so I will answer the questions in that light.
Should tests be in the same project as application logic?
Typically we keep our tests in the same solution, but we break tests into separate DLLs/projects that mirror the DLLs/projects they are testing, maintaining namespaces with the tests in a sub-namespace. Example: Common / Common.Tests
Should I have test classes to mirror my logic classes or should I have only as many test classes as I feel I need to have?
Yes, your tests should be created before any classes are created, and by definition you should only test a single unit in isolation. Therefore you should have a test class for each class in your solution.
How should I name my test classes, methods, and projects (if they go in different projects)
I like to emphasize that behavior is what is being tested so I typically name test classes after the SUT. For example if I had a User class I would name the test class like so:
public class UserBehavior
Methods should be named to describe the behavior that you expect.
public void ShouldBeAbleToSetUserFirstName()
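Putting the two conventions together, a test might look like this (an NUnit-style sketch; the User class and its FirstName property are purely illustrative):
using NUnit.Framework;
[TestFixture]
public class UserBehavior
{
    [Test]
    public void ShouldBeAbleToSetUserFirstName()
    {
        var user = new User(); // hypothetical class under test
        user.FirstName = "Alice";
        Assert.AreEqual("Alice", user.FirstName);
    }
}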
Projects can be named however you want but usually you want it to be fairly obvious which project it is testing. See previous answer about project organization.
Should private, protected, and internal methods be tested, or just those that are publicly accessible?
Again you want tests to assert expected behavior as if you were a 3rd-party consumer of the objects being tested. If you test internal implementation details then your tests will be brittle. You want your tests to give you the freedom to refactor without worrying about breaking existing functionality. If your tests know about implementation details then you will have to change your tests whenever those details change.
Should unit and integration tests be separated?
Yes, unit tests need to be isolated from acceptance and integration tests. Separation of concerns applies to tests as well.
Is there a good reason not to have 100% test coverage?
I wouldn't get too hung up on the 100% code coverage thing. 100% code coverage tends to imply some level of quality in the tests, but that is a myth. You can have terrible tests and still get 100% coverage. I would instead rely on a good test-first mentality. If you always write a test before you write a line of code, then you will ensure 100% coverage, so it becomes a moot point.
In general if you focus on describing the full behavioral scope of the class then you will have nothing to worry about. If you make code coverage a metric then lazy programmers will simply do just enough to meet that mark and you will still have crappy tests. Instead rely heavily on peer reviews where the tests are reviewed as well.
A: I strongly recommend reading Test-Driven Development: By Example and Test-Driven Development: A Practical Guide. These are too many questions for a single topic.
A: In order:
* No, it's usually best to include them in a separate project, unless you want to be able to run diagnostics at runtime.
* The ideal is 100% code coverage, which means every line of code in every routine in every class.
* I go with ClassnameTest, ClassnameTest.MethodNameTestnumber.
* Everything.
* I'd say yes, as integration tests don't need to be run if unit tests fail.
* Simple properties that just set and get a field don't need to be tested.
A: With respect to your last question, in my experience, the "good" reason for not insisting upon 100% test coverage is that it takes a disproportionate amount of effort to get the last few percentage points, particularly in larger code bases. As such, it's a matter of deciding whether or not it's worth your time once you reach that point of diminishing returns.
A: Should tests be in the same project as application logic?
It depends. There are trade-offs either way.
Keeping them in one project requires extra bandwidth to distribute your project, adds build time, increases the installation footprint, and makes it easier to make the mistake of having production logic that depends on test code.
On the other hand, keeping separate projects can make it harder to write tests involving private methods/classes (depending on the programming language), and causes minor administrative hassles, such as making it harder to set up a new development environment (e.g. when a new developer joins the project).
How much these different costs matter varies by project, so there's no universal answer.
Should I have test classes to mirror my logic classes or should I have only as many test classes as I feel I need to have?
No.
You should have test classes that allow for well-factored test code (i.e. minimal duplication, clear intent, etc).
The obvious advantage of directly mirroring the logic classes in your test classes is that it makes it easy to find the tests corresponding to a particular piece of code. There are other ways to solve this problem without restricting the flexibility of the test code. Simple naming conventions for test modules and classes are usually enough.
How should I name my test classes, methods, and projects (if they go in different projects)
You should name them so that:
* each test class and test method has a clear purpose, and
* someone looking for a particular test (or for tests about a particular unit) can find it easily.
Should private, protected, and internal methods be tested, or just those that are publicly accessible?
Often non-public methods should be tested. It depends on whether you get enough confidence from testing just the public interface, or whether the unit you really want to be testing is not publicly accessible.
Should unit and integration tests be separated?
This depends on your choice of testing framework(s). Do whichever works best with your testing framework(s) and makes it so that:
* both the unit and integration tests relating to a piece of code are easy to find,
* it is easy to run just the unit tests,
* it is easy to run just the integration tests,
* it is easy to run all tests.
Is there a good reason not to have 100% test coverage?
Yes, there is a good reason. Strictly speaking “100% test coverage” means every possible situation in your code is exercised and tested. This is simply impractical for almost any project to achieve.
If you simply take “100% test coverage” to mean that every line of source code is exercised by the test suite at some point, then this is a good goal, but sometimes there are just a couple of lines in awkward places that are hard to reach with automated tests. If the cost of manually verifying that functionality periodically is less than the cost of going through contortions to reach those last five lines, then that is a good reason not to have 100% line coverage.
Rather than a simple rule that you should have 100% line coverage, encourage your developers to discover any gaps in your testing and find ways to fix those gaps, whether or not the number of lines "covered" improves. In other words, if you measure lines covered, then you will improve your line coverage, but what you actually want is improved quality. So don't forget that line coverage is just a very crude approximation of quality.
Source: https://stackoverflow.com/questions/106800 (question score: 23)
Q: What is Java EE? I realize that literally it translates to Java Enterprise Edition. But what I'm asking is what does this really mean? When a company requires Java EE experience, what are they really asking for? Experience with EJBs? Experience with Java web apps?
I suspect that this means something different to different people and the definition is subjective.
A: Yes: experience with EJBs, web apps (servlets and JSP), transactions, web services, management, and application servers.
It also means experience with "enterprise"-level applications, as opposed to desktop applications.
In many situations enterprise applications need to connect with a number of legacy systems; they are not only "web pages", and the features available in this edition of Java help solve that kind of connectivity.
A: J2EE traditionally referred to products and standards released by Sun. For example, if you were developing a standard J2EE web application, you would be using EJBs and Java Server Faces, and running in an application server that supports the J2EE standard. However, since there is such a huge open source plethora of libraries and products that do the same jobs as well as (and many will argue better than) these Sun offerings, the day-to-day meaning of J2EE has migrated to referring to these as well (for instance a Spring/Tomcat/Hibernate solution) in many minds.
This is a great book in my opinion that discusses the 'open source' approach to J2EE
http://www.theserverside.com/tt/articles/article.tss?l=J2EEWithoutEJB_BookReview
A: J(2)EE, strictly speaking, is a set of APIs (as the current top answer has it) which enable a programmer to build distributed, transactional systems. The idea was to abstract away the complicated distributed, transactional bits (which would be implemented by a Container such as WebSphere or Weblogic), leaving the programmer to develop business logic free from worries about storage mechanisms and synchronization.
In reality, it was a cobbled-together, design-by-committee mish-mash, which was pushed pretty much for the benefit of vendors like IBM, Oracle and BEA so they could sell ridiculously over-complicated, over-engineered, over-useless products. Which didn't have the most basic features (such as scheduling)!
J2EE was a marketing construct.
A: I would say that J2EE experience = in-depth experience with a few J2EE technologies, general knowledge about most J2EE technologies, and general experience with enterprise software in general.
A: Java EE is a collection of specifications for developing and deploying enterprise applications.
In general, enterprise applications refer to software hosted on servers that provide the applications that support the enterprise.
The specifications (defined by Sun) describe services, application programming interfaces (APIs), and protocols.
The 13 core technologies that make up Java EE are:
* JDBC
* JNDI
* EJBs
* RMI
* JSP
* Java servlets
* XML
* JMS
* Java IDL
* JTS
* JTA
* JavaMail
* JAF
The Java EE product provider is typically an application-server, web-server, or database-system vendor who provides classes that implement the interfaces defined in the specifications. These vendors compete on implementations of the Java EE specifications.
When a company requires Java EE experience, what they are really asking for is experience using the technologies that make up Java EE. Frequently, a company will only be using a subset of the Java EE technologies.
A: There are two editions of the Java environment: J2EE and J2SE. SE is the Standard Edition, which includes all the basic classes that you would need to write single-user applications. The Enterprise Edition is set up for multi-tiered enterprise applications, or possibly distributed applications. If you'd be using app servers, like Tomcat or WebSphere, you'd want to use J2EE, with the extra classes for n-tier support.
A: Java EE is actually a collection of technologies and APIs for the Java platform designed to support "Enterprise" Applications which can generally be classed as large-scale, distributed, transactional and highly-available applications designed to support mission-critical business requirements.
In terms of what an employer is looking for in specific techs, it is quite hard to say, because the playing field has kept changing over the last five years. It really is about the class of problems being solved more than anything else. Transactions and distribution are key.
A: Its meaning changes all the time. It used to mean servlets, JSP, and EJBs.
Nowadays it probably means Spring, Hibernate, etc.
Really what they are looking for is experience and understanding of the Java ecosystem, Servlet containers, JMS, JMX, Hibernate etc. and how they all fit together.
Testing and source control are important skills too.
A: It seems like Oracle is now trying to do away with JSPs (replacing them with Faces) and to emulate Spring's REST (JAX-RS) and DI.
ref: https://docs.oracle.com/javaee/7/firstcup/java-ee001.htm
Table 2-1: Web-Tier Java EE Technologies
* JavaServer Faces technology: a user-interface component framework for web applications that allows you to include UI components (such as fields and buttons) on an XHTML page, called a Facelets page; convert and validate UI component data; save UI component data to server-side data stores; and maintain component state.
* Expression Language: a set of standard tags used in Facelets pages to refer to Java EE components.
* Servlets: Java programming language classes that dynamically process requests and construct responses, usually for HTML pages.
* Contexts and Dependency Injection for Java EE: a set of contextual services that make it easy for developers to use enterprise beans along with JavaServer Faces technology in web applications.
Source: https://stackoverflow.com/questions/106820 (question score: 230)
Q: Javascript - Get Image height I need to display a bunch of images on a web page using AJAX. All of them have different dimensions, so I want to adjust their size before displaying them. Is there any way to do this in JavaScript?
Using PHP's getimagesize() for each image causes an unnecessary performance hit since there will be many images.
A: I was searching a solution to get height and width of an image using JavaScript. I found many, but all those solutions only worked when the image was present in browser cache.
Finally I found a solution to get the image height and width even if the image does not exist in the browser cache:
<script type="text/javascript">
var imgHeight;
var imgWidth;
function findHHandWW() {
imgHeight = this.height;
imgWidth = this.width;
return true;
}
function showImage(imgPath) {
var myImage = new Image();
myImage.name = imgPath;
myImage.onload = findHHandWW;
myImage.src = imgPath;
}
</script>
Thanks,
Binod Suman
http://binodsuman.blogspot.com/2009/06/how-to-get-height-and-widht-of-image.html
A: You can use img.naturalWidth and img.naturalHeight to get the real dimensions of the image in pixels.
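A minimal sketch (the element id is made up, and the natural* properties are only meaningful once the image data has loaded):
var img = document.getElementById('photo'); // hypothetical id
if (img.complete) {
    // intrinsic pixel size, unaffected by CSS scaling
    alert(img.naturalWidth + ' x ' + img.naturalHeight);
}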
A: Try this:
var curHeight;
var curWidth;
function getImgSize(imgSrc)
{
    var newImg = new Image();
    // Read the dimensions in onload; reading them synchronously after
    // setting src only works when the image is already cached.
    newImg.onload = function()
    {
        curHeight = newImg.height;
        curWidth = newImg.width;
    };
    newImg.src = imgSrc;
}
A: ...but... wouldn't it be better to adjust the image size on the server side rather than transmitting the bytes to the browser and doing it there?
When I say adjust the image size, I don't mean set the height and width in the HTML image tag. If you do that, you are still shipping a large number of bytes from server to client. I mean, actually manipulate the image itself server side.
I have .NET C# code here that takes that approach, but there must be a php way to do it too: http://ifdefined.com/www/gallery.html
Also, by doing it server side, that opens up the possibility of doing the adjustment just once and then saving the adjusted image, which would be very fast.
A: My preferred solution for this would be to do the resizing server-side, so you are transmitting less unnecessary data.
If you have to do it client-side though, and need to keep the image ratio, you could use the below:
var image_from_ajax = new Image();
image_from_ajax.src = fetch_image_from_ajax(); // Downloaded via ajax call?
image_from_ajax = rescaleImage(image_from_ajax);
// Rescale the given image to a max of max_height and max_width
function rescaleImage(image_name)
{
var max_height = 100;
var max_width = 100;
var height = image_name.height;
var width = image_name.width;
var ratio = height/width;
// If height or width are too large, they need to be scaled down
// Multiply height and width by the same value to keep ratio constant
if(height > max_height)
{
ratio = max_height / height;
height = height * ratio;
width = width * ratio;
}
if(width > max_width)
{
ratio = max_width / width;
height = height * ratio;
width = width * ratio;
}
image_name.width = width;
image_name.height = height;
return image_name;
}
A: Try with JQuery:
<script type="text/javascript">
function jquery_get_width_height()
{
var imgWidth = $("#img").width();
var imgHeight = $("#img").height();
alert("JQuery -- " + "imgWidth: " + imgWidth + " - imgHeight: " + imgHeight);
}
</script>
or
<script type="text/javascript">
function javascript_get_width_height()
{
var img = document.getElementById('img');
alert("JavaSript -- " + "imgWidth: " + img.width + " - imgHeight: " + img.height);
}
</script>
A: Do you want to adjust the images themselves, or just the way they display? If the former, you want something on the server side. If the latter, you just need to change image.height and image.width.
A: Well...there are several ways to interpret this question.
The first way and the way I think you mean is to simply alter the display size so all images display the same size. For this, I would actually use CSS and not JavaScript. Simply create a class that has the appropriate width and height values set, and make all <img> tags use this class.
A second way is that you want to preserve the aspect ratio of all the images, but scale the display size to a sane value. There is a way to access this in JavaScript, but I'll need a bit to write up a quick code sample.
The third way, and I hope you don't mean this way, is to alter the actual size of the image. This is something you'd have to do on the server side, as not only is JavaScript unable to create images, but it wouldn't make any sense, as the full sized image has already been sent.
A: It's worth noting that in Firefox 3 and Safari, resizing an image by just changing the height and width doesn't look too bad. In other browsers it can look very noisy because it's using nearest-neighbor resampling. Of course, you're paying to serve a larger image, but that might not matter.
A: Just load the image in a hidden <img> tag (style="display: none"), listen for the load event with jQuery, create a new Image() with JavaScript, set its source to the invisible image, and get the size as above.
Source: https://stackoverflow.com/questions/106828 (question score: 33)
Q: How best to draw in the console? I'm trying to write a console (as in terminal, not gaming console) pong game in python and I'm having trouble figuring how best to (re)draw the game.
I was thinking of having a 2D array as a sort of bitmap, editing the array to reflect the ball/paddles' new positions and then casting each row to a string and printing it. However, that means the old "frames" will remain, and if the dimensions of the game are smaller than the console window, old frames will still be visible.
Is there a way to delete characters from the console? '\b' I've heard is unreliable.
Or is there an easier alternative route to outputting to the console for this sort of app?
A: Try urwid. One of the examples bundled with urwid is a simulator for animated bar graphs. The bar graphs clear the screen well, without leaving artifacts of the old "frame".
A: It looks like there is a curses port/library for Python:
https://docs.python.org/library/curses.html
A: There are actually two libraries that solve this: the older curses and the newer S-Lang. Curses has a tendency to make buggy line art, especially on Windows and on Unicode consoles (its Unicode support is shit). S-Lang's screen-management functions are better.
While I haven't used either of them in Python, and it seems curses is better supported, in C at least I'm switching my code to S-Lang because of those issues, and because deep down I never really liked the curses API.
A: I have recently been developing an ASCII animation package (https://github.com/peterbrittain/asciimatics) which faced similar issues. While it doesn't have everything you need to write a game, it should give you most of what you want.
The Sprite class in particular will help you handle redrawing issues. There are plenty of samples to help you get to grips with various ways to use them and other effects in the package. Here's a little demo I put together as a tribute to one of my favourite games of yesteryear...
A: You can use curses.
It has a Windows Port and Unix Port, and plenty of documentation.
You can also use some helper libs.
A: I would investigate using the curses module. It will take care of a lot of the details and let you focus on the higher level stuff.
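As a rough sketch of the kind of redraw loop such a game could use (illustrative only: erase() wipes the previous frame so no artifacts remain, and 'q' quits):
import curses
import time

def main(stdscr):
    curses.curs_set(0)    # hide the cursor
    stdscr.nodelay(True)  # make getch() non-blocking
    x = 0
    while stdscr.getch() != ord('q'):
        stdscr.erase()            # wipe the previous frame
        stdscr.addstr(5, x, "o")  # draw the "ball"
        stdscr.refresh()
        x = (x + 1) % 40
        time.sleep(0.05)

curses.wrapper(main)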
Source: https://stackoverflow.com/questions/106850 (question score: 10)
Q: What's a good value for 'identity increment' for an 'Orders' table? What's a good value for an identity increment for an 'Orders' table? (orders as in shopping cart orders)
I want the order numbers to make it appear that we have more orders than we really do, and to make it harder for users to guess order numbers of other users in cases where that might be a problem.
I don't want too big a value such that I might run out of values, and I also don't want a noticeable sequence to be apparent.
I've settled on 42 for now
A: It is not usually a good (security) idea to expose IDs to end-users.
I would use a normal +1 autoincrement ID column, and have the user-visible order number be a string based off the current date. Maybe use date + number of orders so far today: "20080919336".
A: I would say the most important thing is to have your identity seed start out high, like 156,786 or something. As for the increment, it might be good to use something odd so that not all of your order numbers are even.
I must say however, that it is better to not use an Identity field for an order # that will be exposed to users. It's usually better to keep these things hidden in the database and have a separate field for the Order Number. This way, you can change an Order Number without messing up all your references. All your other tables will reference the Identity field (should be your primary key) and then you can just slap an index on the other Order # field to keep it unique.
You'll thank me later.
A: Why increment? Using a GUID would make the number of orders unguessable, and make it almost impossible to guess an order URL (obviously you'd still want to check to see if whoever's viewing it is authorized to see it).
If you are determined to use a monotonically increasing integer ID, it just comes down to estimation how many orders you'll have until you run out. But they're always going to be guessable if somebody can see a handful of them and guess the sequence increment number. Then they'll know how many orders you've had exactly, and be able to guess URLs all day.
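As a one-line sketch of the GUID approach in C# (the variable name is made up):
// An unguessable order reference; still verify ownership on every view.
string orderRef = Guid.NewGuid().ToString("N");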
A: If you want to make it appear there are more orders than there really are, just pick an arbitrarily large id number to start with. But then, if it were me, I'd just set the increment to 1. To keep users from guessing order numbers, obfuscating the number might not be the best way to go.
If I'm user 123, and I place an order, number 4567, let's say the url to view the order looks like:
http://example.com/orders?id=4567
Say I'm feeling mischievous and decide to start playing with that url. What if I try:
http://example.com/orders?id=5000
If there's no order 5000 yet, what will it display? What about something as simple as "Invalid order id". But then, say I try:
http://example.com/orders?id=4568
And that order does exist, should it display the order? It could check the id of the user that created the order, and unless it's me (good old user #123), display the same error message, "Invalid order id". That could make it impossible for a user to tell whether any given order id exists unless they created the order themselves.
A: You could use a non-numerical Order Reference code like "ABC0123". Depends on your platform, but you can either use this as the Primary Key for your table or in addition to the automatically incremented identifier (which would then simply become the internal reference).
Also: if a user guessing an order number is an issue, you really need to think about some security measures.
A: You can use the identity value. Just set the seed high and then increment that.
[ID] [int] IDENTITY(5497,73) NOT NULL,
I have a feeling that if people see they are order number 1, they would not care. What I would do is set it at 3 million with an increment of 1. It will be a large number, and go up. You can always reseed it if you think people aren't buying because they are the 5th person to order.
A: While it is the answer for everything BUT this question, 42 isn't a good bet.
One possibility is to use the customer ID with an appended increment number... but that wouldn't be an 'identity increment' with a seed on the orders table in that case.
Example: JoeBlow has an ID of 56 in the customers table, and it's his 18th order (5618).
To further mask your order count(?) you can do whatever else along those lines you like, such as appending milliseconds or something random. This is a simple example.
A: Why not have a pool of random numbers, and then take the next random number from your pool? This could be done by taking some data (a user id and a counter) and running it through an encryption/hashing algorithm.
A:
* Create an incrementing Int as your internal PK. It yields superior fill factors for your storage and indexing performance.
* Create a hash, composite key, or GUID as an external AK; this value should be the one exposed on forms etc. The US driver's licence # is a great example of contrived composite keys, generated from a unique random number plus some arbitrary combination of the driver's name characters.
* Never have a GUID as a clustered index; it causes terrible page-fill issues.
A: A simple way to do this is to start high, as has already been suggested, and use an increment of 1. When you have to show this order number to the user, reverse it and append a 1.
So order 64010 would become 101046
I have to append a 1 in case the Id ends with a zero.
The users will see some high numbers that don't seemingly have a pattern and it would be difficult to guess the pattern used.
Preferably, you would have a column in your table called OrderNumber which is just a reversal of Id. This would prevent you from mistakenly checking the Id for specific order numbers when you're debugging :)
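An illustrative T-SQL sketch of that reversal (appending the '1' before reversing is what keeps trailing zeros from being lost):
SELECT REVERSE(CAST(Id AS varchar(20)) + '1') AS OrderNumber -- 64010 -> '101046'
FROM Orders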
Source: https://stackoverflow.com/questions/106854 (question score: 2)
Q: Any experiences with Intel's Threading Building Blocks? Intel's Threading Building Blocks (TBB) open source library looks really interesting. Even though there's even an O'Reilly Book about the subject I don't hear about a lot of people using it. I'm interested in using it for some multi-level parallel applications (MPI + threads) in Unix (Mac, Linux, etc.) environments. For what it's worth, I'm interested in high performance computing / numerical methods kinds of applications.
Does anyone have experiences with TBB? Does it work well? Is it fairly portable (including GCC and other compilers)? Does the paradigm work well for programs you've written? Are there other libraries I should look into?
A: Portability
TBB is portable. It supports Intel and AMD (i.e. x86) processors, IBM PowerPC and POWER processors, ARM processors, and possibly others. If you look in the build directory, you can see all the configurations the build system supports, which include a wide range of operating systems (Linux, Windows, Android, macOS, iOS, FreeBSD, AIX, etc.) and compilers (GCC, Intel, Clang/LLVM, IBM XL, etc.). I have not tried TBB with the PGI C++ compiler, and I know that it does not work with the Cray C++ compiler (as of 2017).
A few years ago, I was part of the effort to port TBB to IBM Blue Gene systems. Static linking was a challenge, but is now addressed by the big_iron.inc build system helper. The other issues were supporting relatively ancient versions of GCC (4.1 and 4.4) and ensuring the PowerPC atomics were working. I expect that porting to any currently unsupported architecture would be relatively straightforward on platforms that provide or are compatible with GCC and POSIX.
Usage in community codes
I am aware of at least two HPC application frameworks that use TBB:
* MOOSE
* MADNESS
I do not know how MOOSE uses TBB, but MADNESS uses TBB for its task queue and memory allocator.
Performance versus other threading models
I have personally used TBB in the Parallel Research Kernels project, within which I have compared TBB to OpenMP, OpenCL, Kokkos, RAJA, C++17 Parallel STL, and other models. See the C++ subdirectory for details.
Benchmarks of the aforementioned models on an Intel Xeon Phi 7250 processor (the details aren't important; all models used the same settings) show that TBB does quite well except for smaller problem sizes, where the overhead of adaptive scheduling is more relevant. TBB has tuning knobs that will affect these results.
Full disclosure: I work for Intel in a research/pathfinding capacity.
A: I'm not doing numerical computing, but I work with data mining (think clustering and classification), and our workloads are probably similar: all the data is static and you have it at the beginning of the program. I briefly investigated Intel's TBB and found it overkill for my needs. After starting with raw pthread-based code, I switched to OpenMP and got the right mix of readability and performance.
A: ZThread is LGPL; you are limited to using the library via dynamic linkage if you are not working on an open source project.
The Threading Building Blocks (TBB) open source version (there is a new commercial version, $299; I don't know the differences yet) is GNU General Public License version 2 with a so-called "Runtime Exception" (that is specific to use only in creating free software).
I've seen other Runtime Exceptions that attempt to approach LGPL by enabling commercial use and static linking, but that is not the case here.
I'm only writing this because I took the chance to examine the libraries' licenses, and those should also be a consideration for selection, based on the use one intends to give them.
Thanks, John, for pointing out this update...
A: I have used TBB briefly, and will probably use it more in the future. I liked using it, most importantly because you don't have to deal with macros/extensions of C++, but remain within the language. Also, it's pretty portable; I have used it on both Windows and Linux. One thing though: it is difficult to work with threads directly using TBB; you have to think in terms of tasks (which is actually a good thing). Intel TBB does not support your use of bare locks (it makes this tedious). But overall, this is my preliminary experience.
I'd also recommend having a look at OpenMP 3.
A: I use TBB in one project. It seemed easier to use than threads.
There are tasks which can be run in parallel. A task is just a call to your parallelized subroutine. Load balancing is done automatically; that is why I consider it a higher-level parallelization library. I achieved a 2.5x speed-up without much work on a 4-core Intel processor.
There are examples, they answer questions on forums, and it is maintained and free.
A: It's worth being clear what TBB (Threading Building Blocks) is for, to contrast it with other alternatives (e.g. C++11 concurrency features). TBB is a portable and scalable library (not a compiler extension) allowing you to write your code in the form of lightweight tasks that TBB will schedule to run as fast as possible on the CPU resources available. It's not designed to support threading for other purposes (e.g. pre-emption).
I've used TBB to speed up existing image processing by turning for loops over image scan lines into parallel_for loops (with a minimum of 2-4 scan lines as the 'grain' size). This has been very successful. It does require that your loop body be (re)written to process an arbitrary index rather than assuming each loop body is processed sequentially (e.g. pointers that are incremented between each loop iteration).
This was a fairly trivial case, as there wasn't any shared storage to update. Using the more powerful features (e.g. pipeline) will require significant reimagining and/or rewriting of existing code, so is perhaps better suited to new code.
It's a powerful advantage that this TBB-based code remains portable, doesn't seem to interfere with other code elsewhere in the same process concurrently using other threading strategies, and can later be combined with multiprocessing strategies at a higher or lower level (e.g. the TBB parallel_for code could be called from a filter in a TBB multiprocessing pipeline).
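A minimal sketch of that scan-line pattern (process_row is a hypothetical per-row routine; the grain size of 4 matches the range mentioned above):
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>

void process_row(int y);  // hypothetical per-scan-line routine

void process_image(int height)
{
    tbb::parallel_for(
        tbb::blocked_range<int>(0, height, 4),  // grain size: 4 scan lines
        [](const tbb::blocked_range<int>& r) {
            // The body must handle an arbitrary range of row indices.
            for (int y = r.begin(); y != r.end(); ++y)
                process_row(y);
        });
}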
A: I've looked into TBB but never used it in a project. I saw no advantages (for my purposes) over ZThread. A brief and somewhat dated overview can be found here.
It's fairly complete, with several thread dispatch options, all the usual synchronization classes, and a very handy exception-based thread "interrupt" mechanism. It's easily extendable, well written and documented. I've used it on 20+ projects.
It also plays nice with any *NIX that supports POSIX threads as well as Windows.
Worth a look.
A: Quoting an earlier answer:
"The Threading Building Blocks (TBB) in the open source version (there is a new commercial version, $299, don't know the differences yet) is GNU General Public License version 2 with a so-called "Runtime Exception" (that is specific to the use only on creating free software). I've seen other Runtime Exceptions that attempt to approach LGPL by enabling commercial use and static linking; this is not the case."
According to this question threading building blocks is usable without copy-left restrictions with commercial use.
A: I introduced it into our code base because we needed a better malloc to use when we moved to a 16-core machine. With 8 cores and under, it wasn't a significant issue. It has worked well for us. We plan on using the fine-grained concurrent containers next. Ideally we can make use of the real meat of the product, but that requires rethinking how we build our code. I really like the ideas in TBB, but it's not easy to retrofit onto a code base.
You can't think of TBB as another threading library. They have a whole new model that really sits on top of threads and abstracts the threads away. You learn to think in task, parallel_for type operations and pipelines. If I were to build a new project I would probably try to model it in this fashion.
We work in Visual Studio and it works just fine. It was originally written for linux/pthreads so it runs just fine over there also.
A: Have you looked at the Boost library with its thread API?
Source: https://stackoverflow.com/questions/106862 (question score: 26)
Q: Adobe Captivate 3 Typing Sound viewing a recording I am using Adobe Captivate 3 and am having trouble disabling the typing noise when using a recording. Sometimes when I have this 'feature' disabled and make a recording, it doesn't make the typing sound, but when I publish the project, the typing sound returns. Also, I have other projects I am working on where the typing sound wasn't initially disabled during the recording and I can't get it to stop making that typing noise.
Any suggestions on how to disable the typing sound permanently within a new or existing Captivate project?
A: A couple of items from some searching around:
Adobe.com : Setting keyboard tap sounds
If you do not like the tapping sounds in your finished project, use the following procedure to remove them. The tapping sound is a project-level setting, so the sound either plays on all slides or is removed from all slides.
Adobe Captivate 3: Recorded Keystrokes Got You Re-Recording? Not Anymore!
Long post on this one, with a couple options.
Hope that helps!
A: You can remove the clicking sounds just before publishing (Captivate 3). Click Publish, then click Preferences then un-check "Play tap audio for recorded type when project is generated". This worked for me.
Source: https://stackoverflow.com/questions/106864 (question score: 0)
Q: How do I make WISA act like LAMP (Protecting .mp3s on IIS) I have created a few small Flash widgets that stream .mp3 audio from an Apache/PHP host. The mp3 file cannot be directly accessed and does not save itself to the browser's cache.
To do this I set the mp3 file permissions on the host to "owner: read/write" (numeric value 600). This makes it so that only my .php file can read the .mp3.
Then I make a request to my php file from my ActionScript, and it streams the mp3 to my widget. (If the client/user looks in the browser's cache, the mp3 file is not found, as desired.)
This is the php code that streams the file:
<?php
ob_start();
header("Expires: Mon, 20 Dec 1977 00:00:00 GMT");
header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT");
header("Cache-Control: no-store, no-cache, must-revalidate");
header("Cache-Control: post-check=0, pre-check=0", false);
header("Pragma: no-cache");
header("Content-Type: audio/mpeg");
// Note: validate this path in real code; passing $_GET["file"] through
// unsanitized allows directory traversal.
@readfile($_GET["file"]);
ob_end_flush();
?>
Does anyone know how to reproduce this behavior using IIS/ASP.NET?
1.) Make it so a file is only accessible to a file on the server.
2.) Stream that file using an .ASPX or .ASHX?
A: You're not really protecting the MP3s, just obfuscating them. Anyone can still save them, especially if they just fire up an HTTP debugger like Fiddler to figure out what HTTP calls are being made. The fact that you've set them to not cache and to go through a PHP script doesn't help much.
To get this same effect using ASP.NET, you'd write an HTTP handler (probably just off an .ashx), set up all the headers the same way on context.Response, then load the .mp3 file using System.IO.FileStream and send it to context.Response.OutputStream. Look up System.Web.IHttpHandler, System.IO.FileStream, and System.Web.HttpResponse on MSDN for more info.
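A rough sketch of what such a handler could look like (the class name and file path are illustrative, not a drop-in implementation):
using System.IO;
using System.Web;

public class Mp3Handler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Mirror the no-cache intent of the PHP version above.
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.ContentType = "audio/mpeg";

        // Hypothetical server-side path; never build it from raw user input.
        string path = context.Server.MapPath("~/App_Data/song.mp3");
        using (FileStream fs = File.OpenRead(path))
        {
            fs.CopyTo(context.Response.OutputStream); // stream to the client
        }
    }

    public bool IsReusable { get { return true; } }
}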
Source: https://stackoverflow.com/questions/106871 (question score: 0)
Q: InternalsVisibleTo attribute isn't working I am trying to use the InternalsVisibleTo assembly attribute to make my internal classes in a .NET class library visible to my unit test project. For some reason, I keep getting an error message that says:
'MyClassName' is inaccessible due to its protection level
Both assemblies are signed and I have the correct key listed in the attribute declaration. Any ideas?
A: You can use the AssemblyHelper tool, which will generate the InternalsVisibleTo syntax for you. Here's the link to the latest version. Just note that it only works for strongly-named assemblies.
A: In addition to all of the above, when everything seems to be correct, but the friend assembly stubbornly refuses to see any internals, reloading the solution or restarting Visual Studio can solve the problem.
A: Here's a macro I use to quickly generate this attribute. It's a bit hacky, but it works. On my machine. When the latest signed binary is in /bin/debug. Etc., equivocation, etc. Anyhow, you can see how it gets the key, so that'll give you a hint. Fix/improve as your time permits.
Sub GetInternalsVisibleToForCurrentProject()
Dim temp = "[assembly: global::System.Runtime.CompilerServices." + _
"InternalsVisibleTo(""{0}, PublicKey={1}"")]"
Dim projs As System.Array
Dim proj As Project
projs = DTE.ActiveSolutionProjects()
If projs.Length < 1 Then
Return
End If
proj = CType(projs.GetValue(0), EnvDTE.Project)
Dim path, dir, filename As String
path = proj.FullName
dir = System.IO.Path.GetDirectoryName(path)
filename = System.IO.Path.GetFileNameWithoutExtension(path)
filename = System.IO.Path.ChangeExtension(filename, "dll")
dir += "\bin\debug\"
filename = System.IO.Path.Combine(dir, filename)
If Not System.IO.File.Exists(filename) Then
MsgBox("Cannot load file " + filename)
Return
End If
Dim assy As System.Reflection.Assembly
' LoadFrom takes a file path; Assembly.Load expects an assembly display name.
assy = System.Reflection.Assembly.LoadFrom(filename)
Dim pk As Byte() = assy.GetName().GetPublicKey()
Dim hex As String = BitConverter.ToString(pk).Replace("-", "")
System.Windows.Forms.Clipboard.SetText(String.Format(temp, assy.GetName().Name, hex))
MsgBox("InternalsVisibleTo attribute copied to the clipboard.")
End Sub
A: Although using an AssemblyInfo file and adding InternalsVisibleTo still works, the preferred approach now (as far as I can tell) is to use an ItemGroup in your Project's csproj file, like so:
<ItemGroup>
<InternalsVisibleTo Include="My.Project.Tests" />
</ItemGroup>
If a PublicKey is required, this attribute may also be added.
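For example, a sketch of the keyed form (the Key metadata is only honored by newer SDK-style projects, and the property holding the key value here is made up):
<ItemGroup>
  <InternalsVisibleTo Include="My.Project.Tests" Key="$(MyPublicKey)" />
</ItemGroup>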
A: Another possible "gotcha": The name of the friend assembly that you specify in the InternalsVisibleToAttribute must exactly match the name of your friend assembly as shown in the friend's project properties (in the Application tab).
In my case, I had a project Thingamajig and a companion project ThingamajigAutoTests (names changed to protect the guilty) that both produced unsigned assemblies. I duly added the attribute [assembly: InternalsVisibleTo( "ThingamajigAutoTests" )] to the Thingamajig\AssemblyInfo.cs file, and commented out the AssemblyKeyFile and AssemblyKeyName attributes as noted above. The Thingamajig project built just fine, but its internal members stubbornly refused to show up in the autotest project.
After much head scratching, I rechecked the ThingamajigAutoTests project properties, and discovered that the assembly name was specified as "ThingamajigAutoTests.dll". Bingo - I added the ".dll" extension to the assembly name in the InternalsVisibleTo attribute, and the pieces fell into place.
Sometimes it's the littlest things...
A: You need to use the /out: compiler switch when compiling the friend assembly (the assembly that
does not contain the InternalsVisibleTo attribute).
The compiler needs to know the name of the assembly being compiled in order to determine if the resulting assembly should be considered a friend assembly.
A: If your assemblies aren't signed, but you are still getting the same error, check your AssemblyInfo.cs file for either of the following lines:
[assembly: AssemblyKeyFile("")]
[assembly: AssemblyKeyName("")]
The properties tab will still show your assembly as unsigned if either (or both) of these lines are present, but the InternalsVisibleTo attribute treats an assembly with these lines as strongly signed. Simply delete (or comment out) these lines, and it should work fine for you.
A: You are required to generate a new full public key for the assembly and then specify it in the attribute:
[assembly: InternalsVisibleTo("AssemblyName, PublicKey=<full public key>")]
Follow the below MSDN steps to generate new full public key for the assembly from visual studio.
To add a Get Assembly Public Key item to the Tools menu:
1. In Visual Studio, click External Tools on the Tools menu.
2. In the External Tools dialog box, click Add and enter Get Assembly Public Key in the Title box.
3. Fill the Command box by browsing to sn.exe. It is typically installed at the following location: C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0a\Bin\x64\sn.exe.
4. In the Arguments box, type the following (case sensitive): -Tp $(TargetPath).
5. Select the Use Output window check box.
6. Click OK. The new command is added to the Tools menu.
Whenever you need the public key token of the assembly you are developing, click the Get Assembly Public Key command on the Tools menu, and the public key token appears in the Output window.
A: In my case, using VS.NET 2015, I needed to sign BOTH assemblies (if at least one assembly is signed, or you want to reference the public key of your assembly).
My project didn't use signing at all. So I started by adding a sign key to my test library and using the InternalsVisibleTo attribute in my project's base library. But VS.NET always complained that it couldn't access the friend methods.
When I started to sign the base library (it can be the same or another sign key, as long as you do sign the base library), VS.NET was immediately able to work as expected.
A: Previous answers with PublicKey worked. (Visual Studio 2015: it NEEDS to be on one line, otherwise it complains that the assembly reference is invalid or cannot be referenced. PublicKeyToken didn't work.)
[assembly: InternalsVisibleTo("NameSpace.MyFriendAssembly, PublicKey=0024000004800000940000000602000000240000525341310004000001000100F73F4DDC11F0CA6209BC63EFCBBAC3DACB04B612E04FA07F01D919FB5A1579D20283DC12901C8B66A08FB8A9CB6A5E81989007B3AA43CD7442BED6D21F4D33FB590A46420FB75265C889D536A9519674440C3C2FB06C5924360243CACD4B641BE574C31A434CE845323395842FAAF106B234C2C1406E2F553073FF557D2DB6C5")]
Thanks to @Joe
To get the public key of the friend assembly:
sn -Tp path\to\assembly\MyFriendAssembly.dll
Inside a Developer Command Prompt (Start > Programs > Visual Studio 2015 > Visual Studio Tools > Developer Command Prompt for VS2015).
Thanks to @Ian G.
Although, the final touch that made it work for me after the above was to sign my friend (test) library project the same way the project of the library being shared is signed. Since it was a new test library, it wasn't signed yet.
A: Another possibility that may be tricky to track down, depending on how your code is written.
* You're invoking an internal method defined in X from another assembly Y
* The method signature uses internal types defined in Z
* You then have to add [InternalsVisibleTo] in X AND in Z
For example:
// In X
internal static class XType
{
internal static ZType GetZ() { ... }
}
// In Y:
object someUntypedValue = XType.GetZ();
// In Z:
internal class ZType { ... }
If you have it written like above, where you're not referring to ZType directly in Y, after having added Y as a friend of X, you may be mystified why your code still doesn't compile.
The compilation error could definitely be more helpful in this case.
A: I'm writing this out of frustration. Make sure the assembly you are granting access to is named as you expect.
I renamed my project but this does not automatically update the Assembly Name. Right click your project and click Properties. Under Application, ensure that the Assembly Name and Default Namespace are what you expect.
A: If you have more than one referenced assembly, check that all the necessary assemblies have the InternalsVisibleTo attribute. Sometimes it's not obvious, and there is no message telling you that you have to add the attribute to yet another assembly.
A: 1- Sign the test project
In Visual Studio go to the properties window of the test project and Sign the assembly by checking the checkbox with the same phrase in the Signing tab.
2- Create a PublicKey for the test project
Open Visual Studio Command Prompt (e.g. Developer Command Prompt for VS 2017). Go to the folder where the .dll file of the test project exists. Create a Public Key via sn.exe:
sn -Tp TestProject.dll
Note that the argument is -Tp, not -tp.
3- Introduce the PublicKey to the project to be tested
Go to the AssemblyInfo.cs file in the project to be tested and add this line with the PublicKey created in the previous step:
[assembly: InternalsVisibleTo("TestProjectAssemblyName, PublicKey=2066212d128683a85f31645c60719617ba512c0bfdba6791612ed56350368f6cc40a17b4942ff16cda9e760684658fa3f357c137a1005b04cb002400000480000094000000060200000024000052534131000400000100010065fe67a14eb30ffcdd99880e9d725f04e5c720dffc561b23e2953c34db8b7c5d4643f476408ad1b1e28d6bde7d64279b0f51bf0e60be2d383a6c497bf27307447506b746bd2075")]
Don't forget to replace the above project name and PublicKey with yours.
4- Make the private method internal
In the project to be tested change the access modifier of the method to internal.
internal static void DoSomething(){...}
A: It is worth noting that if the "friend" (tests) assembly is written in C++/CLI rather than C#/VB.NET, you need to use the following:
#using "AssemblyUnderTest.dll" as_friend
instead of a project reference or the usual #using statement. For some reason, there is no way to do this in the project reference UI.
A: Are you absolutely sure you have the correct public key specified in the attribute?
Note that you need to specify the full public key, not just the public key token. It looks something like:
[assembly: InternalsVisibleTo("MyFriendAssembly, PublicKey=0024000004800000940000000602000000240000525341310004000001000100F73F4DDC11F0CA6209BC63EFCBBAC3DACB04B612E04FA07F01D919FB5A1579D20283DC12901C8B66A08FB8A9CB6A5E81989007B3AA43CD7442BED6D21F4D33FB590A46420FB75265C889D536A9519674440C3C2FB06C5924360243CACD4B641BE574C31A434CE845323395842FAAF106B234C2C1406E2F553073FF557D2DB6C5")]
It's 320 or so hex digits. I'm not sure why you need to specify the full public key; possibly because with just the public key token, as used in other assembly references, it would be easier for someone to spoof the friend assembly's identity.
A: This applies only if you'd like to keep unsigned assemblies unsigned (and don't want to sign them for several reasons):
There is still another point: if you compile your base library from VS.Net to a local directory, it may work as expected.
BUT: As soon as you compile your base library to a network drive, security policies apply and the assembly can't be successfully loaded. This again causes VS.NET or the compiler to fail when checking for the PublicKey match.
FINALLY, it's possible to use unsigned assemblies:
https://msdn.microsoft.com/en-us/library/bb384966.aspx
You must ensure that BOTH assemblies are NOT SIGNED
And the Assembly attribute must be without PublicKey information:
<Assembly: InternalsVisibleTo("friend_unsigned_B")>
A: I had the same problem. None of the solutions worked.
I eventually discovered the issue was due to class X explicitly implementing interface Y, which is internal.
The method X.InterfaceMethod was unavailable, though I have no idea why.
The solution was to cast, (X as YourInterface).InterfaceMethod, in the test library, and then things worked.
A: As a side note, if you want to easily get the public key without having to use sn and figure out its options you can download the handy program here. It not only determines the public key but also creates the "assembly: InternalsVisibleTo..." line ready to be copied to the clipboard and pasted into your code.
A: I just resolved a similar problem with the InternalsVisibleTo Attribute. Everything seemed right and I couldn't figure out why the internal class I was aiming still wasn't accessible.
Changing the case of the key from upper to lower case fixed the problem.
A: InternalsVisibleTo is often used in the context of testing. If you're using a mocking framework such as Moq, you will need to expose the internals of your project to Moq as well. In the particular case of that framework, I also needed to add [assembly: InternalsVisibleTo("DynamicProxyGenAssembly2")]to my AssemblyInfo.
A: For what it's worth, I found that the property that was inaccessible due to its protection level:
MyField field;
had to be changed to:
internal MyField field;
And it then compiled. I thought that internal was the default access modifier? (It isn't for class members; the default there is private, which is why the explicit internal is needed.)
Source: https://stackoverflow.com/questions/106880 (question score: 79)
Q: Getting Image height before the image loads in HTML I have a table that is dynamically created using DIVs. Each row of the table has two images. I want to set the height of the div (that represents a particular row) to the height of the taller of the two images displayed in that row. The images to be displayed will always change, and they come from an external server.
How do I set the height of my div so that the images fit?
A: If you are trying to dynamically resize a couple of divs in a row within a table, you may be better off using an HTML table instead and putting each image within a td tag. This will make the tr tag resize accordingly for the images in each cell.
A: this.img = new Image();
this.img.src = url;
alert(this.img.width);
gives the width while
var img = new Image();
img.src = url;
alert(img.width);
doesn't...
dunno why.
A: You can:
* Not specify the height of the div, and let it expand automatically
* Once the image is loaded, do the following (a load-handler variant is sketched just after):
document.getElementById("myDiv").style.height = document.getElementById("myImage").height + "px";
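If you need that assignment to run as soon as the size is known, a small variant (same made-up element ids) is to do it in the image's load handler:
document.getElementById("myImage").onload = function () {
    document.getElementById("myDiv").style.height = this.height + "px";
};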
A: Pre-load them into javascript image objects then just reference the height and width.
Might take some clever devilry to work in all browsers...
function getSize(imgSrc){
    var aImg = new Image();
    // Read the size in onload; it isn't available until the image data arrives.
    aImg.onload = function(){
        aHeight = aImg.height;
        aWidth = aImg.width;
    };
    aImg.src = imgSrc;
}
A: We'll need a little more info to be very useful. You can get the height & width of an image after the page loads via Javascript (info), then you could resize the height of the div after loading. Otherwise, you're really out of luck since HTML itself doesn't have anything.
If you're using PHP, there's getimagesize(), which you can use if you're building the site dynamically with PHP. There are similar functions for other languages, but we'd need a little more info.
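For reference, that PHP call is a one-liner (the path is illustrative):
list($width, $height) = getimagesize('images/photo.jpg');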
A: If you want the browser to do layout based on the height of an image before it fetches the image, you need to send that height to the browser somewhere. This will require something server-side. The fastest thing would be to insert it into the HTML directly. Slower but more elegant would be to fetch it image by image with <script src=> statements that get instructions from a special bit of JavaScript-generating CGI. (The speed difference comes from network round trips.)
If you're willing to resize after the data arrives, it's much simpler. Either slap an onload handler on the images or stick them in normal dom (e.g. an actual table, though you can do it with divs and css) and let the layout engine do the work.
A: This question has been answered in multiple ways, and you asked the additional question "Won't this make the UI look bad?"
The answer to that question is Yes. The best thing for you to do in most cases will be to set the height of your div to something that looks good, then scale the images down to fit. This will make the rendering faster, and the final product will look better and more professional.
That's just my own opinion, though; I have no empirical data to back it up.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/106886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Possibilities for Python classes organized across files? I'm used to the Java model where you can have one public class per file. Python doesn't have this restriction, and I'm wondering what's the best practice for organizing classes.
A: I would say to put as many classes as can be logically grouped in that file without making it too big and complex.
A: Since there is no artificial limit, it really depends on what's comprehensible. If you have a bunch of fairly short, simple classes that are logically grouped together, toss in a bunch of 'em. If you have big, complex classes or classes that don't make sense as a group, go one file per class. Or pick something in between. Refactor as things change.
A: A Python file is called a "module" and it's one way to organize your software so that it makes "sense". Another is a directory, called a "package".
A module is a distinct thing that may have one or two dozen closely-related classes. The trick is that a module is something you'll import, and you need that import to be perfectly sensible to people who will read, maintain and extend your software.
The rule is this: a module is the unit of reuse.
You can't easily reuse a single class. You should be able to reuse a module without any difficulties. Everything in your library (and everything you download and add) is either a module or a package of modules.
For example, you're working on something that reads spreadsheets, does some calculations and loads the results into a database. What do you want your main program to look like?
from ssReader import Reader
from theCalcs import ACalc, AnotherCalc
from theDB import Loader
def main( sourceFileName ):
rdr= Reader( sourceFileName )
c1= ACalc( options )
c2= AnotherCalc( options )
ldr= Loader( parameters )
for myObj in rdr.readAll():
c1.thisOp( myObj )
c2.thatOp( myObj )
ldr.load( myObj )
Think of the import as the way to organize your code in concepts or chunks. Exactly how many classes are in each import doesn't matter. What matters is the overall organization that you're portraying with your import statements.
A: I happen to like the Java model for the following reason. Placing each class in an individual file promotes reuse by making classes easier to see when browsing the source code. If you have a bunch of classes grouped into a single file, it may not be obvious to other developers that there are classes there that can be reused simply by browsing the project's directory structure. Thus, if you think that your class can possibly be reused, I would put it in its own file.
A: It entirely depends on how big the project is, how long the classes are, if they will be used from other files and so on.
For example I quite often use a series of classes for data-abstraction - so I may have 4 or 5 classes that may only be 1 line long (class SomeData: pass).
It would be stupid to split each of these into separate files - but since they may be used from different files, putting all these in a separate data_model.py file would make sense, so I can do from mypackage.data_model import SomeData, SomeSubData
If you have a class with lots of code in it, maybe with some functions only it uses, it would be a good idea to split this class and the helper-functions into a separate file.
You should structure them so you do from mypackage.database.schema import MyModel, not from mypackage.email.errors import MyDatabaseModel - if where you are importing things from makes sense, and the files aren't tens of thousands of lines long, you have organised it correctly.
The Python Modules documentation has some useful information on organising packages.
A: I find myself splitting things up when I get annoyed with the bigness of files and when the desirable structure of relatedness starts to emerge naturally. Often these two stages seem to coincide.
It can be very annoying if you split things up too early, because you start to realise that a totally different ordering of structure is required.
On the other hand, when any .java or .py file is getting to more than about 700 lines I start to get annoyed, constantly trying to remember where "that particular bit" is.
With Python/Jython circular dependency of import statements also seems to play a role: if you try to split too many cooperating basic building blocks into separate files this "restriction"/"imperfection" of the language seems to force you to group things, perhaps in rather a sensible way.
As to splitting into packages, I don't really know, but I'd say probably the same rule of annoyance and emergence of happy structure works at all levels of modularity.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/106896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "297"
}
|
Q: Making code internal but available for unit testing from other projects We put all of our unit tests in their own projects. We find that we have to make certain classes public instead of internal just for the unit tests. Is there anyway to avoid having to do this. What are the memory implication by making classes public instead of sealed?
A: If it is an internal class then it must not be getting used in isolation. Therefore you shouldn't really be testing it apart from testing some other class that makes use of that object internally.
Just as you shouldn't test private members of a class, you shouldn't be testing internal classes of a DLL. Those classes are implementation details of some publicly accessible class, and therefore should be well exercised through other unit tests.
The idea is that you only want to test the behavior of a class because if you test internal implementation details then your tests will be brittle. You should be able to change the implementation details of any class without breaking all your tests.
If you find that you really need to test that class, then you might want to reexamine why that class is internal in the first place.
A: For documentation purposes:
Alternatively, you can instantiate an internal class via reflection, using Assembly.GetType.
Example:
//IServiceWrapper is public class which is
//the same assembly with the internal class
var asm = typeof(IServiceWrapper).Assembly;
//Namespace.ServiceWrapper is internal
var type = asm.GetType("Namespace.ServiceWrapper");
return (IServiceWrapper<T>)Activator
.CreateInstance(type, new object[1] { /*constructor parameter*/ });
For a generic type the process is slightly different, as below:
var asm = typeof(IServiceWrapper).Assembly;
//note the name Namespace.ServiceWrapper`1
//this is for calling Namespace.ServiceWrapper<>
var type = asm.GetType("Namespace.ServiceWrapper`1");
var genType = type.MakeGenericType(new Type[1] { typeof(T) });
return (IServiceWrapper<T>)Activator
.CreateInstance(genType, new object[1] { /*constructor parameter*/});
A: If you're using .NET, the InternalsVisibleTo assembly attribute allows you to create "friend" assemblies. These are specific strongly named assemblies that are allowed to access internal classes and members of the other assembly.
Note, this should be used with discretion as it tightly couples the involved assemblies. A common use for InternalsVisibleTo is for unit testing projects. It's probably not a good choice for use in your actual application assemblies, for the reason stated above.
Example:
[assembly: InternalsVisibleTo("NameAssemblyYouWantToPermitAccess")]
namespace NameOfYourNameSpace
{
A: Below are ways to use in .NET Core applications.
*
*Add AssemblyInfo.cs file and add [assembly: InternalsVisibleTo("AssemblytoVisible")]
*Add this in .csproj file (the project which contains the Internal classes)
<ItemGroup>
<AssemblyAttribute Include="System.Runtime.CompilerServices.InternalsVisibleTo">
<_Parameter1>Test_Project_Name</_Parameter1> <!-- The name of the project that you want the Internal class to be visible To it -->
</AssemblyAttribute>
</ItemGroup>
For more information please follow https://improveandrepeat.com/2019/12/how-to-test-your-internal-classes-in-c/
A: Classes can be both public AND sealed.
But, don't do that.
You can create a tool to reflect over internal classes, and emit a new class that accesses everything via reflection. MSTest does that.
Edit: I mean, if you don't want to include -any- testing stuff in your original assembly; this also works if the members are private.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/106907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "147"
}
|
Q: How to draw custom button in Window Titlebar with Windows Forms? How do you draw a custom button next to the minimize, maximize and close buttons within the Titlebar of the Form?
I know you need to use Win32 API calls and override the WndProc procedure, but I haven't been able to figure out a solution that works right.
Does anyone know how to do this? More specifically, does anyone know a way to do this that works in Vista?
A: The following will work in XP; I have no Vista machine handy to test it, but I think your issues stem from an incorrect hWnd somehow. Anyway, on with the poorly commented code.
// The state of our little button
ButtonState _buttState = ButtonState.Normal;
Rectangle _buttPosition = new Rectangle();
[DllImport("user32.dll")]
private static extern IntPtr GetWindowDC(IntPtr hWnd);
[DllImport("user32.dll")]
private static extern int GetWindowRect(IntPtr hWnd,
ref Rectangle lpRect);
[DllImport("user32.dll")]
private static extern int ReleaseDC(IntPtr hWnd, IntPtr hDC);
protected override void WndProc(ref Message m)
{
int x, y;
Rectangle windowRect = new Rectangle();
GetWindowRect(m.HWnd, ref windowRect);
switch (m.Msg)
{
// WM_NCPAINT
case 0x85:
// WM_PAINT
case 0x0F:
base.WndProc(ref m);
DrawButton(m.HWnd);
m.Result = IntPtr.Zero;
break;
// WM_NCACTIVATE
case 0x86:
base.WndProc(ref m);
DrawButton(m.HWnd);
break;
// WM_NCMOUSEMOVE
case 0xA0:
// Extract the least significant 16 bits
x = ((int)m.LParam << 16) >> 16;
// Extract the most significant 16 bits
y = (int)m.LParam >> 16;
x -= windowRect.Left;
y -= windowRect.Top;
base.WndProc(ref m);
if (!_buttPosition.Contains(new Point(x, y)) &&
_buttState == ButtonState.Pushed)
{
_buttState = ButtonState.Normal;
DrawButton(m.HWnd);
}
break;
// WM_NCLBUTTONDOWN
case 0xA1:
// Extract the least significant 16 bits
x = ((int)m.LParam << 16) >> 16;
// Extract the most significant 16 bits
y = (int)m.LParam >> 16;
x -= windowRect.Left;
y -= windowRect.Top;
if (_buttPosition.Contains(new Point(x, y)))
{
_buttState = ButtonState.Pushed;
DrawButton(m.HWnd);
}
else
base.WndProc(ref m);
break;
// WM_NCLBUTTONUP
case 0xA2:
// Extract the least significant 16 bits
x = ((int)m.LParam << 16) >> 16;
// Extract the most significant 16 bits
y = (int)m.LParam >> 16;
x -= windowRect.Left;
y -= windowRect.Top;
if (_buttPosition.Contains(new Point(x, y)) &&
_buttState == ButtonState.Pushed)
{
_buttState = ButtonState.Normal;
// [[TODO]]: Fire a click event for your button
// however you want to do it.
DrawButton(m.HWnd);
}
else
base.WndProc(ref m);
break;
// WM_NCHITTEST
case 0x84:
// Extract the least significant 16 bits
x = ((int)m.LParam << 16) >> 16;
// Extract the most significant 16 bits
y = (int)m.LParam >> 16;
x -= windowRect.Left;
y -= windowRect.Top;
if (_buttPosition.Contains(new Point(x, y)))
m.Result = (IntPtr)18; // HTBORDER
else
base.WndProc(ref m);
break;
default:
base.WndProc(ref m);
break;
}
}
private void DrawButton(IntPtr hwnd)
{
IntPtr hDC = GetWindowDC(hwnd);
int x, y;
using (Graphics g = Graphics.FromHdc(hDC))
{
// Work out size and positioning
int CaptionHeight = Bounds.Height - ClientRectangle.Height;
Size ButtonSize = SystemInformation.CaptionButtonSize;
x = Bounds.Width - 4 * ButtonSize.Width;
y = (CaptionHeight - ButtonSize.Height) / 2;
_buttPosition.Location = new Point(x, y);
// Work out color
Brush color;
if (_buttState == ButtonState.Pushed)
color = Brushes.LightGreen;
else
color = Brushes.Red;
// Draw our "button"
g.FillRectangle(color, x, y, ButtonSize.Width, ButtonSize.Height);
}
ReleaseDC(hwnd, hDC);
}
private void Form1_Load(object sender, EventArgs e)
{
_buttPosition.Size = SystemInformation.CaptionButtonSize;
}
A: I know it's been a long time since the last answer, but this really helped me recently, and I'd like to update the code provided by Chris with my comments and modifications.
The version runs perfectly on Win XP and Win 2003. On Win 2008 it has a small bug, which I was not able to identify, that appears when resizing windows. It works on Vista too (non-Aero), but note that the title bar buttons are not square there, and button dimensions should take that into account.
switch (m.Msg)
{
// WM_NCPAINT / WM_PAINT
case 0x85:
case 0x0F:
//Call base method
base.WndProc(ref m);
//we have 3 buttons in the corner of the window, so the first new button's left coord is offset by 4 widths
int crt = 4;
//navigate through all titlebar buttons on the form
foreach (TitleBarImageButton crtBtn in titleBarButtons.Values)
{
//Calculate button coordinates
p.X = (Bounds.Width - crt * crtBtn.Size.Width);
p.Y = (Bounds.Height - ClientRectangle.Height - crtBtn.Size.Height) / 2;
//Initialize button and draw
crtBtn.Location = p;
crtBtn.ButtonState = ImageButtonState.NORMAL;
crtBtn.DrawButton(m.HWnd);
//increment button left coord location offset
crt++;
}
m.Result = IntPtr.Zero;
break;
// WM_NCACTIVATE
case 0x86:
//Call base method
base.WndProc(ref m);
//Draw each button
foreach (TitleBarImageButton crtBtn in titleBarButtons.Values)
{
crtBtn.ButtonState = ImageButtonState.NORMAL;
crtBtn.DrawButton(m.HWnd);
}
break;
// WM_NCMOUSEMOVE
case 0xA0:
//Get current mouse position
p.X = ((int)m.LParam << 16) >> 16;// Extract the least significant 16 bits
p.Y = (int)m.LParam >> 16; // Extract the most significant 16 bits
p.X -= windowRect.Left;
p.Y -= windowRect.Top;
//Call base method
base.WndProc(ref m);
ImageButtonState newButtonState;
foreach (TitleBarImageButton crtBtn in titleBarButtons.Values)
{
if (crtBtn.HitTest(p))
{//mouse is over the current button
if (crtBtn.MouseButtonState == MouseButtonState.PRESSED)
//button is pressed - set pressed state
newButtonState = ImageButtonState.PRESSED;
else
//button not pressed - set hoover state
newButtonState = ImageButtonState.HOOVER;
}
else
{
//mouse not over the current button - set normal state
newButtonState = ImageButtonState.NORMAL;
}
//if button state not modified, do not repaint it.
if (newButtonState != crtBtn.ButtonState)
{
crtBtn.ButtonState = newButtonState;
crtBtn.DrawButton(m.HWnd);
}
}
break;
// WM_NCLBUTTONDOWN
case 0xA1:
//Get current mouse position
p.X = ((int)m.LParam << 16) >> 16;// Extract the least significant 16 bits
p.Y = (int)m.LParam >> 16; // Extract the most significant 16 bits
p.X -= windowRect.Left;
p.Y -= windowRect.Top;
//Call base method
base.WndProc(ref m);
foreach (TitleBarImageButton crtBtn in titleBarButtons.Values)
{
if (crtBtn.HitTest(p))
{
crtBtn.MouseButtonState = MouseButtonState.PRESSED;
crtBtn.ButtonState = ImageButtonState.PRESSED;
crtBtn.DrawButton(m.HWnd);
}
}
break;
// WM_NCLBUTTONUP
case 0xA2:
// WM_LBUTTONUP
case 0x202:
//Get current mouse position
p.X = ((int)m.LParam << 16) >> 16;// Extract the least significant 16 bits
p.Y = (int)m.LParam >> 16; // Extract the most significant 16 bits
p.X -= windowRect.Left;
p.Y -= windowRect.Top;
//Call base method
base.WndProc(ref m);
foreach (TitleBarImageButton crtBtn in titleBarButtons.Values)
{
//if button is press
if (crtBtn.ButtonState == ImageButtonState.PRESSED)
{
//Raise the button's click event
crtBtn.OnClick(EventArgs.Empty);
if (crtBtn.HitTest(p))
crtBtn.ButtonState = ImageButtonState.HOOVER;
else
crtBtn.ButtonState = ImageButtonState.NORMAL;
}
crtBtn.MouseButtonState = MouseButtonState.NOTPESSED;
crtBtn.DrawButton(m.HWnd);
}
break;
// WM_NCHITTEST
case 0x84:
//Get current mouse position
p.X = ((int)m.LParam << 16) >> 16;// Extract the least significant 16 bits
p.Y = (int)m.LParam >> 16; // Extract the most significant 16 bits
p.X -= windowRect.Left;
p.Y -= windowRect.Top;
bool isAnyButtonHit = false;
foreach (TitleBarImageButton crtBtn in titleBarButtons.Values)
{
//if mouse is over the button, or mouse is pressed
//(do not process messages when mouse was pressed on a button)
if (crtBtn.HitTest(p) || crtBtn.MouseButtonState == MouseButtonState.PRESSED)
{
//return 18 (do not process further)
m.Result = (IntPtr)18;
//we have a hit
isAnyButtonHit = true;
//return
break;
}
else
{//mouse is not pressed and not over the button, redraw button if needed
if (crtBtn.ButtonState != ImageButtonState.NORMAL)
{
crtBtn.ButtonState = ImageButtonState.NORMAL;
crtBtn.DrawButton(m.HWnd);
}
}
}
//if we have a hit, do not process further
if (!isAnyButtonHit)
//Call base method
base.WndProc(ref m);
break;
default:
//Call base method
base.WndProc(ref m);
//Console.WriteLine(m.Msg + "(0x" + m.Msg.ToString("x") + ")");
break;
}
The code demonstrates the messages that have to be handled and how to handle them.
The code uses a collection of custom TitleBarButton objects. That class is too big to be included here, but I can provide it if needed, along with an example.
A: Drawing seems to be the easy part, the following will do that:
[Edit: Code removed, see my other answer]
The real problem is changing the state and detecting clicks on the button... for that you'll need to hook into the global message handler for the program; .NET seems to hide the mouse events for a form while they're not in the actual container areas (i.e. mouse moves and clicks on the title bar). I was looking for info on that and have found it now; I'm working on it, and it shouldn't be too hard, if we can figure out what these messages are actually passing.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/106912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: How can I get source and variable values in ruby tracebacks? Here's the last few frames of a typical Ruby on Rails traceback:
And here are the last few frames of a typical Nevow traceback in Python:
It's not just the web environment either, you can make similar comparisons between ipython and irb. How can I get more of these sorts of details in Ruby?
A: AFAIK, once an exception has been caught it's too late to grab the context in which it was raised. If you trap the exception's new call, you could use evil.rb's Binding.of_caller to grab the calling scope, and do
eval("local_variables.collect { |l| [l, eval(l)] }", Binding.of_caller)
But that's quite a big hack. The right answer is probably to extend Ruby to allow some inspection of the call stack. I'm not sure if some of the new Ruby implementations will allow this, but I do remember a backlash against Binding.of_caller because it will make optimizations much harder.
(To be honest, I don't understand this backlash: as long as the interpreter records enough information about the optimizations performed, Binding.of_caller should be able to work, although perhaps slowly.)
Update
Ok, I figured it out. Longish code follows:
class Foo < Exception
  attr_reader :call_binding

  def initialize
    # Find the calling location
    expected_file, expected_line = caller(1).first.split(':')[0,2]
    expected_line = expected_line.to_i
    return_count = 5 # If we see more than 5 returns, stop tracing

    # Start tracing until we see our caller.
    set_trace_func(proc do |event, file, line, id, binding, kls|
      if file == expected_file && line == expected_line
        # Found it: Save the binding and stop tracing
        @call_binding = binding
        set_trace_func(nil)
      end
      if event == :return
        # Seen too many returns, give up. :-(
        set_trace_func(nil) if (return_count -= 1) <= 0
      end
    end)
  end
end

class Hello
  def a
    x = 10
    y = 20
    raise Foo
  end
end

class World
  def b
    Hello.new.a
  end
end

begin
  World.new.b
rescue Foo => e
  b = e.call_binding
  puts eval("local_variables.collect {|l| [l, eval(l)]}", b).inspect
end
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/106920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Shared File Database Suggestion I would like to build and deploy a database application for Windows based systems, but need to live within the following constraints:
*
*Cannot run as a server (i.e., have open ports);
*Must be able to share database files with other instances of the program (running on other machines);
*Must not require a DBA for maintenance;
*No additional cost for run-time license.
In addition, the following are nice to have "features":
*
*Zero-install (e.g., no registry entries, no need to put files in \Windows\..., etc.);
*"Reasonable" performance (yes, that's vague);
*"Reasonable" file size limitations (at least 1GB per table/file--just in case).
I've seen this question
Embedded Database for .net that can run off a network
but it doesn't quite answer it for me.
I have seen the VistaDB site, but while it looks promising, I have no personal experience with it.
I have also looked at SQLite, and while it seems good enough for Google, I (again) have no personal experience with it.
I would love to use a Java based solution because it's cross-platform (even though my main target is Windows, I'd like to be flexible) and WebStart is a really nice way to distribute software, but the most commonly used DBs (Derby and hsqldb) won't support shared access.
I know that I'm not the only one who's trying/tried to do this, so I'm hoping I could get some advice.
A: I'd go with SQLite. There are SQLite bindings for everything, and it's very widely used as a embedded database for a large number of applications.
A: I use SQLite at work and one thing that you should keep in mind is that it's file-based and uses a file lock for managing concurrent connections. It is not a great solution when you have multiple users trying to use the database at the same time. SQLite is, however, a great database for a one-user application: it's fast, has a small footprint, and has a thriving community built around it.
A: If you've got VStudio sitting around, how about SQL Server Compact Edition 3.5? That's MSSQL running in-proc.
http://www.microsoft.com/sql/editions/compact/downloads.mspx
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/106931",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Should I use internal or public visibility by default? I'm a pretty new C# and .NET developer. I recently created an MMC snapin using C# and was gratified by how easy it was to do, especially after hearing a lot of horror stories by some other developers in my organisation about how hard it is to do in C++.
I pretty much went through the whole project at some point and changed every instance of the "public" keyword to "internal", except as required by the runtime in order to run the snapin. What is your feeling on this: should you generally make classes and methods public or internal?
A: I think you should err on the side of internal classes and members. You can always increase an item's visibility but decreasing it can cause problems. This is especially true if you are building a framework for others.
You do need to be careful though not to hide useful functionality from your users. There are many useful methods in the .NET BCL that cannot be used without resorting to reflection. However, by hiding these methods, the surface area of what has to be tested and maintained is reduced.
A: I believe in blackboxes where possible. As a programmer, I want a well defined blackbox which I can easily drop into my systems, and have it work. I give it values, call the appropriate methods, and then get my results back out of it.
To that end, give me only the functionality that the class needs to expose to work.
Consider an elevator. To get it to go to a floor, I push a button. That's the public interface to the black box which activates all the functions needed to get the elevator to the desired floor.
A: I prefer to avoid marking classes as public unless I explicitly want my customer to consume them, and I am prepared to support them.
Instead of marking a class as internal, I leave the accessibility blank. This way, public stands out to the eye as something notable. (The exception, of course, is nested classes, which have to be marked if they are to be visible even in the same assembly.)
A: Most classes should be internal, but most non-private members should be public.
The question you should ask about a member is "if the class were made public would I want the member to be exposed?". The answer is usually "yes (so public)" because classes without any accessible members are not much use!
internal members do have a role; they are 'back-door access' meant only for close relatives that live in the same assembly.
Even if your class remains internal, it is nice to see which are front-door members and which are back-door. And if you ever change it to public you are not going to have to go back and think about which are which.
A: What you did is exactly what you should do; give your classes the most minimal visibility you can. Heck, if you want to really go whole hog, you can make everything internal (at most) and use the InternalsVisibleTo attribute, so that you can separate your functionality but still not expose it to the unknown outside world.
The only reason to make things public is that you're packaging your project in several DLLs and/or EXEs and (for whatever reason) you don't care to use InternalsVisibleTo, or you're creating a library for use by third parties. But even in a library for use by third parties, you should try to reduce the "surface area" wherever possible; the more classes you have available, the more confusing your library will be.
In C#, one good way to ensure you're using the minimum visibility possible is to leave off the visibility modifiers until you need them. Everything in C# defaults to the least visibility possible: internal for classes, and private for class members and inner classes.
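For example, a quick sketch of what those defaults mean in practice (the type names here are made up):

class Helper                 // no modifier: internal by default
{
    int count;               // no modifier: private by default
    class Nested { }         // nested class: private by default
}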
A: You should tend toward exposing as little as possible to other classes, and think carefully about what you do expose and why.
A: Is there any reason you need to use Internal instead of Private? You do realise that Internal has assembly level scope. In other words Internal classes/members are accessible to all classes in a multi-class assembly.
As some other answers have said, in general go for the highest level of encapsulation as possible (ie private) unless you actually need internal/protected/public.
A: I found a problem with using internal classes as much as possible. You cannot have methods, properties, fields, etc. of that type (or parameter type or return type) that are more visible than internal. This leads to constructors that are internal, as well as properties. This shouldn't be a problem, but in fact, when using Visual Studio and the XAML designer, there are problems. False-positive errors are detected by the designer because the methods are not public, and user control properties seem not to be visible to the designer. I don't know if others have already run into such issues...
A: You should try to make them only as visible as necessary, but as stated by Mike above, this causes problems with UserControls and using the VS Designer with those controls on forms or other UserControls.
So as a general rule, keep all classes and UserControls that you aren't adding using the Designer only as visible as they need to be. But if you are creating a UserControl that you want to use in the Designer (even if that's within the same assembly), you will need to make sure that the UserControl class, its default constructor, and any properties and events, are made public for the designer to work with it.
I had a problem recently where the designer would keep removing the this.myControl = new MyControl() line from the InitializeComponent() method because the UserControl MyControl was marked as internal along with its constructor.
It's really a bug, I think, because even if they are marked as internal they still show up in the Toolbox to add in the Designer; either Microsoft needs to show only public controls with public constructors, or they need to make it work with internal controls as well.
A: It depends on how much control you have over code that consumes it. In my Java development, I make all my stuff public final by default because getters are annoying. However, I also have the luxury of being able to change anything in my codebase whenever I want. In the past, when I've had to release code to consumers, I've always used private variables and getters.
A: I like to expose things as little as possible. Private, protected, internal, public: give classes, variables, properties, and functions the least amount of visibility they need for everything to still work.
I'll bump something's visibility up that chain toward public only when there's a good reason to.
A: I completely disagree with the answers so far. I feel that internal is a horrid idea, preventing another assembly from inheriting your types, or even using your internal types should the need for a workaround come about.
Today, I had to get to the internals of a System.Data.DataTable (I have to build a datatable lightning fast, without all of its checks), and I had to use reflection, since not a single type was available to me; they were all marked as internal.
A: By default a class is created as internal in C#.
internal means: access is limited to the current assembly.
See
http://msdn.microsoft.com/en-us/library/0b0thckt.aspx
A good article on the default scope being internal:
http://www.c-sharpcorner.com/UploadFile/84c85b/default-scope-of-a-C-Sharp-class/
A: Do not choose a "default". Pick what best fits the visibility needs for that particular class. When you choose a new class in Visual Studio, the template is created as:
class Class1
{
}
which is internal (since no scope is specified for a top-level class). It is up to you to specify the scope for the class (or leave it at the default). There should be a reason to expose the class.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/106941",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "43"
}
|
Q: Is the Unity Framework any good for Inversion of Control? I have been using IoC for a little while now and I am curious if I should use Microsoft's Unity framework (official name "Unity Application Block"). Does anyone have experience using it? So far I have been copying my IoC container code from project to project, but I think it would be better to use something standard. I think IoC can make a HUGE difference in keeping component based applications loosely coupled and therefore changeable, but I am by no means an expert on IoC, so I am nervous to switch to a framework that will just paint me into a corner as a dependency I will one day want to walk away from.
A: I am using Unity with no real problems. I know a few ALT.NET type people warn against Unity but I really think that is just because of the history the MS P&P team have of writing bloatware. Unity is not yet bloated IMO and works well.
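For anyone who hasn't tried it, registration and resolution are only a few lines. This is a minimal sketch, not production code; ILogger and FileLogger are made-up types, and it assumes the Microsoft.Practices.Unity namespace from the Unity Application Block:

using Microsoft.Practices.Unity;

public interface ILogger { void Write(string message); }

public class FileLogger : ILogger
{
    public void Write(string message) { /* append to a log file */ }
}

class Program
{
    static void Main()
    {
        var container = new UnityContainer();
        container.RegisterType<ILogger, FileLogger>();

        // The container constructs FileLogger and hands it back as ILogger.
        ILogger logger = container.Resolve<ILogger>();
        logger.Write("resolved through the container");
    }
}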
A: I took a look at the Unity Framework, but found it to be a little 'too big' for my needs (no, I can't really quantify that, it just seemed to require much more knowledge that other frameworks that I've been playing with... this was a while ago so it's possible that that's changed as Unity's been developed/refined).
My current IoC/Dependency Injection framework is Ninject. It's quick, fast, and I was able to go from reading the tutorials (about 10 minutes) to using it in a pre-existing project in about two hours.
If you're looking for a clean way to do dependency injection, I'd highly recommend checking it out.
A: I've played with CompositeWPF (aka Prism) - successor of Composite app block. From my experience Unity works much better as compared with previous version of ObjectBuilder. However it's up to you to evaluate IoC frameworks and choose one suited for your needs.
Unity tutorials & samples
Unity IoC Screencast
A: I would say stick with the one you know until you feel confident with it and the whole concept. After that you'll have better judgement to pick a framework which fulfills your needs.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/106956",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How can I lookup data about a book from its barcode number? I'm building the world's simplest library application. All I want to be able to do is scan in a book's UPC (barcode) using a typical scanner (which just types the numbers of the barcode into a field) and then use it to look up data about the book... at a minimum, title, author, year published, and either the Dewey Decimal or Library of Congress catalog number.
The goal is to print out a tiny sticker ("spine label") with the card catalog number that I can stick on the spine of the book, and then I can sort the books by card catalog number on the shelves in our company library. That way books on similar subjects will tend to be near each other, for example, if you know you're looking for a book about accounting, all you have to do is find SOME book about accounting and you'll see the other half dozen that we have right next to it which makes it convenient to browse the library.
There seem to be lots of web APIs to do this, including Amazon and the Library of Congress. But those are all extremely confusing to me. What I really just want is a single higher level function that takes a UPC barcode number and returns some basic data about the book.
A: Edit It would be pretty easy if you had ISBN. but converting from UPC to ISBN is not as easy as you'd like.
Here's some JavaScript code for it from http://isbn.nu, where it's done in script (isbn holds the scanned string):
if (isbn.indexOf("978") == 0) {
isbn = isbn.substr(3,9);
var xsum = 0;
var add = 0;
var i = 0;
for (i = 0; i < 9; i++) {
add = isbn.substr(i,1);
xsum += (10 - i) * add;
}
xsum %= 11;
xsum = 11 - xsum;
if (xsum == 10) { xsum = "X"; }
if (xsum == 11) { xsum = "0"; }
isbn += xsum;
}
However, that only converts from UPC to ISBN some of the time.
You may want to look at the barcode scanning project page, too - one person's journey to scan books.
So you know about Amazon Web Services. But that assumes amazon has the book and has scanned in the UPC.
You can also try the UPCdatabase at http://www.upcdatabase.com/item/{UPC}, but this is also incomplete - at least it's growing.
The library of congress database is also incomplete with UPCs so far (although it's pretty comprehensive), and is harder to get automated.
Currently, it seems like you'd have to write this yourself in order to have a high-level lookup that returns simple information (and tries each service)
A: There's a very straightforward web based solution over at ISBNDB.com that you may want to look at.
Edit: Updated API documentation link, now there's version 2 available as well
Link to prices and tiers here
You can be up and running in just a few minutes (these examples are from API v1):
*
*register on the site and get a key to use the API
*try a URL like:
http://isbndb.com/api/books.xml?access_key={yourkey}&index1=isbn&results=details&value1=9780143038092
The results=details gets additional details including the card catalog number.
As an aside, generally the barcode is the ISBN in either ISBN-10 or ISBN-13 form. You just have to delete the last 5 numbers if you are using a scanner and you pick up 18 numbers (the 13-digit EAN plus the 5-digit price supplement).
Here's a sample response:
<ISBNdb server_time="2008-09-21T00:08:57Z">
<BookList total_results="1" page_size="10" page_number="1" shown_results="1">
<BookData book_id="the_joy_luck_club_a12" isbn="0143038095">
<Title>The Joy Luck Club</Title>
<TitleLong/>
<AuthorsText>Amy Tan, </AuthorsText>
<PublisherText publisher_id="penguin_non_classics">Penguin (Non-Classics)</PublisherText>
<Details dewey_decimal="813.54" physical_description_text="288 pages" language="" edition_info="Paperback; 2006-09-21" dewey_decimal_normalized="813.54" lcc_number="" change_time="2006-12-11T06:26:55Z" price_time="2008-09-20T23:51:33Z"/>
</BookData>
</BookList>
</ISBNdb>
A: Sounds like the sort of job one might get a small software company to do for you...
More seriously, there are services that provide an interface to the ISBN catalog, www.literarymarketplace.com.
On worldcat.com, you can create a URL using the ISBN that will take you straight to a book detail page. That page isn't very useful on its own because it's still HTML scraping to get the data, but they have a link to download the book data in a couple of "standard" formats.
For example, their demo book: http://www.worldcat.org/isbn/9780060817084
Has a "EndNote" format download link http://www.worldcat.org/oclc/123348009?page=endnote&client=worldcat.org-detailed_record, and you can harvest the data from that file very easily. That's linked from their own OCLC number, not the ISBN, but the scrape to convert that isn't hard, and they may yet have a good interface to do it.
A: My librarian wife uses http://www.worldcat.org/, but they key off ISBN. If you can scan that, you're golden. Looking at a few books, it looks like the UPC is the same or related to the ISBN.
Oh, these guys have a function for doing the conversion from UPC to ISBN.
A: Note: I'm the LibraryThing guy, so this is partial self-promotion.
Take a look at this StackOverflow answer, which covers some good ways to get data for a given ISBN.
To your issues, Amazon includes a simple DDC (Dewey); Google does not. The WorldCat API does, but you need to be an OCLC library to use it.
The ISBN/UPC issue is complex. Prefer the ISBN, if you can find them. Mass market paperbacks sometimes sport UPCs on the outside and an ISBN on inside.
LibraryThing members have developed a few pages on the issue and on efforts to map the two:
*
*http://www.librarything.com/wiki/index.php/UPC
*http://www.librarything.com/wiki/index.php/CueCat:_ISBNs_and_Barcodes
If you buy from Borders your book's barcodes will all be stickered over with their own internal barcodes (called a "BINC"). Most annoyingly whatever glue they use gets harder and harder to remove cleanly over time. I know of no API that converts them. LibraryThing does it by screenscraping.
For an API, I'd go with Amazon. LibraryThing is a good non-API option, resolving BINCs and adding DDC and LCC for books that don't have them by looking at other editions of the "work."
What's missing is the label part. Someone needs to create a good PDF template for that.
A: Using the web site Library Thing, you can scan in your barcodes (the entire barcode, not just the ISBN - if you have a scanning "wedge" you're in luck) and build your library. (It is an excellent social network - think StackOverflow for book enthusiasts.)
Then, using the TOOLS section, you can export your library. Now you have a text file to import/parse and can create your labels, a card catalog, etc.
A: I'm afraid the problem is database access. Companies pay to have a UPC assigned and so the database isn't freely accessible. The UPCdatabase site mentioned by Philip is a start, as is UPCData.info, but they're user entered--which means incomplete and possibly inaccurate.
You can always enter in the UPC to Google and get a hit, but that's not very automated. But it does get it right most of the time.
I thought I remembered Jon Udell doing something like this (e.g., see this), but it was purely ISBN based.
Looks like you've found a new project for someone to work on!
A: If you're wanting to use Amazon you can implement it easily with LINQ to Amazon.
A: Working in the library world we simply connect to the LMS pass in the barcode and hey presto back comes the data. I believe there are a number of free LMS providers - Google for "open source lms".
Note: This probably works off ISBN...
A: You can find a PHP implemented ISBN lookup tool at Dawson Interactive.
A: I frequently recommend using Amazon's Product Affiliate API (check it out here https://affiliate-program.amazon.com), however there are a few other options available as well.
If you want to guarantee the accuracy of the data, you can go with the a paid solution. GS1 is the organization that issues UPC codes, so their information should always be accurate (https://www.gs1us.org/tools/gs1-company-database-gepir).
There are also a number of third party databases with relevant information such as https://www.upccodesearch.com/ or https://www.upcdatabase.com/ .
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/106963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "75"
}
|
Q: How do I read a file over a network that is in use/locked by another process in c#? Is there a way to read a locked file across a network given that you are the machine admin on the remote machine? I haven't been able to read the locked file locally, and attempting it over the network adds another layer of difficulty.
A: Depending on the type of lock (read only vs exclusive) it should be possible to copy the file first, then you can work with the unlocked copy.
You should be able to do that in a background thread. If you really like threading, have the file watcher start the read process once the copy is complete (although that might be overkill)
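A rough sketch of that copy-then-read approach (the temp-file handling is illustrative; File.Copy will still throw if the lock is exclusive):

using System.IO;

class LockedFileReader
{
    public static string ReadRemoteFile(string remotePath)
    {
        // Copy the (possibly read-locked) remote file, then read the copy.
        string localCopy = Path.GetTempFileName();
        try
        {
            File.Copy(remotePath, localCopy, true);
            return File.ReadAllText(localCopy);
        }
        finally
        {
            File.Delete(localCopy);
        }
    }
}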
A: There is no problem READing the file locally or remotely if it's not locked EXCLUSIVELY or READ/WRITE. If the file is locked, your administrative rights will not help (even if you're GOD :-). If the file is not locked for READ (you can check by opening it with Notepad), you can read it locally and remotely (it doesn't matter, unless your network share imposes some extra restrictions).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/106965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: What's the best way to pass data between concurrent threads in .NET? I have two threads, one needs to poll a bunch of separate static resources looking for updates. The other one needs to get the data and store it in the database. How can thread 1 tell thread 2 that there is something to process?
A: If the pieces of data are independent then treat the pieces of data as work items to be processed by a pool of threads. Use the thread pool and QueueUserWorkItem to post the data to the thread(s). You should get better scalability using a pool of symmetric threads and limiting the amount of synchronisation that has to occur between the producer and consumer(s).
For example (from MSDN):
// TaskInfo here is a small helper class from the MSDN sample that just
// holds the format string and the value to report.
TaskInfo ti = new TaskInfo("This report displays the number {0}.", 42);
// Queue the task and data.
if (ThreadPool.QueueUserWorkItem(new WaitCallback(ThreadProc), ti)) {
Console.WriteLine("Main thread does some work, then sleeps.");
// If you comment out the Sleep, the main thread exits before
// the ThreadPool task has a chance to run. ThreadPool uses
// background threads, which do not keep the application
// running. (This is a simple example of a race condition.)
Thread.Sleep(1000);
Console.WriteLine("Main thread exits.");
}
else {
Console.WriteLine("Unable to queue ThreadPool request.");
}
// The thread procedure performs the independent task, in this case
// formatting and printing a very simple report.
//
static void ThreadProc(Object stateInfo) {
TaskInfo ti = (TaskInfo) stateInfo;
Console.WriteLine(ti.Boilerplate, ti.Value);
}
A: I use Monitor.Wait / Pulse on a Queue of work items.
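Something along these lines - a minimal sketch, where WorkItem is a placeholder for whatever the polling thread produces:

using System.Collections.Generic;
using System.Threading;

class WorkItem { /* payload fields go here */ }

class WorkQueue
{
    private readonly Queue<WorkItem> _queue = new Queue<WorkItem>();
    private readonly object _lock = new object();

    // Producer side: called by the polling thread when it finds an update.
    public void Enqueue(WorkItem item)
    {
        lock (_lock)
        {
            _queue.Enqueue(item);
            Monitor.Pulse(_lock); // wake the consumer if it is waiting
        }
    }

    // Consumer side: called by the database thread; blocks until work arrives.
    public WorkItem Dequeue()
    {
        lock (_lock)
        {
            while (_queue.Count == 0)
                Monitor.Wait(_lock); // releases _lock while waiting
            return _queue.Dequeue();
        }
    }
}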
A: Does the "store in the DB" thread always need to be running? It seems like perhaps the best option (if possible) would be to have the polling thread spin up another thread to do the save. Depending on the number of threads being created though, it could be that having the first polling thread use ThreadPool.QueueUserWorkItem() might be the more efficient route.
For more efficiency, when saving to the database, I would use async I/O on the DB rather than the sync methods.
Anytime you can get away from having to communicate directly between two threads, you should. If you have to throw together some sync primitives, your code won't be as easy to debug, and you could introduce some very subtle race conditions that cause "once in a million executions" type bugs (which are far from fun to find/fix).
If the second thread always needs to be executing, let us know why with some more information and we can come back with a more in-depth answer.
Good luck!
A: I personally would have thread 1 raise events which thread 2 can respond to. The threads can be wired up to the appropriate events by the controlling process which initiates both the threads.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/106979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Predict next auto-inserted row id (SQLite) I'm trying to find if there is a reliable way (using SQLite) to find the ID of the next row to be inserted, before it gets inserted. I need to use the id for another insert statement, but don't have the option of instantly inserting and getting the next row.
Is predicting the next id as simple as getting the last id and adding one? Is that a guarantee?
Edit: A little more reasoning...
I can't insert immediately because the insert may end up being canceled by the user. User will make some changes, SQL statements will be stored, and from there the user can either save (inserting all the rows at once), or cancel (not changing anything). In the case of a program crash, the desired functionality is that nothing gets changed.
A: You can probably get away with adding 1 to the value returned by sqlite3_last_insert_rowid under certain conditions - for example, if you use the same database connection and there are no other concurrent writers. Of course, you may refer to the sqlite source code to back up these assumptions.
However, you might also seriously consider using a different approach that doesn't require predicting the next ID. Even if you get it right for the version of sqlite you're using, things could change in the future and it will certainly make moving to a different database more difficult.
A: Try SELECT * FROM SQLITE_SEQUENCE WHERE name='TABLE';. This will contain a field called seq which is the largest number for the selected table. Add 1 to this value to get the next ID.
Also see the SQLite Autoincrement article, which is where the above info came from.
Cheers!
A: Insert the row with an INVALID flag of some kind, get the ID, edit it as needed, then delete it if necessary or mark it as valid. And don't worry about gaps in the sequence.
BTW, you will need to figure out how to do the invalid part yourself. Marking something as NULL might work depending on the specifics.
Edit: If you can, use Eevee's suggestion of using proper transactions. It's a lot less work.
A: I realize your application using SQLite is small and SQLite has its own semantics. Other solutions posted here may well have the effect that you want in this specific setting, but in my view every single one of them I have read so far is fundamentally incorrect and should be avoided.
In a normal environment holding a transaction for user input should be avoided at all costs. The way to handle this, if you need to store intermediate data, is to write the information to a scratch table for this purpose and then attempt to write all of the information in an atomic transaction. Holding transactions invites deadlocks and concurrency nightmares in a multi-user environment.
In most environments you cannot assume data retrieved via SELECT within a transaction is repeatable. For example
SELECT Balance FROM Bank ...
UPDATE Bank SET Balance = valuefromselect + 1.00 WHERE ...
Between the SELECT and the UPDATE, the value of Balance may well be changed. Sometimes you can get around this by updating the row(s) you're interested in (in Bank) first within a transaction, as this is guaranteed to lock the row, preventing further updates from changing its value until your transaction has completed.
However, sometimes a better way to ensure consistency in this case is to check your assumptions about the contents of the data in the WHERE clause of the update and check row count in the application. In the example above when you "UPDATE Bank" the WHERE clause should provide the expected current value of balance:
WHERE Balance = valuefromselect
If the expected balance no longer matches neither does the WHERE condition -- UPDATE does nothing and rowcount returns 0. This tells you there was a concurrency issue and you need to rerun the operation again when something else isn't trying to change your data at the same time.
A: select max(id) from particular_table is unreliable for the reason below..
http://www.sqlite.org/autoinc.html
"The normal ROWID selection algorithm described above will generate monotonically increasing unique ROWIDs as long as you never use the maximum ROWID value and you never delete the entry in the table with the largest ROWID. If you ever delete rows or if you ever create a row with the maximum possible ROWID, then ROWIDs from previously deleted rows might be reused when creating new rows and newly created ROWIDs might not be in strictly ascending order."
A: Either scrapping or committing a series of database operations all at once is exactly what transactions are for. Query BEGIN; before the user starts fiddling and COMMIT; once he/she's done. You're guaranteed that either all the changes are applied (if you commit) or everything is scrapped (if you query ROLLBACK;, if the program crashes, power goes out, etc). Once you read from the db, you're also guaranteed that the data is good until the end of the transaction, so you can grab MAX(id) or whatever you want without worrying about race conditions.
http://www.sqlite.org/lang_transaction.html
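A rough sketch of what that looks like from C#, assuming the System.Data.SQLite ADO.NET provider (the table and column names are made up):

using System.Data.SQLite;

class Saver
{
    public static void SaveChanges(SQLiteConnection conn)
    {
        using (SQLiteTransaction tx = conn.BeginTransaction())
        using (SQLiteCommand cmd = conn.CreateCommand())
        {
            cmd.Transaction = tx;

            cmd.CommandText = "INSERT INTO parent (name) VALUES ('example')";
            cmd.ExecuteNonQuery();

            // Safe inside the transaction: no other writer can slip in between.
            cmd.CommandText = "SELECT last_insert_rowid()";
            long parentId = (long)cmd.ExecuteScalar();

            cmd.CommandText = "INSERT INTO child (parent_id) VALUES (@id)";
            cmd.Parameters.AddWithValue("@id", parentId);
            cmd.ExecuteNonQuery();

            tx.Commit(); // or let the transaction roll back if the user cancels
        }
    }
}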
A: I think this can't be done because there is no way to be sure that nothing will get inserted between you asking and you inserting. (You might be able to lock the table to inserts, but yuck.)
(BTW I've only used MySQL, but I don't think that will make any difference.)
A: Most likely you should be able to +1 the most recent id. I would look at all (going back a while) of the existing ids in the ordered table. Are they consistent, and is each row's ID one more than the last? If so, you'll probably be fine. I'd leave a comment in the code explaining the assumption, however. Doing a lock will help guarantee that you're not getting additional rows while you do this as well.
A: Select the last_insert_rowid() value.
A: Most of everything that needs to be said in this topic already has... However, be very careful of race conditions when doing this. If two people both open your application/webpage/whatever, and one of them adds a row, the other user will try to insert a row with the same ID and you will have lots of issues.
A: select max(id) from particular_table;
The next id will be +1 from the maximum id.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
}
|
Q: Detect And Remove Rootkit What is the best (hopefully free or cheap) way to detect and then, if necessary, remove a rootkit found on your machine?
A: SysInternals stopped updating RootKit Revealer a couple of years ago.
The only sure way to detect a rootkit is to do an offline compare of installed files and filesystem metadata from a trusted list of known files and their parameters. Obviously, you need to trust the machine you are running the comparison from.
In most situations, using a boot cdrom to run a virus scanner does the trick, for most people.
Otherwise, you can start with a fresh install of whatever, boot it from cdrom, attach an external drive, run a perl script to find and gather parameters (size, md5, sha1), then store the parameters.
To check, run a perl script to find and gather parameters, then compare them to the stored ones.
Also, you'd need a perl script to update your stored parameters after a system update.
--Edit--
Updating this to reflect available techniques. If you get a copy of any bootable rescue cd (such as trinity or rescuecd) with an up-to-date copy of the program "chntpasswd", you'll be able to browse and edit the windows registry offline.
Coupled with a copy of the startup list from castlecops.com, you should be able to track down the most common run points for the most common rootkits. And always keep track of your driver files and what the good versions are too.
With that level of control, your biggest problem will be the mess of spaghetti your registry is left in after you delete the rootkit and trojans. Usually.
-- Edit --
and there are windows tools, too. But I described the tools I'm familiar with, and which are free and better documented.
A: Rootkit revealer from SysInternals
A: Remember that you can never trust a compromised machine. You may think you found all signs of a rootkit, but the attacker may have created backdoors in other places. Non-standard backdoors that tools you use won't detect. As a rule you should reinstall a compromised machine from scratch.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Is there a reference for the SharePoint XSLT extension functions? There are a couple of different .NET XSLT functions that I see used in the out of the box SharePoint web parts (RSS Viewer and Data View web part).
<xsl:stylesheet
xmlns:ddwrt="http://schemas.microsoft.com/WebParts/v2/DataView/runtime"
xmlns:rssaggwrt="http://schemas.microsoft.com/WebParts/v3/rssagg/runtime"
...>
...
<xsl:value-of select="rssaggwrt:MakeSafe($Html)"/>
<a href="{ddwrt:EnsureAllowedProtocol(string(link))}">More...</a>
...
</xsl:stylesheet>
Where can I find a reference that describes all of the extension functions that SharePoint provides?
A: I have been wanting more info on ddwrt as well. The most information I have been able to find is from Serge van den Oever that was later turned into the MSDN article referenced in the previous answer.
http://weblogs.asp.net/soever/archive/2005/01/03/345535.aspx
As he noted in his blog post, this article contains some info that was censored in the MSDN article.
Other than this article, there is very little written on the topic. It unfortunately appears that scouring existing generated code (such as the xsl in DataForm web parts) is the best technique to learn more at present.
A: Here is some documentation I found that describes the ddwrt (http://schemas.microsoft.com/WebParts/v2/DataView/runtime) namespace.
http://msdn.microsoft.com/en-us/library/aa505323.aspx
A: Good question +1
See also
SharePoint Data View Web Part Extension Functions in the ddwrt Namespace
by Serge van den Oever
A: Serge's article points to Microsoft.SharePoint, where you can find the Microsoft.SharePoint.WebPartPages namespace. In there, you can find the DdwRuntime and the BaseDdwRuntime. There, you can find all of the ddwrt functions. I used a decompiler to look this up.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: How to make a simple site render correctly on multiple mobile browsers? We have a rather simple site (minimal JS) with plain html and CSS. It is a simple mobile interface for our main application.
We are running into trouble because we have more than one column and several browsers seem to force single columns.
Through some searching I ran into 2 meta tags.
<meta name="MobileOptimized" content="220" />
<meta name="viewport" content="width=320" />
With these we have a good 'scaled' view for IE Mobile and the iPhone. We have not run into any problems with palm's Blazer. But Blackberry is another matter.
Does the Blackberry have a simple way to control the view of the browser as well? By simple I mean without making a special page for that device.
A: My recommendation would be to create two or three versions of the site:
*
*Full blown site for modern desktop browsers (if it's a very heavy application)
*Site with minimal JS and CSS for good mobile browsers and Desktop browsers (IPhone and SkyFire come to mind)
*Site with no JS, single column and mostly just plain text.
The reason is that coding for 3-4 desktop browsers is hard enough. Don't kill yourself coding for another hundred devices; create a simple page that just puts out the information.
Remember the basic design principle of web development: users don't care. They want information, or functionality. It will look a whole lot better for you to have a simple, clear layout for bad mobile browsers (IE or BlackBerry) than to hack up something that eventually becomes a maintainability nightmare and potentially makes you look bad if somebody uses yet another mobile browser that you haven't written a phone-specific site for yet.
A: I wouldn't bother making a "medium" version for the iPhone etc, iPhone users can just look at your real web page easily enough. Have your full version and a single column version, and you'll reach the largest audience with minimal work.
To answer your question though, there's no good way to make the Blackberry do anything other than 1 column views. You can get it to look fairly professional, as CSS and simple javascript still apply, but you'll have to lose a lot of your horizontal real estate.
A: BlackBerry (from OS 4.6 and higher) supports both the meta-viewport tag as well as the meta-HandheldFriendly tag. See the "Content Design Guidelines" document at http://na.blackberry.com/eng/support/docs/subcategories/?userType=21&category=BlackBerry+Browser for details.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Consistent outdent of first letter with CSS? I'm trying to implement an outdent of the first letter of the first paragraph of the body text. Where I'm stuck is in getting consistent spacing between the first letter and the rest of the paragraph.
For example, there is a huge difference in spacing between a "W" and an "I"
Anyone have any ideas about how to mitigate the differences? I'd prefer a pure CSS solution, but will resort to JavaScript if need be.
PS: I don't necessarily need compatibility in IE or Opera
A: Apply the following to the first letter:
p.outdent:first-letter {
    margin-left: -800px;
    padding-right: 460px;
    float: right;
}
This will position the first letter on the right edge of the paragraph, then shove it left by more or less the width of the paragraph, then move both the letter and all the padding into the float's large negative margin so the paragraph fits in the margin and doesn't try to wrap around.
A: I tried using a fixed-width font like 'Courier New'; since the characters are more or less the same width, it made the difference a lot less noticeable.
Edit - this font is decent but might only work for Windows
p.outdent:first-letter {
font-family: ms mincho;
font-size: 8em;
line-height: 1;
font-weight: normal;
float: left;
margin: -0.1em 0 0 -.55em;
letter-spacing: 0.05em;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Commercial uses for grid computing? I keep hearing from associates about grid computing which, from what I can gather, is highly distributed stuff along the lines of SETI@Home.
Is anyone working on these sort of systems for business use? My interest is in figuring out if there's a commercial reason for starting software development in this field.
A: *
*Rendering Farms such as Pixar
*Model Evaluation e.g. weather, financials, military
*Architectural Engineering e.g. earthquakes.
To list a few.
A: Grid computing is really only needed if you have a lot of WORK that needs to be done, like folding proteins, otherwise a simple server farm will likely be plenty.
A: Obviously Google is a major user of grid computing; its entire search service relies on it, as do many of its other services.
Engines such as BigTable are based on using lots of nodes for storage and computation. These are commercially very useful because they're a good alternative to a small number of big servers, providing better redundancy and cost effective scaling.
The downside is that the software is fiendishly difficult to write, but Google seem to manage that one ok :)
So anything which requires big storage and/or lots of computation.
A: I used to work for these guys. Grid computing is used all over. Anyone who makes computer chips uses them to test designs before getting physical silicon cut. Financial websites use grids to calculate if you qualify for that loan. These days they are starting to replace big iron in a lot of places, as they tend to be cheaper to maintain over the long term.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: An algorithm to generate a game map from individual images I am designing a game to be played in the browser.
Game is a space theme and I need to generate a map of the "Galaxy".
The basic idea of the map is here:
Game map: http://www.oglehq.com/map.png
The map is a grid; each grid sector can contain a planet/system, and each of these has links to a number of adjacent grid sectors.
To generate the maps I figured that I would have a collection of images representing the grid elements. So in the case of the sample above, each of the squares is a separate graphic.
To create a new map I would "weave" the images together.
The map element images would have the planets and their links already on them, so I need to stitch the map together in such a way that each image is positioned with its appropriate counterparts: the image in the bottom corner, for example, must have images to its left and diagonal left that link up with it correctly.
How would you go about creating the code to know where to place the images?
Is there a better way than using images?
At the moment performance and/or load should not be a consideration (if I need to generate maps to have preconfigured rather than do it in real-time, I don't mind).
If it makes a difference I will be using HTML, CSS, and JavaScript and backed by a Ruby on Rails app.
A: There are two very nice browser-based vector / javascript-manipulable graphics packages which, together, are virtually universal: SVG and VML. They generally produce high-quality vector-based images with low bandwidth.
SVG is supported by firefox, opera, safari, and chrome - technically only part of the specification is supported, but for practical purposes you should be able to do what you need. w3schools has a good reference for learning/using svg.
VML is Microsoft's answer to SVG, and (surprise) is natively supported by IE, although SVG is not. Msdn has the best reference for vml.
Although it's more work, you could write two similar/somewhat integrated code bases for these two technologies. The real benefit is that users won't have to install anything to play your game online - it'll just work, for 99.9% of all users.
By the way, you say that you're asking for an algorithm, and I'm offering technologies (if that's the right term for SVG/VML). If you could clarify the input/output specification and perhaps what part presents the challenge (e.g. which naive implementation won't work, and why), that would clarify the question and maybe provide more focused answers.
Addendum: The canvas tag is becoming more widely supported, with the notable exception of IE. This might be a cleaner way to embed graphic elements in html.
Useful canvas stuff: Opera's canvas tutorial | Mozilla's canvas tutorial | canvas-in-IE partial implementation
A: Hmm. If each box can only link to its 8 neighbours, then you only have 2^8 = 256 tile types. Fewer if you limit the number of possible links from any one tile.
You can encode which links are present in an image with an 8 char filename:
11000010.jpeg
Or save some bytes and convert that to decimal or hex
196.jpg
Then the code. There's lots of ways you could choose to represent the map internally. One way is to have an object for each planet. A planet object knows its own position in the grid, and the positions of its linked planets. Hence it has enough information to choose the appropriate file.
Or have a 2D array. To work out which image to show for each array item, look at the 8 neighbouring array items. If you do this, you can avoid coding for boundaries by making the array two bigger in both axes, and having an empty 'border' around the edges. This saves you checking whether a neighbouring array item is off the array.
A: There are two ways to represent your map.
One way is to represent it is a grid of squares, where each square can have a planet/system in it or not. You can then specify that if there is a neighbor one square away in any of the eight directions (NW, N, NE, W, E, SW, S, SE) then there is a connection to that neighbor. Note however in your sample map the center system is not connected to the system north/east of it, so perhaps this is not the representation you want. But it can be used to build the other representation
The second way is to represent each square as having eight bits, defining whether or not there is a connection to a neighbor along each of the same eight directions. Presumably if there is even one connection, then the square has a system inside it, otherwise if there are no connections it is blank.
So in your example 3x3 grid, the data would be:
Tile     Connections (nw n ne w e sw s se)
nw        0 0 0 0 0 0 0 0
n         0 0 0 0 1 0 1 0
ne        0 0 0 1 0 0 0 0
w         0 0 0 0 0 0 0 0
center    0 1 0 0 0 0 1 1
e         0 0 0 0 0 0 0 0
se        0 0 0 0 0 0 0 0
s         0 1 0 0 1 0 0 0
sw        1 0 0 1 0 0 0 0
You could represent these connections as an array of eight boolean values, or much more compactly as an eight bit integer.
It's then easy to use the eight boolean values (or the eight-bit integer) to form the filename of the bitmap to load for that grid square. For example, your center tile using this scheme could be called "Bitmap01000011.png" (just using the boolean values), or alternatively "Bitmap43.png" (using the hexadecimal value of the eight-bit integer representing that binary pattern, for a shorter filename).
Since you have 256 possible combinations, you will need 256 bitmaps.
You could also reduce the data to four booleans/bits per tile, since a "north" connection for instance implies that the tile to the north has a "south" connection, but that makes selecting the bitmaps a bit harder, but you can work it out if you want.
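To make the filename encoding concrete, here is a small sketch (C# for illustration; the helper name and the bit order are my assumptions, not part of the answer above):
// Derive a tile image filename from eight neighbour-connection flags,
// ordered nw, n, ne, w, e, sw, s, se as in the table above.
static string TileFilename(bool[] connections)
{
    int value = 0;
    foreach (bool connected in connections)
    {
        value = (value << 1) | (connected ? 1 : 0); // build the 8-bit pattern
    }
    // The center tile's pattern 01000011 becomes 0x43, i.e. "Bitmap43.png".
    return "Bitmap" + value.ToString("X2") + ".png";
}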
Alternatively you could layer between zero (empty) and nine (fully connected + system circle) bitmaps together in each square. You would just need to use transparent .png's so that you could combine them together. The downside is that the browser might be slow to draw each square (especially the fully connected ones). The advantage would be less data for you to create, and less data to load from your website.
You would represent the map itself as a table, and add your bitmaps as image links to each cell as needed.
The pseudo-code to draw the map would be:
draw_map(connection_map):
    for each grid_square in connection_map:
        connection_data = connection_map[grid_square]
        filenames = bitmap_filenames_from(connection_data)
        insert_image_references_into_table(grid_square, filenames)

# For each square having one of 256 bitmaps:
bitmap_filenames_from(connection_data):
    filename = "Bitmap"
    for each bit in connection_data:
        filename += bit ? "1" : "0"
    return [filename]

# For each square having zero through nine bitmaps:
bitmap_filenames_from(connection_data):
    # Special case - square is empty
    if 1 not in connection_data:
        return []
    filenames = []
    for i in 0..7:
        if connection_data[i]:
            filenames.append("Bitmap" + i)
    filenames.append("BitmapSystem")
    return filenames
A: I would recommend using a graphics library to draw the map. If you do you won't have the above problem and you will end up with much cleaner/simpler code. Some options are SVG, Canvas, and flash/flex.
A: Personally I would just render the links in game, and have the cell graphics only provide a background. This gives you more flexibility, allows you to more easily increase the number of ways cells can link to each other, and generally more scalable.
Otherwise you will need to account for every possible way a cell might be linked, and this is rather a lot even if you take into account rotational and mirror symmetries.
A: Oh, and you could also just have a small number of tile png files with transparency on them, and overlap these using css-positioned div's to form a picture similar to your example, if that suffices.
Last time I checked, older versions of IE did not have great support for transparency in image files, though. Can anyone edit this to provide better info on transparency support?
A: As long as links have a maximum length that's not too long, then you don't have too many different possible images for each cell. You need to come up with an ordering on the kinds of image cells. For example, an integer where each bit indicates the presense or absence of an image component.
Bit 0 : Has planet
Bit 1 : Has line from planet going north
Bit 2 : Has line from planet going northwest
...
Bit 8 : Has line from planet going northeast
Ok, now create 512 images. Many languages have libraries that let you edit and write images to disk. If you like Ruby, try this: http://raa.ruby-lang.org/project/ruby-gd
I don't know how you plan to store your data structure describing the graph of planets and links. An adjacency matrix might make it easy to generate the map, although it's not the smallest representation by far. Then it's pretty straightforward to spit out html like (for a 2x2 grid):
<table border="0" cellspacing="0" cellpadding="0">
<tr>
<td><img src="cell_X.gif"></td>
<td><img src="cell_X.gif"></td>
</tr>
<tr>
<td><img src="cell_X.gif"></td>
<td><img src="cell_X.gif"></td>
</tr>
</table>
Of course, replace each X with the appropriate number corresponding to the combination of bits describing the appearance of the cell. If you're using an adjacency matrix, putting the bits together is pretty simple--just look at the cells around the "current" cell.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Does WCF play well with Java? Which of the WCF Service Protocols work well with Java?
Do the TCP Service Bindings work with java remoting (either Corba, EJB, JMS, etc.)?
What about the WebServices exposed as Service EndPoints. Have these been tested against the common Java WebServices stack for interoperability?
A: You will need to use one of the HTTP bindings. The TCP binding requires WCF to be on both sides.
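For example, here is a minimal self-hosted endpoint over BasicHttpBinding, a sketch only: the service, contract, and address are made-up names, but BasicHttpBinding speaks plain SOAP 1.1 over HTTP, which Java stacks such as Axis can consume without WCF on the other side.
using System;
using System.ServiceModel;

[ServiceContract]
public interface IGreeter
{
    [OperationContract]
    string Greet(string name);
}

public class Greeter : IGreeter
{
    public string Greet(string name) { return "Hello, " + name; }
}

class Host
{
    static void Main()
    {
        // Expose the contract over basic HTTP so a Java client generated
        // from the WSDL can call it.
        var host = new ServiceHost(typeof(Greeter), new Uri("http://localhost:8000/greeter"));
        host.AddServiceEndpoint(typeof(IGreeter), new BasicHttpBinding(), "");
        host.Open();
        Console.WriteLine("Listening... press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}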
A: I have had some bad experiences when dealing with a Java based web service using the WS-Security specs. In that case there was very little, and mostly conflicting, documentation and no tech support at all from the vendor. It took us quite a bit of time to get it working but using a WS-Security sample as the basis we got everything working in the end.
The main problem was that working against a poorly documented black-box system with security enabled makes it hard to figure out where you are going wrong, with or without WCF.
A: WCF has been tested with Sun's Java web services stack and Apache's Axis for interoperability.
So, I'd say it's pretty good.
Can you elaborate on "or does TCP work as well"?
Thank you.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Does WCF support WS-Eventing? I know WCF supports many WS-* protocols, but WS-Eventing does not seem to be listed.
I do know that WCF has a pub/sub model, but is it WS-Eventing compliant?
A: I seem to remember reading about this on CodeProject a while ago.
Sorry I can't help more, but this is the article by Roman Kiss.
A: At least with WCF 4 you can simply create a WSDL client by importing the WS-Eventing WSDL (with a SOAP binding). It requires a duplex binding, so either HTTP duplex or plain TCP should work. The problem is adding the correct callback. For us, this did the trick:
Subscribe s = new Subscribe();
(s.Delivery = new DeliveryType()).Mode = "http://schemas.xmlsoap.org/ws/2004/08/eventing/DeliveryModes/Push";
XmlDocument doc = new XmlDocument();
using (XmlWriter writer = doc.CreateNavigator().AppendChild())
{
    EndpointReferenceType notifyTo = new EndpointReferenceType();
    (notifyTo.Address = new AttributedURI()).Value = callbackEndpoint.Uri.AbsoluteUri;
    XmlRootAttribute notifyToElem = new XmlRootAttribute("NotifyTo");
    notifyToElem.Namespace = "http://schemas.xmlsoap.org/ws/2004/08/eventing";
    XmlDocument doc2 = new XmlDocument();
    using (XmlWriter writer2 = doc2.CreateNavigator().AppendChild())
    {
        // Write the callback endpoint's address headers as reference parameters
        foreach (AddressHeader h in callbackEndpoint.Headers)
        {
            h.WriteAddressHeader(writer2);
        }
        writer2.Close();
        notifyTo.ReferenceParameters = new ReferenceParametersType();
        notifyTo.ReferenceParameters.Any = doc2.ChildNodes.Cast<XmlElement>().ToArray<XmlElement>();
    }
    new XmlSerializer(notifyTo.GetType(), notifyToElem).Serialize(writer, notifyTo);
}
(s.Delivery.Any = new XmlElement[1])[0] = doc.DocumentElement;
(s.Filter = new FilterType()).Dialect = "http://schemas.xmlsoap.org/ws/2006/02/devprof/Action";
(s.Filter.Any = new System.Xml.XmlNode[1])[0] = new System.Xml.XmlDocument().CreateTextNode("http://www.teco.edu/SensorValues/SensorValuesEventOut");
SubscribeResponse subscription;
try
{
    Console.WriteLine("Subscribing to the event...");
    //Console.ReadLine();
    subscription = eventSource.SubscribeOp(s);
}
catch (Exception ex)
{
    // The original snippet ended inside the try; handle subscription failures here as appropriate.
    Console.WriteLine(ex);
}
A: There is no native pub/sub model in WCF 3.0; however, there are a few options:
*
*The Roman Kiss article Ash found.
*There are a lot of other patterns you could implement (covered in MSDN Magazine).
*Juval Lowy has two framework implementations you can download on his site at IDesign.
*Lastly, what I am currently using to mimic this with little overhead is MSMQ.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Why is there no generic synchronized queue in .NET? I noticed that you can call Queue.Synchronize to get a thread-safe queue object, but the same method isn't available on Queue<T>. Does anyone know why? Seems kind of weird.
A: You might find the Parallel CTP worth checking out; here's a blog entry from the guys who are putting it together that's pretty topical:
Enumerating Concurrent Collections
It's not quite the same thing, but it might solve your bigger problem. (They even use Queue<T> versus ConcurrentQueue<T> as their example.)
A: There is one now, in .Net 4.0:
ConcurrentQueue<T>
in System.Collections.Concurrent
http://msdn.microsoft.com/en-us/library/dd267265.aspx
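A minimal usage sketch (the queue contents are illustrative); TryDequeue replaces the racy Count-then-Dequeue pattern discussed in the next answer:
using System;
using System.Collections.Concurrent;

class Demo
{
    static void Main()
    {
        var queue = new ConcurrentQueue<string>();
        queue.Enqueue("job1");

        string job;
        // Atomic check-and-remove: no lock needed, and no race between
        // testing for items and taking one.
        if (queue.TryDequeue(out job))
        {
            Console.WriteLine(job);
        }
    }
}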
A: Update - in .NET 4, there now is ConcurrentQueue<T> in System.Collections.Concurrent, as documented here: http://msdn.microsoft.com/en-us/library/dd267265.aspx. It's interesting to note that its IsSynchronized property (rightly) returns false.
ConcurrentQueue<T> is a complete ground-up rewrite, creating copies of the queue to enumerate, and using lock-free techniques like Interlocked.CompareExchange() and Thread.SpinWait().
The rest of this answer is still relevant insofar as it relates to the demise of the old Synchronize() and SyncRoot members, and why they didn't work very well from an API perspective.
As per Zooba's comment, the BCL team decided that too many developers were misunderstanding the purpose of Synchronized (and, to a lesser extent, SyncRoot).
Brian Grunkemeyer described this on the BCL team blog a couple of years back:
http://blogs.msdn.com/bclteam/archive/2005/03/15/396399.aspx
The key issue is getting the correct granularity around locks, where some developers would naively use multiple properties or methods on a "synchronized" collection and believe their code to be thread safe. Brian uses Queue as his example,
if (queue.Count > 0) {
    object obj = null;
    try {
        obj = queue.Dequeue();
    }
    catch (InvalidOperationException) { } // another thread may have emptied the queue first
}
Developers wouldn't realize that Count could be changed by another thread before Dequeue was invoked.
Forcing developers to use an explicit lock statement around the whole operation means preventing this false sense of security.
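A hedged sketch of what that explicit lock looks like (the lock object, queue field, and method name are placeholders, not from Brian's post):
private static readonly object queueLock = new object();
private static readonly Queue<object> queue = new Queue<object>(); // System.Collections.Generic

static object TryTake()
{
    lock (queueLock)
    {
        // The Count check and the Dequeue now happen atomically with
        // respect to every other thread that also locks queueLock.
        if (queue.Count > 0)
        {
            return queue.Dequeue();
        }
        return null;
    }
}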
As Brian mentions, the removal of SyncRoot was partly because it had mainly been introduced to support Synchronized, but also because in many cases there is a better choice of lock object - most of the time, either the Queue instance itself, or a
private static object lockObjForQueueOperations = new object();
on the class owning the instance of the Queue...
This latter approach is usually safest as it avoids some other common traps:
*
*Never lock(this)
*Don't lock(string) or lock(typeof(A))
As they say, threading is hard, and making it seem easy can be dangerous.
A: (I assume you mean Queue<T> for the second one.)
I can't specifically answer the question, except that the IsSynchronized and SyncRoot properties (but not Synchronized() explicitly) are inherited from the ICollection interface. None of the generic collections use this, and the ICollection<T> interface does not include SyncRoot.
As to why it is not included, I can only speculate that they weren't being used either in the way intended by the library designers or they simply weren't being used enough to justify retaining them in the newer collections.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35"
}
|
Q: How to install Delphi 7 on Vista I tried to install Delphi 7 on Vista several times and Vista prevented me from doing so by telling me that there are known problems with this application (Delphi 7). Several other people in my company experienced problems with installing D7 on Vista.
This led to the conclusion that we were at risk with our D7 application, as the company could, within the lifetime of the app, switch to Vista or Windows 7, and newer Delphi versions are not in the policy of the company. Therefore management decided on rewriting the app in C#.
My question(s):
*
*How to install D7 on Vista
*Experience with such an installation
*Risk assessment concerning stability of IDE and developed programs
*Risk assessment concerning executability under Windows 7
We are not using any third-party components or a database, so there should be no problem running the developed app under Vista. Not being able to develop and debug under Vista (which at that point will be the only customer platform; yes, this is internal programming) will result in a sort of cross-platform development, if we are allowed to keep XP as the development platform.
It is not a developer's decision to rewrite; it has been done in the company for the last 3 years: if you had to significantly touch an app developed in Delphi, or if there was a certain risk of it not surviving its planned life cycle/life span, it had to be rewritten. The life cycle was just extended to 2015 due to the canceling of another project.
So the main issue here would really be: I would like to have educated arguments about the risks.
A: Running Delphi 7 under Vista is no problem if you can turn UAC off. With UAC on, you get an error message when starting D7, but it still works, just click ok and go on.
Programs compiled with D7 have no problem with Vista. But new features of Vista are supported by Delphi 2007/2009 only.
We use D7 on XP and on Vista, building and maintaining a commercial app which has gone from D2 to D4 and D5 to D7. Besides problems with the BDE, which made us switch to DBX (Corelabs), there are no problems.
A: Just follow these instructions and you'll be fine. No reason to turn off UAC! I've been running Delphi 7 on Vista for about a year without any problem at all. Debugging is totally fine too.
http://www.drbob42.com/examines/examin84.htm
A: For installing Delphi 7 in Vista, you can try this patch from Microsoft.
http://support.microsoft.com/default.aspx/kb/932246
As for the rest someone else I suspect will have more knowledge.
A: I have Delphi 7 working fine on my Vista development box. Yes, there were a few issues during installation, but no more than with other applications, and these issues have been resolved in subsequent versions of Delphi.
None of this should cause problems with apps developed by D7 for Vista. We use Delphi as our primary development tool for all our applications and they work just fine with Vista.
It sounds like this is an excuse by someone in the company to get rid of Delphi and move to C#. Typical FUD tactics. There may be genuine reasons for your company to move away from Delphi, but Vista compatibility should not be one of them.
A: Also, if you'd like all the Vista-ready features in your Delphi 7 application, have a look at this article here: Creating Windows Vista Ready Applications with Delphi
This will make it so that your application correctly appears when doing Flip3D, or when showing a preview thumbnail when hovering over the app in the taskbar. Essentially, this will give you the "Vista-readiness" of Delphi 2007, from within older versions of Delphi (I have used this with Delphi 2006 and it works very well).
You also get the new Vista task dialogs and new Common dialogs with the modifications listed on the linked website.
A: I think there's a big jump from having trouble installing D7 in Vista (D7 which after all contains low-level bits and pieces for the debugger and which doesn't know about the 'correct' place to put things under Vista), to assuming that your own app will have problems with Vista...
You have the source code, you can test your program running under Vista, you can make whatever (usually minor) tweaks are necessary to your code.... I'm really surprised that you would decide to rewrite the app in another language just because you can't get the (old) development tool to install under Vista.
We need to know more about what your application does, and what components you make use of, to be able to make any guess at your 3rd and 4th questions. They're too general.
For instance, I have several D7 applications on the market, one of which uses open-source Interbase 6 with Delphi and can be a problem to get installed/working on Vista Home (the process seems less painful on Vista Business). Another of our apps uses SQL Express 2005 and runs quite happily on Vista. Our newest app, written in D2007, runs fine on Vista. On both Delphi platforms, our two main third-party tools are DevExpress controls and ReportBuilder.
A: I have been using D4 with Vista for a year, as one of our key products uses it. It's still a good version, and there are workarounds to make it use new Vista features. You can call any Win32 API (new functions), so there is no point updating to D7.
I installed/moved D4 to my new machine by hand:
1. exporting the registry hive(s)
2. registering a few components
3. copying the files
That's it; no need to run the slow installer.
A: As others have noted, there is no problem running Delphi 7 applications under Vista: We do this with a multi-hundred-thousand line Delphi 7 application that uses numerous third-party controls (Developer Express grids, TSILang translation components, etc.).
We use Vista as our primary operating system, but we run the Delphi 7 development environment in a Windows XP Virtual machine. It works perfectly, and there are no installation issues.
A: It is very simple really. All you have to do is the following:
Switch UAC (User Account Control) off and then install Delphi 7, but you must have no other version of Delphi on your computer.
A: 1, 2 (installing on Vista): no problems heard of if you install http://support.microsoft.com/kb/947562 and configure UAC;
3) No stability issues are known to any of my friends here...
4) Not using Windows 7 with Delphi 7... but I have heard of many problems with both...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107123",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Can you use Reflector to get the source code of an app and then debug using that source code? It seems like you could use a mashup of Reflector and a debugger to be able to debug any .NET app WITHOUT having the source code at all. Is this possible? Has anyone seen this before?
A: The Deblector plug-in for Reflector allows you to debug straight from Reflector.
A: I tried this a long time ago without success. Reflector has improved a lot since then, so I imagine it may be possible today.
It's actually kind of scary if you think about it. Someone could decompile your app and have the full code, then modify it and distribute their own version of it. All without being open source. But then again that's why "they" created obfuscators.
A: I can't find the link, but somebody did use the Reflector source output to compile a debug version of the 1.1 framework they could step through. I tried with the 2.0 framework and found too many errors to make it worth my while.
If you would like to try this, start with a plugin like FileDisassembler. In my brief experience with this I found that there were some errors to fix, but not too bad.
With a small-medium size library this method should be very doable.
A: Reflector Pro allows you to do exactly this!
A: No, you need the symbol file (.PDB) file that belongs to the application you are trying to debug.
Reflector allows you to go from IL to readable .NET code, but it maintains meaning only, not the exact code as written by the developer. So even if you had the PDB and the source from Reflector, it wouldn't match up for debugging.
I suppose you could use the source output from Reflector to create a .NET project and generate your own version of the assembly you want to debug. That is usually a real pain, though, and in the case of the .NET framework Microsoft publishes the debugging information for use by anyone who is interested.
I remember at one point there was a plugin for debugging in Reflector but I could never get it to work.
Configuring Visual Studio to Debug .NET Framework Source Code
MSDN: PDB Files
A: I've seen it and done it before. I used it to show my boss that our app wasn't as protected as he thought it was. Took a DLL, got the source code, and bam -- he practically had a heart attack.
There are scenarios where .Net Reflector breaks down, but it is hard to do so -- I know because I've actively tried. The good obfuscators will make the code so unmanageable/unreadable (like overloading the "a" function to do a ton of different things based on the parameters) that seeing the source does you no good, but you can still debug -- good luck figuring out what is going on.
A: It's possible but not really practical in larger more complex applications, especially when lots of the more recent structures like lambdas and initializers have been used (you get a whole bunch of variable names containing dollar signs like CS$4$0000 that have to be fixed manually). Even simple switch statements can cause some very ugly spaghetti code full of goto statements in Reflector.
I've had a lot more luck decompiling to MSIL and recompiling in debug mode. You can then put break points in the IL files and use all the usual debugger features in VS. MSIL looks a little scary at first, but you get the hang of it pretty quickly.
This excellent article explains how to do it:
http://www.codeproject.com/KB/dotnet/Debug_Framework_Classes.aspx
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: What columns generally make good indexes? As a follow up to "What are indexes and how can I use them to optimise queries in my database?" where I am attempting to learn about indexes, what columns are good index candidates? Specifically for an MS SQL database?
After some googling, everything I have read suggests that columns that are generally increasing and unique make a good index (things like MySQL's auto_increment), I understand this, but I am using MS SQL and I am using GUIDs for primary keys, so it seems that indexes would not benefit GUID columns...
A: Any column that is going to be regularly used to extract data from the table should be indexed.
This includes:
foreign keys -
select * from tblOrder where status_id=:v_outstanding
descriptive fields -
select * from tblCust where Surname like "O'Brian%"
The columns do not need to be unique. In fact you can get really good performance from a binary index when searching for exceptions.
select * from tblOrder where paidYN='N'
A: In general (I don't use mssql so can't comment specifically), primary keys make good indexes. They are unique and must have a value specified. (Also, primary keys make such good indexes that they normally have an index created automatically.)
An index is effectively a copy of the column which has been sorted to allow binary search (which is much faster than linear search). Database systems may use various tricks to speed up search even more, particularly if the data is more complex than a simple number.
My suggestion would be to not use any indexes initially and profile your queries. If a particular query (such as searching for people by surname, for example) is run very often, try creating an index over the relevant attributes and profile again. If there is a noticeable speed-up on queries and a negligible slow-down on insertions and updates, keep the index.
(Apologies if I'm repeating stuff mentioned in your other question, I hadn't come across it previously.)
A: It really depends on your queries. For example, if you almost only write to a table then it is best not to have any indexes, they just slow down the writes and never get used. Any column you are using to join with another table is a good candidate for an index.
Also, read about the Missing Indexes feature. It monitors the actual queries being used against your database and can tell you what indexes would have improved the performance.
A: Some folks answered a similar question here: How do you know what a good index is?
Basically, it really depends on how you will be querying your data. You want an index that quickly identifies a small subset of your dataset that is relevant to a query. If you never query by datestamp, you don't need an index on it, even if it's mostly unique. If all you do is get events that happened in a certain date range, you definitely want one. In most cases, an index on gender is pointless -- but if all you do is get stats about all males, and separately, about all females, it might be worth your while to create one. Figure out what your query patterns will be, and access to which parameter narrows the search space the most, and that's your best index.
Also consider the kind of index you make -- B-trees are good for most things and allow range queries, but hash indexes get you straight to the point (but don't allow ranges). Other types of indexes have other pros and cons.
Good luck!
A: A GUID column is not the best candidate for indexing. Indexes are best suited to columns with a data type that can be given some meaningful order, ie sorted (integer, date etc).
It does not matter if the data in a column is generally increasing. If you create an index on the column, the index will create its own data structure that will simply reference the actual items in your table without concern for stored order (a non-clustered index). Then, for example, a binary search can be performed over your index data structure to provide fast retrieval.
It is also possible to create a "clustered index" that will physically reorder your data. However you can only have one of these per table, whereas you can have multiple non-clustered indexes.
A: Your primary key should always be an index. (I'd be surprised if it weren't automatically indexed by MS SQL, in fact.) You should also index columns you SELECT or ORDER by frequently; their purpose is both quick lookup of a single value and faster sorting.
The only real danger in indexing too many columns is slowing down changes to rows in large tables, as the indexes all need updating too. If you're really not sure what to index, just time your slowest queries, look at what columns are being used most often, and index them. Then see how much faster they are.
A: Numeric data types which are ordered in ascending or descending order are good indexes for multiple reasons. First, numbers are generally faster to evaluate than strings (varchar, char, nvarchar, etc). Second, if your values aren't ordered, rows and/or pages may need to be shuffled about to update your index. That's additional overhead.
If you're using SQL Server 2005 and set on using uniqueidentifiers (guids), and do NOT need them to be of a random nature, check out the sequential uniqueidentifier type.
Lastly, if you're talking about clustered indexes, you're talking about the sort of the physical data. If you have a string as your clustered index, that could get ugly.
A: Indexes can play an important role in query optimization and in speedily retrieving results from tables. The most important step is to select which columns are to be indexed. There are two major places where we can consider indexing: columns referenced in the WHERE clause and columns used in JOIN clauses. In short, you should index the columns against which you search for particular records. Suppose we have a table named buyers where the SELECT query uses indexes like below:
SELECT
buyer_id /* no need to index */
FROM buyers
WHERE first_name='Tariq' /* consider indexing */
AND last_name='Iqbal' /* consider indexing */
Since "buyer_id" is referenced in the SELECT portion, MySQL will not use it to limit the chosen rows. Hence, there is no great need to index it. The below is another example little different from the above one:
SELECT
buyers.buyer_id, /* no need to index */
country.name /* no need to index */
FROM buyers LEFT JOIN country
ON buyers.country_id=country.country_id /* consider indexing */
WHERE
first_name='Tariq' /* consider indexing */
AND
last_name='Iqbal' /* consider indexing */
According to the above queries first_name, last_name columns can be indexed as they are located in the WHERE clause. Also an additional field, country_id from country table, can be considered for indexing because it is in a JOIN clause. So indexing can be considered on every field in the WHERE clause or a JOIN clause.
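In T-SQL (the question is about MS SQL), the matching index for the buyers example might look like the following sketch; the index name is made up, and the table and column names follow the example above:
-- Composite nonclustered index covering the WHERE clause columns.
-- It also serves queries that filter on first_name alone (leftmost prefix).
CREATE NONCLUSTERED INDEX IX_buyers_first_last
ON buyers (first_name, last_name);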
The following list also offers a few tips that you should always keep in mind when intend to create indexes into your tables:
*
*Only index those columns that are required in WHERE and ORDER BY clauses. Indexing columns in abundance will result in some disadvantages.
*Try to take benefit of "index prefix" or "multi-columns index" feature of MySQL. If you create an index such as INDEX(first_name, last_name), don’t create INDEX(first_name). However, "index prefix" or "multi-columns index" is not recommended in all search cases.
*Use the NOT NULL attribute for those columns in which you consider the indexing, so that NULL values will never be stored.
*Use the --log-long-format option to log queries that aren’t using indexes. In this way, you can examine this log file and adjust your queries accordingly.
*The EXPLAIN statement helps you to reveal that how MySQL will execute a query. It shows how and in what order tables are joined. This can be much useful for determining how to write optimized queries, and whether the columns are needed to be indexed.
Update (23 Feb '15):
Any index (good or bad) increases insert and update time.
Depending on your indexes (their number and type), results are searched. If your search time is going to increase because of an index, then that's a bad index.
It is much like the index page of a book: it can list where chapters start, where topics start, and even where sub-topics start. Some detail in an index page helps, but an overly detailed index might confuse or scare you. Indexes also consume memory.
Index selection should be done wisely. Keep in mind that not all columns require an index.
A: It all depends on what queries you expect to ask about the tables. If you ask for all rows with a certain value for column X, you will have to do a full table scan if an index can't be used.
Indexes will be useful if:
*
*The column or columns have a high degree of uniqueness
*You frequently need to look for a certain value or range of values for the column.
They will not be useful if:
*
*You are selecting a large % (>10-20%) of the rows in the table
*The additional space usage is an issue
*You want to maximize insert performance. Every index on a table reduces insert and update performance because they must be updated each time the data changes.
Primary key columns are typically great for indexing because they are unique and are often used to lookup rows.
A: The ol' rule of thumb was columns that are used a lot in WHERE, ORDER BY, and GROUP BY clauses, or any that seemed to be used in joins frequently. Keep in mind I'm referring to indexes, NOT Primary Key
Not to give a 'vanilla-ish' answer, but it truly depends on how you are accessing the data
A: It should be even faster if you are using a GUID.
Suppose you have the records
*
*100
*200
*3000
*....
If you have an index (binary search), you can find the physical location of the record you are looking for in O(log n) time, instead of searching sequentially in O(n) time. This is because you don't know in advance which records are in your table.
A: The best index depends on the contents of the table and what you are trying to accomplish.
For example, take a member database with the member's Social Security Number as the primary key. We choose the SSN because the application primarily refers to the individual in this way, but you also want to create a search function that will utilize the member's first and last name. I would then suggest creating an index over those two fields.
You should first find out what data you will be querying and then make the determination of which data you need indexed.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "123"
}
|
Q: ASP.NET TreeView and Selecting the Selected Node How do I capture the event of the clicking the Selected Node of a TreeView?
It doesn't fire the SelectedNodeChanged since the selection has obviously not changed but then what event can I catch so I know that the Selected Node was clicked?
UPDATE:
When I have some time, I'm going to have to dive into the bowels of the TreeView control and dig out what and where it handles the click events and subclass the TreeView to expose a new event OnSelectedNodeClicked.
I'll probably do this over the Christmas holidays and I'll report back with the results.
UPDATE:
I have come up with a solution below that sub-classes the TreeView control.
A: Easiest way - if it doesn't interfere with the rest of your code - is to simply set the node as not selected in the SelectedNodeChanged method.
protected void TreeView1_SelectedNodeChanged(object sender, EventArgs e){
// Do whatever you're doing
TreeView1.SelectedNode.Selected = false;
}
A: After a somewhat lengthy period, I have finally had some time to look into how to subclass the TreeView to handle a Selected Node being clicked.
Here is my solution which exposes a new event SelectedNodeClicked which you can handle from the Page or wherever.
(If needed it is a simple task to refactor into C#)
Imports System.Web.UI
Imports System.Web

Public Class MyTreeView
    Inherits System.Web.UI.WebControls.TreeView

    Public Event SelectedNodeClicked As EventHandler
    Private Shared ReadOnly SelectedNodeClickEvent As Object
    Private Const CurrentValuePathState As String = "CurrentValuePath"

    Protected Property CurrentValuePath() As String
        Get
            Return Me.ViewState(CurrentValuePathState)
        End Get
        Set(ByVal value As String)
            Me.ViewState(CurrentValuePathState) = value
        End Set
    End Property

    Friend Sub RaiseSelectedNodeClicked()
        Me.OnSelectedNodeClicked(EventArgs.Empty)
    End Sub

    Protected Overridable Sub OnSelectedNodeClicked(ByVal e As EventArgs)
        RaiseEvent SelectedNodeClicked(Me, e)
    End Sub

    Protected Overrides Sub OnSelectedNodeChanged(ByVal e As System.EventArgs)
        MyBase.OnSelectedNodeChanged(e)
        ' Whenever the selected node changes, remember its ValuePath for future reference
        Me.CurrentValuePath = Me.SelectedNode.ValuePath
    End Sub

    Protected Overrides Sub RaisePostBackEvent(ByVal eventArgument As String)
        ' Check if the node that caused the event is the same as the previously selected node
        If Me.SelectedNode IsNot Nothing AndAlso Me.SelectedNode.ValuePath.Equals(Me.CurrentValuePath) Then
            Me.RaiseSelectedNodeClicked()
        End If
        MyBase.RaisePostBackEvent(eventArgument)
    End Sub
End Class
A: Store what is selected and use code in the Page_Load event handler to compare what is selected to what you have stored. Page_Load is called for every post back even if the selected value doesn't change, unlike SelectedNodeChanged.
Example
(screenshot: http://smithmier.com/TreeViewExample.png)
HTML
<form id="form1" runat="server">
<div>
<asp:TreeView ID="TreeView1" runat="server" OnSelectedNodeChanged="TreeView1_SelectedNodeChanged"
ShowLines="True">
<Nodes>
<asp:TreeNode Text="Root" Value="Root">
<asp:TreeNode Text="RootSub1" Value="RootSub1"></asp:TreeNode>
<asp:TreeNode Text="RootSub2" Value="RootSub2"></asp:TreeNode>
</asp:TreeNode>
<asp:TreeNode Text="Root2" Value="Root2">
<asp:TreeNode Text="Root2Sub1" Value="Root2Sub1">
<asp:TreeNode Text="Root2Sub1Sub1" Value="Root2Sub1Sub1"></asp:TreeNode>
</asp:TreeNode>
<asp:TreeNode Text="Root2Sub2" Value="Root2Sub2"></asp:TreeNode>
</asp:TreeNode>
</Nodes>
</asp:TreeView>
<asp:Label ID="Label1" runat="server" Text="Selected"></asp:Label>
<asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
<asp:Label ID="Label2" runat="server" Text="Label"></asp:Label></div>
</form>
C#
protected void Page_Load(object sender, EventArgs e)
{
if(TreeView1.SelectedNode!=null && this.TextBox1.Text == TreeView1.SelectedNode.Value.ToString())
{
Label2.Text = (int.Parse(Label2.Text) + 1).ToString();
}
else
{
Label2.Text = "0";
}
}
protected void TreeView1_SelectedNodeChanged(object sender, EventArgs e)
{
this.TextBox1.Text = TreeView1.SelectedNode.Value.ToString();
}
A: When you're adding nodes to the tree in the _TreeNodePopulate() event, set the .SelectAction property on the node.
TreeNode newCNode;
newCNode = new TreeNode("New Node");
newCNode.SelectAction = TreeNodeSelectAction.Select;
//now you can set the .NavigateUrl property to call the same page with some query string parameter to catch in the page_load()
newCNode.NavigateUrl = "~/ThisPage.aspx?args=" + someNodeAction;
RootNode.ChildNodes.Add(newCNode);
A: protected void Page_Load(object sender, EventArgs e)
{
if (!IsPostBack)
{
TreeView1.SelectedNode.Selected = false;
}
}
works for me
A: c#:
TreeNode node = TreeTypes.FindNode(obj.CustomerTypeId.ToString());
TreeTypes.Nodes[TreeTypes.Nodes.IndexOf(node)].Select();
A: You can always use the MouseDown or MouseUp event and check to see if it the selected node.
A: I use the ShowCheckBox property and the Checked property to "highlight" the selected item.
When the SelectedNodeChanged event raises:
*
*I set the ShowCheckBox and Checked properties to false for the previously selected node, and to true for the newly selected one.
*I use the selected node for any action
*Finally, I unselect the selected item:
myTreeView.SelectedNode.Selected = false
A: I had a similar problem, but I solved it!
In the server-side code:
protected void MainTreeView_SelectedNodeChanged(object sender, EventArgs e)
{
ClearTreeView();
MainTreeView.SelectedNode.Text = "<span class='SelectedTreeNodeStyle'>" + MainTreeView.SelectedNode.Text + "</span>";
MainTreeView.SelectedNode.Selected = false;
}
public void ClearTreeView()
{
for (int i = 0; i < MainTreeView.Nodes.Count; i++)
{
for(int j=0;j< MainTreeView.Nodes[i].ChildNodes.Count;j++)
{
ClearNodeText(MainTreeView.Nodes[i].ChildNodes[j]);
}
ClearNodeText(MainTreeView.Nodes[i]);
}
}
public void ClearNodeText(TreeNode tn)
{
tn.Text = tn.Text.Replace("<span class='SelectedTreeNodeStyle'>", "").Replace("</span>", "");
}
In the client-side code:
<style type="text/css">
.SelectedTreeNodeStyle { font-weight: bold;}
</style>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: Can a console app stay alive until it has finished its work? I've just read up on Thread.IsBackground, and if I understand it correctly, when it's set to false the thread is a foreground thread, which means it should stay alive until it has finished working even though the app has exited. Now, I tested this with a WinForms app and it works as expected, but when used with a console app the process doesn't stay alive but exits right away. Does Thread.IsBackground behave differently in a console app than in a WinForms app?
A: The Thread.IsBackground property only marks if the thread should block the process from exiting. It doesn't perform any magic to keep the thread alive until some sort of explicit exit.
To quote the Thread.IsBackground Property MSDN (emphasis mine):
A thread is either a background thread or a foreground thread. Background threads are identical to foreground threads, except that background threads do not prevent a process from terminating. Once all foreground threads belonging to a process have terminated, the common language runtime ends the process. Any remaining background threads are stopped and do not complete.
In order to keep your console app alive you'll need to have some sort of loop which will spin until you ask it to stop via a flag or similar. Windows Forms applications have this built in because of their message pump (I believe).
A: I believe with the WinForms-based app you get a separate thread to handle the messaging, so if the "main" thread exits, you still have a thread going to keep the process alive. With a console app, once Main exits, unless you started a foreground thread, the process also terminates.
A: IMHO you should really be a lot more explicit about the expected semantics of your application and deliberately call <thread>.Join().
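A minimal sketch of that advice (the work inside the thread is a stand-in): start the worker explicitly and Join it before Main returns, so the process lifetime doesn't hinge on foreground/background subtleties.
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        Thread worker = new Thread(delegate()
        {
            Thread.Sleep(2000); // simulate work
            Console.WriteLine("Worker finished.");
        });
        worker.IsBackground = false; // foreground thread: would block process exit anyway
        worker.Start();
        worker.Join(); // explicit wait, as suggested above
    }
}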
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How does one add a custom build step to an automake-based project in KDevelop? I recently started work on a personal coding project using C++ and KDevelop. Although I started out by just hacking around, I figure it'll be better in the long run if I set up a proper unit test suite before going too much further. I've created a separate test-runner executable as a sub-project, and the tests I've added to it appear to function properly. So far, success.
However, I'd really like to get my unit tests running every time I build, not only when I explicitly run them. This will be especially true as I split up the mess I've made into convenience libraries, each of which will probably have its own test executable. Rather than run them all by hand, I'd like to get them to run as the final step in my build process. I've looked all through the options in the project menu and the automake manager, but I can't figure out how to set this up.
I imagine this could probably be done by editing the makefile by hand. Unfortunately, my makefile-fu is a bit weak, and I'm also afraid that KDevelop might overwrite any changes I make by hand the next time I change something through the IDE. Therefore, if there's an option on how to do this through KDevelop itself, I'd much prefer to go that way.
Does anybody know how I could get KDevelop to run my test executables as part of the build process? Thank you!
(I'm not 100% tied to KDevelop. If KDevelop can't do this, or else if there's an IDE that makes this much easier, I could be convinced to switch.)
A: Although you could manipulate the default `make` target to run your tests, it is generally not recommended, because every invocation of `make` would run all the tests.
You should use the "check" target instead, which is an accepted quasi-standard among software packages. By doing that, the tests are only started when you run
make check
You can then easily configure KDevelop to run "make check" instead of just "make".
Since you are using automake (through KDevelop), you don't need to write the "check" target yourself. Instead, just edit your `Makefile.am` and set some variables:
TESTS = ...
Please have a look at the automake documentation, "Support for test suites", for further information.
A: I got it working this way:
$ cat src/base64.c
//code to be tested
int encode64(...) { ... }
#ifdef UNITTEST
#include <assert.h>
int main(int argc, char* argv[])
{
assert( encode64(...) == 0 );
return 0;
}
#endif //UNITTEST
/* end file.c */
$ cat src/Makefile.am
...
check_PROGRAMS = base64-test
base64_test_SOURCES = base64.c
base64_test_CPPFLAGS = -I../include -DUNITTEST
TESTS = base64-test
A make check would build src/base64-test and run it:
$ make check
...
PASS: base64-test
==================
All 1 tests passed
==================
...
Now I'm trying to encapsulate it all as a m4 macro to be used like this:
MAKE_UNITTEST(base64.c)
which should produce something like the solution above.
Hope this helps.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Big-O for Eight Year Olds? I'm asking more about what this means to my code. I understand the concepts mathematically, I just have a hard time wrapping my head around what they mean conceptually. For example, if one were to perform an O(1) operation on a data structure, I understand that the number of operations it has to perform won't grow because there are more items. And an O(n) operation would mean that you would perform a set of operations on each element. Could somebody fill in the blanks here?
*
*Like what exactly would an O(n^2) operation do?
*And what the heck does it mean if an operation is O(n log(n))?
*And does somebody have to smoke crack to write an O(x!)?
A: This might be too mathematical, but here's my try. (I am a mathematician.)
If something is O(f(n)), then its running time on n elements will be equal to A f(n) + B (measured in, say, clock cycles or CPU operations). It's key to understanding that you also have these constants A and B, which arise from the specific implementation. B represents essentially the "constant overhead" of your operation, for example some preprocessing that you do that doesn't depend on the size of the collection. A represents the speed of your actual item-processing algorithm.
The key, though, is that you use big O notation to figure out how well something will scale. So those constants won't really matter: if you're trying to figure out how to scale from 10 to 10000 items, who cares about the constant overhead B? Similarly, other concerns (see below) will certainly outweigh the weight of the multiplicative constant A.
So the real deal is f(n). If f grows not at all with n, e.g. f(n) = 1, then you'll scale fantastically---your running time will always just be A + B. If f grows linearly with n, i.e. f(n) = n, your running time will scale pretty much as best as can be expected---if your users are waiting 10 ns for 10 elements, they'll wait 10000 ns for 10000 elements (ignoring the additive constant). But if it grows faster, like n^2, then you're in trouble; things will start slowing down way too much when you get larger collections. f(n) = n log(n) is a good compromise, usually: your operation can't be so simple as to give linear scaling, but you've managed to cut things down such that it'll scale much better than f(n) = n^2.
Practically, here are some good examples:
*
*O(1): retrieving an element from an array. We know exactly where it is in memory, so we just go get it. It doesn't matter if the collection has 10 items or 10000; it's still at index (say) 3, so we just jump to location 3 in memory.
*O(n): retrieving an element from a linked list. Here, A = 0.5, because on average you'll have to go through 1/2 of the linked list before you find the element you're looking for.
*O(n^2): various "dumb" sorting algorithms. Because generally their strategy involves, for each element (n), you look at all the other elements (so times another n, giving n^2), then position yourself in the right place.
*O(n log(n)): various "smart" sorting algorithms. It turns out that you only need to look at, say, 10 elements in a 10^10-element collection to intelligently sort yourself relative to everyone else in the collection. Because everyone else is also going to look at 10 elements, and the emergent behavior is orchestrated just right so that this is enough to produce a sorted list.
*O(n!): an algorithm that "tries everything," since there are (proportional to) n! possible combinations of n elements that might solve a given problem. So it just loops through all such combinations, tries them, then stops whenever it succeeds.
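To make one of these classes concrete, here is a small C# sketch of binary search, the classic O(log(n)) example (the method is illustrative, not taken from the answers here):
static int BinarySearch(int[] sorted, int target)
{
    int lo = 0, hi = sorted.Length - 1;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;
        if (sorted[mid] == target) return mid;
        if (sorted[mid] < target) lo = mid + 1; // discard the left half
        else hi = mid - 1;                      // discard the right half
    }
    return -1; // not found: each comparison halved the search space
}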
A: don.neufeld's answer is very good, but I'd probably explain it in two parts: first, there's a rough hierarchy of O()'s that most algorithms fall into. Then, you can look at each of those to come up with sketches of what typical algorithms of that time complexity do.
For practical purposes, the only O()'s that ever seem to matter are:
*
*O(1) "constant time" - the time required is independent of the size of the input. As a rough category, I would include algorithms such as hash lookups and Union-Find here, even though neither of those are actually O(1).
*O(log(n)) "logarithmic" - it gets slower as you get larger inputs, but once your input gets fairly large, it won't change enough to worry about. If your runtime is ok with reasonably-sized data, you can swamp it with as much additional data as you want and it'll still be ok.
*O(n) "linear" - the more input, the longer it takes, in an even tradeoff. Three times the input size will take roughly three times as long.
*O(n log(n)) "better than quadratic" - increasing the input size hurts, but it's still manageable. The algorithm is probably decent, it's just that the underlying problem is more difficult (decisions are less localized with respect to the input data) than those problems that can be solved in linear time. If your input sizes are getting up there, don't assume that you could necessarily handle twice the size without changing your architecture around (eg by moving things to overnight batch computations, or not doing things per-frame). It's ok if the input size increases a little bit, though; just watch out for multiples.
*O(n^2) "quadratic" - it's really only going to work up to a certain size of your input, so pay attention to how big it could get. Also, your algorithm may suck -- think hard to see if there's an O(n log(n)) algorithm that would give you what you need. Once you're here, feel very grateful for the amazing hardware we've been gifted with. Not long ago, what you are trying to do would have been impossible for all practical purposes.
*O(n^3) "cubic" - not qualitatively all that different from O(n^2). The same comments apply, only more so. There's a decent chance that a more clever algorithm could shave this time down to something smaller, eg O(n^2 log(n)) or O(n^2.8...), but then again, there's a good chance that it won't be worth the trouble. (You're already limited in your practical input size, so the constant factors that may be required for the more clever algorithms will probably swamp their advantages for practical cases. Also, thinking is slow; letting the computer chew on it may save you time overall.)
*O(2^n) "exponential" - the problem is either fundamentally computationally hard or you're being an idiot. These problems have a recognizable flavor to them. Your input sizes are capped at a fairly specific hard limit. You'll know quickly whether you fit into that limit.
And that's it. There are many other possibilities that fit between these (or are greater than O(2^n)), but they don't often happen in practice and they're not qualitatively much different from one of these. Cubic algorithms are already a bit of a stretch; I only included them because I've run into them often enough to be worth mentioning (eg matrix multiplication).
What's actually happening for these classes of algorithms? Well, I think you had a good start, although there are many examples that wouldn't fit these characterizations. But for the above, I'd say it usually goes something like:
*
*O(1) - you're only looking at most at a fixed-size chunk of your input data, and possibly none of it. Example: the maximum of a sorted list.
*
*Or your input size is bounded. Example: addition of two numbers. (Note that addition of N numbers is linear time.)
*O(log n) - each element of your input tells you enough to ignore a large fraction of the rest of the input. Example: when you look at an array element in binary search, its value tells you that you can ignore "half" of your array without looking at any of it. Or similarly, the element you look at gives you enough of a summary of a fraction of the remaining input that you won't need to look at it.
*
*There's nothing special about halves, though -- if you can only ignore 10% of your input at each step, it's still logarithmic.
*O(n) - you do some fixed amount of work per input element. (But see below.)
*O(n log(n)) - there are a few variants.
*
*You can divide the input into two piles (in no more than linear time), solve the problem independently on each pile, and then combine the two piles to form the final solution. The independence of the two piles is key. Example: classic recursive mergesort.
*Each linear-time pass over the data gets you halfway to your solution. Example: quicksort if you think in terms of the maximum distance of each element to its final sorted position at each partitioning step (and yes, I know that it's actually O(n^2) because of degenerate pivot choices. But practically speaking, it falls into my O(n log(n)) category.)
*O(n^2) - you have to look at every pair of input elements.
*
*Or you don't, but you think you do, and you're using the wrong algorithm.
*O(n^3) - um... I don't have a snappy characterization of these. It's probably one of:
*
*You're multiplying matrices
*You're looking at every pair of inputs but the operation you do requires looking at all of the inputs again
*the entire graph structure of your input is relevant
*O(2^n) - you need to consider every possible subset of your inputs.
None of these are rigorous. Especially not linear time algorithms (O(n)): I could come up with a number of examples where you have to look at all of the inputs, then half of them, then half of those, etc. Or the other way around -- you fold together pairs of inputs, then recurse on the output. These don't fit the description above, since you're not looking at each input once, but it still comes out in linear time. Still, 99.2% of the time, linear time means looking at each input once.
A: No, an O(n) algorithm does not mean it will perform an operation on each element. Big-O notation gives you a way to talk about the "speed" of your algorithm independent of your actual machine.
O(n) means that the time your algorithm takes grows linearly as your input increases. O(n^2) means that the time your algorithm takes grows as the square of your input. And so forth.
A: The way I think about it, is you have the task of cleaning up a problem caused by some evil villain V who picks N, and you have to estimate how much longer it's going to take to finish your problem when he increases N.
O(1) -> increasing N really doesn't make any difference at all
O(log(N)) -> every time V doubles N, you have to spend an extra amount of time T to complete the task. V doubles N again, and you spend the same amount.
O(N) -> every time V doubles N, you spend twice as much time.
O(N^2) -> every time V doubles N, you spend 4x as much time. (it's not fair!!!)
O(N log(N)) -> every time V doubles N, you spend twice as much time plus a little more.
These are bounds of an algorithm; computer scientists want to describe how long it is going to take for large values of N. (which gets important when you are factoring numbers that are used in cryptography -- if the computers speed up by a factor of 10, how many more bits do you have to use to ensure it will still take them 100 years to break your encryption and not just 1 year?)
Some of the bounds can have weird expressions if it makes a difference to the people involved. I've seen stuff like O(N log(N) log(log(N))) somewhere in Knuth's Art of Computer Programming for some algorithms. (can't remember which one off the top of my head)
A: One thing that hasn't been touched on yet for some reason:
When you see algorithms with things like O(2^n) or O(n^3) or other nasty values it often means you're going to have to accept an imperfect answer to your problem in order to get acceptable performance.
Correct solutions that blow up like this are common when dealing with optimization problems. A nearly-correct answer delivered in a reasonable timeframe is better than a correct answer delivered long after the machine has decayed to dust.
Consider chess: I don't know exactly what the correct solution is considered to be but it's probably something like O(n^50) or even worse. It is theoretically impossible for any computer to actually calculate the correct answer--even if you use every particle in the universe as a computing element performing an operation in the minimum possible time for the life of the universe you still have a lot of zeros left. (Whether a quantum computer can solve it is another matter.)
A: *
*And does somebody have to smoke crack to write an O(x!)?
No, just use Prolog. If you write a sorting algorithm in Prolog by just describing that each element should be bigger than the previous, and let backtracking do the sorting for you, that will be O(x!). Also known as "permutation sort".
A: The "Intuitition" behind Big-O
Imagine a "competition" between two functions over x, as x approaches infinity: f(x) and g(x).
Now, if from some point on (some x) one function always has a higher value than the other, then let's call this function "faster" than the other.
So, for example, if for every x > 100 you see that f(x) > g(x), then f(x) is "faster" than g(x).
In this case we would say g(x) = O(f(x)). f(x) sets a "speed limit" of sorts for g(x), since eventually it passes it and leaves it behind for good.
This isn't exactly the definition of big-O notation, which also states that f(x) only has to be larger than C*g(x) for some constant C (which is just another way of saying that you can't help g(x) win the competition by multiplying it by a constant factor - f(x) will always win in the end). The formal definition also uses absolute values. But I hope I managed to make it intuitive.
A: One way of thinking about it is this:
O(N^2) means for every element, you're doing something with every other element, such as comparing them. Bubble sort is an example of this.
O(N log N) means for every element, you're doing something that only needs to look at log N of the elements. This is usually because you know something about the elements that let you make an efficient choice. Most efficient sorts are an example of this, such as merge sort.
O(N!) means to do something for all possible permutations of the N elements. Traveling salesman is an example of this, where there are N! ways to visit the nodes, and the brute force solution is to look at the total cost of every possible permutation to find the optimal one.
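For instance, a bare-bones bubble sort makes that nested every-pair comparison explicit; a rough C# sketch:
// O(N^2): for each element, compare it against (almost) every other one.
static void BubbleSort(int[] a)
{
    for (int i = 0; i < a.Length - 1; i++)             // N passes...
        for (int j = 0; j < a.Length - 1 - i; j++)     // ...of up to N compares
            if (a[j] > a[j + 1])
                (a[j], a[j + 1]) = (a[j + 1], a[j]);   // swap the out-of-order pair
}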
A: I like don neufeld's answer, but I think I can add something about O(n log n).
An algorithm which uses a simple divide and conquer strategy is probably going to be O(log n). The simplest example of this is finding something in a sorted list. You don't start at the beginning and scan for it. You go to the middle, you decide if you should then go backwards or forwards, jump halfway to the last place you looked, and repeat this until you find the item you're looking for.
If you look at the quicksort or mergesort algorithms, you will see that they both take the approach of dividing the list to be sorted in half, sorting each half (using the same algorithm, recursively), and then recombining the two halves. This sort of recursive divide and conquer strategy will be O(n log n).
If you think about it carefully, you'll see that quicksort does an O(n) partitioning algorithm on the whole n items, then an O(n) partitioning twice on n/2 items, then 4 times on n/4 items, etc... until you get to n partitions of 1 item (which is degenerate). The number of times you divide n in half to get to 1 is approximately log n, and each step is O(n), so recursive divide and conquer is O(n log n). Mergesort builds the other way, starting with n recombinations of 1 item, and finishing with 1 recombination of n items, where the recombination of two sorted lists is O(n).
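A bare-bones C# sketch of that recursive divide and conquer strategy (illustrative, not a tuned implementation):
// Split in half (about log n levels of recursion), then merge each
// level back together in linear time: O(n log n) overall.
static int[] MergeSort(int[] a)
{
    if (a.Length <= 1) return a;
    int mid = a.Length / 2;
    int[] left = MergeSort(a[..mid]);
    int[] right = MergeSort(a[mid..]);

    var merged = new int[a.Length];
    int i = 0, j = 0, k = 0;
    while (i < left.Length && j < right.Length)        // the O(n) merge pass
        merged[k++] = left[i] <= right[j] ? left[i++] : right[j++];
    while (i < left.Length) merged[k++] = left[i++];
    while (j < right.Length) merged[k++] = right[j++];
    return merged;
}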
As for smoking crack to write an O(n!) algorithm, you probably are, unless you have no choice. The traveling salesman problem given above is believed to be one such problem.
A: Think of it as stacking lego blocks (n) vertically and jumping over them.
O(1) means at each step, you do nothing. The height stays the same.
O(n) means at each step, you stack c1 blocks, where c1 is a constant.
O(n^2) means at each step, you stack c2 x n blocks, where c2 is a constant, and n is the number of stacked blocks.
O(nlogn) means at each step, you stack c3 x n x log n blocks, where c3 is a constant, and n is the number of stacked blocks.
A: The big thing that Big-O notation means to your code is how it will scale when you double the amount of "things" it operates on. Here's a concrete example:
Big-O | computations for 10 things | computations for 100 things
----------------------------------------------------------------------
O(1) | 1 | 1
O(log(n)) | 3 | 7
O(n) | 10 | 100
O(n log(n)) | 30 | 700
O(n^2) | 100 | 10000
So take quicksort which is O(n log(n)) vs bubble sort which is O(n^2). When sorting 10 things, quicksort is 3 times faster than bubble sort. But when sorting 100 things, it's 14 times faster! Clearly picking the fastest algorithm is important then. When you get to databases with millions of rows, it can mean the difference between your query executing in 0.2 seconds, versus taking hours.
Another thing to consider is that a bad algorithm is one thing that Moore's law cannot help. For example, if you've got some scientific calculation that's O(n^3) and it can compute 100 things a day, doubling the processor speed only gets you 125 things in a day. However, knock that calculation to O(n^2) and you're doing 1000 things a day.
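A quick sanity check of that arithmetic (a throwaway C# snippet, using the numbers claimed above):
using System;

// O(n^3) at 100 things/day: doubling compute multiplies n by 2^(1/3).
Console.WriteLine(100 * Math.Pow(2, 1.0 / 3.0));   // ~126, i.e. "about 125"

// Same daily budget (100^3 = 1,000,000 operations) spent on an O(n^2)
// algorithm instead: n^2 = 1,000,000, so n = 1000 things/day.
Console.WriteLine(Math.Sqrt(Math.Pow(100, 3)));    // 1000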
clarification:
Actually, Big-O says nothing about comparative performance of different algorithms at the same specific size point, but rather about comparative performance of the same algorithm at different size points:
      |  computations |  computations  |  computations
Big-O | for 10 things | for 100 things | for 1000 things
----------------------------------------------------------------------
O(1) | 1 | 1 | 1
O(log(n)) | 1 | 3 | 7
O(n) | 1 | 10 | 100
O(n log(n)) | 1 | 33 | 664
O(n^2) | 1 | 100 | 10000
A: A lot of these are easy to demonstrate with something non-programming, like shuffling cards.
Sorting a deck of cards by going through the whole deck to find the ace of spades, then going through the whole deck to find the 2 of spades, and so on would be worst case n^2, if the deck was already sorted backwards. You looked at all 52 cards 52 times.
In general the really bad algorithms aren't necessarily intentional, they're commonly a misuse of something else, like calling a method that is linear inside some other method that repeats over the same set linearly.
A: I try to explain by giving simple code examples in C# and JavaScript.
C#
For List<int> numbers = new List<int> {1,2,3,4,5,6,7,12,543,7};
O(1) looks like
return numbers.First();
O(n) looks like
int result = 0;
foreach (int num in numbers)
{
    result += num;
}
return result;
O(n log(n)) looks like
int result = 0;
foreach (int num in numbers)
{
    int index = numbers.Count - 1;
    while (index > 1)
    {
        // yeah, stupid, but couldn't come up with something more useful :-(
        result += numbers[index];
        index /= 2;
    }
}
return result;
O(n^2) looks like
int result = 0;
foreach (int outerNum in numbers)
{
    foreach (int innerNum in numbers)
    {
        result += outerNum * innerNum;
    }
}
return result;
O(n!) looks like, uhm, too tired to come up with anything simple.
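Roughly, though, an O(n!) routine visits every ordering of the list; a minimal recursive sketch:
// O(n!): count (i.e. "visit") every permutation of the list once.
// With the 10 numbers above that's already 3,628,800 visits.
static long CountPermutations(List<int> items, int k = 0)
{
    if (k == items.Count) return 1;                  // one complete permutation
    long total = 0;
    for (int i = k; i < items.Count; i++)
    {
        (items[k], items[i]) = (items[i], items[k]); // choose element i for slot k
        total += CountPermutations(items, k + 1);
        (items[k], items[i]) = (items[i], items[k]); // undo the choice
    }
    return total;
}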
But I hope you get the general point?
JavaScript
For const numbers = [ 1, 2, 3, 4, 5, 6, 7, 12, 543, 7 ];
O(1) looks like
numbers[0];
O(n) looks like
let result = 0;
for (const num of numbers) {
    result += num;
}
O(n log(n)) looks like
let result = 0;
for (const num of numbers) {
    let index = numbers.length - 1;
    while (index > 1) {
        // yeah, stupid, but couldn't come up with something more useful :-(
        result += numbers[index];
        index = Math.floor(index / 2);
    }
}
O(n^2) looks like
let result = 0;
for (const outerNum of numbers) {
    for (const innerNum of numbers) {
        result += outerNum * innerNum;
    }
}
A: Most Jon Bentley books (e.g. Programming Pearls) cover such stuff in a really pragmatic manner. This talk given by him includes one such analysis of a quicksort.
While not entirely relevant to the question, Knuth came up with an interesting idea: teaching Big-O notation in high school calculus classes, though I find this idea quite eccentric.
A: Ok - there are some very good answers here but almost all of them seem to make the same mistake and it's one that is pervading common usage.
Informally, we write that f(n) = O( g(n) ) if, up to a scaling factor and for all n larger than some n0, g(n) is larger than f(n). That is, f(n) grows no quicker than, or is bounded from above by, g(n). This tells us nothing about how fast f(n) grows, save for the fact that it is guaranteed not to be any worse than g(n).
A concrete example: n = O( 2^n ). We all know that n grows much less quickly than 2^n, so that entitles us to say that it is bounded by above by the exponential function. There is a lot of room between n and 2^n, so it's not a very tight bound, but it's still a legitimate bound.
Why do we (computer scientists) use bounds rather than being exact? Because a) bounds are often easier to prove and b) it gives us a short-hand to express properties of algorithms. If I say that my new algorithm is O(n.log n) that means that in the worst case its run-time will be bounded from above by n.log n on n inputs, for large enough n (although see my comments below on when I might not mean worst-case).
If instead, we want to say that a function grows exactly as quickly as some other function, we use theta to make that point (I'll write T( f(n) ) to mean \Theta of f(n) in markdown). T( g(n) ) is short hand for being bounded from above and below by g(n), again, up to a scaling factor and asymptotically.
That is f(n) = T( g(n) ) <=> f(n) = O(g(n)) and g(n) = O(f(n)). In our example, we can see that n != T( 2^n ) because 2^n != O(n).
Why get concerned about this? Because in your question you write 'would someone have to smoke crack to write an O(x!)?' The answer is no - because basically everything you write will be bounded from above by the factorial function. The run time of quicksort is O(n!) - it's just not a tight bound.
There's also another dimension of subtlety here. Typically we are talking about the worst case input when we use O( g(n) ) notation, so that we are making a compound statement: in the worst case running time it will not be any worse than an algorithm that takes g(n) steps, again modulo scaling and for large enough n. But sometimes we want to talk about the running time of the average and even best cases.
Vanilla quicksort is, as ever, a good example. It's T( n^2 ) in the worst case (it will actually take at least n^2 steps, but not significantly more), but T(n.log n) in the average case, which is to say the expected number of steps is proportional to n.log n. In the best case it is also T(n.log n) - but you could improve on that, for example, by checking if the array was already sorted, in which case the best case running time would be T( n ).
How does this relate to your question about the practical realisations of these bounds? Well, unfortunately, O( ) notation hides constants which real-world implementations have to deal with. So although we can say that, for example, for a T(n^2) operation we have to visit every possible pair of elements, we don't know how many times we have to visit them (except that it's not a function of n). So we could have to visit every pair 10 times, or 10^10 times, and the T(n^2) statement makes no distinction. Lower order functions are also hidden - we could have to visit every pair of elements once, and every individual element 100 times, because n^2 + 100n = T(n^2). The idea behind O( ) notation is that for large enough n, this doesn't matter at all because n^2 gets so much larger than 100n that we don't even notice the impact of 100n on the running time. However, we often deal with 'sufficiently small' n such that constant factors and so on make a real, significant difference.
For example, quicksort (average cost T(n.log n)) and heapsort (average cost T(n.log n)) are both sorting algorithms with the same average cost - yet quicksort is typically much faster than heapsort. This is because heapsort does a few more comparisons per element than quicksort.
This is not to say that O( ) notation is useless, just imprecise. It's quite a blunt tool to wield for small n.
(As a final note to this treatise, remember that O( ) notation just describes the growth of any function - it doesn't necessarily have to be time, it could be memory, messages exchanged in a distributed system or number of CPUs required for a parallel algorithm.)
A: The way I describe it to my nontechnical friends is like this:
Consider multi-digit addition. Good old-fashioned, pencil-and-paper addition. The kind you learned when you were 7-8 years old. Given two three-or-four-digit numbers, you can find out what they add up to fairly easily.
If I gave you two 100-digit numbers, and asked you what they add up to, figuring it out would be pretty straightforward, even if you had to use pencil-and-paper. A bright kid could do such an addition in just a few minutes. This would only require about 100 operations.
Now, consider multi-digit multiplication. You probably learned that at around 8 or 9 years old. You (hopefully) did lots of repetitive drills to learn the mechanics behind it.
Now, imagine I gave you those same two 100-digit numbers and told you to multiply them together. This would be a much, much harder task, something that would take you hours to do - and that you'd be unlikely to do without mistakes. The reason for this is that (this version of) multiplication is O(n^2); each digit in the bottom number has to be multiplied by each digit in the top number, leaving a total of about n^2 operations. In the case of the 100-digit numbers, that's 10,000 multiplications.
A: You might find it useful to visualize it:
Also, on LogY/LogX scale the functions n1/2, n, n2 all look like straight lines, while on LogY/X scale 2n, en, 10n are straight lines and n! is linearithmic (looks like n log n).
A: To understand O(n log n), remember that log n means log-base-2 of n. Then look at each part:
O(n) is, more or less, when you operate on each item in the set.
O(log n) is when the number of operations is the same as the exponent to which you raise 2, to get the number of items. A binary search, for instance, has to cut the set in half log n times.
O(n log n) is a combination – you're doing something along the lines of a binary search for each item in the set. Efficient sorts often operate by doing one loop per item, and in each loop doing a good search to find the right place to put the item or group in question. Hence n * log n.
A: Just to respond to the couple of comments on my above post:
Domenic - I'm on this site, and I care. Not for pedantry's sake, but because we - as programmers - typically care about precision. Using O( ) notation incorrectly in the style that some have done here renders it kind of meaningless; we may just as well say something takes n^2 units of time as O( n^2 ) under the conventions used here. Using the O( ) adds nothing. It's not just a small discrepancy between common usage and mathematical precision that I'm talking about, it's the difference between it being meaningful and it not.
I know many, many excellent programmers who use these terms precisely. Saying 'oh, we're programmers therefore we don't care' cheapens the whole enterprise.
onebyone - Well, not really although I take your point. It's not O(1) for arbitrarily large n, which is kind of the definition of O( ). It just goes to show that O( ) has limited applicability for bounded n, where we would rather actually talk about the number of steps taken rather than a bound on that number.
A: Tell your eight year old log(n) means the number of times you have to chop a length n log in two for it to get down to size n=1 :p
O(n log n) is usually sorting
O(n^2) is usually comparing all pairs of elements
A: Suppose you had a computer that could solve a problem of a certain size. Now imagine that we can double the performance a few times. How much bigger a problem can we solve with each doubling?
If we can solve a problem of double the size, that's O(n).
If we have some multiplier that isn't one, that's some sort of polynomial complexity. For example, if each doubling allows us to increase the problem size by about 40%, it's O(n^2), and about 30% would be O(n^3).
If we just add to the problem size, it's exponential or worse. For example, if each doubling means we can solve a problem 1 bigger, it's O(2^n). (This is why brute-forcing a cipher key becomes effectively impossible with reasonably sized keys: a 128-bit key requires about 16 quintillion times as much processing as a 64-bit.)
A: Remember the fable of the tortoise and the hare (turtle and rabbit)?
Over the long run, the tortoise wins, but over the short run the hare wins.
That's like O(logN) (tortoise) vs. O(N) (hare).
If two methods differ in their big-O, then there is a level of N at which one of them will win, but big-O says nothing about how big that N is.
A: To stay true to the question asked, I'll answer it the way I would answer an 8-year-old kid.
Suppose an ice-cream seller prepares a number of ice creams (say N) of different shapes, arranged in an orderly fashion.
You want to eat the ice cream lying in the middle.
Case 1: You can eat an ice cream only if you have eaten all the ice creams smaller than it.
You will have to eat half of all the ice creams prepared (the input). The answer directly depends on the size of the input.
The solution will be of order O(N).
Case 2: You can directly eat the ice cream in the middle.
The solution will be O(1).
Case 3: You can eat an ice cream only if you have eaten all the ice creams smaller than it, and each time you eat an ice cream you allow another kid (a new kid every time) to eat all his ice creams.
The total time taken would be N + N + N... (N/2 times).
The solution will be O(N^2).
A: log(n) means logarithmic growth. An example would be divide and conquer algorithms. If you have 1000 sorted numbers in an array (e.g. 3, 10, 34, 244, 1203 ...) and want to search for a number in the list (find its position), you could start by checking the value of the number at index 500. If it is lower than what you seek, jump to 750. If it is higher than what you seek, jump to 250. Then you repeat the process until you find your value (and its index). Every time we jump half the search space, we can cull away testing many other values, since we know the number 3004 can't be above number 5000 (remember, it is a sorted list).
n log(n) then means n * log(n).
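A rough C# sketch of that jump-to-the-middle search over a sorted array:
// O(log n): every probe discards half of the remaining search space.
static int BinarySearch(int[] sorted, int target)
{
    int lo = 0, hi = sorted.Length - 1;
    while (lo <= hi)
    {
        int mid = (lo + hi) / 2;
        if (sorted[mid] == target) return mid;   // found it
        if (sorted[mid] < target) lo = mid + 1;  // discard the lower half
        else hi = mid - 1;                       // discard the upper half
    }
    return -1;                                   // not present
}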
A: I'll try to actually write an explanation for a real eight-year-old boy, steering clear of technical terms and mathematical notions.
Like what exactly would an O(n^2) operation do?
Say you are at a party, and there are n people at the party including you. How many handshakes does it take so that everyone has shaken hands with everyone else, given that people would probably forget whom they shook hands with at some point?
Note: this approximates a simplex, yielding n(n-1), which is close enough to n^2.
And what the heck does it mean if an operation is O(n log(n))?
Your favorite team has won, they are standing in line, and there are n players in the team. How many handshakes would it take you to shake hands with every player, given that you will shake each one's hand multiple times: as many times as there are digits in the number of players n?
Note: this yields n * log n (log to the base 10).
And does somebody have to smoke crack to write an O(x!)?
You are a rich kid, and in your wardrobe there are a lot of clothes: x drawers, one for each type of clothing, sitting next to each other. The first drawer has 1 item, and each drawer holds as many clothes as the drawer to its left plus one more, so you have something like 1 hat, 2 wigs, ... (x-1) pairs of pants, then x shirts. Now, in how many ways can you dress up using a single item from each drawer?
Note: this example represents the number of leaves in a decision tree where the number of children equals the depth, which works out to 1 * 2 * 3 * ... * x = x!.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "321"
}
|
Q: Building Ruby on Windows XP Has anyone out there got a good set of instructions for building/compiling Ruby from source on Windows XP?
A: Luis Lavena maintains the Ruby One-Click installer binaries. His blogs and postings on Ruby Forum are definitely the place to start.
A: http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/184380
I normally get a binary installer for Windows.. much faster if you just need Ruby installed. But you may be modifying the Ruby source.. anyways..
Update: I ended up compiling Ruby from source today... here is what worked for me
http://madcoderspeak.blogspot.com/2009/06/how-to-compile-ruby-from-source-on.html
A: I would agree that the binary distribution is your best bet for Ruby on Windows; however, like Gishu mentioned, you may be modifying it a bit. If that's the case, I would build it from source with Cygwin. This will give you the familiar tool set for building software from source.
However, the following thread at Ruby Forum seems to have a very active discussion on building Ruby on Windows using Microsoft's Visual C++ toolkit with some other .NET additions.
Good luck!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Can you debug a .NET app with ONLY the source code of one file? I want to debug an application in Visual Studio but I ONLY have the source code for 1 class. I only need to step through a single function in that file, but I don't understand what I need to do it. I think the steps are normally something like this:
*
*Open a file in VS
*Load in the "symbols" (.PDB file)
*Attach to the running process
I know how to do #1 and #3, but I don't know how to do #2 without the .PDB file. Is it possible to generate the .PDB file for this to make it work? Thanks!
A: You need the *.pdb files (step 2 from your post). These files contain the mapping between source code and the compiled assembly, so your steps are correct. If your source file differs from the original file, check "Allow the source code to be different from the original version" in the breakpoint's properties dialog.
Breakpoints and Tracepoints in Visual Studio
If you don't have PDB files you can try to decompile your project using Reflector.FileDisassembler or FileGenerator For Reflector. Then you can recompile these files to get PDBs.
Also take a look at Deblector, a debugging add-in for Reflector.
A: You need the symbol file (.PDB) file that belongs to the application you are trying to debug.
MSDN: PDB Files
The Visual Studio debugger uses the path to the PDB in the EXE or DLL file to find the project.pdb file. If the debugger cannot find the PDB file at that location, or if the path is invalid, for example, if the project was moved to another computer, the debugger searches the path containing the EXE followed by the symbol paths specified in the Options dialog box. This path is generally the Debugging folder in the Symbols node. The debugger will not load a PDB that does not match the binary being debugged.
A: The symbol file is the .pdb file. If you place that next to the exectuable, that will load the symbols, and point to the source file.
A: In your case 'Symbols' means a pdb file for the assembly you want to debug. The debugger doesn't require that you have all the source, just that you have the matching pdb. The pdb is generated during the build of the assembly, and no, you unfortunately cannot create one after the fact. If you don't have the pdb you will need to debug at a lower level than the source code.
If you built the assembly on your machine then the symbols will be found when you attach. In that case just set a breakpoint on the source and do whatever is necessary to make that code run, and you'll hit the breakpoint.
If you did not build it you need to find the pdb for the assembly. The modules window found under Debug/Windows/Modules can often help by telling you the assemblies loaded in the process along with version info, and timestamps.
You'll need that information in cases where there might be multiple versions of an assembly (such as keeping many nightly builds, or the last 20 or so versions from continuous integration builds).
hope that helps.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How would one code test and set behavior without a special hardware instruction? Most of the implementations I find require a hardware instruction to do this. However I strongly doubt this is required (if it is, I can't figure out why...)
A: You don't need a test and set instruction to get mutual exclusion locking, if that's what you're asking.
Dijkstra described the first mutual exclusion algorithm I am aware of, in 1965. The title of the paper was "Solution of a problem in concurrent programming control", search Google for a copy near you. The original algorithm required no special support from the hardware at all, but providing an atomic instruction in the CPU dramatically improves the performance.
Test-and-set, atomic swap, and load-linked + store-conditional are all common primitives for CPUs to provide. All can be used to implement mutual exclusion, which can then be used to implement whatever locking semantics you want.
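For illustration, here is a rough C# sketch of Peterson's classic two-thread algorithm; it uses only ordinary loads and stores plus a full memory fence, no test-and-set. (On a real runtime you would normally prefer the built-in lock or Interlocked primitives; the explicit fence matters because plain stores may otherwise be reordered past the later read.)
using System.Threading;

// Mutual exclusion for exactly two threads, ids 0 and 1.
class PetersonLock
{
    private readonly bool[] wants = new bool[2];
    private volatile int turn;

    public void Enter(int id)                    // id must be 0 or 1
    {
        int other = 1 - id;
        Volatile.Write(ref wants[id], true);     // declare intent to enter
        turn = other;                            // politely give the other thread priority
        Thread.MemoryBarrier();                  // forbid store-load reordering
        while (Volatile.Read(ref wants[other]) && turn == other)
            Thread.Yield();                      // spin until it is our turn
    }

    public void Exit(int id) => Volatile.Write(ref wants[id], false);
}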
A: If you'd like a cross-arch way to do so, and are using gcc, then you can use gcc's atomic builtins:
http://gcc.gnu.org/onlinedocs/gcc/Atomic-Builtins.html
Calling these will result in a hardware specific machine instruction for the current build architecture. On those that do not support them, the compile will fail. (I think...)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Why is a different dll produced after a clean build, with no code changes? When I do a clean build of my C# project, the produced DLL is different from the previously built one (which I saved separately). No code changes were made, just clean and rebuild.
Diff shows some bytes in the DLL have changed -- a few near the beginning and a few near the end, but I can't figure out what these represent. Does anybody have insights on why this is happening and how to prevent it?
This is using Visual Studio 2005 / WinForms.
Update: Not using automatic version incrementing, or signing the assembly. If it's a timestamp of some sort, how do I prevent VS from writing it?
Update: After looking in Ildasm/diff, it seems like the following items are different:
*
*Two bytes in PE header at the start of the file.
*<PrivateImplementationDetails>{guid} section
*Cryptic part of the string table near the end (wonder why, I did not change the strings)
*Parts of assembly info at the end of file.
No idea how to eliminate any of these, if at all possible...
A: My best guess would be the changed bytes you're seeing are the internally-used metadata columns that are automatically generated at build-time.
Some of the Ecma-335 Partition II (CLI Specification Metadata Definition) columns that can change per-build, even if the source code doesn't change at all:
*
*Module.Mvid: A build-time-generated GUID. Always changes, every build (see the snippet just after this list).
*AssemblyRef.HashValue: Could change if you're referencing another assembly that has also been rebuilt since the old build.
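To watch the first of these change, the MVID is exposed through reflection; rebuild the same source twice and compare the output of a snippet like this:
using System;
using System.Reflection;

class Program
{
    static void Main()
    {
        // A fresh GUID is baked into the module at every build, so two
        // builds of byte-identical source still produce different DLLs.
        Module m = typeof(Program).Assembly.ManifestModule;
        Console.WriteLine($"MVID: {m.ModuleVersionId}");
    }
}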
If this really, really bothers you, my best tip on finding out exactly what is changing would be to diff the actual metadata tables. The way to get these is to use the ildasm MetaInfo window:
View > MetaInfo > Raw:Header,Schema,Rows // important, otherwise you get very basic info from the next step
View > MetaInfo > Show!
A: I think that would be the TimeDateStamp field in the IMAGE_FILE_HEADER header of the PE32 specifications.
A: Could be that the build or revision numbers have changed.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: How to load a specific version of an assembly To complete some testing I need to load the 64 bit version of an assembly even though I am running a 32 bit version of Windows. Is this possible?
A: I'm not sure why you would want to do this, but I suppose you could. If you don't do anything to tell it otherwise, the CLR will load the version of the assembly that is specific to the CPU you are using. That's usually what you want. But I have had an occasion where I needed to load the neutral IL version of an assembly. I used the Load method to specify the version. I haven't tried it (and others here suggest it won't work for an executable assembly), but I suppose you can do the same to specify you want to load the 64 bit version. (You'll have to specify if you want the AMD64 or IA64 version.)
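A hedged sketch of what such a call might look like; the assembly name, version and token below are invented for illustration, and architecture-qualified display names only work where the runtime's binder supports them:
using System.Reflection;

// Ask the loader for the x64 image of a (hypothetical) assembly.
Assembly asm = Assembly.Load(
    "SomeLibrary, Version=1.0.0.0, Culture=neutral, " +
    "PublicKeyToken=abcdef0123456789, ProcessorArchitecture=AMD64");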
A: From CLR Via C# (Jeff Richter):
"If your assembly files contain only type-safe managed code,
you are writing code that should work on both 32-bit and 64-bit versions of Windows. No
source code changes are required for your code to run on either version of Windows.
In fact,
the resulting EXE/DLL file produced by the compiler will run on 32-bit Windows as well as
the x64 and IA64 versions of 64-bit Windows! In other words, the one file will run on any
machine that has a version of the .NET Framework installed on it."
" The C# compiler offers a /platform command-line switch. This switch allows you to specify
whether the resulting assembly can run on x86 machines running 32-bit Windows versions
only, x64 machines running 64-bit Windows only, or Intel Itanium machines running 64-bit
Windows only. If you don't specify a platform, the default is anycpu, which indicates that the
resulting assembly can run on any version of Windows.
A: 32-bit Windows cannot run 64-bit executables without a VM/emulator.
32-bit Windows can compile for execution on 64-bit Windows.
A: No, you cannot run assemblies that are compiled for 64-bit on a system running the 32-bit version of Windows.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107216",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Are units of measurement unique to F#? I was reading Andrew Kennedy's blog post series on units of measurement in F# and it makes a lot of sense in a lot of cases. Are there any other languages that have such a system?
Edit: To be more clear, I mean the flexible units of measurement system where you can define your own arbitrarily.
A: Nemerle had compiler-checked Units of Measure in 2006.
http://nemerle.org
http://nemerle.org/forum.old/viewtopic.php?t=265&view=previous&sid=00f48f33fafd3d49cc6a92350b77d554
A: C++ has it, in the form of boost::units.
A: I'm not sure if this really counts, but the RPL system on my HP-48 calculator does have similar features. I can write 40_gal 5_l + and get the right answer of 156.416 liters.
A: I believe I saw that Fortress supports this; I'll see if I can find a link.
I can't find a specific link, but the language specification makes mention of it in a couple of places. The 1.0 language specification also says that dimensions and units were temporarily dropped from the specification (along with a whole heap of other features) to match up with the current implementation. It's a work in progress, so I guess things are in flux.
A: F# is the first mainstream language to support this feature.
A: There is also a Java specification for units at http://jcp.org/en/jsr/detail?id=275 and you can already use it from here http://jscience.org/
A: Nemerle has something much better than F#!
You should check this one: http://rsdn.ru/forum/src/1823225.flat.aspx#1823225 .
It is really great.
And you can download it here: http://rsdn.ru/File/27948/Oyster.Units.0.06.zip
Some example:
def m3 = 1 g;
def m4 = Si.Mass(m1);
WriteLine($"Mass in SI: $m4, in CGS: $m3");
def x1 = Si.Area(1 cm * 10 m);
WriteLine($"Area of 1 cm * 10 m = $x1 m");
A: Does TI-89 BASIC count? Enter 54_kg * (_c^2) and it will give you an answer in joules.
Other than that, I can't recall any languages that have it built in, but any language with decent OO should make it simple to roll your own. Which means someone else probably already did.
Google confirms. For example, here's one in Python. __repr__ could easily be amended to also select the most appropriate derived unit, etc.
CPAN has several modules for Perl: Physics::Unit, Data::Dimensions, Class::Measure, Math::Units::PhysicalValue, and a handful of others that will convert but don't really combine values with units.
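As a rough sketch of what rolling your own could look like in a statically typed language (illustrative C# types, not any particular library's API):
// One wrapper struct per dimension; the compiler then refuses to mix
// them, though conversions and derived units must be written by hand.
readonly struct Metres
{
    public double Value { get; }
    public Metres(double value) => Value = value;
    public static Metres operator +(Metres a, Metres b) => new Metres(a.Value + b.Value);
}

readonly struct Seconds
{
    public double Value { get; }
    public Seconds(double value) => Value = value;
}

// var oops = new Metres(3) + new Seconds(4);   // does not compile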
A: I'm pretty sure Ada has it.
A: Well, I made the QuantitySystem library especially for units in C#; however, it's not compile-time checked.
But I've tried to make it run as I wanted.
It also supports expansion, so you can define your own unique units.
http://QuantitySystem.CodePlex.com
It can also differentiate between torque and work :) [This was important for me]
The library's approach goes from dimensions to units; everything I've seen till now takes a units-only approach.
A: I'm sure you'd be able to do this with most dynamic languages (javascript, python, ruby) by carefully monkey-patching some of the base-classes. You might get into problems though when working with imperial measurements.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
}
|
Q: How to repaint a Word 2003 menubar I have a Word 2003 .dot template that changes its menu based on the condition of the active document.
The DocumentChange, DocumentOpen and NewDocument events of Word.Application trigger setting the .Visible and .Enabled properties of CommandBarButton controls.
On switching active documents, controls exposed by changing the Visible property display correctly, but text buttons which have been enabled/disabled do not change appearance. You can show enabled controls by hovering over them, but the disabled ones do not repaint until you place a window in front.
Is there a simple way to send a repaint message to the menubar, to simulate hiding and exposing?
A: You are playing with the visible & enabled properties of the controls. But did you try to hide/unhide the whole commandbar to refresh it?
Application.CommandBars.ActiveMenuBar.Visible = False
Application.CommandBars.ActiveMenuBar.Visible = True
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How often to commit changes to source control? How often should I commit changes to source control? After every small feature, or only for large features?
I'm working on a project and have a long-term feature to implement. Currently, I'm committing after every chunk of work, i.e. every sub-feature implemented and bug fixed. I even commit after I've added a new chunk of tests for some feature after discovering a bug.
However, I'm concerned about this pattern. In a productive day of work I might make 10 commits. Given that I'm using Subversion, these commits affect the whole repository, so I wonder whether it is indeed good practice to make so many?
A: When you say you are concerned that your "commits affect the whole repository" --- are you referring to the fact that the whole repository's revision number increases? I don't know how many bits Subversion uses to store it, but I'm pretty sure you're not going to run out of revision numbers! Many commits are not a problem. You can commit ten times as often as the guy next door and you won't increase your carbon footprint at all.
A single function or method should be named for what it does, and if the name is too long, it is doing too much. I try to apply the same rule to check-ins: the check-in comment should describe exactly what the change accomplishes, and if the comment is too long, I'm probably changing too much at once.
A: The rule of thumb, that I use, is check-in when the group of files being checked-in can be covered by a single check-in comment.
This is generally to ensure that check-ins are atomic and that the comments can be easily digested by other developers.
It is especially true when your changes affect a configuration file (such as a spring context file or a struts config file) that has application wide scope. If you make several 'groups' of changes before checking in, their impact overlaps in the configuration file, causing the 2 groups to become merged with each other.
A: I don't think you should worry so much about how often. The important thing here is what, when and why. Saying that you have to commit every 3 hours or every 24 hours really makes no sense. Commit when you have something to commit, don't if you don't.
Here's an extract from my recommended best practices for version control:
[...] If you are doing many changes to a project at the same time, split them up into logical parts and commit them in multiple sessions. This makes it much easier to track the history of individual changes, which will save you a lot of time when trying to find and fix bugs later on. For example, if you are implementing feature A, B and C and fixing bug 1, 2 and 3, that should result in a total of at least six commits, one for each feature and one for each bug. If you are working on a big feature or doing extensive refactoring, consider splitting your work up into even smaller parts, and make a commit after each part is completed. Also, when implementing independent changes to multiple logical modules, commit changes to each module separately, even if they are part of a bigger change.
Ideally, you should never leave your office with uncommitted changes on your hard drive. If you are working on projects where changes will affect other people, consider using a branch to implement your changes and merge them back into the trunk when you are done. When committing changes to libraries or projects that other projects—and thus, other people—depend on, make sure you don’t break their builds by committing code that won’t compile. However, having code that doesn’t compile is not an excuse to avoid committing. Use branches instead. [...]
A: Your current pattern makes sense. Keep in mind how you use this source control: what if you have to rollback, or if you want to do a diff? The chunks you describe seem like exactly the right differential in those cases: the diff will show you exactly what changed in implementing bug #(specified in checkin log), or exactly what the new code was for implementing a feature. The rollback, similarly, will only touch one thing at a time.
A: I also like to commit after I finish a chunk of work, which is often several times a day. I think it's easier to see what's happening in small commits than big ones. If you're worried about too many commits, you may consider creating a branch and merging it back to the trunk when the whole feature is finished.
Here's a related blog post: Coding Horror: Check In Early, Check In Often
A: As others have stated, try to commit one logical chunk that is "complete" enough that it does not get in other devs' way (e.g., it builds and passes automated tests).
Each dev team / company must define what is "complete enough" for each branch. For example, you may have feature branches that require the code only to build, a Trunk that also requires code to pass automated tests, and labels indicating something has passed QA testing... or something like that.
I'm not saying that this is a good pattern to follow; I'm only pointing out that how done is "done" depends on your team's / company's policies.
A: I also like to check in regularly. That is, every time I have completed a step towards my goal.
This is typically every couple of hours.
My difficulty is finding someone willing and able to perform so many code reviews.
Our company policy is that we need to have a code review before we can check anything in, which makes sense, but there is not always someone in the department who has time to immediately perform a code review. Possible Solutions:
*
*More work per check in; less checkins == less reviews.
*Change the company checkin policy. If I have just done some refactoring and the unit tests all run green, maybe I can relax the rule?
*Shelve the change until someone can perform the review and continue working. This can be problematic if the reviewer does not like you code and you have to redesign. Juggling different stages of a task by 'shelving' changes can become messy.
A: I like this small article from Jeff Atwood: Check In Early, Check In Often
A: The moment you think about it.
(as long as what you check in is safe)
A: Depends on your source code system and what else you have in place. If you're using Git, then commit whenever you finish a step. I use SVN and I like to commit when I finish a whole feature, so, every one to five hours. If I were using CVS I'd do the same.
A: I agree with several of the responses: do not check in code that will not compile; use a personal branch or repository if your concern is having a "backup" of the code or its changes; check in when logical units are complete.
One other thing that I would add is that depending on your environment, the check-in rate may vary with time. For example, early in a project checking in after each functional piece of a component is complete makes sense for both safety and having a revision history (I am thinking of cases where earlier bits get refactored as later ones are being developed). Later in the project, on the other hand, entirely complete functionality becomes more important, especially during integration development/testing. A half-integration or half-fix does not help anyone.
As for checking in after each bug fix: unless the fix is trivial, absolutely! Nothing is more of a pain than finding that one check in contained three fixes and one of them needs to be rolled back. More often than not it seems that in that situation the developer fixed three bugs in one area and unwinding which change goes to which bug fix is a nightmare.
A: I personally commit every logical group of code that is finished/stable/compiles and try not to leave the day without committing what I did that day.
A: If you are making major changes and are concerned about affecting others working on the code, you can create a new branch, and then merge back into the trunk after your changes are complete.
A: Anytime I complete a "full thought" of code that compiles and runs, I check in. This usually ends up being anywhere between 15-60 minutes. Sometimes it could be longer, but I always try to check in if I have a lot of code changes that I wouldn't want to rewrite in case of failure. I also usually make sure my code compiles and I check in at the end of the work day before I go home.
I wouldn't worry about making "too many" commits/check-ins. It really sucks when you have to rewrite something, and it's nice to be able to rollback in small increments just in case.
A: I like to commit changes every 30-60 minutes, as long as it compiles cleanly and there are no regressions in unit tests.
A: Well, you could have your own branch to which you can commit as often as you like, and when you are done with your feature, you could merge it to the main trunk.
On the frequency of Commits, I think of it this way, how much pain would it be to me if my hard disk crashed and I hadn't committed something - the quantum of this something for me is about 2 hours of work.
Of course, I never commit something that doesn't compile.
A: At least once a day.
A: I don't have a specific time limit per commit, I tend to commit once a test has passed and I'm happy with the code. I wouldn;t commit code that does not compile or is otherwise in a state that I would not feel good about reverting to in case of failure
A: You have to balance the compromise between safety and recoverability on the one hand and ease of change management for the entire project on the other.
The best scheme that I've used has had two answers to that question.
We used 2 completely separate repositories : one was the project wide repository and the other was our own personal repository (we were using rcs at the time).
We would check into our personal repository very regularly, pretty much each time you saved your open files. As such the personal repository was basically a big, long ranging, undo buffer.
Once we had a chunk of code that would compile, tested ok and was accepted as being ready for general use it was checked into the project repository.
Unfortunately this system relied on the use of different VCS technologies to be workable. I've not found any satisfactory method of achieving the same results while using two VCSs of the same type (e.g. two Subversion repositories).
However, I have had acceptable results by creating "personal" development branches in a subversion repository - checking into the branch regularly and then merging into the trunk upon completion.
A: If you're working on a branch which won't be released, a commit is always safe.
However, if you are sharing it with other developers, committing non-working code is likely to be a bit annoying (particularly if it's in an important place). Normally I only commit code which is effectively "working" - not that it's been fully tested, but that I've ascertained that it does actually compile and not fail immediately.
If you're using an integrated bug tracker, it may be helpful to do separate commits if you've fixed two bugs, so that the commit log can go against the right bugs. But then again, sometimes one code change fixes two bugs, so then you just have to choose which one to put it against (unless your system allows one commit to be associated with multiple bugs)
A: I still believe in the phrase 'commit often, commit early'. I prefer decentralized VCS like Mercurial and there's no problem to commit several things and push it upstream later.
This is really a common question, but the real question is: Can you commit unfinished code?
A: Whenever you finish some code that works and won't screw anyone else up if they get it in an update.
And please make sure you comment properly.
A: If your version control comment is longer than one or two sentences, you probably aren't committing often enough.
A: I follow the open-source mantra (paraphrased) - commit early, commit often.
Basically whenever I think I've added useful functionality (however small) without introducing problems for other team members.
This commit-often strategy is particularly useful in continuous integration environments as it allows integration testing against other development efforts, giving early detection of problems.
A: I commit every time I'm done with a task. That usually takes 30 mins to 1 hr.
A: Don't commit code that doesn't actually work. Don't use your repository as a backup solution.
Instead, back up your incomplete code locally in an automated way. Time Machine takes care of me, and there are plenty of free programs for other platforms.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "225"
}
|
Q: Is QuickSilver dead? After having read that QuickSilver was no longer supported by BlackTree and has since gone open source, I noticed more and more people switching to/suggesting other app launchers, e.g. Butler and LaunchBar.
Is QuickSilver still relevant? Has anyone experienced any instability since it's gone open source?
A: It still runs stably for me. I would be miserable without it.
And yeah, I would recommend switching if you only use it for an "app launcher", but launching apps is like white belt Quicksilver. I don't know of any program that lets you simply tell your computer what to do in such a simple way. And even Spotlight won't remember the keys you usually type to identify an object or action.
Ubiquity for Firefox is pretty good, but it's locked inside a browser...
A: I haven't used OS X in a while, but the impression I get is that Spotlight has largely negated the reason for using a launcher in the first place. Quicksilver has some cool things like direct objects built in, but by and large it was mostly used for launching apps, and Spotlight can now do that just as fast.
A: I also gave up on QuickSilver for a while when Leopard came out. I tried Spotlight. I gave up on that and returned. QuickSilver is much faster, and it does so much more that I missed.
I have not noticed any instability (Leopard) running B54 (3815) - it looks like the open-source version is B56A3 though.
QuickSilver is awesome when integrated with Parallels/VMWare Fusion to launch Windows apps too. You don't get the deep integration as with the various OSX plugins, but it definitely helps the dual-OS usability.
A: Quicksilver is still alive and well. There are at least a couple of endeavours to keep it going and up to date, and to restructure and clean up the code base. Check out the code from Google Code.
As for launching apps, not even Spotlight comes close to how fast it is in Quicksilver.
Of course the real joy of Quicksilver is past just launching apps and using triggers, scripts and the many plugins. My workflow goes to a new level with Quicksilver. I'd be lost without it.
Update: Since posting this I switched and use LaunchBar for a while. This was during the time that QuickSilver seemed to be almost close to death. Loved LaunchBar and didn't need to switch back to QuickSilver. Recently though, I have left LaunchBar and have been using Alfred. I would highly recommend it. For me, LaunchBar and Alfred are pretty close. But, aesthetically and operationally, Alfred suits my tastes more than LaunchBar.
A: I love QS and agree that it is so productive that I am willing to put up with its flaws. I usually have to launch it several times before it gets up and running, though. To fix that issue I created a little quicksilver launcher app.
A: I use Quicksilver all day (on the latest version of OS X); and no, Spotlight doesn't negate it... Quicksilver is still much faster for launching applications.
A: After Quicksilver stopped being updated for a while, I migrated to LaunchBar. Quicksilver had some occasional crashes and could be very resource intensive. LaunchBar has largely the same functionality without these problems. It is not free though.
A: I didn't know Quicksilver wasn't being as actively supported.
It does all I need it to do at the moment though.
Just installed LaunchBar, but I can't set it to use Option + Space to "launch". I can't deal with it not using that; I'm too used to Spotlight on Command + Space, and Ctrl + Space is for VS 2008 :P
A: The one thing I do miss was using QS to quickly send attachments via email to people in my address book. Highlight the file, activate QS, Current Selection, tab, Mail To..., tab, person's name - it was just awesome.
After the 10.5.5 update, I find Spotlight solves 99% of the things I originally used Quicksilver for, and the speed is nearly identical now. Spotlight is invaluable for finding information when you may not remember where or when you last saw it. Unless a major rewrite of QS causes me to reevaluate it again, I suspect Spotlight will be all I need and use.
A: There are a couple branches out there that are active, I think I'm currently running B56 and loving it. I have too many scripts, triggers, objects that I rely on daily...I would be lost without it.
A: It's 2011 and it's still going strong!
A: QuickSilver is still alive, and well.
You can find the hub-website for all activities at http://qsapp.com/
GitHub (used for source code and issues tracking) is at https://github.com/quicksilver/Quicksilver
The latest version, B58 (3841) is quite stable on Snow Leopard (10.6.6).
A: No. It's back, baby.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: What is the optimal way to organize shared .net assemblies in SVN? We are starting a new SOA project with a lot of shared .net assemblies. The code for these assemblies will be stored in SVN.
In development phase, we would like to be able to code these assemblies as an entire solution with as little SVN 'friction' as possible.
When the project enters more of a maintenance mode, the assemblies will be maintained on an individual level.
Without making Branching, Tagging, and Automated Builds a maintenance nightmare, what's the best way to organize these libraries in SVN that also works well with the VS 2008 IDE?
Do you set up Trunk/Branches/Tags at each library level and try to spaghetti it all together somehow at compile time, or is it better to keep it all as one big project with code replicated here and there for simplicity? Is there a solution using svn:externals?
A: What we did at our company was to set up a tools repository, and then a project repository. The tools repository is a Subversion repository, organized as follows:
/svn/tools/
vendor1/
tool1/
1.0/
1.1/
latest = a copy of vendor1/tool1/1.1
tool2/
1.0/
1.5/
latest = a copy of vendor1/tool2/1.5
vendor2/
foo/
1.0.0/
1.1.0/
1.2.0/
latest = a copy of vendor2/foo/1.2.0
Every time we get a new version of a tool from a vendor, it is added under its vendor, name, and version number, and the 'latest' tag is updated.
[Clarification: this is NOT a typical source repository -- it's intended to store specific versions of 'installed' images. Thus /svn/tools/nunit/nunit2/2.4 would be the top of a directory tree containing the results of installing NUnit 2.4 to a directory and importing it into the tools repository. Source and examples may be present, but the primary focus is on executables and libraries that are necessary to use the tool. If we needed to modify a vendor tool, we'd do that in a separate repository, and release the result to this repository.]
One of the vendors is my company, and has a separate section for each tool, assembly, whatever that we release internally.
The projects repository is a standard Subversion repository, with trunks, tags, and branches as you normally expect. Any given project will look like:
/svn/
branches/
tags/
trunk/
foo/
source/
tools/
publish/
foo-build.xml (for NAnt)
foo.build (for MSBuild)
The tools directory has a Subversion svn:externals property set, that links in the appropriate version (either a specific version or 'latest') of each tool or assembly that is needed by that project. When the 'foo' project is built by CruiseControl.NET, the publish task will populate the 'publish' directory as the 'foo' assembly is intended to be deployed, and then executes the following subversion commands:
svn import publish /svn/tools/vendor2/foo/1.2.3
svn delete /svn/tools/vendor2/foo/latest
svn copy /svn/tools/vendor2/foo/1.2.3 /svn/tools/vendor2/foo/latest
Developers work on their projects as normal, and let the build automation take care of the details. A normal Subversion update will pull the latest versions of external tools as well as project updates.
If you've got a lot of tool interdependency, you can configure CruiseControl.NET (by hand) to trigger builds for subordinate projects when their dependencies change, but we haven't needed to go that far yet.
Note: All of the Subversion repository paths have been shortened for clarity. We actually use Apache+SVN, and two separate servers, but you should adapt this as you see fit.
A: What we did with shared assemblies during the development phase (in a project which had loads of these) was to put them on a network share (an N: drive type of place), and every developer referenced them from there.
Our build process would always update this share with the latest versions. This way the actual assemblies never had to be kept in source control. Only the code.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Change pointer to an array to get a specific array element I understand the overall meaning of pointers and references (or at least I think I do); I also understand that when I use new I am dynamically allocating memory.
My question is the following:
If I were to use cout << &p, it would display the "virtual memory location" of p.
Is there a way in which I could manipulate this "virtual memory location?"
For example, the following code shows an array of ints.
If I wanted to show the value of p[1] and I knew the "virtual memory location" of p, could I somehow do "&p + 1" and obtain the value of p[1] with cout << *p, which will now point to the second element in the array?
int *p;
p = new int[3];
p[0] = 13;
p[1] = 54;
p[2] = 42;
A: Sure, you can manipulate the pointer to access the different elements in the array, but you will need to manipulate the content of the pointer (i.e. the address of what p is pointing to), rather than the address of the pointer itself.
int *p = new int[3];
p[0] = 13;
p[1] = 54;
p[2] = 42;
cout << *p << ' ' << *(p+1) << ' ' << *(p+2);
Each addition (or subtraction) means the subsequent (prior) element in the array. If p points to a 4-byte variable (e.g. int on typical 32-bit PCs) at address, say, 12345, p+1 will point to 12349, and not 12346. Note that you change the address stored in p (or compute a new one from it) before dereferencing it to access the element you want.
A: Not quite. &p is the address of the pointer p. &p+1 will refer to an address which is one int* further along. What you want to do is
p=p+1; /* or ++p or p++ */
Now when you do
cout << *p;
You will get 54. The difference is, p contains the address of the start of the array of ints, while &p is the address of p. To move one item along, you need to point further into the int array, not further along your stack, which is where p lives.
If you only had &p then you would need to do the following:
int **q = &p; /* q now points to p */
*q = *q+1;
cout << *p;
That will also output 54 if I am not mistaken.
A: It's been a while (many years) since I worked with pointers but I know that if p is pointing at the beginning of the array (i.e. p[0]) and you incremented it (i.e. p++) then p will now be pointing at p[1].
I think that you have to de-reference p to get to the value. You dereference a pointer by putting a * in front of it.
So *p = 33 will change p[0] to 33.
I'm guessing that to get the second element you would use *(p+1) so the syntax you'd need would be:
cout << *(p+1)
or
cout << *(++p)
A: I like to do this:
&p[1]
To me it looks neater.
A: Think of "pointer types" in C and C++ as laying down a very long, logical row of cells superimposed on the bytes in the memory space of the CPU, starting at byte 0. The width of each cell, in bytes, depends on the "type" of the pointer. Each pointer type lays downs a row with differing cell widths. A "int *" pointer lays down a row of 4-byte cells, since the storage width of an int is 4 bytes. A "double *" lays down a 8-byte per-cell row; a "struct foo *" pointer lays down a row with each cell the width of a single "struct foo", whatever that is. The "address" of any "thing" is the byte offset, starting at 0, of the cell in the row holding the "thing".Pointer arithmetic is based on cells in the row, not bytes. "*(p+10)" is a reference to the 10th cell past "p", where the cell size is determined by the type of p. If the type of "p" is "int", the address of "p+10" is 40 bytes past p; if p is a pointer to a struct 1000 bytes long, "p+10" is 10,000 bytes past p. (Note that the compiler gets to choose an optimal size for a struct that may be larger than what you'd think; this is due to "padding" and "alignment". The 1000 byte struct discussed might actually take 1024 bytes per cell, for example, so "p+10" would actually be 10,240 bytes past p.)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Any suggestions for testing extjs code in a browser, preferably with selenium? We've been using selenium with great success to handle high-level website testing (in addition to extensive python doctests at a module level). However now we're using extjs for a lot of pages and its proving difficult to incorporate Selenium tests for the complex components like grids.
Has anyone had success writing automated tests for extjs-based web pages? Lots of googling finds people with similar problems, but few answers. Thanks!
A: This blog helped me a lot. He's written quite a lot on the topic and it seems like it's still active. The guy also seems to appreciate good design.
He basically talks about sending JavaScript to do queries, using the Ext.ComponentQuery.query method to retrieve components the same way you do internally in your Ext app. That way you can use xtypes and itemIds and don't have to worry about trying to parse any of the mad auto-generated stuff.
I found this article in particular very helpful.
Might post something a bit more detailed on here soon - still trying to get my head around how to do this properly
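To make the idea concrete, here is a minimal Java/WebDriver sketch of that technique -- the URL and the component query string ('grid[itemId=usersGrid]') are invented for illustration, and it assumes an Ext JS 4+ app where Ext.ComponentQuery is available:
import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class ExtComponentQueryExample {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://localhost:8080/app"); // hypothetical app URL

            // Ask Ext to resolve the component and hand back the id of its DOM element,
            // so the test never depends on auto-generated ids like "ext-gen-345".
            String domId = (String) ((JavascriptExecutor) driver).executeScript(
                "return Ext.ComponentQuery.query('grid[itemId=usersGrid]')[0].getEl().dom.id;");

            WebElement grid = driver.findElement(By.id(domId));
            System.out.println("Found grid element: " + grid.getTagName());
        } finally {
            driver.quit();
        }
    }
}
The same executeScript bridge can call component methods directly (selecting rows, expanding panels, and so on), which is usually more robust than clicking on the generated markup.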
A: I have been testing my ExtJs web application with selenium. One of the biggest problem was selecting an item in the grid in order to do something with it.
For this, I wrote helper method (in SeleniumExtJsUtils class which is a collection of useful methods for easier interaction with ExtJs):
/**
* Javascript needed to execute in order to select row in the grid
*
* @param gridId Grid id
* @param rowIndex Index of the row to select
* @return Javascript to select row
*/
public static String selectGridRow(String gridId, int rowIndex) {
return "Ext.getCmp('" + gridId + "').getSelectionModel().selectRow(" + rowIndex + ", true)";
}
and when I needed to select a row, I'd just call:
selenium.runScript( SeleniumExtJsUtils.selectGridRow("<myGridId>", 5) );
For this to work I need to set my id on the grid and not let ExtJs generate it's own.
A: To detect that an element is visible you use the clause:
not(contains(@style, "display: none"))
It's better to use this:
visible_clause = "not(ancestor::*[contains(@style,'display: none')" +
" or contains(@style, 'visibility: hidden') " +
" or contains(@class,'x-hide-display')])"
hidden_clause = "parent::*[contains(@style,'display: none')" +
" or contains(@style, 'visibility: hidden')" +
" or contains(@class,'x-hide-display')]"
A: Can you provide more insight into the types of problems you're having with extjs testing?
One Selenium extension I find useful is waitForCondition. If your problem seems to be trouble with the Ajax events, you can use waitForCondition to wait for events to happen.
A: Ext JS web pages can be tricky to test, because of the complicated HTML they end up generating like with Ext JS grids.
HTML5 Robot deals with this by using a series of best practices for how to reliably lookup and interact with components based on attributes and conditions which are not dynamic. It then provides shortcuts for doing this with all of the HTML, Ext JS, and Sencha Touch components that you would need to interact with. It comes in 2 flavors:
*
*Java - Familiar Selenium and JUnit based API that has built in web driver support for all modern browsers.
*Gwen - A human style language for quickly and easily creating and maintaining browser tests, which comes with its own integrated development environment. All of which is based on the Java API.
For example if you were wanting to find the Ext JS grid row containing the text "Foo", you could do the following in Java:
findExtJsGridRow("Foo");
...and you could do the following in Gwen:
extjsgridrow by text "Foo"
There is a lot of documentation for both Java and Gwen for how to work with Ext JS specific components. The documentation also details the resulting HTML for all of these Ext JS components, which you also may find useful.
A: A useful tip for fetching a grid via the grid's id on the page:
I think you can build more useful functions on top of this approach.
sub get_grid_row {
my ($browser, $grid, $row) = @_;
my $script = "var doc = this.browserbot.getCurrentWindow().document;\n" .
"var grid = doc.getElementById('$grid');\n" .
"var table = grid.getElementsByTagName('table');\n" .
"var result = '';\n" .
"var row = 0;\n" .
"for (var i = 0; i < table.length; i++) {\n" .
" if (table[i].className == 'x-grid3-row-table') {\n".
" row++;\n" .
" if (row == $row) {\n" .
" var cols_len = table[i].rows[0].cells.length;\n" .
" for (var j = 0; j < cols_len; j++) {\n" .
" var cell = table[i].rows[0].cells[j];\n" .
" if (result.length == 0) {\n" .
" result = getText(cell);\n" .
" } else { \n" .
" result += '|' + getText(cell);\n" .
" }\n" .
" }\n" .
" }\n" .
" }\n" .
"}\n" .
"result;\n";
my $result = $browser->get_eval($script);
my @res = split('\|', $result);
return @res;
}
A: Easier testing through custom HTML data- attributes
From the Sencha documentation:
An itemId can be used as an alternative way to get a reference to a component when no object reference is available. Instead of using an id with Ext.getCmp, use itemId with Ext.container.Container.getComponent which will retrieve itemId's or id's. Since itemId's are an index to the container's internal MixedCollection, the itemId is scoped locally to the container -- avoiding potential conflicts with Ext.ComponentManager which requires a unique id.
Overriding the Ext.AbstractComponent's onBoxReady method, I set a custom data attribute (whose name comes from my custom testIdAttr property of each component) to the component's itemId value, if it exists. Add the Testing.overrides.AbstractComponent class to your application.js file's requires array.
/**
* Overrides the Ext.AbstractComponent's onBoxReady
* method to add custom data attributes to the
* component's dom structure.
*
* @author Brian Wendt
*/
Ext.define('Testing.overrides.AbstractComponent', {
override: 'Ext.AbstractComponent',
onBoxReady: function () {
var me = this,
el = me.getEl();
if (el && el.dom && me.itemId) {
el.dom.setAttribute(me.testIdAttr || 'data-selenium-id', me.itemId);
}
me.callOverridden(arguments);
}
});
This method provides developers with a way to reuse a descriptive identifier within their code and to have those identifiers available each time the page is rendered. No more searching through non-descriptive, dynamically-generated ids.
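Once the override is in place, tests can locate components by that stable attribute instead of by generated ids. A minimal Java helper, assuming the default 'data-selenium-id' attribute name from the override above:
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class TestIdLocator {
    // Finds the component whose itemId was exposed via the data-selenium-id attribute.
    public static WebElement byTestId(WebDriver driver, String itemId) {
        return driver.findElement(
            By.cssSelector("[data-selenium-id='" + itemId + "']"));
    }
}
A call like byTestId(driver, "saveButton").click() then works against any render of the page; 'saveButton' here is a hypothetical itemId.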
A: We are developing a testing framework that uses Selenium and encountered problems with ExtJS (since it renders on the client side). I find it useful to look for an element once the DOM is ready.
public static boolean waitUntilDOMIsReady(WebDriver driver) {
    // DEFAULT_WAIT_SECONDS * 10 tries of 100 ms each = DEFAULT_WAIT_SECONDS seconds total
    def maxTries = DEFAULT_WAIT_SECONDS * 10
    for (count in 1..maxTries) {
        Thread.sleep(100)
        if (isDOMReady(driver)) {
            return true
        }
    }
    return false  // timed out waiting for the DOM
}
public static boolean isDOMReady(WebDriver driver) {
    // only "complete" means the document has finished loading
    return ((JavascriptExecutor) driver).executeScript("return document.readyState") == "complete"
}
A: The biggest hurdle in testing ExtJS with Selenium is that ExtJS doesn't render standard HTML elements and the Selenium IDE will naively (and rightfully) generate commands targeted at elements that just act as decor -- superfluous elements that help ExtJS with the whole desktop-look-and-feel. Here are a few tips and tricks that I've gathered while writing automated Selenium test against an ExtJS app.
General Tips
Locating Elements
When generating Selenium test cases by recording user actions with Selenium IDE on Firefox, Selenium will base the recorded actions on the ids of the HTML elements. However, for most clickable elements, ExtJS uses generated ids like "ext-gen-345" which are likely to change on a subsequent visit to the same page, even if no code changes have been made. After recording user actions for a test, there needs to be a manual effort to go through all such actions that depend on generated ids and to replace them. There are two types of replacements that can be made:
Replacing an Id Locator with a CSS or XPath Locator
CSS locators begin with "css=" and XPath locators begin with "//" (the "xpath=" prefix is optional). CSS locators are less verbose and are easier to read and should be preferred over XPath locators. However, there can be cases where XPath locators need to be used because a CSS locator simply can't cut it.
Executing JavaScript
Some elements require more than simple mouse/keyboard interactions due to the complex rendering carried out by ExtJS. For example, an Ext.form.ComboBox is not really a <select> element but a text input with a detached drop-down list that's somewhere at the bottom of the document tree. In order to properly simulate a ComboBox selection, it's possible to first simulate a click on the drop-down arrow and then to click on the list that appears. However, locating these elements through CSS or XPath locators can be cumbersome. An alternative is to locate the ComboBox component itself and call methods on it to simulate the selection:
var combo = Ext.getCmp('genderComboBox'); // returns the ComboBox components
combo.setValue('female'); // set the value
combo.fireEvent('select'); // because setValue() doesn't trigger the event
In Selenium the runScript command can be used to perform the above operation in a more concise form:
with (Ext.getCmp('genderComboBox')) { setValue('female'); fireEvent('select'); }
Coping with AJAX and Slow Rendering
Selenium has "*AndWait" flavors for all commands for waiting for page loads when a user action results in page transitions or reloads. However, since AJAX fetches don't involve actual page loads, these commands can't be used for synchronization. The solution is to make use of visual clues like the presence/absence of an AJAX progress indicator or the appearance of rows in a grid, additional components, links etc. For example:
Command: waitForElementNotPresent
Target: css=div:contains('Loading...')
Sometimes an element will appear only after a certain amount of time, depending on how fast ExtJS renders components after a user action results in a view change. Instead of using arbitrary delays with the pause command, the ideal method is to wait until the element of interest comes within our grasp. For example, to click on an item after waiting for it to appear:
Command: waitForElementPresent
Target: css=span:contains('Do the funky thing')
Command: click
Target: css=span:contains('Do the funky thing')
Relying on arbitrary pauses is not a good idea since timing differences that result from running the tests in different browsers or on different machines will make the test cases flaky.
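For tests written against the WebDriver Java API rather than Selenium IDE, the same wait-then-act pattern is expressed with an explicit wait. A minimal sketch, assuming the Selenium 2/3-era WebDriverWait constructor that takes a timeout in seconds:
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitThenClick {
    // Polls until the element becomes clickable, then clicks it.
    // A condition-based wait avoids the flaky fixed pauses described above.
    public static void clickWhenReady(WebDriver driver, By locator, long timeoutSeconds) {
        WebDriverWait wait = new WebDriverWait(driver, timeoutSeconds);
        WebElement el = wait.until(ExpectedConditions.elementToBeClickable(locator));
        el.click();
    }
}
A call like clickWhenReady(driver, By.xpath("//span[text()='Do the funky thing']"), 10) is the WebDriver analogue of the IDE commands above; note that the css=span:contains(...) syntax is Selenium-RC-specific and needs an XPath equivalent in WebDriver.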
Non-clickable Items
Some elements can't be triggered by the click command. This is because the event listener is actually on the container, watching for mouse events on its child elements that eventually bubble up to the parent. The tab control is one example. To click on a tab, you have to simulate a mouseDown event at the tab label:
Command: mouseDownAt
Target: css=.x-tab-strip-text:contains('Options')
Value: 0,0
Field Validation
Form fields (Ext.form.* components) that have associated regular expressions or vtypes for validation will trigger validation with a certain delay (see the validationDelay property which is set to 250ms by default), after the user enters text or immediately when the field loses focus -- or blurs (see the validateOnDelay property). In order to trigger field validation after issuing the type Selenium command to enter some text inside a field, you have to do either of the following:
*
*Triggering Delayed Validation
ExtJS fires off the validation delay timer when the field receives keyup events. To trigger this timer, simply issue a dummy keyup event (it doesn't matter which key you use as ExtJS ignores it), followed by a short pause that is longer than the validationDelay:
Command: keyUp
Target: someTextArea
Value: x
Command: pause
Target: 500
*Triggering Immediate Validation
You can inject a blur event into the field to trigger immediate validation:
Command: runScript
Target: someComponent.nameTextField.fireEvent("blur")
Checking for Validation Results
Following validation, you can check for the presence or absence of an error field:
Command: verifyElementNotPresent
Target: //*[@id="nameTextField"]/../*[@class="x-form-invalid-msg" and not(contains(@style, "display: none"))]
Command: verifyElementPresent
Target: //*[@id="nameTextField"]/../*[@class="x-form-invalid-msg" and not(contains(@style, "display: none"))]
Note that the "display: none" check is necessary because once an error field is shown and then it needs to be hidden, ExtJS will simply hide error field instead of entirely removing it from the DOM tree.
Element-specific Tips
Clicking an Ext.form.Button
*
*Option 1
Command: click
Target: css=button:contains('Save')
Selects the button by its caption
*Option 2
Command: click
Target: css=#save-options button
Selects the button by its id
Selecting a Value from an Ext.form.ComboBox
Command: runScript
Target: with (Ext.getCmp('genderComboBox')) { setValue('female'); fireEvent('select'); }
First sets the value and then explicitly fires the select event in case there are observers.
A: For complex UI that is not plain HTML, XPath is always something you can count on, though it gets a little complex with the different UI structures ExtJS generates.
You can use the Firebug and FirePath Firefox extensions to test a certain element's XPath, and simply pass the full XPath as a parameter to Selenium.
For example in java code:
String fullXpath = "xpath=//div[@id='mainDiv']//div[contains(@class,'x-grid-row')]//table/tbody/tr[1]/td[1]//button"
selenium.click(fullXpath);
A: When I was testing an ExtJS application using WebDriver, I used the following approach: I looked for the field by the label's text and got the @for attribute from the label. For example, we have a label
<label id="dynamic_id_label" class="TextboxLabel" for="textField_which_I_am_lloking_for">
Name Of Needed Label
</label>
And we need to point WebDriver at the input: //input[@id=(//label[contains(text(),'Name Of Needed Label')]/@for)].
So, it will pick the id from the @for attribute and use it further. This is probably the simplest case, but it gives you a way to locate the element. It is much harder when you have no label; then you need to find some other element and write your XPath looking for siblings and descendant/ancestor elements.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "92"
}
|
Q: How do I get the value of a JSObject property from C? In SpiderMonkey, how do I get the value of a property of a JSObject from within my C code?
static JSBool
JSD_getter(JSContext *cx, JSObject *obj, uintN argc, jsval *argv, jsval *rval)
{
    jsval js_id;
    JS_GetProperty(cx, obj, "id", &js_id);  // js_id now holds obj.id as a JavaScript value

    int32 c_id;
    JS_ValueToInt32(cx, js_id, &c_id);      // c_id now contains the actual value
                                            // of obj.id, as a C native type
    return JS_TRUE;
}
A: JS_GetProperty()
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Using a USB controller as auxiliary keyboard for Visual Studio Our family no longer uses our Mixman DM2 USB controller for making music. This frees it up for me to use as an auxiliary keyboard with 31 "keys" (and a few "sliders"). I had the crazy idea to use these buttons to send keyboard shortcuts to Visual Studio. It just seems easier pressing one key than some of the finger-bending ctrl double-key combos.
I tried a couple utilities like JoyToKey and XPadder but they only recognize game controllers and the DM2 falls into the more generic "USB Controller" category.
Have you ever heard of such nonsense?
Clarify Question: Are you aware of a utility to read inputs from a generic USB Controller and map them to keyboard key presses? -OR- Are you aware of a Visual Studio add-in that will read from a generic USB Controller?
A: Haven't heard of it before, but I can't wait to see if you can make it useful!
You can assign Visual Studio functions to "chords" of key combinations, right? So maybe you could play shave-and-a-haircut to start a build.
A: Yeah, I was thinking about doing something similar to that myself. I'm pretty sure you'd have to write your own driver for it, though.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: I need populate a repeater with pseudo-related data Before I do this I figured I would ask if it was the best way. Each "Vendor" object has a "Bucket" object. In my repeater I need to display some properties from Vendor and some from Bucket, also some images populated by FileSystem that are linked to the vendor. I figured the best way to do this is bind the repeater with the vendor object, then on ItemDataBound I would populate the images and the buckets based on the vendor that is bound to that particular Items[e.Item.ItemIndex].
Is this the best way to go about this?
A: That is how I usually go about it, bind on the main object and deal with the details in the ItemDataBound.
A: If the Vendor object can only hold a single Bucket object it may be appropriate to bind it all in a single, top-level repeater. You can access the Bucket through simple data-binding all at the top level without overriding ItemDataBound.
Because you're most likely binding the "Vendor", you have access to its members in a databind if you want to do it this way:
<%# DataBinder.Eval (Container.DataItem, "Bucket.Property" ) %>
You want to do the ItemDataBound if you must "process something" during each iteration of the binding and need detailed access to each Vendor object for decision making.
If the Vendor object can hold multiple Buckets, then the best way to get access to that is through ItemDataBound. On each iteration of the Vendor you could bind a new, nested repeater to display the bucket data, or perform whatever repeating/aggregation functionality you may need.
Depending on how you want it to behave for your client, you could render the Vendors only. When the user clicks on a Vendor (or whatever), you could perform an AJAX call to the server which would retrieve the Bucket data and render it into your page dynamically. You may want to try that approach if there is a large number of vendors along with their buckets being rendered. This would help database performance and page render time, in contrast to building it all on the ASPX server side. (But this would need to be a lot of data; justify it on usability/client grounds before trying to claim performance gains.)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Regex index in matching string where the match failed I am wondering if it is possible to extract the index position in a given string where a Regex failed when trying to match it?
For example, if my regex was "abc" and I tried to match that with "abd" the match would fail at index 2.
Edit for clarification. The reason I need this is to allow me to simplify the parsing component of my application. The application is an Assembly language teaching tool which allows students to write, compile, and execute assembly-like programs.
Currently I have a tokenizer class which converts input strings into Tokens using regex's. This works very well. For example:
The tokenizer would produce the following tokens given the following input = "INP :x:":
Token.OPCODE, Token.WHITESPACE, Token.LABEL, Token.EOL
These tokens are then analysed to ensure they conform to a syntax for a given statement. Currently this is done using IF statements and is proving cumbersome. The upside of this approach is that I can provide detailed error messages, e.g.:
if(token[2] != Token.LABEL) { throw new SyntaxError("Expected label");}
I want to use a regular expression to define a syntax instead of the annoying IF statements. But in doing so I lose the ability to return detailed error reports. I therefore would at least like to inform the user of WHERE the error occurred.
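For illustration, the tokenizer itself can report the failing position rather than asking the regex engine for it: keep a scan offset, anchor each token pattern at that offset, and report the offset where nothing matches. The question's code is C#; this is a minimal Java sketch with made-up token patterns standing in for OPCODE/WHITESPACE/LABEL:
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TokenizerErrorDemo {
    // Invented token patterns for illustration only.
    private static final Pattern TOKEN =
            Pattern.compile("(INP|OUT|ADD)|(\\s+)|(:[a-z]+:)");

    public static void main(String[] args) {
        String input = "INP ;x:";   // the ';' makes tokenizing fail at index 4
        Matcher m = TOKEN.matcher(input);
        int pos = 0;
        while (pos < input.length()) {
            m.region(pos, input.length());
            if (m.lookingAt()) {
                System.out.printf("token '%s' at index %d%n", m.group(), pos);
                pos = m.end();   // advance past the recognised token
            } else {
                System.out.printf("Syntax error at index %d%n", pos);
                return;
            }
        }
    }
}
The same position-tracking idea carries over to .NET: match an anchored pattern (one starting with \G) using the Regex.Match(input, startAt) overload and report startAt when nothing matches.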
A: I agree with Colin Younger, I don't think it is possible with the existing Regex class. However, I think it is doable if you are willing to sweat a little:
*
*Get the Regex class source code (e.g. http://www.codeplex.com/NetMassDownloader to download the .Net source).
*Change the code to have a readonly property with the failure index.
*Make sure your code uses that Regex rather than Microsoft's.
A: I guess such an index would only have meaning in some simple case, like in your example.
If you take a regex like "ab*c*z" (where by * I mean any character) and a string "abbbcbbcdd", what should the index you are talking about be?
It will depend on the algorithm used for matching...
It could fail on "abbbc..." or on "abbbcbbc..."
A: I don't believe it's possible, but I am intrigued why you would want it.
A: In order to do that you would need either callbacks embedded in the regex (which AFAIK C# doesn't support) or preferably hooks into the regex engine. Even then, it's not clear what result you would want if backtracking was involved.
A: It is not possible to tell where a regex fails; as a result you need to take a different approach. You need to compare strings. Use a regex to remove all the things that could vary, and compare the result with the string that you know does not change.
I ran into the same problem, came across your answer, and had to work out my own solution. Here it is:
https://stackoverflow.com/a/11730035/637142
Hope it helps.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: What's the difference between a POST and a PUT HTTP REQUEST? They both seem to be sending data to the server inside the body, so what makes them different?
A: PUT is meant as a method for "uploading" stuff to a particular URI, or overwriting what is already at that URI.
POST, on the other hand, is a way of submitting data RELATED to a given URI.
Refer to the HTTP RFC
A: The difference between POST and PUT is that PUT is idempotent; that means calling the same PUT request multiple times will always produce the same result (no additional side effect), while calling a POST request repeatedly may have the (additional) side effect of creating the same resource multiple times.
GET: Requests using GET only retrieve data; a GET requests a representation of the specified resource.
POST: Sends data to the server to create a resource. The type of the body of the request is indicated by the Content-Type header. It often causes a change in state or side effects on the server.
PUT: Creates a new resource or replaces a representation of the target resource with the request payload.
PATCH: Used to apply partial modifications to a resource.
DELETE: Deletes the specified resource.
TRACE: Performs a message loop-back test along the path to the target resource, providing a useful debugging mechanism.
OPTIONS: Used to describe the communication options for the target resource; the client can specify a URL for the OPTIONS method, or an asterisk (*) to refer to the entire server.
HEAD: Asks for a response identical to that of a GET request, but without the response body.
CONNECT: Establishes a tunnel to the server identified by the target resource; can be used to access websites that use SSL (HTTPS).
A: It would be worth mentioning that POST is subject to some common Cross-Site Request Forgery (CSRF) attacks while PUT isn't.
The CSRF below are not possible with PUT when the victim visits attackersite.com.
The effect of the attack is that the victim unintentionally deletes a user just because it (the victim) was logged-in as admin on target.site.com, before visiting attackersite.com:
Malicious code on attackersite.com:
Case 1: Normal request. saved target.site.com cookies will automatically be sent by the browser: (note: supporting PUT only, at the endpoint, is safer because it is not a supported <form> attribute value)
<!--deletes user with id 5-->
<form id="myform" method="post" action="http://target.site.com/deleteUser" >
<input type="hidden" name="userId" value="5">
</form>
<script>document.createElement('form').submit.call(document.getElementById('myform'));</script>
Case 2: XHR request. saved target.site.com cookies will automatically be sent by the browser: (note: supporting PUT only, at the endpoint, is safer because an attempt to send PUT would trigger a preflight request, whose response would prevent the browser from requesting the deleteUser page)
//deletes user with id 5
var xhr = new XMLHttpRequest();
xhr.open("POST", "http://target.site.com/deleteUser");
xhr.withCredentials=true;
xhr.send(["userId=5"]);
MDN Ref: [..] Unlike "simple requests" (discussed above) -- meaning POST/GET/HEAD -- for "preflighted" requests the browser first sends an HTTP request using the OPTIONS method [..]
cors in action : [..]Certain types of requests, such as DELETE or PUT, need to go a step further and ask for the server’s permission before making the actual request[..]what is called a preflight request[..]
A: In simple words you can say:
1. HTTP GET: used to get one or more items.
2. HTTP POST: used to create an item.
3. HTTP PUT: used to update an item.
4. HTTP PATCH: used to partially update an item.
5. HTTP DELETE: used to delete an item.
A: As far as I know, PUT is mostly used to update records.
*
*POST - To create a document or any other resource
*PUT - To update the created document or any other resource.
But to be clear, PUT usually 'replaces' the existing record if it is there, and creates it if it is not there.
A: REST-ful usage
POST is used to create a new resource and then returns the resource URI
EX
REQUEST : POST ..../books
{
"book":"booName",
"author":"authorName"
}
This call may create a new book and return that book's URI
Response ...THE-NEW-RESOURCE-URI/books/5
PUT is used to replace a resource: if that resource exists then simply update it, but if that resource doesn't exist then create it.
REQUEST : PUT ..../books/5
{
"book":"booName",
"author":"authorName"
}
With PUT we know the resource identifier, but POST will return the new resource identifier
Non REST-ful usage
POST is used to initiate an action on the server side. This action may or may not create a resource, but it will always have side effects; it will change something on the server.
PUT is used to place or replace literal content at a specific URL
Another difference in both REST-ful and non REST-ful styles
POST is a Non-Idempotent Operation: it may cause additional changes if executed multiple times with the same request.
PUT is an Idempotent Operation: it will have no additional side effects if executed multiple times with the same request.
A: Actually there's no difference other than their title. There's actually a basic difference between GET and the others. With a "GET" request method, you send the data in the URL address line, separated from the path by a question mark, with individual parameters separated by & signs.
But with a "POST"-request method, you can't pass data through the url, but you have to pass the data as an object in the so called "body" of the request. On the server side, you have then to read out the body of the received content in order to get the sent data.
But there's on the other side no possibility to send content in the body, when you send a "GET"-Request.
The claim, that "GET" is only for getting data and "POST" is for posting data, is absolutely wrong. Noone can prevent you from creating new content, deleting existing content, editing existing content or do whatever in the backend, based on the data, that is sent by the "GET" request or by the "POST" request. And nobody can prevent you to code the backend in a way, that with a "POST"-Request, the client asks for some data.
With a request, no matter which method you use, you call a URL and send or don't send some data to specify, which information you want to pass to the server to deal with your request, and then the client gets an answer from the server. The data can contain whatever you want to send, the backend is allowed to do whatever it wants with the data and the response can contain any information, that you want to put in there.
There are only these two BASIC METHODS, GET and POST. But it's their structure that makes them different, not what you code in the backend. In the backend you can do whatever you want with the received data. But with the "POST" request you have to send/retrieve the data in the body and not in the URL address line, and with a "GET" request you have to send/retrieve data in the URL address line and not in the body. That's all.
All the other methods, like "PUT", "DELETE" and so on, they have the same structure as "POST".
The POST method is mainly used if you want to hide the content somewhat, because whatever you write in the URL address line will be saved in the cache, and a GET request is the same as writing a URL address line with data. So if you want to send sensitive data, which is not always necessarily a username and password, but for example some ids or hashes which you don't want shown in the URL address line, then you should use the POST method.
Also, the URL address line's length is limited (on the order of a couple of thousand characters, depending on the browser and server), whereas the "POST" body is not restricted in the same way. So if you have a bigger amount of data, you might not be able to send it with a GET request; you'll need to use the POST request. So this is another plus point for the POST request.
But dealing with the GET-request is way easier, when you don't have complicated text to send.
Another plus point for the POST method: with the GET method you need to URL-encode the text in order to be able to send certain symbols within the text, or even spaces. But with the POST method your content doesn't need to be changed or manipulated in any way.
A: Only semantics.
An HTTP PUT is supposed to accept the body of the request, and then store that at the resource identified by the URI.
An HTTP POST is more general. It is supposed to initiate an action on the server. That action could be to store the request body at the resource identified by the URI, or it could be a different URI, or it could be a different action.
PUT is like a file upload. A put to a URI affects exactly that URI. A POST to a URI could have any effect at all.
A: Define operations in terms of HTTP methods
The HTTP protocol defines a number of methods that assign semantic meaning to a request. The common HTTP methods used by most RESTful web APIs are:
GET retrieves a representation of the resource at the specified URI. The body of the response message contains the details of the requested resource.
POST creates a new resource at the specified URI. The body of the request message provides the details of the new resource. Note that POST can also be used to trigger operations that don't actually create resources.
PUT either creates or replaces the resource at the specified URI. The body of the request message specifies the resource to be created or updated.
PATCH performs a partial update of a resource. The request body specifies the set of changes to apply to the resource.
DELETE removes the resource at the specified URI.
The effect of a specific request should depend on whether the resource is a collection or an individual item. The following table summarizes the common conventions adopted by most RESTful implementations using the e-commerce example. Not all of these requests might be implemented—it depends on the specific scenario.
Resource            | POST                              | GET                                 | PUT                                           | DELETE
/customers          | Create a new customer             | Retrieve all customers              | Bulk update of customers                      | Remove all customers
/customers/1        | Error                             | Retrieve the details for customer 1 | Update the details of customer 1 if it exists | Remove customer 1
/customers/1/orders | Create a new order for customer 1 | Retrieve all orders for customer 1  | Bulk update of orders for customer 1          | Remove all orders for customer 1
The differences between POST, PUT, and PATCH can be confusing.
A POST request creates a resource. The server assigns a URI for the new resource and returns that URI to the client. In the REST model, you frequently apply POST requests to collections. The new resource is added to the collection. A POST request can also be used to submit data for processing to an existing resource, without any new resource being created.
A PUT request creates a resource or updates an existing resource. The client specifies the URI for the resource. The request body contains a complete representation of the resource. If a resource with this URI already exists, it is replaced. Otherwise, a new resource is created, if the server supports doing so. PUT requests are most frequently applied to resources that are individual items, such as a specific customer, rather than collections. A server might support updates but not creation via PUT. Whether to support creation via PUT depends on whether the client can meaningfully assign a URI to a resource before it exists. If not, then use POST to create resources and PUT or PATCH to update.
A PATCH request performs a partial update to an existing resource. The client specifies the URI for the resource. The request body specifies a set of changes to apply to the resource. This can be more efficient than using PUT, because the client only sends the changes, not the entire representation of the resource. Technically PATCH can also create a new resource (by specifying a set of updates to a "null" resource), if the server supports this.
PUT requests must be idempotent. If a client submits the same PUT request multiple times, the results should always be the same (the same resource will be modified with the same values). POST and PATCH requests are not guaranteed to be idempotent.
A: Others have already posted excellent answers, I just wanted to add that with most languages, frameworks, and use cases you'll be dealing with POST much, much more often than PUT. To the point where PUT, DELETE, etc. are basically trivia questions.
A: Please see: http://zacharyvoase.com/2009/07/03/http-post-put-diff/
I’ve been getting pretty annoyed lately by a popular misconception by web developers that a POST is used to create a resource, and a PUT is used to update/change one.
If you take a look at page 55 of RFC 2616 (“Hypertext Transfer Protocol – HTTP/1.1”), Section 9.6 (“PUT”), you’ll see what PUT is actually for:
The PUT method requests that the enclosed entity be stored under the supplied Request-URI.
There’s also a handy paragraph to explain the difference between POST and PUT:
The fundamental difference between the POST and PUT requests is reflected in the different meaning of the Request-URI. The URI in a POST request identifies the resource that will handle the enclosed entity. That resource might be a data-accepting process, a gateway to some other protocol, or a separate entity that accepts annotations. In contrast, the URI in a PUT request identifies the entity enclosed with the request – the user agent knows what URI is intended and the server MUST NOT attempt to apply the request to some other resource.
It doesn’t mention anything about the difference between updating/creating, because that’s not what it’s about. It’s about the difference between this:
obj.set_attribute(value) # A POST request.
And this:
obj.attribute = value # A PUT request.
So please, stop the spread of this popular misconception. Read your RFCs.
A: To give examples of REST-style resources:
POST /books with a bunch of book information might create a new book, and respond with the new URL identifying that book: /books/5.
PUT /books/5 would have to either create a new book with the ID of 5, or replace the existing book with ID 5.
In non-resource style, POST can be used for just about anything that has a side effect. One other difference is that PUT should be idempotent: multiple PUTs of the same data to the same URL should be fine, whereas multiple POSTs might create multiple objects or whatever it is your POST action does.
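To make that concrete, here is a hedged sketch of the two calls using Java 11's java.net.http client; the host api.example.com, the /books paths, and the JSON body are all placeholders, not a real API:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PostVsPutDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String book = "{\"title\":\"Example Book\",\"author\":\"A. Author\"}";

        // POST to the collection: the server chooses the new URI
        // (conventionally returned in the Location header, e.g. /books/5).
        HttpRequest post = HttpRequest.newBuilder(URI.create("https://api.example.com/books"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(book))
                .build();
        HttpResponse<String> created = client.send(post, HttpResponse.BodyHandlers.ofString());
        System.out.println(created.headers().firstValue("Location").orElse("(no Location header)"));

        // PUT to a known URI: the client names the resource, and repeating
        // the request leaves the same single resource in place (idempotent).
        HttpRequest put = HttpRequest.newBuilder(URI.create("https://api.example.com/books/5"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(book))
                .build();
        HttpResponse<String> replaced = client.send(put, HttpResponse.BodyHandlers.ofString());
        System.out.println(replaced.statusCode());
    }
}
Sending the PUT twice leaves a single /books/5 behind; sending the POST twice typically yields two different books.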
A: A POST is considered something of a factory type method. You include data with it to create what you want and whatever is on the other end knows what to do with it. A PUT is used to update existing data at a given URL, or to create something new when you know what the URI is going to be and it doesn't already exist (as opposed to a POST which will create something and return a URL to it if necessary).
A: It should be pretty straightforward when to use one or the other, but complex wordings are a source of confusion for many of us.
When to use them:
*
*Use PUT when you want to modify a singular resource that is already part of a resource collection. PUT replaces the resource in its entirety. Example: PUT /resources/:resourceId
Sidenote: Use PATCH if you want to update a part of the resource.
*
*Use POST when you want to add a child resource under a collection of resources.
Example: POST => /resources
In general:
*
*Generally, in practice, always use PUT for UPDATE operations.
*Always use POST for CREATE operations.
Example:
GET /company/reports => Get all reports
GET /company/reports/{id} => Get the report information identified by "id"
POST /company/reports => Create a new report
PUT /company/reports/{id} => Update the report information identified by "id"
PATCH /company/reports/{id} => Update a part of the report information identified by "id"
DELETE /company/reports/{id} => Delete report by "id"
A: *
*GET: Retrieves data from the server. Should have no other effect.
*PUT: Replaces target resource with the request payload. Can be used to update or create a new resource.
*PATCH: Similar to PUT, but used to update only certain fields within an existing resource.
*POST: Performs resource-specific processing on the payload. Can be used for different actions including creating a new resource, uploading a file, or submitting a web form.
*DELETE: Removes data from the server.
*TRACE: Provides a way to test what the server receives. It simply returns what was sent.
*OPTIONS: Allows a client to get information about the request methods supported by a service. The relevant response header is Allow with supported methods. Also used in CORS as a preflight request to inform the server about the actual request method and ask about custom headers.
*HEAD: Returns only the response headers.
*CONNECT: Used by the browser when it knows it talks to a proxy and the final URI begins with https://. The intent of CONNECT is to allow end-to-end encrypted TLS sessions, so the data is unreadable to a proxy.
A: HTTP PUT:
PUT puts a file or resource at a specific URI, and exactly at that URI. If there's already a file or resource at that URI, PUT replaces that file or resource. If there is no file or resource there, PUT creates one. PUT is idempotent, but paradoxically PUT responses are not cacheable.
HTTP 1.1 RFC location for PUT
HTTP POST:
POST sends data to a specific URI and expects the resource at that URI to handle the request. The web server at this point can determine what to do with the data in the context of the specified resource. The POST method is not idempotent, however POST responses are cacheable so long as the server sets the appropriate Cache-Control and Expires headers.
The official HTTP RFC specifies POST to be:
*
*Annotation of existing resources;
*Posting a message to a bulletin board, newsgroup, mailing list, or similar group of articles;
*Providing a block of data, such as the result of submitting a form, to a data-handling process;
*Extending a database through an append operation.
HTTP 1.1 RFC location for POST
Difference between POST and PUT:
The RFC itself explains the core difference:
The fundamental difference between the POST and PUT requests is reflected in the different meaning of the Request-URI. The URI in a POST request identifies the resource that will handle the enclosed entity. That resource might be a data-accepting process, a gateway to some other protocol, or a separate entity that accepts annotations. In contrast, the URI in a PUT request identifies the entity enclosed with the request -- the user agent knows what URI is intended and the server MUST NOT attempt to apply the request to some other resource. If the server desires that the request be applied to a different URI, it MUST send a 301 (Moved Permanently) response; the user agent MAY then make its own decision regarding whether or not to redirect the request.
Additionally, and a bit more concisely, RFC 7231 Section 4.3.4 PUT states (emphasis added),
4.3.4. PUT
The PUT method requests that the state of the target resource be created or replaced with the state defined by the representation enclosed in the request message payload.
Using the right method, unrelated aside:
One benefit of REST ROA vs SOAP is that when using HTTP REST ROA, it encourages the proper usage of the HTTP verbs/methods. So for example you would only use PUT when you want to create a resource at that exact location. And you would never use GET to create or modify a resource.
A: Summary
*
*Use PUT to create or replace the state of the target resource with the state defined by the representation enclosed in the request. That standardized intended effect is idempotent so it informs intermediaries that they can repeat a request in case of communication failure.
*Use POST otherwise (including to create or replace the state of a resource other than the target resource). Its intended effect is not standardized so intermediaries cannot rely on any universal property.
References
The latest authoritative description of the semantic difference between the POST and PUT request methods is given in RFC 7231 (Roy Fielding, Julian Reschke, 2014):
The fundamental difference between the POST and PUT methods is highlighted by the different intent for the enclosed representation. The target resource in a POST request is intended to handle the enclosed representation according to the resource's own semantics, whereas the enclosed representation in a PUT request is defined as replacing the state of the target resource. Hence, the intent of PUT is idempotent and visible to intermediaries, even though the exact effect is only known by the origin server.
In other words, the intended effect of PUT is standardized (create or replace the state of the target resource with the state defined by the representation enclosed in the request) and so is common to all target resources, while the intended effect of POST is not standardized and so is specific to each target resource. Thus POST can be used for anything, including for achieving the intended effects of PUT and other request methods (GET, HEAD, DELETE, CONNECT, OPTIONS, and TRACE).
But it is recommended to always use the more specialized request method rather than POST when applicable because it provides more information to intermediaries for automating information retrieval (since GET, HEAD, OPTIONS, and TRACE are defined as safe), handling communication failure (since GET, HEAD, PUT, DELETE, OPTIONS, and TRACE are defined as idempotent), and optimizing cache performance (since GET and HEAD are defined as cacheable), as explained in It Is Okay to Use POST (Roy Fielding, 2009):
POST only becomes an issue when it is used in a situation for which some other method is ideally suited: e.g., retrieval of information that should be a representation of some resource (GET), complete replacement of a representation (PUT), or any of the other standardized methods that tell intermediaries something more valuable than “this may change something.” The other methods are more valuable to intermediaries because they say something about how failures can be automatically handled and how intermediate caches can optimize their behavior. POST does not have those characteristics, but that doesn’t mean we can live without it. POST serves many useful purposes in HTTP, including the general purpose of “this action isn’t worth standardizing.”
A: Both PUT and POST are REST methods.
PUT - If we make the same request twice using PUT with the same parameters both times, the second request will have no additional effect. This is why PUT is generally used for the update scenario; calling update more than once with the same parameters doesn't do anything more than the initial call does, hence PUT is idempotent.
POST is not idempotent: for instance, create will add two separate entries to the target, which is why CREATE is widely done with POST.
Making the same call using POST with the same parameters each time can cause two different things to happen, hence why POST is commonly used for the create scenario.
A: POST and PUT are mainly used to post data and to update data, respectively. But you can do the same with POST requests only.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1032"
}
|
Q: What are the down sides of using a composite/compound primary key? What are the down sides of using a composite/compound primary key?
A: There's nothing wrong with having a compound key per se, but a primary key should ideally be as small as possible (in terms of number of bytes required). If the primary key is long then this will cause non-clustered indexes to be bloated.
Bear in mind that the order of the columns in the primary key is important. The first column should be as selective as possible i.e. as 'unique' as possible. Searches on the first column will be able to seek, but searches just on the second column will have to scan, unless there is also a non-clustered index on the second column.
A: I think this is a specialisation of the synthetic key debate (whether to use meaningful keys or an arbitrary synthetic primary key). I come down almost completely on the synthetic key side of this debate for a number of reasons. These are a few of the more pertinent ones:
*
*You have to keep dependent child tables on the end of a foreign key up to date. If you change the value of one of the primary key fields (which can happen - see below) you have to somehow change all of the dependent tables where their PK value includes these fields. This is a bit tricky because changing key values will invalidate FK relationships with child tables, so you may (depending on the constraint validation options available on your platform) have to resort to tricks like copying the record to a new one and deleting the old records.
*On a deep schema the keys can get quite wide - I've seen 8 columns once.
*Changes in primary key values can be troublesome to identify in ETL processes loading off the system. The example I once had occasion to see was an MIS application extracting from an insurance underwriting system. On some occasions a policy entry would be re-used by the customer, changing the policy identifier. This was part of the primary key of the table. When this happens the warehouse load is not aware of what the old value was, so it cannot match the new data to it. The developer had to go searching through audit logs to identify the changed value.
Most of the issues with non-synthetic primary keys revolve around what happens when the PK values of records change. The most useful applications of non-synthetic values are where a database schema is intended to be queried directly, such as an M.I.S. application where report writers are using the tables directly. In this case short values with fixed domains, such as currency codes or dates, might reasonably be placed directly on the table for convenience.
A: *
*Could cause more problems for normalisation (2NF, "Note that when a 1NF table has no composite candidate keys (candidate keys consisting of more than one attribute), the table is automatically in 2NF")
*More unnecessary data duplication. If your composite key consists of 3 columns, you will need to create the same 3 columns in every table where it is used as a foreign key.
*Generally avoidable with the help of surrogate keys (read about their advantages and disadvantages)
*I can imagine a good scenario for a composite key -- in a table representing an N:N relation, like Students - Classes, where the key in the intermediate table will be (StudentID, ClassID); see the sketch below. But if you need to store more information about each pair (like a history of all marks of a student in a class) then you'll probably introduce a surrogate key.
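As a hedged sketch of that Students - Classes case (SQLite via Python's standard sqlite3 module; the table and column names are illustrative assumptions, not part of the answer above):
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE students (student_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE classes  (class_id   INTEGER PRIMARY KEY, title TEXT);

    -- Composite primary key on the junction table: one row per pair.
    -- (SQLite only enforces the REFERENCES clauses with PRAGMA foreign_keys=ON.)
    CREATE TABLE enrollments (
        student_id INTEGER NOT NULL REFERENCES students(student_id),
        class_id   INTEGER NOT NULL REFERENCES classes(class_id),
        PRIMARY KEY (student_id, class_id)
    );
""")

conn.execute("INSERT INTO students VALUES (1, 'Alice')")
conn.execute("INSERT INTO classes VALUES (10, 'Databases')")
conn.execute("INSERT INTO enrollments VALUES (1, 10)")
# Inserting the same (student_id, class_id) pair again raises sqlite3.IntegrityError.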
A: I would recommend a generated primary key in those cases with a unique not null constraint on the natural composite key.
If you use the natural key as primary then you will most likely have to reference both values in foreign key references to make sure you are identifying the correct record.
A: Take the example of a table with two candidate keys: one simple (single-column) and one compound (multi-column). Your question in that context seems to be, "What disadvantage may I suffer if I choose to promote one key to be 'primary' and I choose the compound key?"
First, consider whether you actually need to promote a key at all: "the very existence of the PRIMARY KEY in SQL seems to be an historical accident of some kind. According to author Chris Date the earliest incarnations of SQL didn't have any key constraints and PRIMARY KEY was only later added to the SQL standards. The designers of the standard obviously took the term from E.F.Codd who invented it, even though Codd's original notion had been abandoned by that time! (Codd originally proposed that foreign keys must only reference one key - the primary key - but that idea was forgotten and ignored because it was widely recognised as a pointless limitation)." [source: David Portas' Blog: Down with Primary Keys?]
Second, what criteria would you apply to choose which key in a table should be 'primary'?
In SQL, the choice of PRIMARY KEY is arbitrary and product specific. In ACE/Jet (a.k.a. MS Access) the two main and often competing factors are whether you want to use PRIMARY KEY to favour clustering on disk, or whether you want the columns comprising the key to appear in bold in the 'Relationships' picture in the MS Access user interface; I'm in the minority in thinking that index strategy trumps a pretty picture :) In SQL Server, you can specify the clustered index independently of the PRIMARY KEY and there seems to be no product-specific advantage afforded. The only remaining advantage seems to be that you can omit the columns of the PRIMARY KEY when creating a foreign key in SQL DDL; this is SQL-92 Standard behaviour and anyhow doesn't seem such a big deal to me (perhaps another one of the things they added to the Standard because it was a feature already widespread in SQL products?). So, it's not a case of looking for drawbacks; rather, you should be looking to see what advantage, if any, your SQL product gives the PRIMARY KEY. Put another way, the only drawback to choosing the wrong key is that you may be missing out on a given advantage.
Third, are you rather alluding to using an artificial/synthetic/surrogate key to implement in your physical model a candidate key from your logical model because you are concerned there will be performance penalties if you use the natural key in foreign keys and table joins? That's an entirely different question and largely depends on your 'religious' stance on the issue of natural keys in SQL.
A: Need more specificity.
Taken too far, it can overcomplicate Inserts (Every key MUST exist) and documentation and your joined reads could be suspect if incomplete.
Sometimes it can indicate a flawed data model (is a composite key REALLY what's described by the data?)
I don't believe there is a performance cost...it just can go really wrong really easily.
A: *
*When you see it on a diagram it is less readable.
*When you use it in a query join it is less readable.
*When you use it as a foreign key you have to add a check constraint that the attributes are either all null or all not null (if only one is null the key is not checked).
*It usually needs more storage when used as a foreign key.
*Some tools don't manage composite keys.
A: The main downside of using a compound primary key, is that you will confuse the hell out of typical ORM code generators.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: How do you send a HEAD HTTP request in Python 2? What I'm trying to do here is get the headers of a given URL so I can determine the MIME type. I want to be able to see if http://somedomain/foo/ will return an HTML document or a JPEG image for example. Thus, I need to figure out how to send a HEAD request so that I can read the MIME type without having to download the content. Does anyone know of an easy way of doing this?
A: Obligatory Requests way:
import requests
resp = requests.head("http://www.google.com")
print resp.status_code, resp.text, resp.headers
A: I believe the Requests library should be mentioned as well.
A: import httplib
import urlparse

def unshorten_url(url):
    parsed = urlparse.urlparse(url)
    h = httplib.HTTPConnection(parsed.netloc)
    h.request('HEAD', parsed.path)
    response = h.getresponse()
    if response.status/100 == 3 and response.getheader('Location'):
        return response.getheader('Location')
    else:
        return url
A: Just:
import urllib2
request = urllib2.Request('http://localhost:8080')
request.get_method = lambda : 'HEAD'
response = urllib2.urlopen(request)
response.info().gettype()
Edit: I've just come to realize there is httplib2 :D
import httplib2
h = httplib2.Http()
resp = h.request("http://www.google.com", 'HEAD')
assert resp[0].status == 200  # note: resp[0]['status'] is the string '200' in httplib2
assert resp[0]['content-type'].startswith('text/html')
...
A: For completeness, here is a Python 3 answer equivalent to the accepted answer using httplib.
It is basically the same code, except that the library isn't called httplib anymore but http.client:
from http.client import HTTPConnection
conn = HTTPConnection('www.google.com')
conn.request('HEAD', '/index.html')
res = conn.getresponse()
print(res.status, res.reason)
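Since the original question is really after the MIME type, one more call on the same response gets it (getheader is part of the http.client response API):
print(res.getheader('Content-Type'))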
A: urllib2 can be used to perform a HEAD request. This is a little nicer than using httplib since urllib2 parses the URL for you instead of requiring you to split the URL into host name and path.
>>> import urllib2
>>> class HeadRequest(urllib2.Request):
... def get_method(self):
... return "HEAD"
...
>>> response = urllib2.urlopen(HeadRequest("http://google.com/index.html"))
Headers are available via response.info() as before. Interestingly, you can find the URL that you were redirected to:
>>> print response.geturl()
http://www.google.com.au/index.html
A: edit: This answer works, but nowadays you should just use the requests library as mentioned by other answers below.
Use httplib.
>>> import httplib
>>> conn = httplib.HTTPConnection("www.google.com")
>>> conn.request("HEAD", "/index.html")
>>> res = conn.getresponse()
>>> print res.status, res.reason
200 OK
>>> print res.getheaders()
[('content-length', '0'), ('expires', '-1'), ('server', 'gws'), ('cache-control', 'private, max-age=0'), ('date', 'Sat, 20 Sep 2008 06:43:36 GMT'), ('content-type', 'text/html; charset=ISO-8859-1')]
There's also a getheader(name) to get a specific header.
A: As an aside, when using httplib (at least on 2.5.2), trying to read the response of a HEAD request will block (on readline) and subsequently fail. If you do not issue a read on the response, you are unable to send another request on the connection; you will need to open a new one, or accept a long delay between requests.
A: I have found that httplib is slightly faster than urllib2. I timed two programs - one using httplib and the other using urllib2 - sending HEAD requests to 10,000 URLs. The httplib one was faster by several minutes.
httplib's total stats were:
real 6m21.334s
user 0m2.124s
sys 0m16.372s
And urllib2's total stats were:
real 9m1.380s
user 0m16.666s
sys 0m28.565s
Does anybody else have input on this?
A: And yet another approach (similar to Pawel's answer):
import urllib2
import types
request = urllib2.Request('http://localhost:8080')
request.get_method = types.MethodType(lambda self: 'HEAD', request, request.__class__)
Just to avoid having unbound methods at the instance level.
A: Probably easier: use urllib or urllib2.
>>> import urllib
>>> f = urllib.urlopen('http://google.com')
>>> f.info().gettype()
'text/html'
f.info() is a dictionary-like object, so you can do f.info()['content-type'], etc.
http://docs.python.org/library/urllib.html
http://docs.python.org/library/urllib2.html
http://docs.python.org/library/httplib.html
The docs note that httplib is not normally used directly.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "118"
}
|
Q: Best way to translate mouse drag motion into 3d rotation of an object I have a 3d object that I wish to be able to rotate around in 3d. The easiest way is to directly translate X and Y mouse motion to rotation about the Y and X axes, but if there is some rotation along both axes, the way the model rotates becomes highly counterintuitive (i.e. if you flip the object 180 degrees about one axis, your motion along the other axis is reversed).
I could simply do the above method, but instead of storing the amount to rotate about the two axes, I could store the full rotation matrix and just further rotate it along the same axes for each mouse drag, but I'm concerned that that would quickly have precision issues.
A: It is probably most intuitive to rotate the object around the axis perpendicular to the current drag direction, either incrementally with each mouse motion, or relative to the drag start position. The two options give slightly different user interactions, which each have their pluses and minuses.
There is a relatively straightforward way to convert an angle and a 3d vector representing the axis being rotated around into a rotation matrix.
You are right that updating a raw rotation matrix through incremental rotations will result in the matrix no longer being a pure rotation matrix. That is because a 3x3 rotation matrix has three times as much data as is needed to represent a rotation.
A more compact way to represent rotations is with Euler Angles, which use a minimal three-value vector. You could take the current rotation as an Euler angle vector, convert it to a matrix, apply the rotation (incremental or otherwise), and convert the matrix back to an Euler angle vector. That last step would naturally eliminate any non-rotational component of your matrix, so that you once again end up with a pure rotation matrix for the next state.
Euler angles are conceptually nice, however it is a lot of work to do the back and forth conversions.
A more practical choice is Quaternions (also), which are four element vectors. The four elements specify rotation and uniform scale, and it happens that if you go in and normalize the vector to unit length, you will get a scale factor of 1.0. It turns out that an angle-axis value can also be converted to a quaternion value very easily by
q.x = sin(0.5*angle) * axis.x;
q.y = sin(0.5*angle) * axis.y;
q.z = sin(0.5*angle) * axis.z;
q.w = cos(0.5*angle);
You can then take the quaternion product (which uses only simple multiplication and addition) of the current rotation quaternion and incremental rotation quaternion to get a new quaternion which represents performing both rotations. At that point you can normalize the length to ensure a pure rotation, but otherwise continue iteratively combining rotations.
Converting the quaternion to a rotation matrix is very straightforward (uses only multiplication and addition) when you want to display the model in its rotated state using traditional graphics API's.
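A minimal Python sketch of that scheme (pure functions, no external libraries; the function names and the demo values at the end are my own illustrations, and the angle is in radians):
import math

def quat_from_axis_angle(axis, angle):
    """Unit axis (x, y, z) plus angle -> quaternion (x, y, z, w)."""
    s = math.sin(0.5 * angle)
    return (s * axis[0], s * axis[1], s * axis[2], math.cos(0.5 * angle))

def quat_multiply(a, b):
    """Hamilton product: applying rotation b, then rotation a."""
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw,
            aw * bw - ax * bx - ay * by - az * bz)

def quat_normalize(q):
    """Renormalize to unit length so the result stays a pure rotation."""
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

# Accumulate an incremental drag rotation into the current orientation.
current = quat_from_axis_angle((0.0, 1.0, 0.0), 0.3)
drag = quat_from_axis_angle((1.0, 0.0, 0.0), 0.05)
current = quat_normalize(quat_multiply(drag, current))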
A: In my Computer Graphics course, we were given the following code which allowed us not to reinvent the wheel.
trackball.h
trackball.c
A: Create an accumulator matrix and initialize it with the identity.
Each frame, apply that to your modelview/world matrix state before drawing the object.
Upon mouse motion, construct a rotation matrix about the X axis with some sensitivity_constant * delta_x. Construct another rotation matrix about the Y axis for the other component. Multiply one, then the other onto the accumulator.
The accumulator will change as you move the mouse. When drawing, it will orient the object as you expect.
Also, the person talking about quaternions is right; this will look good only for small incremental changes. If you drag it quickly on a diagonal, it won't rotate quite the way you expect.
A: You can deal with loss of precision by renormalising your rotation matrix so each of the 3 rows are perpendicular again. Or you can regenerate the rotation matrix you are about to modify based on existing information about the object, and this takes away the need for renormalisation.
Alternatively you can use quaternions, which is an alternative to Euler angles for dealing with rotations.
I learned much of this in my early days from this faq, which deals with this problem (though for another application) in Euler's are Evil.
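For the renormalisation route mentioned above, here is a minimal sketch (Gram-Schmidt on the rows of a 3x3 matrix stored as nested tuples - an assumption about your matrix representation):
import math

def renormalize_rows(m):
    """Re-orthonormalize the rows of a drifting 3x3 rotation matrix."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def scale(a, s): return tuple(x * s for x in a)
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def norm(a): return scale(a, 1.0 / math.sqrt(dot(a, a)))

    r0 = norm(m[0])
    r1 = norm(sub(m[1], scale(r0, dot(m[1], r0))))  # strip the r0 component
    r2 = (r0[1] * r1[2] - r0[2] * r1[1],            # cross product supplies the
          r0[2] * r1[0] - r0[0] * r1[2],            # third orthonormal row
          r0[0] * r1[1] - r0[1] * r1[0])
    return (r0, r1, r2)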
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
}
|
Q: What's your best trick to break out of an unbalanced quote condition in BASE SAS? As a base SAS programmer, you know the drill:
You submit your SAS code, which contains an unbalanced quote, so now you've got not only an unclosed quote, but also unclosed comments, macro function definitions, and a missing run; or quit; statement.
What's your best trick for not having those unbalanced quotes bother you?
A: As for myself, I usually Google for "SAS unbalanced quote", and end up with submitting something like this:
*); */; /*’*/ /*”*/; %mend;
... to break out of unclosed comments, quotes and macro functions.
A: Here is the one I use.
;*';*";*/;quit;run;
ODS _ALL_ CLOSE;
QUIT; RUN;
A: enterprise guide 3 used to put the following line at the top of its automatically generated code:
*';*";*/;run;
however, the only way to really "reset" from all kinds of unbalanced-something problems is to quit the sas session, and balance whatever is unbalanced before re-submitting the code. Using this kind of quick (cheap?) hack does not address the root cause.
by the way, ods _all_ close; closes all the ods destinations, including the default results destination. in an interactive session, you should open it again with ods results; or ods results on; at least according to the documentation. but when i tested it on my 9.2, it did not work, as shown below:
%put sysvlong=&sysvlong sysscpl=&sysscpl;
/* sysvlong=9.02.01M0P020508 sysscpl=X64_VSPRO */
ods _all_ close;
proc print data=sashelp.class;
run;
/* on log
WARNING: No output destinations active.
*/
ods results on;
proc print data=sashelp.class;
run;
/* on log
WARNING: No output destinations active.
*/
A: I had a situation with unbalanced quotes in a macro and the only solution was to close the instance of SAS and start over.
I feel that's an unacceptable flaw in SAS.
However, I used the methods by BOTH #2 and #5 and it worked. #2 first and then #1. I put them above ALL code, including my code header, explaining what this program was doing.
Worked like a charm.
A: I wrote a perl program that reads through any given SAS program and keeps track of things that should come in pairs. With things like parentheses, which can be embedded, it prints the level of nesting at the beginning of every line. It needs to be able to distinguish parentheses that are part of macro functions from those that are part of data step functions, including %sysfunc calls that reside in the macro environment but make calls to data step functions (must also do similar for %syscall macro function invocations), but that is doable through regular expressions. If the level of nesting goes negative, it is a clue that the problem may be nearby.
It also starts counting single and double quotes from the start of the program and identifies whether the count of each such symbol it encounters is odd or even. As with parentheses, it needs to be able to distinguish quotes that are part of macro code from those that are part of data step code and also those that are part of literal strings such as O'Riley and %nrstr(%'%") and not count them, but pattern matching can handle that too.
If the problem of the mismatched item stems from code that is generated at runtime by macro code and is therefore not present in the source program, then I turn on option mfile to write the generated data step code to a file and then run the perl script against that code.
I chose perl because of its strong pattern-matching capabilities but any other pattern-matching language should work fine. Hope this helps.
A: This works nearly every time for me:
; *'; *"; */;
ODS _ALL_ CLOSE;
quit; run; %MEND;
data _NULL_; putlog "DONE"; run;
A: You could always just issue a terminate submitted statements command and resubmit what you're trying to run.
A: just wanted to reiterate AFHood's suggestion to use the ODS _ALL_ CLOSE; statement. That's a key one to include. And make sure you use it every time you're finished with ODS anyway.
A: Closing the SAS Session worked in my case. I think you can try this once before you try other methods mentioned here.
A: Yes, I believe the official SAS documentation recommends the solution you have proposed for yourself.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: What's the best way to do a mapping application for the iPhone So I'm looking at writing an iPhone application that shows things on a map. What frameworks/methodologies are out there for doing this?
Searching around on Google, I could only find this one:
http://code.google.com/p/iphone-google-maps-component/
Which according to the issues list is slow, and stops working after a while. Does anyone know of something better, or have any experience with the library above?
A: I'm pretty sure your only options for now are:
*
*Call openURL: to switch to the Maps app
*Use the Google Maps component you linked to
*Roll your own thing
*Wait for Apple to expose a "MapKit" framework
A: WARNING: Embedding Google Maps inside an application may violate the Google Maps terms of service.
I have written a full mapping UIView on the iPhone (the application is on the AppStore) and it is not easy (this would be option #3 "Roll your own thing"). Getting good performance is really difficult. I would like to OpenSource my map component but right now the F-NDA is preventing that.
A: I have been chasing this for a while now, and here's the best solution I've found out there:
http://code.google.com/p/touchcode/
There is a component in there called TouchMap and it comes with a demo that shows it off. It's tiles are loaded directly from Microsoft Virtual Earth.
Edit: @schwa, I just took a look at your profile to find your email address, and then realised that you actually released this software. Nice work.
A: I've just found route-me, which looks like a large (30+) active community of people developing an open source mapping app for the iPhone. It can take tiles from OpenStreetMap, Virtual Earth (Bing) or Cloudmade.
A: Just use the MapKit framework - it's really good.
Alternatively: [[UIApplication sharedApplication] openURL:[NSURL URLWithString:@"http://maps.google.com"]];
That will open Google Maps, but one problem with this is that it will navigate you to Safari and your application will exit. If you want to move back to your application from that page, it won't be possible; you will need to run the app again.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Is JavaScript object-oriented? There have been some questions about whether or not JavaScript is an object-oriented language. Even a statement, "just because a language has objects doesn't make it OO."
Is JavaScript an object-oriented language?
A: JavaScript is a prototype-based programming language (probably prototype-based scripting language is the more correct definition). It employs cloning rather than classical inheritance. A prototype-based programming language is a style of object-oriented programming without classes. While class-based object-oriented languages encourage a development focus on taxonomy and relationships, prototype-based programming languages encourage the developer to focus on behavior first and classify later.
The term “object-oriented” was coined by Alan Kay in 1967, who explains it in 2003 to mean
only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things.
(source)
In object-oriented programming, each object is capable of receiving messages, processing data, and sending messages to other objects.
For a language to be object-oriented it may include features such as encapsulation, modularity, polymorphism, and inheritance, but none of these is a strict requirement. Object-oriented programming languages that make use of classes are often referred to as class-based programming languages, but it is by no means a must to make use of classes to be object-oriented.
JavaScript uses prototypes to define object properties, including methods and inheritance.
Conclusion: JavaScript IS object-oriented.
A: Unlike most object-oriented languages, JavaScript (before ECMA 262 Edition 4, anyway) has no concept of classes, but prototypes. As such, it is indeed somewhat subjective whether to call it object-oriented or not.
@eliben: Wikipedia says object-based. That's not the same as object-oriented. In fact, their article on object-based distinguishes between object-oriented languages and prototype-based ones, explicitly calling JavaScript not object-oriented.
A: IMO (and it is only an opinion) the key characteristic of an object orientated language would be that it would support polymorphism. Pretty much all dynamic languages do that.
The next characteristic would be encapsulation and that is pretty easy to do in Javascript also.
However in the minds of many it is inheritance (specifically implementation inheritance) which would tip the balance as to whether a language qualifies to be called object oriented.
Javascript does provide a fairly easy means to inherit implementation via prototyping but this is at the expense of encapsulation.
So if your criteria for object orientation is the classic threesome of polymorphism, encapsulation and inheritance then Javascript doesn't pass.
Edit: The supplementary question is raised "how does prototypal inheritance sacrifice encapsulation?" Consider this example of a non-prototypal approach:-
function MyClass() {
    var _value = 1;
    this.getValue = function() { return _value; }
}
The _value attribute is encapsulated, it cannot be modified directly by external code. We might add a mutator to the class to modify it in a way entirely controlled by code that is part of the class. Now consider a prototypal approach to the same class:-
function MyClass() {
    var _value = 1;
}

MyClass.prototype.getValue = function() { return _value; }
Well, this is broken. Since the function assigned to getValue is no longer in scope with _value, it can't access it. We would need to promote _value to an attribute of this, but that would make it accessible outside of the control of code written for the class; hence encapsulation is broken.
Despite this my vote still remains that Javascript is object oriented. Why? Because given an OOD I can implement it in Javascript.
A: The short answer is Yes. For more information:
From Wikipedia:
JavaScript is heavily object-based. Objects are associative arrays, augmented with prototypes (see below). Object property names are associative array keys: obj.x = 10 and obj["x"] = 10 are equivalent, the dot notation being merely syntactic sugar. Properties and their values can be added, changed, or deleted at run-time. The properties of an object can also be enumerated via a for...in loop.
Also, see this series of articles about OOP with Javascript.
A: JavaScript is a very good language for writing object oriented web apps. It can support OOP because it supports inheritance through prototyping, as well as properties and methods. You can have polymorphism, encapsulation and many sub-classing paradigms.
A: Javascript is a multi-paradigm language that supports procedural, object-oriented (prototype-based) and functional programming styles.
Here is an article discussing how to do OO in Javascript.
A: This is of course subjective and an academic question. Some people argue whether an OO language has to implement classes and inheritance, others write programs that change your life. ;-)
(But really, why should an OO language have to implement classes? I'd think objects were the key components. How you create and then use them is another matter.)
A: This is a good thread. Here are some resources I like. Most people start out with Prototype, jQuery, or one of the top 6 libs (MooTools, ExtJS, YUI), which have different object models. Prototype tries to replicate classical O-O as most people think of it.
http://jquery.com/blog/2006/08/20/why-jquerys-philosophy-is-better/
Here's a picture of prototypes and functions that I refer to a lot:
http://www.mollypages.org/misc/js.mp?
A: I am approaching this question from another angle.
This is an eternal topic, and we could open a flame war in a lot of forums.
When people assert that JavaScript is an OO programming language because they can use OOD with it, then I ask: why is C not an OO programming language? You can use OOD with C too, and if you said that C is an OO programming language everybody would say you are crazy.
We could put here a lot of references about this topic from very old books and forums, because this topic is older than the Internet :)
JavaScript has not changed for many years, but new programmers want to show that JavaScript is an OO programming language. Why? JavaScript is a powerful language, but it is not an OO programming language.
An OO programming language must have objects, methods, properties, classes, encapsulation, aggregation, inheritance and polymorphism. You can implement all these points yourself, but JavaScript does not have them built in.
A very illustrative example: chapter 6 of "Object-Oriented JavaScript" describes 10 ways to implement "inheritance". How many ways are there in Java? One. And in C++? One. And in Delphi (Object Pascal)? One. And in Objective-C? One.
Why is this different? Because Java, C++, Delphi and Objective-C were designed with OOP in mind, but JavaScript was not.
When I was a student (in 1993), at university, there was a typical homework assignment: implement a program designed using OOD (object-oriented design) in a non-OO language. In those times, the language selected was C (not C++). The objective of this exercise was to make clear the difference between OOD and OOP, so that we could differentiate between OOP and non-OOP languages.
Anyway, it is evident that not everybody has the same opinion about this topic :)
Anyway, in my opinion, JavaScript is a powerful language and the future of the client side layer!
A: Languages do not need to behave exactly like Java to be object-oriented. Everything in Javascript is an object; compare to C++ or earlier Java, which are widely considered object-oriented to some degree but still based on primitives. Polymorphism is a non-issue in Javascript, as it doesn't much care about types at all. The only core OO feature not directly supported by the syntax is inheritance, but that can easily be implemented however the programmer wants using prototypes: here is one such example.
A: JavaScript is object-oriented, but is not a class-based object-oriented language like Java, C++, C#, etc. Class-based OOP languages are a subset of the larger family of OOP languages which also include prototype-based languages like JavaScript and Self.
A: Yes and no.
JavaScript is, as Douglas Crockford puts it, "the world's most misunderstood programming language." He has some great articles on JavaScript that I'd strongly recommend reading on what exactly JavaScript is. It has more in common with LISP than C++.
A: Hanselminutes episode 146 looks at OO Ajax. It was a good show and definitely a good show to help form an opinion.
A: Misko Hevery did an excellent introductory Google Tech Talk where he talks about objects in Javascript. I've found this to be a good starting point for people either questioning the use of objects in Javascript, or wanting to get started with them:
*
*Introduction to JavaScript and Browser DOM
A: The Microsoft Ajax Client Library makes it simple to implement OO in javascript. It supports inheritance and interface implementation.
A: I think a lot of people answer this question "no" because JavaScript does not implement classes, in the traditional OO sense. Unfortunately (IMHO), that is coming in ECMAScript 4. Until then, viva la prototype! :-)
A: I think when you can follow the same or similar design patterns as a true OO language like Java/C#, you can pretty much call it an OO language. Some aspects are obviously different but you can still use very well established OO design patterns.
A: JavaScript is Object-Based, not Object-Oriented. The difference is that Object-Based languages don't support proper inheritance, whereas Object-Oriented ones do.
There is a way to achieve 'normal' inheritance in JavaScript (Reference here), but the basic model is based on prototyping.
A: Everything in javascript is an object - classes are objects, functions are objects, numbers are objects, objects objects objects. It's not as strict about typing as other languages, but it's definitely possible to write OOP JS.
A: Yes, it is. However, it doesn't support all of the features one would expect in an object oriented programming language, lacking classical inheritance and polymorphism. This doesn't mean, however, that you cannot simulate these capabilities through the prototyping system that is available in the language.
A: Javascript is not an object oriented language as typically considered, mainly due to the lack of true inheritance. Duck typing allows for a semi-true form of inheritance/polymorphism, along with Object.prototype allowing for complex function sharing. At its heart, however, the lack of inheritance leads to a weak polymorphism, since duck typing will insist that some object with the same attribute names is an instance of an Object it was never intended to be used as. Thus adding attributes to a random object transforms its type's base, in a manner of speaking.
A: Technically it is a prototype language, but it's easy to do OO in it.
A: It is object oriented, but not based on classes, it's based on prototypes.
A: Objects in JavaScript inherit directly from objects. What can be more object oriented?
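A hedged sketch of what that looks like in practice (ES5's Object.create; the names are illustrative assumptions, not part of the answer above):
var animal = {
    describe: function () { return this.name + " says " + this.sound; }
};

// dog delegates directly to the animal object - no class involved
var dog = Object.create(animal);
dog.name = "Rex";
dog.sound = "woof";
console.log(dog.describe()); // "Rex says woof"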
A: For me personally the main attraction of OOP programming is the ability to have self-contained classes with unexposed (private) inner workings.
What confuses me to no end in Javascript is that you can't even use function names, because you run the risk of having that same function name somewhere else in any of the external libraries that you're using.
Even though some very smart people have found workarounds for this, isn't it weird that Javascript in its purest form requires you to create code that is highly unreadable?
The beauty of OOP is that you can spend your time thinking about your app's logic, without having to worry about syntax.
A:
Is JavaScript object-oriented?
Answer : Yes
It has objects which can contain data and methods that act upon that data. Objects can contain other objects.
*
*It does not have classes, but it does have constructors which do what classes do, including acting as containers for class variables and methods.
*It does not have class-oriented inheritance, but it does have prototype-oriented inheritance.
The two main ways of building up object systems are by inheritance (is-a) and by aggregation (has-a). JavaScript does both, but its dynamic nature allows it to excel at aggregation.
Some argue that JavaScript is not truly object oriented because it does not provide information hiding. That is, objects cannot have private variables and private methods: All members are public.
But it turns out that JavaScript objects can have private variables and private methods. (Click here now to find out how.) Of course, few understand this because JavaScript is the world's most misunderstood programming language.
Some argue that JavaScript is not truly object oriented because it does not provide inheritance. But it turns out that JavaScript supports not only classical inheritance, but other code reuse patterns as well.
Sources : http://javascript.crockford.com/javascript.html
A: I would say it has the capabilities to seem OO, especially if you take advantage of its ability to create methods on an existing object (anonymous methods in some languages). Client script libraries like jQuery (jquery.com) or Prototype (prototypejs.org) are good examples of libraries taking advantage of this, making JavaScript behave pretty OO-like.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "55"
}
|
Q: XML based website - how to create? I would like to create an XML-based website. I want to use XML files as datasources since it is a kind of online directory site. Can someone please give me a starting point? Are there any good online resources that I can refer to? I am pretty comfortable with ASP and JavaScript.
A: If you cannot or don't wish to store your data in XHTML format, then XSLT is definitely the way you want to go. By its very definition, it is a transformation language designed to transform data from one format to another. Because this is its focus, it provides power, speed and flexibility you won't find in many other solutions. It will also ensure you output standards compliant (X)HTML, as it's impossible to do otherwise (well, not without deliberately going out of your way to botch it).
MSXML allows you to do XSL transformations for use in Classic ASP - see this page for an example.
ZVON.org is also a great XSLT reference.
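To make the transformation idea concrete, here is a hedged, minimal sketch - an illustrative XML fragment and an XSLT 1.0 stylesheet that renders it as an HTML list (the element and file names are assumptions):
<!-- items.xml -->
<items>
  <item>First entry</item>
  <item>Second entry</item>
</items>

<!-- items.xsl -->
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/items">
    <ul>
      <xsl:for-each select="item">
        <li><xsl:value-of select="."/></li>
      </xsl:for-each>
    </ul>
  </xsl:template>
</xsl:stylesheet>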
A: Hey, here's an idea - xhtml is xml, after all, so if you can define the format of the xml files, just create browser-friendly xhtml to begin with.
Otherwise I'm sure there are XML parsing libraries for ASP and you can look into XSLT (which is cool to learn, but a bit more of a challenge).
A: w3schools has very good information about XSLT.
A: I've worked with an XML/XSLT based templating system and have known others who have and my advice is don't do it. You'll tend to use XSLT as a programming language for presentational logic and it's a headache to develop and maintain.
You could use XML as data sources, but use deserialization or XQuery/XPath to extract the data and use it in a easier-to-use templating system -- even ASP pages are fine.
A: I'd use PHP with the built in SimpleXML functionality, though I'm sure there is similar functionality with ASP.
Alternatively you could use XSLT to transform the XML to display - depends what the XML is and whether you are creating it or just consuming it.
A: jQuery, AJAX, and PHP are your friends - for static content, a few nested loops in PHP can easily parse XML into XHTML (kudos to the person who pointed out that well-formed xhtml is xml), and with jQuery you can AJAX in additional content as needed.
Also - did I mention that they're all free?
A: (I'd really recommend using a traditional database instead.)
In ASP you can use the MSXML-component to parse and change XML-files. More information about the MSXML-component can be found on MSDN.
Basicly what you'd wanna do is read a XML-file and do some filtering on the server side, and outputting to the client.
Maybe something like this will get you started:
XML:
<data>
    <item visible="no">
        <title>Invisible item 1</title>
    </item>
    <item visible="yes">
        <title>Visible item 1</title>
    </item>
    <item visible="yes">
        <title>Visible item 2</title>
    </item>
</data>
And some ASP:
Dim oXMLDoc
Dim oNode

Set oXMLDoc = CreateObject("MSXML.DOMDocument")
oXMLDoc.async = False ' load synchronously so the document is ready below
oXMLDoc.Load Server.MapPath("../_private/data.xml")

Set oNode = oXMLDoc.SelectSingleNode("data/item")

' Walk the sibling items, writing out only the visible ones
Do Until oNode Is Nothing
    If oNode.getAttribute("visible") = "yes" Then
        Response.Write "Title: " & oNode.SelectSingleNode("title").Text & "<br />" & vbCrLf
    End If
    Set oNode = oNode.nextSibling
Loop
A: Take a look at tox, http://tox.sourceforge.net/. It is meant for use with Oracle, but you could use the include feature instead to retrieve the XML files. Like most of the other answers, when using tox, you will need to apply a view to your XML via XSLT. There are a couple of very simple examples included in the tox download.
A: Web Content Management Made Simple with XML.
SoftXMLCMS is a unique content management system for managing data in XML format. Easy graphical interface enables you to control the profiling data for the creation of hierarchical structures.
SoftXMLCMS is an ideal web tool for creating complex multi-page websites in different languages. The main advantage of SoftXMLCMS is that there is no need for a database, which significantly reduces the cost of creating a professional website.
The process of installing an application is very simple and requires no special technical skills.
Compatible with the most important browsers available in the market - IE 5.5+, Firefox 1.0+, Mozilla 1.3+, Netscape 7+, Chrome - and requires only the ASP JPEG component to work.
SoftXMLCMS includes a powerful text editor for editing rich HTML documents and images in CMS. The Word-like interface of editor makes content creation easy for business users who know nothing about HTML and want to keep it that way.
SoftXMLCMS includes a ready website template for displaying CMS content. All this will give you set of tools for creating a professional website in minimum time and in cost effective manner.
SoftXMLCMS requires IIS and support of Microsoft ASP technology and ASP JPEG component.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How do you prototype? We prototype a design, GUI, just to analyze a particular problem, proof of concept, etc. Sometimes we throw away the prototype, and sometimes it ends up in the production code. We use different languages, technologies, strategies, and styles to prototype.
What are the different situations you prototype usually and how do you prototype? Any good resource out there to master the craft?
A: One hot title is Effective Prototyping for Software Makers. The issue is that there are several schools of thought.
*
*Rapid Prototyping. Use fancy tools; get
something done soon.
*Evolutionary Prototyping. Evolve from prototype to production.
Some of this is legacy thinking, based on an era where tools were primitive and projects had to be meticulously planned from the beginning. When I started in this industry, the "green-screen" character-mode applications were rocket science and very painful to mock up. Tools and formal techniques were essential to manage the costs and risks.
This thinking is trumped by some more recent thinking.
*
*Powerful tools remove the need for complex prototypes. HTML mockups can be slapped together quickly. Is it still a prototype when you barely have to budget or plan for it?
[You can mock it up in MS-Word and save it as HTML. It's quicker for a Business Analyst to do it than to specify it and have a programmer do it.]
*Also, powerful tools can reduce the cost of mistakes. If it only took a week to put something together -- production-ready -- what's the point of a formal prototype effort?
*Agile techniques reduce the need to do quite so much detailed up-front planning. When you put something that works in the hands of users quickly, you don't have quite so much need to be sure every nuance is right before you start. It just has to be good enough to consider it progress.
What can happen is the following. [The hidden question is: is this still "prototyping" -- or is this just an Agile approach with powerful tools?]
Using tools like Django, you can put together the essential, core data structure and exercise it almost immediately. Use the default Django admin pages and you should be up and running as soon as you can articulate the data structures and write load utilities.
Then, add presentation pages wrapped around real, working data. Be sure you've got things right. Since you've only built data model and template-driven HTML pages, your investment is minimal. Explore.
Iterate until people start asking for smarter transactions than those available in the default admin pages. At this point, you're moving away from "discovery" and "elaboration" and into "construction". Did you do any prototyping? I suppose each HTML template you discarded was a kind of prototype. For that matter, so were the ones you kept.
The whole time, you can be working with more-or-less live, production users.
A: Personally, I believe a true prototype should not be much more than diagrams drawn on paper to demonstrate the flow of whatever it is you are trying to achieve. You can then use these documented flows to run through several scenarios to see if it works with whoever has requested the functionality.
Once the paper prototype has been modified to a point where it works then use it as a basis to start coding properly.
The benefits of this process are that you can't end up actually using the prototype code in production because there isn't any. Also, it is much easier to test it with business experts as there is not any code for them to understand.
A: Right now I just draw pictures. I would like to do more, but getting something to a point where the users would understand it any better than a picture would cost too much time.
I am interested in seeing some of these responses :)
I should mention that where I work it is just me and one other guy playing all the roles - project manager (collecting data, designing the spec and app), DBA, coder, tool researcher/developer, et al. - that come with the job of making an app for a small company.
A: For webapps, start with a pure (x)HTML + CSS mock-up, and then use a framework that makes it easy to implement the functionality.
Template-based frameworks are very good for this, but we've had some good experiences with JSF + Facelets + Seam, too.
A: The main reason for doing prototypes is reducing risk. Thus, we do UI prototypes, which are really not very helpful unless they actually do something that a user can play around with. Just as important, we also do prototypes to either prove that something can work or figure out how something does work.
A: I start off making a prototype that makes the most interesting part work, then I throw it away and move on to a new, more interesting project...
*kills self*
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107534",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: How would you implement auto-capitalization in JavaScript/HTML I need to implement auto-capitalization inside of a Telerik RadEditor control on an ASPX page as a user types.
This can be an IE specific solution (IE6+).
I currently capture every keystroke (down/up) as the user types to support a separate feature called "macros" that are essentially short keywords that expand into formatted text. i.e. the macro "so" could auto expand upon hitting spacebar to "stackoverflow".
That said, I have access to the keyCode information, as well I am using the TextRange methods to select a word ("so") and expanding it to "stackoverflow". Thus, I have some semblence of context.
However, I need to check this context to know whether I should auto-capitalize. This also needs to work regardless of whether a macro is involved.
Since I'm monitoring keystrokes for the macros, should I just monitor for punctuation (it's more than just periods that signal a capital letter) and auto-cap the next letter typed, or should I use TextRange and analyze context?
A: Have you tried to apply the text-transform CSS style to your controls?
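For example (a hedged sketch; the selector is an assumption, and note that text-transform only changes the rendering, not the underlying value that is submitted or read back via script):
.editor-content { text-transform: capitalize; }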
A: I'm not sure if this is what you're trying to do, but here is a function (reference) to convert a given string to title case:
function toTitleCase(str) {
    return str.replace(/([\w&`'‘’"“.@:\/\{\(\[<>_]+-? *)/g, function(match, p1, index, title){ // ' fix syntax highlighting
        if (index > 0 && title.charAt(index - 2) != ":" &&
            match.search(/^(a(nd?|s|t)?|b(ut|y)|en|for|i[fn]|o[fnr]|t(he|o)|vs?\.?|via)[ -]/i) > -1)
            return match.toLowerCase();
        if (title.substring(index - 1, index + 1).search(/['"_{([]/) > -1)
            return match.charAt(0) + match.charAt(1).toUpperCase() + match.substr(2);
        if (match.substr(1).search(/[A-Z]+|&|[\w]+[._][\w]+/) > -1 ||
            title.substring(index - 1, index + 1).search(/[\])}]/) > -1)
            return match;
        return match.charAt(0).toUpperCase() + match.substr(1);
    });
}
A: Sometimes, not to do it is the right answer to a coding problem.
I really would NOT do this, unless you feel you can write a script to correctly set the case in the following sentence, if you were to first convert it to lowercase and pass it into the script.
Jean-Luc "The King" O'Brien MacHenry van d'Graaf IIV (PhD, OBE), left his Macintosh with in Macdonald's with his friends MacIntosh and MacDonald. Jesus gave His Atari ST at AT&T's "Aids for AIDS" gig in St George's st, with Van Halen in van Henry's van, performing The Tempest.
You have set yourself up for a fall by trying to create a Natural Language Parser. You can never do this as well as the user will. At best, you can do an approximation, and give the user the ability to edit and force a correction when you get it wrong. But often in such cases, the editing is more work than just doing it manually and right in the first place.
That said, if you have the space and power to store and search a large n-gram corpus of suitably capitalized words, you would at least be able to have a wild stab at the most likely desired case.
A: You pose an interesting question. Acting upon each key press may be more limiting because you will not know what comes immediately after a given keycode (the complexity of undoing a reaction that turns out to be incorrect could mean having to go to a TextRange-based routine anyway). Granted, I haven't wrestled with code on this problem to date, so this is a hypothesis in my head.
At any length, here's a Title Casing function (a JavaScript implementation inspired by a John Gruber blogging automation) which may spur ideas when it comes to handling the actual casing code:
http://individed.com/code/to-title-case/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: GCC compiling a dll with __stdcall When we compile a dll using __stdcall inside Visual Studio 2008, the compiled function names inside the dll look like:
FunctionName
However, when we compile the same dll with GCC using wx-dev-cpp, GCC appends the number of parameters the function has, so the name of the function viewed with Dependency Walker looks like:
FunctionName@numberOfParameters, e.g. FunctionName@8
How do you tell the GCC compiler to remove @nn from the exported symbols in the dll?
A: __stdcall decorates the function name by adding an underscore to the start, and the number of bytes of parameters to the end (separated by @).
So, a function:
void __stdcall Foo(int a, int b);
...would become _Foo@8.
If you list the function name (undecorated) in the EXPORTS section of your .DEF file, it is exported undecorated.
Perhaps this is the difference?
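As a hedged illustration of the .DEF route (the module and object file names below are hypothetical), exporting the undecorated name might look like this:
; mylib.def - export Foo without the _Foo@8 decoration
LIBRARY mylib
EXPORTS
    Foo

; link with something like: gcc -shared -o mylib.dll foo.o mylib.def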
A: Just use -Wl,--kill-at on the gcc command line, which will pass --kill-at to the linker.
References:
*
*http://sourceware.org/binutils/docs/ld/Options.html#Options
*http://www.geocities.com/yongweiwu/stdcall.htm
A: You can also add -Wl,--add-stdcall-alias to your linker options in GCC. This will ensure that both function names (decorated and undecorated) are present and can be used as aliases.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: What's the best way to get started with server virtualization? We recently bought a new rack and set of servers for it; we want to be able to redeploy these boxes as build servers, QA regression test servers, lab re-correlation servers, simulation servers, etc.
We have played a bit with VMWare, VirtualPC, VirtualBox etc, creating a virtual build server, but we came across a lot of issues when we tried to copy it for others to use, having to reconfigure every new copy of the VM.
We are using Windows XP x86/x64 and Windows Vista x86/x64, so I had to rename the machine, join the domain etc for every new copy.
Ideally we just want to be able to add a new box, deploy a thin bootstrap OS (Linux is fine here) to get the VM up and running, then use it.
One other thing we have limited to no budget, so free is best.
I would like to understand others experiences in doing the same thing.
FYI, I am not in systems IT; we are a group of software engineers trying to set this up.
Any links to good tutorials would be great.
A: The problem you're running into is the machine SID must be unique for each machine in a domain. Of course by copying an image you now break that unique constraint.
I'd suggest that you read the documentation for Sysprep in the reskit and Vista System Image Manager - your friends for XP/Win2k3 and Vista/Win2k8 respectively.
These tools enable you to "reseal" your configured instance of the OS such that the next time it boots it can prompt for information such as network configuration, machine names, admin user IDs, run scripts, etc.
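As a hedged example of that reseal step, the Vista/Win2k8-era System Preparation tool is typically invoked roughly like this (the XP-era sysprep takes different switches, so verify against your OS version):
sysprep /generalize /oobe /shutdown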
Also be aware that the licencing restrictions for Windows desktop clients are generally per image - not per server.
Using these tools with Hyper-V we created complete preconfigured instances of Win2k3 & Win2k8 that boot to finish installing SharePoint. Going further, we used differencing disks to overlay Visual Studio so our devs could use the production images for their work. It has radically changed our development process.
At this point our entire public website is run on HyperV with of 5 boxes running 15 images for a mix of soft and hard redundancy - they take several hundred million page views per week.
A: Another option for dealing with the SID problem is NewSID. This is a simpler tool than sysprep, in that all it does is rename the machine and reassign the SID; if you don't need all the other features of sysprep, this is a much easier tool to use.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Trying to run faac's bootstrap script but running into errors I'm trying to install faac and am running into errors. Here are the errors I get when trying to build it:
[root@test faac]# ./bootstrap
configure.in:11: warning: underquoted definition of MY_DEFINE
run info '(automake)Extending aclocal'
or see http://sources.redhat.com/automake/automake.html#Extending-aclocal
aclocal:configure.in:17: warning: macro `AM_PROG_LIBTOOL' not found in library
common/mp4v2/Makefile.am:5: Libtool library used but `LIBTOOL' is undefined
common/mp4v2/Makefile.am:5:
common/mp4v2/Makefile.am:5: The usual way to define `LIBTOOL' is to add `AC_PROG_LIBTOOL'
common/mp4v2/Makefile.am:5: to `configure.in' and run `aclocal' and `autoconf' again.
libfaac/Makefile.am:1: Libtool library used but `LIBTOOL' is undefined
libfaac/Makefile.am:1:
libfaac/Makefile.am:1: The usual way to define `LIBTOOL' is to add `AC_PROG_LIBTOOL'
libfaac/Makefile.am:1: to `configure.in' and run `aclocal' and `autoconf' again.
configure.in:17: error: possibly undefined macro: AM_PROG_LIBTOOL
If this token and others are legitimate, please use m4_pattern_allow.
See the Autoconf documentation.
Does anyone know what this means? I was unable to find anything about this so I figured I'd ask you guys. Thank you for your help.
EDIT:
Here's my versions of linux, libtool, automake and autoconf:
[root@test faac]# libtool --version
ltmain.sh (GNU libtool) 2.2
Written by Gordon Matzigkeit <gord@gnu.ai.mit.edu>, 1996
Copyright (C) 2008 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
[root@test faac]# autoconf --version
autoconf (GNU Autoconf) 2.59
Written by David J. MacKenzie and Akim Demaille.
Copyright (C) 2003 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
[root@test faac]# automake --version
automake (GNU automake) 1.9.2
Written by Tom Tromey <tromey@redhat.com>.
Copyright 2004 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
[root@test faac]# cat /etc/redhat-release
Red Hat Enterprise Linux WS release 4 (Nahant)
A: I think the first thing to check is that you have libtool installed.
Edit:
This is what I get on Ubuntu 8.04:
$ ./bootstrap
configure.in:11: warning: underquoted definition of MY_DEFINE
configure.in:11: run info '(automake)Extending aclocal'
configure.in:11: or see http://sources.redhat.com/automake/automake.html#Extending-aclocal
configure.in:4: installing `./install-sh'
configure.in:4: installing `./missing'
common/mp4v2/Makefile.am: installing `./depcomp'
$ libtool --version
ltmain.sh (GNU libtool) 1.5.26 Debian 1.5.26-1ubuntu1 (1.1220.2.493 2008/02/01 16:58:18)
Copyright (C) 2008 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$ autoconf --version
autoconf (GNU Autoconf) 2.61
Copyright (C) 2006 Free Software Foundation, Inc.
This is free software. You may redistribute copies of it under the terms of
the GNU General Public License <http://www.gnu.org/licenses/gpl.html>.
There is NO WARRANTY, to the extent permitted by law.
Written by David J. MacKenzie and Akim Demaille.
$ automake --version
automake (GNU automake) 1.10.1
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv2+: GNU GPL version 2 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Written by Tom Tromey <tromey@redhat.com>
and Alexandre Duret-Lutz <adl@gnu.org>.
A: So your problem is that automake/conf don't know about libtool.
You need to reinstall all of them, either all from source or all from binary packages.
If installing from source, ensure they are all installed to the same location.
A: Maybe your automake doesn't know about your libtool for some reason. It looks like you've got two copies of libtool installed, which might be confusing it.
Maybe you should remove both copies, plus all automake, autoconf installs, and reinstall them (possibly from source?).
I guess the first step is to find out the locations of the active copies of the tools:
which libtool
which automake
which autoconf
A: It sounds like autoconf/automake cannot find libtool.m4, and therefore cannot resolve the macro AM_PROG_LIBTOOL. Look for this file under your libtool installation and copy/link it under /usr/share/aclocal/.
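For example (the paths below are just the usual defaults and may differ on your system):
find /usr -name libtool.m4 2>/dev/null
ln -s /usr/local/share/aclocal/libtool.m4 /usr/share/aclocal/libtool.m4
Alternatively, pointing aclocal at the extra directory (e.g. aclocal -I /usr/local/share/aclocal) avoids copying files around.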
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Where should you enable SSL? My last couple of projects have involved websites that sell a product/service and require a 'checkout' process in which users put in their credit card information and such. Obviously we got SSL certificates for the security of it plus giving peace of mind to the customers. I am, however, a little clueless as to the subtleties of it, and most importantly as to which parts of the website should 'use' the certificate.
For example, I've been to websites where the moment you hit the homepage you are put in https - mostly banking sites - and then there are websites where you are only put in https when you are finally checking out. Is it overkill to make the entire website run through https if it doesn't deal with something on the level of banking? Should I only make the checkout page https? What is the performance hit on going all out?
A: If the site is for public usage, you should probably put the public parts on HTTP. This makes things easier and more efficient for spiders and casual users. HTTP requests are much faster to initiate than HTTPS and this is very obvious especially on sites with lots of images.
Browsers also sometimes have a different cache policy for HTTPS than HTTP.
But it's alright to put them into HTTPS as soon as they log on, or just before. At the point at which the site becomes personalised and non-anonymous, it can be HTTPS from there onwards.
It's a better idea to use HTTPS for the log-on page itself as well as any other forms, as it gives the user the padlock before they enter their info, which makes them feel better.
A: I have always done it on the entire website.
A: I too would use HTTPS all the way. This doesn't have a big performance impact (since browsers cache the negotiated symmetric key after the first connection) and protects against sniffing.
Sniffing was once on its way out because of fully switched wired networks, where you would have to work extra hard to capture anyone else's traffic (as opposed to networks using hubs), but it's on its way back because of wireless networks, which create a broadcast medium once again and make session hijacking easy, unless the traffic is encrypted.
A: I think a good rule of thumb is forcing SSL anywhere where sensitive information is going to possibly be transmitted. For example: I'm a member of Wescom Credit Union. There's a section on the front page that allows me to log on to my online bank account. Therefore, the root page forces SSL.
Think of it this way: will sensitive, private information be transmitted? If yes, enable SSL. Otherwise you should be fine.
A: In our organization we have three classifications of applications -
*
*Low Business Impact - no PII, clear-text storage, clear-text transmission, no access restrictions.
*Medium Business Impact - non-transactional PII e.g. email address. clear-text storage, SSL from datacenter to client, clear-text in data center, limited storage access.
*High Business Impact - transactional data e.g. SSN, Credit Card etc. SSL within and outside of datacenter. Encrypted & Audited Storage. Audited applications.
We use these criteria to determine partitioning of data, and which aspects of the site require SSL. Computation of SSL is either done on the server or through accelerators such as Netscaler. As the level of PII increases, so does the complexity of the audit and threat modelling.
As you can imagine we prefer to do LBI applications.
A: I personally go with "SSL from go to woe".
If your user never enters a credit card number, sure, no SSL.
But there's an inherent possible security leak from the cookie replay.
*
*User visits site and gets assigned a cookie.
*User browses site and adds data to cart ( using cookie )
*User proceeds to payment page using cookie.
Right here there is a problem, especially if you have to handle payment negotiation yourself.
You have to transmit information from the non-secure domain to the secure domain, and back again, with no guarantees of protection.
If you do something dumb like share the same cookie between the unsecure and secure sides, you may find some browsers (rightly) will just drop the cookie completely (Safari does) for the sake of security: if somebody sniffs that cookie in the open, they can forge it and use it in the secure mode too, degrading your wonderful SSL security to zero. And if the card details ever get even temporarily stored in the session, you have a dangerous leak waiting to happen.
If you can't be certain that your software is not prone to these weaknesses, I would suggest SSL from the start, so their initial cookie is transmitted in the secure.
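As a rough illustration of the cookie half of that fix, the session cookie can be flagged so the browser never sends it over plain HTTP. A minimal Python sketch, not tied to any particular framework (the cookie name and value are placeholders):
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "abc123"           # placeholder value
cookie["session_id"]["secure"] = True     # only ever sent over HTTPS
cookie["session_id"]["httponly"] = True   # not readable from page scripts
print(cookie.output())                    # emits the Set-Cookie header
With the Secure flag set, a sniffer on the HTTP side never sees the cookie in the first place.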
A: Generally anytime you're transmitting sensitive or personal data you should be using SSL - e.g. adding an item to a basket probably doesn't need SSL, logging in with your username/password, or entering your CC details should be encrypted.
A: I only ever redirect my sites to SSL when it requires the user to enter sensitive information. With a shopping cart as soon as they have to fill out a page with their personal information or credit card details I redirect them to a SSL page.
For the rest of the site its probably not needed - if they are just viewing information/products on your commerce site.
A: SSL is pretty computationally intensive and should not be used to transmit large amounts of data if possible. Therefore it would be better to enable it at the checkout stage, where the user would be transmitting sensitive information.
A: There is one major downside to a full HTTPS site, and it's not the speed (that's OK).
It is very hard to run YouTube embeds, "Like" boxes, etc. without triggering the insecure-content warning.
We have been running a fully forced-HTTPS website and shop for two years now, and this is the biggest drawback. We managed to get YouTube to work, but "AddThis" is still a big challenge. And if they change anything in the protocol, it could be that all our YouTube movies go blank...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
}
|
Q: Problem calling stored procedure from another stored procedure via classic ASP We have a classic ASP application that simply works and we have been loathe to modify the code lest we invoke the wrath of some long-dead Greek gods.
We recently had the requirement to add a feature to an application. The feature implementation is really just a database operation and requires minimal change to the UI.
I changed the UI and made the minor modification to submit a new data value to the sproc call (sproc1).
In sproc1 that is called directly from ASP, we added a new call to another sproc that happens to be located on another server, sproc2.
Somehow, this does not work via our ASP app, but works in SQL Management Studio.
Here's the technical details:
*
*SQL 2005 on both database servers.
*Sql Login is authenticating from the ASP application to SQL 2005 Server 1.
*Linked server from Server 1 to Server 2 is working.
*When executing sproc1 from SQL Management Studio - works fine. Even when credentialed as the same user our code uses (the application sql login).
*sproc2 works when called independently of sproc1 from SQL Management Studio.
*VBScript (ASP) captures an error which is emitted in the XML back to the client. Error number is 0, error description is blank. Both from the ADODB.Connection object and from whatever Err.Number/Err.Description yields in VBScript from the ASP side.
So without any errors, nor any reproducibility (i.e. through SQL Mgmt Studio) - does anyone know the issue?
Our current plan is to break down and dig into the code on the ASP side and make a completely separate call to Server 2.sproc2 directly from ASP rather than trying to piggy-back through sproc1.
A: Have you got SET NOCOUNT ON in both stored procedures? I had a similar issue once, and whilst I can't remember exactly how I solved it, I know that had something to do with it!
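For illustration, a minimal sketch of what that looks like (the procedure, database, and server names here are invented, not from the question):
CREATE PROCEDURE dbo.sproc1
AS
BEGIN
    SET NOCOUNT ON;                  -- suppress "N rows affected" messages
    EXEC Server2.SomeDb.dbo.sproc2;  -- linked-server call; names assumed
END
The relevance is that ADO can surface those informational rowcount messages as extra, empty resultsets, which classic ASP code frequently trips over.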
A: You could be suffering from the double-hop problem
The double-hop issue is when the ASP/X page tries to use resources that are located on a server that is different from the IIS server.
Windows NT Challenge/Response does not support double-hop impersonations (in that once passed to the IIS server, the same credentials cannot be passed to a back-end server for authentication).
You should verify the attempted second connection using SQL Profiler.
Note that with your manual testing you are not authenticating via IIS. It's only when you initiate the sql via the ASP/X page that this problem manifests.
More resources:
*
*http://support.microsoft.com/kb/910449
*http://support.microsoft.com/kb/891031
*http://support.microsoft.com/kb/810572
A: I had a similar problem and I solved it by setting nocount on and removing print commands.
A: Example code might help :) Are you trying to return two tables from the stored procedure; I don't think ADO 2.6 can handle multiple tables being returned.
A: I did consider that (double-hop), but what is the difference between a sproc-in-a-sproc call like I am referring to vs. a typical cross-server join via INNER JOIN? Both would be executed on Server1, using the Linked Server credentials, and authenticating to Server 2.
Can anyone confirm that calling a sproc cross-server is different than doing a join on data tables? And why?
If the Linked Server config is a sql account - is that considered a double-hop (since what you refer to is NTLM double-hops?)
In terms of whether multiple resultsets are coming back - no. Both Server1.Sproc1 and Server2.Sproc2 would be "ExecuteNonQuery()" in the .net world and return nothing (no resultsets and no return values).
A: Try to check the permissions to the database for the user specified in the connection string.
Use the same user name in the connection string to log in to the database while using sql mgmt studio.
Create a temporary table to write the intermediate values and exceptions, since it can be an effective way of debugging your application.
A: Can I just check: You made the addition of sproc2? Prior to that it was working fine for ages.
Could you not change where you call sproc2 from? Rather than calling it from inside sproc1, can you call it from the ASP? That way you control the authentication to SQL in the code, and don't have to rely on setting up any trusts or shared remote authentication on the servers.
A: How is your linked server set up? You generally have some options as to how it authenticates to the remote server, which include logging in as the currently logged in user or specifying a SQL login to always use. Have you tried setting it to always use a specific account? That should eliminate any possible permissions issues in calling the remote procedure...
A: My first reaction is that this might not be an issue of calling cross-server, but one of calling a second proc from a first, and that this might be what's acting differently in the two different environments.
My first question is this: what happens if you remove the cross-server aspect from the equation? If you could set up a test system where your first proc calls your second proc, but the second proc is on the same server and/or in the same database, do you still get the same problem?
Along these same lines: In my experience, when the application and SSMS have gotten different results like that, it has often been an issue of the stored procedures' settings. It could be, as Luke says, NOCOUNT. I've had this sort of thing happen from extraneous PRINT statements in the code, although I seem to remember the PRINTed value becoming part of the error description (very counterintuitively).
If anything is returned in the Messages window when you run this in SSMS, find out where it is coming from and make it stop. I would have to look up the technical terms, but my recollection is that different querying environments have different sensitivities to "errors", and that a default connection via SSMS will not throw an error at certain times when an ADO connection from a scripting language will.
One final thought: in case it is an environment thing, try different settings on your ASP page's connection string. E.g., if you have an OLEDB connection, try ODBC. Try the native and non-native SQL Server drivers. Check out what connection string options your provider supports, and try any of them that seem like they might be worth trying.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Unit testing MFC UI applications? How do you unit test a large MFC UI application?
We have a few large MFC applications that have been in development for many years, we use some standard automated QA tools to run basic scripts to check fundamentals, file open etc. These are run by the QA group post the daily build.
But we would like to introduce procedures such that individual developers can build and run tests against dialogs, menus, and other visual elements of the application before submitting code to the daily build.
I have heard of such techniques as hidden test buttons on dialogs that only appear in debug builds, are there any standard toolkits for this.
Environment is C++/C/FORTRAN, MSVC 2005, Intel FORTRAN 9.1, Windows XP/Vista x86 & x64.
A: I realize this is a dated question, but for those of us who still work with MFC, the Microsoft C++ Unit Testing Framework in VS2012 works well.
The General Procedure:
*
*Compile your MFC Project as a static library
*Add a new Native Unit Test Project to your solution.
*In the Test Project, add your MFC Project as a Reference.
*In the Test Project's Configuration Properties, add the Include directories for your header files.
*In the Linker input options, add your MFC project's .lib along with nafxcwd.lib and libcmtd.lib.
*Under 'Ignore Specific Default Libraries' add nafxcwd.lib;libcmtd.lib;
*Under General add the location of your MFC exported lib file.
The question at https://stackoverflow.com/questions/1146338/error-lnk2005-new-and-delete-already-defined-in-libcmtd-libnew-obj has a good description of why you need nafxcwd.lib and libcmtd.lib.
The other important thing to check in legacy projects: under General Configuration Properties, make sure both projects are using the same 'Character Set'. If your MFC project is using a Multi-Byte Character Set, you'll need the MS Test project to do so as well.
A: Though not perfect, the best I have found for this is AutoIt http://www.autoitscript.com/autoit3
"AutoIt v3 is a freeware BASIC-like scripting language designed for automating the Windows GUI and general scripting. It uses a combination of simulated keystrokes, mouse movement and window/control manipulation in order to automate tasks in a way not possible or reliable with other languages (e.g. VBScript and SendKeys). AutoIt is also very small, self-contained and will run on all versions of Windows out-of-the-box with no annoying "runtimes" required!"
This works well when you have access to the source code of the application being driven, because you can use the resource ID numbers of the controls you want to drive. In this way you do not have to worry about simulated mouse clicks on particular pixels. Unfortunately, in a legacy application, you may well find that the resource IDs are not unique, which may cause problems. However, it is very straightforward to change the IDs to be unique and rebuild.
The other issue is that you will encounter timing problems. I do not have a tried and true solution for these. Trial and error is what I have used, but this is clearly not scalable. The problem is that the AutoIt script must wait for the test application to respond to a command before the script issues the next command or checks for the correct response. Sometimes it is not easy to find a convenient event to wait and watch for.
My feeling is that, in developing a new application, I would insist on a consistent way to signal "READY". This would be helpful to the human users as well as test scripts! This may be a challenge for a legacy application, but perhaps you can introduce it at problematic points and slowly spread it to the entire application as maintenance continues.
A: Although it cannot handle the UI side, I unit test MFC code using the Boost Test library. There is a Code Project article on getting started:
Designing Robust Objects with Boost
A: It depends on how the App is structured. If logic and GUI code is separated (MVC) then testing the logic is easy. Take a look at Michael Feathers "Humble Dialog Box" (PDF).
EDIT: If you think about it: You should very carefully refactor if the App is not structured that way. There is no other technique for testing the logic. Scripts which simulate clicks are just scratching the surface.
It is actually pretty easy:
Assume your control/window/whatever changes the contents of a listbox when the user clicks a button and you want to make sure the listbox contains the right stuff after the click.
*
*Refactor so that there is a separate list holding the items for the listbox to show. The items are stored in that list and are not extracted from wherever your data comes from. The code that makes the listbox list things knows only about the new list.
*Then you create a new controller object which will contain the logic code. The method that handles the button click only calls mycontroller->ButtonWasClicked(). It does not know about the listbox or anything else.
*MyController::ButtonWasClicked() does what needs to be done for the intended logic, prepares the item list and tells the control to update. For that to work you need to decouple the controller and the control by creating an interface (pure virtual class) for the control. The controller knows only an object of that type, not the control itself.
That's it. The controller contains the logic code and knows the control only via the interface. Now you can write regular unit tests for MyController::ButtonWasClicked() by mocking the control. If you have no idea what I am talking about, read Michael's article. Twice. And again after that.
(Note to self: must learn not to blather that much)
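To make the shape of this concrete, here is a bare-bones C++ sketch of the split described above (all names are illustrative, not from any real MFC project):
#include <string>
#include <vector>

// Pure virtual view interface: the controller only ever sees this.
struct IItemView {
    virtual ~IItemView() {}
    virtual void ShowItems(const std::vector<std::string>& items) = 0;
};

// All testable logic lives here; a unit test passes in a mock IItemView.
class MyController {
public:
    explicit MyController(IItemView& view) : view_(view) {}
    void ButtonWasClicked() {
        std::vector<std::string> items;
        items.push_back("first");    // real logic would build this list
        items.push_back("second");
        view_.ShowItems(items);      // tell the (mocked or real) view to update
    }
private:
    IItemView& view_;
};

// The MFC dialog implements IItemView, and its click handler just forwards:
//   void CMyDialog::OnBnClickedButton() { controller_->ButtonWasClicked(); }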
A: Since you mentioned MFC, I assumed you have an application that would be hard to get under an automated test harness. You'll observe the best benefits of unit testing frameworks when you build tests as you write the code. But trying to add a new feature in a test-driven manner to an application which is not designed to be testable can be hard work and, frankly, frustrating.
Now what I am going to propose is definitely hard work.. but with some discipline and perseverance you'll see the benefit soon enough.
*
*First you'll need some management backing for new fixes to take a bit longer. Make sure everyone understands why.
*Next buy a copy of the WELC book. Read it cover to cover if you have the time OR if you're hard pressed, scan the index to find the symptom your app is exhibiting. This book contains a lot of good advice and is just what you need when trying to get existing code testable.
*Then for every new fix/change, spend some time and understand the area you're going to work on. Write some tests in an xUnit variant of your choice (freely available) to exercise current behavior.
*Make sure all tests pass. Write a new test which exercises needed behavior or the bug.
*Write code to make this last test pass.
*Refactor mercilessly within the area under tests to improve design.
*Repeat for every new change that you have to make to the system from here on. No exceptions to this rule.
*Now the promised land: Soon ever growing islands of well tested code will begin to surface. More and more code would fall under the automated test suite and changes will become progressively easier to make. And that is because slowly and surely the underlying design becomes more testable.
The easy way out was my previous answer. This is the difficult but right way out.
A: Well, we have one of these humongous MFC apps at the workplace. It's a gigantic pain to maintain or extend... it's a huge ball of mud now, but it rakes in the moolah. Anyway:
*
*We use Rational Robot for doing smoke tests and the like.
*Another approach that has had some success is to create a small product-specific language and script tests that use VBScript and some control-handle spying magic. Turn common actions into commands, e.g. OpenDatabase would be a command that in turn injects the required script blocks to click on Main Menu > File > "Open...". You then create Excel sheets which are a series of such commands. These commands can take parameters too. Something like a FIT test, but more work. Once you have most of the common commands identified and scripts ready, it's a matter of picking and assembling scripts (tagged by CommandIDs) to write new tests. A test runner parses these Excel sheets, combines all the little script blocks into a test script and runs it.
*
*OpenDatabase "C:\tests\MyDB"
*OpenDialog "Add Model"
*AddModel "M0001", "MyModel", 2.5, 100
*PressOK
*SaveDatabase
HTH
A: Actually we have been using Rational Team Test, then Robot, but in recent discussions with Rational we discovered they have no plans to support native x64 applications, focusing more on .NET, so we decided to switch automated QA tools. This is great, but licensing costs don't allow us to enable it for all developers.
All our applications support a COM API for scripting, which we regression test via VB, but this tests the API, not the application as such.
Ideally I would be interested on how people integrate cppunit and similar unit testing frameworks into the application at a developer level.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107591",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
}
|
Q: Point Sequence Interpolation Given an arbitrary sequence of points in space, how would you produce a smooth continuous interpolation between them?
2D and 3D solutions are welcome. Solutions that produce a list of points at arbitrary granularity and solutions that produce control points for bezier curves are also appreciated.
Also, it would be cool to see an iterative solution that could approximate early sections of the curve as it received the points, so you could draw with it.
A: The Catmull-Rom spline is guaranteed to pass through all the control points. I find this to be handier than trying to adjust intermediate control points for other types of splines.
This PDF by Christopher Twigg has a nice brief introduction to the mathematics of the spline. The best summary sentence is:
Catmull-Rom splines have C1 continuity, local control, and interpolation, but do not lie within the convex hull of their control points.
Said another way, if the points indicate a sharp bend to the right, the spline will bank left before turning to the right (there's an example picture in that document). The tightness of those turns is controllable, in this case using his tau parameter in the example matrix.
Here is another example with some downloadable DirectX code.
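For reference, a small Python sketch of evaluating one uniform Catmull-Rom segment between p1 and p2 (purely illustrative; this is the standard basis with the tension fixed at 0.5):
def catmull_rom(p0, p1, p2, p3, t):
    """Point on the segment between p1 and p2 at t in [0, 1]; points are tuples."""
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * (2 * b
               + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t2
               + (-a + 3 * b - 3 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3)  # per-coordinate evaluation
    )

pts = [(0, 0), (1, 2), (3, 3), (4, 0)]
curve = [catmull_rom(*pts, t / 10.0) for t in range(11)]  # 11 samples on one segment
Sliding a window of four control points along the input sequence traces the full interpolating curve, which is what gives the spline its local control.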
A: One way is the Lagrange polynomial, which is a method for producing a polynomial which will go through all given data points.
During my first year at university, I wrote a little tool to do this in 2D, and you can find it on this page; it is called Lagrange solver. Wikipedia's page also has a sample implementation.
How it works is thus: you have an n-order polynomial, p(x), where n is the number of points you have. It has the form a_n x^n + a_(n-1) x^(n-1) + ... + a_0, where _ is subscript and ^ is power. You then turn this into a set of simultaneous equations:
p(x_1) = y_1
p(x_2) = y_2
...
p(x_n) = y_n
You convert the above into an augmented matrix, and solve for the coefficients a_0 ... a_n. Then you have a polynomial which goes through all the points, and you can now interpolate between the points.
Note however, this may not suit your purpose as it offers no way to adjust the curvature etc - you are stuck with a single solution that can not be changed.
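A tiny sketch of the same idea in its direct form, which avoids the matrix solve entirely (Python; the x values must be distinct):
def lagrange(xs, ys, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)  # the basis polynomial L_i(x)
        total += term
    return total

print(lagrange([0.0, 1.0, 2.0], [1.0, 3.0, 2.0], 1.5))  # value between the samples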
A: You should take a look at B-splines. Their advantage over Bezier curves is that each part is only dependent on local points. So moving a point has no effect on parts of the curve that are far away, where "far away" is determined by a parameter of the spline.
The problem with the Lagrange polynomial is that adding a point can have extreme effects on seemingly arbitrary parts of the curve; there's no "localness" as described above.
A: Have you looked at the Unix spline command? Can that be coerced into doing what you want?
A: Unfortunately, Lagrange and other forms of polynomial interpolation will not work on an arbitrary set of points. They only work on sets that are ordered along one dimension, e.g. x:
x_i < x_(i+1)
For an arbitrary set of points, e.g. an aeroplane flight path where each point is a (longitude, latitude) pair, you will be better off simply modelling the aeroplane's journey with its current longitude, latitude and velocity. By adjusting the rate at which the aeroplane can turn (its angular velocity) depending on how close it is to the next waypoint, you can achieve a smooth curve.
The resulting curve would not be mathematically significant, nor would it give you bezier control points. However, the algorithm would be computationally simple regardless of the number of waypoints and could produce an interpolated list of points at arbitrary granularity. It would also not require you to provide the complete set of points up front; you could simply add waypoints to the end of the set as required.
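A rough Python sketch of that steering loop (all constants are arbitrary; this is the idea, not a tuned implementation):
import math

def step(pos, heading, target, speed=1.0, max_turn=0.2):
    desired = math.atan2(target[1] - pos[1], target[0] - pos[0])
    # smallest signed angle from the current heading to the desired heading
    diff = (desired - heading + math.pi) % (2 * math.pi) - math.pi
    heading += max(-max_turn, min(max_turn, diff))  # clamp the turn rate
    new_pos = (pos[0] + speed * math.cos(heading),
               pos[1] + speed * math.sin(heading))
    return new_pos, heading
Calling step repeatedly, and advancing to the next waypoint once close enough, traces a smooth path through the waypoint list.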
A: There are several algorithms for interpolating (and extrapolating) between an arbitrary (but finite) set of points. You should check out Numerical Recipes; it also includes C++ implementations of those algorithms.
A: I ran into the same problem and implemented it with some friends the other day. I'd like to share the example project on GitHub.
https://github.com/johnjohndoe/PathInterpolation
Feel free to fork it.
A: Google "orthogonal regression".
Whereas least-squares techniques try to minimize vertical distance between the fit line and each f(x), orthogonal regression minimizes the perpendicular distances.
Addendum
In the presence of noisy data, the venerable RANSAC algorithm is worth checking out too.
A: In the 3D graphics world, NURBS are popular. Further info is easily googled.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Is there still any reason to learn AWK? I am constantly learning new tools, even old fashioned ones, because I like to use the right solution for the problem.
Nevertheless, I wonder if there is still any reason to learn some of them. awk for example is interesting to me, but for simple text processing, I can use grep, cut, sed, etc. while for complex ones, I'll go for Python.
Now I don't mean that it's not a powerful and handy tool. But since it takes time and energy to learn a new tool, is it worth it?
A: I use AWK occasionally for dealing with HTML. For instance, this code translates tables to csv files:
BEGIN {s=""; FS="\n"}
/<td/ { gsub(/<[^>]*>/, ""); s=(s ", " $1);}
/<tr|<TR/ { print s; s="" }
Which is great if you're screen scraping. Actually, it might be the case that I love AWK because it allows me to build the wrong solution to problems so quickly :) more examples. It's also mentioned in Jon Bentley's lovely Programming Pearls.
A: 6 years after asking this question I can now answer with certainty: no, learning awk is not worth it.
Basic tasks are handled by basic bash commands, or even GUI tools easily. More complex tasks will be easily tackled with modern dynamic languages such as Python (fav or mine) or Ruby.
You should definitely learn a modern dynamic scripting language, as it will help you in so many tasks (web, admin, data crunching, automation, etc). And once you do, learning a tool such as awk becomes completely useless; it will save you at best a few seconds every month.
A: I do use awk every so often. It's good for very simple text shuffling in the middle of a pipeline; it fills a very narrow niche right between not needing it at all and needing to whip out Perl/Python/whatever.
I wouldn't advise you spend a whole lot of time on it, but it might come in handy to know the basics of the syntax -- at least enough that you can consult the manual quickly should you ever want to use it.
A: Learning AWK was invaluable for me in my last contract working on an embedded Linux system on which neither Perl nor most other scripting languages were installed.
A: If you already know and use sed, you might as well pick up at least a bit of awk. They can be piped together for some pretty powerful tricks. Always impresses the audience.
A: Most awk one liners can be achieved with Perl one liners - if you choose to get into a Perl one liner mindset. Or, just use Perl three liners :)
If you're maintaining shell scripts written by someone who liked awk, then clearly, you're going to need to learn awk.
Even if there's no practical need, if you already know regex it won't take long to pick up the basics, and it's fun to see how things were designed back then. It's rather elegant.
A: Computerworld recently did an interview with Alfred V. Aho (one of the three creators of AWK) about AWK. It's quite an interesting read, so maybe you'll find some hints in it as to why it's a good idea to learn AWK.
A: The only reason I use awk is the auto-splitting:
awk '{print $3}' < file.in
This prints the third whitespace-delimited field in file.in. It's a bit easier than:
tr -s ' ' < file.in | cut -d' ' -f3
A: awk has a very good utility-to-difficulty ratio, and "simple awk" works in every Unix/Linux/macOS (and it can be installed on other systems too).
It was designed in a golden age when people hated typing, so scripts can be very, very short and fast to write. I will try to install mawk, a fast implementation that allegedly accelerates computation about 9 times; awk/gawk is rather slow, so if you want to use it instead of R etc., you may want mawk.
A: I think awk is great if your file contains columns/fields. I use it when processing/analyzing a particular column in a multicolumn file. Or if I want to add/delete a particular column(s).
e.g.
awk -F '\t' '{ if ($2 > $3) print; }' <filename>
will print a line only if the 2nd column value in a tab-separated file is greater than the 3rd column value. (Note that the \t must be quoted, or the shell will strip the backslash before awk sees it.)
Of course I could use Perl or Python, but awk makes it so much simpler with a concise single line command.
Also learning awk is pretty low-cost. You can learn awk basics in less than an hour, so it's not as much effort as learning any other programming/scripting language.
A: Nope.
Even though it might be interesting, you can do everything that awk can do using other, more powerful tools such as Perl.
Spend your time learning those more powerful tools - and only incidentally pick up some awk along the way.
A: It's useful mostly if you occasionally have to parse log files for data, or the output of programs, while shell scripting, because things that are very easy to achieve in awk would take you a few more lines of code in Python.
It certainly has more power than that, but this seems to be tasks most people use it for.
A: Of course: I'm working in an environment where the only available languages are: (some shitty language which generates COBOL, OMG, OMG), bash (an old version), perl (which I don't master yet), sed, awk, and some other command line utilities. Knowing awk has saved me several hours (and has generated several text processing tasks from my colleagues - they come to me at least three times a day).
A: If you quickly learn the basics of awk, you can indeed do amazing things on the command line.
But the real reason to learn awk is to have an excuse to read the superb book The AWK Programming Language by Aho, Kernighan, and Weinberger.
The AWK Programming Language at archive.org
You would think, from the name, that it simply teaches you awk. Actually, that is just the beginning. Launching into the vast array of problems that can be tackled once one is using a concise scripting language that makes string manipulation easy — and awk was one of the first — it proceeds to teach the reader how to implement a database, a parser, an interpreter, and (if memory serves me) a compiler for a small project-specific computer language! If only they had also programmed an example operating system using awk, the book would have been a fairly complete survey introduction to computer science!
Famously clear and concise, like the original C Language book, it also is a wonderful example of friendly technical writing done right. Even the index is a piece of craftsmanship.
Awk? If you know it, you'll use it at the command-line occasionally, but for anything larger you'll feel trapped, unable to access the wider features of your system and the Internet that something like Python provides access to. But the book? You'll always be glad you read it!
A: I think it depends on the environment you find yourself in. If you are a *nix person, then knowing awk is a Good Thing. The only other scripting environment that can be found on virtually every *nix is sh. So while grep, sed, etc can surely replace awk on a modern mainstream linux distro, when you move to more exotic systems, knowing a little awk is going to be Real Handy.
awk can also be used for more than just text processing. For example one of my supervisors writes astronomy code in awk - that is how utterly old school and awesome he is. Back in his days, it was the best tool for the job... and now even though his students like me use python and what not, he sticks to what he knows and works well.
In closing, there is a lot of old code kicking around the world, knowing a little awk isn't going to hurt. It will also make you better *nix person :-)
A: I'd say it's probably not worth it anymore. I use it from time to time as a much more versatile stream editor than sed with searching abilities included, but if you are proficient with python I do not know a task which you would be able to finish that much faster to compensate for the time needed to learn awk.
The following command is probably the only one for which I've used awk in the last two years (it purges half-removed packages from my Debian/Ubuntu systems):
$ dpkg -l|awk '/^rc/ {print $2}'|xargs sudo dpkg -P
A: I'd say there is. For simple stuff, AWK is a lot easier on the inexperienced sysadmin / developer than Python. You can learn a little AWK and do a lot of things, whereas learning Python means learning a whole new language (yes, I know AWK is a language in a sense too).
Perl might be able to do a lot of things AWK can do, but offered the choice in this day and age I would choose Python here. So yes, you should learn AWK, but learn Python too :-)
A: I was recently trying to visualize network pcap files logging a DOS attack which amounted to over 20Gbs. I needed the timestamp and the Ip addresses. In my scenario, AWK one-liner worked fabulously and pretty fast as well. I specifically used AWK to clean the extracted files, get the ip addresses and the total packet count from those IP addresses within grouped span of time. I totally agree with what other people have written above. It depends on your needs.
A: awk is a powertool language, so you are likely going to find awk being used somewhere if you are an IT professional of any sort. If you can handle the syntax and regular expressions of grep and sed then you should have no problem picking up awk and it is probably worthwhile to.
Where I've found awk really shine is in simplifying things like processing multi-line records and mangling/interpolating multiple files simultaneously.
A: One reason NOT to learn awk is that it doesn't have non-greedy matches in regular expressions.
I have some awk code that I must now rewrite, only because I discovered while debugging that there is no such thing as a non-greedy match in awk/gawk, and thus it can't properly execute some regexes.
A: It depends on your team mates and you leader and the task you are working on.
if( team mates and leader ask to write awk ){
if( you can reject that){
if( awk code is very small){
learn little just like learn Regex
}else{
use python or even java
}
}else{
do as they ask
}
}
A: Now that PERL is ported to pretty much every significant platform, I'd say it's not worth it. It's more versatile than sed and awk together. As for auto-splitting, you can do it in perl like this:
perl -F':' -ane 'print $F[3],"\n";' /etc/passwd
EDIT: you might still want to get somewhat acquainted with awk, because some other tools are based on its philosophy of pattern-based actions (e.g. DTrace on Solaris).
A: I work in an area where the files are in column format, so awk is invaluable to me for REFORMATTING files so that different software can work together. For a non-IT profession, using awk is enough, and perfect. Nowadays computer speed is not an issue, so I can combine awk and Unix to pipe many one-liner commands into a "script". With awk's search by field and record, I can check file data very fast, instead of opening the file in "vi". I have to say awk's capability brought joy to my job; especially, I am able to assist co-workers in sorting things out quickly using awk. Amazing code to me.
A: I have been doing some coding in python at present.
But I still do not know it well enough to use it easily for simple one-off file transformations.
With awk I can quickly develop a one-line piece of code on the Unix command line that does some pretty swish transformations. Every time I use awk, the piece of code I write will be disposable and no more than a few lines long. Maybe an "if" statement and a "printf" statement here or there on the one line.
I have never written a piece of code that is more than 10 lines long with awk.
I saw some such scripts years ago.
But anything that required many lines of code, I would resort to python.
I love awk. It is a very powerful tool in combination with sed.
A: If you care at all about speed but don't want to be dealing with C/C++ or assembly, you go for awk; specifically, mawk 1.9.9.6.
It also lacks Perl's ugly syntax, Python 3's feature bloat, JavaScript's annoying UTF-16 setup, and C's memory-pointer pitfall traps.
Most of the time, for implementations of the same pseudo-code, awk only loses against specialized vectorized instructions like AVX/SSE.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "122"
}
|
Q: How can I temporarily load a font? I need to load some fonts temporarily in my program, preferably from a DLL resource file.
A: I found this with Google. I have cut & pasted the relevant code below.
You need to add the font to your resource file:
34 FONT "myfont.ttf"
The following C code will load the font from the DLL resource and release it from memory when you are finished using it.
DWORD Count;
HMODULE Module = LoadLibrary("mylib.dll");
HRSRC Resource = FindResource(Module,MAKEINTRESOURCE(34),RT_FONT);
DWORD Length = SizeofResource(Module,Resource);
HGLOBAL Address = LoadResource(Module,Resource);
HANDLE Handle = AddFontMemResourceEx(Address,Length,0,&Count);
/* Use the font here... */
RemoveFontMemResourceEx(Handle);
FreeLibrary(Module);
A: And here a Delphi version:
procedure LoadFontFromDll(const DllName, FontName: PWideChar);
var
DllHandle: HMODULE;
ResHandle: HRSRC;
ResSize, NbFontAdded: Cardinal;
ResAddr: HGLOBAL;
begin
DllHandle := LoadLibrary(DllName);
if DllHandle = 0 then
RaiseLastOSError;
ResHandle := FindResource(DllHandle, FontName, RT_FONT);
if ResHandle = 0 then
RaiseLastOSError;
ResAddr := LoadResource(DllHandle, ResHandle);
if ResAddr = 0 then
RaiseLastOSError;
ResSize := SizeOfResource(DllHandle, ResHandle);
if ResSize = 0 then
RaiseLastOSError;
if 0 = AddFontMemResourceEx(Pointer(ResAddr), ResSize, nil, @NbFontAdded) then
RaiseLastOSError;
end;
to be used like:
var
FontName: PChar;
FontHandle: THandle;
...
FontName := 'DEJAVUSANS';
LoadFontFromDll('Project1.dll' , FontName);
FontHandle := CreateFont(0, 0, 0, 0, FW_NORMAL, 0, 0, 0, DEFAULT_CHARSET,
OUT_DEFAULT_PRECIS, CLIP_DEFAULT_PRECIS, DEFAULT_QUALITY, DEFAULT_PITCH,
FontName);
if FontHandle = 0 then
RaiseLastOSError;
A: Here's some code that will load/make available the font from inside your executable (ie, the font was embedded as a resource, rather than something you had to install into Windows generally).
Note that the font is available to any application until your program gets rid of it.
I don't know how useful you'll find this, but I have used it a few times. I've never put the font into a dll (I prefer this 'embed into the exe' approach) but don't imagine it changes things too much.
procedure TForm1.FormCreate(Sender: TObject);
var
ResStream : TResourceStream;
sFileName : string;
begin
sFileName:=ExtractFilePath(Application.ExeName)+'SWISFONT.TTF';
ResStream:=nil;
try
ResStream:=TResourceStream.Create(hInstance, 'Swisfont', RT_RCDATA);
try
ResStream.SaveToFile(sFileName);
except
on E:EFCreateError Do ShowMessage(E.Message);
end;
finally
ResStream.Free;
end;
AddFontResource(PChar(sFileName));
SendMessage(HWND_BROADCAST, WM_FONTCHANGE, 0, 0);
end;
procedure TForm1.FormDestroy(Sender: TObject);
var
sFile:string;
begin
sFile:=ExtractFilePath(Application.ExeName)+'SWISFONT.TTF';
if FileExists(sFile) then
begin
RemoveFontResource(PChar(sFile));
SendMessage(HWND_BROADCAST, WM_FONTCHANGE, 0, 0);
DeleteFile(sFile);
end;
end;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: XML-RPC: best way to handle 64-bit values? So the official XML-RPC standard doesn't support 64-bit values. But in these modern times, 64-bit values are increasingly common.
How do you handle these? What XML-RPC extensions are the most common? What language bindings are there? I'm especially interested in Python and C++, but all information is appreciated.
A: Some libraries do support 64-bit extensions, but there doesn't seem to be a standard. xmlrpc-c, for example, has a so-called i8, but it doesn't work with Python (at least not by default).
I would recommend that you either:
*
*Convert the integer to a string by hand and send it as such. XML-RPC will convert it to a string anyway, so I would say this is reasonable.
*Break it into two 32-bit integers and send it as such (a sketch follows).
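A sketch of the second option in Python (the helper names are mine; note that XML-RPC's i4 is signed, so in practice the halves may need further masking or an offset to stay within range):
def split_i64(value):
    """Return (high, low) unsigned 32-bit halves of a 64-bit integer."""
    return (value >> 32) & 0xFFFFFFFF, value & 0xFFFFFFFF

def join_i64(high, low):
    return (high << 32) | low

hi, lo = split_i64(5000000000)
assert join_i64(hi, lo) == 5000000000  # round-trips on the receiving side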
A: The use of "i8" as a data-type is becoming more and more common. I recently added this to my Perl XML-RPC module (http://metacpan.org/pod/RPC::XML) in response to a request from a large group that needed it in order to work with a server written in Java. I don't know what toolkit the server used, but it was already accepting i8 as a type.
One thing that I feel still has to be addressed, is whether the "int" alias for "i4" should also accept i8, the way it currently does i4. Or, for that matter, if a parameter typed as i8 should quietly accept an input typed as i4. XML-RPC has great potential as a lightweight, low-overhead protocol handy when you don't need all the coverage of SOAP, but it is often overlooked in the religious wars between REST and SOAP.
XML-RPC is in need of some updating and revision, if we could just get the original author to permit it...
A: I don't know anything about how XMLRPC could be extended but I did find this mail about the subject:
In XML-RPC, everything is transmitted as a string, so I don't think that choice is really that bad - except of course for the additional clumsiness of invoking explicit conversion functions.
But no, XML-RPC doesn't have a data type that can represent integers above 2**32. If you can accept losing precision, you can use doubles (but you still would have to convert explicitly on the sender).
A: XML-RPC.NET has supported <i8> since release 2.5.0 (5th September 2010).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How to compute the critical path of a directed acyclic graph? What is the best (performance-wise) way to compute the critical path of a directed acyclic graph when the nodes of the graph have weights?
For example, if I have the following structure:
                Node A (weight 3)
               /                 \
      Node B (weight 4)     Node D (weight 7)
       /             \
Node E (weight 2)   Node F (weight 3)
The critical path should be A->B->F (total weight: 10)
A: I would solve this with dynamic programming. To find the maximum cost from S to T:
*
*Topologically sort the nodes of the graph as S = x_0, x_1, ..., x_n = T. (Ignore any nodes that can reach S or be reached from T.)
*The maximum cost from S to S is the weight of S.
*Assuming you've computed the maximum cost from S to x_i for all i < k, the maximum cost from S to x_k is the cost of x_k plus the maximum cost to any node with an edge to x_k.
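A compact Python sketch of that recurrence, using memoized recursion in place of an explicit topological sort (the graph is the one from the question):
from functools import lru_cache

weight = {"A": 3, "B": 4, "D": 7, "E": 2, "F": 3}
children = {"A": ["B", "D"], "B": ["E", "F"], "D": [], "E": [], "F": []}

@lru_cache(maxsize=None)
def longest_from(node):
    # max total node weight of any path starting at `node`
    best = max((longest_from(c) for c in children[node]), default=0)
    return weight[node] + best

print(longest_from("A"))  # -> 10, matching the A->B->F path in the question
The memoization plays the role of the topological order: each node's cost is computed once, so the whole thing runs in O(V+E).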
A: I have no clue about "critical paths", but I assume you mean this.
Finding the longest path in an acyclic graph with weights is only possible by traversing the whole tree and then comparing the lengths, as you never really know how the rest of the tree is weighted. You can find more about tree traversal at Wikipedia. I suggest, you go with pre-order traversal, as it's easy and straight forward to implement.
If you're going to query often, you may also wish to augment the edges between the nodes with information about the weight of their subtrees at insertion. This is relatively cheap, while repeated traversal can be extremely expensive.
But there's nothing to really save you from a full traversal if you don't do it. The order doesn't really matter, as long as you do a traversal and never go the same path twice.
A: There's a paper that purports to have an algorithm for this: "Critical path in an activity network with time constraints". Sadly, I couldn't find a link to a free copy. Short of that, I can only second the idea of modifying http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm or http://en.wikipedia.org/wiki/A*
UPDATE: I apologize for the crappy formatting—the server-side markdown engine is apparently broken.
A: My first answer, so please excuse anything non-standard by the culture of Stack Overflow.
I think the solution is simple. Just negate the weights and run the classic shortest-path algorithm for a DAG (modified for weights on vertices, of course). It should run fairly fast (time complexity of O(V+E), maybe).
It should work because when you negate the weights, the biggest one becomes the smallest, the second biggest becomes the second smallest, and so on (if a > b then -a < -b). Then running the DAG shortest path suffices: it will find the smallest path on the negated graph, and thus the longest path on the original one.
A: Try the A* method.
A* Search Algorithm
At the end, to deal with the leaves, just make all of them lead on to a final point, to set as the goal.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: What do you use when you need reliable UDP? If you have a situation where a TCP connection is potentially too slow and a UDP 'connection' is potentially too unreliable what do you use? There are various standard reliable UDP protocols out there, what experiences do you have with them?
Please discuss one protocol per reply and if someone else has already mentioned the one you use then consider voting them up and using a comment to elaborate if required.
I'm interested in the various options here, of which TCP is at one end of the scale and UDP is at the other. Various reliable UDP options are available and each brings some elements of TCP to UDP.
I know that often TCP is the correct choice but having a list of the alternatives is often useful in helping one come to that conclusion. Things like Enet, RUDP, etc that are built on UDP have various pros and cons, have you used them, what are your experiences?
For the avoidance of doubt there is no more information, this is a hypothetical question and one that I hoped would elicit a list of responses that detailed the various options and alternatives available to someone who needs to make a decision.
A: As others have pointed out, your question is very general, and whether or not something is 'faster' than TCP depends a lot on the type of application.
TCP is generally as fast as it gets for reliable streaming of data from one host to another. However, if your application does a lot of small bursts of traffic and waiting for responses, UDP may be more appropriate to minimize latency.
There is an easy middle ground. Nagle's algorithm is the part of TCP that coalesces small writes into fewer, larger segments, which cuts overhead for bulk streams at the cost of extra latency for small messages.
If you need the reliable, in-order delivery of TCP, and also the fast response of UDP, and don't need to worry about the overhead of sending many small packets, you can disable Nagle's algorithm:
int opt = 1;  /* any non-zero value enables TCP_NODELAY */
if (setsockopt(sock_fd, IPPROTO_TCP, TCP_NODELAY, (char *)&opt, sizeof(opt)))
    printf("Error disabling Nagle's algorithm.\n");
A:
If you have a situation where a TCP connection is potentially too slow and a UDP 'connection' is potentially too unreliable what do you use? There are various standard reliable UDP protocols out there, what experiences do you have with them?
The key word in your sentence is 'potentially'. I think you really need to prove to yourself that TCP is, in fact, too slow for your needs if you need reliability in your protocol.
If you want to get reliability out of UDP then you're basically going to be re-implementing some of TCP's features on top of UDP which will probably make things slower than just using TCP in the first place.
A: Protocol DCCP, standardized in RFC 4340, "Datagram Congestion Control Protocol" may be what you are looking for.
It seems implemented in Linux.
A: May be RFC 5405, "Unicast UDP Usage Guidelines for Application Designers" will be useful for you.
A: RUDP. Many socket servers for games implement something similar.
A: What about SCTP. It's a standard protocol by the IETF (RFC 4960)
It has chunking capability which could help for speed.
Update: a comparison between TCP and SCTP shows that the performances are comparable unless two interfaces can be used.
Update: a nice introductory article.
A: It's difficult to answer this question without some additional information on the domain of the problem.
For example, what volume of data are you using? How often? What is the nature of the data? (eg. is it unique, one off data? Or is it a stream of sample data? etc.)
What platform are you developing for? (eg. desktop/server/embedded)
To determine what you mean by "too slow", what network medium are you using?
But in (very!) general terms I think you're going to have to try really hard to beat tcp for speed, unless you can make some hard assumptions about the data that you're trying to send.
For example, if the data that you're trying to send is such that you can tolerate the loss of a single packet (eg. regularly sampled data where the sampling rate is many times higher than the bandwidth of the signal) then you can probably sacrifice some reliability of transmission by ensuring that you can detect data corruption (eg. through the use of a good crc)
But if you cannot tolerate the loss of a single packet, then you're going to have to start introducing the types of techniques for reliability that tcp already has. And, without putting in a reasonable amount of work, you may find that you're starting to build those elements into a user-space solution with all of the inherent speed issues to go with it.
A: ENET - http://enet.bespin.org/
I've worked with ENET as a reliable UDP protocol and written an asynchronous-sockets-friendly version for a client of mine who is using it in their servers. It works quite nicely, but I don't like the overhead that the peer-to-peer ping adds to otherwise idle connections; when you have lots of connections, pinging all of them regularly is a lot of busy work.
ENET gives you the option to send multiple 'channels' of data and for the data sent to be unreliable, reliable or sequenced. It also includes the aforementioned peer to peer ping which acts as a keep alive.
A: Did you consider compressing your data?
As stated above, we lack information about the exact nature of your problem, but compressing the data you transport could help.
A: We have some defense industry customers that use UDT (UDP-based Data Transfer) (see http://udt.sourceforge.net/) and are very happy with it. I see that it has a friendly BSD license as well.
A: Anyone who decides that the list above isn't enough and that they want to develop their OWN reliable UDP should definitely take a look at the Google QUIC spec as this covers lots of complicated corner cases and potential denial of service attacks. I haven't played with an implementation of this yet, and you may not want or need everything that it provides, but the document is well worth reading before embarking on a new "reliable" UDP design.
A good jumping off point for QUIC is here, over at the Chromium Blog.
The current QUIC design document can be found here.
A: RUDP - Reliable User Datagram Protocol
This provides:
*
*Acknowledgment of received packets
*Windowing and congestion control
*Retransmission of lost packets
*Overbuffering (Faster than real-time streaming)
It seems slightly more configurable with regard to keep-alives than ENet, but it doesn't give you as many options (i.e. all data is reliable and sequenced, not just the bits that you decide should be). It looks fairly straightforward to implement.
A: It is hard to give a universal answer to the question but the best way is probably not to stay on the line "between TCP and UDP" but rather to go sideways :).
A bit more detailed explanation:
If an application needs to get a confirmation response for every piece of data it transmits then TCP is pretty much as fast as it gets (especially if your messages are much smaller than optimal MTU for your connection) and if you need to send periodic data that gets expired the moment you send it out then raw UDP is the best choice for many reasons but not particularly for speed as well.
Reliability is a more complex question; it is somewhat relative in both cases and always depends on the specific application. For a simple example: if you unplug the internet cable from your router, then good luck reliably delivering anything with TCP. And what's even worse is that if you don't do something about it in your code, your OS will most likely just block your application for a couple of minutes before indicating an error, and in many cases this delay is just not acceptable either.
So the question with conventional network protocols is generally not really about speed or reliability but rather about convenience. It is about getting some features of TCP (automatic congestion control, automatic transmission unit size adjustment, automatic retransmission, basic connection management, ...) while also getting at least some of the important and useful features it misses (message boundaries - the most important one, connection quality monitoring, multiple streams within a connection, etc) and not having to implement it yourself.
From my point of view SCTP now looks like the best universal choice but it is not very popular and the only realistic way to reliably pass it across the Internet of today is still to wrap it inside UDP (probably using sctplib). It is also still a relatively basic and compact solution and for some applications it may still be not sufficient by itself.
As for the more advanced options, in some of the projects we used ZeroMQ and it worked just fine. This is a much more of a complete solution, not just a network protocol (under the hood it supports TCP, UDP, a couple of higher level protocols and some local IPC mechanisms to actually deliver messages). Since a couple of releases its initial developer has switched his attention to his new NanoMSG and currently the newest NNG libraries. It is not as thoroughly developed and tested and it is not very popular but someday it may change. If you don't mind the CPU overhead and some network bandwidth loss then some of the libraries might work for you. There are some other network-oriented message exchange libraries available as well.
A: You should check MoldUDP, which has been around for decades and is used by Nasdaq's ITCH market data feed. Our messaging system CoralSequencer uses it to implement a reliable multicast event-stream from a central process.
Disclaimer: I'm one of the developers of CoralSequencer
A: The best way to achieve reliability using UDP is to build the reliability into the application program itself (for example, by adding acknowledgment and retransmission mechanisms).
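To make that concrete, below is a hedged Python sketch of a stop-and-wait sender; the 4-byte sequence-number framing and the ACK-echoing receiver are assumptions for illustration, and a real protocol would also need windowing and congestion control (see the other answers):
import socket

def reliable_send(sock, dest, payload, seq, timeout=0.5, retries=5):
    # Tag the datagram with a sequence number, wait for a matching ACK,
    # and retransmit on timeout (stop-and-wait ARQ).
    sock.settimeout(timeout)
    packet = seq.to_bytes(4, "big") + payload
    for _ in range(retries):
        sock.sendto(packet, dest)
        try:
            ack, _ = sock.recvfrom(4)
            if int.from_bytes(ack, "big") == seq:
                return True   # receiver confirmed this sequence number
        except socket.timeout:
            pass              # lost packet or lost ACK: retransmit
    return False

# Hypothetical usage, assuming a receiver at 127.0.0.1:9999 that echoes
# back the 4-byte sequence number of each datagram it accepts:
# s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# reliable_send(s, ("127.0.0.1", 9999), b"hello", 1)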
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "105"
}
|
Q: Backend administration in Ruby on Rails I'd like to build a real quick and dirty administrative backend for a Ruby on Rails application I have been attached to at the last minute. I've looked at activescaffold and streamlined and think they are both very attractive and they should be simple to get running, but I don't quite understand how to set up either one as a backend administration page. They seem designed to work like standard Ruby on Rails generators/scaffolds for creating visible front ends with model-view-controller-table name correspondence.
How do you create a admin_players interface when players is already in use and you want to avoid, as much as possible, affecting any of its related files?
The show, edit and index of the original resource are not usable for the administrator.
A: Do check out active_admin at https://github.com/gregbell/active_admin.
A: I think namespaces are the solution to the problem you have here:
map.namespace :admin do |admin|
admin.resources :customers
end
Which will create routes admin_customers, new_admin_customers, etc.
Then inside the app/controller directory you can have an admin directory. Inside your admin directory, create an admin controller:
./script/generate rspec_controller admin/admin
class Admin::AdminController < ApplicationController
layout "admin"
before_filter :login_required
end
Then create an admin customers controller:
./script/generate rspec_controller admin/customers
And make this inherit from your Admin::AdminController:
class Admin::CustomersController < Admin::AdminController
This will look for views in app/views/admin/customers
and will expect a layout in app/views/layouts/admin.html.erb.
You can then use whichever plugin or code you like to actually do your administration: Streamlined, ActiveScaffold, whatever. Personally I like to use resources_controller, as it saves you a lot of time if you use a REST-style architecture, and forcing yourself down that route can save a lot of time elsewhere. Though if you inherited the application, that's a moot point by now.
A: I have used Streamlined pretty extensively.
To get Streamlined working you create your own controllers - so you can actually run it completely apart from the rest of your application, and you can even run it in a separate 'admin' folder and namespace that can be secured.
Here is the Customers controller from a recent app:
class CustomersController < ApplicationController
layout 'streamlined'
acts_as_streamlined
Streamlined.ui_for(Customer) do
exporters :csv
new_submit_button :ajax => false
default_order_options :order => "created_at desc"
list_columns :name, :email, :mobile, :comments, :action_required_yes_no
end
end
A: Use https://github.com/sferik/rails_admin.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
}
|
Q: How can I unit test responses from the webapp WSGI application in Google App Engine? I'd like to unit test responses from the Google App Engine webapp.WSGIApplication, for example request the url '/' and test that the response's status code is 200, using GAEUnit. How can I do this?
I'd like to use the webapp framework and GAEUnit, which runs within the App Engine sandbox (unfortunately WebTest does not work within the sandbox).
A: Actually WebTest does work within the sandbox, as long as you comment out
import webbrowser
in webtest/__init__.py
A: I have added a sample application to the GAEUnit project which demonstrates how to write and execute a web test using GAEUnit. The sample includes a slightly modified version of the 'webtest' module ('import webbrowser' is commented out, as recommended by David Coffin).
Here's the 'web_tests.py' file from the sample application 'test' directory:
import unittest
from webtest import TestApp
from google.appengine.ext import webapp
import index
class IndexTest(unittest.TestCase):
def setUp(self):
self.application = webapp.WSGIApplication([('/', index.IndexHandler)], debug=True)
def test_default_page(self):
app = TestApp(self.application)
response = app.get('/')
self.assertEqual('200 OK', response.status)
self.assertTrue('Hello, World!' in response)
def test_page_with_param(self):
app = TestApp(self.application)
response = app.get('/?name=Bob')
self.assertEqual('200 OK', response.status)
self.assertTrue('Hello, Bob!' in response)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: When and why should $_REQUEST be used instead of $_GET / $_POST / $_COOKIE? Question in the title.
And what happens when all 3 of $_GET[foo], $_POST[foo] and $_COOKIE[foo] exist? Which one of them gets included to $_REQUEST?
A: Sometimes you might want the same script to be called in several different ways. A form submit and an AJAX call come to mind. In most cases, however, it's better to be explicit.
Also, see http://docs.php.net/manual/en/ini.core.php#ini.request-order on how the different sources of variables overwrite each other if there is a name collision.
A: I'd say never.
If I wanted something to be set via the various methods, I'd code for each of them to remind myself that I'd done it that way - otherwise you might end up with things being overwritten without realising.
Shouldn't it work like this:
$_GET = non destructive actions (sorting, recording actions, queries)
$_POST = destructive actions (deleting, updating)
$_COOKIE = trivial settings (stylesheet preferences etc)
$_SESSION = non trivial settings (username, logged in?, access levels)
A: $_REQUEST is only a shortcut to prevent you from testing post, get and cookie if the data can come from any of these.
There are some pitfalls:
*
*data is taken from GET, POST and finally COOKIE. The last overrides the first, so be careful with that.
*REST architectures require you to separate the POST and GET semantics, you can't rely on $_REQUEST in that case.
Nevertheless, if you know what you're doing, then it's just another handy PHP trick.
I'd use it if I wanted to quickly update a var that may come from several sources, for example:
*
*In your controller, to decide what page to serve without checking if the request comes from a form action or a hypertext link.
*To check if a session is still active regardless of the way the session id is transmitted.
A: To answer the "what happens when all 3 exist" question, the answer is "it depends."
PHP auto-fills $_REQUEST based on the request_order directive (or variables_order if request_order is absent) in PHP.INI. The default is usually "GPC" which means GET is loaded first, then POST is loaded (overwriting GET if there is a collision), then cookies are loaded (overwriting get/post if there is a collision). However, you can change this directive in the PHP.INI file. For example, changing it to "CPG" makes cookies load first, then post, then get.
As far as when to use it? I'll echo the sentiment of "Never." You already don't trust the user, so why give the user more tools? As the developer, you should know where you expect the data to come from. It's all about reducing your attack surface area.
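To illustrate being explicit, a small hedged PHP sketch (the helper name is made up):
<?php
// Read a value only from the source you expect, instead of letting
// $_REQUEST silently pick whichever of GET/POST/COOKIE wins.
function post_param($name, $default = null)
{
    return isset($_POST[$name]) ? $_POST[$name] : $default;
}

// If you really do want a lookup order, make it visible in code:
$foo = isset($_POST['foo']) ? $_POST['foo']
     : (isset($_GET['foo']) ? $_GET['foo'] : null);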
A: Use it when you're not certain where the values come from, or when you accept input via both POST and GET and want to loop over all values from both methods.
A: I use POST when I don't want people to have easy access to what is being passed and I use GET when I don't mind them seeing the value in the url. I generally don't use cookies for much as I find SESSION to be fine for persisting values (although having a proper registry is the best way to utilize that).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
}
|
Q: global variables in php not working as expected I'm having trouble with global variables in php. I have a $screen var set in one file, which requires another file that calls an initSession() defined in yet another file. The initSession() declares global $screen and then processes $screen further down using the value set in the very first script.
How is this possible?
To make things more confusing, if you try to set $screen again then call the initSession(), it uses the value first used once again. The following code will describe the process. Could someone have a go at explaining this?
$screen = "list1.inc"; // From model.php
require "controller.php"; // From model.php
initSession(); // From controller.php
global $screen; // From Include.Session.inc
echo $screen; // prints "list1.inc" // From anywhere
$screen = "delete1.inc"; // From model2.php
require "controller2.php"
initSession();
global $screen;
echo $screen; // prints "list1.inc"
Update:
If I declare $screen global again just before requiring the second model, $screen is updated properly for the initSession() method. Strange.
A: Global DOES NOT make the variable global. I know it's tricky :-)
Global says that a local variable will be used as if it was a variable with a higher scope.
E.G :
<?php
$var = "test"; // this is accessible in all the rest of the code, even an included one
function foo2()
{
global $var;
echo $var; // this prints "test"
$var = 'test2';
}
global $var; // this is totally useless, unless this file is included inside a class or function
function foo()
{
echo $var; // this prints nothing, you are using a local var
$var = 'test3';
}
foo();
foo2();
echo $var; // this will print 'test2'
?>
Note that global vars are rarely a good idea. You can code 99.99999% of the time without them and your code is much easier to maintain if you don't have fuzzy scopes. Avoid global if you can.
A: You need to put "global $screen" in every function that references it, not just at the top of each file.
A: If you have a lot of variables you want to access during a task which uses many functions, consider making a 'context' object to hold the stuff:
//We're doing "foo", and we need importantString and relevantObject to do it
$fooContext = new StdClass(); //StdClass is an empty class
$fooContext->importantString = "a very important string";
$fooContext->relevantObject = new RelevantObject();
doFoo($fooContext);
Now just pass this object as a parameter to all the functions. You won't need global variables, and your function signatures stay clean. It's also easy to later replace the empty StdClass with a class that actually has relevant methods in it.
A: You must declare a variable as global before assigning values to it.
A: global $foo doesn't mean "make this variable global, so that everyone can use it". global $foo means "within the scope of this function, use the global variable $foo".
I am assuming from your example that each time, you are referring to $screen from within a function. If so you will need to use global $screen in each function.
A: The global scope spans included and required files, you don't need to use the global keyword unless using the variable from within a function. You could try using the $GLOBALS array instead.
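For example, a hedged sketch of the $GLOBALS approach applied to the question's scenario:
<?php
// $GLOBALS can be read and written from any scope without a declaration,
// unlike the global keyword, which must appear inside every function.
$screen = "list1.inc";

function initSession()
{
    echo $GLOBALS['screen'];            // prints "list1.inc"
    $GLOBALS['screen'] = "delete1.inc"; // updates the real global
}

initSession();
echo $screen; // prints "delete1.inc"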
A: It is useless till it is in the function or a class. Global means that you can use a variable in any part of program. So if the global is not contained in the function or a class there is no use of using Global
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
}
|
Q: How can I Remove .DS_Store files from a Git repository? How can I remove those annoying Mac OS X .DS_Store files from a Git repository?
A: The following worked best for me. Handled unmatched files, and files with local modifications. For reference, this was on a Mac 10.7 system running git 1.7.4.4.
Find and remove:
find . -name .DS_Store -print0 | xargs -0 git rm --ignore-unmatch -f
I also globally ignore .DS_Store across all repositories by setting a global core.excludesfile.
First, create the file (if one doesn't already exist):
touch ~/.gitignore
Then add the following line and save:
.DS_Store
Now configure git to respect the file globally:
git config --global core.excludesfile ~/.gitignore
A: I found that the following line from snipplr does best on wiping all .DS_Store, including one that has local modifications.
find . -depth -name '.DS_Store' -exec git-rm --cached '{}' \; -print
The --cached option keeps your local .DS_Store files, since they are going to be reproduced anyway.
And just like everyone mentioned above, add .DS_Store to the .gitignore file at the root of your project. Then it will no longer be in your sight (or in your repos).
A: I'm a bit late to the party, but I have a good answer.
To remove the .DS_Store files, use the following commands from a terminal window, but be very careful deleting files with 'find'. Using a specific name with the -name option is one of the safer ways to use it:
cd directory/above/affected/workareas
find . -name .DS_Store -delete
You can leave off the "-delete" if you want to simply list them before and after. That will reassure you that they're gone.
With regard to the ~/.gitignore_global advice: be careful here.
You want to place that nice file into .gitignore within the
top level of each workarea and commit it, so that anyone who clones your repo will gain the benefit of its use.
A: In some situations you may also want to ignore some files globally. For me, .DS_Store is one of them. Here's how:
git config --global core.excludesfile /Users/mat/.gitignore
(Or any file of your choice)
Then edit the file just like a repo's .gitignore. Note that I think you have to use an absolute path.
A: This will work:
find . -name "*.DS_Store" -type f -exec git-rm {} \;
It deletes all files whose names end with .DS_Store, including ._.DS_Store.
A: For some reason none of the above worked on my mac.
My solution is to run the following from the terminal:
rm .DS_Store
Then run the following command:
git pull origin master
A: If you are unable to remove the files because they have changes staged use:
git rm --cached -f *.DS_Store
A: Sometimes .DS_Store files are present in the remote repository but not visible in your local project folders. To fix this, we need to remove all cached files and add them again.
Step 1: Add this to .gitignore file.
# Ignore Mac DS_Store files
.DS_Store
**/.DS_Store
Step 2: Remove the cached files and add again using these commands.
git rm -r --cached .
git add .
git commit -am "Removed git ignored files"
git push -f origin master
A: When initializing your repository, skip the git command that contains
-u
and it shouldn't be an issue.
A: This worked for me, combo of two answers from above:
*
*$ git rm --cached -f *.DS_Store
*$ git commit -m "filter-branch --index-filter 'git rm --cached
--ignore-unmatch .DS_Store"
*$ git push origin master --force
A: Remove ignored files:
(.DS_Store)
$ find . -name .DS_Store -print0 | xargs -0 git rm --ignore-unmatch
A: Create a .gitignore file using the command touch .gitignore
and add the following line to it:
.DS_Store
Save the .gitignore file and then push it to your git repo.
A: The best way to get rid of this file forever:
Make a global .gitignore file:
echo .DS_Store >> ~/.gitignore_global
Let Git know that you want to use this file for all of your repositories:
git config --global core.excludesfile ~/.gitignore_global
That’s it! .DS_Store will be ignored in all repositories.
A: Remove existing .DS_Store files from the repository:
find . -name .DS_Store -print0 | xargs -0 git rm -f --ignore-unmatch
Add this line:
.DS_Store
to the file .gitignore, which can be found at the top level of your repository (or create the file if it isn't there already). You can do this easily with this command in the top directory:
echo .DS_Store >> .gitignore
Then commit the file to the repo:
git add .gitignore
git commit -m '.DS_Store banished!'
A: $ git commit -m "filter-branch --index-filter 'git rm --cached --ignore-unmatch .DS_Store"
$ git push origin master --force
A: add this to your file .gitignore
#Ignore folder mac
.DS_Store
save this and make commit
git add -A
git commit -m "ignore .DS_Store"
and now you ignore this for all your commits
A: The top-voted answer is awesome, but to help out rookies like me, here is how to create the .gitignore file, edit it, save it, remove the files you might have already added to git, then push the file up to Github.
Create the .gitignore file
To create a .gitignore file, you can just touch the file which creates a blank file with the specified name. We want to create the file named .gitignore so we can use the command:
touch .gitignore
Ignore the files
Now you have to add the line which tells git to ignore the DS Store files to your .gitignore. You can use the nano editor to do this.
nano .gitignore
Nano is nice because it includes instructions on how to get out of it. (Ctrl-O to save, Ctrl-X to exit)
Copy and paste some of the ideas from this Github gist which lists some common files to ignore. The most important ones, to answer this question, would be:
# OS generated files #
######################
.DS_Store
.DS_Store?
The # are comments, and will help you organize your file as it grows.
This Github article also has some general ideas and guidelines.
Remove the files already added to git
Finally, you need to actually remove those DS Store files from your directory.
Use this great command from the top voted answer. This will go through all the folders in your directory, and remove those files from git.
find . -name .DS_Store -print0 | xargs -0 git rm -f --ignore-unmatch
Push .gitignore up to Github
Last step, you need to actually commit your .gitignore file.
git status
git add .gitignore
git commit -m '.DS_Store banished!'
A: Combining benzado and webmat's answers, updating with git rm, not failing on files found that aren't in repo, and making it paste-able generically for any user:
# remove any existing files from the repo, skipping over ones not in repo
find . -name .DS_Store -print0 | xargs -0 git rm --ignore-unmatch
# specify a global exclusion list
git config --global core.excludesfile ~/.gitignore
# adding .DS_Store to that list
echo .DS_Store >> ~/.gitignore
A: Open the terminal and type "cd <ProjectPath>"
*
*Remove existing files:
find . -name .DS_Store -print0 | xargs -0 git rm -f --ignore-unmatch
*nano .gitignore
*Add this .DS_Store
*type "ctrl + x"
*Type "y"
*Enter to save file
*git add .gitignore
*git commit -m '.DS_Store removed.'
A: The best solution to tackle this issue is to Globally ignore these files from all the git repos on your system. This can be done by creating a global gitignore file like:
vi ~/.gitignore_global
Adding Rules for ignoring files like:
# Compiled source #
###################
*.com
*.class
*.dll
*.exe
*.o
*.so
# Packages #
############
# it's better to unpack these files and commit the raw source
# git has its own built in compression methods
*.7z
*.dmg
*.gz
*.iso
*.jar
*.rar
*.tar
*.zip
# Logs and databases #
######################
*.log
*.sql
*.sqlite
# OS generated files #
######################
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
Now, add this file to your global git config:
git config --global core.excludesfile ~/.gitignore_global
Edit:
Removed Icons as they might need to be committed as application assets.
A: There are a few solutions to resolve this problem.
To avoid creating .DS_Store files, do not use the OS X Finder to view folders. An alternative way to view folders is to use the UNIX command line.
To remove the .DS_Store files a third-party product called DS_Store Terminator can be used.
To delete the .DS_Store files from the entire system a UNIX shell command can be used.
Launch Terminal from Applications:Utilities
At the UNIX shell prompt enter the following UNIX command:
sudo find / -name ".DS_Store" -depth -exec rm {} \;
When prompted for a password enter the Mac OS X Administrator password.
This command is to find and remove all occurrences of .DS_Store starting from the root (/) of the file system through the entire machine.
To configure this command to run as a scheduled task follow the steps below:
Launch Terminal from Applications:Utilities
At the UNIX shell prompt enter the following UNIX command:
sudo crontab -e
When prompted for a password enter the Mac OS X Administrator password.
Once in the vi editor press the letter I on your keyboard once and enter the following:
15 1 * * * root find / -name ".DS_Store" -depth -exec rm {} \;
This is called crontab entry, which has the following format:
Minute Hour DayOfMonth Month DayOfWeek User Command.
The crontab entry means that the command will be executed by the system automatically at 1:15 AM everyday by the account called root.
The command portion runs from find all the way through \;. If the system is not running at 1:15 AM, this command will not get executed.
To save the entry press the Esc key once, then press Shift + Z twice (ZZ).
Note: Information in Step 4 is for the vi editor only.
A: No need to remove .DS_Store locally
Just add it to the .gitignore file
The .gitignore file is just a text file that tells Git which files or folders to ignore in a project.
Commands
*
*nano .gitignore
*Write .DS_Store Then click CTRL+X > y > Hit Return
*git status To have a last look at your changes
*git add .gitignore
*git commit -m 'YOUR COMMIT MESSAGE'
*git push origin master
A: I had to change git-rm to git rm in the above to get it to work:
find . -depth -name '.DS_Store' -exec git rm --cached '{}' \; -print
A: Use this command to remove the existing files:
find . -name '*.DS_Store' -type f -delete
Then add .DS_Store to .gitignore
A: In case you want to remove .DS_Store files from every folder and subfolder:
In case of already committed DS_Store:
find . -name .DS_Store -print0 | xargs -0 git rm --ignore-unmatch
Ignore them by:
echo ".DS_Store" >> ~/.gitignore_global
echo "._.DS_Store" >> ~/.gitignore_global
echo "**/.DS_Store" >> ~/.gitignore_global
echo "**/._.DS_Store" >> ~/.gitignore_global
git config --global core.excludesfile ~/.gitignore_global
A: Step 1
This will remove every .DS_Store file in a directory (including subdirectories)
find . -name .DS_Store -print0 | xargs -0 git rm -f --ignore-unmatch
Step 2
Add this to .gitignore to prevent any DS_Store files in the root directory and every subdirectory from going to git!
**/.DS_Store
From the git docs:
*
*A leading "**" followed by a slash means match in all directories. For example, "**/foo" matches file or directory "foo" anywhere, the same as pattern "foo". "**/foo/bar" matches file or directory "bar" anywhere that is directly under directory "foo".
A: Delete them using git rm, and then add .DS_Store to .gitignore to stop them getting added again. You can also use BlueHarvest to stop them getting created altogether.
A: I made:
git checkout -- ../.DS_Store
(# Discarding local changes (permanently) to a file)
And it worked ok!
A: For those who have not been helped by any of the above methods - inspect your .gitignore more thoroughly. It could have some combination of rules between directories and subdirectories under which the annoying .DS_Store files end up not being ignored in certain folders. For instance, you might want to ignore gensrc folders except the ones in custom directories, so you would have the following .gitignore:
.DS_Store
gensrc
!custom/**
With this setup any/path/.DS_Store is ignored, but not custom/gensrc/.DS_Store; the fix is to move the .DS_Store entry to the bottom of the .gitignore file.
A: This method works if you are using the GitHub Desktop app.
*
*If not already, open your repository in the app (⌘O)
*From Repository Tab, choose Repository Settings.
*Go to "Ignored files" and add the file(s) you wish to be ignored.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1653"
}
|
Q: Disable output buffering Is output buffering enabled by default in Python's interpreter for sys.stdout?
If the answer is positive, what are all the ways to disable it?
Suggestions so far:
*
*Use the -u command line switch
*Wrap sys.stdout in an object that flushes after every write
*Set PYTHONUNBUFFERED env var
*sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
Is there any other way to set some global flag in sys/sys.stdout programmatically during execution?
If you just want to flush after a specific write using print, see How can I flush the output of the print function?.
A: # reopen stdout file descriptor with write mode
# and 0 as the buffer size (unbuffered)
import io, os, sys
try:
# Python 3, open as binary, then wrap in a TextIOWrapper with write-through.
sys.stdout = io.TextIOWrapper(open(sys.stdout.fileno(), 'wb', 0), write_through=True)
# If flushing on newlines is sufficient, as of 3.7 you can instead just call:
# sys.stdout.reconfigure(line_buffering=True)
except TypeError:
# Python 2
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
Credits: "Sebastian", somewhere on the Python mailing list.
A: You can also run Python with stdbuf utility:
stdbuf -oL python <script>
A: Yes, it is.
You can disable it on the commandline with the "-u" switch.
Alternatively, you could call .flush() on sys.stdout on every write (or wrap it with an object that does this automatically)
A: From Magnus Lycka answer on a mailing list:
You can skip buffering for a whole
python process using python -u
or by
setting the environment variable
PYTHONUNBUFFERED.
You could also replace sys.stdout with
some other stream like wrapper which
does a flush after every call.
class Unbuffered(object):
def __init__(self, stream):
self.stream = stream
def write(self, data):
self.stream.write(data)
self.stream.flush()
def writelines(self, datas):
self.stream.writelines(datas)
self.stream.flush()
def __getattr__(self, attr):
return getattr(self.stream, attr)
import sys
sys.stdout = Unbuffered(sys.stdout)
print 'Hello'
A: One way to get unbuffered output would be to use sys.stderr instead of sys.stdout or to simply call sys.stdout.flush() to explicitly force a write to occur.
You could easily redirect everything printed by doing:
import sys; sys.stdout = sys.stderr
print "Hello World!"
Or to redirect just for a particular print statement:
print >>sys.stderr, "Hello World!"
To reset stdout you can just do:
sys.stdout = sys.__stdout__
A: You can create an unbuffered file and assign this file to sys.stdout.
import sys
myFile= open( "a.log", "w", 0 )
sys.stdout= myFile
You can't magically change the system-supplied stdout, since it's supplied to your Python program by the OS.
A: You can also use fcntl to change the file flags on the fly.
fl = fcntl.fcntl(fd.fileno(), fcntl.F_GETFL)
fl |= os.O_SYNC # or os.O_DSYNC (if you don't care the file timestamp updates)
fcntl.fcntl(fd.fileno(), fcntl.F_SETFL, fl)
A: It is possible to override just the write method of sys.stdout with one that calls flush. A suggested implementation is below.
def write_flush(args, w=stdout.write):
w(args)
stdout.flush()
The default value of the w argument keeps a reference to the original write method. After write_flush is defined, the original write can be overridden:
stdout.write = write_flush
The code assumes that stdout is imported this way: from sys import stdout.
A: This relates to Cristóvão D. Sousa's answer, but I couldn't comment yet.
A straight-forward way of using the flush keyword argument of Python 3 in order to always have unbuffered output is:
import functools
print = functools.partial(print, flush=True)
afterwards, print will always flush the output directly (unless flush=False is given).
Note, (a) that this answers the question only partially as it doesn't redirect all the output. But I guess print is the most common way for creating output to stdout/stderr in python, so these 2 lines cover probably most of the use cases.
Note (b) that it only works in the module/script where you defined it. This can be good when writing a module as it doesn't mess with the sys.stdout.
Python 2 doesn't provide the flush argument, but you could emulate a Python 3-type print function as described here https://stackoverflow.com/a/27991478/3734258 .
A: A variant that works without crashing (at least on win32; python 2.7, ipython 0.12) when called subsequently (multiple times):
def DisOutBuffering():
if sys.stdout.name == '<stdout>':
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
if sys.stderr.name == '<stderr>':
sys.stderr = os.fdopen(sys.stderr.fileno(), 'w', 0)
A: (I've posted a comment, but it got lost somehow. So, again:)
*
*As I noticed, CPython (at least on Linux) behaves differently depending on where the output goes. If it goes to a tty, then the output is flushed after each '\n'
If it goes to a pipe/process, then it is buffered and you can use the flush() based solutions or the -u option recommended above.
*Slightly related to output buffering:
If you iterate over the lines in the input with
for line in sys.stdin:
...
then the for implementation in CPython will collect the input for a while and then execute the loop body for a bunch of input lines. If your script is about to write output for each input line, this might look like output buffering but it's actually batching, and therefore, none of the flush(), etc. techniques will help that.
Interestingly, you don't have this behaviour in pypy.
To avoid this, you can use
while True:
line=sys.stdin.readline()
...
A: I would rather put my answer in How to flush output of print function? or in Python's print function that flushes the buffer when it's called?, but since they were marked as duplicates of this one (which I do not agree with), I'll answer it here.
Since Python 3.3, print() supports the keyword argument "flush" (see documentation):
print('Hello World!', flush=True)
A: def disable_stdout_buffering():
# Appending to gc.garbage is a way to stop an object from being
# destroyed. If the old sys.stdout is ever collected, it will
# close() stdout, which is not good.
gc.garbage.append(sys.stdout)
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
# Then this will give output in the correct order:
disable_stdout_buffering()
print "hello"
subprocess.call(["echo", "bye"])
Without saving the old sys.stdout, disable_stdout_buffering() isn't idempotent, and multiple calls will result in an error like this:
Traceback (most recent call last):
File "test/buffering.py", line 17, in <module>
print "hello"
IOError: [Errno 9] Bad file descriptor
close failed: [Errno 9] Bad file descriptor
Another possibility is:
def disable_stdout_buffering():
fileno = sys.stdout.fileno()
temp_fd = os.dup(fileno)
sys.stdout.close()
os.dup2(temp_fd, fileno)
os.close(temp_fd)
sys.stdout = os.fdopen(fileno, "w", 0)
(Appending to gc.garbage is not such a good idea because it's where unfreeable cycles get put, and you might want to check for those.)
A: The following works in Python 2.6, 2.7, and 3.2:
import os
import sys
buf_arg = 0
if sys.version_info[0] == 3:
os.environ['PYTHONUNBUFFERED'] = '1'
buf_arg = 1
sys.stdout = os.fdopen(sys.stdout.fileno(), 'a+', buf_arg)
sys.stderr = os.fdopen(sys.stderr.fileno(), 'a+', buf_arg)
A: Yes, it is enabled by default. You can disable it by using the -u option on the command line when calling python.
A: In Python 3, you can monkey-patch the print function, to always send flush=True:
_orig_print = print
def print(*args, **kwargs):
_orig_print(*args, flush=True, **kwargs)
As pointed out in a comment, you can simplify this by binding the flush parameter to a value, via functools.partial:
print = functools.partial(print, flush=True)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "646"
}
|
Q: How to overcome StackOverflowException bypassing unhandled exception handling in .NET After hitting a few StackOverflowExceptions in .NET I noticed they completely bypass the unhandled exception handlers that .NET offers (Application.ThreadException / AppDomain.UnhandledException).
This is very disturbing since we have critical cleanup code in those exception handlers.
Is there any way to overcome this?
A: There are three kinds of so-called "asynchronous exceptions": the ThreadAbortException, the OutOfMemoryException and the mentioned StackOverflowException. Those exceptions are allowed to occur at any instruction in your code.
And, there's also a way to overcome them:
The easiest is the ThreadAbortException. When the current code executes in a finally-block, ThreadAbortExceptions are kind of "moved" to the end of the finally-block. So everything in a finally-block can't be aborted by a ThreadAbortException.
To avoid an OutOfMemoryException, you have only one possibility: Do not allocate anything on the Heap. This means that you're not allowed to create any new reference-types.
To overcome the StackOverflowException, you need some help from the Framework. This help manifests in Constrained Execution Regions. The required stack is allocated before the actual code is executed, and the Framework additionally ensures that the code is already JIT-compiled and therefore available for execution.
There are three forms to execute code in Constrained Execution Regions (copied from the BCL Team Blog):
*
*ExecuteCodeWithGuaranteedCleanup, a stack-overflow safe form of a try/finally.
*A try/finally block preceded immediately by a call to RuntimeHelpers.PrepareConstrainedRegions. The try block is not constrained, but all catch, finally, and fault blocks for that try are.
*As a critical finalizer - any subclass of CriticalFinalizerObject has a finalizer that is eagerly prepared before an instance of the object is allocated.
*
*A special case is SafeHandle's ReleaseHandle method, a virtual method that is eagerly prepared before the subclass is allocated, and called from SafeHandle's critical finalizer.
You can find more at these blog posts:
Constrained Execution Regions and other errata [Brian Grunkemeyer] at the BCL Team Blog.
Joe Duffy's Weblog about Atomicity and asynchronous exception failures where he gives a very good overview over asynchronous exceptions and robustness in the .net Framework.
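As an illustration of the second form above, a minimal hedged C# sketch; this relies on the .NET Framework's CER support (later runtimes such as .NET Core ignore these annotations):
using System;
using System.Runtime.CompilerServices;
using System.Runtime.ConstrainedExecution;

class CerDemo
{
    static void Main()
    {
        // Eagerly allocates stack and JIT-compiles the catch/finally blocks
        // before the try runs (.NET Framework only).
        RuntimeHelpers.PrepareConstrainedRegions();
        try
        {
            DoWork(); // unconstrained: may still die from a stack overflow
        }
        finally
        {
            CriticalCleanup(); // constrained: far more likely to run
        }
    }

    static void DoWork() { }

    [ReliabilityContract(Consistency.WillNotCorruptState, Cer.Success)]
    static void CriticalCleanup() { /* release handles, flush state */ }
}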
A: Not really; a stack overflow (or "out of memory") exception happening within the CLR itself means something has gone critically wrong (I usually get it when I've been dumb and created a recursive property).
When this state occurs there is no way for the CLR to allocate stack space for new function calls or memory to enable it to call into the exception handlers; it's a "we must halt now" scenario.
If, however, you throw the exception yourself, your exception handlers will be called.
A: A stack overflow isn't something you can just recover from, since the runtime can't allocate more stack memory even to call your exception handler.
The only thing you can really do is track down the cause and prevent it from happening at all (e.g. be careful with recursion, and don't allocate large objects on the stack).
A: blowdart nailed it. Really just a problem with typing code too quickly.
private Thing _myThing = null;
public Thing MyThing
{
    get{
        // Typo: should return this._myThing; returning the property
        // itself recurses until the stack overflows.
        return this.MyThing;}
    set{
        this.MyThing = value;}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: C# libraries for internationalization? What libraries are there to write C# internationalized applications?
Typical functionalities that should be contained in the library:
*
*Validation of country specific data (e.g. VAT numbers, phone numbers, addresses,...)
*Validation of bank and financial coordinates (e.g. Credit Card numbers, IBAN,...)
*Language-specific functionalities (e.g. numbers to words to numbers, summarize,...)
*Language specific content filtering (e.g. swearword filtering...)
An example of such libraries in Perl would be the Internationalization/Locale section of CPAN.
What C# solutions are available?
Note: I am not looking for an introduction to the System.Globalization namespace :)
Note 2: Should I deduce that there are no options available? Is someone interested in joining forces and creating one?
Note 3: Edit to make the question appear on front page in hope of more answers. This isn't such a hard question, how is it possible that Stackers don't ever do i18n?
A: One project that is working towards a database of globalization, internationalization and localization knowledge is the Unicode Common Locale Data Repository, based on the old ICU project at IBM.
As it is a database of XML data it doesn't contain any .NET-specific code, but as a body of knowledge it is very good.
Only a smallish subset is in the .NET framework. Microsoft hasn't gone near any of the supplemental stuff, like postcode formats, number spelling (for check/cheque amounts), etc. Standard time zone names (from the Olson/tz distribution), etc. are also included, with mappings to the Windows-specific names. Some of the hierarchical locale-specific behaviours also have better support.
A: I wouldn't say that no one does i18n, but I don't know of any generic tools that can be used for every project. Maintaining a database with all of the information you are looking for would be an epic project. It sounds like what you're looking for isn't a specific C# library, but more a collection of information online that you can draw from. If you were able to find a repository of swear words in various languages (for example), it would be trivial for you to use this in C#. I think that finding a solution that wraps up all of your requirements into an easy-to-use assembly is going to be impossible to find.
A: Have a look at
http://www.microsoft.com/globaldev/getwr/dotneti18n.mspx
and
http://www.dotneti18n.com/
A: String to number and vice versa can be done as follows:
CultureInfo culture = new CultureInfo(locale);
int number = Convert.ToInt32(myString, culture.NumberFormat);
string str = Convert.ToString(myNumber, culture.NumberFormat);
As to checking VATs and addresses, I'm interested in that too; I haven't found anything useful so far.
A: Not exactly a "library", per se, but I've actually run into a great service (for pay), by a company called E4X (former client of mine).
What they provide is complete localization of your ecommerce site, including language translations, currency exchanges, local billing and handling of financial transactions including region-specific taxes etc, and more. They even deal with the logistics of physical shipping...
Worth looking into, for an ecommerce business. Let 'em know I sent you... ;-)
A: That's a huge endeavor. Let's start with one simple problem: phone numbers. Google's libphonenumber library at http://code.google.com/p/libphonenumber/ has a C# port at https://bitbucket.org/pmezard/libphonenumber-csharp with notes at http://blog.thekieners.com/2011/06/06/using-googles-libphonenumber-in-microsoft-net-with-c/. It appears to be a good library for handling both US and international numbers.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
}
|
Q: What are good sources to study the threading implementation of an XMPP application? From my understanding the XMPP protocol is based on an always-on connection where you have no immediate indication of when an XML message ends.
This means you have to evaluate the stream as it comes. This also means that, probably, you have to deal with asynchronous connections since the socket can block in the middle of an XML message, either due to message length or a connection being slow.
I would appreciate one source per answer so we can mod them up and see what's the favourite.
A: Are you wanting to deal with multiple connections at once? Good asynch socket processing is a must in that case, to avoid one thread per connection.
Otherwise, you just need an XML parser that can deal with a chunk of bytes at a time. Expat is the canonical example; if you're in Java, try XP. These types of XML parsers will fire events as soon as possible, and buffer partial stanzas until the rest arrives.
Now, to address your assertion that there is no notification when a stanza ends, that's not really true. The important thing is not to process the XML stream as if it is a sequence of documents. Use the following pseudo-code:
stanza = null
while parser has more:
switch on token type:
START_TAG:
elem = create element from parser state
if stanza is not null:
add elem as child of stanza
stanza = elem
END_TAG:
parent = parent of stanza
if parent is null:
fire OnStanza event
stanza = parent
This approach should work with an event-based or pull parser. It only requires holding on to one pointer worth of state. Obviously, you'll also need to handle attributes, character data, entity references (like &amp; and the like), and special-case the stream:stream tag, but this should get you started.
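A hedged Java sketch of the same loop using the XmlPullParser API (XPP3/kXML); element building is elided, and depth tracking stands in for the parent-pointer bookkeeping, with <stream:stream> sitting at depth 1:
import java.io.Reader;
import org.xmlpull.v1.XmlPullParser;
import org.xmlpull.v1.XmlPullParserFactory;

public class StanzaReader {
    public interface StanzaHandler { void onStanza(String name); }

    // Blocks on the reader as bytes trickle in; on a live XMPP stream
    // END_DOCUMENT effectively never arrives.
    public static void read(Reader in, StanzaHandler handler) throws Exception {
        XmlPullParser parser = XmlPullParserFactory.newInstance().newPullParser();
        parser.setInput(in);
        for (int ev = parser.next(); ev != XmlPullParser.END_DOCUMENT; ev = parser.next()) {
            // Depth 1 is <stream:stream>; an END_TAG at depth 2 closes a
            // direct child of it, i.e. one complete stanza.
            if (ev == XmlPullParser.END_TAG && parser.getDepth() == 2) {
                handler.onStanza(parser.getName());
            }
        }
    }
}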
A: Igniterealtime.org provides an open source XMPP server and client written in Java.
A: ejabberd is written in Erlang. I don't know the details of the ejabberd implementation, but one advantage of using Erlang is really inexpensive threads. I'll speculate they start a thread per XMPP connection. In Erlang terminology these would be called processes, but these are not protected-memory address spaces; they are lightweight user-space threads.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Importing/exporting history from SVN to TFS Are there any import and export tools that would let us move projects into and out of team system with full history and log? Our current SCM is SVN.
PS - Sorry, I know it's a repost, but I didn't get an answer before... :)
A: I don't know if you are still interested, but I just went through this with my current employer (my project was using SVN and they wanted to migrate it into TFS at another site).
These are the steps I used:
*
*Run svndump on your current repo, and take the file to the intended target
*Using a svn server (e.g. a local repository) import the file - for this I used VisualSVN Server.
*Checkout the SVN repository to a local directory (e.g. svn co <url> Proj_SVN)
*Run SvnBridge (from CodePlex) on the same machine
*Checkout the TFS repository to a local directory (e.g. svn co http://localhost:8080/<tfs_server>/<project_repo_path> Proj_TFS)
*Using Svn2Svn (from codeplex) I run the following: svn2svn /s:c:\temp\src\Proj_SVN /d:c:\temp\src\Proj_TFS /r:<start_rev>:<end_rev>
Depending on how many revisions, how much data you have and the speed of your network (e.g. it might be faster to run on the TFS server) it could take from 10 minutes onwards to complete each revision.
Anyway this is what I used and it worked for me (painful process though...) - your means might vary.
A: http://www.timelymigration.com/ is another alternative, had a test run that seemed to work.
A: Unfortunately, you probably didn't get an answer because there isn't a good one on offer...
I've looked into this a couple of times before, initially for the first TFS Betas.
(At the time we were eager to move away from VSS while waiting for TFS to be ready... the compromise we ended up with then was to use SVN in the interim, but used a post commit hook that kept a VSS repository in sync to allow for that migration path to TFS.)
These guys (ComponentSource) were around back then with a VSS to TFS converter, and added an SVN to TFS one, but appear to have since discontinued the product.
These guys (Kyrosoft) might have some promise, but I'm concerned that they don't post prices, and do post a customer list (of two). If anyone has experience with the product, please let us all know.
More recently the TFS Migration and Synchronization Toolkit has been released on CodePlex, but to date no one has released an SVN plugin for it (there are 66 votes for the request)
So, you can look at rolling your own plugin for the toolkit, but even then you won't get the original dates for commits, as to my knowledge the TFS team haven't allowed a mechanism for importers to set this, so all migrated revisions will have the date of migration.
(The discontinued first tool above purportedly used to allow this, but how they got around the limitation (secret API? adjusting system time? database manipulation?) I don't know.)
In the end, I suspect most teams end up deciding to just switch systems at an appropriate time (eg new version or project), and manually deal with the bifurcated history lookup for the 6-12 months it remains particularly problematic...
A: Just thinking out loud here, but does SVN support a way to "play back" its history? If there is a way to generate a complete set of SVN commands from an existing repository, then you could feed those commands to SvnBridge, which would actually be writing into TFS.
A: Use KryoSoft. ComponentSource is basically out of business.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Can Flash save content without server-side help? As far as I know, Flash has to pass info off to another external process in order to save files - POSTing to PHP or talking to an executable, right? But every once in a while I hear rumors that Flash is able to open a file, make changes, then save/write those changes, all on its own - is it possible?
A: This will be available in Flash Player 10:
Reading and Writing Local Files in Flash Player 10
http://www.mikechambers.com/blog/2008/08/20/reading-and-writing-local-files-in-flash-player-10/
Otherwise you need to use Adobe AIR, or bounce it off the server.
mike chambers
mesh@adobe.com
A: The next version of the player, Flash 10, can do this. It also has support for some other nifty stuff like simple 3D and typed arrays.
The flash player running inside AIR can also do this.
A: There are lots of security issues around the behavior you just described, so Adobe put many sandbox restrictions around file modification. Even with Flash Player 10, expect the requirement that file manipulation code be executing in response to a user action such as a mouse event.
A: There is something called a Local Shared Object, also known as a "Flash cookie", that allows you to store a limited amount of data locally on a user's computer.
A little googling turned up a few links:
*
*Documentation on the SharedObject class
*A tutorial
And I'm sure a little creative googling can turn up even more
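To make that concrete, a hedged ActionScript 3 sketch of the SharedObject class in action (the object name and stored property are arbitrary):
import flash.net.SharedObject;

// Persist a small piece of data on the user's machine, no server involved.
var so:SharedObject = SharedObject.getLocal("userPrefs");
so.data.stylesheet = "dark";    // any serializable property
so.flush();                     // ask the player to write to disk now
trace(so.data.stylesheet);      // "dark", even across sessions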
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107800",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Why is my Java program leaking memory when I call run() on a Thread object? (Jeopardy-style question, I wish the answer had been online when I had this issue)
Using Java 1.4, I have a method that I want to run as a thread some of the time, but not at others. So I declared it as a subclass of Thread, then either called start() or run() depending on what I needed.
But I found that my program would leak memory over time. What am I doing wrong?
A: This is a known bug in Java 1.4:
http://bugs.sun.com/bugdatabase/view_bug.do;jsessionid=5869e03fee226ffffffffc40d4fa881a86e3:WuuT?bug_id=4533087
It's fixed in Java 1.5 but Sun doesn't intend to fix it in 1.4.
The issue is that, at construction time, a Thread is added to a list of references in an internal thread table. It won't get removed from that list until its start() method has completed. As long as that reference is there, it won't get garbage collected.
So, never create a thread unless you're definitely going to call its start() method. A Thread object's run() method should not be called directly.
A better way to code it is to implement the Runnable interface rather than subclass Thread. When you don't need a thread, call
myRunnable.run();
When you do need a thread:
Thread myThread = new Thread(myRunnable);
myThread.start();
A: I doubt that constructing an instance of a Thread or a subclass thereof leaks memory. Firstly, there's nothing of the sorts mentioned in the Javadocs or the Java Language Specification. Secondly, I ran a simple test and it also shows that no memory is leaked (at least not on Sun's JDK 1.5.0_05 on 32-bit x86 Linux 2.6):
public final class Test {
public static final void main(String[] params) throws Exception {
final Runtime rt = Runtime.getRuntime();
long i = 0;
while(true) {
new MyThread().run();
i++;
if ((i % 100) == 0) {
System.out.println((i / 100) + ": " + (rt.freeMemory() / 1024 / 1024) + " " + (rt.totalMemory() / 1024 / 1024));
}
}
}
static class MyThread extends Thread {
private final byte[] tmp = new byte[10 * 1024 * 1024];
public void run() {
System.out.print(".");
}
}
}
EDIT: Just to summarize the idea of the test above. Every instance of the MyThread subclass of a Thread references its own 10 MB array. If instances of MyThread weren't garbage-collected, the JVM would run out of memory pretty quickly. However, running the test code shows that the JVM is using a small constant amount of memory regardless of the number of MyThreads constructed so far. I claim this is because instances of MyThread are garbage-collected.
A: Let's see if we could get nearer to the core of the problem:
If you start your program (let's say) 1000 times using start(), then 1000 times using run() in a thread, do both lose memory? If so, then your algorithm should be checked (i.e. for outer objects such as Vectors used in your Runnable).
If there is no such memory leak as described above, then you should investigate the starting parameters and memory usage of threads in the JVM.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
}
|
Q: How can I turn on PHP errors display on just a subfolder I don't want PHP errors to display in /html, but I want them to display in /html/beta/usercomponent. Everything is set up so that errors do not display at all. How can I get errors to show up in just that one folder (and its subfolders)?
A: The easiest way would be to control the error reporting from a .htaccess file. But this is assuming you are using Apache and the scripts in /html/beta/usercomponent are called from that directory and not included from elsewhere.
.htaccess
php_value error_reporting [int]
You will have to compose the integer value yourself from the list as described in the error_reporting documentation, since the constants like E_ERROR aren't defined when Apache interprets the .htaccess.
It's a simple bitwise flag, so a value of 12, for example, would be E_WARNING + E_PARSE + E_NOTICE.
A: You could do this by using an environment variable. This way you have more choices than just turning error reporting on/off for a special directory. In your code, wherever you want to change behaviour for a specific set of directories or running modes, check whether an environment variable is set, like this:
if ($_ENV['MY_PHP_APP_MODE'] == 'devel') {
// show errors and debugging info
} elseif ($_ENV['MY_PHP_APP_MODE'] == 'production') {
// show some cool message to the user so he won't freak out
// log the errors and send email to the admin
}
and when you are running your application in your development environment, you can set an env variable in your .htaccess file like this:
setenv MY_PHP_APP_MODE devel
or when you are in the production env:
setenv MY_PHP_APP_MODE production
The same technique applies to your situation: in directories where you want to do something special (turn on error reporting), set an env variable and check for it in your code.
A: In .htaccess:
php_value error_reporting 2147483647
This number, according to the documentation, should enable 'all' errors irrespective of version. If you want a more granular setting, manually OR the values together, or run
php -r 'echo E_ALL | E_STRICT ;'
to let php compute the value for you.
You need
AllowOverride All
in apaches master configuration to enable .htaccess files.
More Reading on this can be found here:
*
*Php/Error Reporting Flag
*Php/Error Reporting values
*Php/Different Ways of Tuning Settings
Notice: if you are using PHP-CGI instead of mod_php, this may not work as advertised; all you will get is an internal server error, and you will be left without much option other than enabling it site-wide or on a per-script basis with
error_reporting( E_ALL | E_STRICT );
or similar constructs before the error occurs.
My advice is to disable displaying errors to the user, and heavily utilize PHP's error_log feature.
display_errors = 0
error_reporting = E_ALL | E_STRICT
error_log = /var/log/php
If you have problems with this being too noisy, this is not a sign you need to just take error reporting off selectively, this is a sign somebody should fix the code.
@Roger
Yes, you can use it in a <Directory> construct in apaches configuration too, however, the .htaccess in this case is equivalent, and makes it more portable especially if you have multiple working checkout copies of the same codebase and you want to distribute this change to all of them.
If you have multiple virtual hosts, you'll want the construct in the respective virtual hosts definition, otherwise, yes
<Directory /path/to/wherever/on/filesystem>
<IfModule mod_php5.c>
php_value error_reporting 2147483647
</IfModule>
</Directory>
The Additional "ifmodule" commands are just a safety net so the above problem with apache dying if you don't have mod_php won't occur.
A: I don't believe there's a simple answer to this, but I'd certainly want to be proven wrong.
edit: turns out this can be controlled from .htaccess files. Thanks people! :)
You can use error_reporting() http://docs.php.net/manual/en/function.error-reporting.php to switch the setting on a script by script basis, though. If you happen to have a single script which is included every time at /html/beta/usercomponent, this will do the trick.
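For instance, a hedged sketch of what such a shared include could contain (the folder-wide include itself is an assumption about your setup):
<?php
// Included by every script under /html/beta/usercomponent:
// turn full error display on for this folder only.
error_reporting(E_ALL | E_STRICT);
ini_set('display_errors', '1');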
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
}
|
Q: How to display latest revision in a file? I'm wondering how you deal with displaying the release revision number when pushing new versions of your app live.
You can use $Rev$ in a file to get the latest revision, but only after you update the file.
What if I want to update a string in one file every time I change any file in the repository/directory?
Is there a way?
A: The best way to do this is to have a build script for releases that determines the revision number using svnversion or svn info and inserts it into a file. It's always helpful to have a script which:
*
*checks out a clean copy of the source into an empty directory
*uses svnversion or something similar to compute a build number
*compiles source into a product
*creates an archive (zip or tarball or whatever) of the product
*cleans up: deletes everything but the archive
Then you have a one-step process to create a release with an easily identifiable version. It also helps you avoid giving someone a build from your own working copy, which may have changes that were never checked into source control.
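For illustration, a hedged shell sketch of such a script; the repository URL and file names are hypothetical:
#!/bin/sh
set -e
REPO=http://svn.example.com/myproject/trunk
# compute a build number from the repository's HEAD revision
REV=$(svn info "$REPO" | awk '/^Revision:/ {print $2}')
rm -rf build
svn export -r "$REV" "$REPO" build            # clean copy, no .svn metadata
echo "RELEASE_REVISION = $REV" > build/version.txt
tar czf "myproject-r$REV.tar.gz" -C build .   # the only artifact we keep
rm -rf build                                  # clean up everything else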
A: There is a simple tool in TortoiseSVN named SubWCRev.exe. It takes the revision from a working-copy path and creates a file from your own template. You can use it as a pre-build command.
A: Did you try using hooks? They work on the server side only but may do the trick. Otherwise I would just call a script to update the revision, if the keywords aren't suitable for you.
A: Automatically update the file as part of building/deploying the release.
A: On the one project where I had a reason to do this, I cheated: it calls svnversion on itself when it starts up.
A: As alexander said, one way is to update the revision as part of the build process.
One method of doing this is to take your release builds from an automated build process triggered from your version control checkin, by using a tool such as buildbot.
A scenario might be to trigger the automated build using the post-hook script on your subversion repository.
This causes your buildbot to update to the most recently checked in revision.
Your build script (eg. Makefile) would use 'svnversion' (or 'svn info' and grep) to read the repository revision and write it into a header file before the build takes place.
After the successful build, automatically check this file back into the repository with a suitable comment about the release version.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107840",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Connectionstring error after encrypted using aspnet_regiis.exe I've encrypted the connectionstring in my web.config file using the steps in the link below:
http://www.codeproject.com/KB/database/WebFarmConnStringsNet20.aspx
However, whenever I call my application, it will give the following error:
Failed to decrypt using provider
'CustomProvider'. Error message from
the provider: The RSA key container
could not be opened.
The server where I perform the encryption is a 64-bit Windows Server 2003 R2 SP2. Because of that I assign the ACL to NT Authority\Network Service. Yet it still doesn't work.
Hope someone has some ideas what else do I need to check to get this working.
PS. If I used the default rsa key NetFrameworkConfigurationKey for encryption, then the connection string will not have an access problem.
A: Well, I found the source of the problem, and boy was it embarrassing. In the attribute keyContainerName, I spelled the name incorrectly.
That's it. That's what caused the problem.
Apparently, the encryption will work even if you provide an incorrect keyContainerName, which I incorrectly assumed would fail. So, once I decrypted the connection string and re-encrypted it with the right keyContainerName, it worked fine.
BTW, make sure to decrypt your existing connectionstring before correcting the keyContainerName. The aspnet_regiis.exe will complain about bad data, because the provider is now different.
A: Did you remember to add the
<configProtectedData>
to your web.config?
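For reference, a sketch of what that section might look like - the provider name matches the error message above, but the key container name and attribute values are assumptions:
<configProtectedData>
  <providers>
    <add name="CustomProvider"
         type="System.Configuration.RsaProtectedConfigurationProvider, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
         keyContainerName="MyCustomKeys"
         useMachineContainer="true" />
  </providers>
</configProtectedData>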
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107846",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Maximum lines of code permitted in a Java class? How many lines of code can a .java file contain? Does it depend on the JVM being used?
A: I believe there is a 64kb limit on bytecode size per method.
A: There is no limit on "lines of code", but there is a limit on total size. Each method has a 64kb limit.
I've only ever run into this with code generation tools.
If you are coming close to the limit, be careful. A lot of profiling and monitoring tools use byte code insertion. They can push you over the top if you're too close. What's worse is that they often alter your class files after compilation. Everything compiles and runs in your development environment, but it crashes when you turn on your monitoring tools in Test or QA.
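To see the limit in practice, here is a minimal sketch (the file and class names are made up) that generates a source file whose single method is too big; compiling the generated file with javac fails with "code too large":
import java.io.PrintWriter;

public class BigMethodGenerator {
    public static void main(String[] args) throws Exception {
        try (PrintWriter out = new PrintWriter("Huge.java")) {
            out.println("public class Huge {");
            out.println("    public static void main(String[] args) {");
            out.println("        int x = 0;");
            // Tens of thousands of statements push main() past 65535 bytes.
            for (int i = 0; i < 20000; i++) {
                out.println("        x += " + i + ";");
            }
            out.println("        System.out.println(x);");
            out.println("    }");
            out.println("}");
        }
    }
}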
A: As mentioned above, there is no limit on "lines of code" per class in Java. As a rule of thumb, though, around 200 lines is a good guideline, and try not to exceed 500 lines per class.
A: I remember actually running into this limit once in a complex JSP page in Tomcat 4 (way in the past when people were still using JSPs). The java file generated from the JSP had a method that was too big to compile, I think I had to split up the file or do some other stunt, which of course was a good idea in terms of readability anyway.
Sun's bug tracker tells me some people still have the same problem.
A: To extend upon Jonas's response, the Java Virtual Machine Specification, Section 4.8 Constraints on Java Virtual Machine Code says that:
The Java virtual machine code for a method, instance initialization method (§3.9), or class or interface initialization method (§3.9) is stored in the code array of the Code attribute of a method_info structure of a class file. This section describes the constraints associated with the contents of the Code_attribute structure.
Continuing to Section 4.8.1, Static Constraints
The static constraints on a class file are those defining the well-formedness of the file. With the exception of the static constraints on the Java virtual machine code of the class file, these constraints have been given in the previous section. The static constraints on the Java virtual machine code in a class file specify how Java virtual machine instructions must be laid out in the code array and what the operands of individual instructions must be.
The static constraints on the instructions in the code array are as follows:
...
*
*The value of the code_length item must be less than 65536.
...
So there is indeed a limit of 65535 bytes of bytecode per method (see the note below).
For more limitations to the JVM, see Section 4.10 Limitations of the Java Virtual Machine.
Note: There is apparently a problem with the design of the JVM where, if the instruction at byte 65535 is an instruction that is 1 byte long, it is not protected by an exception handler - this is listed in footnote 4 of Section 4.10.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
}
|
Q: How can I integrate Laconica update stream into SharePoint? I have Laconica (self hosted twitter) configured on my local intranet and would like to integrate the public stream into SharePoint site with a web part. How can I do this?
A: You can point an RSS Viewer web part at the laconi.ca public stream RSS feed and use this XSLT to ensure attractive output.
XSL transform:
<xsl:stylesheet xmlns:x="http://www.w3.org/2001/XMLSchema" version="1.0" exclude-result-prefixes="xsl ddwrt msxsl rssaggwrt"
xmlns:ddwrt="http://schemas.microsoft.com/WebParts/v2/DataView/runtime"
xmlns:rssaggwrt="http://schemas.microsoft.com/WebParts/v3/rssagg/runtime"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:msxsl="urn:schemas-microsoft-com:xslt"
xmlns:rssFeed="urn:schemas-microsoft-com:sharepoint:RSSAggregatorWebPart"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:rss="http://purl.org/rss/1.0/"
xmlns:atom="http://www.w3.org/2005/Atom"
xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd"
xmlns:atom2="http://purl.org/atom/ns#"
xmlns:ddwrt2="urn:frontpage:internal"
xmlns:laconica="http://laconi.ca/ont/">
<xsl:param name="rss_FeedLimit">5</xsl:param>
<xsl:param name="rss_ExpandFeed">false</xsl:param>
<xsl:param name="rss_LCID">1033</xsl:param>
<xsl:param name="rss_WebPartID">RSS_Viewer_WebPart</xsl:param>
<xsl:param name="rss_alignValue">left</xsl:param>
<xsl:param name="rss_IsDesignMode">True</xsl:param>
<xsl:template match="rdf:RDF">
<xsl:call-template name="RDFMainTemplate"/>
</xsl:template>
<xsl:template name="RDFMainTemplate" xmlns:ddwrt="http://schemas.microsoft.com/WebParts/v2/DataView/runtime" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt">
<xsl:variable name="Rows" select="rss:item"/>
<xsl:variable name="RowCount" select="count($Rows)"/>
<div class="slm-layout-main" >
<xsl:call-template name="RDFMainTemplate.body">
<xsl:with-param name="Rows" select="$Rows"/>
<xsl:with-param name="RowCount" select="count($Rows)"/>
</xsl:call-template>
</div>
</xsl:template>
<xsl:template name="RDFMainTemplate.body" xmlns:ddwrt="http://schemas.microsoft.com/WebParts/v2/DataView/runtime" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt">
<xsl:param name="Rows"/>
<xsl:param name="RowCount"/>
<xsl:for-each select="$Rows">
<xsl:variable name="CurPosition" select="position()" />
<xsl:variable name="RssFeedLink" select="$rss_WebPartID" />
<xsl:variable name="CurrentElement" select="concat($RssFeedLink,$CurPosition)" />
<xsl:if test="($CurPosition <= $rss_FeedLimit)">
<xsl:element name="div">
<xsl:if test="($CurPosition mod 2 = 1)">
<xsl:attribute name="style"><![CDATA[background-color:#F9F9F9;]]></xsl:attribute>
</xsl:if>
<xsl:element name="table">
<xsl:attribute name="cellpadding">0</xsl:attribute>
<xsl:attribute name="border">0</xsl:attribute>
<xsl:attribute name="style"><![CDATA[margin:0px;padding:0px;border-spacing:0px;background-color:transparent;]]></xsl:attribute>
<xsl:element name="tr">
<xsl:element name="td">
<xsl:attribute name="style"><![CDATA[vertical-align:top;padding:0px;background-color:transparent;]]></xsl:attribute>
<xsl:attribute name="rowspan">2</xsl:attribute>
<xsl:element name="img">
<xsl:attribute name="src"><xsl:value-of select="laconica:postIcon/@rdf:resource"/></xsl:attribute>
<xsl:attribute name="style"><![CDATA[margin:3px;height:48px;width:48px;]]></xsl:attribute>
</xsl:element>
</xsl:element>
<xsl:element name="td">
<xsl:attribute name="style"><![CDATA[vertical-align:top;padding:0px;background-color:transparent;]]></xsl:attribute>
<div>
<strong><xsl:value-of select="substring-before(rss:title, ':')"/></strong>
</div>
<div style="width:300px;overflow-x:hidden;">
<div>
<xsl:value-of select="substring-after(rss:title, ':')"/>
</div>
</div>
</xsl:element>
</xsl:element>
<xsl:element name="tr">
<xsl:element name="td">
<xsl:attribute name="style"><![CDATA[padding:0px;background-color:transparent;]]></xsl:attribute>
<xsl:element name="a">
<xsl:attribute name="href"><xsl:value-of select="rss:link"/></xsl:attribute>
<xsl:value-of select="ddwrt:FormatDate(dc:date,number($rss_LCID),15)"/>
</xsl:element>
</xsl:element>
</xsl:element>
</xsl:element>
</xsl:element>
</xsl:if>
</xsl:for-each>
</xsl:template>
</xsl:stylesheet>
A: I like the RSS idea. Another option would be to create a Laconica plugin and hook the EndNoticeSave event to push notices to SharePoint.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: What's the best source-level debugger for ColdFusion MX 2004? I have a legacy web site in ColdFusion MX 2004. I'm re-writing it in .Net, so I don't want to pay $600+ for an upgrade to the latest version of ColdFusion, nor do I want to go through the (very large) site fixing version incompatibilities.
I often have to track down and fix bugs in the site.
A source-level debugger that would let me step through the code line-by-line and examine variables would be a HUGE time-saver.
Does anyone have experience using any of the third-party ColdFusion source-level debuggers?
A: There's no such thing as ColdFusion MX 2004, neither in name nor release year (CFMX 6.1 was released in 2003, and CFMX 7 was released in 2005).
FusionDebug supports all versions of CFMX from 6.1 onwards.
You may want to investigate BlueDragon.NET, which allows you to run CFML code on the .NET platform, and may make the transition easier for you.
A: The developer versions of ColdFusion are free, so you can install the latest version, install the debugging tools into Eclipse, and run debugging for free.
The other option, if you prefer to debug on your target version of CF, is FusionDebug which supports CF 6.1 and up.
A: There are certain debuggers listed here on Charlie Arehart's site.
http://www.carehart.org/cf411/#debug
This CF411 site has a whole lot of great info on various subjects.
A: You don't seem to have marked this question as answered. The answers are offered by others, but to be clear:
If you're running a version of CF prior to CF 8, then you can get line debugging using the available commercial third-party tool, FusionDebug (www.fusiondebug.com).
If you're running CF 8, it includes line debugging, for free.
Both approaches leverage the Eclipse IDE's built-in debugging capability, hooking it up to CF.
For more on each, I've written several articles:
The ColdFusion 8 Debugger Explained: Interactive Step Debugging for CF 8
Using the CF8 Debugger
FusionDebug Explained: Interactive Step Debugging for CFML
FusionDebug Tips and Tricks
Hope those help.
A: BlueDragon.Net is interesting, but I installed it and found out that it does not do source-level debugging.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Identifying if 2 HTML pages are similar I'm trying to identify differences between a base case and a supplied case, and I'm looking for a library to tell me similarity as a percentage or something like that.
For Example:
I've 10 different HTML pages.
* All of them are 404 responses with only 1-2 lines of random content (such as the time or a quote of the day).
Now when I supply a new 404 page I want a result back such as "80% similar"; however, if I supply another page that is totally different, or the same website but with quite different content, I should get something like "20% similar".
Basically what I want to do is, when I've got a new response I want to identify if the new response is similar to these 10 pages which I supplied before.
I'm trying to solve this in .NET, A library or an algorithm recommendation would be great.
A: Rather than using a diff tool you could use a copy/paste detector (cpd). Then you can configure a threshold of how alike you want files to be.
As an aside, I have used these in the past to track down cheaters in school.
Sam
A: If you want to use a string-based solution you can give k-grams a shot (you compute all the strings of length k of consecutive chars for both files, then you compute the Jaccard distance on the resulting sets). It is a standard way to perform approximate queries in the DB world.
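For illustration, a minimal C# sketch of the k-gram/Jaccard idea (k = 5 is an arbitrary choice):
using System;
using System.Collections.Generic;
using System.Linq;

static class KGramSimilarity
{
    // All substrings of length k (the k-grams) of s.
    static HashSet<string> KGrams(string s, int k)
    {
        var grams = new HashSet<string>();
        for (int i = 0; i + k <= s.Length; i++)
            grams.Add(s.Substring(i, k));
        return grams;
    }

    // Jaccard similarity: |A intersect B| / |A union B|, a value in [0, 1].
    public static double Similarity(string a, string b, int k = 5)
    {
        var ga = KGrams(a, k);
        var gb = KGrams(b, k);
        if (ga.Count == 0 && gb.Count == 0) return 1.0;
        int common = ga.Intersect(gb).Count();
        return (double)common / (ga.Count + gb.Count - common);
    }
}
Usage would be something like KGramSimilarity.Similarity(basePage, newPage) * 100 for a percentage.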
If you are more interested in the hierarchical information embedded in the HTML file (e.g. you were talking about an immutable section) you can convert it into XHTML (for Java you have http://htmlcleaner.sourceforge.net/; I'm not into .NET but I think there are several alternatives for that env too). Seeing the generated file as an ordered labelled tree, you can use pq-grams (http://www.inf.unibz.it/~augsten/publ/tods10/ for the paper and Java code) to evaluate structural similarity (pq-grams are a tree generalization of string k-grams).
At this point, if you want, you can perform a hash-based comparison on the leaves containing text, or use k-grams for those leaves and structural pq-gram-based similarity for the rest.
A: A quick and dirty way would be to compute the Levenshtein distance of the markup.
http://en.wikipedia.org/wiki/Levenstein_distance
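A minimal C# sketch of the standard dynamic-programming version; dividing the distance by the longer string's length gives a rough percentage of difference:
using System;

static class Distance
{
    public static int Levenshtein(string a, string b)
    {
        var d = new int[a.Length + 1, b.Length + 1];
        for (int i = 0; i <= a.Length; i++) d[i, 0] = i; // deletions
        for (int j = 0; j <= b.Length; j++) d[0, j] = j; // insertions
        for (int i = 1; i <= a.Length; i++)
            for (int j = 1; j <= b.Length; j++)
            {
                int cost = a[i - 1] == b[j - 1] ? 0 : 1;  // substitution
                d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1),
                                   d[i - 1, j - 1] + cost);
            }
        return d[a.Length, b.Length];
    }
}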
A: For your task it would be enough to run a command-line diff utility and analyze the results.
Alternatively you could implement an LCS algorithm, but to me that would be overkill.
A:
For your task it would be enough to run a command-line diff utility and analyze the results.
This is not a one-time job really; I need a solution integrated into an application.
And diff has its own problems here, because I cannot tell diff to process 5 pages and ignore the bits that are constantly changing.
These parts can be big; it can be 2 KB of standard text that keeps changing. I think from diff's point of view that's a big change; however, from my point of view it's just a change to one section (which is known to change in all the other 9 files and therefore should be ignored entirely).
Maybe a diff library can do that but I'm not aware of such a library.
A: The basic algorithm I would use:
Parse the text content of the pages on both sides, the old and the new. As you parse, keep track of how many bytes you have processed, to be used later to determine what percentage has changed. Now that you have the complete story on each side, build up anchor points of sameness. For every anchor point of sameness you've got, try to expand it forward and backward. Identify any gaps between your sameness anchor points as differences. Loop through every difference gap you've identified and sum up their byte counts. Calculate your percentage of difference using the total difference byte count and the total bytes of the story (which you calculated earlier).
A: You can use jqgram, an implementation of PQ-Gram tree edit distance approximation to specifically solve this problem, but you'll need to run Node.js if you don't want to port to C#. The port should be pretty easy though... the algorithm isn't all that complex. Beauty in simplicity.
https://github.com/hoonto/jqgram
The example includes a DOM vs. cheerio comparison which shows how to deal with children and labels so as to generate the approximate tree edit distance. It gives you a number between zero and one as a result, so that is your percentage equality. But note that a value of zero doesn't necessarily indicate identical trees; it only means they are very similar. You could do DOM vs. DOM or cheerio vs. cheerio comparison easily enough too - or use the HTML parser that cheerio uses instead of worrying about using the entire library (cheerio out of the box is a rather fast server-side jQuery- and DOM-like implementation).
So obviously this solution is Node.js and browser javascript specific, but I think those challenges might be easier than porting to C#/.NET.
// This could probably be optimized significantly, but is a real-world
// example of how to use tree edit distance in the browser.
// For cheerio, you'll have to browserify,
// which requires some fiddling around
// due to cheerio's dynamically generated
// require's (good grief) that browserify
// does not see due to the static nature
// of its code analysis (dynamic off-line
// analysis is hard, but doable).
//
// Ultimately, the goal is to end up with
// something like this in the browser:
var cheerio = require('./lib/cheerio');
// The easy part, jqgram:
var jq = require("../jqgram").jqgram;
// Make a cheerio DOM:
var html = '<body><div id="a"><div class="c d"><span>Irrelevent text</span></div></div></body>';
var cheeriodom = cheerio.load(html, {
ignoreWhitespace: false,
lowerCaseTags: true
});
// For ease, let's assume you have jQuery loaded:
var realdom = $('body');
// The lfn and cfn functions allow you to specify
// how labels and children should be defined:
jq.distance({
root: cheeriodom,
lfn: function(node){
// We don't have to lowercase this because we already
// asked cheerio to do that for us above (lowerCaseTags).
return node.name;
},
cfn: function(node){
// Cheerio maintains attributes in the attribs array:
// We're going to put id's and classes in as children
// of nodes in our cheerio tree
var retarr = [];
if(!! node.attribs && !! node.attribs.class){
retarr = retarr.concat(node.attribs.class.split(' '));
}
if(!! node.attribs && !! node.attribs.id){
retarr.push(node.attribs.id);
}
retarr = retarr.concat(node.children);
return retarr;
}
},{
root: realdom,
lfn: function(node){
return node.nodeName.toLowerCase();
},
cfn: function(node){
var retarr = [];
if(!! node.attributes && !! node.attributes.class && !! node.attributes.class.nodeValue){
retarr = retarr.concat(node.attributes.class.nodeValue.split(' '));
}
if(!! node.attributes && !! node.attributes.id && !! node.attributes.id.nodeValue) {
retarr.push(node.attributes.id.nodeValue);
}
for(var i=0; i<node.children.length; ++i){
retarr.push(node.children[i]);
}
return retarr;
}
},{ p:2, q:3, depth:10 },
function(result) {
console.log(result.distance);
});
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Is there a Windows/MSVC equivalent to the -rpath linker flag? On Linux/GCC I can use the -rpath flag to change an executables search path for shared libraries without tempering with environment variables.
Can this also be accomplished on Windows? As far as I know, DLLs are always searched for in the executable's directory and in PATH.
My scenario: I would like to put shared libraries into locations according to their properties (32/64-bit, Debug/Release) without taking care of unique names. On Linux this is easily done via rpath, but I haven't found any way of doing this on Windows yet.
Thanks for any hints!
A: The search order for DLLs in Windows is described on this page on MSDN. If you're using run-time dynamic linking, you can specify the folder when you call LoadLibrary.
A: "Isolated applications" is a mechanism for embedding an XML manifest that describes the DLL dependencies.
A: Sadly there is no direct analogue to RPATH. There are a number of alternative possibilities, each of them most likely undesirable to you in its own special way.
Given that you need a different exe for each build flavor anyway to avoid runtime library clashes, as you might guess the easiest thing to do is to put each exe in the same folder as each set of DLLs.
As you also mentioned, the most universal method is to change the PATH variable by using a batch file to bootstrap the exe.
You could instead change the current working directory before running the program to the desired DLL folder.
You can use the function SetDllDirectory or AddDllDirectory inside your exe. This is probably the closest to an RPATH, but only works on WinXP SP1 or later.
If you're willing to alter the file name of each exe flavor, you can use the "App Paths" registry key. Each exe would need a unique filename.
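For the SetDllDirectory route, a minimal Win32 sketch (the folder layout and DLL name are assumptions):
#include <windows.h>

int main()
{
    // Add a flavor-specific folder to the DLL search path (WinXP SP1+).
    SetDllDirectoryW(L"C:\\myapp\\libs\\x64\\Debug");

    // Run-time dynamic linking picks the DLL up from that folder.
    HMODULE lib = LoadLibraryW(L"mylib.dll");
    if (!lib)
        return 1; // GetLastError() has the details

    // ... resolve entry points with GetProcAddress and use the library ...

    FreeLibrary(lib);
    return 0;
}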
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "53"
}
|
Q: How to create a zip file in the same format as the Finder's "Compress" menu item? On Mac OS X, you can create a zip archive from the Finder by selecting some files and selecting "Compress" from the contextual menu or the File menu. Unfortunately, the resulting file is not identical to the archive created by the zip command (with the default options).
This distinction matters to at least one service operated by Apple, which fails to accept archives created with the zip command. Having to create archives manually is preventing me from fully automating my release build process.
How can I create a zip archive in the correct format within a shell script?
EDIT: Since writing this question long ago, I've figured out that the key difference between ditto and zip is how they handle symbolic links: because the code signature inside an app bundle contains a symlink, it needs to be preserved as a link and not stored as a regular file. ditto does this by default, but zip does not (option -y is required).
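In other words, something like the following should behave equivalently with respect to symlinks (a sketch; the bundle name is an example):
# Finder-compatible archive via ditto:
ditto -c -k --sequesterRsrc --keepParent MyApp.app MyApp.zip
# Roughly equivalent with zip; -y stores symlinks as links:
zip -r -y MyApp.zip MyApp.app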
A: I have a ruby script that makes iPhone App Store builds for me, but the zips it was generating wouldn't get accepted by iTunes Connect. They were accepted if I used Finder's "Compress" function.
millenomi's answer came close for me, but this command is what ended up working. iTunes Connect accepted my build, and the app got approved and can be downloaded no problem, so it's tested.
ditto -c -k --sequesterRsrc --keepParent AppName.app AppName.zip
A: Use the ditto command-line tool as follows:
ditto -ck --rsrc --sequesterRsrc folder file.zip
See the ditto man page for more.
A: man ditto states:
The command:
ditto -c -k --sequesterRsrc --keepParent src_directory archive.zip
will create a PKZip archive similarly to the Finder's Compress functionality.
notice --keepParent
A: The clue's in the tag 'automation'.
Create an action in Automator.app that uses the 'Create Archive' action, invoke it from the command-line (see 'automator').
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
}
|
Q: Is it feasible to introduce Test Driven Development (TDD) in a mature project?
*
*Say we have realized the value of TDD too late. The project is already mature, and a good number of customers have started using it.
*Say the automated tests used are mostly functional/system tests, and there is a good deal of automated GUI testing.
*Say we have new feature requests and new bug reports (!), so a good deal of development still goes on.
*Note there would already be plenty of business objects with little or no unit testing.
*There is too much collaboration/relationships between them, which again is tested only through higher-level functional/system tests. No integration testing per se.
*Big databases are in place with plenty of tables, views, etc. Just instantiating a single business object already takes a good deal of database round trips.
How can we introduce TDD at this stage?
Mocking seems to be the way to go, but the amount of mocking we need to do here seems like too much. It sounds like an elaborate infrastructure would need to be developed for the mocking to work with the existing stuff (BOs, databases, etc.).
Does that mean TDD is a suitable methodology only when starting from scratch? I am interested to hear about the feasible strategies to introduce TDD in an already mature product.
A: Yes you can. From your description the project is in good shape - a solid amount of functional test automation is the way to go! In some respects it's even more useful than unit testing. Remember that TDD != unit testing; it's all about short iterations and solid acceptance criteria.
Please remember that having an existing and accepted project actually makes testing easier - a working application is the best requirements specification. So you're in a better position than someone who just has a scrap of paper to work with.
Just start working on your new requirements/bug fixes with TDD. Remember that there will be an overhead associated with switching the methodology (make sure your clients are aware of this!), and you should probably expect a good deal of reluctance from the team members who are used to the 'good old ways'.
Don't touch the old things unless you need to. If you will have an enhancement request which will affect existing stuff then factor in extra time for doing the extra set-up things.
Personally I don't see much value in introducing a complex infrastructure for mocks - surely there is a way to achieve the same results in a lightweight mode, but it obviously depends on your circumstances.
A: One tool that can help you test legacy code (assuming you can't/won't have the time to refactor it) is Typemock Isolator: Typemock.com
It allows injecting dependencies into existing code without needing to extract interfaces and such, because it does not use standard reflection techniques (dynamic proxy, etc.) but uses the profiler APIs instead.
It's been used to test apps that rely on SharePoint, HTTPContext and other problematic areas.
I recommend you take a look.
(I work as a dev in that company, but it is the only tool that does not force you to refactor existing legacy code, saving you time and money)
I would also highly recommend "Working effectively with legacy code" for more techniques.
Roy
A: Yes you can. Don't do it all at once, but introduce just what you need to test a module whenever you touch it.
You can also start with more high level acceptance tests and work your way down from there (take a look at Fitnesse for this).
A: Creating a complex mocking infrastructure will probably just hide the problems in your code. I would recommend that you start with integration tests, with a test database, around the areas of the code base that you plan to change. Once you have enough tests to ensure that you won't break anything if you make a change, you can start to refactor the code to make it more testable.
See also Michael Feathers' excellent book Working Effectively with Legacy Code; it's a must-read for anyone thinking of introducing TDD into a legacy code base.
A: I would start with some basic integration tests. This will get buy-in from the rest of the staff. Then start to separate the parts of your code which have dependencies. Work towards using Dependency Injection as it will make your code much more testable. Treat bugs as an opportunity to write testable code.
A: I think its completely feasible to introduce TDD into an existing application, in fact I have recently done it myself.
It is easiest to code new functionality in a TDD way and restructure the existing code to accommodate it. This way you start off with a small section of your code tested, but the effects start to spread through the whole code base.
If you've got a bug, then write a unit test to reproduce it, refactoring the code as necessary (unless the effort is really not worth it).
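For instance, a minimal sketch of such a bug-pinning test (NUnit; the class under test and the numbers are hypothetical):
using NUnit.Framework;

[TestFixture]
public class InvoiceCalculatorBehavior
{
    [Test]
    public void TotalAppliesDiscountOnlyOnce()
    {
        // Hypothetical bug report: a 10% discount was applied twice.
        var calculator = new InvoiceCalculator();
        decimal total = calculator.Total(200m, 10m);
        Assert.AreEqual(180m, total); // fails until the bug is fixed
    }
}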
Personally, I don't think there's any need to go crazy and try and retrofit tests into the existing system as that can be very tedious without a great amount of benefit.
In summary, start small and your project will become more and more test infected.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
}
|
Q: How to add some non-standard font to a website? Is there a way to add some custom font on a website without using images, Flash or some other graphics?
For example, I was working on a wedding website, and I found a lot of nice fonts for that subject. But I can't find the right way to add that font to the server. And how do I include that font with CSS in the HTML? Is this possible to do without graphics?
A: The way to go is using the @font-face CSS declaration which allows authors to specify online fonts to display text on their web pages. By allowing authors to provide their own fonts, @font-face eliminates the need to depend on the limited number of fonts users have installed on their computers.
There are several formats that you need to know about, mainly due to cross-browser compatibility (each browser generation supports a different subset - EOT, TTF, WOFF, SVG). The scenario on mobile devices isn't much different.
Solutions:
1 - Full browser compatibility
This is the method with the deepest support possible right now:
@font-face {
font-family: 'MyWebFont';
src: url('webfont.eot'); /* IE9 Compat Modes */
src: url('webfont.eot?#iefix') format('embedded-opentype'), /* IE6-IE8 */
url('webfont.woff') format('woff'), /* Modern Browsers */
url('webfont.ttf') format('truetype'), /* Safari, Android, iOS */
url('webfont.svg#svgFontName') format('svg'); /* Legacy iOS */
}
2 - Most of the browser
Things are shifting heavily toward WOFF though, so you can probably get away with:
@font-face {
font-family: 'MyWebFont';
src: url('myfont.woff') format('woff'), /* Chrome 6+, Firefox 3.6+, IE 9+, Safari 5.1+ */
url('myfont.ttf') format('truetype'); /* Chrome 4+, Firefox 3.5, Opera 10+, Safari 3—5 */
}
3 - Only the latest browsers
Or even just WOFF.
You then use it like this:
body {
font-family: 'MyWebFont', Fallback, sans-serif;
}
References and Further reading:
That's mainly what you need to know about implementing this feature. If you want to research the subject further, I encourage you to take a look at the following resources, from which most of what I put here is extracted:
*
*Using Font Face (Very recommended)
*Bulletproof @font-face syntax
*Can I use @font-face web fonts?
*How to achieve cross-browser @font-face support
*@font-face at the CSS Mozilla Developer Network
A: Or you could try sIFR. I know it uses Flash, but only if available. If Flash isn't available, it displays the original text in its original (CSS) font.
A: This could be done via CSS:
<style type="text/css">
@font-face {
font-family: "My Custom Font";
src: url(http://www.example.org/mycustomfont.ttf) format("truetype");
}
p.customfont {
font-family: "My Custom Font", Verdana, Tahoma;
}
</style>
<p class="customfont">Hello world!</p>
It is supported for all of the regular browsers if you use TrueType-Fonts (TTF), the Web Open Font Format (WOFF) or Embedded Opentype (EOT).
A: The technique that the W3C has recommended for doing this is called "embedding" and is well described by the three articles here: Embedding Fonts. In my limited experiments, I have found the process error-prone and have had limited success in making it function in a multi-browser environment.
A: Safari and Internet Explorer both support the CSS @font-face rule; however, they support two different embedded font types. Firefox is planning to support the same type as Apple some time soon. SVG can embed fonts but isn't that widely supported yet (without a plugin).
I think the most portable solution I've seen is to use a JavaScript function to replace headings etc. with an image generated and cached on the server with your font of choice -- that way you simply update the text and don't have to stuff around in Photoshop.
A: If you use ASP.NET, it's really easy to generate image-based fonts without actually having to install fonts on the server (as in adding to the installed font base) by using:
// Requires: using System.Drawing; using System.Drawing.Text;
PrivateFontCollection pfont = new PrivateFontCollection();
pfont.AddFontFile(filename);       // load a .ttf from disk without installing it
FontFamily ff = pfont.Families[0]; // first (and usually only) family in the file
and then drawing with that font onto a Graphics.
A: It looks like it only works in Internet Explorer, but a quick Google search for "html embed fonts" yields http://www.spoono.com/html/tutorials/tutorial.php?id=19
If you want to stay platform-agnostic (and you should!) you'll have to use images, or else just use a standard font.
A: I did a bit of research and dug up Dynamic Text Replacement (published 2004-06-15).
This technique uses images, but it appears to be "hands free". You write your text, and you let a few automated scripts do automated find-and-replace on the page for you on the fly.
It has some limitations, but it is probably one of the easier choices (and more browser compatible) than all the rest I've seen.
A: It is also possible to use WOFF fonts - example here
@font-face {
font-family: 'Plakat Fraktur';
src: url('/resources/fonts/plakat-fraktur-black-modified.woff') format('woff');
font-weight: bold;
font-style: normal;
}
A: Just provide the link to the actual font like this and you will be good to go:
<!DOCTYPE html>
<html>
<head>
<link href='https://fonts.googleapis.com/css?family=Montserrat' rel='stylesheet'>
<style>
body {
font-family: 'Montserrat';font-size: 22px;
}
</style>
</head>
<body>
<h1>Montserrat</h1>
<p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit.</p>
</body>
</html>
A: See the article 50 Useful Design Tools For Beautiful Web Typography for alternative methods.
I have only used Cufon. I have found it reliable and very easy to use, so I've stuck with it.
A: Typeface.js JavaScript Way:
With typeface.js you can embed custom fonts in your web pages so you don't have to render text to images.
Instead of creating images or using Flash just to show your site's graphic text in the font you want, you can use typeface.js and write in plain HTML and CSS, just as if your visitors had the font installed locally.
http://typeface.neocracy.org/
A: If you have a file of your font, then you will need to add more formats of that font for other browsers.
For this purpose I use a font generator like Font Squirrel: it provides all the font formats and the @font-face CSS; you will only need to drag and drop it into your CSS file.
A: If by non-standard font you mean a custom font in a standard format, here's how I do it, and it works for all browsers I've checked so far:
@font-face {
font-family: TempestaSevenCondensed;
src: url("../fonts/pf_tempesta_seven_condensed.eot") /* EOT file for IE */
}
@font-face {
font-family: TempestaSevenCondensed;
src: url("../fonts/pf_tempesta_seven_condensed.ttf") /* TTF file for CSS3 browsers */
}
so you'll just need both the TTF and EOT fonts. Some tools available online can make the conversion.
But if you want to attach a font in a non-standard format (bitmaps, etc.), I can't help you.
A: You can add some fonts via Google Web Fonts.
Technically, the fonts are hosted at Google and you link them in the HTML header. Then, you can use them freely in CSS with @font-face (read about it).
For example:
In the <head> section:
<link href='http://fonts.googleapis.com/css?family=Droid+Sans' rel='stylesheet' type='text/css'>
Then in CSS:
h1 { font-family: 'Droid Sans', arial, serif; }
The solution seems quite reliable (even Smashing Magazine uses it for an article title.). There are, however, not so many fonts available so far in Google Font Directory.
A: I've found that the easiest way to have non-standard fonts on a website is to use sIFR
It does involve the use of a Flash object that contains the font, but it degrades nicely to standard text / font if Flash is not installed.
The style is set in your CSS, and JavaScript sets up the Flash replacement for your text.
Edit: (I still recommend using images for non-standard fonts as sIFR adds time to a project and can require maintenance).
A: The article Font-face in IE: Making Web Fonts Work says it works with all three major browsers.
Here is a sample I got working: http://brendanjerwin.com/test_font.html
More discussion is in Embedding Fonts.
A: Typeface.js and Cufon are two other interesting options. They are JavaScript components that render special font data in JSON format (which you can convert from TrueType or OpenType formats on their web sites) via the new <canvas> element in all newer browsers except Internet Explorer and via VML in Internet Explorer.
The main problem with both (as of now) is that selecting text does not work or at least works only quite awkwardly.
Still, it is very nice for headlines. Body text... I don't know.
And it's surprisingly fast.
A: @font-face {
font-family: "CustomFont";
src: url("CustomFont.eot");
src: url("CustomFont.woff") format("woff"),
url("CustomFont.otf") format("opentype"),
url("CustomFont.svg#filename") format("svg");
}
A: An easy solution is to use @font-face in CSS:
@font-face {
    font-family: MyFirstFont;
    src: url(fileLocation);
}
div {
    font-family: MyFirstFont;
}
A: You can use @import url(...) to import web fonts; replace the inner URL with the font source (a full web URL).
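For example (the Google-hosted URL here is just an illustration):
@import url('http://fonts.googleapis.com/css?family=Droid+Sans');
body { font-family: 'Droid Sans', sans-serif; }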
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "571"
}
|
Q: CSS list-style: none; still shows bullet I am using YUI reset/base; after the reset it sets the ul and li tags to list-style: disc outside;
My markup looks like this:
<div id="nav">
<ul class="links">
<li><a href="">Testing</a></li>
</ul>
</div>
My CSS is:
#nav {}
#nav ul li {
list-style: none;
}
Now that makes the small disc beside each li disappear.
Why doesn't this work though?
#nav {}
#nav ul.links
{
list-style: none;
}
It works if I remove the link to the base.css file - why?
Updated: sidenav -> nav
A: I think that Dan was close with his answer, but this isn't an issue of specificity. You can set the list-style on the list (the UL) but you can also override that list-style for individual list items (the LIs).
You are telling the browser to not use bullets on the list, but YUI tells the browser to use them on individual list items (YUI wins):
ul li{ list-style: disc outside; } /* in YUI base.css */
#nav ul.links {
list-style: none; /* doesn't override styles for LIs, just the UL */
}
What you want is to tell the browser not to use them on the list items:
ul li{ list-style: disc outside; } /* in YUI base.css */
#nav ul.links li {
list-style: none;
}
A: In the first snippet you apply the list-style to the li element, in the second to the ul element.
Try
#nav ul.links li
{
list-style: none;
}
A: The latter example probably doesn't work because of CSS specificity. (A more serious explanation can be found here.) That is, YUI's base.css rule is:
ul li{ list-style: disc outside; }
This is more 'specific' than yours, so the YUI rule is being used. As has been noted several times, you can make your rule more specific by targeting the li tags:
#nav ul li{ list-style: none; }
Hard to say for sure without looking at your code, but if you don't know about specificity it's certainly worth a read.
A: shouldn't it be:
#nav ul.links
A: Maybe base.css overrides your styles with "!important"? Did you try adding a class to this specific li and giving it its own style?
A: Use this one:
.nav ul li {
list-style: none;
}
or
.links li {
list-style: none;
}
it should work...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: A Firefox javascript bookmarking problem I'm using the following JavaScript code:
<script language="JavaScript1.2" type="text/javascript">
function CreateBookmarkLink(title, url) {
if (window.sidebar) {
window.sidebar.addPanel(title, url,"");
} else if( window.external ) {
window.external.AddFavorite( url, title); }
else if(window.opera && window.print) {
return true; }
}
</script>
This will create a bookmark for Firefox and IE. But the link for Firefox will show up in the sidepanel of the browser, instead of being displayed in the main screen. I personally find this very annoying and am looking for a better solution. It is of course possible to edit the bookmark manually to have it not show up in the side panel, but that requires extra steps. I just want to be able to have people bookmark a page (that has a lot of GET information in the URL which is used to build a certain scheme) the easy way.
I'm afraid that it might not be possible to have Firefox present the page in the main screen at all (as Googling this subject resulted in practically nothing worth using), but I might have missed something. If anyone has an idea if this is possible, or if there's a workaround, I'd love to hear about it.
A: For Firefox there is no need to use any JavaScript to bookmark a page; an anchor tag with a title and rel="sidebar" provides this functionality:
<a href="http://www.google.com" title="Google" rel="sidebar">Bookmark This Page</a>
I have tested it on FF9 and it's working fine.
When you click on the link, Firefox will open a New Bookmark dialog box; if you don't want this bookmark loaded in the sidebar, un-check "Load this bookmark in the sidebar" in the dialog.
A: I think that's the only solution for Firefox... I have a better function for that action, it works even for Opera and shows a message for other "unsupported" browsers.
<script type="text/javascript">
function addBookmark(url,name){
if(window.sidebar && window.sidebar.addPanel) {
window.sidebar.addPanel(name,url,''); //obsolete from FF 23.
} else if(window.opera && window.print) {
var e=document.createElement('a');
e.setAttribute('href',url);
e.setAttribute('title',name);
e.setAttribute('rel','sidebar');
e.click();
} else if(window.external) {
try {
window.external.AddFavorite(url,name);
}
catch(e){}
}
else
alert("To add our website to your bookmarks use CTRL+D on Windows and Linux and Command+D on the Mac.");
}
</script>
A: You have a special case for
if (window.sidebar)
and then a branch for 'else' - wouldn't Firefox land in the first branch and hence only add the panel?
A: Hojou,
It seems that is the only way to add a bookmark for Firefox. So FF needs to land in the first branch to have anything happening at all. I Googled some more but I'm really getting the idea this is impossible to properly address in FF...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/107971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|