Q: How to find Commit Charge programmatically? I'm looking for the total commit charge.
A: public static long GetCommitCharge()
{
    // "Committed Bytes" in the "Memory" category is the system-wide commit charge, in bytes
    var p = new System.Diagnostics.PerformanceCounter("Memory", "Committed Bytes");
    return p.RawValue;
}
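For example, a quick check from a console application (using the method above; the unit conversion is only for readability):
static void Main()
{
    // The counter reports bytes; convert to MB for display
    System.Console.WriteLine("Commit charge: {0} MB", GetCommitCharge() / (1024 * 1024));
}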
A: Here's an example using WMI:
strComputer = "."
Set objSWbemServices = GetObject("winmgmts:\\" & strComputer)
Set colSWbemObjectSet = _
objSWbemServices.InstancesOf("Win32_LogicalMemoryConfiguration")
For Each objSWbemObject In colSWbemObjectSet
Wscript.Echo "Total Physical Memory (kb): " & _
objSWbemObject.TotalPhysicalMemory
WScript.Echo "Total Virtual Memory (kb): " & _
objSWbemObject.TotalVirtualMemory
WScript.Echo "Total Page File Space (kb): " & _
objSWbemObject.TotalPageFileSpace
Next
If you run this script under CScript, you should see the number of kilobytes of physical memory installed on the target computer displayed in the command window. The following is typical output from the script:
Total Physical Memory (kb): 261676
Edit: Included total page file size property also
taken from: http://www.microsoft.com/technet/scriptcenter/guide/sas_wmi_dieu.mspx?mfr=true
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/95850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: What is best practice for large file transfer - SFTP or asymmetric file encryption? Which is generally considered "best practice" when wanting to securely transmit flat files over the wire? Asymmetric encryption seems to be a pain in that you have to manage keysets at endpoints and make sure that the same algorithm is used by all clients, where as SFTP seems to be a pain because of NAT issues with encrypting the control channel, thus the router cannot translate IP. Is there a third-party solution that is highly recommended?
A: I believe you're talking about FTP with SSL when you say SFTP, and not the SFTP protocol that goes along with SSH. Use SFTP (the SSH version) as it doesn't require an encrypted control channel and will work fine over NAT. The SFTP page I linked to lists a number of graphical SFTP clients at the bottom of the page.
A: rsync is the best file-transfer utility out there. It supports resume, recursion, and a variety of encrypted transports, including ssh (the default). It's like scp on steroids.
If you have multiple routers to punch through you can build ssh tunnels. It transfers only the parts of a file that are missing, which makes it great for backups. It has so many useful features that I use it instead of cp for local copying.
It's available for many platforms and included by default on modern *nix systems. More at http://samba.anu.edu.au/rsync/
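For example, a typical invocation over ssh might look like this (the paths and host are placeholders):
# archive mode, verbose, compressed, keep partial transfers so they can be resumed
rsync -avz --partial -e ssh /local/dir/ user@remote:/backup/dir/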
A: Use PGP / GPG and transfer the gpg-ed file directly via ftp or any other method.
A: Yah, I meant SSL FTP, not SFTP. "Management" is averse to open source, but if that's what the de-facto best practice is, then that's what to use... thanks for the answers.
A: With FTPS, you can generally switch to an unencrypted control channel via the CCC command after authentication. This approach means no problems with routers, while the data you are transferring will remain encrypted.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/95852",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to dynamically change the "src" or "data" for a PDF Object / Embed file using JavaScript? I have a web application that is dynamically loading PDF files for viewing in the browser.
Currently, it uses "innerHTML" to replace a div with the PDF Object. This works.
But, is there a better way to get the ID of the element and set the "src" or "data" parameter for the Object / Embed and have it instantly load up a new document?
I'm hoping the instance of Adobe Acrobat Reader will stay on the screen, but the new document will load into it.
Here is a JavaScript example of the object:
document.getElementById('divPDF').innerHTML = '<object id="objPDF" data="' + strFilename + '" type="application/pdf" title="IMAGING" width="100%" height="100%"></object>';
Any insight is appreciated.
A: I am not sure if this will work, as I have not tried this out in my projects.
(Looking at your JS, I believe you are using jQuery. If not, please correct me)
Once you have populated the divPDF with the object you might try the code below:
$("objPDF").attr({
data: "dir/to/newPDF"
});
Again, I am not sure if this will work for your particular needs but if you attach this code to an event handler you can switch out the data of the object.
You could also wrap it in a function to be used over and over again:
function pdfLoad(dirToPDF) {
$("objPDF").attr({
data: dirToPDF
});
}
A: If the handler for the PDF is acrobat (it doesn't have to be), it exposes a JS interface that is documented here:
http://www.adobe.com/devnet/acrobat/pdfs/js_api_reference.pdf
See if you can call openDoc(urlToPdf) on document.getElementById('objPDF') -- even if this works, it only works when Acrobat is being used to handle 'application/pdf'
A: @lark
A slight correction:
$('#objPDF').attr('data','dirToPDF');
The # specifies the objPDF is an ID and not an element name. Though I still don't know if this will work.
@Tristan
Take a look at the jQuery Media plugin. It mentions support for PDF as well, though I have never used it.
A: Open a PDF link in an external window PDFN with an external PDF-Reader.EXE:
Clicking on the following button:
<FORM action="">
<INPUT type="button" value="PDF file"
onclick="window.open('http://www.Dku-betrieb.eu/Pdfn.html',
'PDFN', 'width=620, height=630')">
</FORM>
opens this frameset Pdfn.html in an external window:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN" "http://www.w3.org/TR/html4/frameset.dtd">
<html lang="de">
<meta http-equiv="refresh" content="12;url=http://www.dku-betrieb.eu/Pdfn1.html">
<head>
<title>Reader</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
</head>
<frameset>
<frame src="http://www.dku-betrieb.eu/File.pdf" frameborder=0 name="p1">
</frameset>
</HTML>
which refreshes in 12 seconds to the download of the PDF-Reader:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN" "http://www.w3.org/TR/html4/frameset.dtd">
<html lang="de">
<head>
<title>Reader</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
</head>
<frameset >
<frame src="http://www.dku-betrieb.eu/PDFReader.exe" frameborder=0 name="p2">
</frameset>
</HTML>
showing as result the PDF-file in the external window PDFN.
A: function pdfLoad(datasrc) {
var x = document.getElementById('objPDF');
x.data = datasrc;
}
This worked for me
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/95858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Is there any place a developer can go besides Google to learn what it is they need to learn? I'm not really asking about how programmers learn how to program. More about specific technologies. If I wanted to learn Hibernate, how do I know what I should know prior to Hibernate? Should I learn JPA before, during or after Hibernate? Is there a better solution to Hibernate? (And I'm not really looking for information on Hibernate specifically)
Maybe stackoverflow is the place to find these answers, but it seems like with the sheer vastness of frameworks, APIs, libraries, programming languages, platforms, and whatever other techie word you want to use, it takes an extremely long time to come up to speed on what technology to use, when, and what you need to know prior to using it.
A: I use Wikipedia to compare various technologies for completing a task, although it can be incomplete with regards to commercial closed-source frameworks (probably because fewer people have access to them).
A: Sometimes the best way to learn is to just dig in to a framework. Sure, you could use someone's wrapper API around something, but then if there were something wrong with Hibernate, you wouldn't know what's happening.
And to answer "how do I know what I should know prior to Hibernate": you don't; that's why you are learning. When I was learning C++, I started out with simple data types. I didn't know about pointers yet and didn't need to, but I learned about them when I got there. You just have to jump in and start playing around.
A: For specific technologies such as Hibernate, Java, JPA, LDAP (OpenLDAP in particular), Log4J, anything Apache: they all have wikis and/or forums associated with the product that are usually more helpful than a Google search for learning. Many even come with tutorials and you should try them.
A: Find a book on the subject and read it. Then email the author with additional questions. Most of these authors are more than happy to help especially if you've bought and read the materials they worked so hard to produce.
If that's still not enough for you, go to a conference covering the subject, if you can make it. Again you can meet many of the people responsible for maintaining and/or creating these technologies and I've found they are always willing to answer questions.
A: Go to sites like Coding Horror, Slashdot, Techcrunch etc. and find out what people are talking about. Usually if something is popular, it's probably something you might want to take a look at.
A: There are these things called "books" that are filled with all kinds of knowledge.
A: A lot of the time the documentation and/or tutorial for any technology or project will mention what prior knowledge is assumed or useful.
So for example hibernate: http://www.hibernate.org/hib_docs/v3/reference/en/html_single/#tutorial-intro
"This tutorial is intended for new users of Hibernate but requires Java and SQL knowledge"
A: For me, the things that have helped my career and taught me what questions to ask are:
* Podcasts -- .NET Rocks, etc., which introduce and discuss new technologies and put them in context
* Join your local users group, and stick around after the presentation to talk shop with the folks there; you can learn a lot just by hearing what other people are doing and what they are working on learning next
A: Just look around online and start trying to use whatever tool/technology you're trying to learn. As you try to learn one thing, you'll realize you're lacking knowledge in other needed areas, at which point you can repeat the process of looking around for this new item you need to learn.
For example, maybe you want to learn Rails, so you start following Rails tutorials, but you realize you suck at Ruby. So then you start to focus a bit more on the details of Ruby, then come back to Rails with a little more knowledge and continue on till the next roadblock. This isn't really totally correct, but you get the idea.
You won't always find a full guide on how to use everything. Just give it a shot and work it out on your own if you have the time.
A: There is an infinite number of things one could learn. Maybe a better approach would be to think of a project that interests you, or join an open source one, and then learn what you need to know to accomplish what is needed in that project. When you're done, pick a new project that might include new things not learned in the last project.
A: As far as free sources are concerned, as a .NET programmer I like www.asp.net, and there are many others, such as the ASP.NET quickstart tutorials at http://quickstarts.asp.net/QuickStartv20/default.aspx, C-SharpCorner is good, too, if you don't mind C#.
If you don't object to paying a little money, Lynda.com is a decent place. They have OK tutorials on all kinds of things, not just programming, and I got a decent grounding in Javascript using one of their tutorials. They are adding new things all the time, so if they don't have something on Hibernate now, they may later on. I think their basic rate is $25 per month, but you can just pay for one month and then soak up as many courses as you can find time for.
A: Asking a more specific question will get you a more specific answer here. When I want to read up on something I usually head to Wikipedia and then Google.
A: The truth is none of us have the time to read everything we'd like to. So I let someone else do it for me!
The way I solve this is by speed-reading the web - i.e., subscribing to and reading other people's blogs.
Every time I come across something I'm not familiar with, I google it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/95862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Best-performance query for "select max in group"? I have a simple table comments (id INT, revision INT, comment VARCHAR(140)) with some content like this:
1|1|hallo1|
1|2|hallo2|
1|3|hallo3|
2|1|hallo1|
2|2|hallo2|
I'm searching for an SQL statement which will return each comment with the highest revision:
1|3|hallo3|
2|2|hallo2|
I've come up with this solution:
select id, revision, comment
from comments
where revision = (
select max(revision)
from comments as f
where f.id = comments.id
);
but it is very slow on large data sets. Are there any better queries to accomplish this?
A: * Make sure you have your indexes set up appropriately. Indexing on id, revision would be good.
* Here's a different take on your query. I haven't checked its execution plan, but if you set up the index well it should help:
SELECT c.*
FROM comments c
INNER JOIN (
SELECT id,max(revision) AS maxrev
FROM comments
GROUP BY id
) b
ON c.id=b.id AND c.revision=b.maxrev
Edited to add:
If you're on SQL Server, you might want to check out Indexed Views as well:
http://www.microsoft.com/technet/prodtechnol/sql/2005/impprfiv.mspx
Edited again to add info:
Subquery:
25157 records
2 seconds
Execution plan includes an Index Seek (82%) base and a Segment (17%)
Left Outer Join:
25160 records
3 seconds
Execution plan includes two Index Scans @ 22% each with a Right Outer Merge at 45% and a Filter at 11%
I'd still go with the sub query.
A: Tested with one of our tables that has nearly 1 million rows total. Indexes exist on both fields FIELD2 AND FIELD3. Query returned 83953 rows in under 3 seconds on our dev box.
select
FIELD1, FIELD2, FIELD3
from
OURTABLE T1 (nolock)
WHERE FIELD3 =
(
SELECT MAX(FIELD3) FROM
OURTABLE T2 (nolock)
WHERE T1.FIELD2=T2.FIELD2
)
ORDER BY FIELD2 DESC
A: Here's one way that with appropriate indexing will not be heinously slow and it doesn't use a subselect:
SELECT comments.ID, comments.revision, comments.comment FROM comments
LEFT OUTER JOIN comments AS maxcomments
ON maxcomments.ID= comments.ID
AND maxcomments.revision > comments.revision
WHERE maxcomments.revision IS NULL
Adapted from queries here:
http://www.xaprb.com/blog/2007/03/14/how-to-find-the-max-row-per-group-in-sql-without-subqueries/
(From google search: max group by sql)
A: Analytics would be my recommendation.
select id, max_revision, comment
from (select c.id, c.comment, c.revision, max(c.revision)over(partition by c.id) as max_revision
from comments c)
where revision = max_revision;
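On databases that support window functions more generally, ROW_NUMBER gives a similar single-pass formulation (a sketch; table and column names are taken from the question):
SELECT id, revision, comment
FROM (
    SELECT id, revision, comment,
           ROW_NUMBER() OVER (PARTITION BY id ORDER BY revision DESC) AS rn
    FROM comments
) ranked
WHERE rn = 1;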
A: Idea from left field, but what about adding an extra field to the table:
CurrentRevision bit not null
Then when you make a change, set the flag on the new revision and remove it on all previous ones.
Your query would then simply become:
select Id,
Comment
from Comments
where CurrentRevision = 1
This would be much easier on the database and therefore much faster.
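For illustration, the bookkeeping when saving a new revision could look like this (a sketch; the @-parameters are placeholders):
-- Clear the flag on earlier revisions, then insert the new current one
UPDATE comments SET CurrentRevision = 0 WHERE id = @id;
INSERT INTO comments (id, revision, comment, CurrentRevision)
VALUES (@id, @newRevision, @comment, 1);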
A: One quite clean way to do "latest x by id" type queries is this. It should also be quite easy to index properly.
SELECT id, revision, comment
FROM comments
WHERE (id, revision) IN (
SELECT id, MAX(revision)
FROM comments
-- WHERE clause comes here if needed
GROUP BY id
)
A: For big tables I find that this solution can have better performance:
SELECT c1.id,
c1.revision,
c1.comment
FROM comments c1
INNER JOIN ( SELECT id,
max(revision) AS max_revision
FROM comments
GROUP BY id ) c2
ON c1.id = c2.id
AND c1.revision = c2.max_revision
A: Without subselects (or temporary tables):
SELECT c1.ID, c1.revision, c1.comment
FROM comments AS c1
LEFT JOIN comments AS c2
ON c1.ID = c2.ID
AND c1.revision < c2.revision
WHERE c2.revision IS NULL
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/95866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: Best web front-end for SVN? I'm researching SVN repository browsers, and it's a tiresome task given how many are out there (I started here)
The "ideal" system would
* Run on Linux
* Be easy to use, even for non-developer types
* Look nice (have a decent skin)
* Either have built-in access control, or be written in PHP so that I could hack it myself to hook it up to something like an LDAP server.
Basically, I'm researching the idea of using a SVN front-end as also a delivery system for assets to other employees (think account executives, project managers, etc.) who need read-only access and are not as technically minded so it needs to be easy to use/navigate. And I'd really need to be able to set read permissions on a per-folder basis - we can't have everyone with full read access to the entire repository.
A: redmine is what we're using at work.
It's similar to trac, but offers multiple project capability. The browser's decent, allowing role based permissions on each project, and each project is based on a subtree of the repository.
Also lets you browse other repository types, has a file store for publishing files and a wiki - all of which can be disabled or enabled on a per-project basis.
A: WebSVN? It's written in PHP, lightweight, and simple. Check out the demo.
A: The trunk development version (set to become version 1.1) of ViewVC supports access control. ViewVC is featureful as a repository viewer, and intuitive to use, without any unnecessary extras.
A: sventon looks very interesting. It is a servlet/jsp solution written on top of the svnkit Java library. It can act as a true client, so it does not need direct access to the repository (as ViewVC does, for example). It can use the access control of the repository itself.
A: Trac (http://trac.edgewall.org/). It's not wonderful, but from what I've seen, for SVN it's the best.
With Access control to boot.
I managed to set up a rig with even per-directory permissions for various trac users (directories they couldn't read just didn't appear).
Been a while tho.
Default Skin looks pretty good, and is highly tunable.
Comes with a wiki & bug tracker, which you can disable if you want.
A: I'm not employed by Atlassian, and Fisheye is great. I think adding in Crucible makes it a real win. (In the past I have used WebSVN and found that to be OK.) I don't really like the ViewVC interface. There's something about it that makes it harder for me to grok the changes; I don't know what.
A: Atlassian Fisheye http://www.atlassian.com/software/fisheye/ is a commercial one that I can't live without!
(full disclosure...I am employed by Atlassian, but I say without bias that Fisheye is the best one out there that I've used)
A: We used ViewVC for browsing both CVS and SVN repositories but since we switched to FishEye we finally have a really good solution for code browsing and examination.
We also use other Atlassian products (Jira and Confluence) and integration between all of them is just marvelous!
PS. I'm not an Atlassian employee :)
A: http://beanstalkapp.com/ will host your repository and make it navigable at the same time.
A: You should have a look at http://www.groowiki.com
We plan to have the access control features you miss; it is on the roadmap. We also target search and workflow support, and right now you can add descriptions to files and directories using Radeox, and you can write plugins in Java or Groovy.
And yes, I am affiliated with Groowiki. I wrote it and plan to develop it further. It is both free AND commercial: only a very few features are closed (not open) source, so that big companies needing those specific features support its development.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/95868",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
}
|
Q: How to check if a variable is loaded in JavaScript? How do I see if a certain object has been loaded, and if not, how can it be loaded, like the following?
if (!isObjectLoaded(someVar)) {
someVar= loadObject();
}
A: myObject = myObject || loadObject();
A: If it is an object then you should just be able to check to see if it is null or undefined and then load it if it is.
if (myObject === null || myObject === undefined) {
myObject = loadObject();
}
Using the typeof operator is also an option, as it yields the type of the value provided. However, it will yield "undefined" if the object has not been loaded, so it might boil down a bit to personal preference in regards to readability.
A: I'm not sure what you mean by "loaded"... does the variable object exist and simply doesn't have the type you want? In that case, you'll want something like:
function isObjectType(obj, type) {
return !!(obj && type && type.prototype && obj.constructor == type.prototype.constructor);
}
and then use if (isObjectType(object, MyType)) { object = loadObject(); }.
If object is not populated with anything before your test (ie - typeof object === 'undefined') then you just need:
if ('undefined' === typeof object) { object = loadObject(); }
A: You probably want to see if a given object is defined, especially if it's created in an asynchronous fashion, with a setTimeout/setInterval to check when it turns up.
var generate = function()
{
window.foo = {};
};
var i = 0;
var detect = function()
{
if( typeof window.foo != "undefined" )
{
alert( "Created!");
clearInterval( i );
}
};
setTimeout( generate, 15000 );
i = setInterval( detect, 100 );
This should in theory detect when window.foo comes into existence.
A: If by loaded you mean defined, you can check the type of the variable with the typeof operator. HOWEVER, typeof has a few quirks, and will identify an Object, an Array, and null all as "object":
alert(typeof(null));
Identifying a null as a defined object would probably cause your program to fail, so check with something like
if(null !== x && 'object' == typeof(x)){
alert("Hey, It's an object or an array; good enough!");
}
A: if(typeof(o) != 'object') o = loadObject();
A: If you want to detect a custom object:
// craete a custom object
function MyObject(){
}
// check if it's the right kind of object
if(!(object instanceof MyObject)){
object = new MyObject();
}
A: You can also just use a shortcut if(obj)
A: typeof(obj) would return "object" for an object of a class among other possible values.
A: if (!("someVar" in window)) {
someVar = loadObject();
}
will tell you whether any JS has previously assigned to the global someVar or declared a top-level var someVar.
That will work even if the loaded value is undefined.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/95875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
}
|
Q: What is the best way to present data from a very large resultset? I'm writing a report view of an audit trail, and I need to display this in a .jsp. What's the "best" way to get the data from the database to the screen?
We're using Spring for dependency injection, Data Access Objects, and Hibernate.
I can use hibernate or straight jdbc for this report.
If I load all the records into memory I run out of memory.
Any ideas that don't involve running the query in the jsp?
A: It seems like this is a natural place to use pagination of your Hibernate results -- run the query at the Servlet level, and paginate results in a way similar to how this person describes:
http://blog.hibernate.org/Bloggers/Everyone/Year/2004/Month/08/Day/14#pagination
This is the easiest method of implementing Hibernate pagination I've seen...
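A minimal sketch of that approach (assuming a Hibernate Session is available; the AuditRecord entity name is a placeholder):
import java.util.List;
import org.hibernate.Session;

// Hypothetical helper: fetch one page of audit records at a time
public class AuditPager {
    public static List<?> fetchPage(Session session, int page, int pageSize) {
        return session.createQuery("from AuditRecord order by id")
                      .setFirstResult(page * pageSize)   // zero-based row offset
                      .setMaxResults(pageSize)           // rows per page
                      .list();
    }
}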
A: The Display Tag Library is very good at presenting paginated result sets in servlets or portlets. But it normally works with the whole list loaded into memory. So you will have to do a little work to get it to work with paginated result sets by implementing the org.displaytag.pagination.PaginatedList interface. There is a tutorial on the Display Tag web site. There isn't very much to the tutorial but then again implementing the interface is pretty easy.
A: Just use paging and only load a certain number of rows on the page at a time.
A: Another way to do it is to use scroll() to stream one row at the time.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/95886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: What is a variable's linkage and storage specifier? When someone talks about a variables storage class specifier, what are they talking about?
They also often talk about variable linkage in the same context, what is that?
A: The storage class specifier controls the storage and the linkage of your variables. These are two concepts that are different.
C specifies the following specifiers for variables: auto, extern, register, static.
Storage
The storage duration determines how long your variable will live in ram.
There are three types of storage duration: static, automatic and dynamic.
static
If your variable is declared at file scope, or with an extern or static specifier, it will have static storage. The variable will exist for as long as the program is executing. No execution time is spent to create these variables.
automatic
If the variable is declared in a function, but without the extern or static specifier, it has automatic storage. The variable will exist only while you are executing the function. Once you return, the variable no longer exists. Automatic storage is typically done on the stack. It is a very fast operation to create these variables (simply increment the stack pointer by the size).
dynamic
If you use malloc (or new in C++) you are using dynamic storage. This storage will exist until you call free (or delete). This is the most expensive way to create storage, as the system must manage allocation and deallocation dynamically.
Linkage
Linkage specifies who can see and reference the variable. There are three types of linkage: internal linkage, external linkage and no linkage.
no linkage
This variable is only visible where it was declared. Typically applies to variables declared in a function.
internal linkage
This variable will be visible to all the functions within the file (called a translation unit), but other files will not know it exists.
external linkage
The variable will be visible to other translation units. These are often thought of as "global variables".
Here is a table describing the storage and linkage characteristics based on the specifiers:
Specifier   At function scope            At file scope
---------   --------------------------   ----------------------------
none        automatic, no linkage        static, external linkage
extern      static, external linkage     static, external linkage
static      static, no linkage           static, internal linkage
auto        automatic, no linkage        invalid
register    automatic, no linkage        invalid
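A small C illustration of these categories (a sketch; the comments mark the storage duration and linkage of each variable):
#include <stdio.h>

int g_total = 0;          /* file scope, no specifier: static storage, external linkage */
static int s_hidden = 0;  /* file scope, static: static storage, internal linkage */

void tick(void)
{
    static int calls = 0; /* function scope, static: static storage, no linkage */
    int local = 0;        /* function scope, none: automatic storage, no linkage */
    calls++;
    local++;
    printf("calls=%d local=%d\n", calls, local);
}

int main(void)
{
    tick();
    tick(); /* prints calls=2 local=1: the static persists, the automatic does not */
    return 0;
}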
A: Variable storage class specifiers (like auto and static) define how/where variables are saved during program execution. For example, variables defined in functions are usually saved on the stack, which means they will be lost after the function returns. Using the "static" keyword, you can force the compiler to put the variable in the data segment in memory, making the variable's content persistent between calls to that function. The "register" keyword will cause the compiler to try as hard as possible to put the variable in a CPU register, useful for counters in loops etc. However, it's not guaranteed that it actually ends up in a register.
Read more about type specifiers here.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/95890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: How to check if one DateTime is greater than the other in C# I have two DateTime objects: StartDate and EndDate. I want to make sure StartDate is before EndDate. How is this done in C#?
A: StartDate < EndDate
A: This is probably too late, but to benefit other people who might stumble upon this, I used an extension method to do this using IComparable, like this:
public static class BetweenExtension
{
public static bool IsBetween<T>(this T value, T min, T max) where T : IComparable
{
return (min.CompareTo(value) <= 0) && (value.CompareTo(max) <= 0);
}
}
Using this extension method with IComparable makes this method more generic and makes it usable with a wide variety of data types and not just dates.
You would use it like this:
DateTime start = new DateTime(2015,1,1);
DateTime end = new DateTime(2015,12,31);
DateTime now = new DateTime(2015,8,20);
if(now.IsBetween(start, end))
{
//Your code here
}
A: Check out the DateTime.Compare method.
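For instance (Compare returns a negative value when the first argument is earlier, zero when they are equal, and a positive value when it is later):
DateTime start = new DateTime(2008, 1, 1);
DateTime end = new DateTime(2008, 6, 1);
if (DateTime.Compare(start, end) < 0)
{
    // start is before end
}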
A: if(StartDate < EndDate)
{}
DateTime supports the normal comparison operators.
A: I had the same requirement, but the accepted answer did not fulfill all of my unit tests. The issue for me is when you have a new object with Start and End dates and you have to set the Start date first (at this stage your End date still has the minimum date value of 01/01/0001). This solution did pass all my unit tests:
public DateTime Start
{
get { return _start; }
set
{
if (_end.Equals(DateTime.MinValue))
{
_start = value;
}
else if (value.Date < _end.Date)
{
_start = value;
}
else
{
throw new ArgumentException("Start date must be before the End date.");
}
}
}
public DateTime End
{
get { return _end; }
set
{
if (_start.Equals(DateTime.MinValue))
{
_end = value;
}
else if (value.Date > _start.Date)
{
_end = value;
}
else
{
throw new ArgumentException("End date must be after the Start date.");
}
}
}
It does miss the edge case where both Start and End dates can be 01/01/0001, but I'm not concerned about that.
A: You can use the overloaded < or > operators.
For example:
DateTime d1 = new DateTime(2008, 1, 1);
DateTime d2 = new DateTime(2008, 1, 2);
if (d1 < d2) { ...
A: if (StartDate < EndDate)
// code
if you just want the dates, and not the time
if (StartDate.Date < EndDate.Date)
// code
A: if (new DateTime(5000) > new DateTime(1000))
{
Console.WriteLine("i win");
}
A: if (StartDate>=EndDate)
{
throw new InvalidOperationException("Ack! StartDate is not before EndDate!");
}
A: I'd like to demonstrate that if you convert to .Date, you don't need to worry about hours/minutes/seconds etc.:
[Test]
public void ConvertToDateWillHaveTwoDatesEqual()
{
DateTime d1 = new DateTime(2008, 1, 1);
DateTime d2 = new DateTime(2008, 1, 2);
Assert.IsTrue(d1 < d2);
DateTime d3 = new DateTime(2008, 1, 1,7,0,0);
DateTime d4 = new DateTime(2008, 1, 1,10,0,0);
Assert.IsTrue(d3 < d4);
Assert.IsFalse(d3.Date < d4.Date);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/95895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "137"
}
|
Q: Client View of Everyone Looking at a Webpage First let me say that I really feel directionless on this question. I am using windows integrated security, and I can use vb.net to look up information about a user from AD. I also have other information about users I can look up from a MS SQL 2005 server by getting the logon identity name.
What I would like to do is display information about all the users actively viewing the web page to any one of the users viewing the web page. The information comes both from AD and SQL, and I have no problem retrieving it.
My route so far has been using SQL to store when the user first loads the page. I am stuck not knowing how to show when the user leaves the page. I tried using an ajax timer to update a timestamp for the user's visit every second, which also triggers the table to mark as inactive any record that has not been updated in 5 seconds. This works with only a few users, but I find that when more than a few people are viewing the page the 1-second update is not reliable. I also seem to have problems when the user minimizes the page. This sometimes stops the updates from the ajax timer and kicks the user off the list while they are still viewing the page.
This feature is not important to the function of the site it would be on, so I'd given up on it over a year ago. Since then it has really been a pain to me that I can not figure a way to make this work. My searches have led me down many fruitless paths, so I really will appreciate any help that can be offered even if it's only a lead in the correct direction.
A: The answer probably depends on how accurate you need the display to be. If it's just to give users a sense of the other people using the site I'd suggest something similar to what you've described, but backing off on the update frequency:
* on a page request, associate the user with the page (and a timestamp)
* use an Ajax timer to update the timestamp every minute or so
* kill the association via a window.onbeforeunload event (or similar)
* assume that any timestamps older than a minute (and a bit) are dead
You can try and catch some of the ways people leave a page, but it's never bullet proof. And with regards to the minimised page, I guess it's debatable whether they're actually viewing the page ;)
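A minimal JavaScript sketch of that scheme (the endpoint URLs are assumptions, and the unload notification is best-effort at most):
// Ping the server every minute so the association stays fresh
setInterval(function () {
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/presence/heartbeat", true); // hypothetical endpoint
    xhr.send();
}, 60000);
// Best-effort notification when the user leaves the page
window.onbeforeunload = function () {
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/presence/leave", false); // synchronous so it can complete before unload
    xhr.send();
};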
A: I think the best you can do is set a threshold for "visiting a page." Have an automated task run every 60, 120, 300, or some number of seconds that clears out any entry that is older than a specified amount of time. There is no way to reliably detect (that I am aware of) when a user leaves a page. The best you can do is "assume" a user has stopped using the site if a certain amount of time has elapsed. So you would store the user, the page, and the time viewed. Once that time viewed has surpassed your threshold, remove it.
A: I don't think having an AJAX request every second is a good idea, it's way too chatty.
I think most people implement this feature by just recording when someone makes a request to the site and from that time to threshold the user is 'visiting' the site. If the user doesn't make another server request before the threshold is reached then we assume that they have moved on.
A: How about having a tiny flash app on the page that streams a minuscule 'heartbeat' stream of data from the server....just enough to allow the server to know when a stream had been dropped, and hence when the client had navigated away from the page.
A: For the body have an onunload script:
<body onunload="userLeftPage()">
In that script, send an ajax call to say the user left the page.
A: The first answer works.
BUT...
You didn't mention that the users were logged in, so I suspect that isn't what you are looking at. Though, you can certainly simplify things if you are just requesting the list of logged in users every minute or so.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/95909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Find a private field with Reflection? Given this class
class Foo
{
// Want to find _bar with reflection
[SomeAttribute]
private string _bar;
public string BigBar
{
get { return this._bar; }
}
}
I want to find the private item _bar that I will mark with a attribute. Is that possible?
I have done this with properties where I have looked for an attribute, but never a private member field.
What are the binding flags that I need to set to get the private fields?
A: Here are some extension methods for simply getting and setting private fields and properties (properties with a setter):
usage example:
public class Foo
{
private int Bar = 5;
}
var targetObject = new Foo();
var barValue = targetObject.GetMemberValue("Bar");//Result is 5
targetObject.SetMemberValue("Bar", 10);//Sets Bar to 10
Code:
/// <summary>
/// Extension methods for using reflection to get / set member values
/// </summary>
public static class ReflectionExtensions
{
/// <summary>
/// Gets the public or private member using reflection.
/// </summary>
/// <param name="obj">The source target.</param>
/// <param name="memberName">Name of the field or property.</param>
/// <returns>the value of member</returns>
public static object GetMemberValue(this object obj, string memberName)
{
var memInf = GetMemberInfo(obj, memberName);
if (memInf == null)
throw new System.Exception("memberName");
if (memInf is System.Reflection.PropertyInfo)
return memInf.As<System.Reflection.PropertyInfo>().GetValue(obj, null);
if (memInf is System.Reflection.FieldInfo)
return memInf.As<System.Reflection.FieldInfo>().GetValue(obj);
throw new System.Exception();
}
/// <summary>
/// Gets the public or private member using reflection.
/// </summary>
/// <param name="obj">The target object.</param>
/// <param name="memberName">Name of the field or property.</param>
/// <returns>Old Value</returns>
public static object SetMemberValue(this object obj, string memberName, object newValue)
{
var memInf = GetMemberInfo(obj, memberName);
if (memInf == null)
throw new System.Exception("memberName");
var oldValue = obj.GetMemberValue(memberName);
if (memInf is System.Reflection.PropertyInfo)
memInf.As<System.Reflection.PropertyInfo>().SetValue(obj, newValue, null);
else if (memInf is System.Reflection.FieldInfo)
memInf.As<System.Reflection.FieldInfo>().SetValue(obj, newValue);
else
throw new System.Exception();
return oldValue;
}
/// <summary>
/// Gets the member info
/// </summary>
/// <param name="obj">source object</param>
/// <param name="memberName">name of member</param>
/// <returns>instance of MemberInfo corresponding to the member</returns>
private static System.Reflection.MemberInfo GetMemberInfo(object obj, string memberName)
{
var prps = new System.Collections.Generic.List<System.Reflection.PropertyInfo>();
prps.Add(obj.GetType().GetProperty(memberName,
System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.Instance |
System.Reflection.BindingFlags.FlattenHierarchy));
prps = System.Linq.Enumerable.ToList(System.Linq.Enumerable.Where( prps,i => !ReferenceEquals(i, null)));
if (prps.Count != 0)
return prps[0];
var flds = new System.Collections.Generic.List<System.Reflection.FieldInfo>();
flds.Add(obj.GetType().GetField(memberName,
System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Instance |
System.Reflection.BindingFlags.FlattenHierarchy));
//to add more types of properties
flds = System.Linq.Enumerable.ToList(System.Linq.Enumerable.Where(flds, i => !ReferenceEquals(i, null)));
if (flds.Count != 0)
return flds[0];
return null;
}
[System.Diagnostics.DebuggerHidden]
private static T As<T>(this object obj)
{
return (T)obj;
}
}
A: Get private variable's value using Reflection:
var _barVariable = typeof(Foo).GetField("_bar", BindingFlags.NonPublic | BindingFlags.Instance).GetValue(objectForFooClass);
Set value for private variable using Reflection:
typeof(Foo).GetField("_bar", BindingFlags.NonPublic | BindingFlags.Instance).SetValue(objectForFooClass, "newValue");
Where objectForFooClass is a non-null instance of the class Foo.
A: I use this method personally
if (typeof(Foo).GetFields(BindingFlags.NonPublic | BindingFlags.Instance).Any(c => c.GetCustomAttributes(typeof(SomeAttribute), false).Any()))
{
// do stuff
}
A: Yes, however you will need to set your Binding flags to search for private fields (if your looking for the member outside of the class instance).
The binding flag you will need is: System.Reflection.BindingFlags.NonPublic
A: Nice Syntax With Extension Method
You can access any private field of an arbitrary type with code like this:
Foo foo = new Foo();
string c = foo.GetFieldValue<string>("_bar");
For that you need to define an extension method that will do the work for you:
public static class ReflectionExtensions {
public static T GetFieldValue<T>(this object obj, string name) {
// Set the flags so that private and public fields from instances will be found
var bindingFlags = BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance;
var field = obj.GetType().GetField(name, bindingFlags);
return (T)field?.GetValue(obj);
}
}
A: Use BindingFlags.NonPublic and BindingFlags.Instance flags
FieldInfo[] fields = myType.GetFields(
BindingFlags.NonPublic |
BindingFlags.Instance);
A: One thing that you need to be aware of when reflecting on private members is that if your application is running in medium trust (as, for instance, when you are running on a shared hosting environment), it won't find them -- the BindingFlags.NonPublic option will simply be ignored.
A: I came across this while searching on Google, so I realise I'm bumping an old post. However, GetCustomAttributes requires two parameters:
typeof(Foo).GetFields(BindingFlags.NonPublic | BindingFlags.Instance)
.Where(x => x.GetCustomAttributes(typeof(SomeAttribute), false).Length > 0);
The second parameter specifies whether or not you wish to search the inheritance hierarchy
A: typeof(MyType).GetField("fieldName", BindingFlags.NonPublic | BindingFlags.Instance)
A: You can do it just like with a property:
FieldInfo fi = typeof(Foo).GetField("_bar", BindingFlags.NonPublic | BindingFlags.Instance);
if (fi.GetCustomAttributes(typeof(SomeAttribute), false).Length > 0)
...
A: If your .NET Framework version is 4.5 or greater, you can use the GetRuntimeFields method.
This method returns all fields that are defined on the specified type, including inherited, non-public, instance, and static fields.
https://learn.microsoft.com/en-us/dotnet/api/system.reflection.runtimereflectionextensions.getruntimefields?view=net-6.0
var foo = new Foo();
var fooFields = foo.GetType().GetRuntimeFields();
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/95910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "273"
}
|
Q: How can I detect if my process is running UAC-elevated or not? My Vista application needs to know whether the user has launched it "as administrator" (elevated) or as a standard user (non-elevated). How can I detect that at run time?
A: For those of us working in C#, in the Windows SDK there is a "UACDemo" application as a part of the "Cross Technology Samples". They find if the current user is an administrator using this method:
private bool IsAdministrator
{
get
{
WindowsIdentity wi = WindowsIdentity.GetCurrent();
WindowsPrincipal wp = new WindowsPrincipal(wi);
return wp.IsInRole(WindowsBuiltInRole.Administrator);
}
}
(Note: I refactored the original code to be a property, rather than an "if" statement)
A: I do not think elevation type is the answer you want; you just want to know whether the process is elevated. Use TokenElevation instead of TokenElevationType when you call GetTokenInformation. If the returned structure holds a positive value, the process is elevated; if it holds zero, it is not.
Here is a Delphi solution:
function TMyAppInfo.RunningAsAdmin: boolean;
var
hToken, hProcess: THandle;
pTokenInformation: pointer;
ReturnLength: DWord;
TokenInformation: TTokenElevation;
begin
hProcess := GetCurrentProcess;
try
if OpenProcessToken(hProcess, TOKEN_QUERY, hToken) then try
TokenInformation.TokenIsElevated := 0;
pTokenInformation := @TokenInformation;
GetTokenInformation(hToken, TokenElevation, pTokenInformation, sizeof(TokenInformation), ReturnLength);
result := (TokenInformation.TokenIsElevated > 0);
finally
CloseHandle(hToken);
end;
except
result := false;
end;
end;
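The same TokenElevation check can be sketched in C# via P/Invoke (mirroring the Delphi above; a sketch, not production-hardened code):
using System;
using System.Runtime.InteropServices;

static class ElevationCheck
{
    const uint TOKEN_QUERY = 0x0008;
    const int TokenElevation = 20; // TOKEN_INFORMATION_CLASS.TokenElevation

    [DllImport("advapi32.dll", SetLastError = true)]
    static extern bool OpenProcessToken(IntPtr hProcess, uint access, out IntPtr hToken);

    [DllImport("advapi32.dll", SetLastError = true)]
    static extern bool GetTokenInformation(IntPtr hToken, int infoClass,
        out uint tokenInfo, uint tokenInfoLength, out uint returnLength);

    [DllImport("kernel32.dll")]
    static extern IntPtr GetCurrentProcess();

    [DllImport("kernel32.dll")]
    static extern bool CloseHandle(IntPtr handle);

    public static bool IsElevated()
    {
        IntPtr hToken;
        if (!OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, out hToken))
            return false;
        try
        {
            uint elevated, returnLength;
            // TOKEN_ELEVATION is a single DWORD: nonzero means elevated
            return GetTokenInformation(hToken, TokenElevation, out elevated,
                sizeof(uint), out returnLength) && elevated != 0;
        }
        finally
        {
            CloseHandle(hToken);
        }
    }
}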
A: The following C++ function can do that:
HRESULT GetElevationType( __out TOKEN_ELEVATION_TYPE * ptet );
/*
Parameters:
ptet
[out] Pointer to a variable that receives the elevation type of the current process.
The possible values are:
TokenElevationTypeDefault - This value indicates that either UAC is disabled,
or the process is started by a standard user (not a member of the Administrators group).
The following two values can be returned only if both the UAC is enabled
and the user is a member of the Administrator's group:
TokenElevationTypeFull - the process is running elevated.
TokenElevationTypeLimited - the process is not running elevated.
Return Values:
If the function succeeds, the return value is S_OK.
If the function fails, the return value is E_FAIL. To get extended error information, call GetLastError().
Implementation:
*/
HRESULT GetElevationType( __out TOKEN_ELEVATION_TYPE * ptet )
{
if ( !IsVista() )
return E_FAIL;
HRESULT hResult = E_FAIL; // assume an error occurred
HANDLE hToken = NULL;
if ( !::OpenProcessToken(
::GetCurrentProcess(),
TOKEN_QUERY,
&hToken ) )
{
return hResult;
}
DWORD dwReturnLength = 0;
if ( ::GetTokenInformation(
hToken,
TokenElevationType,
ptet,
sizeof( *ptet ),
&dwReturnLength ) )
{
ASSERT( dwReturnLength == sizeof( *ptet ) );
hResult = S_OK;
}
::CloseHandle( hToken );
return hResult;
}
A: Here is a VB6 implementation of a check if a (current) process is elevated
Option Explicit
'--- for OpenProcessToken
Private Const TOKEN_QUERY As Long = &H8
Private Const TokenElevation As Long = 20
Private Declare Function GetCurrentProcess Lib "kernel32" () As Long
Private Declare Function OpenProcessToken Lib "advapi32" (ByVal ProcessHandle As Long, ByVal DesiredAccess As Long, TokenHandle As Long) As Long
Private Declare Function GetTokenInformation Lib "advapi32" (ByVal TokenHandle As Long, ByVal TokenInformationClass As Long, TokenInformation As Any, ByVal TokenInformationLength As Long, ReturnLength As Long) As Long
Private Declare Function CloseHandle Lib "kernel32" (ByVal hObject As Long) As Long
Public Function IsElevated(Optional ByVal hProcess As Long) As Boolean
Dim hToken As Long
Dim dwIsElevated As Long
Dim dwLength As Long
If hProcess = 0 Then
hProcess = GetCurrentProcess()
End If
If OpenProcessToken(hProcess, TOKEN_QUERY, hToken) <> 0 Then
If GetTokenInformation(hToken, TokenElevation, dwIsElevated, 4, dwLength) <> 0 Then
IsElevated = (dwIsElevated <> 0)
End If
Call CloseHandle(hToken)
End If
End Function
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/95912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42"
}
|
Q: Python reading Oracle path On my desktop I have written a small Pylons app that connects to Oracle. I'm now trying to deploy it to my server which is running Win2k3 x64. (My desktop is 32-bit XP) The Oracle installation on the server is also 64-bit.
I was getting errors about loading the OCI dll, so I installed the 32 bit client into C:\oracle32.
If I add this to the PATH environment variable, it works great. But I also want to run the Pylons app as a service (using this recipe) and don't want to put this 32-bit library on the path for all other applications.
I tried using sys.path.append("C:\\oracle32\\bin") but that doesn't seem to work.
A: sys.path is Python's internal representation of the PYTHONPATH; it sounds to me like you want to modify the PATH.
I'm not sure that this will work, but you can try:
import os
os.environ['PATH'] += os.pathsep + "C:\\oracle32\\bin"
A: You need to append the c:\Oracle32\bin directory to the PATH variable of your environment before you execute python.exe.
In Linux, I need to set up the LD_LIBRARY_PATH variable for similar reasons, to locate the Oracle libraries, before calling python. I use wrapper shell scripts that set the variable and then call Python.
In your case, maybe you can call, in the service startup, a .cmd or .vbs script that sets the PATH variable and then calls python.exe with your .py script.
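For example, a wrapper .cmd along these lines (the paths are assumptions):
@echo off
rem Prepend the 32-bit Oracle client directory for this process only
set PATH=C:\oracle32\bin;%PATH%
python.exe C:\apps\mypylonsapp\run.py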
I hope this helps!
A: If your Python application runs in the 64-bit space, you will need to access a 64-bit installation of Oracle's oci.dll, rather than the 32-bit version. Normally you would update the system path to include the appropriate Oracle Home bin directory, prior to running the script. The solution may also vary depending on what component you are using to access Oracle from Python.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/95950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Are delegates not just shorthand interfaces? Suppose we have:
interface Foo
{
bool Func(int x);
}
class Bar: Foo
{
public bool Func(int x)
{
return (x>0);
}
}
class Baz: Foo
{
public bool Func(int x)
{
return (x<0);
}
}
Now we can toss around Bar and Baz as Foos and call their Func methods.
Delegates simplify this a little bit:
delegate bool Foo(int x);
bool Bar(int x)
{
return (x<0);
}
bool Baz(int x)
{
return (x>0);
}
Now we can toss around Bar and Baz as Foo delegates.
What is the real benefit of delegates, except for getting shorter code?
A: From a Software Engineering perspective you are right: delegates are much like function interfaces, in that they prototype a function's signature.
They can also be used in much the same kind of way: instead of passing in a whole class that contains the method you need, you can pass in just a delegate. This saves a whole lot of code and creates much more readable code.
Moreover, with the advent of lambda expressions they can now also be defined easily on the fly, which is a huge bonus. While it is POSSIBLE to build classes on the fly in C#, it's really a huge pain in the butt.
Comparing the two is an interesting concept. I hadn't previously considered how much alike the ideas are from a use case and code structuring standpoint.
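A minimal sketch of that point, reusing the question's Foo delegate (no class declarations needed on the caller's side):
delegate bool Foo(int x);

class Program
{
    static void Apply(Foo f, int value)
    {
        System.Console.WriteLine(f(value));
    }

    static void Main()
    {
        // A lambda converts to the delegate type at the call site
        Apply(x => x > 0, 5);  // True
        Apply(x => x < 0, 5);  // False
    }
}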
A: A delegate does share a lot in common with a interface reference that has a single method from the caller's point of view.
In the first example, Baz and Bar are classes, which can be inherited and instantiated. In the second example, Baz and Bar are methods.
You can't apply interface references to just any class that matches the interface contract. The class must explicitly declare that it supports the interface.
You can apply a delegate reference to any method that matches the signature.
You can't include static methods in an interface's contract. (Although you can bolt static methods on with extension methods).
You can refer to static methods with a delegate reference.
A: No, delegates are for method pointers. Then you can make sure that the signature of the method associated w/ the delegate is correct.
Also, then you don't need to know the structure of the class. This way, you can use a method that you have written to pass into a method in another class, and define the functionality you want to have happen.
Take a look at the List<> class with the Find method. Now you get to define what determines if something is a match or not, without requiring items contained in the class to implement IListFindable or something similar.
A: You can pass delegates as parameters in functions (Ok technically delegates become objects when compiled, but that's not the point here). You could pass an object as a parameter (obviously), but then you are tying that type of object to the function as a parameter. With delegates you can pass any function to execute in the code that has the same signature regardless of where it comes from.
A: There is a slight difference: delegates can access the member variables of the class in which they are defined. In C# (unlike Java), all inner classes are considered to be static. Therefore, if you are using an interface to manage a callback, e.g. an ActionListener for a button, the implementing inner class needs to be passed (via the constructor) references to the parts of the containing class that it may need to interact with during the callback. Delegates do not have this restriction, which reduces the amount of code required to implement the callback.
Shorter, more concise code is also a worthy benefit.
A: One can think of delegates as an interface for a method which defines what arguments and return type a method must have to fit the delegate
A: A delegate is a typed method pointer. This gives you more flexibility than interfaces because you can take advantage of covariance and contravariance, and you can modify object state (you'd have to pass the this pointer around with interface based functors).
Also, delegates have lots of nice syntactic sugar which allows you to do things like combine them together easily.
A: Yes, a delegate can be thought of as an interface with one method.
A: Interfaces and delegates are two utterly different things, although I understand the temptation to describe delegates in interface-like terms for ease of understanding...however, not knowing the truth may lead to confusion down the line.
Delegates were inspired (partly) by the black art of C++ method pointers being inadequate for certain purposes. A classic example is implementing a message-passing or event-handling mechanism. Delegates allow you to define a method signature without any knowledge of a class' types or interfaces - I could define a "void eventHandler(Event* e)" delegate and invoke it on any class that implemented it.
For some insight into this classic problem, and why delegates are desirable read this and then this.
A: In at least one proposal for adding closures (i.e. anonymous delegates) to Java, they are equivalent to interfaces with a single member method.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/95954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: FindNextFile fails on 64-bit Windows? Using C++Builder 2007, the FindFirstFile and FindNextFile functions don't seem to be able to find some files on 64-bit versions of Vista and XP. My test application is 32-bit.
If I use them to iterate through the folder C:\Windows\System32\Drivers they find only a handful of files, although there are 185 when I issue a dir command in a command prompt. Using the same example code lists all files fine on a 32-bit version of XP.
Here is a small example program:
int main(int argc, char* argv[])
{
HANDLE hFind;
WIN32_FIND_DATA FindData;
int ErrorCode;
bool cont = true;
cout << "FindFirst/Next demo." << endl << endl;
hFind = FindFirstFile("*.*", &FindData);
if(hFind == INVALID_HANDLE_VALUE)
{
ErrorCode = GetLastError();
if (ErrorCode == ERROR_FILE_NOT_FOUND)
{
cout << "There are no files matching that path/mask\n" << endl;
}
else
{
cout << "FindFirstFile() returned error code " << ErrorCode << endl;
}
cont = false;
}
else
{
cout << FindData.cFileName << endl;
}
if (cont)
{
while (FindNextFile(hFind, &FindData))
{
cout << FindData.cFileName << endl;
}
ErrorCode = GetLastError();
if (ErrorCode == ERROR_NO_MORE_FILES)
{
cout << endl << "All files logged." << endl;
}
else
{
cout << "FindNextFile() returned error code " << ErrorCode << endl;
}
if (!FindClose(hFind))
{
ErrorCode = GetLastError();
cout << "FindClose() returned error code " << ErrorCode << endl;
}
}
return 0;
}
Running it in the C:\Windows\System32\Drivers folder on 64-bit XP returns this:
C:\WINDOWS\system32\drivers>t:\Project1.exe
FindFirst/Next demo.
.
..
AsIO.sys
ASUSHWIO.SYS
hfile.txt
raspti.zip
stcp2v30.sys
truecrypt.sys
All files logged.
A dir command on the same system returns this:
C:\WINDOWS\system32\drivers>dir/p
Volume in drive C has no label.
Volume Serial Number is E8E1-0F1E
Directory of C:\WINDOWS\system32\drivers
16-09-2008 23:12 <DIR> .
16-09-2008 23:12 <DIR> ..
17-02-2007 00:02 80.384 1394bus.sys
16-09-2008 23:12 9.453 a.txt
17-02-2007 00:02 322.560 acpi.sys
29-03-2006 14:00 18.432 acpiec.sys
24-03-2005 17:11 188.928 aec.sys
21-06-2008 15:07 291.840 afd.sys
29-03-2006 14:00 51.712 amdk8.sys
17-02-2007 00:03 111.104 arp1394.sys
08-05-2006 20:19 8.192 ASACPI.sys
29-03-2006 14:00 25.088 asyncmac.sys
17-02-2007 00:03 150.016 atapi.sys
17-02-2007 00:03 106.496 atmarpc.sys
29-03-2006 14:00 57.344 atmepvc.sys
17-02-2007 00:03 91.648 atmlane.sys
17-02-2007 00:03 569.856 atmuni.sys
24-03-2005 19:12 5.632 audstub.sys
29-03-2006 14:00 6.144 beep.sys
Press any key to continue . . .
etc.
I'm puzzled. What is the reason for this?
Brian
A: Is there redirection going on? See the remarks on Wow64DisableWow64FsRedirection http://msdn.microsoft.com/en-gb/library/aa365743.aspx
A: I found this on MSDN:
If you are writing a 32-bit application to list all the files in a directory and the application may be run on a 64-bit computer, you should call the Wow64DisableWow64FsRedirection function before calling FindFirstFile and call Wow64RevertWow64FsRedirection after the last call to FindNextFile. For more information, see File System Redirector.
Here's the link
I'll have to update my code because of this :-)
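A sketch of what that change might look like around the directory walk (Wow64DisableWow64FsRedirection and Wow64RevertWow64FsRedirection exist only on 64-bit-capable systems; a robust build would resolve them via GetProcAddress):
PVOID oldState = NULL;
if (Wow64DisableWow64FsRedirection(&oldState))
{
    HANDLE hFind = FindFirstFile("C:\\Windows\\System32\\drivers\\*", &FindData);
    // ... iterate with FindNextFile as in the question ...
    FindClose(hFind);
    Wow64RevertWow64FsRedirection(oldState); // always restore redirection afterwards
}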
A: Got it:
http://msdn.microsoft.com/en-gb/library/aa384187(VS.85).aspx
When a 32-bit application reads from one of these folders on a 64-bit OS:
%windir%\system32\catroot
%windir%\system32\catroot2
%windir%\system32\drivers\etc
%windir%\system32\logfiles
%windir%\system32\spool
Windows actually lists the content of:
%windir%\SysWOW64\catroot
%windir%\SysWOW64\catroot2
%windir%\SysWOW64\drivers\etc
%windir%\SysWOW64\logfiles
%windir%\SysWOW64\spool
Thanks for your input Kris, that helped me find out what is going on.
EDIT: Thank you Ludvig too :-)
A: Are you sure it is looking in the same directory as the dir command? They don't seem to have any files in common.
Also, this isn't the issue, but the conventional wild card for "all files" is *
*.* literally means "all files with at least one . in the name" (although Win32's FindFirstFile treats *.* as matching everything, for backward compatibility)
A: Are there any warnings when you compile?
Have you turned ALL warnings on for this particular test (since it is not working)?
Make sure to resolve the warnings first.
A: There are no problems with the example code. I have another application that fails too, written in Delphi. I think I found the answer based on Kris' answer about redirection:
http://msdn.microsoft.com/en-gb/library/aa364418(VS.85).aspx
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/95956",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: How do you list the primary key of a SQL Server table? Simple question, how do you list the primary key of a table with T-SQL? I know how to get indexes on a table, but can't remember how to get the PK.
A: Here's another way from the question get table primary key using sql query:
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
WHERE OBJECTPROPERTY(OBJECT_ID(CONSTRAINT_SCHEMA+'.'+CONSTRAINT_NAME), 'IsPrimaryKey') = 1
AND TABLE_NAME = '<your table name>'
It uses KEY_COLUMN_USAGE to determine the constraints for a given table
Then uses OBJECTPROPERTY(id, 'IsPrimaryKey') to determine if each is a primary key
A: Here is a simple technique I follow:
SP_HELP 'table_name'
Run this as a query, putting the name of the table whose primary key you want in place of table_name (don't forget the single quotes). The primary key is listed in the result. Hope it helps you.
A: It's generally recommended practice now to use the sys.* views over INFORMATION_SCHEMA in SQL Server, so unless you're planning on migrating databases I would use those. Here's how you would do it with the sys.* views:
SELECT
c.name AS column_name,
i.name AS index_name,
c.is_identity
FROM sys.indexes i
inner join sys.index_columns ic ON i.object_id = ic.object_id AND i.index_id = ic.index_id
inner join sys.columns c ON ic.object_id = c.object_id AND c.column_id = ic.column_id
WHERE i.is_primary_key = 1
and i.object_ID = OBJECT_ID('<schema>.<tablename>');
A: -- Another modified version, which also restricts the results to user tables via a subquery
SELECT TC.TABLE_NAME as [Table_name], TC.CONSTRAINT_NAME as [Primary_Key]
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS TC
INNER JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE CCU
ON TC.CONSTRAINT_NAME = CCU.CONSTRAINT_NAME
WHERE TC.CONSTRAINT_TYPE = 'PRIMARY KEY' AND
TC.TABLE_NAME IN
(SELECT [NAME] AS [TABLE_NAME] FROM SYS.OBJECTS
WHERE TYPE = 'U')
A: This should list all the constraints (primary key and foreign keys); put your table name at the end of the query
/* CASTs are done so that the output stays within the screen width when written to a text file */
WITH ALL_KEYS_IN_TABLE (CONSTRAINT_NAME,CONSTRAINT_TYPE,PARENT_TABLE_NAME,PARENT_COL_NAME,PARENT_COL_NAME_DATA_TYPE,REFERENCE_TABLE_NAME,REFERENCE_COL_NAME)
AS
(
SELECT CONSTRAINT_NAME= CAST (PKnUKEY.name AS VARCHAR(30)) ,
CONSTRAINT_TYPE=CAST (PKnUKEY.type_desc AS VARCHAR(30)) ,
PARENT_TABLE_NAME=CAST (PKnUTable.name AS VARCHAR(30)) ,
PARENT_COL_NAME=CAST ( PKnUKEYCol.name AS VARCHAR(30)) ,
PARENT_COL_NAME_DATA_TYPE= oParentColDtl.DATA_TYPE,
REFERENCE_TABLE_NAME='' ,
REFERENCE_COL_NAME=''
FROM sys.key_constraints as PKnUKEY
INNER JOIN sys.tables as PKnUTable
ON PKnUTable.object_id = PKnUKEY.parent_object_id
INNER JOIN sys.index_columns as PKnUColIdx
ON PKnUColIdx.object_id = PKnUTable.object_id
AND PKnUColIdx.index_id = PKnUKEY.unique_index_id
INNER JOIN sys.columns as PKnUKEYCol
ON PKnUKEYCol.object_id = PKnUTable.object_id
AND PKnUKEYCol.column_id = PKnUColIdx.column_id
INNER JOIN INFORMATION_SCHEMA.COLUMNS oParentColDtl
ON oParentColDtl.TABLE_NAME=PKnUTable.name
AND oParentColDtl.COLUMN_NAME=PKnUKEYCol.name
UNION ALL
SELECT CONSTRAINT_NAME= CAST (oConstraint.name AS VARCHAR(30)) ,
CONSTRAINT_TYPE='FK',
PARENT_TABLE_NAME=CAST (oParent.name AS VARCHAR(30)) ,
PARENT_COL_NAME=CAST ( oParentCol.name AS VARCHAR(30)) ,
PARENT_COL_NAME_DATA_TYPE= oParentColDtl.DATA_TYPE,
REFERENCE_TABLE_NAME=CAST ( oReference.name AS VARCHAR(30)) ,
REFERENCE_COL_NAME=CAST (oReferenceCol.name AS VARCHAR(30))
FROM sys.foreign_key_columns FKC
INNER JOIN sys.sysobjects oConstraint
ON FKC.constraint_object_id=oConstraint.id
INNER JOIN sys.sysobjects oParent
ON FKC.parent_object_id=oParent.id
INNER JOIN sys.all_columns oParentCol
ON FKC.parent_object_id=oParentCol.object_id /* ID of the object to which this column belongs.*/
AND FKC.parent_column_id=oParentCol.column_id/* ID of the column. Is unique within the object.Column IDs might not be sequential.*/
INNER JOIN sys.sysobjects oReference
ON FKC.referenced_object_id=oReference.id
INNER JOIN INFORMATION_SCHEMA.COLUMNS oParentColDtl
ON oParentColDtl.TABLE_NAME=oParent.name
AND oParentColDtl.COLUMN_NAME=oParentCol.name
INNER JOIN sys.all_columns oReferenceCol
ON FKC.referenced_object_id=oReferenceCol.object_id /* ID of the object to which this column belongs.*/
AND FKC.referenced_column_id=oReferenceCol.column_id/* ID of the column. Is unique within the object.Column IDs might not be sequential.*/
)
select * from ALL_KEYS_IN_TABLE
where
PARENT_TABLE_NAME in ('YOUR_TABLE_NAME')
or REFERENCE_TABLE_NAME in ('YOUR_TABLE_NAME')
ORDER BY PARENT_TABLE_NAME,CONSTRAINT_NAME;
For reference please read thru - http://blogs.msdn.com/b/sqltips/archive/2005/09/16/469136.aspx
A: This is a solution which uses only sys-tables.
It lists all the primary keys in the database. It returns schema, table name, column name and the correct column sort order for each primary key.
If you want to get the primary key for a specific table, then you need to filter on SchemaName and TableName.
IMHO, this solution is very generic and does not use any string literals, so it will run on any machine.
select
s.name as SchemaName,
t.name as TableName,
tc.name as ColumnName,
ic.key_ordinal as KeyOrderNr
from
sys.schemas s
inner join sys.tables t on s.schema_id=t.schema_id
inner join sys.indexes i on t.object_id=i.object_id
inner join sys.index_columns ic on i.object_id=ic.object_id
and i.index_id=ic.index_id
inner join sys.columns tc on ic.object_id=tc.object_id
and ic.column_id=tc.column_id
where i.is_primary_key=1
order by t.name, ic.key_ordinal ;
A: The system stored procedure sp_help will give you the information. Execute the following statement:
execute sp_help table_name
A: SELECT t.name AS 'table', i.name AS 'index', it.xtype,
(SELECT c.name FROM syscolumns c INNER JOIN sysindexkeys k
ON k.indid = i.indid
AND c.colid = k.colid
AND c.id = t.id
AND k.keyno = 1
AND k.id = t.id)
AS 'column1',
(SELECT c.name FROM syscolumns c INNER JOIN sysindexkeys k
ON k.indid = i.indid
AND c.colid = k.colid
AND c.id = t.id
AND k.keyno = 2
AND k.id = t.id)
AS 'column2',
(SELECT c.name FROM syscolumns c INNER JOIN sysindexkeys k
ON k.indid = i.indid
AND c.colid = k.colid
AND c.id = t.id
AND k.keyno = 3
AND k.id = t.id)
AS 'column3',
(SELECT c.name FROM syscolumns c INNER JOIN sysindexkeys k
ON k.indid = i.indid
AND c.colid = k.colid
AND c.id = t.id
AND k.keyno = 4
AND k.id = t.id)
AS 'column4',
(SELECT c.name FROM syscolumns c INNER JOIN sysindexkeys k
ON k.indid = i.indid
AND c.colid = k.colid
AND c.id = t.id
AND k.keyno = 5
AND k.id = t.id)
AS 'column5',
(SELECT c.name FROM syscolumns c INNER JOIN sysindexkeys k
ON k.indid = i.indid
AND c.colid = k.colid
AND c.id = t.id
AND k.keyno = 6
AND k.id = t.id)
AS 'column6',
(SELECT c.name FROM syscolumns c INNER JOIN sysindexkeys k
ON k.indid = i.indid
AND c.colid = k.colid
AND c.id = t.id
AND k.keyno = 7
AND k.id = t.id)
AS 'column7',
(SELECT c.name FROM syscolumns c INNER JOIN sysindexkeys k
ON k.indid = i.indid
AND c.colid = k.colid
AND c.id = t.id
AND k.keyno = 8
AND k.id = t.id)
AS 'column8',
(SELECT c.name FROM syscolumns c INNER JOIN sysindexkeys k
ON k.indid = i.indid
AND c.colid = k.colid
AND c.id = t.id
AND k.keyno = 9
AND k.id = t.id)
AS 'column9',
(SELECT c.name FROM syscolumns c INNER JOIN sysindexkeys k
ON k.indid = i.indid
AND c.colid = k.colid
AND c.id = t.id
AND k.keyno = 10
AND k.id = t.id)
AS 'column10'
FROM sysobjects t
INNER JOIN sysindexes i ON i.id = t.id
INNER JOIN sysobjects it ON it.parent_obj = t.id AND it.name = i.name
WHERE it.xtype = 'PK'
ORDER BY t.name, i.name
A: This one gives you the columns that participate in key constraints (note that KEY_COLUMN_USAGE includes foreign key columns as well):
SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE WHERE TABLE_NAME = 'TableName'
A: For a comma separated list of primary key columns for a given TableName and Schema:
Select distinct SUBSTRING ( stuff(( select distinct ',' + [COLUMN_NAME]
from INFORMATION_SCHEMA.KEY_COLUMN_USAGE
where OBJECTPROPERTY(OBJECT_ID(CONSTRAINT_SCHEMA + '.' + QUOTENAME(CONSTRAINT_NAME)), 'IsPrimaryKey') = 1
AND TABLE_NAME = 'TableName' AND TABLE_SCHEMA = 'Schema'
order by 1 FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'),1,0,'' )
,2,9999)
A: SELECT Col.Column_Name from
INFORMATION_SCHEMA.TABLE_CONSTRAINTS Tab,
INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE Col
WHERE
Col.Constraint_Name = Tab.Constraint_Name
AND Col.Table_Name = Tab.Table_Name
AND Tab.Constraint_Type = 'PRIMARY KEY'
AND Col.Table_Name = '<your table name>'
A: If using MS SQL Server you can do the following:
-- List all tables primary keys
SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
WHERE CONSTRAINT_TYPE = 'PRIMARY KEY'
You can also filter on the table_name column if you want a specific table.
A: I like the INFORMATION_SCHEMA technique, but another I've used is:
exec sp_pkeys 'table'
A: Thanks Guy.
With a slight variation I used it to find all the primary keys for all the tables.
SELECT A.Name,Col.Column_Name from
INFORMATION_SCHEMA.TABLE_CONSTRAINTS Tab,
INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE Col ,
(select NAME from dbo.sysobjects where xtype='u') AS A
WHERE
Col.Constraint_Name = Tab.Constraint_Name
AND Col.Table_Name = Tab.Table_Name
AND Constraint_Type = 'PRIMARY KEY'
AND Col.Table_Name = A.Name
A: SELECT A.TABLE_NAME as [Table_name], A.CONSTRAINT_NAME as [Primary_Key]
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS A, INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE B
WHERE CONSTRAINT_TYPE = 'PRIMARY KEY' AND A.CONSTRAINT_NAME = B.CONSTRAINT_NAME
A: The query below will list the key constraints defined on a particular table:
SELECT DISTINCT
CONSTRAINT_NAME AS [Constraint],
TABLE_SCHEMA AS [Schema],
TABLE_NAME AS TableName
FROM
INFORMATION_SCHEMA.KEY_COLUMN_USAGE
WHERE
TABLE_NAME = 'mytablename'
A: If you are looking to do your own ORM or generate code from a given table, then this might be what you are looking form:
declare @table varchar(100) = 'mytable';
with cte as
(
select
tc.CONSTRAINT_SCHEMA
, tc.CONSTRAINT_TYPE
, tc.TABLE_NAME
, ccu.COLUMN_NAME
, IS_NULLABLE
, DATA_TYPE
, CHARACTER_MAXIMUM_LENGTH
, NUMERIC_PRECISION
from
INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
inner join INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE ccu on tc.TABLE_NAME=ccu.TABLE_NAME and tc.TABLE_SCHEMA=ccu.TABLE_SCHEMA
inner join information_schema.COLUMNS c on ccu.COLUMN_NAME=c.COLUMN_NAME and ccu.TABLE_NAME=c.TABLE_NAME and ccu.TABLE_SCHEMA=c.TABLE_SCHEMA
where
tc.table_name=@table
and
ccu.CONSTRAINT_NAME=tc.CONSTRAINT_NAME
union
select TABLE_SCHEMA,'COLUMN', TABLE_NAME, COLUMN_NAME, IS_NULLABLE, DATA_TYPE,CHARACTER_MAXIMUM_LENGTH, NUMERIC_PRECISION from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME=@table
and COLUMN_NAME not in (select COLUMN_NAME from INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE where TABLE_NAME = @table)
)
select
cast(iif(CONSTRAINT_TYPE='PRIMARY KEY',1,0) as bit) PrimaryKey
,cast(iif(CONSTRAINT_TYPE='FOREIGN KEY',1,0) as bit) ForeignKey
,cast(iif(CONSTRAINT_TYPE='COLUMN',1,0) as bit) NotKey
,COLUMN_NAME
,cast(iif(is_nullable='NO',0,1) as bit) IsNullable
, DATA_TYPE
, CHARACTER_MAXIMUM_LENGTH
, NUMERIC_PRECISION
from
cte
order by
case CONSTRAINT_TYPE
when 'PRIMARY KEY' then 1
when 'FOREIGN KEY' then 2
else 3 end
, COLUMN_NAME
Here is what the result would look like:
PrimaryKey  ForeignKey  NotKey  COLUMN_NAME    IsNullable  DATA_TYPE  CHARACTER_MAXIMUM_LENGTH  NUMERIC_PRECISION
1           0           0       LectureNoteID  0           int        NULL                      10
0           1           0       LectureId      0           int        NULL                      10
0           1           0       NoteTypeID     0           int        NULL                      10
0           0           1       Body           0           nvarchar   -1                        NULL
0           0           1       DisplayOrder   0           int        NULL                      10
A: If Primary Key and type needed, this query may be useful:
SELECT L.TABLE_SCHEMA, L.TABLE_NAME, L.COLUMN_NAME, R.TypeName
FROM(
SELECT COLUMN_NAME, TABLE_NAME, TABLE_SCHEMA
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
WHERE OBJECTPROPERTY(OBJECT_ID(CONSTRAINT_SCHEMA + '.' + QUOTENAME(CONSTRAINT_NAME)), 'IsPrimaryKey') = 1
)L
LEFT JOIN (
SELECT
OBJECT_NAME(c.OBJECT_ID) TableName ,c.name AS ColumnName ,t.name AS TypeName
FROM sys.columns AS c
JOIN sys.types AS t ON c.user_type_id=t.user_type_id
)R ON L.COLUMN_NAME = R.ColumnName AND L.TABLE_NAME = R.TableName
A: Give this a try:
SELECT
CONSTRAINT_CATALOG AS DataBaseName,
CONSTRAINT_SCHEMA AS SchemaName,
TABLE_NAME AS TableName,
CONSTRAINT_Name AS PrimaryKey
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
WHERE CONSTRAINT_TYPE = 'Primary Key' and Table_Name = 'YourTable'
A: I found this useful; it gives a list of tables with a comma-separated list of the columns, and then also a comma-separated list of which ones make up the primary key
SELECT T.TABLE_SCHEMA, T.TABLE_NAME,
STUFF((
SELECT ', ' + C.COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS C
WHERE C.TABLE_SCHEMA = T.TABLE_SCHEMA
AND T.TABLE_NAME = C.TABLE_NAME
FOR XML PATH ('')
), 1, 2, '') AS Columns,
STUFF((
SELECT ', ' + C.COLUMN_NAME
FROM INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE C
INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS TC
ON C.TABLE_SCHEMA = TC.TABLE_SCHEMA
AND C.TABLE_NAME = TC.TABLE_NAME
WHERE C.TABLE_SCHEMA = T.TABLE_SCHEMA
AND T.TABLE_NAME = C.TABLE_NAME
AND TC.CONSTRAINT_TYPE = 'PRIMARY KEY'
FOR XML PATH ('')
), 1, 2, '') AS [Key]
FROM INFORMATION_SCHEMA.TABLES T
ORDER BY T.TABLE_SCHEMA, T.TABLE_NAME
A: This version displays the schema, the table name and an ordered, comma-separated list of primary key columns. OBJECT_ID() does not work across linked servers, so we filter by the table name instead.
Without the REPLACE(Si1.Column_Name, '', '') it would show the xml opening and closing tags for Column_Name on the database I was testing on. I am not sure why the database required a replace for 'Column_Name' so if someone knows then please comment.
DECLARE @TableName VARCHAR(100) = '';
WITH Sysinfo
AS (SELECT Kcu.Table_Name
, Kcu.Table_Schema AS Schema_Name
, Kcu.Column_Name
, Kcu.Ordinal_Position
FROM [LinkServer].Information_Schema.Key_Column_Usage Kcu
JOIN [LinkServer].Information_Schema.Table_Constraints AS Tc ON Tc.Constraint_Name = Kcu.Constraint_Name
WHERE Tc.Constraint_Type = 'Primary Key')
SELECT Schema_Name
,Table_Name
, STUFF(
(
SELECT ', '
, REPLACE(Si1.Column_Name, '', '')
FROM Sysinfo Si1
WHERE Si1.Table_Name = Si2.Table_Name
ORDER BY Si1.Table_Name
, Si1.Ordinal_Position
FOR XML PATH('')
), 1, 2, '') AS Primary_Keys
FROM Sysinfo Si2
WHERE Table_Name = CASE
WHEN @TableName NOT IN( '', 'All')
THEN @TableName
ELSE Table_Name
END
GROUP BY Si2.Table_Name, Si2.Schema_Name;
And the same pattern using George's query:
DECLARE @TableName VARCHAR(100) = '';
WITH Sysinfo
AS (SELECT S.Name AS Schema_Name
, T.Name AS Table_Name
, Tc.Name AS Column_Name
, Ic.Key_Ordinal AS Ordinal_Position
FROM [LinkServer].Sys.Schemas S
JOIN [LinkServer].Sys.Tables T ON S.Schema_Id = T.Schema_Id
JOIN [LinkServer].Sys.Indexes I ON T.Object_Id = I.Object_Id
JOIN [LinkServer].Sys.Index_Columns Ic ON I.Object_Id = Ic.Object_Id
AND I.Index_Id = Ic.Index_Id
JOIN [LinkServer].Sys.Columns Tc ON Ic.Object_Id = Tc.Object_Id
AND Ic.Column_Id = Tc.Column_Id
WHERE I.Is_Primary_Key = 1)
SELECT Schema_Name
,Table_Name
, STUFF(
(
SELECT ', '
, REPLACE(Si1.Column_Name, '', '')
FROM Sysinfo Si1
WHERE Si1.Table_Name = Si2.Table_Name
ORDER BY Si1.Table_Name
, Si1.Ordinal_Position
FOR XML PATH('')
), 1, 2, '') AS Primary_Keys
FROM Sysinfo Si2
WHERE Table_Name = CASE
WHEN @TableName NOT IN('', 'All')
THEN @TableName
ELSE Table_Name
END
GROUP BY Si2.Table_Name, Si2.Schema_Name;
A:
sys.objects contains a row for each user-defined, schema-scoped object.
Constraints such as a primary key are themselves objects, with the table as the parent object.
Query sys.objects for the objects of the required type whose parent is your table:
declare @TableName nvarchar(50)='TblInvoice' -- your table name
declare @TypeOfKey nvarchar(50)='PK' -- For Primary key
SELECT Name FROM sys.objects
WHERE type = @TypeOfKey
AND parent_object_id = OBJECT_ID (@TableName)
A: May I suggest a simpler, more accurate answer to the original question below
SELECT
KEYS.table_schema, KEYS.table_name, KEYS.column_name, KEYS.ORDINAL_POSITION
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE keys
INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS CONS
ON cons.TABLE_SCHEMA = keys.TABLE_SCHEMA
AND cons.TABLE_NAME = keys.TABLE_NAME
AND cons.CONSTRAINT_NAME = keys.CONSTRAINT_NAME
WHERE cons.CONSTRAINT_TYPE = 'PRIMARY KEY'
Notes:
* Some of the answers above are missing a filter for just primary key columns!
* I'm using the query above in a CTE, joined to a larger column listing, to provide the metadata from a source to feed BIML generation of staging tables and SSIS code
A: This might be posted late, but hopefully it will help someone see the list of primary keys in SQL Server using this T-SQL query:
SELECT schema_name(t.schema_id) AS [schema_name], t.name AS TableName,
COL_NAME(ic.OBJECT_ID,ic.column_id) AS PrimaryKeyColumnName,
i.name AS PrimaryKeyConstraintName
FROM sys.tables t
INNER JOIN sys.indexes AS i on t.object_id=i.object_id
INNER JOIN sys.index_columns AS ic ON i.OBJECT_ID = ic.OBJECT_ID
AND i.index_id = ic.index_id
WHERE OBJECT_NAME(ic.OBJECT_ID) = 'YourTableNameHere'
You can also see the list of all foreign keys using this query, if you want:
SELECT
f.name as ForeignKeyConstraintName
,OBJECT_NAME(f.parent_object_id) AS ReferencingTableName
,COL_NAME(fc.parent_object_id, fc.parent_column_id) AS ReferencingColumnName
,OBJECT_NAME (f.referenced_object_id) AS ReferencedTableName
,COL_NAME(fc.referenced_object_id, fc.referenced_column_id) AS
ReferencedColumnName ,delete_referential_action_desc AS
DeleteReferentialActionDesc ,update_referential_action_desc AS
UpdateReferentialActionDesc
FROM sys.foreign_keys AS f
INNER JOIN sys.foreign_key_columns AS fc
ON f.object_id = fc.constraint_object_id
--WHERE OBJECT_NAME(f.parent_object_id) = 'YourTableNameHere'
--If you want to know referecing table details
WHERE OBJECT_NAME(f.referenced_object_id) = 'YourTableNameHere'
--If you want to know refereced table details
ORDER BY f.name
A: I got this from a friend; it is very effective if you are looking for all the tables' primary keys under a particular schema.
SELECT tc.constraint_name AS IndexName,tc.table_name AS TableName,tc.table_schema
AS SchemaName,kc.column_name AS COLUMN_NAME
FROM information_schema.table_constraints tc,information_schema.key_column_usage kc
WHERE tc.constraint_type = 'PRIMARY KEY' AND kc.table_name = tc.table_name AND kc.table_schema = tc.table_schema
AND kc.constraint_name = tc.constraint_name AND tc.table_schema='<SCHEMA_NAME>'
A: Probably the simplest solution :)
EXEC sp_pkeys YourTable
A: Here's my attempt at it for listing keys' data types as well based on any key constraints for primary or foreign keys.
SELECT
ksu.table_name as TableName,
ksu.column_name as ColumnName,
tc.constraint_type as ConstraintType,
c.Data_Type as DataType,
ksu.ordinal_position as OrdinalPosition
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE ksu
ON tc.Table_Name = ksu.Table_Name
and tc.Constraint_Name = ksu.Constraint_Name
JOIN INFORMATION_SCHEMA.COLUMNS c
ON c.Table_Name = ksu.Table_Name
and c.Column_Name = ksu.Column_Name
WHERE tc.Constraint_Type = 'Primary Key'
--or tc.Constraint_Type = 'Foreign Key'
GROUP BY
ksu.table_name, ksu.column_name, tc.constraint_type, c.Data_Type, ksu.ordinal_position
ORDER BY ksu.table_name, ksu.column_name, tc.constraint_type, c.Data_Type
A: If you need it in Oracle it is so simple.
SELECT Constraint_Name
FROM All_Constraints
WHERE Constraint_Type = 'P'
AND Owner = 'your schema here';
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/95967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "135"
}
|
Q: Do access modifiers affect reflection also? I always believed they did, but seeing some answers here makes me doubt...
Can I access private fields/properties/methods from outside a class through reflection?
A: Yes you can access private fields via reflection. This is how a lot of ORMs go about populating an object without going through your properties (which will invoke business logic you might not have intended to be run on an object load).
Access modifiers are not a form of security!
A: You do, however, need extra permissions for accessing private/protected/internal fields/properties/methods from outside a class through reflection.
A: Yes you can, you just specify the access modifier in the BindingFlags when you access them.
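For example, a minimal sketch (the Secret class and field name are illustrative):
using System.Reflection;

class Secret
{
    private int hidden = 42;
}

class Program
{
    static void Main()
    {
        var secret = new Secret();
        // BindingFlags.NonPublic | BindingFlags.Instance lets reflection see private members
        FieldInfo field = typeof(Secret).GetField("hidden",
            BindingFlags.NonPublic | BindingFlags.Instance);
        System.Console.WriteLine(field.GetValue(secret)); // prints 42
    }
}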
A: Yes you can, but you should really question why you're going to :)
There is actually only one case where it makes sense, and that is a unit test.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/95974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How to insert multiple records and get the identity value? I'm inserting multiple records into table A from another table B. Is there a way to get the identity value of each table A record and update the corresponding table B record without using a cursor?
Create Table A
(id int identity,
Fname nvarchar(50),
Lname nvarchar(50))
Create Table B
(Fname nvarchar(50),
Lname nvarchar(50),
NewId int)
Insert into A(fname, lname)
SELECT fname, lname
FROM B
I'm using MS SQL Server 2005.
A: Reading your question carefully, you just want to update table B based on the new identity values in table A.
After the insert is finished, just run an update...
UPDATE B
SET NewID = A.ID
FROM B INNER JOIN A
ON (B.FName = A.Fname AND B.LName = A.LName)
This assumes that the FName / LName combination can be used to key match the records between the tables. If this is not the case, you may need to add extra fields to ensure the records match correctly.
If you don't have an alternate key that allows you to match the records then it doesn't make sense at all, since the records in table B can't be distinguished from one another.
A: Use the OUTPUT clause, available from SQL Server 2005:
DECLARE @output TABLE (id int)
Insert into A (fname, lname)
OUTPUT inserted.ID INTO @output
SELECT fname, lname FROM B
select * from @output
now your table variable has the identity values of all the rows you insert.
A: As far as I understand it the issue you are having is that you want to INSERT into Table A, which has an identity column, and you want to preserve the identity from Table B which does not.
In order to do that you should just have to turn on identity insert on table A. This will allow you to define your ID's on insert and as long as they don't conflict, you should be fine. Then you can just do:
Insert into A(id, fname, lname) SELECT newid, fname, lname FROM B
Not sure what DB you are using but for sql server the command to turn on identity insert would be:
set identity_insert A on
A: I suggest using uniqueidentifier type instead of identity. I this case you can generate IDs before insertion:
update B set NewID = NEWID()
insert into A(fname,lname,id) select fname,lname,NewID from B
A: If you always want this behavior, you could put an AFTER INSERT trigger on TableA that will update table B.
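A rough sketch of that trigger (matching on fname/lname is an assumption, as in the UPDATE answer above):
CREATE TRIGGER trgA_AfterInsert ON A
AFTER INSERT
AS
BEGIN
    -- inserted holds the new rows of A, including their identity values
    UPDATE B
    SET NewId = i.id
    FROM B
    INNER JOIN inserted i
        ON B.Fname = i.Fname AND B.Lname = i.Lname;
END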
A: You can get the ids by joining on the row number. This is possible because, since it's an identity, it will just increment as you add items, which will be in the order that you are selecting them.
A: -- first create a table to show how it works
CREATE TABLE [dbo].[myTable]
(
[id] [INT] IDENTITY(1, 1) NOT NULL,
[text] [VARCHAR](10) NULL
)
ON [PRIMARY]
GO
-- table variable to keep the newly inserted ids
DECLARE @tblNewInserted TABLE
(
newids INT
)
--use the output clause in insert statement
INSERT INTO [dbo].[myTable]
output inserted.id
INTO @tblNewInserted
VALUES ('aa'),('bb'),('cc')
SELECT *
FROM @tblNewInserted
A: MBelly is right on the money - But then the trigger will always try and update table B even if that's not required (Because you're also inserting from table C?).
Darren is also correct here, you can't get multiple identities back as a result set. Your options are using a cursor and taking the identity for each row you insert, or using Darren's approach of storing the identity before and after. So long as you know the increment of the identity this should work, so long as you make sure the table is locked for all three events.
If it was me, and it wasn't time critical I'd go with a cursor.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/95988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "65"
}
|
Q: Multiple relations to the same model in Rails Let's say I have two models, Classes and People. A Class might have one or two People as instructors, and twenty people as students. So, I need to have multiple relationships between the models -- one where it's 1->M for instructors, and one where it's 1->M for students.
Edit: Instructors and Students must be the same; instructors could be students in other classes, and vice versa.
I'm sure this is quite easy, but Google isn't pulling up anything relevant and I'm just not finding it in my books.
A: In my case I have Asset and User models.
An Asset can be created by a user and can be assigned to a user,
and a User can create many assets and can have many assets assigned.
The solution to my problem was:
asset.rb
class Asset < ActiveRecord::Base
belongs_to :creator ,:class_name=>'User'
belongs_to :assigned_to, :class_name=>'User'
end
and
user.rb
class User < ActiveRecord::Base
has_many :created_assets, :foreign_key => 'creator_id', :class_name => 'Asset'
has_many :assigned_assets , :foreign_key => 'assigned_to_id', :class_name => 'Asset'
end
so your solution could be
class Course < ActiveRecord::Base
has_many :students ,:foreign_key => 'student_id', :class_name => 'Person'
has_many :teachers, :foreign_key => 'teacher_id', :class_name => 'Person'
end
and
class Person < ActiveRecord::Base
belongs_to :course_enrolled,:class_name=>'Course'
belongs_to :course_instructor,:class_name=>'Course'
end
A: There are many options here, but assuming instructors are always instructors and students are always students, you can use inheritance:
class Person < ActiveRecord::Base; end # btw, model names are singular in rails
class Student < Person; end
class Instructor < Person; end
then
class Course < ActiveRecord::Base # renamed here because class Class already exists in ruby
has_many :students
has_many :instructors
end
Just remember that for single table inheritance to work, you need a type column in the people table.
Using an association model might solve your issue:
class Course < ActiveRecord::Base
has_many :studentships
has_many :instructorships
has_many :students, :through => :studentships
has_many :instructors, :through => :instructorships
end
class Studentship < ActiveRecord::Base
belongs_to :course
belongs_to :student, :class_name => "Person", :foreign_key => "student_id"
end
class Instructorship < ActiveRecord::Base
belongs_to :course
belongs_to :instructor, :class_name => "Person", :foreign_key => "instructor_id"
end
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96003",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: What is the easiest way to know if a type param implements an interface in C# 2.0? For example, given a generic method, I'm looking for something like the hypothetical Implements call below:
void MyMethod<T>() {
    if ( typeof(T).Implements( IMyInterface ) )
    {
        //Do something
    }
    else
    {
        //Do something else
    }
}
Answers using C# 3.0 are also welcome, but first drop the .NET 2.0 ones please ;)
A: Type.IsAssignableFrom
if(typeof(IMyInterface).IsAssignableFrom(typeof(T)))
{
// something
}
else
{
// something else
}
A: I think
if (typeof(IMyInterface).IsAssignableFrom(typeof(T)))
should also work, but I don't see an advantage...
A: Ï've just tried using
if( typeof(T).Equals(typeof(IMyInterface) )
...
And also works, but your answer seems more robust and was what I was looking for. Thanks!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Get URL of ASP.Net Page in code-behind I have an ASP.Net page that will be hosted on a couple different servers, and I want to get the URL of the page (or even better: the site where the page is hosted) as a string for use in the code-behind. Any ideas?
A: Do you want the server name? Or the host name?
Request.Url.Host ala Stephen
Dns.GetHostName - Server name
Request.Url will have access to most everything you'll need to know about the page being requested.
A: Request.Url.GetLeftPart(UriPartial.Authority) + Request.FilePath + "?theme=blue";
that will give you the full path to the page you are sitting on. I added in the querystring.
A: I'm facing same problem and so far I found:
new Uri(Request.Url,Request.ApplicationPath)
or
Request.Url.GetLeftPart(UriPartial.Authority)+Request.ApplicationPath
A: I am using
Request.Url.GetLeftPart(UriPartial.Authority) +
VirtualPathUtility.ToAbsolute("~/")
A: Request.Url.Host
A: Use this:
Request.Url.AbsoluteUri
That will get you the full path (including http://...)
A: If you want to include any unique string on the end, similar to example.com?id=99999, then use the following
Dim rawUrl As String = Request.RawUrl.ToString()
A: Using a js file you can capture the following, that can be used in the codebehind as well:
<script type="text/javascript">
alert('Server: ' + window.location.hostname);
alert('Full path: ' + window.location.href);
alert('Virtual path: ' + window.location.pathname);
alert('HTTP path: ' +
window.location.href.replace(window.location.pathname, ''));
</script>
A: I use this in my code in a custom class. Comes in handy for sending out emails like no-reply@example.com
"no-reply@" + BaseSiteUrl
Works fine on any site.
// get a sites base urll ex: example.com
public static string BaseSiteUrl
{
get
{
HttpContext context = HttpContext.Current;
string baseUrl = context.Request.Url.Authority + context.Request.ApplicationPath.TrimEnd('/');
return baseUrl;
}
}
If you want to use it in codebehind get rid of context.
A: If you want only the scheme and authority part of the request (protocol, host and port) use
Request.Url.GetLeftPart(UriPartial.Authority)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "202"
}
|
Q: Flex best practices? I have the feeling that it is easy to find samples, tutorials and simple examples on Flex.
It seems harder to find tips and good practices based on real-life projects.
Any tips on how to:
* How to write maintainable ActionScript code
* How to ensure a clean separation of concerns. Has anybody used an MVC framework such as Cairngorm, PureMVC or EasyMVC on a real Flex project?
* How to fetch data from a server with BlazeDS/AMFPHP?
* How to reduce latency for the end-user?
* ...
A: I often work with Flex in my job, and I will be happy to help.. but your questions each deserve an article of their own :) I'll try some short answers.
Maintainable code: I think the same rules as any other OO language apply. Some Flex-specific rules I'm used to following: use strongly typed variables, and always consider dispatching events as the way for your UI components to talk to each other (a little more initial work, but very flexible and decoupled later). A sketch of that event-dispatching rule follows.
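A minimal sketch (the class, event type and symbol names are illustrative):
// A component dispatches a custom event instead of calling its parent directly
package events {
    import flash.events.Event;

    public class SymbolEvent extends Event {
        public static const SYMBOL_SELECTED:String = "symbolSelected";
        public var symbol:String;

        public function SymbolEvent(type:String, symbol:String) {
            super(type, true); // bubbles, so any ancestor can listen
            this.symbol = symbol;
        }

        override public function clone():Event {
            return new SymbolEvent(type, symbol);
        }
    }
}

// inside the dispatching component:
dispatchEvent(new SymbolEvent(SymbolEvent.SYMBOL_SELECTED, "AAPL"));

// in any interested component, with no hard coupling between the two:
addEventListener(SymbolEvent.SYMBOL_SELECTED, onSymbolSelected);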
Frameworks: I've looked at them and read the documentation.. very nice, but I still feel that their complexity is not balanced by the benefits they provide. Anyway, I'd be happy to change my mind on this point..
Talking with server: Right now I'm using BlazeDS, it works very well.. there are many tutorials on the subject out there, if you find any trouble setting up it I would be happy to help.
Latency: Do you mean in client/server communications? If so, you should explore the various types of channels BlazeDS implements.. pull-only, two-way HTTP polling, near real-time over HTTP (Comet).. if you need more, LiveCycle Data Services ES, the commercial implementation from which BlazeDS was born, among other things offers another protocol called RTMP; it isn't HTTP-tunnelled so there can be problems with firewalls and proxies, but it offers better performance (there is a free closed-source version of LCDS). I use the standard HTTP channels in intranet environments, and have found no real performance problems even with large datasets.
Well.. quite a lot of stuff; I can't be more specific on each of these points right now, so ask if you need more :)
A: Here are a couple of great resources to do with Flex/AS3 best practices and standards:
Flex SDK coding conventions and best practices
Flex best practices – Part 1: Setting up your Flex project
The first one I found especially useful and I try to make sure any team I work with have all read it
A: I have found the MVC framework RIAWave link to be absolutely incredible. It is super lightweight and easy to use. I found Cairngorm and PureMVC to have a pretty steep learning curve and they both feel a bit too bulky for me. RIAWave stays out of the way and just gives you the MVC basics to work with.
AMFPHP on the backend is very nice as well. AMFPHP also has an apache module that will take care of serializing/unserializing the sent and received data all in C which is blazing fast.
If latency is a worry, you will want to make sure you get a good webhost or even deploy to multiple data centers so that your users are never far from a server. Sounds like a bit early to be worrying about that though.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: UI design alternatives with Groovy/JRuby/Jython or other JVM languages? For a developer in the Java eco-system, there is a handful of choices when it comes to UI design. The best known are:
* Swing (preferred when used with NetBeans and its GUI builder)
* Eclipse's SWT (mostly preferred for Eclipse plug-ins)
Now, are there any frameworks or design alternatives to this which target JRuby / Groovy / Jython or other "dynamic" JVM languages ?
Some UI frameworks are layers over Swing or SWT, for example, a framework could read a description of a Screen in XML and instantiate the corresponding Swing components.
If you know a framework like that but which targets JVM "dynamic" languages, I'd like to see them in the answers as well.
A: Not exactly UI design, but you could try Griffon.
A: Clojure has a few GUI libraries / frameworks that look priomising:
seesaw wraps Swing in a very concise DSL, which could certainly be used to declaratively create GUI interfaces:
(defn -main [& args]
(invoke-later
(-> (frame :title "Hello",
:content "Hello, Seesaw",
:on-close :exit)
pack!
show!)))
Incanter provides quite a lot of graphing and visualisation functionality (wrapping JFreeChart among other things). Not quite a general GUI library, but very useful if you're focusing on stats:
;; show a histogram of 1000 samples from a normal distribution
(view (histogram (sample-normal 1000)))
There is also some neat example code popping up for wrapping JavaFX 2.0 in Clojure - again this is more like a declarative DSL:
(defn -start [app stage]
(eval
(fx Stage :visible true :width 300 :height 200 :title "hello world"
:scene (fx Scene
(fx BorderPane :left (fx Text "hello")
:right (fx Text "Right")
:top (fx Text "top")
:bottom (fx Text "Bottom")
:center (fx Text "In the middle!"))))))
A: I think the two most mature frameworks for Jruby are Monkeybars (http://monkeybars.rubyforge.org/) and Limelight (http://limelight.8thlight.com/).
Monkeybars is a full rubyesque MVC implementation which can be used in conjunction with a Swing GUI builder, whereas Limelight goes for a minimal code / maximum effect ratio like Shoes does.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: What's safecall? I'm working on the creation of an ActiveX EXE using VB6, and the only example I got is all written in Delphi.
Reading the example code, I noticed there are some functions whose signatures are followed by the safecall keyword. Here's an example:
function AddSymbol(ASymbol: OleVariant): WordBool; safecall;
What is the purpose of this keyword?
A: In COM, every method is a function that returns an HRESULT:
IThingy = interface
['{357D8D61-0504-446F-BE13-4A3BBE699B05}']
function AddSymbol(ASymbol: OleVariant; out RetValue: WordBool): HRESULT; stdcall;
end;
This is an absolute rule in COM:
* there are no exceptions in COM
* everything returns an HRESULT
* a negative HRESULT indicates a failure
* in higher-level languages, failures are mapped to exceptions
It was the intention of the COM designers that higher level languages would automatically translate Failed methods into an exception.
So in your own language, the COM invocation would be represented without the HRESULT. E.g.:
* Delphi-like: function AddSymbol(ASymbol: OleVariant): WordBool;
* C#-like: WordBool AddSymbol(OleVariant ASymbol);
In Delphi you can choose to use the raw function signature:
IThingy = interface
['{357D8D61-0504-446F-BE13-4A3BBE699B05}']
function AddSymbol(ASymbol: OleVariant; out RetValue: WordBool): HRESULT; stdcall;
end;
And handle the raising of exceptions yourself:
bAdded: WordBool;
thingy: IThingy;
hr: HRESULT;
hr := thingy.AddSymbol('Seven', {out}bAdded);
if Failed(hr) then
OleError(hr);
or the shorter equivalent:
bAdded: WordBool;
thingy: IThingy;
hr: HRESULT;
hr := thingy.AddSymbol('Seven', {out}bAdded);
OleCheck(hr);
or the shorter equivalent:
bAdded: WordBool;
thingy: IThingy;
OleCheck(thingy.AddSymbol('Seven'), {out}bAdded);
COM didn't intend for you to deal with HRESULTs
But you can ask Delphi to hide that plumbing away from you, so you can get on with the programming:
IThingy = interface
['{357D8D61-0504-446F-BE13-4A3BBE699B05}']
function AddSymbol(ASymbol: OleVariant): WordBool; safecall;
end;
Behind the scenes, the compiler will still check the return HRESULT, and throw an EOleSysError exception if the HRESULT indicated a failure (i.e. was negative). The compiler-generated safecall version is functionally equivalent to:
function AddSymbol(ASymbol: OleVariant): WordBool; safecall;
var
hr: HRESULT;
begin
hr := AddSymbol(ASymbol, {out}Result);
OleCheck(hr);
end;
But it frees you to simply call:
bAdded: WordBool;
thingy: IThingy;
bAdded := thingy.AddSymbol('Seven');
tl;dr: You can use either:
function AddSymbol(ASymbol: OleVariant; out RetValue: WordBool): HRESULT; stdcall;
function AddSymbol(ASymbol: OleVariant): WordBool; safecall;
But the former requires you to handle the HRESULTs every time.
Bonus Chatter
You almost never want to handle the HRESULTs yourself; it clutters up the program with noise that adds nothing. But sometimes you might want to check the HRESULT yourself (e.g. you want to handle a failure that isn't very exceptional). Newer versions of Delphi have started including translated Windows header interfaces that are declared both ways:
IThingy = interface
['{357D8D61-0504-446F-BE13-4A3BBE699B05}']
function AddSymbol(ASymbol: OleVariant; out RetValue: WordBool): HRESULT; stdcall;
end;
IThingySC = interface
['{357D8D61-0504-446F-BE13-4A3BBE699B05}']
function AddSymbol(ASymbol: OleVariant): WordBool; safecall;
end;
or from the RTL source:
ITransaction = interface(IUnknown)
['{0FB15084-AF41-11CE-BD2B-204C4F4F5020}']
function Commit(fRetaining: BOOL; grfTC: UINT; grfRM: UINT): HResult; stdcall;
function Abort(pboidReason: PBOID; fRetaining: BOOL; fAsync: BOOL): HResult; stdcall;
function GetTransactionInfo(out pinfo: XACTTRANSINFO): HResult; stdcall;
end;
{ Safecall Version }
ITransactionSC = interface(IUnknown)
['{0FB15084-AF41-11CE-BD2B-204C4F4F5020}']
procedure Commit(fRetaining: BOOL; grfTC: UINT; grfRM: UINT); safecall;
procedure Abort(pboidReason: PBOID; fRetaining: BOOL; fAsync: BOOL); safecall;
procedure GetTransactionInfo(out pinfo: XACTTRANSINFO); safecall;
end;
The SC suffix stands for safecall. Both interfaces are equivalent, and you can choose which to declare your COM variable as depending on your desire:
//thingy: IThingy;
thingy: IThingySC;
You can even cast between them:
thingy: IThingySC;
bAdded: WordBool;
thingy := CreateOleObject('Supercool.Thingy') as IThingySC;
if Failed(IThingy(thingy).AddSymbol('Seven', {out}bAdded)) then
begin
   //Couldn't add Seven? No Sixty-nine for you
   thingy.SubtractSymbol('Sixty-nine');
end;
Extra Bonus Chatter - C#
C# by default does the equivalent of Delphi safecall, except in C#:
* you have to opt out of safecall mapping
* rather than opt in
In C# you would declare your COM interface as:
[ComImport]
[Guid("{357D8D61-0504-446F-BE13-4A3BBE699B05}")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
public interface IThingy
{
WordBool AddSymbol(OleVariant ASymbol);
WordBool SubtractSymbol(OleVariant ASymbol);
}
You'll notice that the COM HRESULT is hidden from you. The C# compiler, like the Delphi compiler, will automatically check the returned HRESULT and throw an exception for you.
And in C#, as in Delphi, you can choose to handle the HRESULTs yourself:
[ComImport]
[Guid("{357D8D61-0504-446F-BE13-4A3BBE699B05}")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
public interface IThingy
{
[PreserveSig]
HRESULT AddSymbol(OleVariant ASymbol, out WordBool RetValue);
WordBool SubtractSymbol(OleVariant ASymbol);
}
The [PreserveSig] tells the compiler to preserve the method signature exactly as is:
Indicates whether unmanaged methods that have HRESULT or retval return values are directly translated or whether HRESULT or retval return values are automatically converted to exceptions.
A: What Francois said and if it wasn't for safecall your COM method call would have looked like below and you would have to do your own error checking instead of getting exceptions.
function AddSymbol(ASymbol: OleVariant; out Result: WordBool): HResult; stdcall;
A: Safecall passes parameters from right to left, instead of the pascal or register (default) from left to right
With safecall, the procedure or function removes parameters from the stack upon returning (like pascal, but not like cdecl where it's up to the caller)
Safecall implements exception 'firewalls'; esp on Win32, this implements interprocess COM error notification. It would otherwise be identical to stdcall (the other calling convention used with the win api)
A: Additionally, the exception firewalls work by calling SetErrorInfo() with an object that supports IErrorInfo, so that the caller can get extended information about the exception. This is done by the TObject.SafeCallException override in both TComObject and TAutoIntfObject. Both of these types also implement ISupportErrorInfo to mark this fact.
In the event of an exception, the safecall method's caller can query for ISupportErrorInfo, then query that for the interface whose method resulted in a failure HRESULT (high bit set), and if that returns S_OK, GetErrorInfo() can get the exception info (description, help, etc., in the form of the IErrorInfo implementation that was passed to SetErrorInfo() by the Delphi RTL in the SafeCallException overrides).
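A rough sketch of reading that extended information after a failed call, assuming the standard ActiveX unit (error handling trimmed; Delphi's extended syntax lets us discard the GetDescription result):
uses ActiveX;

var
  ErrInfo: IErrorInfo;
  Description: WideString;
begin
  // After a failure HRESULT from a safecall-style method:
  if GetErrorInfo(0, ErrInfo) = S_OK then
  begin
    ErrInfo.GetDescription(Description);
    // Description now holds the message the server set via SetErrorInfo()
  end;
end;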
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
}
|
Q: Is it possible to convert a C# double[,,] array to double[] without making a copy? I have huge 3D arrays of numbers in my .NET application. I need to convert them to a 1D array to pass them to a COM library. Is there a way to convert the array without making a copy of all the data?
I can do the conversion like this, but then I use twice the ammount of memory which is an issue in my application:
double[] result = new double[input.GetLength(0) * input.GetLength(1) * input.GetLength(2)];
for (i = 0; i < input.GetLength(0); i++)
for (j = 0; j < input.GetLength(1); j++)
for (k = 0; k < input.GetLength(2); k++)
result[i * input.GetLength(1) * input.GetLength(2) + j * input.GetLength(2) + k] = input[i,j,k];
return result;
A: I don't believe the way C# stores that data in memory would make it feasible the same way a simple cast in C would. Why not use a 1d array to begin with and perhaps make a class for the type so you can access it in your program as if it were a 3d array?
A: Unfortunately, safe C# code can't reinterpret a multidimensional array in place the way a simple cast can in closer-to-the-metal languages like C. So, no. There's no way to convert double[,,] to double[] without copying the data.
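As an aside, if the copy itself is acceptable, it can at least be done as one bulk operation instead of element by element; a sketch using the same input array as the question:
double[] result = new double[input.GetLength(0) * input.GetLength(1) * input.GetLength(2)];
// Buffer.BlockCopy works in bytes and accepts multidimensional arrays of primitives
Buffer.BlockCopy(input, 0, result, 0, result.Length * sizeof(double));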
A: Consider abstracting access to the data with a Proxy (similar to iterators/smart-pointers in C++). Unfortunately, syntax isn't as clean as C++ as operator() not available to overload and operator[] is single-arg, but still close.
Of course, this extra level of abstraction adds complexity and work of its own, but it would allow you to make minimal changes to existing code that uses double[,,] objects, while allowing you to use a single double[] array for both interop and your in-C# computation.
class Matrix3
{
// referece-to-element object
public struct Matrix3Elem{
private Matrix3Impl impl;
private uint dim0, dim1, dim2;
// other constructors
Matrix3Elem(Matrix3Impl impl_, uint dim0_, uint dim1_, uint dim2_) {
impl = impl_; dim0 = dim0_; dim1 = dim1_; dim2 = dim2_;
}
public double Value{
get { return impl.GetAt(dim0,dim1,dim2); }
set { impl.SetAt(dim0, dim1, dim2, value); }
}
}
// implementation object
internal class Matrix3Impl
{
private double[] data;
uint dsize0, dsize1, dsize2; // dimension sizes
// .. Resize()
public double GetAt(uint dim0, uint dim1, uint dim2) {
// .. check bounds
return data[ (dim2 * dsize1 + dim1) * dsize0 + dim0 ];
}
public void SetAt(uint dim0, uint dim1, uint dim2, double value) {
// .. check bounds
data[ (dim2 * dsize1 + dim1) * dsize0 + dim0 ] = value;
}
}
private Matrix3Impl impl;
public Matrix3Elem Elem(uint dim0, uint dim1, uint dim2){
    return new Matrix3Elem(impl, dim0, dim1, dim2);
}
// .. Resize
// .. GetLength0(), GetLength1(), GetLength2()
}
And then using this type to both read and write -- 'foo[1,2,3]' is now written as 'foo.Elem(1,2,3).Value', in both reading values and writing values, on left side of assignment and value expressions.
void normalize(Matrix3 m){
    double s = 0;
    for (uint i = 0; i < m.GetLength0(); i++)
        for (uint j = 0; j < m.GetLength1(); j++)
            for (uint k = 0; k < m.GetLength2(); k++)
            {
                s += m.Elem(i,j,k).Value;
            }
    for (uint i = 0; i < m.GetLength0(); i++)
        for (uint j = 0; j < m.GetLength1(); j++)
            for (uint k = 0; k < m.GetLength2(); k++)
            {
                m.Elem(i,j,k).Value /= s;
            }
}
Again, added development costs, but shares data, removing copying overhead and copying related developtment costs. It's a tradeoff.
A: Without knowing details of your COM library, I'd look into creating a facade class in .Net and exposing it to COM, if necessary.
Your facade would take a double[,,] and have an indexer that will map from [] to [,,].
Edit: I agree about the points made in the comments, Lorens suggestion is better.
A: As a workaround you could make a class which maintains the array in one-dimensional form (maybe even in a closer-to-bare-metal form so you can pass it easily to the COM library?) and then define an indexer on this class (C#'s equivalent of overloading operator[]) to make it usable as a multidimensional array in your C# code. A sketch of the idea follows.
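A minimal sketch (the class and member names are illustrative):
class FlatArray3D
{
    private readonly double[] data;
    private readonly int n0, n1, n2;

    public FlatArray3D(int n0, int n1, int n2)
    {
        this.n0 = n0; this.n1 = n1; this.n2 = n2;
        data = new double[n0 * n1 * n2];
    }

    // A multi-argument indexer gives 3D-style access over the flat storage
    public double this[int i, int j, int k]
    {
        get { return data[(i * n1 + j) * n2 + k]; }
        set { data[(i * n1 + j) * n2 + k] = value; }
    }

    // The flat array can be handed to the COM layer without any copying
    public double[] Raw { get { return data; } }
}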
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: What are the relative advantages of XMLEncoder and XStream? Suppose I want to store many small configuration objects in XML, and I don't care too much about the format. The XMLDecoder class built into the JDK would work, and from what I hear, XStream works in a similar way.
What are the advantages to each library?
A: I really like the XStream library. It does a really good job of outputting fairly simple XML as a result of a provided Java object. It works great for reproducing the object back from the XML as well. And, one of our 3rd party libraries already depended on it anyway.
* We chose to use it because we wanted our XML to be human readable. Using the alias function makes it much nicer.
* You can extend the library if you want some portion of an object to deserialize in a nicer fashion. We did this in one case so the file would have a set of degrees, minutes, and seconds for a latitude and longitude, instead of two doubles.
The two minute tutorial sums up the basic usage, but in the interest of keeping the information in one spot, I'll try to sum it up here, just a little shorter.
// define your classes
public class Person {
private String firstname;
private PhoneNumber phone;
// ... constructors and methods
}
public class PhoneNumber {
private int code;
private String number;
// ... constructors and methods
}
Then use the library for write out the xml.
// initialize the library
XStream xstream = new XStream();
xstream.alias("person", Person.class); // elementName, Class
xstream.alias("phone", PhoneNumber.class);
// make your objects
Person joe = new Person("Joe");
joe.setPhone(new PhoneNumber(123, "1234-456"));
// convert xml
String xml = xstream.toXML(joe);
You output will look like this:
<person>
<firstname>Joe</firstname>
<phone>
<code>123</code>
<number>1234-456</number>
</phone>
</person>
To go back:
Person newJoe = (Person)xstream.fromXML(xml);
The XMLEncoder is provided for Java bean serialization. The last time I used it, the file looked fairly nasty. If you really don't care what the file looks like, it could work for you, and you get to avoid a 3rd party dependency, which is also nice. I'd expect making the serialization prettier to be more of a challenge with XMLEncoder as well.
XStream outputs the full class name if you don't alias the name. If the Person class above had package example; the xml would have "example.Person" instead of just "person".
A: Another suggestion: consider using JAXB (http://jaxb.dev.java.net). If you are using JDK 1.6, it comes bundled, check out "javax.xml.bind" for details, so no need for additional external jars.
JAXB is rather fast. I like XStream too, but it's a bit slower. Also, XMLEncoder is a bit of a toy (compared to the other options)... but if it works, there's no harm in using it.
Also: one benefit of JAXB is that you can bind partial documents (sub-trees) with it; there is no need to create objects for the whole file. For this you need to use StAX (XMLStreamReader) to point to the root element of the sub-tree, then bind. No need to use SAX, even for most large files, as long as they can be processed chunk by chunk. A sketch of the idea follows.
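A rough sketch of the partial-binding idea (the "person" element, file name and Person class are assumptions):
import java.io.FileInputStream;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamReader;

XMLInputFactory xif = XMLInputFactory.newInstance();
XMLStreamReader xsr = xif.createXMLStreamReader(new FileInputStream("big.xml"));
// Advance the cursor to the root element of the sub-tree we care about
while (xsr.hasNext()
        && !(xsr.isStartElement() && "person".equals(xsr.getLocalName()))) {
    xsr.next();
}
Unmarshaller um = JAXBContext.newInstance(Person.class).createUnmarshaller();
Person p = (Person) um.unmarshal(xsr); // binds just this sub-tree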
A: If you are planning on storing all those configuration objects in a single file, and that file will be quite large, both the options you've outlined above could be quite memory intensive, as they both require the entire file to be read into memory to be deserialized.
If memory usage is a concern (the file containing the XML will be very large), I recommend SAX.
If memory usage is not a concern (the file containing the XML will not be very large), I'd use whatever is included with the default JRE (in this case XMLDecoder) just to remove 3rd party dependencies.
A: I'd also prefer XStream as it is really easy to use and to extend. You can quickly start if you're going with the default setup. If you need to customize the behavior it has a very clean API and a lot of extension points, so you have really fine grained control over the things you want to tweak without interfering with other parts of the marshalling process.
As the XML that is created by XStream looks nice, manual editing is also simple. If the output doesn't fulfill your needs and the long list of available Converters doesn't contain the one you need, it's fairly simple to write your own.
A big plus is also the good documentation on their homepage.
A: I always find XStream very tempting, because it's so easy to get going. However, invariably I end up replacing it. It's really quite buggy, and its collection handling could use a lot of work.
As a result, I usually switch to JAXB. It's an awful lot more robust, it's pretty much bug-free, and it's more flexible than XStream.
A: An addition to @jay's answer, with an example:
Code:
PortfolioAlternateIdentifier identifier = new PortfolioAlternateIdentifier();
identifier.setEffectiveDate(new Date());
identifier.setSchemeCode("AAA");
identifier.setIdentifier("123456");
The output using XStream:
<PortfolioAlternateIdentifier>
<effectiveDate>2014-05-02 20:14:15.961 IST</effectiveDate>
<schemeCode>AAA</schemeCode>
<identifier>123456</identifier>
</PortfolioAlternateIdentifier>
The output using XMLEncoder:
<?xml version="1.0" encoding="UTF-8"?>
<java version="1.6.0_38" class="java.beans.XMLDecoder">
<object class="PortfolioAlternateIdentifier">
<void property="effectiveDate">
<object class="java.util.Date">
<long>1399041855961</long>
</object>
</void>
<void property="identifier">
<string>123456</string>
</void>
<void property="schemeCode">
<string>AAA</string>
</void>
</object>
</java>
A: Java also has a utility class aimed at storing the key-value pairs typical of configurations. It is old-style but very simple and handy: the java.util.Properties class, a Map with serialization options. This might be all you need unless you are storing entire objects.
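A minimal sketch (the file name and keys are illustrative; the calls throw IOException):
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.Properties;

Properties props = new Properties();
props.setProperty("host", "localhost");
props.setProperty("port", "8080");

// storeToXML() writes an XML form; store() writes the classic .properties form
FileOutputStream out = new FileOutputStream("config.xml");
props.storeToXML(out, "example configuration");
out.close();

Properties loaded = new Properties();
FileInputStream in = new FileInputStream("config.xml");
loaded.loadFromXML(in);
in.close();
System.out.println(loaded.getProperty("port")); // prints 8080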
A: You should avoid XMLEncoder/XMLDecoder like the plague if you're going to be persisting a non-trivial number of objects or your system needs to be multithreaded. See http://matthew.mceachen.us/blog/do-not-want-xmlencoder-129.html for the grisly details.
If you must use XML, XStream is great. But ask yourself if you really need to use XML. Here's a serialization benchmark project that might turn you on to better solutions:
http://code.google.com/p/thrift-protobuf-compare/wiki/Benchmarking
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Automated unit testing with JavaScript I'm trying to incorporate some JavaScript unit testing into my automated build process. Currently JSUnit works well with JUnit, but it seems to be abandonware and lacks good support for Ajax, debugging, and timeouts.
Has anyone had any luck automating (with Ant) a unit testing library such as YUI test, jQuery's QUnit, or jQUnit?
Note: I use a custom built Ajax library, so the problem with Dojo's DOH is that it requires you to use their own Ajax function calls and event handlers to work with any Ajax unit testing.
A: I recently read an article by Bruno using JSUnit and creating a JsMock framework on top of that... very interesting. I'm thinking of using his work to start unit testing my JavaScript code.
Mock JavaScript or How to unit test JavaScript outside the Browser environment
A: I just got Hudson CI to run JasmineBDD (headless), at least for pure JavaScript unit testing.
(Hudson running Java via shell, running Envjs, running JasmineBDD.)
I haven't got it to play nice with a big library yet, though, like prototype.
A: I'm just about to start doing JavaScript TDD on a new project I am working on. My current plan is to use QUnit to do the unit testing. While developing the tests can be run by simply refreshing the test page in a browser.
For continuous integration (and ensuring the tests run in all browsers), I will use Selenium to automatically load the test harness in each browser, and read the result. These tests will be run on every checkin to source control.
I am also going to use JSCoverage to get code coverage analysis of the tests. This will also be automated with Selenium.
I'm currently in the middle of setting this up. I'll update this answer with more exact details once I have the setup hammered out.
Testing tools:
*
*qunit
*JSCoverage
*Selenium
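To make the Selenium step above concrete, here's a hedged sketch using the Selenium RC Java client; it assumes the QUnit page publishes its summary in an element with id qunit-testresult, which is true of recent QUnit builds but worth verifying against the markup your version generates:
import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

public class QUnitSeleniumCheck {
    public static void main(String[] args) {
        Selenium selenium = new DefaultSelenium("localhost", 4444,
                "*firefox", "http://localhost:8080/");
        selenium.start();
        try {
            selenium.open("/tests/qunit-tests.html"); // hypothetical test page
            // Wait for QUnit to finish and publish its result banner
            selenium.waitForCondition(
                "selenium.browserbot.getCurrentWindow().document"
                    + ".getElementById('qunit-testresult') != null", "30000");
            String summary = selenium.getText("id=qunit-testresult");
            System.out.println(summary); // e.g. "... 25 assertions of 25 passed, 0 failed."
            if (!summary.contains("0 failed")) {
                throw new RuntimeException("QUnit failures: " + summary);
            }
        } finally {
            selenium.stop();
        }
    }
}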
A: There are many JavaScript unit test frameworks out there (JSUnit, scriptaculous, ...), but JSUnit is the only one I know that may be used with an automated build.
If you are doing 'true' unit testing, you should not need AJAX support. For example, if you are using an RPC Ajax framework such as DWR, you can easily write a mock function:
function mockFunction(someArg, callback) {
var result = ...; // Some treatments
setTimeout(
function() { callback(result); },
300 // Some fake latency
);
}
And yes, JSUnit does handle timeouts: Simulating Time in JSUnit Tests
A: Look into YUITest
A: I am in agreement that JSUnit is kind of dying on the vine. We just finished up replacing it with YUI Test.
Similar to the example using qUnit, we are running the tests using Selenium. We are running this test independently from our other Selenium tests simply because it does not have the dependencies that the normal UI regression tests have (e.g. deploying the application to a server).
To start out, we have a base JavaScript file that is included in all of our test HTML files. This handles setting up the YUI instance, the test runner, the YUI.Test.Suite object as well as the Test.Case. It has methods that can be accessed via Selenium to run the test suite, check to see if the test runner is still running (results are not available until after it's done), and get the test results (we chose JSON format):
var yui_instance; // The YUI instance
var runner; // The YAHOO.Test.Runner
var Assert; // An instance of YAHOO.Test.Assert to save coding
var testSuite; // The YAHOO.Test.Suite that will get run.
/**
* Sets the required value for the name property on the given template, creates
* and returns a new YUI Test.Case object.
*
* @param template the template object containing all of the tests
*/
function setupTestCase(template) {
template.name = "jsTestCase";
var test_case = new yui_instance.Test.Case(template);
return test_case;
}
/**
* Sets up the test suite with a single test case using the given
* template.
*
* @param template the template object containing all of the tests
*/
function setupTestSuite(template) {
var test_case = setupTestCase(template);
testSuite = new yui_instance.Test.Suite("Bond JS Test Suite");
testSuite.add(test_case);
}
/**
* Runs the YAHOO.Test.Suite
*/
function runTestSuite() {
runner = yui_instance.Test.Runner;
Assert = yui_instance.Assert;
runner.clear();
runner.add(testSuite);
runner.run();
}
/**
* Used to see if the YAHOO.Test.Runner is still running. The
* test results are not available until it is done running.
*/
function isRunning() {
return runner.isRunning();
}
/**
* Gets the results from the YAHOO.Test.Runner
*/
function getTestResults() {
return runner.getResults(yui_instance.Test.Format.JSON);
}
As for the Selenium side of things, we used a parameterized test. We run our tests in both Internet Explorer and Firefox in the data method, parsing the test results into a list of Object arrays with each array containing the browser name, the test file name, the test name, the result (pass, fail or ignore) and the message.
The actual test just asserts the test result. If it is not equal to "pass" then it fails the test with the message returned from the YUI Test result.
@Parameters
public static List<Object[]> data() throws Exception {
yui_test_codebase = "file:///c://myapppath/yui/tests";
List<Object[]> testResults = new ArrayList<Object[]>();
pageNames = new ArrayList<String>();
pageNames.add("yuiTest1.html");
pageNames.add("yuiTest2.html");
testResults.addAll(runJSTestsInBrowser(IE_NOPROXY));
testResults.addAll(runJSTestsInBrowser(FIREFOX));
return testResults;
}
/**
* Creates a Selenium instance for the given browser, and runs each
* YUI Test page.
*
* @param aBrowser
* @return
*/
private static List<Object[]> runJSTestsInBrowser(Browser aBrowser) {
String yui_test_codebase = "file:///c://myapppath/yui/tests/";
String browser_bot = "this.browserbot.getCurrentWindow()";
List<Object[]> testResults = new ArrayList<Object[]>();
selenium = new DefaultSelenium(APPLICATION_SERVER, REMOTE_CONTROL_PORT, aBrowser.getCommand(), yui_test_codebase);
try {
selenium.start();
/*
* Run the test here
*/
for (String page_name : pageNames) {
selenium.open(yui_test_codebase + page_name);
//Wait for the YAHOO instance to be available
selenium.waitForCondition(browser_bot + ".yui_instance != undefined", "10000");
selenium.getEval("dom=runYUITestSuite(" + browser_bot + ")");
// Output from the tests is not available until
// the YAHOO.Test.Runner is done running the suite
selenium.waitForCondition("!" + browser_bot + ".isRunning()", "10000");
String output = selenium.getEval("dom=" + browser_bot + ".getTestResults()");
JSONObject results = JSONObject.fromObject(output);
JSONObject test_case = results.getJSONObject("jsTestCase");
JSONArray testCasePropertyNames = test_case.names();
Iterator itr = testCasePropertyNames.iterator();
/*
* From the output, build an array with the following:
* Test file
* Test name
* status (result)
* message
*/
while(itr.hasNext()) {
String name = (String)itr.next();
if(name.startsWith("test")) {
JSONObject testResult = test_case.getJSONObject(name);
String test_name = testResult.getString("name");
String test_result = testResult.getString("result");
String test_message = testResult.getString("message");
Object[] testResultObject = {aBrowser.getCommand(), page_name, test_name, test_result, test_message};
testResults.add(testResultObject);
}
}
}
} finally {
// If an exception is thrown, this will guarantee that the selenium instance
// is shut down properly
selenium.stop();
selenium = null;
}
return testResults;
}
/**
* Inspects each test result and fails if the testResult was not "pass"
*/
@Test
public void inspectTestResults() {
if(!this.testResult.equalsIgnoreCase("pass")) {
fail(String.format(MESSAGE_FORMAT, this.browser, this.pageName, this.testName, this.message));
}
}
A: I'm a big fan of js-test-driver.
It works well in a CI environment and is able to capture actual browsers for cross-browser testing.
A: There's a new project that lets you run QUnit tests in a Java environment (like Ant) so you can fully integrate your client-side test suite with your other unit tests.
http://qunit-test-runner.googlecode.com
I've used it to unit test jQuery plugins, objx code, custom OO JavaScript and it works for everything without modification.
A: The project I'm working on uses Js-Test-Driver hosting Jasmine on Chrome 10 with Jasmine-JSTD-Adapter including making use of code coverage tests included in JS-Test-Driver.
While there are some problems each time we change or update browsers in the CI environment, the Jasmine tests are running pretty smoothly, with only minor issues around asynchronous tests. As far as I'm aware these can be worked around using the Jasmine Clock, but I haven't had a chance to patch them yet.
A: I've published a little library for verifying browser-dependent JavaScript tests without having to use a browser. It is a Node.js module that uses zombie.js to load the test page and inspect the results. I've written about it on my blog. Here is what the automation looks like:
var browsertest = require('../browsertest.js').browsertest;
describe('browser tests', function () {
it('should properly report the result of a mocha test page', function (done) {
browsertest({
url: "file:///home/liam/work/browser-js-testing/tests.html",
callback: function() {
done();
}
});
});
});
A: I looked at your question date, and back then there were only a few good JavaScript testing libraries and frameworks.
Today you can find many more, with different focuses like TDD, BDD, and assertions, and with or without runner support.
There are many players in this game, like Mocha, Chai, QUnit, Jasmine, etc...
You can find some more information in this blog about JavaScript, mobile, and web testing...
A: Another JavaScript testing framework that can be run with Ant is CrossCheck. There's an example of running CrossCheck via Ant in the build file for the project.
CrossCheck attempts, with limited success, to emulate a browser, including mock-style implementations of XMLHttpRequest and timeout/interval.
It does not currently handle loading JavaScript from a web page, though. You have to specify the JavaScript files that you want to load and test. If you keep all of your JavaScript code separated from your HTML, it might work for you.
A: I've written an Ant task which uses PhantomJS, a headless WebKit browser, to run QUnit HTML test files within an Ant build process. It can also fail the build if any tests fail.
https://github.com/philmander/ant-jstestrunner
A: This is a good evaluation of several testing tools.
JavaScript unit test tools for TDD
I personally prefer
https://code.google.com/p/js-test-driver/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "62"
}
|
Q: Developing UI in JavaScript using TDD Principles I've had a lot of trouble trying to come up with the best way to properly follow TDD principles while developing UI in JavaScript. What's the best way to go about this?
Is it best to separate the visual from the functional? Do you develop the visual elements first, and then write tests and then code for functionality?
A: I've never successfully TDDed UI code. The closest we came was indeed to separate UI code as much as possible from the application logic. This is one reason why the model-view-controller pattern is useful - the model and controller can be TDDed without much trouble and without getting too complicated.
In my experience, the view was always left for our user-acceptance tests (we wrote web applications and our UATs used Java's HttpUnit). However, at this level it's really an integration test, without the test-in-isolation property we desire with TDD. Due to this setup, we had to write our controller/model tests/code first, then the UI and corresponding UAT. However, in the Swing GUI code I've been writing lately, I've been writing the GUI code first with stubs to explore my design of the front end, before adding to the controller/model/API. YMMV here though.
So to reiterate, the only advice I can give is what you already seem to suspect - separate your UI code from your logic as much as possible and TDD them.
A: I've done some TDD with Javascript in the past, and what I had to do was make the distinction between Unit and Integration tests. Selenium will test your overall site, with the output from the server, its post backs, ajax calls, all of that. But for unit testing, none of that is important.
What you want is just the UI you are going to be interacting with, and your script. The tool you'll use for this is basically JsUnit, which takes an HTML document with some JavaScript functions on the page and executes them in the context of the page. So what you'll be doing is including the stubbed-out HTML on the page with your functions. From there, you can test the interaction of your script with the UI components in the isolated unit of the mocked HTML, your script, and your tests.
That may be a bit confusing, so let's see if we can do a little test. Let's do some TDD, assuming that after a component is loaded, a list of elements is colored based on the content of each LI.
tests.html
<html>
<head>
<script src="jsunit.js"></script>
<script src="mootools.js"></script>
<script src="yourcontrol.js"></script>
</head>
<body>
<ul id="mockList">
<li>red</li>
<li>green</li>
</ul>
</body>
<script>
function testListColor() {
    assertNotEqual( $$("#mockList li")[0].getStyle("background-color"), "red" );
    var colorInst = new ColorCtrl( "mockList" );
    assertEqual( $$("#mockList li")[0].getStyle("background-color"), "red" );
}
</script>
</html>
Obviously TDD is a multi-step process, so for our control, we'll need multiple examples.
yourcontrol.js (step1)
function ColorCtrl( id ) {
/* Fail! */
}
yourcontrol.js (step2)
function ColorCtrl( id ) {
$$("#mockList li").forEach(function(item, index) {
item.setStyle("backgrond-color", item.getText());
});
/* Success! */
}
You can probably see the pain point here, you have to keep your mock HTML here on the page in sync with the structure of what your server controls will be. But it does get you a nice system for TDD'ing with JavaScript.
A: See also: JavaScript unit test tools for TDD
A: I've found the MVP architecture to be very suitable for writing testable UIs. Your Presenter and Model classes can simply be 100% unit tested. You only have to worry about the View (which should be a dumb, thin layer only that fires events to the Presenter) for UI testing (with Selenium etc.)
Note that I'm talking about using MVP entirely in the UI context, without necessarily crossing to the server-side. Your UI can have its own Presenter and Model that live entirely on the client-side. The Presenter drives the UI interaction/validation etc. logic while the Model keeps state information and provides a portal to the backend (where you can have a separate Model).
You should also take a look at the Presenter First TDD technique.
A: This is the primary reason I switched to the Google Web Toolkit ... I develop and test in Java and have a reasonable expectation that the compiled JavaScript will function properly on a variety of browsers. Since TDD is primarily a unit testing function, most of the project can be developed and tested before compilation and deployment.
Integration and Functional test suites verify that the resulting code is functioning as expected after it's deployed to a test server.
A: I'm just about to start doing Javascript TDD on a new project I am working on. My current plan is to use qunit to do the unit testing. While developing the tests can be run by simply refreshing the test page in a browser.
For continuous integration (and ensuring the tests run in all browsers), I will use Selenium to automatically load the test harness in each browser, and read the result. These tests will be run on every checkin to source control.
I am also going to use JSCoverage to get code coverage analysis of the tests. This will also be automated with Selenium.
I'm currently in the middle of setting this up. I'll update this answer with more exact details once I have the setup hammered out.
Testing tools:
*
*qunit
*JSCoverage
*Selenium
A: What I do is poke the DOM to see if I'm getting what I expect. A great side effect of this is that in making your tests fast, you also make your app fast.
I just released an open-source toolkit which will help with JavaScript TDD immensely. It is a composition of many open-source tools which gives you a working RequireJS/Backbone app out of the box.
It provides single commands to run: a dev web server, a Jasmine single-browser test runner, a Jasmine js-test-driver multi-browser test runner, and concatenation/minification for JavaScript and CSS. It also outputs an unminified version of your app for production debugging, precompiles your Handlebars templates, and supports internationalization.
No setup is required. It just works.
http://github.com/davidjnelson/agilejs
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44"
}
|
Q: Can the ffmpeg av libs return an accurate PTS? I'm working with an mpeg stream that uses a IBBP... GOP sequence. The (DTS,PTS) values returned for the first 4 AVPackets are as follows: I=(0,3) B=(1,1) B=(2,2) P=(3,6)
The PTS on the I frame looks like it is legit, but then the PTS on the B frames cannot be right, since the B frames shouldn't be displayed before the I frame as their PTS values indicate. I've also tried decoding the packets and using the pts value in the resulting AVFrame, but that PTS is always set to zero.
Is there any way to get an accurate PTS out of ffmpeg? If not, what's the best way to sync audio then?
A: Ok, scratch my previous confused reply.
For an IBBPBBI movie, you'd expect the PTSes to look like this (in decoding order):
0, 3, 1, 2, 6, 4, 5, ...
corresponding to the frames
I, P, B, B, I, B, B, ...
So you appear to be missing an I at the start of your sequence but otherwise the timestamps look correct.
A: I think I finally figured out what's going on based on a comment made in http://www.dranger.com/ffmpeg/tutorial05.html:
ffmpeg reorders the packets so that the DTS of the packet being processed by avcodec_decode_video() will always be the same as the PTS of the frame it returns
Translation: If I feed a packet into avcodec_decode_video() that has a PTS of 12, avcodec_decode_video() will not return the decoded frame contained in that packet until I feed it a later packet that has a DTS of 12. If the packet's PTS is the same as its DTS, then the packet given is the same as the frame returned. If the packet's PTS is 2 frames later than its DTS, then avcodec_decode_video() will delay the frame and not return it until I provide 2 more packets.
Based on this behavior, I'm guessing that av_read_frame() is maybe reordering the packets from IPBB to IBBP so that avcodec_decode_video() only has to buffer the P frames for 3 frames instead of 5. For example, the difference between the input and the output of the P frame with this ordering is 3 (6 - 3):
packets fed in:     I  B  B  P  B  B  P
DTS:                0  1  2  3  4  5  6
decode() result:             I  B  B  P
vs. a difference of 5 with the standard ordering (6 - 1):
packets fed in:     I  P  B  B  P  B  B
DTS:                0  1  2  3  4  5  6
decode() result:             I  B  B  P
<shrug/> but that is pure conjecture.
A: I'm fairly certain you are getting accurate values. It might help if you think of an MPEG stream as, well, a stream. In that case, prior to the IBBPBB that you see, there would normally be another GOP. Maybe something like this (using the same notation as the original question):
P(-3,-2) B(-2,-1) B(-1,0)
Basically the B frames after the I frames are based on the I frame and the last P frame from the previous GOP.
While it makes logical sense for a video to start off with this:
Start GOP: IPBBPBBPBB...
Later on it must be
Start GOP: IBBPBBPBBPBB
Start GOP: IBBPBBPBBPBB
Start GOP: IBB...
Remember that decoding any B frame requires a complete frame before it and after it. So each pair of B frames should be displayed before the I or P frame just prior to it in the file.
FFMPEG may just have forgone the "special case" of the first GOP.
Since the first two B frames don't have a prior frame to manipulate, you should be able to safely discard them. Just rebase your timestamps off of the first I frame and adjust the audio stream the same amount.
Whether this will actually result in a loss of frames will depend on FFMPEG's implementation, but worse case scenario is that you lose 83 milliseconds (2 frames at 24 frames/sec).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Unix commands like ping, ssh, work fine but socket-based programs are failing in connect I got a call from a tester about a machine that was failing our software. When I examined the problem machine, I quickly realized the problem was fairly low level: Inbound network traffic works fine. Basic outbound commands like ping and ssh are working fine, but anything involving the connect() call is failing with "No route to host".
For example - on this particular machine this program will fail on the connect() statement for any IP address other than 127.0.0.1:
#!/usr/bin/perl -w
use strict;
use Socket;
my ($remote,$port, $iaddr, $paddr, $proto, $line);
$remote = shift || 'localhost';
$port = shift || 2345; # random port
if ($port =~ /\D/) { $port = getservbyname($port, 'tcp') }
die "No port" unless $port;
$iaddr = inet_aton($remote) || die "no host: $remote";
$paddr = sockaddr_in($port, $iaddr);
$proto = getprotobyname('tcp');
socket(SOCK, PF_INET, SOCK_STREAM, $proto) || die "socket: $!";
connect(SOCK, $paddr) || die "connect: $!";
while (defined($line = <SOCK>)) {
print $line;
}
close (SOCK) || die "close: $!";
exit;
Any suggestions about where this machine is broken? It's running SUSE-10.2.
A: I would check the firewall configuration on that machine. It is possible for iptables (I guess your SUSE has an iptables firewall) to be set up to let through only ICMP ping packets.
A: Is the firewall turned off?
A: Firewall is always possible, but it does say that ssh can connect, so that seems unlikely.
I'd say have a look at the routes ("route" command on Linux), and make sure you don't have like two default routes, or weird ones or whatever. All in all I'd say test ping and ssh and your program on the same distant IP, and if they all fail, you have a route problem. If only your program fails, you probably have either a firewall problem or program problem :)
A: Try pointing connect() to the same host:port where your SSH command works. Also, keep in mind that some firewalls can apply different rules for different user accounts (and sometimes for different executables). Therefore, make sure you run ssh and your test app under the same user account and that SUID isn't set for SSH.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Rational Functional Tester wait for object existence I'm currently modifying a Java script in Rational Functional Tester and I'm trying to tell RFT to wait for an object with a specified set of properties to appear. Specifically, I want to wait until a table with X number of rows appear. The only way I have been able to do it so far is to add a verification point that just verifies that the table has X number of rows, but I have not been able to utilize the wait for object type of VP, so this seems a little bit hacky. Is there a better way to do this?
Jeff
A: No, there is not a built-in waitForProperty() type of method, so you cannot do something simple like tableObject.waitForProperty("rowCount", x);
Your options are to use a verification point as you already are doing (if it ain't broke...) or to roll your own synchronization point using a do/while loop and the find() method.
The find() codesample below assumes that doc is an html document. Adjust this to be your parent java window.
TestObject[] tables = doc.find(atDescendant(".rowCount", x), false);
If you are not familiar with find(), do a search in the RFT API reference in the help menu. find() will be your best friend in RFT scripting.
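For example, a hand-rolled synchronization point might look like the following sketch, assuming doc is the parent TestObject, x is the expected row count, and sleep() is the pause inherited from RationalTestScript:
// Poll with find() until the table reaches the expected row count, or give up
TestObject[] tables;
int attempts = 0;
do {
    tables = doc.find(atDescendant(".rowCount", x), false);
    if (tables.length == 0) {
        sleep(1); // wait one second between polls
        attempts++;
    }
} while (tables.length == 0 && attempts < 30);
if (tables.length == 0) {
    throw new RuntimeException("Table never reached " + x + " rows");
}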
A: You can do one thing: try getting the particular property and checking that it has the desired value. If not, keep checking in a loop:
boolean flag = false;
while (!flag) {
    // Consider a short sleep() here to avoid busy-waiting.
    if (obj.getProperty(".text").equals("Desired Text")) {
        flag = true;
    }
}
A: You can use:
getobject.gettext();
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: open IE without toolbar or address bar from Windows VB Application Shell ("explorer.exe www.google.com")
is how I'm currently opening my product's ad page after a successful install. However, I think it would look much nicer if I could do it more like Avira does, or even as a popup with no address bar, links, etc. Doing this via an in-browser link is easy enough:
<a href="http://page.com"
onClick="javascript:window.open('http://page.com','windows','width=650,height=350,toolbar=no,menubar=no,scrollbars=yes,resizable=yes,location=no,directories=no,status=no'); return false")">Link text</a>
But how would I go about adding this functionality in VB?
A: If you want it to look professional, you need to use an actual browser component. VB.NET comes with one. If you are using an older version of VB, you'd need to go third party. If you want to stay with a shell open, you would have to individually target the browser command-line and pass arguments to indicate that it should not have toolbars etc.
A: Speaking as a user, I find castrated popup windows annoying and unproductive.
So my answer is: "don't".
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96123",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: REDUX: How to overcome an incompatibility between the ksh on Linux vs. that installed on AIX/Solaris/HPUX? I have uncovered another problem in the effort that we are making to port several hundred ksh scripts from AIX, Solaris and HPUX to Linux. See here for the previous problem.
This code:
#!/bin/ksh
if [ -a k* ]; then
echo "Oh yeah!"
else
echo "No way!"
fi
exit 0
(when run in a directory with several files whose names start with k) produces "Oh yeah!" when called with the AT&T ksh variants (ksh88 and ksh93). On the other hand, it produces an error message followed by "No way!" on the other ksh variants (pdksh, MKS ksh and bash).
Again, my questions are:
*
*Is there an environment variable that will cause pdksh to behave like ksh93? Failing that:
*Is there an option on pdksh to get the required behavior?
A: I wouldn't use pdksh on Linux anymore.
Since AT&T ksh has become OpenSource there are packages available from the various Linux distributions. E.g. RedHat Enterprise Linux and CentOS include ksh93 as the "ksh" RPM package.
pdksh is still mentioned in many installation requirement documentations from software vendors. We replaced pdksh on all our Linux systems with ksh93 with no problems so far.
A: You do realize that [ is an alias (often a link, symbolic or hard) for /usr/bin/test, right? So perhaps the actual problem is different versions of /usr/bin/test ?
OTOH, ksh overrides it with a builtin. Maybe there's a way to get it to not do that? or maybe you can explicitly alias [ to /usr/bin/test, if /usr/bin/test on all platforms is compatible?
A: Well after one year there seems to be no solution to my problem.
I am adding this answer to say that I will have to live with it......
A: In Bash, the test -a operation is for a single file.
I'm guessing that in Ksh88 the test -a operation is for a single file, but doesn't complain because the other test words are an unspecified condition to the -a.
you want something like
for K in /etc/rc2.d/K* ; do test -a $K && echo heck-yea ; done
I can say that ksh93 works just like bash in this regard.
Regrettably I think the code was written poorly, my opinion, and likely a bad opinion since the root cause of the problem is the ksh88 built-in test allowing for sloppy code.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: DIVs vs. TABLEs a rebuttal please There are lots of people out there asking "why shouldn't we use tables for structuring our HTML" and while a lot of answers come in, I rarely see anyone being converted to the world of semantics. That said, I've yet to see any convincing rebuttals to support the rationale for why we should (or might) use tables.
Anyone care to offer a rationale for when tables are valid structural markup?
Nov 7, 2008
Considering that this question didn't go away like I thought it would, I suppose I'd better clarify my question and explain its existence.
Through frustration having read the "tables are easier" argument once too many times following the "DIVs vs. TABLEs" question I wanted to expose the question a little more and not let the table lovers get let off the hook so easily.
Each to their own, others might say, but I'm forever being given some application to put on our sites that's been created by some 'tables are easier' developer who dumps a chunk of crappy HTML into my pages, and to be honest, I'm just not seeing enough of the table lovers listening to the arguments.
Anyone use Mambo back in the day? Anyone had to take a bash at putting a design on the top of Microsoft's Sharepoint? Having to fight your way through all that nested table crap was hell, and considering that it was written by some bloody good coders annoys the heck out of me. Reasonable semantic markup has been around for long enough that there should be no reason for developers to still be championing "tables are easier". Tables are not easier - they are lazy!
My question deserved the negative rep for the negative manner in which it was presented, but I'm still waiting for people to accept that the only reason they use tables is because THEY DON'T KNOW HTML. Because if they did, then they'd understand, as jjrv says, that tables are for tabular data.
A: RE: Why tables?
Because some people are (still, after all these years) afraid of change. They've heard that using semantic HTML is a good thing (and don't usually fully grasp the concept). So they try to put together a layout using CSS having never done it before. They run into a few (well documented, and usually easy to solve) issues, throw their hands up, and go running back to tables.
They then decide that CSS is 'too time consuming' ("I'm not willing to spend the time to learn it") or 'not practical' ("I don't get it. It's too hard") and that tables are the only true way. Through stubbornness and ignorance, they believe their own bullsh!t and convince their clients and peers.
And their world remains happy and unchanged, fading further into the past and deeper into obsolescence*
And that's "why tables". The end.
(*except that they are well suited for coding HTML emails)
A: Tables are for developers who can't be bothered to fiddle for hours with CSS to get two adjacent columnesque divs to expand to 100% height and width regardless of content, and then get the hack to work in all browsers without adding extra div wrappers and then finally in absolute frustration they resort to the 5 second fix:
<table width="100%">
<tr><td valign="top">Left nav</td><td valign="top">Main content</td></tr>
</table>
The hard truth is that most users (excluding those using screenreaders) really don't care how the page is marked up, as long as it loads quickly.
Developers have budget and time constraints and "good" CSS and markup takes time.
The fact that there are a multitude of resources on the web explaining in great laborious detail how you can line up two divs to replace that simple table, says quite plainly to me that this design is inherently as flawed as tables. How many tutorials are needed to explain how to add a table with two columns to a page?
HTML5 should bring us all some sanity with the new header, footer, section, nav and aside tags. Example taken from Nettuts+:
<div id="content">
<div id="mainContent">
<section>
<!-- Blog post -->
</section>
<section id="comments">
<!-- Comments -->
</section>
<form>
<!-- Comment form -->
</form>
</div>
<aside>
<!-- Sidebar -->
</aside>
</div>
and then this for the CSS:
#content {
display: table;
}
#mainContent {
display: table-cell;
width: 620px;
padding-right: 22px;
}
aside {
display: table-cell;
width: 300px;
}
Those of you with a keen eye will love the sense of irony, when you notice that the CSS has the properties: display: table; and display: table-cell;.
Tables are back baby! Snuck in through the HTML5 back door ;-)
A: Tables are valid when you have a table of data. I've seen interactive grid widgets where they go out of their way to use a bunch of divs to avoid the dreaded table tag. When it's tabular data, make it a table.
A more controversial view of mine is that when you have problems dealing with vertical layout issues in CSS, you can just use a table and often resolve it immediately. Not as pretty as it ought to be perhaps, mixing content with presentation, but it gets the job done and avoids CSS hacks to get around IE.
A: Using modern semantic markup is much easier when you're adding features or fixing bugs or changing the look of a data-driven web site. Adding AJAX features or any kind of interactive scripting will work much better with DIVs and CSS than with TABLEs.
Moving to a content manager like Drupal, Joomla, WordPress, or the like will be much easier if you're already organized with semantic markup, too.
The newer browser editions will also support modern markup more efficiently and your site will display faster. Rearranging all those tables can result in slow display times.
On the other hand, tables are here to stay. Some people will continue using them and browsers will continue displaying them. There is nothing inherently wrong with non-semantic markup if that's what you want. A completely static site that will never be changed can run as well with tables as with modern markup.
As for valid structural markup, there is this: Tables are a great way to display tabular data, like database or spreadsheet tables. They are not really valid markup for anything else.
A: DIV-based layouts suffer from limitations. Without tables it is essentially impossible to implement a two column layout that grows properly based on the height of the content.
A: An interesting note is related to highly complex JavaScript applications. If you pick apart Gmail or Google Calendar with Firebug, you'll see that tables are used extensively, even for layout. Granted, these are usually dynamically generated but this goes to show that in rare cases some very visually complex interactive user interfaces are extremely difficult to build using only DIVs.
A: Tables are supported even in crusty old HTML v1.0 browsers. If your target market includes people using embedded browsers in mobile phones from the 1990s, that might be a good reason to go with tables.
Lots of existing auto-generated HTML uses tables. If your code needs to interact with or include those tables, it'd be better to go for consistency.
A: I would say jjrv is right in that tables are excellent for tabular data; going out of your way to make something "work" like a table instead of just using a table is borderline absurd.
If you care about standards, and about moving toward a solid implementation across all browsers, then most of your markup should be in table-less liquid layouts... and your tabular data goes in... you guessed it: tables!
If you need to cater to really old browsers, that is, before the dreaded IE6, then you will have lots of problems in CSS, but given current usage statistics it's pretty safe to assume that everyone will have a "modern" browser that supports CSS layouts.
All this said, there are times when you're banging your head against the wall on a layout, and you say f___ it, throw it in a table, and it works. I would hope this is a deprecated practice, but in a pinch it does give predictable results.
A: Use tables for lowest-common-denominator HTML or for tabular data where it makes sense to span across columns or rows. Otherwise CSS layouts are much less verbose and much easier to maintain once you get the hang of them.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96137",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: OLEDBConnection.Open() generates 'Unspecified error' I have an application that uploads an Excel .xls file to the file system, opens the file with an oledbconnection object using the .open() method on the object instance and then stores the data in a database. The upload and writing of the file to the file system works fine but I get an error when trying to open the file on our production server only. The application works fine on two other servers (development and testing servers).
The following code generates an 'Unspecified Error' in the Exception.Message.
Quote:
System.Data.OleDb.OleDbConnection x = new System.Data.OleDb.OleDbConnection(@"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + location + ";Extended Properties='Excel 8.0;HDR=Yes;IMEX=1'");
try
{
x.Open();
}
catch (Exception exp)
{
string errorEmailBody = " OpenExcelSpreadSheet() in Utilities.cs. " + exp.Message;
Utilities.SendErrorEmail(errorEmailBody);
}
:End Quote
The server's C:\temp folder and C:\Documents and Settings\aspnet\Local Settings\Temp folder both give the aspnet account full control.
I believe that there is some kind of permissions issue but can't seem to find any difference between the permissions on the noted folders and the folder/directory where the Excel file is uploaded. The same location is used to save the file and open it and the methods do work on my workstation and two web servers. Windows 2000 SP4 servers.
A: While the permissions issue may be more common you can also encounter this error from Windows file system/Access Jet DB Engine connection limits, 64/255 I think. If you bust the 255 Access read/write concurrent connections or the 64(?) connection limit per process you can get this exact same error. At least I've come across that in an application where connections were being continually created and never properly closed. A simple Conn.close(); dropped in and life was good. I imagine Excel could have similar issues.
A: Try wrapping the location in single quotes
System.Data.OleDb.OleDbConnection x = new System.Data.OleDb.OleDbConnection(@"Provider=Microsoft.Jet.OLEDB.4.0;Data Source='" + location + "';Extended Properties='Excel 8.0;HDR=Yes;IMEX=1'");
A: If you're using impersonation you'll need to give permission to the impersonation user instead of/in addition to the aspnet user.
A: Anything in the inner exception? Is this a 64-bit application? The OLEDB providers don't work in 64-bit. You have to have your application target x86. Found this when getting an error trying to open access DB on my 64-bit computer.
A: I've gotten that error over the permissions thing, but it looks like you have that covered. I also have seen it with one of the flags in the connection string -- you might play with that a bit.
A: Yup. I did that too. Took out IMEX=1, took out Extended Properties, etc. I managed to break it on the dev and test servers. :) I put those back in one at a time until it was fixed on dev and test again but still no workie on prod.
A: Not sure if this is the problem you are facing, but before disposing of the connection you should call Connection.Close(), because the Connection.Dispose() method is inherited from Component and does not properly dispose of certain connection resources.
Not properly disposing of the connection could lead to access issues.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How to click a button on an ASP.NET web page programmatically? I am trying to figure out how to click a button on a web page programmatically.
Specifically, I have a WinForm with a WebBrowser control. Once it navigates to the target ASP.NET login page I'm trying to work with, in the DocumentCompleted event handler I have the following coded:
HtmlDocument doc = webBrowser1.Document;
HtmlElement userID = doc.GetElementById("userIDTextBox");
userID.InnerText = "user1";
HtmlElement password = doc.GetElementById("userPasswordTextBox");
password.InnerText = "password";
HtmlElement button = doc.GetElementById("logonButton");
button.RaiseEvent("onclick");
This fills the userid and password text boxes fine, but I am not having any success getting that darned button to click; I've also tried "click", "Click", and "onClick" -- what else is there?. A search of msdn of course gives me no clues, nor groups.google.com. I gotta be close. Or maybe not -- somebody told me I should call the POST method of the page, but how this is done was not part of the advice given.
BTW The button is coded:
<input type="submit" name="logonButton" value="Login" onclick="if (typeof(Page_ClientValidate) == 'function') Page_ClientValidate(); " language="javascript" id="logonButton" tabindex="4" />
A: How does this work? Works for me
HtmlDocument doc = webBrowser1.Document;
doc.All["userIDTextBox"].SetAttribute("Value", "user1");
doc.All["userPasswordTextBox"].SetAttribute("Value", "Password!");
doc.All["logonButton"].InvokeMember("Click");
A: var btn = document.getElementById(btnName);
if (btn) btn.click();
A: There is an example of how to submit the form using InvokeMember here.
http://msdn.microsoft.com/en-us/library/ms171716.aspx
A: You can try and invoke the Page_ClientValidate() method directly through the clientscript instead of clicking the button, let me dig up an example.
Using MSHTML
mshtml.IHTMLWindow2 myBrowserWindow = (mshtml.IHTMLWindow2)MyWebBrowser.Document.Window.DomWindow;
myBrowserWindow.execScript("Page_ClientValidate();", "javascript");
A: Have you tried fireEvent instead of RaiseEvent?
A: You could call the method directly and pass in generic object and EventArgs parameters. Of course, this might not work if you were looking at the sender and EventArgs parameters for specific data. How I usually handle this is to refactor the guts of the method to a doSomeAction() method and the event handler for the button click will simply call this function. That way I don't have to figure out how to invoke what is usually just an event handler to do some bit of logic on the page/form.
In the case of javascript clicking a button for a form post, you can invoke form.submit() in the client side script -- which will run any validation scripts you defined in the tag -- and then parse the Form_Load event and grab the text value of the submit button on that form (assuming there is only one) -- at least that's the ASP.NET 1.1 way with which I'm very familiar... anyone know of something more elegant with 2.0+?
A: Just a possible useful extra where the submit button has not been given an Id - as is frequently the case.
private HtmlElement GetInputElement(string name, HtmlDocument doc) {
HtmlElementCollection elems = doc.GetElementsByTagName("input");
foreach (HtmlElement elem in elems)
{
String nameStr = elem.GetAttribute("value");
if (!String.IsNullOrEmpty (nameStr) && nameStr.Equals (name))
{
return elem;
}
}
return null;
}
So you can call it like so:
GetInputElement("Login", webBrowser1.Document).InvokeMember("Click");
It'll raise a null reference exception if the submit input with the value 'Login' isn't found, but you can break it up if you want to check conditionally before invoking the click.
A: You posted a comment along the lines of not wanting to use a client side script on @Phunchak's answer. I think what you are trying to do is impossible. The only way to interact with the form is via a client side script. The C# code can only control what happens before the page is sent out to the browser.
A: try this
button.Focus()
System.Windows.Forms.SendKeys.Send("{ENTER}")
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: chapters in videos for the iPhone Is it possible to use chapters in videos for the iPhone in an application?
For example:
I have a 3 minutes video to play. I have chapter 1 starting at 0s, chapter 2 at 50s, chapter 3 at 95s.
Can I start playing the video at 50s (chapter 2) until the end? Can I make it play just chapter 2, from 50s to 95s?
My question is not about how to add chapters to a video. I want to know if this behaviour is available on the iphone.
A: iPhone SDK 3.0+ has a new MPMoviePlayerController.initialPlaybackTime property for setting the time to start movie playback. This will be "rounded" to the nearest earlier keyframe time, so does not provide exact start positioning, but pretty close.
A: player.currentPlaybackTime = time;
A: This is definitely possible sending the non-documented message setCurrentTime to MPMoviePlayerController. It takes one parameter of type double which specifies the playback position in seconds. Find below a short example:
Extend the MPMoviePlayerController to avoid compiler warnings:
@interface MPMoviePlayerController (extended)
-(void)setCurrentTime:(double)seconds;
@end
Then you can call it wherever you need it - before start or during playback.
MPMoviePlayerController* player = [[ MPMoviePlayerController alloc] initWithContentURL:url ];
[ player setCurrentTime:95.0 ];
[ player play ];
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Web Dev - Where to store state of a shopping-cart-like object? You're building a web application. You need to store the state for a shopping cart like object during a user's session.
Some notes:
*
*This is not exactly a shopping cart, but more like an itinerary that the user is building... but we'll use the word cart for now b/c ppl relate to it.
*You do not care about "abandoned" carts
*Once a cart is completed we will persist it to some server-side data store for later retrieval.
Where do you store that stateful object? And how?
*
*server (session, db, etc?)
*client (cookie key-vals, cookie JSON object, hidden form-field, etc?)
*other...
Update: It was suggested that I list the platform we're targeting - though I'm not sure it's totally necessary... but let's say the front-end is built with ASP.NET MVC.
A: I have considered what you are suggesting but have not had a client project yet to try it. The closest actually is a shopping list that you can find here...
http://www.scottcommonsense.com/toolbox.aspx
Click on Grocery Checklist to open the window. It does use ASPX, but only to manage the JS references placed on the page. The rest is done via AJAX using web services.
Previously I built an ASP.NET 2.0 site for a commerce site which used anon/auth cookies automatically. Each provides you with a GUID value which you can use to identify a user which is then associated with data in your database. I wanted the auth cookies so a user could move to different computers; work, home, etc. I avoided using the Profile fields to hold onto a complex ShoppingBasket object which was popular during the time in all the ASP.NET 2.0 books. I did not want to deal with "magic" serialization issues as the data structure changed over time. I prefer to manage db schema changes with update/alter scripts synced with software changes.
With the anon/auth cookies identifying the user on the client you can use the ASP.NET AJAX client-side to call the authentication web services using the JS proxies that are provided for you as a part of ASP.NET. You need to implement the Membership API to at least authenticate the user. The rest of the provider implementation can throw a NotImplementedException safely. You can then use your own custom ASMX web services via AJAX (see ScriptReference attribute) and update the pages with server-side data. You can completely do away with ASPX pages and just use static HTML/CSS/JS if you like.
The one big caveat is memory leaks in JS. Staying on the same page a long time increases your potential issue with memory leaks. It is a risk you can minimize by testing for long sessions and using tools like Firebug and others to look for memory leaks. Use the JS Lint tool as well as it will help identify major problems as you go.
A: It's been my experience with the Commerce Starter Kit and MVC Storefront (and other sites I've built) that no matter what you think now, information about user interactions with your "products" is paramount to the business guys. There's so many metrics to capture - it's nuts.
I'll save you all the stuff I've been through - what's by far been the most successful for me is just creating an Order object with "NotCheckedOut" status and then adding items to it and the user adds items. This lets users have more than one cart and allows you to mine the tar out of the Orders table. It also is quite easy to transact the order - just change the status.
Persisting "as they go" also allows the user to come back and finish the cart off if they can't, for some reason. Forgiveness is massive with eCommerce.
Cookies suck, session sucks, Profile is attached to the notion of a user and it hits the DB so you might as well use the DB.
You might think you don't want to do this - but you need to trust me and know that you WILL indeed need to feed the stats wonks some data later. I promise you.
A: I'd be inclined to store it as a session object. This is because you're not concerned with abandoned carts, and can therefore remove the overhead of storing it in the database as it's not necessary (not to mention that you'd also need some kind of cleanup routine to remove abandoned carts from the database).
However, if you'd like users to be able to persist their carts, then the database option is better. This way, a user who is logged in will have their cart saved across sessions (so when they come back to the site and login, their cart will be restored).
You could also use a combination of the two. Users who come to the site use the session-based cart by default. When they log in, all items are moved from the session-based cart to a database-based cart, and any subsequent cart activity is applied directly to the database.
A: In the DB tied to whatever you're using for sessions (db/memcache sessions, signed cookies) or to an authenticated user.
A: Store it in the database.
A: Without knowing the platform I can't give a direct answer. However, since you don't care about abandoned carts, then I would differ from my colleagues here and suggest storing it on the client. Why store it in the database if you don't care if it's abandoned?
Then again, it does depend on the size of the object you're storing -- cookies have their limits after all.
Edit: Ahh, asp.net MVC? Why not use the profile system? You can enable an anonymous profile if you don't want to bother making them log in
A: Do you envision folks needing to be able to start on one machine (e.g. their work PC) but continue/finsih from a different machine (e.g. home PC)? If so, the answer is obvious.
A: If you don't care about abandoned carts and have things in place for someone messing with the data on the client side... I think a cookie would be good -- especially if it's just a cookie of JSON data.
A: I'd use an (encrypted) cookie on the client which holds the ID of the users basket. Unless it's a really busy site then abandoned baskets won't fill up the database by too much, and you can run a regular admin task to clear the abandoned orders down if you care that much. Also doing it this way the user will keep their order if they close their browser and go away, a basket in the session would be cleared at this point..
Finally this means that you don't have to worry about writing code to deal with de/serialising the data from a client-side cookie, while later worrying about actually putting that data into the database when it gets converted into an order (too many points of failure for my liking)..
A: I'd say store the state somewhere on the server and correlate it to the user's session. While a cookie could ostensibly be an equal place to store things, if you consider security and data size, keeping as much data on the server as possible becomes a good thing.
For example, in a public terminal setting, would it be OK for someone to look at the contents of the cookie and see the list? If so, cookie's fine; if not, you'll just want an ID that links the user to the data. Doing that would also allow you to ensure the user is authenticated to the site in order to get to that data rather than storing everything on the machine - they'd need some form of credentials as well as the session identifier.
From a size perspective, sure, you're not going to be too concerned about a 4K cookie or something for a browser/broadband user, but if one of your targets is to allow a mobile phone or BlackBerry (not on 3G) to connect and have a snappy experience (and not get billed for the data), minimizing the amount of data getting passed to the client will be key.
The server storage also gives you some flexibility mentioned in some of the other answers - the user can save their cart on one machine and resume working with it on another; you can tie the cart to some form of credentials (rather than a transient session) and persist the cart long after the user has cleared their cookies; you get a little more in the way of fault tolerance - if the user's browser crashes, the site still has the data safe and sound.
If fault tolerance is important, you'll need some sort of persistent store like a database. If not, in application memory is probably fine, but you'll lose data if the app restarts. If you're in a farm environment, the store has to be centrally accessible, so you're again looking at a database.
Whether you choose to key by transient session or by credentials is going to depend on whether the users can save their data and come back later to get it. Transient session will eventually get cleaned up as "abandoned," and maybe that's OK. Tying to a user profile will let the user keep their data and explicitly abandon it. Either way, I'd make use of some sort of backing store like a database for fault tolerance and central accessibility. (Or maybe I'm overengineering the solution?)
A: If you care about supporting users without Javascript enabled, then the server side sessions will let you use URL rewriting.
A: If a relatively short time-out (around 2 hours, depending on your server config) is OK for the cart, then I'd say the server-side session. It's faster and more efficient than accessing the DB.
If you need a longer persistence (say some users like to leave and come back the next day), then store it in a cookie that is tamper-evident (use encryption or hashes).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: Partial site SSL using asp.net login control I'm attempting to convert a home-grown login system to the standard asp.net login control included in .net. I want all communication on the website for a user not logged in to be in clear text, but lock everything in SSL once the user logs in - including the transmission of the username and password.
I had this working before by loading a second page - "loginaction.aspx" - with an https: prefix, then pulling out the username and password by looking for the proper textbox controls in Request.Form.Keys. Is there a way to do something similar using the .NET login controls? I don't want to have a separate login page, but rather include this control (within a LoginView) on every page of the site.
A: You're not going to be able to do what you're talking about simply, because the postback (which is what the login control uses) is going to be whatever the page's security is (SSL or non-SSL).
Your best bet in this scenario is to use an IFRAME which contains an HTTPS (SSL) page that just contains thelogin control. You might have to redirect to another page after login that lets you jump out of the IFRAME.
Plan B would be to have a separate form on the page (outside your main FORM) which has the ACTION property point to another page where you handle the login. You will have to roll your your own login code to handle the forms authentication.
A: I was able to accomplish this by adding an OnClientClick event to the login button control and setting it to the following JavaScript function:
function forceSSLSubmit()
{
var strAction = document.forms[0].action.toString();
if (strAction.toLowerCase().indexOf("http:") == 0) {
strAction = "https" + strAction.substring(4);
document.forms[0].action = strAction;
}
}
A: You aren't going to be able to have your site as non-SSL, with a login box on every page, and then submit the username and password via SSL.
The only way to really accomplish this is to use frames of some sort. This way your entire page could be non-SSL, but the login frame would have to be SSL.
The usual ways of doing this is to either lock down the entire site with SSL, don't worry about having the username and password SSL encrypted and go to SSL after they log in, or go the frame route I mentioned above.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Is there any WYSIWYG html editing tool or component that allows Format Copying from a paragraph to another? I believe the question says it all ... (I'll update if needed)
A: http://www.openwebware.com/
A: FCKEditor can do the job.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: When are C++ macros beneficial? The C preprocessor is justifiably feared and shunned by the C++ community. In-lined functions, consts and templates are usually a safer and superior alternative to a #define.
The following macro:
#define SUCCEEDED(hr) ((HRESULT)(hr) >= 0)
is in no way superior to the type safe:
inline bool succeeded(int hr) { return hr >= 0; }
But macros do have their place, please list the uses you find for macros that you can't do without the preprocessor.
Please put each use case in a separate answer so it can be voted up, and if you know how to achieve one of the answers without the preprocessor, point out how in that answer's comments.
A: Methods must always be complete, compilable code; macros may be code fragments. Thus you can define a foreach macro:
#define foreach(list, index) for(index = 0; index < list.size(); index++)
And use it as thus:
foreach(cookies, i)
printf("Cookie: %s", cookies[i]);
Since C++11, this is superseded by the range-based for loop.
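A minimal sketch of the same loop with the range-based for (the container and its contents here are illustrative):
#include <cstdio>
#include <string>
#include <vector>

int main()
{
    std::vector<std::string> cookies = {"oatmeal", "ginger"};
    // The language iterates the container directly -- no index bookkeeping.
    for (const std::string& cookie : cookies)
        std::printf("Cookie: %s\n", cookie.c_str());
}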
A: We use the __FILE__ and __LINE__ macros for diagnostic purposes in information rich exception throwing, catching and logging, together with automated log file scanners in our QA infrastructure.
For instance, a throwing macro OUR_OWN_THROW might be used with exception type and constructor parameters for that exception, including a textual description. Like this:
OUR_OWN_THROW(InvalidOperationException, (L"Uninitialized foo!"));
This macro will of course throw the InvalidOperationException exception with the description as constructor parameter, but it'll also write a message to a log file consisting of the file name and line number where the throw occurred and its textual description. The thrown exception will get an id, which also gets logged. If the exception is ever caught somewhere else in the code, it will be marked as such and the log file will then indicate that that specific exception has been handled and that it's therefore not likely the cause of any crash that might be logged later on. Unhandled exceptions can be easily picked up by our automated QA infrastructure.
A: Code repetition.
Have a look at the Boost Preprocessor library; it's a kind of meta-meta-programming. Under Topic -> Motivation you can find a good example.
A: One common use is for detecting the compile environment, for cross-platform development you can write one set of code for linux, say, and another for windows when no cross platform library already exists for your purposes.
So, in a rough example a cross-platform mutex can have
void lock()
{
#ifdef WIN32
EnterCriticalSection(...)
#endif
#ifdef POSIX
pthread_mutex_lock(...)
#endif
}
For functions, they are useful when you want to explicitly ignore type safety. Such as the many examples above and below for doing ASSERT. Of course, like a lot of C/C++ features you can shoot yourself in the foot, but the language gives you the tools and lets you decide what to do.
A: I occasionally use macros so I can define information in one place, but use it in different ways in different parts of the code. It's only slightly evil :)
For example, in "field_list.h":
/*
* List of fields, names and values.
*/
FIELD(EXAMPLE1, "first example", 10)
FIELD(EXAMPLE2, "second example", 96)
FIELD(ANOTHER, "more stuff", 32)
...
#undef FIELD
Then for a public enum it can be defined to just use the name:
#define FIELD(name, desc, value) FIELD_ ## name,
typedef enum field_ {
#include "field_list.h"
FIELD_MAX
} field_en;
And in a private init function, all the fields can be used to populate a table with the data:
#define FIELD(name, desc, value) \
table[FIELD_ ## name].desc = desc; \
table[FIELD_ ## name].value = value;
#include "field_list.h"
A: Header file guards necessitate macros.
Are there any other areas that necessitate macros? Not many (if any).
Are there any other situations that benefit from macros? YES!!!
One place I use macros is with very repetitive code. For example, when wrapping C++ code to be used with other interfaces (.NET, COM, Python, etc...), I need to catch different types of exceptions. Here's how I do that:
#define HANDLE_EXCEPTIONS \
catch (::mylib::exception& e) { \
throw gcnew MyDotNetLib::Exception(e); \
} \
catch (::std::exception& e) { \
throw gcnew MyDotNetLib::Exception(e, __LINE__, __FILE__); \
} \
catch (...) { \
throw gcnew MyDotNetLib::UnknownException(__LINE__, __FILE__); \
}
I have to put these catches in every wrapper function. Rather than type out the full catch blocks each time, I just type:
void Foo()
{
try {
::mylib::Foo()
}
HANDLE_EXCEPTIONS
}
This also makes maintenance easier. If I ever have to add a new exception type, there's only one place I need to add it.
There are other useful examples too: many of which include the __FILE__ and __LINE__ preprocessor macros.
Anyway, macros are very useful when used correctly. Macros are not evil -- their misuse is evil.
A: Something like
void debugAssert(bool val, const char* file, int lineNumber);
#define assert(x) debugAssert(x,__FILE__,__LINE__);
So that you can just for example have
assert(n == true);
and get the source file name and line number of the problem printed out to your log if n is false.
If you use a normal function call such as
void assert(bool val);
instead of the macro, all you can get is your assert function's line number printed to the log, which would be less useful.
A: Mostly:
*
*Include guards
*Conditional compilation
*Reporting (predefined macros like __LINE__ and __FILE__)
*(rarely) Duplicating repetitive code patterns.
*In your competitor's code.
A: Inside conditional compilation, to overcome issues of differences between compilers:
#ifdef WE_ARE_ON_WIN32
#define close(parm1) _close (parm1)
#define rmdir(parm1) _rmdir (parm1)
#define mkdir(parm1, parm2) _mkdir (parm1)
#define access(parm1, parm2) _access(parm1, parm2)
#define create(parm1, parm2) _creat (parm1, parm2)
#define unlink(parm1) _unlink(parm1)
#endif
A: #define ARRAY_SIZE(arr) (sizeof arr / sizeof arr[0])
Unlike the 'preferred' template solution discussed in a current thread, you can use it as a constant expression:
char src[23];
int dest[ARRAY_SIZE(src)];
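(Since C++11, a constexpr function template can match this; a sketch, assuming a conforming compiler:)
#include <cstddef>

// The reference-to-array parameter rejects pointers, which the macro would
// silently accept, and constexpr keeps it usable in constant expressions.
template <typename T, std::size_t N>
constexpr std::size_t array_size(T (&)[N]) { return N; }

char src2[23];
int dest2[array_size(src2)];   // still a constant expression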
A: When you want to make a string out of an expression, the best example for this is assert (#x turns the value of x to a string).
#define ASSERT_THROW(condition) \
if (!(condition)) \
throw std::exception(#condition " is false");
A: String constants are sometimes better defined as macros since you can do more with string literals than with a const char *.
e.g. String literals can be easily concatenated.
#define BASE_HKEY "Software\\Microsoft\\Internet Explorer\\"
// Now we can concat with other literals
RegOpenKey(HKEY_CURRENT_USER, BASE_HKEY "Settings", &settings);
RegOpenKey(HKEY_CURRENT_USER, BASE_HKEY "TypedURLs", &URLs);
If a const char * were used then some sort of string class would have to be used to perform the concatenation at runtime:
const char* BaseHkey = "Software\\Microsoft\\Internet Explorer\\";
RegOpenKey(HKEY_CURRENT_USER, (string(BaseHkey) + "Settings").c_str(), &settings);
RegOpenKey(HKEY_CURRENT_USER, (string(BaseHkey) + "TypedURLs").c_str(), &URLs);
Since C++20 it is however possible to implement a string-like class type that can be used as a non-type template parameter type of a user-defined string literal operator which allows such concatenation operations at compile-time without macros.
A: When you are making a decision at compile time over Compiler/OS/Hardware specific behavior.
It allows you to make your interface to compiler/OS/hardware specific features.
#if defined(MY_OS1) && defined(MY_HARDWARE1)
#define MY_ACTION(a,b,c) doSomething_OS1HW1(a,b,c)
#elif defined(MY_OS1) && defined(MY_HARDWARE2)
#define MY_ACTION(a,b,c) doSomething_OS1HW2(a,b,c)
#elif defined(MY_SUPER_OS)
/* On this hardware it is a null operation */
#define MY_ACTION(a,b,c)
#else
#error "PLEASE DEFINE MY_ACTION() for this Compiler/OS/Hardware configuration"
#endif
A: You can use #defines to help with debugging and unit test scenarios. For example, create special logging variants of the memory functions and create a special memlog_preinclude.h:
#define malloc memlog_malloc
#define calloc memlog_calloc
#define free memlog_free
Compile you code using:
gcc -include memlog_preinclude.h ...
And link your memlog.o into the final image. You now control malloc, etc., perhaps for logging purposes, or to simulate allocation failures for unit tests.
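A hedged sketch of what the matching memlog implementation might look like (the names mirror the #defines above; the log format is illustrative):
// memlog.c -- compile WITHOUT the preinclude header, or these calls
// would redirect back into themselves.
#include <stdio.h>
#include <stdlib.h>

void *memlog_malloc(size_t size)
{
    void *p = malloc(size);
    fprintf(stderr, "malloc(%zu) -> %p\n", size, p);
    return p;
}

void *memlog_calloc(size_t count, size_t size)
{
    void *p = calloc(count, size);
    fprintf(stderr, "calloc(%zu, %zu) -> %p\n", count, size, p);
    return p;
}

void memlog_free(void *p)
{
    fprintf(stderr, "free(%p)\n", p);
    free(p);
}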
A: When you want to change the program flow: return, break, and continue inside a macro act on the function the macro is expanded in, which a function call can never do.
#define ASSERT_RETURN(condition, ret_val) \
if (!(condition)) { \
assert(false && #condition); \
return ret_val; }
// should really be in a do { } while(false) but that's another discussion.
A: The obvious include guards
#ifndef MYHEADER_H
#define MYHEADER_H
...
#endif
A: Compilers can refuse your request to inline.
Macros will always have their place.
Something I find useful is #define DEBUG for debug tracing -- you can leave it 1 while debugging a problem (or even leave it on during the whole development cycle) then turn it off when it is time to ship.
A: You can #define constants on the compiler command line using the -D or /D option. This is often useful when cross-compiling the same software for multiple platforms because you can have your makefiles control what constants are defined for each platform.
A: In my last job, I was working on a virus scanner. To make thing easier for me to debug, I had lots of logging stuck all over the place, but in a high demand app like that, the expense of a function call is just too expensive. So, I came up with this little Macro, that still allowed me to enable the debug logging on a release version at a customers site, without the cost of a function call would check the debug flag and just return without logging anything, or if enabled, would do the logging... The macro was defined as follows:
#define dbgmsg(_FORMAT, ...) if((debugmsg_flag & 0x00000001) || (debugmsg_flag & 0x80000000)) { log_dbgmsg(_FORMAT, __VA_ARGS__); }
Because of the __VA_ARGS__ in the log functions, this was a good case for a macro like this.
Before that, I used a macro in a high security application that needed to tell the user that they didn't have the correct access, and it would tell them what flag they needed.
The Macro(s) defined as:
#define SECURITY_CHECK(lRequiredSecRoles) if(!DoSecurityCheck(lRequiredSecRoles, #lRequiredSecRoles, true)) return
#define SECURITY_CHECK_QUIET(lRequiredSecRoles) (DoSecurityCheck(lRequiredSecRoles, #lRequiredSecRoles, false))
Then, we could just sprinkle the checks all over the UI, and it would tell you which roles were allowed to perform the action you tried to do, if you didn't already have that role. The reason for two of them was to return a value in some places, and return from a void function in others...
SECURITY_CHECK(ROLE_BUSINESS_INFORMATION_STEWARD | ROLE_WORKER_ADMINISTRATOR);
LRESULT CAddPerson1::OnWizardNext()
{
if(m_Role.GetItemData(m_Role.GetCurSel()) == parent->ROLE_EMPLOYEE) {
SECURITY_CHECK(ROLE_WORKER_ADMINISTRATOR | ROLE_BUSINESS_INFORMATION_STEWARD ) -1;
} else if(m_Role.GetItemData(m_Role.GetCurSel()) == parent->ROLE_CONTINGENT) {
SECURITY_CHECK(ROLE_CONTINGENT_WORKER_ADMINISTRATOR | ROLE_BUSINESS_INFORMATION_STEWARD | ROLE_WORKER_ADMINISTRATOR) -1;
}
...
Anyways, that's how I've used them, and I'm not sure how this could have been helped with templates... Other than that, I try to avoid them, unless REALLY necessary.
A: I use macros to easily define Exceptions:
DEF_EXCEPTION(RessourceNotFound, "Ressource not found")
where DEF_EXCEPTION is
#define DEF_EXCEPTION(A, B) class A : public exception\
{\
public:\
virtual const char* what() const throw()\
{\
return B;\
};\
}\
A: Unit test frameworks for C++ like UnitTest++ pretty much revolve around preprocessor macros. A few lines of unit test code expand into a hierarchy of classes that wouldn't be fun at all to type manually. Without something like UnitTest++ and it's preprocessor magic, I don't know how you'd efficiently write unit tests for C++.
A: Let's say we'll ignore obvious things like header guards.
Sometimes, you want to generate code that needs to be copy/pasted by the preprocessor:
#define RAISE_ERROR_STL(p_strMessage) \
do \
{ \
try \
{ \
std::tstringstream strBuffer ; \
strBuffer << p_strMessage ; \
raiseSomeAlert(__FILE__, __FUNCSIG__, __LINE__, strBuffer.str().c_str()) ; \
} \
catch(...){} \
} \
while(false)
which enables you to code this:
RAISE_ERROR_STL("Hello... The following values " << i << " and " << j << " are wrong") ;
And can generate messages like:
Error Raised:
====================================
File : MyFile.cpp, line 225
Function : MyFunction(int, double)
Message : "Hello... The following values 23 and 12 are wrong"
Note that mixing templates with macros can lead to even better results (i.e. automatically generating the values side-by-side with their variable names)
Other times, you need the __FILE__ and/or the __LINE__ of some code, to generate debug info, for example. The following is a classic for Visual C++:
#define WRNG_PRIVATE_STR2(z) #z
#define WRNG_PRIVATE_STR1(x) WRNG_PRIVATE_STR2(x)
#define WRNG __FILE__ "("WRNG_PRIVATE_STR1(__LINE__)") : ------------ : "
As with the following code:
#pragma message(WRNG "Hello World")
it generates messages like:
C:\my_project\my_cpp_file.cpp (225) : ------------ Hello World
Other times, you need to generate code using the # and ## concatenation operators, like generating getters and setters for a property (this is for quite limited cases, though).
Other times, you will generate code that won't compile if used through a function, like:
#define MY_TRY try{
#define MY_CATCH } catch(...) {
#define MY_END_TRY }
Which can be used as
MY_TRY
doSomethingDangerous() ;
MY_CATCH
tryToRecoverEvenWithoutMeaningfullInfo() ;
damnThoseMacros() ;
MY_END_TRY
(still, I have only seen this kind of code used rightly once)
Last, but not least, the famous boost::foreach !!!
#include <string>
#include <iostream>
#include <boost/foreach.hpp>
int main()
{
std::string hello( "Hello, world!" );
BOOST_FOREACH( char ch, hello )
{
std::cout << ch;
}
return 0;
}
(Note: code copy/pasted from the boost homepage)
Which is (IMHO) way better than std::for_each.
So, macros are always useful because they are outside the normal compiler rules. But I find that most of the time when I see one, it is effectively a remnant of C code never translated into proper C++.
A: You can't perform short-circuiting of function call arguments using a regular function call. For example:
#define andm(a, b) (a) && (b)
bool andf(bool a, bool b) { return a && b; }
andm(x, y) // short circuits the operator so if x is false, y would not be evaluated
andf(x, y) // y will always be evaluated
A: Fearing the C preprocessor is like fearing incandescent bulbs just because we now have fluorescent ones. Yes, the former can be {electricity | programmer time} inefficient. Yes, you can get (literally) burned by them. But they can get the job done if you handle them properly.
When you program embedded systems, C is often the only option apart from assembler. After programming on the desktop with C++ and then switching to smaller, embedded targets, you learn to stop worrying about the "inelegancies" of so many bare C features (macros included) and just try to figure out the best and safest usage you can get from those features.
Alexander Stepanov says:
When we program in C++ we should not be ashamed of its C heritage, but make
full use of it. The only problems with C++, and even the only problems with C, arise
when they themselves are not consistent with their own logic.
A: As wrappers for debug functions, to automatically pass things like __FILE__, __LINE__, etc:
#ifdef DEBUG
#define M_DebugLog( msg ) std::cout << __FILE__ << ":" << __LINE__ << ": " << msg
#else
#define M_DebugLog( msg )
#endif
Since C++20 the magic type std::source_location can however be used instead of __LINE__ and __FILE__ to implement an analogue as a normal function (template).
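A minimal sketch of that C++20 replacement (assuming <source_location> is available):
#include <iostream>
#include <source_location>

// A plain function replaces the macro: the defaulted argument is evaluated
// at the call site, so it captures the caller's file and line.
void debug_log(const char* msg,
               std::source_location loc = std::source_location::current())
{
    std::cout << loc.file_name() << ":" << loc.line() << ": " << msg << '\n';
}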
A: Some very advanced and useful stuff can still be built using the preprocessor (macros), which you would never be able to do using the C++ "language constructs", including templates.
Examples:
Making something both a C identifier and a string
Easy way to use variables of enum types as string in C
Boost Preprocessor Metaprogramming
A: If you have a list of fields that get used for a bunch of things, e.g. defining a structure, serializing that structure to/from some binary format, doing database inserts, etc, then you can (recursively!) use the preprocessor to avoid ever repeating your field list.
This is admittedly hideous. But maybe sometimes better than updating a long list of fields in multiple places? I've used this technique exactly once, and it was quite helpful that one time.
Of course the same general idea is used extensively in languages with proper reflection -- just instrospect the class and operate on each field in turn. Doing it in the C preprocessor is fragile, illegible, and not always portable. So I mention it with some trepidation. Nonetheless, here it is...
(EDIT: I see now that this is similar to what @Andrew Johnson said on 9/18; however the idea of recursively including the same file takes the idea a bit further.)
// file foo.h, defines class Foo and various members on it without ever repeating the
// list of fields.
#if defined( FIELD_LIST )
// here's the actual list of fields in the class. If FIELD_LIST is defined, we're at
// the 3rd level of inclusion and somebody wants to actually use the field list. In order
// to do so, they will have defined the macros STRING and INT before including us.
STRING( fooString )
INT( barInt )
#else // defined( FIELD_LIST )
#if !defined(FOO_H)
#define FOO_H
#define DEFINE_STRUCT
// recursively include this same file to define class Foo
#include "foo.h"
#undef DEFINE_STRUCT
#define DEFINE_CLEAR
// recursively include this same file to define method Foo::clear
#include "foo.h"
#undef DEFINE_CLEAR
// etc ... many more interesting examples like serialization
#else // defined(FOO_H)
// from here on, we know that FOO_H was defined, in other words we're at the second level of
// recursive inclusion, and the file is being used to make some particular
// use of the field list, for example defining the class or a single method of it
#if defined( DEFINE_STRUCT )
#define STRING(a) std::string a;
#define INT(a) long a;
class Foo
{
public:
#define FIELD_LIST
// recursively include the same file (for the third time!) to get fields
// This is going to translate into:
//    std::string fooString;
//    long barInt;
#include "foo.h"
#undef FIELD_LIST
    void clear();
};
#undef STRING
#undef INT
#endif // defined(DEFINE_STRUCT)
#if defined( DEFINE_CLEAR )
#define STRING(a) a = "";
#define INT(a) a = 0;
#define FIELD_LIST
void Foo::clear()
{
// recursively include the same file (for the third time!) to get fields.
// This is going to translate into:
//    fooString="";
//    barInt=0;
#include "foo.h"
#undef FIELD_LIST
#undef STRING
#undef INT
}
#endif // defined( DEFINE_CLEAR )
// etc...
#endif // end else clause for defined( FOO_H )
#endif // end else clause for defined( FIELD_LIST )
A: I've used the preprocessor to calculate fixed-point numbers from floating point values used in embedded systems that cannot use floating point in the compiled code. It's handy to have all of your math in Real World Units and not have to think about them in fixed-point.
Example:
// TICKS_PER_UNIT is defined in floating point to allow the conversions to compute during compile-time.
#define TICKS_PER_UNIT 1024.0
// NOTE: The TICKS_PER_x_MS will produce constants in the preprocessor. The (long) cast will
// guarantee there are no floating point values in the embedded code and will produce a warning
// if the constant is larger than the data type being stored to.
// Adding 0.5 to the calculation forces rounding instead of truncation.
#define TICKS_PER_1_MS( ms ) (long)( ( ( ms * TICKS_PER_UNIT ) / 1000 ) + 0.5 )
A: Yet another foreach macros. T: type, c: container, i: iterator
#define foreach(T, c, i) for(T::iterator i=(c).begin(); i!=(c).end(); ++i)
#define foreach_const(T, c, i) for(T::const_iterator i=(c).begin(); i!=(c).end(); ++i)
Usage (concept showing, not real):
void MultiplyEveryElementInList(std::list<int>& ints, int mul)
{
foreach(std::list<int>, ints, i)
(*i) *= mul;
}
int GetSumOfList(const std::list<int>& ints)
{
int ret = 0;
foreach_const(std::list<int>, ints, i)
ret += *i;
return ret;
}
Better implementations available: Google "BOOST_FOREACH"
Good articles available: Conditional Love: FOREACH Redux (Eric Niebler) http://www.artima.com/cppsource/foreach.html
A: Maybe the greatest use of macros is in platform-independent development.
Think about cases of type inconsistency - with macros, you can simply use different header files -- like:
--WIN_TYPES.H
typedef ...some struct
--POSIX_TYPES.h
typedef ...some another struct
--program.h
#ifdef WIN32
#define TYPES_H "WIN_TYPES.H"
#else
#define TYPES_H "POSIX_TYPES.H"
#endif
#include TYPES_H
Much more readable than implementing it in other ways, in my opinion.
A: Seems __VA_ARGS__ has only been mentioned indirectly so far:
When writing generic C++03 code, and you need a variable number of (generic) parameters, you can use a macro instead of a template.
#define CALL_RETURN_WRAPPER(FnType, FName, ...) \
if( FnType theFunction = get_op_from_name(FName) ) { \
return theFunction(__VA_ARGS__); \
} else { \
throw invalid_function_name(FName); \
} \
/**/
Note: In general, the name check/throw could also be incorporated into the hypothetical get_op_from_name function. This is just an example. There might be other generic code surrounding the __VA_ARGS__ call.
Once we get variadic templates with C++11, we can solve this "properly" with a template.
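As a sketch, the C++11 shape of that template (get_op_from_name and invalid_function_name are the same hypothetical names used in the macro above, not a real API):
#include <string>
#include <utility>

template <typename FnType, typename... Args>
auto call_return_wrapper(const std::string& name, Args&&... args)
    -> decltype(std::declval<FnType>()(std::forward<Args>(args)...))
{
    if (FnType theFunction = get_op_from_name(name))
        return theFunction(std::forward<Args>(args)...);
    throw invalid_function_name(name);
}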
A: Often times I end up with code like:
int SomeAPICallbackMethod(long a, long b, SomeCrazyClass c, long d, string e, string f, long double yx) { ... }
int AnotherCallback(long a, long b, SomeCrazyClass c, long d, string e, string f, long double yx) { ... }
int YetAnotherCallback(long a, long b, SomeCrazyClass c, long d, string e, string f, long double yx) { ... }
In some cases I'll use the following to make my life easier:
#define APIARGS long a, long b, SomeCrazyClass c, long d, string e, string f, long double yx
int SomeAPICallbackMethod(APIARGS) { ... }
It comes with the caveat of really hiding the variable names, which can be an issue in larger systems, so this isn't always the right thing to do, only sometimes.
A: I think this trick is a clever use of the preprocessor that can't be emulated with a function:
#define COMMENT COMMENT_SLASH(/)
#define COMMENT_SLASH(s) /##s
#if defined _DEBUG
#define DEBUG_ONLY
#else
#define DEBUG_ONLY COMMENT
#endif
Then you can use it like this:
cout <<"Hello, World!" <<endl;
DEBUG_ONLY cout <<"This is outputed only in debug mode" <<endl;
You can also define a RELEASE_ONLY macro.
A: You need macros for resource identifiers in Visual Studio, as the resource compiler only understands them (i.e., it doesn't work with const or enum).
A: Can you implement this as an inline function?
#define my_free(x) do { free(x); x = NULL; } while (0)
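As a sketch, a C++ function template with a reference parameter gets close, though unlike the macro it only accepts a named pointer variable:
#include <cstdlib>

// Takes the pointer by reference so it can null it out, like the macro.
template <typename T>
inline void my_free(T*& p)
{
    std::free(p);
    p = nullptr;   // use 0/NULL on pre-C++11 compilers
}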
A: You can enable additional logging in a debug build and disable it for a release build without the overhead of a Boolean check. So, instead of:
void Log::trace(const char *pszMsg) {
if (!bDebugBuild) {
return;
}
// Do the logging
}
...
log.trace("Inside MyFunction");
You can have:
#ifdef _DEBUG
#define LOG_TRACE log.trace
#else
#define LOG_TRACE void
#endif
...
LOG_TRACE("Inside MyFunction");
When _DEBUG is not defined, this will not generate any code at all. Your program will run faster and the text for the trace logging won't be compiled into your executable.
A: #define COLUMNS(A,B) [(B) - (A) + 1]
struct
{
char firstName COLUMNS( 1, 30);
char lastName COLUMNS( 31, 60);
char address1 COLUMNS( 61, 90);
char address2 COLUMNS( 91, 120);
char city COLUMNS(121, 150);
};
A: Macros are useful for simulating the syntax of switch statements:
switch(x) {
case val1: do_stuff(); break;
case val2: do_other_stuff();
case val3: yet_more_stuff();
default: something_else();
}
for non-integral value types. In this question:
Using strings in switch statements - where do we stand with C++17?
you'll find answers suggesting some approaches involving lambdas, but unfortunately, it's macros that get us the closest:
SWITCH(x)
CASE val1 do_stuff(); break;
CASE val2 do_other_stuff();
CASE val3 yet_more_stuff();
DEFAULT something_else();
END
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "183"
}
|
Q: How to determine size in bytes of a result set from LINQ to SQL When writing manual SQL it's pretty easy to estimate the size and shape of data returned by a query. I'm increasingly finding it hard to do this with LINQ to SQL queries. Sometimes I find WAY more data than I was expecting - which can really slow down a remote client that is accessing a database directly.
I'd like to be able to run a query and then tell exactly how much data has been returned across the wire, and use this to help me optimize.
I have already hooked up a log using the DataContext.Log method, but that only gives me an indication of the SQL sent, not the data received.
Any tips?
A: Looks like you can grab the SqlConnection of your DataContext and turn on statistics.
One of the statistics is "bytes returned".
MSDN Reference Link
A: Note: You need to cast the connection to a SqlConnection if you have an existing DataContext
((SqlConnection)dc.Connection).StatisticsEnabled = true;
then retrieve the statistics with :
((SqlConnection)dc.Connection).RetrieveStatistics()
A: I found no way to grab the SqlConnection of the DataContext, so I created the SqlConnection manually:
SqlConnection sqlConnection = new SqlConnection("your_connection_string");
// enable statistics
sqlConnection.StatisticsEnabled = true;
// create your DataContext with the SqlConnection
NorthWindDataContext nwContext = new NorthWindDataContext(sqlConnection);
var products = from product in nwContext.Products
               where product.Category.CategoryName == "Beverages"
               select product;
foreach (var product in products)
{
//do something with product
}
// retrieve statistics - for keys see http://msdn.microsoft.com/en-us/library/7h2ahss8(VS.80).aspx
string bytesSent = sqlConnection.RetrieveStatistics()["BytesSent"].ToString();
string bytesReceived = sqlConnection.RetrieveStatistics()["BytesReceived"].ToString();
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: .NET Compact Framework Printing libraries Can anyone point to libraries that can be used for Printing from Compact .Net Framework 1.0?
Criteria:
*
*I need to be able to print Text and Bar codes.
*The library should preferably be upgradable to .Net 2.0 or above with minimal disruption.
*Can be either Open Source [that can be distributed as part of Commercial application] or that can be purchased.
Edit
More information:
*
*We are an ISV and this application is sold to our customers.
*This application is usually installed on Symbol, Opticon devices. But occasionally this is installed on a generic Windows Mobile PDA or Phone devices.
*I want the library to work with Printers from multiple vendors. [I now have printers from O'Neil and Citizen-Systems for testing].
*We want the printers to be connected using bluetooth. I guess the library should in general work with any serial port connections.
*PrinterCE.NetCF from FieldSoftware appears to fit the bill. Thanks ctacke. I am looking for something similar.
Thanks,
Kishore
A: You've not given us much detail, like the device you're using or the printer type you want to print to (local, lan, serial, network, etc), however I'll see if I can at least point you in the right direction.
The de-facto standard for CF printing is PrinterCE from Field Software. PrintBoy from Bachmann Software also works well. I'm not certain if either has the ability to print barcodes though.
Now if you're printing barcodes, that suggests that you're using a device like a Symbol (now Motorola) or Intermec handheld. If that is the case then those manufacturers have their own SDKs that allow printing.
If you are printing to something like a Zebra barcode printer, they typically have some serial PCL commands for printing barcodes as well, so you don't actually need to "print" the barcode. Instead you send the PCL command to tell the printer that the data should be output a barcode instead of text. The printer manufacturer can provide a PCL reference, as the PCL for these types of things isn't standardized.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96218",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Well developed web site architecture using linq to sql? Has anybody found a good web site architecture using LINQ to SQL yet? Any help would be much appreciated!
A: We just finished up an internal IT project banking heavily on Linq2Sql and it paid off. I was a bit skeptical at first, but I think it worked out great in the end. Just remember, the fundamentals don't change.
*
*try to stay as stateless as possible
*keep clean lines between your services and data access
*don't fight linq, use it. If it isn't helping you, you are probably doing something wrong
Our implementation ended up being a hybrid of the Andrew Siemer and Beth Massi approach (a bit heavier on the Andrew side) and in C#
A: What, apart from StackOverflow? ;-)
A: Remember LINQ is a technology that sits atop the typical data access structures. Therefore all the rules that have applied thus far still hold. Just because you can get to data a little more easily in the client app doesn't mean you should throw out the architecture best practices for data access.
A: Rob Conery's MVC Storefront
As others have said, linq-to-sql is no different to any other ORM so the architecture is the same as you would use for NHibernate and others.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Why would I use 2's complement to compare two doubles instead of comparing their differences against an epsilon value? Referenced here and here...Why would I use two's complement over an epsilon method? It seems like the epsilon method would be good enough for most cases.
Update: I'm purely looking for a theoretical reason why you'd use one over the other. I've always used the epsilon method.
Has anyone used the 2's complement comparison successfully? Why? Why Not?
A: the second link you reference mentions an article that has quite a long description of the issue:
http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm
but unless you are tweaking performance I would stick with epsilon so people can debug your code
A: In short, when comparing two floats with unknown origins, picking an epsilon that is valid is almost impossible.
For example:
What is a good epsilon when comparing distance in miles between Atlanta GA, Dallas TX and some place in Ohio?
What is a good epsilon when comparing distance in miles between my left foot, my right foot and the computer under my desk?
EDIT:
Ok, I'm getting a fair number of people not understanding why you wouldn't know what your epsilon is.
Back in the old days of lore, I wrote two programs that worked with NeverWinter Nights (a game made by BioWare). One of the programs took a binary model and converted it to ASCII. The other program took an ASCII model and compiled it into binary. One of the tests I wrote was to take all of BioWare's binary models, decompile them to ASCII and then back to binary. Then I compared my binary version with the original one from BioWare. One of the problems during the comparison was dealing with some of the slight variances in floating point values. So instead of coming up with a bunch of different EPSILONs for each type of floating point number (vertex, normal, etc), I wanted to use something such as this two's complement compare. Thus avoiding the whole multiple EPSILON issue.
The same type of issue can apply to any type of software that processes 3rd party data and then needs to validate their results with the original. In these cases you might not even know what the floating point values represent, you just have to compare them. We ran into this issue with our industrial automation software.
EDIT:
LOL, this has been voted up and down by different people.
I'll boil the problem down to this: given two arbitrary floating point numbers, how do you decide what epsilon to use? You can't.
How can you compare 1e23 and 1.0001e23 with an epsilon and still compare 1e-23 and 5.2e-23 using the same epsilon? Sure, you can do some dynamic epsilon tricks, but that is the whole point to the integer compare (which does NOT require the integers be exact).
The integer compare is able to compare two floats using an epsilon relative to the magnitude of the numbers.
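A hedged sketch of that comparison, assuming 32-bit IEEE-754 floats (NaNs need separate handling):
#include <cstdint>
#include <cstring>

// maxUlps is how many representable floats apart the values may be --
// an epsilon relative to the magnitude of the numbers.
bool almostEqualUlps(float a, float b, uint32_t maxUlps)
{
    uint32_t ua, ub;
    std::memcpy(&ua, &a, sizeof ua);   // avoids pointer-cast aliasing issues
    std::memcpy(&ub, &b, sizeof ub);

    // Remap the bit patterns so unsigned ordering matches float ordering:
    // flip all bits for negatives, set the sign bit for non-negatives.
    ua = (ua & 0x80000000u) ? ~ua : (ua | 0x80000000u);
    ub = (ub & 0x80000000u) ? ~ub : (ub | 0x80000000u);

    uint32_t diff = (ua > ub) ? (ua - ub) : (ub - ua);
    return diff <= maxUlps;
}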
EDIT
Steve, let's look at what you said in the comments:
"But you know what equality means to you... Hence, you should be able to find an appropriate epsilon".
Turn this statement around to say:
"If you know what equality means to you, then you should be able to find an appropriate epsilon."
The whole point to what I am trying to say is that there are applications where we don't know what equality means in the absolute sense, thus we have to resort to a relative compare which is what the integer version is trying to do.
A: The bits method might be faster. I say might because on modern (multicore, highly pipelined) processors it is often impossible to guess what is really faster.
Code the simplest, most obviously correct implementation, then measure, then optimise.
A: When it comes to speed, follow these rules:
*
*If you're not a very experienced developer, don't optimize.
*If you are an experienced developer, don't optimize yet.
Do the easiest method.
Alex
A: Oskar's right. Don't screw with this unless you really, really need that performance.
And you don't. If you were in the situation that did, you wouldn't have needed to ask the question -- you'd already know. If you think you do, then you don't. Your performance problems lie elsewhere. Just use the readable version.
A: Using any method that compares bitwise will result in trouble when fractions are represented by approximations. All floating point numbers with fractions that are not denominated in powers of two (1/2, 1/4, 1/8, 1/65536, &c) are approximated. So, of course, are all irrational numbers.
float third = 1.0f/3.0f;
float two = 2.0f;
float another_two = third*6.0f;
if (two != another_two)
    printf("Approximation!\n");
The only time comparing bitwise would work is when you derive the floating point numbers exactly the same way or they are exact representations (whole numbers, fraction powers of two). Even then, there can be multiple representations of some numbers, though I have never seen this in a working system.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: What is the best place to store a configuration file in a Java web application (WAR)? I create a web application (WAR) and deploy it on Tomcat. In the webapp there is a page with a form where an administrator can enter some configuration data. I don't want to store this data in an DBMS, but just in an XML file on the file system. Where to put it?
I would like to put the file somewhere in the directory tree where the application itself is deployed. Should my configuration file be in the WEB-INF directory? Or put it somewhere else?
And what is the Java code to use in a servlet to find the absolute path of the directory? Or can it be accessed with a relative path?
A: Putting it in WEB-INF will hide the XML file from users who try to access it directly through a URL, so yes, I'd say put it in WEB-INF.
A: I would not store it in the application folder, because a new deployment of the application would overwrite the configuration.
I suggest you have a look at the Preferences API, or write something to the user's folder (the user that is running Tomcat).
A: What we do is to put it in a separate directory on the server (you could use something like /config, /opt/config, /root/config, /home/username/config, or anything you want). When our servlets start up, they read the XML file, get a few things out of it (most importantly DB connection information), and that's it.
I asked about why we did this once.
It would be nice to store everything in the DB, but obviously you can't store DB connection information in the DB.
You could hardcode things in the code, but that's ugly for many reasons. If the info ever has to change you have to rebuild the code and redeploy. If someone gets a copy of your code or your WAR file they would then get that information.
Putting things in the WAR file seems nice, but if you want to change things much it could be a bad idea. The problem is that if you have to change the information, then next time you redeploy it will overwrite the file so anything you didn't remember to change in the version getting built into the WAR gets forgotten.
The file in a special place on the file system thing works quite well for us. It doesn't have any big downsides. You know where it is, it's stored separately, and it makes deploying to multiple machines easy if they all need different config values (since it's not part of the WAR).
The only other solution I can think of that would work well would be keeping everything in the DB except the DB login info. That would come from Java system properties that are retrieved through the JVM. This is the Preferences API thing mentioned by Hans Doggen above. I don't think it was around when our application was first developed; if it was, it wasn't used.
As for the path for accessing the configuration file, it's just a file on the filesystem. You don't need to worry about the web path. So when your servlet starts up it just opens the file at "/config/myapp/config.xml" (or whatever) and it will find the right thing. Just hardcoding the path in for this one seems pretty harmless to me.
A: The answer to this depends on how you intend to read and write that config file.
For example, the Spring framework gives you the ability to use XML configuration files (or Java property files); these can be stored in your classpath (e.g., in the WEB-INF directory), anywhere else on the filesystem, or even in memory. If you were to use Spring for this, then the easiest place to store the config file is in your WEB-INF directory, and then use Spring's ClassPathXmlApplicationContext class to access your configuration file.
But again, it all depends on how you plan to access that file.
A: WEB-INF is a good place to put your config file. Here's some code to get the absolute path of the directory from a servlet.
public void init(ServletConfig servletConfig) throws ServletException {
    super.init(servletConfig);
    String path = servletConfig.getServletContext().getRealPath("/WEB-INF");
    // ... use path to locate your configuration file
}
A: If it is your custom config WEB-INF is a good place for it. But some libraries may require configs to reside in WEB-INF/classes.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "57"
}
|
Q: How do I append to an alist in scheme? Adding an element to the head of an alist (Associative list) is simple enough:
> (cons '(ding . 53) '((foo . 42) (bar . 27)))
((ding . 53) (foo . 42) (bar . 27))
Appending to the tail of an alist is a bit trickier though. After some experimenting, I produced this:
> (define (alist-append alist pair) `(,@alist ,pair))
> (alist-append '((foo . 42) (bar . 27)) '(ding . 53))
'((foo . 42) (bar . 27) (ding . 53))
However, it seems to me, that this isn't the idiomatic solution. So how is this usually done in scheme? Or is this in fact the way?
A: Common Lisp defines a function called ACONS for exactly this purpose, where
(acons key value alist)
is equivalent to:
(cons (cons key value) alist)
This strongly suggests that simply consing onto an alist is idiomatic. Note that this means two things:
*
*As searches are usually performed from front to back, recently added associations take precedence over older ones. This can be used for a naive implementation of both lexical and dynamic environments.
*While consing onto a list is O(1), appending is generally O(n) where n is the length of the list, so the idiomatic usage is best for performance as well as being stylistically preferable.
A: You don't append to an a-list. You cons onto an a-list.
An a-list is logically a set of associations. You don't care about the order of elements in a set. All you care about is presence or absence of a particular element. In the case of an a-list, all you care about is whether there exists an association for a given tag (i.e., a pair whose CAR is the specified value), and, given that association, the associated value (i.e., in this implementation, the CDR of the pair).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Can I create a Visual Studio macro to launch a specific project in the debugger? My project has both client and server components in the same solution file. I usually have the debugger set to start them together when debugging, but it's often the case where I start the server up outside of the debugger so I can start and stop the client as needed when working on client-side only stuff. (this is much faster).
I'm trying to save myself the hassle of poking around in Solution Explorer to start individual projects and would rather just stick a button on the toolbar that calls a macro that starts the debugger for individual projects (while leaving "F5" type debugging alone to start up both processess).
I tried recording, but that didn't really result in anything useful.
So far all I've managed to do is to locate the project item in the solution explorer:
Dim projItem As UIHierarchyItem
projItem = DTE.ToolWindows.SolutionExplorer.GetItem("SolutionName\ProjectFolder\ProjectName").Select(vsUISelectionType.vsUISelectionTypeSelect)
(This is based loosely on how the macro recorder tried to do it. I'm not sure if navigating the UI object model is the correct approach, or if I should be looking at going through the Solution/Project object model instead).
A: Ok. This appears to work from most UI (all?) contexts provided the solution is loaded:
Sub DebugTheServer()
DTE.Windows.Item(Constants.vsWindowKindSolutionExplorer).Activate()
DTE.ActiveWindow.Object.GetItem("Solution\ServerFolder\ServerProject").Select(vsUISelectionType.vsUISelectionTypeSelect)
DTE.Windows.Item(Constants.vsWindowKindOutput).Activate()
DTE.ExecuteCommand("ClassViewContextMenus.ClassViewProject.Debug.Startnewinstance")
End Sub
A: From a C# add-in, the following worked for me:
Dte.Windows.Item(Constants.vsWindowKindSolutionExplorer).Activate();
Dte.ToolWindows.SolutionExplorer.GetItem("SolutionName\\SolutionFolderName\\ProjectName").Select(vsUISelectionType.vsUISelectionTypeSelect);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Batch file to copy files from one directory to another I have two code bases of an application. I need to copy all the files in all the directories with .java from the newer code base, to the older (so I can commit it to svn).
How can I write a batch files to do this?
A: XCOPY /D ?
xcopy c:\olddir\*.java c:\newdir /D /E /Q /Y
A: If you've lots of different instances of this problem to solve, I've had some success with Apache Ant for this kind of copy/update/backup kind of thing.
There is a bit of a learning curve, though, and it does require you to have a Java runtime environment installed.
A: I like Robocopy ("Robust File Copy"). It is a command-line directory replication command. It was available as part of the Windows Resource Kit, and is introduced as a standard feature of Windows Vista and Windows Server 2008.
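For example, a sketch of the equivalent copy with Robocopy (check robocopy /? for the full option list):
robocopy C:\newcode C:\oldcode *.java /S
/S recurses into subdirectories (skipping empty ones), so every .java file under the newer tree is copied over the older one.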
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: What is a better way to create a game loop on the iPhone other than using NSTimer? I am programming a game on the iPhone. I am currently using NSTimer to trigger my game update/render. The problem with this is that (after profiling) I appear to lose a lot of time between updates/renders and this seems to be mostly to do with the time interval that I plug into NSTimer.
So my question is what is the best alternative to using NSTimer?
One alternative per answer please.
A: I don't know about the iPhone in particular, but I may still be able to help:
Instead of simply plugging in a fixed delay at the end of the loop, use the following:
*
*Determine a refresh interval that you would be happy with and that is larger than a single pass through your main loop.
*At the start of the loop, take a current timestamp of whatever resolution you have available and store it.
*At the end of the loop, take another timestamp, and determine the elapsed time since the last timestamp (initialize this before the loop).
*sleep/delay for the difference between your ideal frame time and the time already elapsed this frame.
*At the next frame, you can even try to compensate for inaccuracies in the sleep interval by comparing to the timestamp at the start of the previous loop. Store the difference and add/subtract it from the sleep interval at the end of this loop (sleep/delay can go too long OR too short).
You might want to have an alert mechanism that lets you know if your timing is too tight (i.e., if your sleep time after all the compensating is less than 0, which would mean you're taking more time to process than your frame rate allows). The effect will be that your game slows down. For extra points, you may want to simplify rendering for a while if you detect this happening, until you have enough spare capacity again.
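A sketch of that loop in C++ (std::chrono/std::thread stand in for whatever timing primitives your platform offers; update_and_render is your per-frame work, assumed to be defined elsewhere):
#include <chrono>
#include <thread>

void update_and_render();   // your per-frame work (assumed)

void run_game(bool& running)
{
    using clock = std::chrono::steady_clock;
    const auto frame = std::chrono::milliseconds(16);   // ~60 fps budget
    auto next = clock::now() + frame;

    while (running)
    {
        update_and_render();

        // Sleep only for what is left of this frame's budget; sleep_until
        // naturally compensates for how long the frame actually took.
        std::this_thread::sleep_until(next);
        next += frame;
        if (next < clock::now())        // fell behind: resynchronise
            next = clock::now() + frame;
    }
}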
A: You can get better performance with threads; try something like this:
- (void) gameLoop
{
while (running)
{
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
[self renderFrame];
[pool release];
}
}
- (void) startLoop
{
running = YES;
#ifdef THREADED_ANIMATION
[NSThread detachNewThreadSelector:@selector(gameLoop)
toTarget:self withObject:nil];
#else
timer = [NSTimer scheduledTimerWithTimeInterval:1.0f/60
target:self selector:@selector(renderFrame) userInfo:nil repeats:YES];
#endif
}
- (void) stopLoop
{
[timer invalidate];
running = NO;
}
In the renderFrame method You prepare the framebuffer, draw frame and present the framebuffer on screen. (P.S. There is a great article on various types of game loops and their pros and cons.)
A: Use CADisplayLink. You can find out how in the OpenGL ES template project provided in Xcode: create a project from that template and have a look at the EAGLView class. The example is based on OpenGL, but you can use CADisplayLink for other kinds of games as well.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
}
|
Q: How do I intercept a paste event in an editbox? How do I intercept a paste event in an editbox, possibly before the value is transferred to the object?
A: Look up subclassing windows.
A: If you subclass then intercept the WM_PASTE message you can do what you want, throw the message away to prevent the paste, manipulate the clipboard data, whatever.
A: Subclass the edit box and handle the WM_PASTE message.
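A sketch of that subclassing approach in Win32 C++ (using the comctl32 subclassing helpers; install with SetWindowSubclass on the edit control's HWND):
#include <windows.h>
#include <commctrl.h>
#pragma comment(lib, "comctl32.lib")

LRESULT CALLBACK EditSubclassProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp,
                                  UINT_PTR id, DWORD_PTR refData)
{
    if (msg == WM_PASTE)
    {
        // Inspect or rewrite the clipboard contents here; returning 0
        // without forwarding swallows the paste entirely.
        return 0;
    }
    return DefSubclassProc(hwnd, msg, wp, lp);
}

// Somewhere after the edit control is created:
//   SetWindowSubclass(hEdit, EditSubclassProc, 1, 0);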
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Temporarily load SSL Client Key for Client Authentication in C# I am using the WebBrowser control to add a Web interface to a C# app. My desire is to verify that only this app is able to connect to our Web server, using SSL client certificates.
My idea was to embed the client certificate in the app and just use it when connecting via my app. Anybody have a suggestion on how to do this? Or is the only way to make it work to load the key into the X509Store?
If I put it in X509Store, will it make my key available for general Internet Explorer Usage?
A: Are you sure this is what you want to do? If you embed the private key in your application (as your approach entails), an attacker can extract it and use it to authenticate their rogue software.
A server cannot authenticate client software. It can only test whether a client possesses some secret. When you embed a private key in your client and distribute it, it will not be a secret anymore.
I'd recommend authenticating users of your software, rather than the software itself. You need to let users generate their own secret, whether it is a password or a private key, and give them an incentive to protect it.
A: So, several thoughts here:
1.
I agree with 'erickson': validating that ONLY your app can communicate with the server is nearly impossible with your current design. It's just a matter of time before someone reverse engineers your app, and then it's game over (if that's your only form of security). If you want to validate that it's your app and a valid user, then you need to authenticate the user as well as have some mechanism of checking the signature of the app in question (which I don't believe is possible in a client-server model... after all, I can always lie and say that my 'hackyou' app has the same signature as your 'realapp', and you can't verify that from the server side).
2.
Remember the WebBrowser control is essentially a wrapper around IE, so without some tricks (which I'll get to in a sec) you would have to add the cert to the user store.
3.
Here's a hacky way to accomplish what you're asking (even though it's a bad idea):
*
*First use the WebRequest.Create to create a HttpWebRequest object
*Manually load a X509Certificate2 object from either a file or the binary stream encoded in the program
*use the HttpWebRequest.ClientCertificates to add your cert to the webrequest
*Send the request, get the response
*Send the response to the WebBrowser by pushing the ResponseStream of the HttpWebResponse to the DocumentStream of the WebBrowser
This essentially means that you will have to write some wrapper classes to handle the Requests and Responses to and from the Server and are just using the WebBrowser to handling the viewing of the HTML.
In reality, you need to redesign and look at the threats you're trying to handle!
A: The intent of using the key is not so much to validate the users as to restrict access to users of the app instead of using any WebBrowser. This is sort of an intranet behavior over the public internet.
This is a poor man's DRM. The losses due to people extracting the key are not that important. I think the risk of this happening is low and what we could lose is minimal.
Nevertheless, if there is any other idea to restrict access to the WebServer to only users of the App, I am open to any suggestions. Basically my desire is not to have a public WebServer wide open to be read by anyone, but access over the public network from diverse places is necessary, so setting up an intranet infrastructure is not possible either.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Visual Studio: Detecting unneeded Assemblies On larger and/or long running projects, I tend to reference many assemblies and namespaces, and often I end up removing some functionality later on or moving it into a different project.
I just wonder, is there a way to check every project (heck, every .cs file) in my whole Visual Studio solution and get a list of all referenced Assemblies and Namespaces that are not actually being used and can be safely removed? I know that ReSharper can do it for a single Code File, but I did not see an option to check all files or to check for unneeded Assemblies.
Using Visual Studio 2005 and 2008 Professional if that matters.
Edit: Thanks so far. The problem with ReSharper or "Remove and Re-add if build breaks" is that it's quite tedious on every single file and assembly (my project has about 120 .cs files in 7 assemblies, and references a total of 18 assemblies outside of the solution), so ideally I'm looking for something "one-click". Big bonus points for some automatic way that can be used in build scripts to generate a report :)
A: Resharper will do this for you and you can set it up in the Clean Code option that you can run solution wide ;o)
A: If you have ReSharper then, select the solution, right click and select cleanup code. Resharper will then go through every code file in the solution.
As to removing project references, when the compiler runs it won't add the reference if no code uses that dll.
A: If you have ReSharper installed, then from within the Solution Explorer you can right-click on a reference and click Find Dependent Code. If it comes back with a dialog of results then you're using that reference/assembly. If you get the message "Code dependent on module module name not found." Then you should be OK to remove that reference/assembly because it's not being used.
A: (Cross-posted from here)
Given that VisualStudio (or is it msbuild?) detects unused references and doesn't include them in the output file, you can write a script which parses the references out of the csproj and compares that with the referenced assemblies detected by reflection on the project output.
If you're motivated...
A: I found this question while searching in Google and looking for a way to remove unused "Using" statements from my code.
Refactor is great, but it isn't available to me. However, as it turns out Visual Studio 2008 does this on its own.
Here are the steps:
*
*Load the code in Visual Studio.
*Right-click anywhere on the code page.
*Click "Organize Usings" from the menu.
*Click "Remove Unused Usings"
Done. I realize this doesn't answer the original question (doing this en masse for an entire project) but it does answer my question.
A: Removing unused references is a feature Visual Studio 2008 already supports. Unfortunately, only for VB .NET projects.
I have opened a suggestion on Microsoft Connect to get this feature for C# projects too:
http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=510326
If you like this feature as well then you might vote my suggestion.
A: As was mentioned by previous responses, Resharper works well for this as well as a variety of other cases such as ways to make code cleaner and generate certain things for you. The best way to really figure out all the advantages of Resharper is to download the trial, print out a Keymap Cheatsheet and stick it next to you where you're developing at.
Long story short, it does solve this problem but it does a heck of a lot more than just that.
A: Any decent code profiler will do this for you. I like DevPartner personally.
A: It won't get rid of any references in code to your referenced assemblies (I think), but you can go to the properties of each of your projects, got to References, and click "Unused References...". Visual Studio then gives you the option to remove them right there.
No way to do it at the solution level though.
A: Yeah, I don't think there is one. I just delete some I don't think are needed, then build :/
BTW, deleting assemblies that are still referenced by using statements will give you errors. You can use the Visual Studio Power Commands to remove and sort usings, so do that first, then delete random assemblies :D
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: In C++ I Cannot Grasp Pointers and Classes I'm fresh out of college and have been working in C++ for some time now. I understand all the basics of C++ and use them, but I'm having a hard time grasping more advanced topics like pointers and classes. I've read some books and tutorials and I understand the examples in them, but then when I look at some advanced real life examples I cannot figure them out. This is killing me because I feel like it's keeping me from bringing my C++ programming to the next level. Did anybody else have this problem? If so, how did you break through it?
Does anyone know of any books or tutorials that really describe pointers and class concepts well?
or maybe some example code with good descriptive comments using advanced pointer and class techniques?
any help would be greatly appreciated.
A: This link has a video describing how pointers work, with claymation. Informative, and easy to digest.
This page has some good information on the basic of classes.
A: I used to have a problem understanding pointers in Pascal, way back :) Once I started doing assembler, where pointers were really the only way to access memory, it just hit me. It might sound like a long shot, but trying out assembler (which is always a good idea if you want to understand what computers are really about) will probably teach you pointers. Classes - well, I don't understand your problem - was your schooling pure structured programming? A class is just a logical way of looking at real-life models - you're trying to solve a problem which could be summed up in a number of objects/classes.
A: Pointers and classes are completely different topics so I wouldn't really lump them in together like this. Of the two, I would say pointers are more fundamental.
A good exercise for learning about what pointers are is the following:
*
*create a linked list
*iterate through it from start to finish
*reverse it so that the head is now the back and the back is now the head
Do it all on a whiteboard first. If you can do this easily, you should have no more problems understanding what pointers are.
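Once it works on the whiteboard, a minimal C++ sketch of the reversal step might look like this (my illustration, not part of the original exercise; the node type is made up):
struct Node {
    int value;
    Node* next;
};

// Reverse a singly linked list in place; returns the new head.
Node* reverse(Node* head) {
    Node* prev = 0;              // null: will become the old head's "next"
    while (head != 0) {
        Node* next = head->next; // remember the rest of the list
        head->next = prev;       // flip this node's arrow backwards
        prev = head;             // step prev forward
        head = next;             // step head forward
    }
    return prev;                 // the old tail is the new head
}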
A: Pointers and classes aren't really advanced topics in C++. They are pretty fundamental.
For me, pointers solidified when I started drawing boxes with arrows. Draw a box for an int. And int* is now a separate box with an arrow pointing to the int box.
So:
int foo = 3; // integer
int* bar = &foo; // assigns the address of foo to my pointer bar
With my pointer's box (bar) I have the choice of either looking at the address inside the box. (Which is the memory address of foo). Or I can manipulate whatever I have an address to. That manipulation means I'm following that arrow to the integer (foo).
*bar = 5; // asterisk means "dereference" (follow the arrow), foo is now 5
bar = 0; // I just changed the address that bar points to
Classes are another topic entirely. There's some books on object oriented design, but I don't know good ones for beginners of the top of my head. You might have luck with an intro Java book.
A: Understanding Pointers in C/C++
Before one can understand how pointers work, it is necessary to understand how variables are stored and accessed in programs. Every variable has 2 parts to it - (1) the memory address where the data is stored and (2) the value of the data stored.
The memory address is often referred to as the lvalue of a variable, and the value of the data stored is referred to as the rvalue (l and r meaning left and right).
Consider the statement:
int x = 10;
Internally, the program associates a memory address with the variable x. In this case, let's assume that the program assigns x to reside at the address 1001 (not a realistic address, but chosen for simplicity). Therefore, the lvalue (memory address) of x is 1001, and the rvalue (data value) of x is 10.
The rvalue is accessed by simply using the variable “x”. In order to access the lvalue, the “address of” operator (‘&’) is needed. The expression ‘&x’ is read as "the address of x".
Expression Value
----------------------------------
x 10
&x 1001
The value stored in x can be changed at any time (e.g. x = 20), but the address of x (&x) can never be changed.
A pointer is simply a variable that can be used to modify another variable. It does this by having a memory address for its rvalue. That is, it points to another location in memory.
Creating a pointer to “x” is done as follows:
int* xptr = &x;
The “int*” tells the compiler that we are creating a pointer to an integer value. The “= &x” part tells the compiler that we are assigning the address of x to the rvalue of xptr. Thus, we are telling the compiler that xptr “points to” x.
Assuming that xptr is assigned to a memory address of 1002, then the program’s memory might look like this:
Variable lvalue rvalue
--------------------------------------------
x 1001 10
xptr 1002 1001
The next piece of the puzzle is the "indirection operator" (‘*’), which is used as follows:
int y = *xptr;
The indirection operator tells the program to interpret the rvalue of xptr as a memory address rather than a data value. That is, the program looks for the data value (10) stored at the address provided by xptr (1001).
Putting it all together:
Expression Value
--------------------------------------------
x 10
&x 1001
xptr 1001
&xptr 1002
*xptr 10
Now that the concepts have been explained, here is some code to demonstrate the power of pointers:
#include <stdio.h>

int main(void)
{
    int x = 10;
    int *xptr = &x;

    printf("x = %d\n", x);
    printf("&x = %d\n", (int)&x);   /* casting keeps %d honest on 32-bit; %p is the portable choice */
    printf("xptr = %d\n", (int)xptr);
    printf("*xptr = %d\n", *xptr);

    *xptr = 20;

    printf("x = %d\n", x);
    printf("*xptr = %d\n", *xptr);
    return 0;
}
For output you would see (Note: the memory address will be different each time):
x = 10
&x = 3537176
xptr = 3537176
*xptr = 10
x = 20
*xptr = 20
Notice how assigning a value to ‘*xptr’ changed the value of ‘x’. This is because ‘*xptr’ and ‘x’ refer to the same location in memory, as evidenced by ‘&x’ and ‘xptr’ having the same value.
A: Pointers already seem to be addressed (no pun intended) in other answers.
Classes are fundamental to OO. I had tremendous trouble wrenching my head into OO - like, ten years of failed attempts. The book that finally helped me was Craig Larman's "Applying UML and Patterns". I know it sounds as if it's about something different, but it really does a great job of easing you into the world of classes and objects.
A: We were just discussing some of the aspects of C++ and OO at lunch, someone (a great engineer actually) was saying that unless you have a really strong programming background before you learn C++, it will literally ruin you.
I highly recommend learning another language first, then shifting to C++ when you need it. It's not like there is anything great about pointers; they are simply a vestigial piece left over from when it was difficult for a compiler to convert operations to assembly efficiently without them.
These days, if a compiler can't optimize an array operation better than you can using pointers, your compiler is broken.
Please don't get me wrong, I'm not saying C++ is horrible or anything and don't want to start an advocacy discussion, I've used it and use it occasionally now, I'm just recommending you start with something else.
It's really NOT like learning to drive a manual car then easily being able to apply that to an automatic, it's more like learning to drive on one of those huge construction cranes then assuming that will apply when you start to drive a car--then you find yourself driving your car down the middle of the street at 5mph with your emergency lights on.
[edit] reviewing that last paragraph--I think that may have been my most accurate analogy ever!
A: For Pointers:
I found this post had a very thoughtful discussion about pointers. Maybe that would help. Are you familiar with references, such as in C#? That is, something that actually refers to something else? That's probably a good start for understanding pointers.
Also, look at Kent Fredric's post below on another way to introduce yourself to pointers.
A: To understand pointers, I can't recommend the K&R book highly enough.
A: The book that cracked pointers for me was Illustrating Ansi C by Donald Alcock. Its full of hand-drawn-style box and arrow diagrams that illustrate pointers, pointer arithmetic, arrays, string functions etc...
Obviously it's a 'C' book, but for core fundamentals it's hard to beat.
A: From lassevek's response to a similar question on SO:
Pointers are a concept that for many can be confusing at first, in particular when it comes to copying pointer values around and still referencing the same memory block.
I've found that the best analogy is to consider the pointer as a piece of paper with a house address on it, and the memory block it references as the actual house. All sorts of operations can thus be easily explained:
*
*Copying a pointer value: just write the address on a new piece of paper
*Linked lists: a piece of paper at the house with the address of the next house on it
*Freeing the memory: demolish the house and erase the address
*Memory leak: you lose the piece of paper and cannot find the house
*Freeing the memory but keeping a (now invalid) reference: demolish the house, erase one of the pieces of paper, but keep another piece of paper with the old address on it; when you go to the address, you won't find a house, but you might find something that resembles the ruins of one
*Buffer overrun: you move more stuff into the house than you can possibly fit, spilling into the neighbour's house
A: Learn assembly language and then learn C. Then you will know what the underlying principles of the machine are (and therefore pointers).
Pointers and classes are fundamental aspects of C++. If you don't understand them then it means that you don't really understand C++.
Personally I held back on C++ for several years until I felt I had a firm grasp of C and what was happening under the hood in assembly language. Although this was quite a long time ago now I think it really benefited my career to understand how the computer works at a low-level.
Learning to program can take many years, but you should stick with it because it is a very rewarding career.
A: There's no substitute for practicing.
It's easy to read through a book or listen to a lecture and feel like you're following what's going on.
What I would recommend is taking some of the code examples (I assume you have them on disk somewhere), compile them and run them, then try to change them to do something different.
*
*Add another subclass to a hierarchy
*Add a method to an existing class
*Change an algorithm that iterates forward through a collection to go backward instead.
I don't think there's any "silver bullet" book that's going to do it.
For me, what drove home what pointers meant was working in assembly, and seeing that a pointer was actually just an address, and that having a pointer didn't mean that what it pointed to was a meaningful object.
A: In a sense, you can consider "pointers" to be one of the two most fundamental types in software - the other being "values" (or "data") - that exist in a huge block of uniquely-addressable memory locations. Think about it. Objects and structs etc. don't really exist in memory; only values and pointers do. In fact, a pointer is a value too... the value of a memory address, which in turn contains another value... and so on.
So, in C/C++, when you declare an "int" (intA), you are defining a 32bit chunk of memory that contains a value - a number. If you then declare an "int pointer" (intB), you are defining a 32bit chunk of memory that contains the address of an int. I can assign the latter to point to the former by stating "intB = &intA", and now the 32bits of memory defined as intB, contains an address corresponding to intA's location in memory.
When you "dereference" the intB pointer, you are looking at the address stored within intB's memory, finding that location, and then looking at the value stored there (a number).
Commonly, I have encountered confusion when people lose track of exactly what it is they're dealing with as they use the "&", "*" and "->" operators - is it an address, a value or what? You just need to keep focused on the fact that memory addresses are simply locations, and that values are the binary information stored there.
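To keep those three operators straight, here is a tiny sketch (added for illustration; the struct and names are invented):
struct Point { int x; int y; };

Point p = { 3, 4 };
Point* ptr = &p;     // '&' takes the address: ptr now holds p's location

int a = (*ptr).x;    // '*' follows the address; '.' then reads the member
int b = ptr->x;      // '->' is shorthand for exactly the same thing

ptr->y = 10;         // writing through the pointer changes p.y itself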
A: For pointers and classes, here is my analogy. I'll use a deck of cards. The deck of cards has a face value and a type (9 of hearts, 4 of spades, etc.). So in our C++ like programming language of "Deck of Cards" we'll say the following:
HeartCard card = 4; // 4 of hearts!
Now, you know where the 4 of hearts is because by golly, you're holding the deck, face up in your hand, and it's at the top! So in relation to the rest of the cards, we'll just say the 4 of hearts is at BEGINNING. So, if I asked you what card is at BEGINNING, you would say, "The 4 of hearts of course!". Well, you just "pointed" me to where the card is. In our "Deck of Cards" programming language, you could just as well say the following:
HeartCard card = 4; // 4 of hearts!
print &card // the address is BEGINNING!
Now, turn your deck of cards over. The back side is now BEGINNING and you don't know what the card is. But, let's say you can make it whatever you want because you're full of magic. Let's do this in our "Deck of Cards" language!
HeartCard *pointerToCard = MakeMyCard( "10 of hearts" );
print pointerToCard // the value of this is BEGINNING!
print *pointerToCard // this will be 10 of hearts!
Well, MakeMyCard( "10 of hearts" ) was you doing your magic and knowing that you wanted to point to BEGINNING, making the card a 10 of hearts! You turn your card over and, voila! Now, the * may throw you off. If so, check this out:
HeartCard *pointerToCard = MakeMyCard( "10 of hearts" );
HeartCard card = 4; // 4 of hearts!
print *pointerToCard; // prints 10 of hearts
print pointerToCard; // prints BEGINNING
print card; // prints 4 of hearts
print &card; // prints END - the 4 of hearts used to be on top but we flipped over the deck!
As for classes, we've been using classes in the example by defining a type as HeartCard. We know what a HeartCard is... It's a card with a value and the type of heart! So, we've classified that as a HeartCard. Each language has a similar way of defining or "classifying" what you want, but they all share the same concept! Hope this helped...
A: In the case of classes I had three techniques that really helped me make the jump into real object oriented programming.
The first was I worked on a game project that made heavy use of classes and objects, with heavy use of generalization (kind-of or is-a relationship, ex. student is a kind of person) and composition (has-a relationship, ex. student has a student loan). Breaking apart this code took a lot of work, but really brought things into perspective.
The second thing that helped was in my System Analysis class, where I had to make UML class diagrams (http://www.agilemodeling.com/artifacts/classDiagram.htm). I found these really helped me understand the structure of classes in a program.
Lastly, I help tutor students at my college in programming. All I can really say about this is you learn a lot by teaching and by seeing other people's approach to a problem. Many times a student will try things that I would never have thought of, but usually make a lot of sense and they just have problems implementing their idea.
My best word of advice is it takes a lot of practice, and the more you program the better you will understand it.
A: Pretend a pointer is an array address.
x = 500; // memory address for hello;
MEMORY[x] = "hello";
print MEMORY[x];
It's a graphic oversimplification, but for the most part, as long as you never want to know what that number is or set it by hand, you should be fine.
Back when I understood C I had a few macros I had which more or less permitted you to use pointers just like they were an array index in memory. But I've long since lost that code and long since forgotten.
I recall it started with
#define MEMORY 0
#define MEMORYADDRESS(a) (*(a))
and that on its own is hardly useful. Hopefully somebody else can expand on that logic.
A: The best book I've read on these topics is Thinking in C++ by Bruce Eckel. You can download it for free here.
A: For classes:
The breakthrough moment for me was when I learned about interfaces. The idea of abstracting away the details of how you solved a problem, and exposing just a list of methods that interact with the class, was very insightful.
In fact, my professor explicitly told us that he would grade our programs by plugging our classes into his test harness. Grading would be done based on the requirements he gave to us and whether the program crashed.
Long story short, classes let you wrap up functionality and call it in a cleaner manner (most of the time, there are always exceptions)
A: One of the things that really helped me understand these concepts is to learn UML - the Unified Modeling Language. Seeing concepts of object-oriented design in a graphical format really helped me learn what they mean. Sometimes trying to understand these concepts purely by looking at what source code implements them can be difficult to comprehend.
Seeing object-oriented paradigms like inheritance in graphical form is a very powerful way to grasp the concept.
Martin Fowler's UML Distilled is a good, brief introduction.
A: To better understand pointers, I think, it may be useful to look at how the assembly language works with pointers. The concept of pointers is really one of the fundamental parts of the assembly language and x86 processor instruction architecture. Maybe it'll kind of let you fell like pointers are a natural part of a program.
As to classes, aside from the OO paradigm I think it may be interesting to look at classes from a low-level binary perspective. They aren't that complex in this respect on the basic level.
You may read Inside the C++ Object Model if you want to get a better understanding of what is underneath C++ object model.
A: Classes are relatively easy to grasp; OOP can take you many years. Personally, I didn't fully grasp true OOP until last year-ish. It is too bad that Smalltalk isn't as widespread in colleges as it should be. It really drives home the point that OOP is about objects trading messages, instead of classes being self-contained global variables with functions.
If you truly are new to classes, then the concept can take a while to grasp. When I first encountered them in 10th grade, I didn't get it until I had someone who knew what they were doing step through the code and explain what was going on. That is what I suggest you try.
A: The point at which I really got pointers was coding TurboPascal on a FatMac (around 1984 or so) - which was the native Mac language at the time.
The Mac had an odd memory model whereby the address of an allocated block was itself stored in a pointer on the heap, but the location of that pointer was not guaranteed either; instead the memory-handling routines returned a pointer to the pointer - referred to as a handle. Consequently, to access any part of the allocated memory it was necessary to dereference the handle twice. It took a while, but constant practice eventually drove the lesson home.
Pascal's pointer handling is easier to grasp than C++'s, where the syntax doesn't help the beginner. If you are really and truly stuck understanding pointers in C, then your best option might be to obtain a copy of a Pascal compiler and try writing some basic pointer code in it (Pascal is near enough to C that you'll get the basics in a few hours). Linked lists and the like would be a good choice. Once you're comfortable with those, return to C++, and with the concepts mastered you'll find that the cliff won't look so steep.
A: Did you read Bjarne Stroustrup's The C++ Programming Language? He created C++.
The C++ FAQ Lite is also good.
A: You may find this article by Joel instructive. As an aside, if you've been "working in C++ for some time" and have graduated in CS, you may have gone to a JavaSchool (I'd argue that you haven't been working in C++ at all; you've been working in C but using the C++ compiler).
Also, just to second the answers of hojou and nsanders, pointers are very fundamental to C++. If you don't understand pointers, then you don't understand the basics of C++ (acknowledging this fact is the beginning of understanding C++, by the way). Similarly, if you don't understand classes, then you don't understand the basics of C++ (or OO for that matter).
For pointers, I think drawing with boxes is a fine idea, but working in assembly is also a good idea. Any instructions that use relative addressing will get you to an understanding of what pointers are rather quickly, I think.
As for classes (and object-oriented programming more generally), I would recommend Stroustrups "The C++ Programming Language" latest edition. Not only is it the canonical C++ reference material, but it also has quite a bit of material on a lot of other things, from basic object-oriented class hierarchies and inheritance all the way up to design principles in large systems. It's a very good read (if not a little thick and terse in spots).
A: Pointers are not some sort of magical stuff, you're using them all the time!
When you say:
int a;
and the compiler generates storage for 'a', you're practically saying that you're declaring an int and you want to name its memory location 'a'.
When you say:
int *a;
you're declaring a variable that can hold a memory location of an int.
It's that simple. Also, don't be scared of pointer arithmetic; just always have in mind a "memory map" when you're dealing with pointers, and think in terms of walking through memory addresses.
Classes in C++ are just one way of defining abstract data types. I'd suggest reading a good OOP book to understand the concept, then, if you're interested, learn how C++ compilers generate code to simulate OOP. But this knowledge will come in time, if you stick with C++ long enough :)
A: Your problem seems to be the C core in C++, not C++ itself. Get yourself the Kernighan & Ritchie (The C Programming Language). Inhale it. It's very good stuff, one of the best programming language books ever written.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: What are some popular naming conventions for Unit Tests? General
*
*Follow the same standards for all tests.
*Be clear about what each test state is.
*Be specific about the expected behavior.
Examples
1) MethodName_StateUnderTest_ExpectedBehavior
public void Sum_NegativeNumberAs1stParam_ExceptionThrown()
public void Sum_NegativeNumberAs2ndParam_ExceptionThrown()
public void Sum_SimpleValues_Calculated()
Source: Naming standards for Unit Tests
2) Separating Each Word By Underscore
public void Sum_Negative_Number_As_1st_Param_Exception_Thrown()
public void Sum_Negative_Number_As_2nd_Param_Exception_Thrown()
public void Sum_Simple_Values_Calculated()
Other
*
*End method names with Test
*Start method names with class name
A: I am pretty much with you on this one man. The naming conventions you have used are:
*
*Clear about what each test state is.
*Specific about the expected behaviour.
What more do you need from a test name?
Contrary to Ray's answer I don't think the Test prefix is necessary. It's test code, we know that. If you need to do this to identify the code, then you have bigger problems, your test code should not be mixed up with your production code.
As for length and use of underscores: it's test code, who the hell cares? Only you and your team will see it; as long as it is readable and clear about what the test is doing, carry on! :)
That said, I am still quite new to testing and blogging my adventures with it :)
A: The first set of names is more readable to me, since the CamelCasing separates words and the underbars separate parts of the naming scheme.
I also tend to include "Test" somewhere, either in the function name or the enclosing namespace or class.
A: This is also worth a read: Structuring Unit Tests
The structure has a test class per class being tested. That’s not so unusual. But what was unusual to me was that he had a nested class for each method being tested.
e.g.
using Xunit;
public class TitleizerFacts
{
public class TheTitleizerMethod
{
[Fact]
public void NullName_ReturnsDefaultTitle()
{
// Test code
}
[Fact]
public void Name_AppendsTitle()
{
// Test code
}
}
public class TheKnightifyMethod
{
[Fact]
public void NullName_ReturnsDefaultTitle()
{
// Test code
}
[Fact]
public void MaleNames_AppendsSir()
{
// Test code
}
[Fact]
public void FemaleNames_AppendsDame()
{
// Test code
}
}
}
And here is why:
Well for one thing, it’s a nice way to keep tests organized. All the
tests (or facts) for a method are grouped together. For example, if
you use the CTRL+M, CTRL+O shortcut to collapse method bodies, you can
easily scan your tests and read them like a spec for your code.
I also like this approach:
MethodName_StateUnderTest_ExpectedBehavior
So perhaps adjust to:
StateUnderTest_ExpectedBehavior
Because each test will already be in a nested class
A: I tend to use the convention of MethodName_DoesWhat_WhenTheseConditions so for example:
Sum_ThrowsException_WhenNegativeNumberAs1stParam
However, what I do see a lot is to make the test name follow the unit testing structure of
*
*Arrange
*Act
*Assert
Which also follows the BDD / Gherkin syntax of:
*
*Given
*When
*Then
which would be to name the test in the manner of: UnderTheseTestConditions_WhenIDoThis_ThenIGetThis
so to your example:
WhenNegativeNumberAs1stParam_Sum_ThrowsAnException
However, I do much prefer putting the method name being tested first, because then the tests can be arranged alphabetically, or appear alphabetically sorted in the member dropdown box in Visual Studio, and all the tests for one method are grouped together.
In any case, I like separating the major sections of the test name with underscores, as opposed to every word, because I think it makes it easier to read and get the point of the test across.
In other words, I like: Sum_ThrowsException_WhenNegativeNumberAs1stParam better than Sum_Throws_Exception_When_Negative_Number_As_1st_Param.
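As an illustration (my addition, not from the original answer), a complete test in that style might look like the following, assuming NUnit and a hypothetical Calculator class:
[TestFixture]
public class CalculatorTests
{
    [Test]
    [ExpectedException(typeof(ArgumentOutOfRangeException))]
    public void Sum_ThrowsException_WhenNegativeNumberAs1stParam()
    {
        // Arrange
        Calculator calculator = new Calculator();

        // Act - expected to throw before the method returns
        calculator.Sum(-1, 5);
    }
}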
A: I name my test methods like other methods, using "PascalCasing" without any underscores or separators. I leave the postfix Test off the method, because it adds no value. That the method is a test method is indicated by the TestMethod attribute.
[TestMethod]
public void CanCountAllItems() {
// Test the total count of items in collection.
}
Due to the fact that each test class should only test one other class, I leave the name of the class out of the method name. The class that contains the test methods is named like the class under test, with the postfix "Tests".
[TestClass]
public class SuperCollectionTests {
// Any test methods that test the class SuperCollection
}
For methods that test for exceptions or actions that are not possible, I prefix the test method with the word Cannot.
[TestMethod]
[ExpectedException(typeof(ArgumentException))]
public void CannotAddSameObjectAgain() {
// Cannot add the same object again to the collection.
}
My naming conventions are based on the article "TDD Tips: Test Naming Conventions & Guidelines" by Bryan Cook. I found this article very helpful.
A: I use a 'T' prefix for test namespaces, classes and methods.
I try to be neat and create folders that replicate the namespaces, then create a tests folder or separate project for the tests and replicate the production structure for the basic tests:
AProj
Objects
AnObj
AProp
Misc
Functions
AFunc
Tests
TObjects
TAnObj
TAnObjsAreEqualUnderCondition
TMisc
TFunctions
TFuncBehavesUnderCondition
I can easily see that something is a test, I know exactly what original code it pertains to, (if you can't work that out, then the test is too convoluted anyway).
It looks just like the interfaces naming convention, (I mean, you don't get confused with things starting with 'I', nor will you with 'T').
It's easy to just compile with or without the tests.
It's good in theory anyway, and works pretty well for small projects.
A: As long as you follow a single practice, it doesn't really matter. Generally, I write a single unit test per method that covers all of its variations (I have simple methods ;) and then write more complex sets of tests for methods that require it. My naming structure is thus usually test (a holdover from JUnit 3).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "208"
}
|
Q: How can I restore svn control if the .svn folder has been damaged? I've got a couple of large checkouts where the .svn folder has become damaged, so I'm getting an error, "Cleanup failed to process the following path...", and I can no longer commit or update files in that directory.
I'd just delete and do the checkout again but the whole directory is over a gig.
Is there a tool that will restore the .svn folders for specific folders without having to download everything?
I understand that it's going to have to download all the files in that one folder so that it can determine if they've been changed... but subdirectories with valid .svn folders should be fine.
Oh.. I'm a big fan of TortoiseSVN or the command line for linux.
Thoughts?
A: If you know which folder has the damaged .svn directory, you can just delete that one directory and run an svn update again. You may have to delete the whole directory including its current contents. Of course, if the folder with the damaged .svn directory is the one containing a gigabyte, then you're back where you started.
A: Make a backup of the folder that has the missing .svn
Then delete the folder
If it is the root of the checkout, you will have to re-checkout
If it is not the root, just run an update from a directory above.
Then move the backup folder on top of it. (Ideally do not move back the .svn folders)
Continue working and be sure to update/commit!
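A rough shell sketch of those steps (my illustration; directory names are placeholders):
# back up the folder whose .svn metadata is damaged
mv broken-dir broken-dir.bak

# restore a clean copy from the repository (run from the parent directory)
svn update broken-dir

# bring local changes back without touching the fresh .svn folders
rsync -a --exclude='.svn' broken-dir.bak/ broken-dir/

# review, then commit as usual
svn status broken-dir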
A: In case you have changes to the files, and cannot delete them, you can use the Subversion 1.5 feature that allows you to 'checkout with obstructions'.
Just delete the .svn directory in this directory and:
(you don't need to delete inside directories when using --depth files, thanks Eric)
In case the broken directory was the top directory of the working copy:
svn checkout --depth files --force REPOS WC
And if the directory above the broken one is still versioned run:
svn update --depth files --force WC
in that directory.
In both samples REPOS is the url in the repository that matches the broken directory, and WC is the path to the directory.
Files that were originally modified will be in the modified state after this.
A: I've hit this in the past and found no working solution except the "nuclear option" (i.e. delete the directory and re-checkout).
Not sure if this is your problem, but my corruption was being caused by an on-access virus scanner on the same machine as SVN server.
A: If the subdirectories and OK and it's the subdirectories that are large, you could try a non-recursive fresh checkout.
A: The selected solution worked for me to restore the top-level .svn folder, but it doesn't recognize the child objects, so everything seems foreign to SVN at this point, despite versioning being intact in subfolders.
A: I encountered the same error today. It happened when I tried to switch branches, and the switch failed to delete one of the files that was not in the svn repository. After that, the folder was locked and I couldn't use any command to get it working again.
I basically deleted what I had and redid the checkout. It is time consuming, but I really wanted to make sure svn was clean before starting work again. Thanks!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
}
|
Q: How do you get the root namespace of an assembly? Given an instance of System.Reflection.Assembly.
A: I have come across this dilemma plenty of times when I want to load a resource from the current assembly by its manifest resource stream.
The fact is that if you embed a file as a resource in your assembly using Visual Studio its manifest resource name will be derived from the default namespace of the assembly as defined in the Visual Studio project.
The best solution I've come up with (to avoid hardcoding the default namespace as a string somewhere) is to simply ensure your resource loading code is ALWAYS happening from inside a class that's also in the default namespace and then the following near-generic approach may be used.
This example is loading an embedded schema.
XmlSchema mySchema;
string resourceName = "MyEmbeddedSchema.xsd";
string resourcesFolderName = "Serialisation";
string manifestResourceName = string.Format("{0}.{1}.{2}",
this.GetType().Namespace, resourcesFolderName, resourceName);
using (Stream schemaStream = currentAssembly.GetManifestResourceStream(manifestResourceName))
mySchema = XmlSchema.Read(schemaStream, errorHandler);
See also: How to get Namespace of an Assembly?
Edit: Also noticed a very detailed answer to the question I'm answering at http://social.msdn.microsoft.com/Forums/en-US/csharpgeneral/thread/3a469f5d-8f55-4b25-ac25-4778f260bb7e
Another edit in case people with same question come looking: Excellent idea to solve the resource-loading question here: How get the default namespace of project csproj (VS 2008)
A: Not possible. Nothing specifies a "Root" namespace. The default namespace in the options is a visual studio thing, not a .net thing
A: Assemblies don't necessarily have a root namespace. Namespaces and Assemblies are orthogonal.
What you may be looking for instead, is to find a type within that Assembly, and then find out what its namespace is.
You should be able to accomplish this by using the GetExportedTypes() member and then using the Namespace property from one of the returned Type handles.
Again though, no guarantees all the types are in the same namespace (or even in the same namespace hierarchy).
A: I use typeof(App).Namespace in my WPF application.
App class is mandatory for any WPF application and it's located in root.
A: GetTypes() gives you a list of Type objects defined in the assembly, and each Type object has a Namespace property. Remember that an assembly can have multiple namespaces.
A: GetType(frm).Namespace
frm is the startup Form
A: There could be any number of namespaces in a given assembly, and nothing requires them to all start from a common root. The best you could do would be to reflect over all the types in an assembly and build up a list of unique namespaces contained therein.
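A small sketch of that enumeration (added for illustration; note there is no guarantee that any one namespace is the "root"):
using System;
using System.Collections.Generic;
using System.Reflection;

public static class NamespaceLister
{
    public static List<string> GetNamespaces(Assembly assembly)
    {
        List<string> namespaces = new List<string>();
        foreach (Type type in assembly.GetExportedTypes())
        {
            // Types declared outside any namespace report null here.
            if (type.Namespace != null && !namespaces.Contains(type.Namespace))
                namespaces.Add(type.Namespace);
        }
        namespaces.Sort();
        return namespaces;
    }
}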
A: I just created an empty internal class called Root and put it in the project root (assuming this is your root namespace). Then I use this everywhere I need the root namespace:
typeof(Root).Namespace;
Sure I end up with an unused file, but it's clean.
A: Namespaces have nothing to do with assemblies - any mapping between a namespace and the classes in an assembly is purely due to a naming convention (or coincidence).
A: There actually is an indirect way to get it, by enumerating the names of the assembly's manifest resources. The name you want ends with the part of it that you know.
Rather than repeat the code here, please see get Default namespace name for Assembly.GetManifestResourceStream() method
A: The question I had that landed me here was, "If I call library code N methods deep and want the namespace of the Project - for example the MVC app that's actually running - how do I get that?"
A little hacky but you can just grab a stacktrace and filter:
public static string GetRootNamespace()
{
StackTrace stackTrace = new StackTrace();
StackFrame[] stackFrames = stackTrace.GetFrames();
string ns = null;
foreach(var frame in stackFrames)
{
string _ns = frame.GetMethod().DeclaringType.Namespace;
int indexPeriod = _ns.IndexOf('.');
string rootNs = _ns;
if (indexPeriod > 0)
rootNs = _ns.Substring(0, indexPeriod);
if (rootNs == "System")
break;
ns = _ns;
}
return ns;
}
All this is doing is getting the stacktrace, running down the methods from most recently called to root, and filtering for System. Once it finds a System call it knows it's gone too far, and returns you the namespace immediately above it. Whether you're running a Unit Test, an MVC App, or a Service, the System container is going to be sitting 1 level deeper than the root namespace of your Project, so voila.
In some scenarios where System code is an intermediary (like System.Task) along the trace this is going to return the wrong answer. My goal was to take for example some startup code and let it easily find a class or Controller or whatever in the root Namespace, even if the code doing the work sits out in a library. This accomplishes that task.
I'm sure that can be improved - I'm sure this hacky way of doing things can be improved in many ways, and improvements are welcome.
A: Adding to all the other answers here, hopefully without repeating information, here is how I solved this using Linq. My situation is similar to Lisa's answer.
My solution comes with the following caveats:
*
*You're using Visual Studio and have a Root Namespace defined for your project, which I assume is what you're asking for since you use the term "root namespace"
*You're not embedding interop types from referenced assemblies
Dim baseNamespace = String.Join("."c,
Me.GetType().Assembly.ManifestModule.GetTypes().
Select(Function(type As Type)
Return type.Namespace.Split("."c)
End Function
).
Aggregate(Function(seed As String(), splitNamespace As String())
Return seed.Intersect(splitNamespace).ToArray()
End Function
)
)
A: Here as a rather simple way to get the root namespace for a website project.
''' <summary>
''' Returns the namespace of the currently running website
''' </summary>
Public Function GetWebsiteRootNamespace() As String
For Each Asm In AppDomain.CurrentDomain.GetAssemblies()
If Asm Is Nothing OrElse Asm.IsDynamic Then Continue For
For Each Typ In Asm.GetTypes
If Typ Is Nothing OrElse Typ.Name Is Nothing Then Continue For
If Typ.Name = "MyProject" Then Return Typ.Namespace.Split("."c)(0)
Next
Next
Return Nothing
End Function
This simply checks all the loaded assemblies for the "MyProject" type and returns the root namespace for that type. This is useful for logging when you have multiple web projects in a single solution sharing a log system. Hope this helps someone.
A: This solution works if you are trying to load an embedded resource.
var assembly = System.Reflection.Assembly.GetExecutingAssembly();
string[] resourceNames = assembly.GetManifestResourceNames();
string resourceNameNoNamespace = $"Languages.{languageSupport.IsoCode}.Languages.xml";
var match = resourceNames.SingleOrDefault(rn => rn.EndsWith(resourceNameNoNamespace));
A: Dim applicationNamespace = TextBeforeFirst(Assembly.GetCallingAssembly().EntryPoint.DeclaringType.Namespace, ".")
Public Function TextBeforeFirst(value As String, expression As String) As String
If String.IsNullOrEmpty(value) Or String.IsNullOrEmpty(expression) Then Return Nothing
Dim index = value.IndexOf(expression)
If index = -1 Then Return Nothing
Dim length = index
Return value.Substring(0, length)
End Function
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "55"
}
|
Q: KornShell (ksh) code to send attachments with mailx and uuencode? I need to attach a file with mailx but at the moment I am not having success.
Here's my code:
subject="Something happened"
to="somebody@somewhere.com"
body="Attachment Test"
attachment=/path/to/somefile.csv
uuencode $attachment | mailx -s "$subject" "$to" << EOF
The message is ready to be sent with the following file or link attachments:
somefile.csv
Note: To protect against computer viruses, e-mail programs may prevent
sending or receiving certain types of file attachments. Check your
e-mail security settings to determine how attachments are handled.
EOF
Any feedback would be highly appreciated.
Update
I have added the attachment var to avoid having to use the path every time.
A: You have to concat both the text of your message and the uuencoded attachment:
$ subject="Something happened"
$ to="somebody@somewhere.com"
$ body="Attachment Test"
$ attachment=/path/to/somefile.csv
$
$ cat >msg.txt <<EOF
> The message is ready to be sent with the following file or link attachments:
>
> somefile.csv
>
> Note: To protect against computer viruses, e-mail programs may prevent
> sending or receiving certain types of file attachments. Check your
> e-mail security settings to determine how attachments are handled.
>
> EOF
$ ( cat msg.txt ; uuencode $attachment somefile.csv) | mailx -s "$subject" "$to"
There are different ways to provide the message text, this is just an example that is close to your original question. If the message should be reused it makes sense to just store it in a file and use this file.
A: Well, here are the first few problems you've got.
*
*You appear to be assuming that a mail client is going to handle uuencoded attachment without any headers. That won't happen.
*You're misusing I/O redirection: uuencode's output and the here-document are both being fed to mailx, which can't happen.
*You're misusing uuencode: if one path is given, it's just a name to give the decoded file, not an input file name. Giving the file twice will assign the same name to the decoded file as that which was read. The -m flag forces base64 encode. But this still isn't going to provide attachment headers for mailx.
You're way better off getting a copy of mpack, which will do what you want.
If you must do it, you could do something like this:
cat <<EOF | ( cat -; uuencode -m /path/to/somefile.csv /path/to/somefile.csv; ) | mailx -s "$subject" "$to"
place your message from the here block in your example here
EOF
There are lots of other possibilities... but this one still has the here document
as in your example and was easy off the top of my head, and there's no temp file involved.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Best Approach For Configuring Multiple .Net Applications We have a suite of interlinked .Net 3.5 applications. Some are web sites, some are web services, and some are Windows applications. Each app currently has its own configuration file (app.config or web.config), and currently there are some duplicate keys across the config files (which at the moment are kept in sync manually) as multiple apps require the same config value. Also, this suite of applications is deployed across various environments (dev, test, live, etc.)
What is the best approach to managing the configuration of these multiple apps from a single configuration source, so configuration values can be shared between multiple apps if required? We would also like to have separate configs for each environment (so when deploying you don't have to manually change certain config values that are environment specific, such as connection strings), but at the same time don't want to maintain multiple large config files (one for each environment), as keeping these in sync when adding new config keys will prove troublesome.
A: Visual Studio has a relatively obscure feature that lets you add existing items as links, which should accomplish what you're looking for. Check out Derik Whittaker's post on this topic for more detail.
Visual Studio really should make this option more visible. Nobody really thinks to click on that little arrow next to the "Add" button.
A: You can split App.config into multiple configuration files. You just specify the name of the file that contains the config section.
Change app.config:
<SomeConfigSection>
<SettingA/>
<SettingB/>
</SomeConfigSection>
<OtherSection>
<SettingX/>
</OtherSection>
Into app.config and SomeSetting.xml:
<SomeConfigSection file="SomeSetting.xml" />
<OtherSection file="Other.xml" />
Where SomeSetting.xml contains:
<SomeConfigSection>
    <SettingA/>
    <SettingB/>
</SomeConfigSection>
Now you can compose your app.config from different section files with some sort of build or deploy script. E.g.:
if debug copy SomeSettingDebug.xml deploydir/SomeSetting.xml
if MySql copy OtherSectionMySql.xml deploydir/OtherSetting.xml
A: We use file templates such as MyApp.config.template and MyWeb.config.template with NAnt properties for the bits that are different between environments. So the template file might look a bit like this:
<MyAppConfig>
<DbConnString>${DbConnString}</DbConnString>
<WebServiceUri uri="${WebServiceUri}" />
</MyAppConfig>
During a build we generate all the configs for the different environments by just looping through each environment in a NAnt script, changing the value of the NAnt properties ${DbConnString} and ${WebServiceUri} for each environment (in fact these are all set in a single file with sections for each environment), and doing a NAnt copy with the option to expand properties turned on.
It took a little while to get set up but it has paid us back at least tenfold in the amount of time saved messing around with different versions of config files.
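A rough sketch of the NAnt side (my illustration, not the poster's actual script; the property names match the template above):
<property name="DbConnString" value="Server=devdb;Database=MyApp;Integrated Security=SSPI" />
<property name="WebServiceUri" value="http://dev.example.com/service.asmx" />

<copy file="MyApp.config.template" tofile="deploy/MyApp.config" overwrite="true">
  <filterchain>
    <!-- replaces ${DbConnString} etc. with the property values above -->
    <expandproperties />
  </filterchain>
</copy>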
A: These 2 questions might help you: Utilizing machine.config and Managing app.config for large projects
A: Check out the Prism framework from Microsoft's Patterns & Practices group.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Automatically verify my website's links are pointing to urls that exist? Is there a tool to automatically search through my site and test all the links? I hate running across bad urls.
A: Xenu link sleuth is excellent (and free)
A: w3.org checklink
A: If I were you, I'd check out the W3C Link Checker.
A: Something like this should work: http://www.dead-links.com/
Do google searches for "404 checker" or "broken link checker"
A: I used Xenu's Link Sleuth in the past. It will crawl your site and tell you which links point to nowhere. It is not super fancy but it works.
http://en.wikipedia.org/wiki/Xenu%27s_Link_Sleuth
The Wikipedia page lists a whole bunch of other products.
A: WebHTTrack
Can take a long time to go through a large web site (I archived a 250MB website and it took approximately 2 hours - it wasn't local though) It has a log so you should be able to track 404s easily.
A: Also check out Google's webmaster tools.
http://www.google.com/webmasters/tools/
They give you the ability to see the 404's that GoogleBot discovers when crawling your website (along with lots and lots of other stuff).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Writing post data from one java servlet to another I am trying to write a servlet that will send a XML file (xml formatted string) to another servlet via a POST.
(Non essential xml generating code replaced with "Hello there")
StringBuilder sb= new StringBuilder();
sb.append("Hello there");
URL url = new URL("theservlet's URL");
HttpURLConnection connection = (HttpURLConnection)url.openConnection();
connection.setRequestMethod("POST");
connection.setRequestProperty("Content-Length", "" + sb.length());
OutputStreamWriter outputWriter = new OutputStreamWriter(connection.getOutputStream());
outputWriter.write(sb.toString());
outputWriter.flush();
outputWriter.close();
This is causing a server error, and the second servlet is never invoked.
A: I recommend using Apache HTTPClient instead, because it's a nicer API.
But to solve this current problem: try calling connection.setDoOutput(true); after you open the connection.
StringBuilder sb= new StringBuilder();
sb.append("Hello there");
URL url = new URL("theservlet's URL");
HttpURLConnection connection = (HttpURLConnection)url.openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("POST");
connection.setRequestProperty("Content-Length", "" + sb.length());
OutputStreamWriter outputWriter = new OutputStreamWriter(connection.getOutputStream());
outputWriter.write(sb.toString());
outputWriter.flush();
outputWriter.close();
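One caveat worth adding (my note, not from the original answer): HttpURLConnection is lazy, and the exchange isn't reliably completed until you ask for the response, so after closing the writer it's worth following up with something like:
int responseCode = connection.getResponseCode(); // forces the request to complete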
A: Don't forget to use:
connection.setDoOutput( true)
if you intend on sending output.
A: The contents of an HTTP post upload stream and the mechanics of it don't seem to be what you are expecting them to be. You cannot just write a file as the post content, because POST has very specific RFC standards on how the data included in a POST request is supposed to be sent. It is not just the formatted of the content itself, but it is also the mechanic of how it is "written" to the outputstream. Alot of the time POST is now written in chunks. If you look at the source code of Apache's HTTPClient you will see how it writes the chunks.
There are quirks with the content length as result, because the content length is increased by a small number identifying the chunk and a random small sequence of characters that delimits each chunk as it is written over the stream. Look at some of the other methods described in newer Java versions of the HTTPURLConnection.
http://java.sun.com/javase/6/docs/api/java/net/HttpURLConnection.html#setChunkedStreamingMode(int)
If you don't know what you are doing and don't want to learn it, dealing with adding a dependency like Apache HTTPClient really does end up being much easier because it abstracts all the complexity and just works.
A: This kind of thing is much easier using a library like HttpClient. There's even a post XML code example:
PostMethod post = new PostMethod(url);
RequestEntity entity = new FileRequestEntity(inputFile, "text/xml; charset=ISO-8859-1");
post.setRequestEntity(entity);
HttpClient httpclient = new HttpClient();
int result = httpclient.executeMethod(post);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Using Validation in WPF With Dependency Property and Style Triggers I am trying to use Validation in WPF. I created a NotNullOrEmptyValidationRule as shown below:
public class NotNullOrEmptyValidationRule : ValidationRule
{
public override ValidationResult Validate(object value, CultureInfo cultureInfo)
{
if (String.IsNullOrEmpty(value as String))
return new ValidationResult(false, "Value cannot be null or empty");
return new ValidationResult(true, null);
}
}
Now, I need to use it in my application. In my App.xaml file I declared the Style for the TextBox. Here is the declaration.
<Style x:Key="textBoxStyle" TargetType="{x:Type TextBox}">
<Setter Property="Background" Value="Green"/>
<Style.Triggers>
<Trigger Property="Validation.HasError" Value="True">
<Setter Property="Background" Value="Red"/>
<Setter Property="ToolTip" Value="{Binding RelativeSource={RelativeSource Self},Path=(Validation.Errors)[0].ErrorContent}"/>
</Trigger>
</Style.Triggers>
</Style>
Now, I want to use it on my TextBox so I am using the following code:
<TextBox Style="{StaticResource textBoxStyle}">
<TextBox.Text>
<Binding>
<Binding.ValidationRules>
<NotNullOrEmptyValidationRule />
</Binding.ValidationRules>
</Binding>
</TextBox.Text>
</TextBox>
The error occurs on the NotNullOrEmptyValidationRule tag. The XAML syntax checker is not able to resolve NotNullOrEmptyValidationRule. I have even tried adding the namespace, but it does not seem to work.
A: You just need to add the xmlns to your Window, and use that to reference your ValidationRule.
In WPF, the object is perfectly fine to be used from the same assembly.
Since your rule isn't defined in the standard XAML namespace, you have to create a mapping to your clr namespace like so:
<Window ...
xmlns:local="clr-namespace:MyNamespaceName">
And then you would use it like so:
<Binding Path=".">
<Binding.ValidationRules>
<local:NotNullOrEmptyValidationRule />
</Binding.ValidationRules>
</Binding>
Edit
I added a Path statement to the Binding. You have to tell the Binding what to bind to :)
A: I see your binding on the TextBox is set to a path of 'Text' - is that a field on whatever the DataContext of this TextBox is? Is the TextBox actually getting a value put into it? Also, if you put a breakpoint in your validation method, is it ever getting fired?
You may want to look up how to log binding failures and review those as well.
A: You do not have this line in your code-behind:
Public Sub New()
    ' This call is required by the Windows Form Designer.
    InitializeComponent()
    Me.NameOfTextBox.DataContext = Me
End Sub
A: There is a bug in Visual Studio and Expression Blend that causes this problem. What you need to do is make sure that the Validation rule is in a separately project/assembly that you can reference. This should resolve the problem.
However, you will have to add back the namespace in order for it to work.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How To Tell What Files IE Thinks Are "nonsecure"? We have a CMS system whose web interface gets served over HTTPS. This works beautifully for Firefox, but when we load it in IE6 or IE7, it complains that "This page contains both secure and nonsecure items."
I've loaded the page in Firefox and checked with Firebug, and every connection seems to be going through HTTPS, as should be the case.
Is there any way to tell what is causing IE to throw this apparently spurious error?
A: Use Fiddler to watch the traffic between the server and IE.
Be sure to go to Tools > Fiddler Options... > HTTPS > and check 'Decrypt HTTPS traffic'
Any non-HTTPS traffic generated between any server and IE should be easy to spot in the Web Sessions list.
A: I used Eric's tool (thanks, Eric, you saved me hours...) and it turns out that IE6 treats a background image specified with a relative path as nonsecure content, even though it actually requests it over https. So if you're stumped, converting your relative paths to absolute ones might really help...
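For example, a hypothetical stylesheet fragment illustrating the fix described above:
/* flagged as nonsecure by IE6 in the scenario above */
.header { background-image: url(images/banner.gif); }

/* rewritten with an absolute https URL, which avoids the warning */
.header { background-image: url(https://www.example.com/images/banner.gif); }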
A: Are one or more resources (CSS url-image ref overlooked easily) pointing to a subdomain that's not covered by the certificate (https://www.example.com vs https://static.example.com)?
A: Firefox has a number of bugs in mixed content detection. Generally you should try using Fiddler to spot insecure resources.
If you install a tool I wrote (www.bayden.com/dl/scriptfreesetup.exe) you will get a different mixed content prompt which shows the exact URL of the first insecure resource on the page. That tool is basically a prototype and you should uninstall it when you're done with it.
A: If you can't see anything that isn't using SSL, then this is usually down to a broken SSL certificate somewhere. I don't know of anything off-hand that will tell you what exactly what the problem is, but you can get a list of everything that's loaded easily enough.
The media tab on Firefox's 'page info' dialog (right click on the page) will do it, it might also be worth having a go with Fiddler (which is an excellent, and extremely useful piece of software).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: SQL server 2000 Like Statement Usage I have a SQL statement that looks like:
SELECT [Phone]
FROM [Table]
WHERE
(
[Phone] LIKE '[A-Z][a-z]'
OR [Phone] = 'N/A'
OR [Phone] LIKE '[0]'
)
The part I'm having trouble with is the WHERE clause with the "LIKEs". I've seen SQL statements where authors used LIKE statements in the way I'm using them above. At first, I thought this might be a version of regular expressions, but I've since learned otherwise.
Is anyone familiar with using like statements in such a way. Note: the "N/A" is working fine.
What I need to match is phone numbers that have characters. Or phone numbers which contain nothing but zero.
A: Check here.
[] matches a range of characters.
I think you want something like this:
SELECT [Phone]
FROM [Table]
WHERE
(
[Phone] LIKE '%[A-Z]%'
OR [Phone] LIKE '%[a-z]%'
OR [Phone] = 'N/A'
OR [Phone] LIKE '0'
)
A: Try using the t-sql ISNUMERIC function. That will show you which ones are/are not numeric.
You may also need to TRIM or REPLACE spaces to get what you want.
For example, to find valid phone numbers, replace spaces with '', test with ISNUMERIC, and test with LEN.
Although I will warn you, this will be tedious if you have to deal with international phone numbers.
The thing to note with your SQL above, is that SQL Server doesn't understand Regex.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Deploying to multiple servers I have to deploy my php/html/css/etc. code to multiple servers, and I am looking at my options for software that allows easy and secure deployment to multiple servers.
It would also help if it could be tied into my SVN.
Any suggestions?
A: Setting up password-less publickey authentication with ssh would allow you to scp your files to any of your servers very quickly (or be automated by a shell script).
Here's a simple tutorial: http://rcsg-gsir.imsb-dsgi.nrc-cnrc.gc.ca/documents/internet/node31.html
A: If you're running on Redhat or Debian, consider packaging up your code into RPM's or Debs. Then build a yum or dpkg repository and put your packages there. You can then use your system's package management to do upgrades/rollbacks, etc. You can even use puppet to automate the process.
If you want to tie it into subversion, you can create a branch for each new version. Use the commit scripts to build the RPM's when a new branch shows up in a directory.
A: I'll second Capistrano. It's incredibly powerful and flexible. Our current project uses Capistrano for deploying to different servers as well as multiple servers. We pass two arguments to the cap command:
1) the name of the set of machine specific config options to run and
2) the name of the action to run
ends up looking like this:
cap -f deploy.rb live deploy
or
cap -f deploy.rb dev deploy
Of course the default use case - deploy to lots of machines at once - is a doddle with Capistrano AND you don't need to have Capistrano on the machines you are deploying to. All in all, tasty technology.
A: Capistrano is pretty handy for that. There's a few people using it (1, 2, 3) for deploying PHP code as evidenced by doing a quick search.
A: I've used Automated Build Studio before for a similar task. It gives you a lot of flexibility in what you can do.
A: I concur -- set your svn tree up, and use rsync over ssh to copy the tree out to the remote locations. rsync will make it fast and efficient, only copying changes rather than full files.
You want to export your svn tree to some directory, then rsync from there to the remote host's directory tree.
A: I also forgot to mention that if you use rsync, you can set up rsync to use ssh, so you will only transfer the files that have changed, saving on time and bandwidth.
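For illustration, a minimal sketch of that workflow (the repository URL, host, and paths are hypothetical):
# Export a clean copy of the tree (no .svn directories)
svn export --force http://svn.example.com/myapp/trunk /tmp/myapp-export
# Push only the files that changed, over ssh; --delete removes stale files
rsync -avz --delete -e ssh /tmp/myapp-export/ deploy@web1.example.com:/var/www/myapp/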
A: You can also use kwateeSDCM which is free and allows remote installation via ssh. It also enables you to manage server-specific configuration from a central location and make upgrades seemless.
A: I had marked a post on how to deploy your websites using Subversion : http://blog.lavablast.com/post/2008/02/I2c-for-one2c-welcome-our-new-revision-control-overlords!.aspx
A: I found Capistrano to be very easy to use once it's set up. The configuration file can be a bit confusing at first for more complicated environments, but it soon becomes worthwhile. I deploy to 14 servers in production. I also use multiple environments for deployment to a staging server. One quirk: there's a bug in Ruby that breaks parallel deployment, but deploying serially isn't too bad with svn exports.
A: Capistrano setup is just too complicated. We found that KwateeSDCM was very straightforward to use with a simple web interface and no scripting. We've got our deployment configuration done in no time for Dev and QA configuration on windows and linux servers.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: C++: Step 1: ExtractIconEx. Step 2: ??? Step 3: SetMenuItemBitmaps I'm experimenting with adding icons to a shell extension. I have this code (sanitized for easy reading), which works:
InsertMenu(hmenu, index, MF_POPUP|MF_BYPOSITION, (UINT)hParentMenu, namestring);
The next step is this code:
HICON hIconLarge, hIconSmall;
ICONINFO oIconInfo;
ExtractIconEx("c:\\progra~1\\winzip\\winzip32.exe", 0, &hIconLarge, &hIconSmall, 1);
GetIconInfo(hIconSmall, &oIconInfo);
//???????
SetMenuItemBitmaps(hParentMenu, indexMenu-1, MF_BITMAP | MF_BYPOSITION, hbmp, hbmp);
What do I put in to replace the ?'s. Attempts to Google this knowledge have found many tips that I failed to get working. Any advice on getting this to work, especially on older machines (e.g. no .net framework, no vista) is appreciated.
A: Vista has proper support for icons in menus; for pre-Vista, you must use owner-draw menu items (MF_OWNERDRAW) if you want real 16x16 full-color icons.
Vista style menus...
Vista style+pre Vista callback
A: This works, though the back color is black instead of transparent.
GetIconInfo(hIconSmall, &oIconInfo);
SetMenuItemBitmaps(hmenu, uMenuIndex+i+popUpMenuCount-1, MF_BITMAP | MF_BYPOSITION, oIconInfo.hbmColor, oIconInfo.hbmColor);
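One way to avoid the black background (a sketch, not the only approach): render the icon onto a bitmap pre-filled with the menu color, then hand that bitmap to SetMenuItemBitmaps. The helper below is hypothetical but uses only standard Win32 calls:
HBITMAP IconToMenuBitmap(HICON hIcon, int cx, int cy)
{
    HDC hdcScreen = GetDC(NULL);
    HDC hdcMem = CreateCompatibleDC(hdcScreen);
    HBITMAP hbmp = CreateCompatibleBitmap(hdcScreen, cx, cy);
    HBITMAP hbmpOld = (HBITMAP)SelectObject(hdcMem, hbmp);
    // Fill with the menu background color so "transparent" pixels blend in
    RECT rc = { 0, 0, cx, cy };
    FillRect(hdcMem, &rc, GetSysColorBrush(COLOR_MENU));
    DrawIconEx(hdcMem, 0, 0, hIcon, cx, cy, 0, NULL, DI_NORMAL);
    SelectObject(hdcMem, hbmpOld);
    DeleteDC(hdcMem);
    ReleaseDC(NULL, hdcScreen);
    return hbmp; // caller owns the bitmap; call DeleteObject when done
}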
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do I split a string, breaking at a particular character? I have this string
'john smith~123 Street~Apt 4~New York~NY~12345'
Using JavaScript, what is the fastest way to parse this into
var name = "john smith";
var street= "123 Street";
//etc...
A: With JavaScript’s String.prototype.split function:
var input = 'john smith~123 Street~Apt 4~New York~NY~12345';
var fields = input.split('~');
var name = fields[0];
var street = fields[1];
// etc.
A: According to ECMAScript 6 (ES6), the clean way is destructuring arrays:
const input = 'john smith~123 Street~Apt 4~New York~NY~12345';
const [name, street, unit, city, state, zip] = input.split('~');
console.log(name); // john smith
console.log(street); // 123 Street
console.log(unit); // Apt 4
console.log(city); // New York
console.log(state); // NY
console.log(zip); // 12345
You may have extra items in the input string. In this case, you can use rest operator to get an array for the rest or just ignore them:
const input = 'john smith~123 Street~Apt 4~New York~NY~12345';
const [name, street, ...others] = input.split('~');
console.log(name); // john smith
console.log(street); // 123 Street
console.log(others); // ["Apt 4", "New York", "NY", "12345"]
I assumed a read-only reference for the values, hence the const declaration.
Enjoy ES6!
A: If the splitter is found, this splits the string; otherwise it returns the same string:
function SplitTheString(ResultStr) {
if (ResultStr != null) {
var SplitChars = '~';
if (ResultStr.indexOf(SplitChars) >= 0) {
var DtlStr = ResultStr.split(SplitChars);
var name = DtlStr[0];
var street = DtlStr[1];
}
}
}
A: split() method in JavaScript is used to convert a string to an array.
It takes one optional argument, as a character, on which to split. In your case (~).
If splitOn is skipped, it will simply put string as it is on 0th position of an array.
If splitOn is just a “”, then it will convert array of single characters.
So in your case:
var arr = input.split('~');
will get the name at arr[0] and the street at arr[1].
You can read for a more detailed explanation at
Split on in JavaScript
A: well, easiest way would be something like:
var address = theEncodedString.split(/~/)
var name = address[0], street = address[1]
A: You don't need jQuery.
var s = 'john smith~123 Street~Apt 4~New York~NY~12345';
var fields = s.split(/~/);
var name = fields[0];
var street = fields[1];
console.log(name);
console.log(street);
A: You can use split to split the text.
As an alternative, you can also use match as follow
var str = 'john smith~123 Street~Apt 4~New York~NY~12345';
matches = str.match(/[^~]+/g);
console.log(matches);
document.write(matches);
The regex [^~]+ will match all the characters except ~ and return the matches in an array. You can then extract the matches from it.
A: Something like:
var divided = str.split("~"); // note: split("/~/") would look for the literal string "/~/"
var name=divided[0];
var street = divided[1];
Is probably going to be easiest
A: Zach had this one right.. using his method you could also make a seemingly "multi-dimensional" array.. I created a quick example at JSFiddle http://jsfiddle.net/LcnvJ/2/
// array[0][0] will produce brian
// array[0][1] will produce james
// array[1][0] will produce kevin
// array[1][1] will produce haley
var array = [];
array[0] = "brian,james,doug".split(",");
array[1] = "kevin,haley,steph".split(",");
A: This string.split("~")[0]; gets things done.
source: String.prototype.split()
Another functional approach using curry and function composition.
So the first thing would be the split function. We want to make this "john smith~123 Street~Apt 4~New York~NY~12345" into this ["john smith", "123 Street", "Apt 4", "New York", "NY", "12345"]
const split = (separator) => (text) => text.split(separator);
const splitByTilde = split('~');
So now we can use our specialized splitByTilde function. Example:
splitByTilde("john smith~123 Street~Apt 4~New York~NY~12345") // ["john smith", "123 Street", "Apt 4", "New York", "NY", "12345"]
To get the first element we can use the list[0] operator. Let's build a first function:
const first = (list) => list[0];
The algorithm is: split by the tilde and then get the first element of the given list. So we can compose those functions to build our final getName function. Building a compose function with reduce:
const compose = (...fns) => (value) => fns.reduceRight((acc, fn) => fn(acc), value);
And now using it to compose splitByTilde and first functions.
const getName = compose(first, splitByTilde);
let string = 'john smith~123 Street~Apt 4~New York~NY~12345';
getName(string); // "john smith"
A: Try it in plain JavaScript:
// basic url = http://localhost:58227/ExternalApproval.html?Status=1
var ar = [url, status] = window.location.href.split("=");
A: JavaScript: Convert String to Array JavaScript Split
var str = "This-javascript-tutorial-string-split-method-examples-tutsmake."
var result = str.split('-');
console.log(result);
document.getElementById("show").innerHTML = result;
<html>
<head>
<title>How do you split a string, breaking at a particular character in javascript?</title>
</head>
<body>
<p id="show"></p>
</body>
</html>
https://www.tutsmake.com/javascript-convert-string-to-array-javascript/
A: Even though this is not the simplest way, you could do this:
var addressString = "~john smith~123 Street~Apt 4~New York~NY~12345~",
keys = "name address1 address2 city state zipcode".split(" "),
address = {};
// clean up the string with the first replace
// "abuse" the second replace to map the keys to the matches
addressString.replace(/^~|~$/g, "").replace(/[^~]+/g, function(match){
    address[ keys.shift() ] = match; // shift() takes keys off the front, in order
});
// address will contain the mapped result
address = {
    address1: "123 Street",
    address2: "Apt 4",
    city: "New York",
    name: "john smith",
    state: "NY",
    zipcode: "12345"
}
Update for ES2015, using destructuring
const [name, address1, address2, city, state, zipcode] = addressString.match(/[^~]+/g);
// The variables defined above now contain the appropriate information:
console.log(address1, address2, city, name, state, zipcode);
// -> john smith 123 Street Apt 4 New York NY 12345
A: You'll want to look into JavaScript's substr or split, as this is not really a task suited for jQuery.
A: Since the splitting on commas question is duplicated to this question, adding this here.
If you want to split on a character and also handle extra whitespace that might follow that character, which often happens with commas, you can use replace then split, like this:
var items = string.replace(/,\s+/g, ",").split(',')
A: This isn't as good as the destructuring answer, but seeing as this question was asked 12 years ago, I decided to give it an answer that also would have worked 12 years ago.
function Record(s) {
var keys = ["name", "address", "address2", "city", "state", "zip"], values = s.split("~"), i
for (i = 0; i<keys.length; i++) {
this[keys[i]] = values[i]
}
}
var record = new Record('john smith~123 Street~Apt 4~New York~NY~12345')
record.name // contains john smith
record.address // contains 123 Street
record.address2 // contains Apt 4
record.city // contains New York
record.state // contains NY
record.zip // contains 12345
A: Use this code --
function myFunction() {
var str = "How are you doing today?";
  var res = str.split(" "); // splits on spaces: ["How", "are", "you", "doing", "today?"]
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "624"
}
|
Q: How can I tell how many SQL Connections I have open in a windows service? I'm seeing some errors that would indicate a "connection leak". That is, connections that were not closed properly and the pool is running out. So, how do I go about instrumenting this to see exactly how many are open at a given time?
A: If you're using .net, there's the .net data provider for SQL server in PerfMon. You can look at NumberOfPooledConnections there
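For reference, a minimal sketch of reading that counter from code (the category and counter names are as shown in PerfMon; the instance name is an assumption and normally looks like "appname[pid]", so you may need to enumerate instances to find yours):
using System;
using System.Diagnostics;

class PoolMonitor
{
    static void Main()
    {
        // Hypothetical instance name; match it to your service's process.
        var counter = new PerformanceCounter(
            ".NET Data Provider for SqlServer",
            "NumberOfPooledConnections",
            "myservice[1234]");
        Console.WriteLine(counter.NextValue());
    }
}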
A: The sp_who2 stored procedure in the master database is nice for this from the database side. It will show you connections to the database. If you're looking for more data, try profiling as well.
A: Implement a service that all connections are created, opened and closed through. Hold a counter there. Log with your logging framework each time a connection is opened or closed.
A: you can use the profiler tool to trace all existing and opening and closing connections
You can open profiler from enterprise manager
A: If you're using SQL 2000, you can check in SQL 2000 Enterprise Manager:
To view the Current Activity window In
SQL Server Enterprise Manager, expand
a server group, and then expand a
server. Expand Management, and then
expand Current Activity. Click Process
Info.
The current server activity is
displayed in the details pane.
(http://technet.microsoft.com/en-us/library/cc738560.aspx)
(From Google search: sql 2000 current activity)
A: You could run sp_who2 in SQL Server Management Studio or Query Analyser to see all of your curent connections. That is SQL Server. I'm not sure which RDBMS that you are using.
Also, look in your code and make sure that you close a connection as soon as you don't need it anymore. Be anal about this!
A: Use the "using" statement to ensure your connections are always closed and you'll never have this problem again:
using(SqlConnection connection = new SqlConnection())
{
...
} // connection is always disposed (i.e. closed) here, even if an exception is thrown
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: How to read the value of a text input in a Flash SWF from a Flex App? I have a Flex application, which loads a SWF from CS3. The loaded SWF contains a text input called "myText". I can see this in the SWFLoader.content with no problems, but I don't know what type I should be treating it as in my Flex App. I thought the flex docs covered this but I can only find how to interact with another Flex SWF.
The Flex debugger tells me it is of type fl.controls.TextInput, which makes sense. But FlexBuilder doesn't seem to know this class. While Flash and Flex both use AS3, Flex has a whole new library of GUI classes. I thought it also had all the Flash classes, but I can't get it to know of ANY fl.*** packages.
A: The fl.* hierarchy of classes is Flash CS3-only. It's the Flash Components 3 library (I believe that's what it's called; I might be wrong). However, you don't need the class to work with the object. As long as you can get a reference to it in your code, which you seem to have, you can assign the reference to an untyped variable and work with it anyway:
var textInput : * = getTheTextInput(); // insert your own method here
textInput.text = "Lorem ipsum dolor sit amet";
textInput.setSelection(4, 15);
There is no need to know the type of an object in order to interact with it. Of course you lose type checking at compile time, but that's really not much of an issue, you just have to be extra careful.
If you really, really want to reference the object as its real type, the class in question is located in
Adobe Flash CS3/Configuration/Component Source/ActionScript 3.0/User Interface/fl/controls/TextInput.as
...if you have Flash CS3 installed, because it only ships with that application.
A: Flex and Flash SWFs are essentially the same, just built using different tools. I'm not sure if they share the same component libraries, but based on the package names I'm guessing they at least mostly do.
If it's a normal Text Input then I would guess it's an instance of mx.controls.TextInput.
A: Keep in mind that if you do as Theo said and reference it with the correct type it will compile that class in both swfs, even if you're not using it in the first one. Unfortunately the fl.* classes don't implement any interfaces so you can't type them to the interface instead of the implementation. If you could, only the interface would get compiled, which is much smaller than the implementation. For this one it won't be a big deal, it's probably going to add only a couple kb, but in the long run it adds up. Just a heads up ;)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: SQL Bulk import from CSV I need to import a large CSV file into an SQL server. I'm using this :
BULK
INSERT CSVTest
FROM 'c:\csvfile.txt'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
The problem is that all my fields are surrounded by quotes (" "), so a row actually looks like:
"1","","2","","sometimes with comma , inside", ""
Can I somehow bulk import them and tell SQL to use the quotes as field delimiters?
Edit: The problem with using '","' as the delimiter, as the examples suggest, is this:
What most examples do is import the data including the first " in the first column and the last " in the last, then they go ahead and strip that out. Alas, my first (and last) columns are datetime and will not allow a "20080902 to be imported as datetime.
From what I've been reading around, I think FORMATFILE is the way to go, but the documentation (including MSDN) is terribly unhelpful.
A: Try OpenRowSet. This can be used to import Excel stuff. Excel can open CSV files, so you only need to figure out the correct connection string, something like:
Driver={Microsoft Text Driver (*.txt; *.csv)};Dbq=c:\txtFilesFolder\;Extensions=asc,csv,tab,txt;
A: I know this isn't a real solution but I use a dummy table for the import with nvarchar set for everything. Then I do an insert which strips out the " characters and does the conversions. It isn't pretty but it does the job.
A: Id say use FileHelpers its an open source library
A: Try FIELDTERMINATOR='","'
Here is a great link to help with the first and last quote...look how he used the substring the SP
http://www.sqlteam.com/article/using-bulk-insert-to-load-a-text-file
A: Another hack which I sometimes use, is to open the CSV in Excel, then write your sql statement into a cell at the end of each row.
For example:
=concatenate("insert into myTable (columnA,columnB) values ('",a1,"','",b1,"'")")
A fill-down can populate this into every row for you. Then just copy and paste the output into a new query window.
It's old-school, but if you only need to do imports once in a while it saves you messing around with reading all the obscure documentation on the 'proper' way to do it.
A: Do you need to do this programmatically, or is it a one-time shot?
Using the Enterprise Manager, right-click Import Data lets you select your delimiter.
A: You have to watch out with BCP/BULK INSERT because neither BCP nor BULK INSERT handles this well if the quoting is not consistent, even with format files (even XML format files don't offer the option) and dummy ["] characters at the beginning and end and using [","] as the separator. Technically CSV files do not need to have ["] characters if there are no embedded [,] characters.
It is for this reason that comma-delimited files are sometimes referred to as comedy-limited files.
OpenRowSet will require Excel on the server and could be problematic in 64-bit environments - I know it's problematic using Excel in Jet in 64-bit.
SSIS is really your best bet if the file is likely to vary from your expectations in the future.
A: You can try this code, which will remove the unwanted quote characters for you.
If, for example, your data is like this: "Kelly","Reynold","kelly@reynold.com"
Bulk insert test1
from 'c:\1.txt' with (
fieldterminator ='","'
,rowterminator='\n')

update test1
set name = substring(name, 2, len(name))
where name like '"%'

update test1
set email = substring(email, 1, len(email)-1)
where email like '%"'
A: First you need to import the CSV file into a DataTable.
Then you can insert the rows in bulk using SqlBulkCopy:
using System;
using System.Data;
using System.Data.SqlClient;
namespace SqlBulkInsertExample
{
class Program
{
static void Main(string[] args)
{
DataTable prodSalesData = new DataTable("ProductSalesData");
// Create Column 1: SaleDate
DataColumn dateColumn = new DataColumn();
dateColumn.DataType = Type.GetType("System.DateTime");
dateColumn.ColumnName = "SaleDate";
// Create Column 2: ProductName
DataColumn productNameColumn = new DataColumn();
productNameColumn.ColumnName = "ProductName";
// Create Column 3: TotalSales
DataColumn totalSalesColumn = new DataColumn();
totalSalesColumn.DataType = Type.GetType("System.Int32");
totalSalesColumn.ColumnName = "TotalSales";
// Add the columns to the ProductSalesData DataTable
prodSalesData.Columns.Add(dateColumn);
prodSalesData.Columns.Add(productNameColumn);
prodSalesData.Columns.Add(totalSalesColumn);
// Let's populate the datatable with our stats.
// You can add as many rows as you want here!
// Create a new row
DataRow dailyProductSalesRow = prodSalesData.NewRow();
dailyProductSalesRow["SaleDate"] = DateTime.Now.Date;
dailyProductSalesRow["ProductName"] = "Nike";
dailyProductSalesRow["TotalSales"] = 10;
// Add the row to the ProductSalesData DataTable
prodSalesData.Rows.Add(dailyProductSalesRow);
// Copy the DataTable to SQL Server using SqlBulkCopy
using (SqlConnection dbConnection = new SqlConnection("Data Source=ProductHost;Initial Catalog=dbProduct;Integrated Security=SSPI;Connection Timeout=60;Min Pool Size=2;Max Pool Size=20;"))
{
dbConnection.Open();
using (SqlBulkCopy s = new SqlBulkCopy(dbConnection))
{
s.DestinationTableName = prodSalesData.TableName;
foreach (var column in prodSalesData.Columns)
s.ColumnMappings.Add(column.ToString(), column.ToString());
s.WriteToServer(prodSalesData);
}
}
}
}
}
A: This is an old question, so I'm writing this to help anyone who stumbles upon it.
SQL Server 2017 introduces the FIELDQUOTE parameter which is intended for this exact use case.
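A sketch of the original BULK INSERT using it (SQL Server 2017 or later; FIELDQUOTE works together with FORMAT = 'CSV'):
BULK INSERT CSVTest
FROM 'c:\csvfile.txt'
WITH
(
    FORMAT = 'CSV',       -- RFC 4180 style parsing
    FIELDQUOTE = '"',     -- the quote character surrounding fields
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)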
A: Yup, K Richard is right: FIELDTERMINATOR = '","'
See http://www.sqlteam.com/article/using-bulk-insert-to-load-a-text-file for more info.
A: You could also use DTS or SSIS.
A: Do you have control over the input format? | (pipes), and \t usually make for better field terminators.
A: If you figure out how to get the file parsed into a DataTable, I'd suggest the SqlBulkCopy class for inserting it into SQL Server.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
}
|
Q: What's the most current, good practice, and easiest way to use sessions in PHP? Sessions in PHP seem to have changed since the last time I used them, so I'm looking for a simple way of using sessions that is at the same time relatively secure and good common practice.
A: Session management changed some time back (I think it was around 4.4). The old mechanism still works, but is deprecated. It's rather confusing, so I recommend staying clear of it. Today, you use sessions by accessing the global variable $_SESSION (It's an array). You can put object instances in there, but you need to load the class definitions for those objects before starting the session on the next page. Using autoload can help you out here.
You must start a session before you can use $_SESSION. Since starting the session sends headers, you can't have any output before. This can be solved in one of two ways:
Either you always begin the session at the start of your script. Or you buffer all output, and send it out at the end of the script.
One good idea is to regenerate the session on each request. this makes hijack much less likely.
That's (slightly) bad advice, since it can make the site inaccessible. You should regenerate the session-id whenever a users privileges changes though. In general that means, whenever they log in. This is to prevent session-fixation (A form of session-hijacking). See this recent thread @ Sitepoint for more on the subject.
Using cookiebased sessions only is OK, but if you regenerate session id's on login, it doesn't add any additional security, and it lowers accessibility a bit.
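A minimal sketch of regenerating the ID at login, per the advice above:
// After the user's credentials have been verified:
session_regenerate_id(true); // true also discards the old session data
$_SESSION['user'] = $username; // $username comes from your own auth code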
A: As far as simplicity, it doesn't get any better than:
# Start the session manager
session_start();
# Set a var
$_SESSION['foo'] = 'whatever';
# Access the var
print $_SESSION['foo'];
A: While database might be more secure for sessions, you should focus on what you're storing in the session in the first place - it should not really contain anything but an ID to identify the user (and MAYBE a firstname or a temporary variable between pages).
I would suggest simply using the default, cookies. Database sessions give an extra hit ON EVERY PAGE, and even though not every site is slashdot, there's no harm in pre-optimizing something as simple as this.
For usage, I would recommend the standard global variable:
$_SESSION['yourvar'] = 'somevalue';
If you use that method in all your code, you can easily change the back-end later through the use of session_set_save_handler, which gives a unified way of implementing session backends. Note that you can use an object to contain all the session handling, simply give arrays to each entry - array('Staticclass', 'staticmethod').
For more in-depth usage, I would recommend you take a look at how sessions are handled in KohanaPHP.
A: You can store PHP sessions in database, as described in this
book. I have used this method and I find it secure and easy to implement, so I would reccomend it.
A: Encapsulate the $_SESSION array in a Session() object that lets you read variables from session, GET, and POST in a similar (yet distinguishable) way, including automatic security filters, flash variables (variables that are used once and then destroyed), and default value setters.
Have a look at how Symfony behaves on that point; it's very helpful.
A: Sessions were a critical part of my PHP Knowledge because it helped me solve my log in authentication problem back when I was developing my first web application.
session_start();
if( isset($_POST['username']) && isset($_POST['password']) )
{
if( auth($_POST['username'], $_POST['password']) )
{
//Authentication passed
$_SESSION['user'] = $_POST['username'];
// redirect to required page
header( "Location: index.php" );
}
else
{
//Authentication failed redirect to login
header( "Location: loginform.html" );
}
}
else
{
//Username and Password are required
header( "Location: loginform.html" );
}
A: First off, use cookie based only unless you have a very specific good business reason not to. I had a client that insisted on url based sessions only for a project. very insecure and a pain to work with.
One good idea is to regenerate the session on each request. this makes hijack much less likely. For example.
session_start();
$old_sessionid = session_id();
session_regenerate_id();
$new_sessionid = session_id();
Another thing that is good practice: if you are doing some kind of user login as part of the system, completely invalidate and empty the session data on logout to ensure that the user is truly logged out of the system. I have seen systems where logout is accomplished just by removing the session cookie.
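For reference, a logout sketch along the lines of the example in the PHP manual, clearing the data, expiring the cookie, and destroying the session:
$_SESSION = array(); // wipe the session data
if (ini_get('session.use_cookies')) {
    $p = session_get_cookie_params();
    setcookie(session_name(), '', time() - 42000,
              $p['path'], $p['domain'], $p['secure'], $p['httponly']);
}
session_destroy(); // finally remove the session itself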
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How do you get the response from the Request object in MooTools? How do you access the response from the Request
object in MooTools? I've been looking at the documentation and the MooTorial, but
I can't seem to make any headway. Other Ajax stuff I've done with
MooTools I haven't had to manipulate the response at all, so I've just
been able to inject it straight into the document, but now I need to
make some changes to it first. I don't want to alert the response, I'd like to access it so I can make further changes to it. Any help would be greatly appreciated.
Thanks.
Edit:
I'd like to be able to access
the response after the request has already been made, preferably
outside of the Request object. It's for an RSS reader, so I need to do
some parsing and Request is just being used to get the feed from a
server file. This function is a method in a class, which should return
the response in a string, but it isn't returning anything but
undefined:
fetch: function(site){
var feed;
var req = new Request({
method: this.options.method,
url: this.options.rssFetchPath,
data: { 'url' : site },
onRequest: function() {
if (this.options.targetId) { $(this.options.targetId).setProperty('html', this.options.onRequestMessage); }
}.bind(this),
onSuccess: function(responseText) {
feed = responseText;
}
});
req.send();
return feed;
}
A: The response content is returned to the anonymous function defined in onComplete.
It can be accessed from there.
var req = new Request({
method: 'get',
url: ...,
data: ...,
onRequest: function() { alert('Request made. Please wait...'); },
// the response is passed to the callback as the first parameter
onComplete: function(response) { alert('Response: ' + response); }
}).send();
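Note that Request is asynchronous: in the fetch method from the question, req.send() returns immediately, so return feed runs before onSuccess has fired, which is why it yields undefined. Process the response inside the callback instead; a sketch (parseFeed is a hypothetical method of your class):
fetch: function(site) {
    new Request({
        method: this.options.method,
        url: this.options.rssFetchPath,
        data: { 'url': site },
        onSuccess: function(responseText) {
            this.parseFeed(responseText); // do the parsing here, not after send()
        }.bind(this)
    }).send();
}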
A: I was able to find my answer on the MooTools Group at Google.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96463",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Javascript drawing library? Any suggestion for a JavaScript interactive drawing library? Just need to draw lines, polygons, texts of different colors. IE/Firefox/Opera/Safari compatible.
A: You can use the canvas object directly to draw in 2D. IE requires the excanvas library.
http://developer.mozilla.org/En/Drawing_Graphics_with_Canvas
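For example, a minimal sketch drawing a colored line and a filled polygon (this assumes a <canvas id="myCanvas"> element on the page):
var ctx = document.getElementById('myCanvas').getContext('2d');
// A red line
ctx.strokeStyle = 'red';
ctx.beginPath();
ctx.moveTo(10, 10);
ctx.lineTo(100, 50);
ctx.stroke();
// A filled blue triangle
ctx.fillStyle = 'blue';
ctx.beginPath();
ctx.moveTo(120, 10);
ctx.lineTo(180, 10);
ctx.lineTo(150, 60);
ctx.closePath();
ctx.fill();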
A: Raphael is pretty cool for that, and works across browsers since it uses VML (for MSIE) and SVG (for everything else).
A: Try http://www.walterzorn.de/en/jsgraphics/jsgraphics_e.htm. It's the best I've found (without resorting to SVG) and works in most browsers without add-ins.
A: Drawing text with the canvas tag is a big pain. Your options are to use regular divs absolutely positioned in the right places, or find/write a font layout engine (example), or wait for a new standard to be implemented that lets you draw text. SVG deals with this much better.
In IE you have ExplorerCanvas to simulate the canvas API with IE's own VML markup. However, native VML can do text on a path and such things much like SVG. I think theoretically if you want complex text handling you'd want SVG and VML like the Raphael library that Dan mentioned.
You might also consider Flash for a moment before starting.
A: As mentioned above, canvas is the way you should go. IE doesn't support it natively, so you'll need to download ExCanvas to ensure cross-browser compatibility. I'd recommend looking at Ajaxian for some projects that use the canvas tag.
A: Checkout the jQuery Drawing plugin, and you can also look at the Mozilla Canvas reference and tutorial.
A: Also mxGraph. This doesn't use excanvas for IE. Excanvas is way slower than using VML directly, specifically because native VML can re-use the same nodes rather than deleting and adding DOM nodes for each redraw. This is an often overlooked point, but excanvas performance on IE is just awful.
A: John Resig's Processing.js is a nice framework for that.
A: Depending on how cross-browser you need to be and your goal of doing the output, you might look into the Canvas element and the related javascript.
Canvas
A: D3.js
D3.js is a JavaScript library for manipulating documents based on
data. D3 helps you bring data to life using HTML, SVG, and CSS. D3’s
emphasis on web standards gives you the full capabilities of modern
browsers without tying yourself to a proprietary framework, combining
powerful visualization components and a data-driven approach to DOM
manipulation.
Take a look at this discussion too.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
}
|
Q: Is there anything wrong with returning default constructed values? Suppose I have the following code:
class some_class{};
some_class some_function()
{
return some_class();
}
This seems to work pretty well and saves me the trouble of having to declare a variable just to make a return value. But I don't think I've ever seen this in any kind of tutorial or reference. Is this a compiler-specific thing (visual C++)? Or is this doing something wrong?
A: Returning objects from a function call is the "Factory" Design Pattern, and is used extensively.
However, you will want to be careful whether you return objects, or pointers to objects. The former of these will introduce you to copy constructors / assignment operators, which can be a pain.
A: It is valid, but performance may not be ideal depending on how it is called.
For example:
A a;
a = fn();
and
A a = fn();
are not the same.
In the first case the default constructor is called, and then the assignment operator is invoked on a which requires a temporary variable to be constructed.
In the second case the copy constructor is used.
An intelligent enough compiler will work out what optimizations are possible. But, if the copy constructor is user supplied then I don't see how the compiler can optimize out the temporary variable. It has to invoke the copy constructor, and to do that it has to have another instance.
A: The difference shown in Rob Walker's example is called Return Value Optimisation (RVO), if you want to google for it.
Incidentally, if you want to ensure your object gets returned in the most efficient manner, create the object on the heap (i.e. via new) using a shared_ptr and return the shared_ptr instead. The pointer gets returned and reference counts correctly.
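A sketch of that approach (std::shared_ptr here; in pre-C++11 code this would be boost::shared_ptr, which is an assumption about your toolchain):
#include <memory>

std::shared_ptr<some_class> some_function()
{
    // One heap allocation; the returned smart pointer is cheap to copy
    // and destroys the object when the last reference goes away.
    return std::make_shared<some_class>();
}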
A: No this is perfectly valid. This will also be more efficient as the compiler is actually able to optimise away the temporary.
A: That is perfectly reasonable C++.
A: This is perfectly legal C++ and any compiler should accept it. What makes you think it might be doing something wrong?
A: That's the best way to do it if your class is pretty lightweight - I mean that it isn't very expensive to make a copy of it.
One side effect of that method though is that it does tend to make it more likely to have temporary objects created, although that can depend on how well the compiler can optimize things.
For more heavyweight classes that you want to make sure are not copied (say for example a large bitmap image) then it is a good idea to pass stuff like that around as a reference parameter which then gets filled in, just to make absolutely sure that there won't be any temporary objects created.
Overall it can happen that simplifying syntax and making things turned more directly can have a side effect of creating more temporary objects in expressions, just something that you should keep in mind when designing the interfaces for more heavyweight objects.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: How to associated the cn in an ssl cert of pyOpenSSL verify_cb to a generated socket I am a little new to pyOpenSSL. I am trying to figure out how to associate the generated socket to an ssl cert. verify_cb gets called which give me access to the cert and a conn but how do I associate those things when this happens:
cli,addr = self.server.accept()
A: After the handshake is complete, you can get the client certificate. While the client certificate is also available in the verify callback (verify_cb), there's not really any reason to try to do anything aside from verifying the certificate in that callback. Setting up an application-specific mapping is better done after the handshake has completed successfully. So, consider using the OpenSSL.SSL.Connection instance returned by the accept method to get the certificate (and from there, the commonName) and associate it with the connection object at that point. For example,
client, clientAddress = self.server.accept()
client.do_handshake()
commonNamesToConnections[client.get_peer_certificate().commonName] = client
You might want to check the mapping to make sure you're not overwriting any existing connection (perhaps using a list of connections instead of just mapping each common name to one). And of course you need to remove entries when connections are lost.
The `do_handshake´ call forces the handshake to actually happen. Without this, the handshake will happen when application data is first transferred over the connection. That's fine, but it would make setting up this mapping slightly more complicated.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Change ContextMenu Font Size in C# Is it possible to change the font size used in a ContextMenu using the .NET Framework 3.5 and C# for a desktop application? It seems it's a system-wide setting, but I would like to change it only within my application.
A: If you are defining your own context menu via a ContextMenuStrip in Windows Forms, use the Font property.
If you are defining your own context menu via a ContextMenu in WPF, use the various Fontxxx properties such as FontFamily and FontSize.
You cannot change the default context menus that come with controls; those are determined by system settings. So if you want the "Copy/Cut/Paste/etc." menu with a custom font size for a WinForms TextBox, you'll have to create a ContextMenuStrip with the appropriate font size and assign it to the TextBox's ContextMenuStrip property.
A: In WPF:
<Window.ContextMenu>
    <ContextMenu FontSize="36">
        <!-- ... -->
    </ContextMenu>
</Window.ContextMenu>
In WinForms:
contextMenuStrip1.Font = new System.Drawing.Font("Segoe UI", 24F);
A: You can change the font size of a System.Windows.Forms.ContextMenuStrip.
If you need to change the font size of the default Cut/Copy/Paste context menu on text boxes I guess you need to set the ContextMenu property to a custom menu that replaces the default menu.
A: You mention .NET 3.5 - are you writing in WPF? If so, you can specify font size for the TextBlock.FontSize attached property
<Whatever.ContextMenu TextBlock.FontSize="12">
<MenuItem ... /> <!-- Will get the font size from parent -->
</Whatever.ContextMenu>
Or, you could specify it in a style that affects all menu items
<Style TargetType="MenuItem">
<Setter Property="TextBlock.FontSize" Value="12" />
</Style>
Of course, it's always better to let the system setting determine the font size. Some people may have changed it to better fit their physical condition (like poor eye sight) or hardware (big/small screen). Whatever you force in your code will be the wrong choice for some people, while you give them no way to change it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: interacting with a CMutex without MFC We have multiple MFC apps, which use CMutex( false, "blah" ), where "blah" allows the mutex to work across process boundaries.
One of these apps was re-written without MFC (using Qt instead). How can I simulate the CMutex using Win32 calls? (Qt's QMutex is not inter-process.) I prefer not to modify the MFC apps.
A: For inter-process mutexes you want these calls:
CreateMutex
WaitForSingleObject
ReleaseMutex
CloseHandle
These are the underlying Win32 API calls that CMutex is a wrapper around.
For in-process only mutexes you can also use these calls, which are faster:
InitializeCriticalSection
EnterCriticalSection
LeaveCriticalSection
DeleteCriticalSection
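A minimal sketch tying the first set of calls together so they interoperate with the MFC apps' CMutex(false, "blah") (same name, initially unowned):
#include <windows.h>

HANDLE hMutex = CreateMutexA(NULL, FALSE, "blah"); // FALSE = not initially owned
if (hMutex != NULL)
{
    WaitForSingleObject(hMutex, INFINITE); // corresponds to CMutex::Lock
    // ... protected work here ...
    ReleaseMutex(hMutex);                  // corresponds to CMutex::Unlock
    CloseHandle(hMutex);                   // when completely done with the mutex
}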
A: The following functions will probably be what you want; they are all documented on MSDN.
CreateMutex(...)
WaitForSingleObject(...)
ReleaseMutex(...)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Is there an elegant way to instantiate a variable type with parameters? This isn't legal:
public class MyBaseClass
{
public MyBaseClass() {}
public MyBaseClass(object arg) {}
}
public void ThisIsANoNo<T>() where T : MyBaseClass
{
T foo = new T("whoops!");
}
In order to do this, you have to do some reflection on the type object for T or you have to use Activator.CreateInstance. Both are pretty nasty. Is there a better way?
A: Nope. If you weren't passing in parameters, then you could constrain your type param to require a parameterless constructor. But, if you need to pass arguments you are out of luck.
A: You can't constrain T to have a particular constructor signature other than an empty constructor, but you can constrain T to have a factory method with the desired signature:
public abstract class MyBaseClass
{
protected MyBaseClass() {}
    public abstract MyBaseClass CreateFromObject(object arg); // public so the call below compiles
}
public void ThisWorksButIsntGreat<T>() where T : MyBaseClass, new()
{
T foo = new T().CreateFromObject("whoopee!") as T;
}
However, I would suggest perhaps using a different creational pattern such as Abstract Factory for this scenario.
A: where T : MyBaseClass, new()
only works w/ parameterless public constructor. beyond that, back to activator.CreateInstance (which really isn't THAT bad).
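For reference, the Activator-based version the question alludes to looks like this; it works, but trades compile-time checking for a possible runtime MissingMethodException:
public void WorksButUnchecked<T>() where T : MyBaseClass
{
    // Resolved at runtime; throws if T has no matching constructor.
    T foo = (T)Activator.CreateInstance(typeof(T), "whoops!");
}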
A: I can see that not working.
But what is stopping you from doing this?
public void ThisIsANoNo<T>() where T : MyBaseClass
{
MyBaseClass foo = new MyBaseClass("whoops!");
}
Since everything is going to inherit from MyBaseClass, they will all be MyBaseClass, right?
I tried it and this works.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
ThisIsANoNo<MyClass>();
ThisIsANoNo<MyBaseClass>();
}
public class MyBaseClass
{
public MyBaseClass() { }
public MyBaseClass(object arg) { }
}
public class MyClass :MyBaseClass
{
public MyClass() { }
public MyClass(object arg, Object arg2) { }
}
public static void ThisIsANoNo<T>() where T : MyBaseClass
{
MyBaseClass foo = new MyBaseClass("whoops!");
}
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Practical limit to length of SQL query (specifically MySQL) Is it particularly bad to have a very, very large SQL query with lots of (potentially redundant) WHERE clauses?
For example, here's a query I've generated from my web application with everything turned off, which should be the largest possible query for this program to generate:
SELECT *
FROM 4e_magic_items
INNER JOIN 4e_magic_item_levels
ON 4e_magic_items.id = 4e_magic_item_levels.itemid
INNER JOIN 4e_monster_sources
ON 4e_magic_items.source = 4e_monster_sources.id
WHERE (itemlevel BETWEEN 1 AND 30)
AND source!=16 AND source!=2 AND source!=5
AND source!=13 AND source!=15 AND source!=3
AND source!=4 AND source!=12 AND source!=7
AND source!=14 AND source!=11 AND source!=10
AND source!=8 AND source!=1 AND source!=6
AND source!=9 AND type!='Arms' AND type!='Feet'
AND type!='Hands' AND type!='Head'
AND type!='Neck' AND type!='Orb'
AND type!='Potion' AND type!='Ring'
AND type!='Rod' AND type!='Staff'
AND type!='Symbol' AND type!='Waist'
AND type!='Wand' AND type!='Wondrous Item'
AND type!='Alchemical Item' AND type!='Elixir'
AND type!='Reagent' AND type!='Whetstone'
AND type!='Other Consumable' AND type!='Companion'
AND type!='Mount' AND (type!='Armor' OR (false ))
AND (type!='Weapon' OR (false ))
ORDER BY type ASC, itemlevel ASC, name ASC
It seems to work well enough, but it's also not particularly high traffic (a few hundred hits a day or so), and I wonder if it would be worth the effort to try and optimize the queries to remove redundancies and such.
A:
SELECT @@global.max_allowed_packet
This is the only real limit. It's adjustable on a server, so there is no straight answer.
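For example, to inspect and raise it (the effective limit is whatever both the client and the server allow; a SET GLOBAL change needs the SUPER privilege and should also go into my.cnf to persist):
SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024; -- 64MB for new connections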
A: Reading your query makes me want to play an RPG.
This is definitely not too long. As long as they are well formatted, I'd say a practical limit is about 100 lines. After that, you're better off breaking subqueries into views just to keep your eyes from crossing.
I've worked with some queries that are 1000+ lines, and that's hard to debug.
By the way, may I suggest a reformatted version? This is mostly to demonstrate the importance of formatting; I trust this will be easier to understand.
select *
from
4e_magic_items mi
,4e_magic_item_levels mil
,4e_monster_sources ms
where mi.id = mil.itemid
and mi.source = ms.id
and itemlevel between 1 and 30
and source not in(16,2,5,13,15,3,4,12,7,14,11,10,8,1,6,9)
and type not in(
'Arms' ,'Feet' ,'Hands' ,'Head' ,'Neck' ,'Orb' ,
'Potion' ,'Ring' ,'Rod' ,'Staff' ,'Symbol' ,'Waist' ,
'Wand' ,'Wondrous Item' ,'Alchemical Item' ,'Elixir' ,
'Reagent' ,'Whetstone' ,'Other Consumable' ,'Companion' ,
'Mount'
)
and ((type != 'Armor') or (false))
and ((type != 'Weapon') or (false))
order by
type asc
,itemlevel asc
,name asc
/*
Some thoughts:
==============
0 - Formatting really matters, in SQL even more than most languages.
1 - consider selecting only the columns you need, not "*"
2 - use of table aliases makes it short & clear ("MI", "MIL" in my example)
3 - joins in the WHERE clause will un-clutter your FROM clause
4 - use NOT IN for long lists
5 - logically, the last two lines can be added to the "type not in" section.
I'm not sure why you have the "or false", but I'll assume some good reason
and leave them here.
*/
A: Default MySQL 5.0 server limitation is "1MB", configurable up to 1GB.
This is configured via the max_allowed_packet setting on both client and server, and the effective limitation is the lessor of the two.
Caveats:
*It's likely that this "packet" limitation does not map directly to characters in a SQL statement. (Surely you want to take into account character encoding within the client, some packet metadata, etc.)
A: From a practical perspective, I generally consider any SELECT that ends up taking more than 10 lines to write (putting each clause/condition on a separate line) to be too long to easily maintain. At this point, it should probably be done as a stored procedure of some sort, or I should try to find a better way to express the same concept--possibly by creating an intermediate table to capture some relationship I seem to be frequently querying.
Your mileage may vary, and there are some exceptionally long queries that have a good reason to be. But my rule of thumb is 10 lines.
Example (mildly improper SQL):
SELECT x, y, z
FROM a, b
WHERE fiz = 1
AND foo = 2
AND a.x = b.y
AND b.z IN (SELECT q, r, s, t
FROM c, d, e
WHERE c.q = d.r
AND d.s = e.t
AND c.gar IS NOT NULL)
ORDER BY b.gonk
This is probably too large; optimizing, however, would depend largely on context.
Just remember, the longer and more complex the query, the harder it's going to be to maintain.
A: Most databases support stored procedures to avoid this issue. If your code is fast enough to execute and easy to read, you don't want to have to change it in order to get the compile time down.
An alternative is to use prepared statements so you get the hit only once per client connection and then pass in only the parameters for each call
A: I'm assuming you mean by 'turned off' that a field doesn't have a value?
Instead of checking if something is not this, and it's also not that etc. can't you just check if the field is null? Or set the field to 'off', and check if type or whatever equals 'off'.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
}
|
Q: Arial Font doesn't display properly in Mac I have a flash movie with a dynamic text that supposedly is Arial, and in my windows machine it displays as Arial. But when I tried it in a Mac, it shows as something like Times New Roman.
I tried every property available and can't seem to get it to show as Arial on the Mac.
I found another movie I had that didn't have this problem, so to pinpoint the problem I made a very simple movie.
First I took a dynamic text from the other movie that worked and pasted it a new fla. Then I created a new text, and copied every property one by one. When I published it, the original text was showing as Arial, but not the new one, even if they had the same properties! (at least the ones I can edit in flash's properties editor.
I'm using Adobe Flash CS3 Professional.
What do you think can be the problem?? Are there any properties that aren't in the property editor? (I also checked filters, and Transformations)
Both are Dynamic texts, with no Instance Name, "Anti-alias for animation", Multiline, I'm not embeding the font and have checked "Render text as HTML".
A: When you use a dynamic text field, you have two options - either you embed (part of) the font into your SWF, or you use device fonts. If you embed, then the actual character shapes will be built into your SWF; if you don't, then you're including only the font's name - and if the OS doesn't have any fonts of that name, it will choose a default instead.
From your issues with the field you copied from another file, it sounds like you may have missed the "Embed" settings. Look for the button labeled "Embed settings" or similar in the Property Inspector.
If you choose to Embed, then you are guaranteed that your text will render in Arial on all platforms. However, this only holds true for the characters you embed. If you embed only the capital letters, and then set the text to "Hello", on the screen all you'll see is "H". (Be careful of embedding the entire font - for full unicode fonts that will be several megabytes, since they include Japanese and Chinese and so on.)
If you choose not to embed, then to avoid the problems you're having you should probably use one of the "device" fonts listed first in your font menu: _sans, _serif, _typewriter. In nearly all cases, these will translate to Arial, Times, and Courier on the PC, and similar fonts on the Mac.
A: Use Arial/Helvetica FontFamily
A: If you're not embedding the font then you're at the mercy of the player, and Arial is a Windows-only font. If all you want is the system's sans-serif font, try setting the font name to _sans
A: You can embed fonts in Flash.
A: Arial is the Windows version of Helvetica. DTP was invented on the Mac, and Microsoft basically created their own versions of the Mac fonts sometime around Windows 3.0.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: STL vectors with uninitialized storage? I'm writing an inner loop that needs to place structs in contiguous storage. I don't know how many of these structs there will be ahead of time. My problem is that STL's vector initializes its values to 0, so no matter what I do, I incur the cost of the initialization plus the cost of setting the struct's members to their values.
Is there any way to prevent the initialization, or is there an STL-like container out there with resizeable contiguous storage and uninitialized elements?
(I'm certain that this part of the code needs to be optimized, and I'm certain that the initialization is a significant cost.)
Also, see my comments below for a clarification about when the initialization occurs.
SOME CODE:
void GetsCalledALot(int* data1, int* data2, int count) {
  int mvSize = memberVector.size();
memberVector.resize(mvSize + count); // causes 0-initialization
for (int i = 0; i < count; ++i) {
memberVector[mvSize + i].d1 = data1[i];
memberVector[mvSize + i].d2 = data2[i];
}
}
A: To clarify on reserve() responses: you need to use reserve() in conjunction with push_back(). This way, the default constructor is not called for each element, but rather the copy constructor. You still incur the penalty of setting up your struct on stack, and then copying it to the vector. On the other hand, it's possible that if you use
vect.push_back(MyStruct(fieldValue1, fieldValue2))
the compiler will construct the new instance directly in the memory that belongs to the vector. It depends on how smart the optimizer is. You need to check the generated code to find out.
A: You can use boost::noinit_adaptor to default initialize new elements (which is no initialization for built-in types):
std::vector<T, boost::noinit_adaptor<std::allocator<T>>> memberVector;
As long as you don't pass an initializer into resize, it default initializes the new elements.
A: So here's the problem: resize is calling insert, which does a copy construction from a default-constructed element for each of the newly added elements. To get this to zero cost you need to write your own default constructor AND your own copy constructor as empty functions. Doing this to your copy constructor is a very bad idea because it will break std::vector's internal reallocation algorithms.
Summary: You're not going to be able to do this with std::vector.
A: You can use a wrapper type around your element type, with a default constructor that does nothing. E.g.:
template <typename T>
struct no_init
{
T value;
no_init() { static_assert(std::is_standard_layout<no_init<T>>::value && sizeof(T) == sizeof(no_init<T>), "T does not have standard layout"); }
no_init(T& v) { value = v; }
T& operator=(T& v) { value = v; return value; }
no_init(no_init<T>& n) { value = n.value; }
no_init(no_init<T>&& n) { value = std::move(n.value); }
T& operator=(no_init<T>& n) { value = n.value; return value; }
T& operator=(no_init<T>&& n) { value = std::move(n.value); return value; }
T* operator&() { return &value; } // So you can use &(vec[0]) etc.
};
To use:
std::vector<no_init<char>> vec;
vec.resize(2ul * 1024ul * 1024ul * 1024ul);
A: Err...
try the method:
std::vector<T>::reserve(x)
It will enable you to reserve enough memory for x items without initializing any (your vector is still empty). Thus, there won't be a reallocation until you go over x.
The second point is that vector won't initialize the values to zero. Are you testing your code in debug?
After verification on g++, the following code:
#include <iostream>
#include <vector>
struct MyStruct
{
int m_iValue00 ;
int m_iValue01 ;
} ;
int main()
{
MyStruct aaa, bbb, ccc ;
std::vector<MyStruct> aMyStruct ;
aMyStruct.push_back(aaa) ;
aMyStruct.push_back(bbb) ;
aMyStruct.push_back(ccc) ;
aMyStruct.resize(6) ; // [EDIT] double the size
for(std::vector<MyStruct>::size_type i = 0, iMax = aMyStruct.size(); i < iMax; ++i)
{
std::cout << "[" << i << "] : " << aMyStruct[i].m_iValue00 << ", " << aMyStruct[0].m_iValue01 << "\n" ;
}
return 0 ;
}
gives the following results:
[0] : 134515780, -16121856
[1] : 134554052, -16121856
[2] : 134544501, -16121856
[3] : 0, -16121856
[4] : 0, -16121856
[5] : 0, -16121856
The initialization you saw was probably an artifact.
[EDIT] After the comment on resize, I modified the code to add the resize line. The resize effectively calls the default constructor of the object inside the vector, but if the default constructor does nothing, then nothing is initialized... I still believe it was an artifact (I managed the first time to have the whole vector zeroed with the following code:
aMyStruct.push_back(MyStruct()) ;
aMyStruct.push_back(MyStruct()) ;
aMyStruct.push_back(MyStruct()) ;
So...
:-/
[EDIT 2] Like already offered by Arkadiy, the solution is to use an inline constructor taking the desired parameters. Something like
struct MyStruct
{
MyStruct(int p_d1, int p_d2) : d1(p_d1), d2(p_d2) {}
int d1, d2 ;
} ;
This will probably get inlined in your code.
But you should anyway study your code with a profiler to be sure this piece of code is the bottleneck of your application.
A: std::vector must initialize the values in the array somehow, which means some constructor (or copy-constructor) must be called. The behavior of vector (or any container class) is undefined if you were to access the uninitialized section of the array as if it were initialized.
The best way is to use reserve() and push_back(), so that the copy-constructor is used, avoiding default-construction.
Using your example code:
struct YourData {
int d1;
int d2;
YourData(int v1, int v2) : d1(v1), d2(v2) {}
};
std::vector<YourData> memberVector;
void GetsCalledALot(int* data1, int* data2, int count) {
int mvSize = memberVector.size();
// Does not initialize the extra elements
memberVector.reserve(mvSize + count);
// Note: consider using std::generate_n or std::copy instead of this loop.
for (int i = 0; i < count; ++i) {
// Copy construct using a temporary.
memberVector.push_back(YourData(data1[i], data2[i]));
}
}
The only problem with calling reserve() (or resize()) like this is that you may end up invoking the copy-constructor more often than you need to. If you can make a good prediction as to the final size of the array, it's better to reserve() the space once at the beginning. If you don't know the final size though, at least the number of copies will be minimal on average.
In the current version of C++, the inner loop is a bit inefficient as a temporary value is constructed on the stack, copy-constructed to the vectors memory, and finally the temporary is destroyed. However the next version of C++ has a feature called R-Value references (T&&) which will help.
The interface supplied by std::vector does not allow for another option, which is to use some factory-like class to construct values other than the default. Here is a rough example of what this pattern would look like implemented in C++:
template <typename T>
class my_vector_replacement {
// ...
template <typename F>
    void push_back_using_factory(F factory) {
        // ... check size of array, and resize if needed.
        // Copy construct using placement new:
        new(arrayData + end) T(factory());
        end += sizeof(T);
}
char* arrayData;
size_t end; // Of initialized data in arrayData
};
// One of many possible implementations
struct MyFactory {
MyFactory(int* p1, int* p2) : d1(p1), d2(p2) {}
YourData operator()() const {
return YourData(*d1,*d2);
}
int* d1;
int* d2;
};
void GetsCalledALot(int* data1, int* data2, int count) {
// ... Still will need the same call to a reserve() type function.
// Note: consider using std::generate_n or std::copy instead of this loop.
for (int i = 0; i < count; ++i) {
// Copy construct using a factory
memberVector.push_back_using_factory(MyFactory(data1+i, data2+i));
}
}
Doing this does mean you have to create your own vector class. In this case it also complicates what should have been a simple example. But there may be times where using a factory function like this is better, for instance if the insert is conditional on some other value, and you would have to otherwise unconditionally construct some expensive temporary even if it wasn't actually needed.
A: I tested a few of the approaches suggested here.
I allocated a huge set of data (200GB) in one container/pointer:
Compiler/OS:
g++ (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Settings: (c++-17, -O3 optimizations)
g++ --std=c++17 -O3
I timed the total program runtime with linux-time
1.) std::vector:
#include <vector>
int main(){
constexpr size_t size = 1024lu*1024lu*1024lu*25lu;//25B elements = 200GB
std::vector<size_t> vec(size);
}
real 0m36.246s
user 0m4.549s
sys 0m31.604s
That is 36 seconds.
2.) std::vector with boost::noinit_adaptor
#include <vector>
#include <boost/core/noinit_adaptor.hpp>
int main(){
constexpr size_t size = 1024lu*1024lu*1024lu*25lu;//25B elements = 200GB
std::vector<size_t,boost::noinit_adaptor<std::allocator<size_t>>> vec(size);
}
real 0m0.002s
user 0m0.001s
sys 0m0.000s
So this solves the problem. Just allocating without initializing costs basically nothing (at least for large arrays).
3.) std::unique_ptr<T[]>:
#include <memory>
int main(){
constexpr size_t size = 1024lu*1024lu*1024lu*25lu;//25B elements = 200GB
auto data = std::unique_ptr<size_t[]>(new size_t[size]);
}
real 0m0.002s
user 0m0.002s
sys 0m0.000s
So basically the same performance as 2.), but does not require boost.
I also tested simple new/delete and malloc/free with the same performance as 2.) and 3.).
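For completeness, the plain new[]/delete[] variant mentioned above looks like this (new std::size_t[size] default-initializes, which for a built-in type means no initialization at all):
#include <cstddef>
int main(){
    constexpr std::size_t size = 1024lu*1024lu*1024lu*25lu;//25B elements = 200GB
    std::size_t* data = new std::size_t[size]; // allocated but not initialized
    delete[] data;
}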
So the default-construction can have a huge performance penalty if you deal with large data sets.
In practice you want to actually initialize the allocated data afterwards.
However, some of the performance penalty still remains, especially if the later initialization is performed in parallel.
E.g., I initialize a huge vector with a set of (pseudo)random numbers:
(now I use OpenMP for parallelization on a 24-core AMD Threadripper 3960X)
g++ -std=c++17 -fopenmp -O3
1.) std::vector:
#include <vector>
#include <random>
int main(){
constexpr size_t size = 1024lu*1024lu*1024lu*25lu;//25B elements = 200GB
std::vector<size_t> vec(size);
#pragma omp parallel
{
std::minstd_rand0 gen(42);
#pragma omp for schedule(static)
for (size_t i = 0; i < size; ++i) vec[i] = gen();
}
}
real 0m41.958s
user 4m37.495s
sys 0m31.348s
That is 42s, only 6s more than the default initialization.
The problem is that the initialization of std::vector is sequential.
2.) std::vector with boost::noinit_adaptor:
#include <vector>
#include <random>
#include <boost/core/noinit_adaptor.hpp>
int main(){
constexpr size_t size = 1024lu*1024lu*1024lu*25lu;//25B elements = 200GB
std::vector<size_t,boost::noinit_adaptor<std::allocator<size_t>>> vec(size);
#pragma omp parallel
{
std::minstd_rand0 gen(42);
#pragma omp for schedule(static)
for (size_t i = 0; i < size; ++i) vec[i] = gen();
}
}
real 0m10.508s
user 1m37.665s
sys 3m14.951s
So even with the random-initialization, the code is 4 times faster because we can skip the sequential initialization of std::vector.
So if you deal with huge data sets and plan to initialize them afterwards in parallel, you should avoid using the default std::vector.
A: In C++11 (and boost) you can use the array version of unique_ptr to allocate an uninitialized array. This isn't quite an stl container, but is still memory managed and C++-ish which will be good enough for many applications.
auto my_uninit_array = std::unique_ptr<mystruct[]>(new mystruct[count]);
A: C++0x adds a new member function template emplace_back to vector (which relies on variadic templates and perfect forwarding) that gets rid of any temporaries entirely:
memberVector.emplace_back(data1[i], data2[i]);
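For example, applied to the GetsCalledALot function from the earlier answers (a sketch; it assumes memberVector holds a type with a matching two-int constructor, like the YourData struct shown above):
void GetsCalledALot(int* data1, int* data2, int count) {
  memberVector.reserve(memberVector.size() + count);
  for (int i = 0; i < count; ++i)
    memberVector.emplace_back(data1[i], data2[i]); // constructs in place, no temporary
}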
A: From your comments to other posters, it looks like you're left with malloc() and friends. Vector won't let you have unconstructed elements.
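If you go that route, a rough sketch looks like this: raw storage from malloc(), with placement new to construct elements only when you actually need them (the Data type and the sizes here are just illustrative):
#include <cstdlib>
#include <new>

struct Data {
    int d1, d2;
    Data(int a, int b) : d1(a), d2(b) {}
};

int main() {
    std::size_t capacity = 1000;
    // Raw, unconstructed storage:
    Data* buf = static_cast<Data*>(std::malloc(capacity * sizeof(Data)));
    std::size_t used = 0;
    // Construct an element in place only when it is needed:
    new (buf + used) Data(1, 2);
    ++used;
    // Destroy whatever was constructed, then free the raw storage:
    for (std::size_t i = 0; i < used; ++i)
        buf[i].~Data();
    std::free(buf);
    return 0;
}
This is essentially what std::vector does internally, minus the automatic growth and exception safety, so only go this way if the default-construction cost really shows up in a profiler.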
A: From your code, it looks like you have a vector of structs each of which comprises 2 ints. Could you instead use 2 vectors of ints? Then
copy(data1, data1 + count, back_inserter(v1));
copy(data2, data2 + count, back_inserter(v2));
Now you don't pay for copying a struct each time.
A: If you really insist on having the elements uninitialized and can sacrifice some methods like front(), back(), push_back(), use the vector from boost::numeric::ublas. It even allows you not to preserve existing elements when calling resize()...
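A minimal sketch (the second argument to ublas's resize() is the preserve flag):
#include <boost/numeric/ublas/vector.hpp>

int main() {
    boost::numeric::ublas::vector<int> v(100);
    // Grow the vector without copying the old elements over:
    v.resize(1000, false);
    return 0;
}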
A: I'm not sure about all those answers that say it is impossible or tell us about undefined behavior.
Sometimes, you need to use an std::vector. But sometimes, you know its final size. And you also know that your elements will be constructed later.
Example : When you serialize the vector contents into a binary file, then read it back later.
Unreal Engine has its TArray::SetNumUninitialized, why not std::vector?
To answer the initial question
"Is there any way to prevent the initialization, or is there an STL-like container out there with resizeable contiguous storage and uninitialized elements?"
yes and no.
No, because STL doesn't expose a way to do so.
Yes, because we're coding in C++, and C++ allows you to do a lot of things, if you're ready to be a bad guy (and if you really know what you are doing): you can hijack the vector.
Here is some sample code that works only for Microsoft's STL implementation on Windows; for another platform, look at how std::vector is implemented in order to use its internal members:
// This macro is to be defined before including VectorHijacker.h. Then you will be able to reuse the VectorHijacker.h with different objects.
#define HIJACKED_TYPE SomeStruct
// VectorHijacker.h
#ifndef VECTOR_HIJACKER_STRUCT
#define VECTOR_HIJACKER_STRUCT
struct VectorHijacker
{
std::size_t _newSize;
};
#endif
template<>
template<>
inline decltype(auto) std::vector<HIJACKED_TYPE, std::allocator<HIJACKED_TYPE>>::emplace_back<const VectorHijacker &>(const VectorHijacker &hijacker)
{
// We're modifying directly the size of the vector without passing by the extra initialization. This is the part that relies on how the STL was implemented.
_Mypair._Myval2._Mylast = _Mypair._Myval2._Myfirst + hijacker._newSize;
}
inline void setNumUninitialized_hijack(std::vector<HIJACKED_TYPE> &hijackedVector, const VectorHijacker &hijacker)
{
hijackedVector.reserve(hijacker._newSize);
hijackedVector.emplace_back<const VectorHijacker &>(hijacker);
}
But beware, this is hijacking we're speaking about. This is really dirty code, and this is only to be used if you really know what you are doing. Besides, it is not portable and relies heavily on how the STL implementation was done.
I won't advise you to use it because everyone here (me included) is a good person. But I wanted to let you know that it is possible contrary to all previous answers that stated it wasn't.
A: Use the std::vector::reserve() method. It won't resize the vector, but it will allocate the space.
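To illustrate: reserve() changes the capacity, not the size, so the vector stays empty until you add elements:
#include <vector>
#include <cassert>

int main() {
    std::vector<int> v;
    v.reserve(1000);       // allocates space for 1000 ints...
    assert(v.size() == 0); // ...but constructs none of them
    v.push_back(42);       // no reallocation until the capacity is exceeded
    return 0;
}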
A: Do the structs themselves need to be in contiguous memory, or can you get away with having a vector of struct*?
Vectors make a copy of whatever you add to them, so using vectors of pointers rather than objects is one way to improve performance.
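A minimal sketch of the pointer approach (note that the vector then only manages the pointers; you own the objects and must delete them yourself):
#include <cstddef>
#include <vector>

struct MyStruct { int d1, d2; };

int main() {
    std::vector<MyStruct*> v;
    v.push_back(new MyStruct); // only the pointer is copied into the vector
    // ... use the objects ...
    for (std::size_t i = 0; i < v.size(); ++i)
        delete v[i];
    return 0;
}
A smart-pointer or boost::ptr_vector variant would make that ownership automatic.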
A: I don't think STL is your answer. You're going to need to roll your own sort of solution using realloc(). You'll have to store a pointer and either the size, or number of elements, and use that to find where to start adding elements after a realloc().
struct MyData { int d1; int d2; };
MyData *memberArray = NULL;
int arrayCount = 0;
void GetsCalledALot(int* data1, int* data2, int count) {
    memberArray = (MyData*)realloc(memberArray, sizeof(MyData) * (arrayCount + count));
    for (int i = 0; i < count; ++i) {
        memberArray[arrayCount + i].d1 = data1[i];
        memberArray[arrayCount + i].d2 = data2[i];
    }
    arrayCount += count;
}
A: I would do something like:
void GetsCalledALot(int* data1, int* data2, int count)
{
const size_t mvSize = memberVector.size();
memberVector.reserve(mvSize + count);
for (int i = 0; i < count; ++i) {
memberVector.push_back(MyType(data1[i], data2[i]));
}
}
You need to define a ctor for the type that is stored in the memberVector, but that's a small cost as it will give you the best of both worlds; no unnecessary initialization is done and no reallocation will occur during the loop.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "51"
}
|
Q: How do I Upgrade to Subversion 1.5 On CentOS 5? My development server (CentOS 5) is running Subversion 1.4.2, and I wish to upgrade it to 1.5. I have read in various blogs and documents scattered around the web that this may be done by using RPMForge. I have followed the instructions found on CentOS Wiki, including installing yum-priorities and setting my priorities as indicated (1 and 2 for core repo sources, and 20 for RPMForge).
However, when I attempt to run:
$ yum info subversion
the version number given to me is still 1.4.2, with a status of Installed. My other option at this point is compiling from source, but I would like to find a package-managed solution for ease of future upgrades.
Any thoughts?
A: What you are trying to do is to replace a "core" package (one which is
contained in the CentOS repository) with a newer package from a "3rd
party" repository (RPMForge), which is what the priorities plugin is
designed to prevent.
The RPMForge repository contains both additional packages not found in
CentOS, as well as newer versions of core packages. Unfortunately, yum
is pretty stupid and will always update a package to the latest version
it can find in any repository. So running "yum update" with RPMforge
enabled will update half of your system with the latest (bleeding edge,
possibly unstable and less well supported) packages from RPMForge.
Therefore, the recommended way to use repos like RPMForge is to use them
only together with a yum plugin like "priorites", which prevents
packages from "high" priority repos to overwrite those from "low"
priority repos (the name of the "priority" parameter is very
misleading). This way you can savely install additional packages (that
are not in core) from RPMForge, which is what most people want.
Now to your original question ...
If you want to replace a core package, things get a little tricky.
Basically, you have two options:
*
*Uninstall the priority plugin, and disable the RPMForge repository by
default (set enabled = 0 in /etc/yum.repos.d/rpmforge.repo). You can
then selectively enable it on the command line:
yum --enablerepo=rpmforge install subversion
will install the latest subversion and dependencies from RPMForge.
The problem with this approach is that if there is an update to the
subversion package in RPMForge, you will not see it when the repo is
disabled. To keep subversion up to date, you have to remember to run
yum --enablerepo=rpmforge update subversion
from time to time.
*The second possibility is to use the priorites plugin, but manually
"mask" the core subversion package (add exclude=subversion to the
[base] and [update] sections in /etc/yum.repos.d/CentOS-Base.repo).
Now yum will behave as if there is no package named "subversion" in
the core repository and happily install the latest version from
RPMForge. Plus, you will always get the latest subversion updates
when running yum update.
A: 1.- if you are using yum-priorities, disable it in the file /etc/yum/pluginconf.d/priorities.conf
2.- check the version of subversion
$ rpm -qa|grep subversion
subversion-1.4.2-4.el5_3.1
subversion-1.4.2-4.el5_3.1
3.- search for the latest version of subversion in the rpmforge repository
$ yum --enablerepo=rpmforge check-update subversion
subversion.x86_64 1.6.6-0.1.el5.rf rpmforge
4.- now proceed to upgrade subversion with the rpmforge repository
$ yum shell
>erase mod_dav_svn-1.4.2-4.el5_3.1
>erase subversion-1.4.2-4.el5_3.1
>install mod_dav_svn-1.6.6-0.1.el5.rf
>install subversion-1.6.6-0.1.el5.rf.x86_64
>run
that's all. It works for me; I'm running CentOS 5.4
A: Thanks Matt - we also have the only distro of SVN 1.7 on SVN.
You may also want to try uberSVN.
A: If you install RPMForge's repos, you should then be able to get a newer package - this isn't working for you?
You should see rpmforge.list in /etc/apt/sources.list.d with a line like:
repomd http://apt.sw.be redhat/el$(VERSION)/en/$(ARCH)/dag
I just tested on a clean CentOS 5 install, and yum check-update shows
subversion.i386 1.5.2-0.1.el5.rf rpmforge
subversion-perl.i386 1.5.2-0.1.el5.rf rpmforge
So check your sources list and run check-update again.
Edit: Whoops, lost part of my answer. Added it back above.
A:
I'm not overly concerned about the other outdated packages at the moment, but as you can see there is no Subversion update available.
Nor any packages from rpmforge. It's your priority settings. Try disabling yum-priorities (change enabled=1 to enabled=0 in /etc/yum/pluginconf.d/priorities.conf) - then it should work.
So I guess the next question is why the priority is screwing it up.... I'm not sure on this, though.
Edit: See 8jean's answer for more about priorities.
A: It's up to v1.4.6 in Dag's repository.
You can try the one from Fedora's repo or have a bit of patience for the main repositories to upgrade it.
Making it from source is easy, read the INSTALL file when you download the source package, bear in mind CentOS may have moved where the files get installed. (Use "rpm -ql subversion" to see where the old files were installed to).
When v1.5.0 gets released to the repository, you can delete your built version and install using yum as before.
A: RPMForge is already in /etc/yum.repos.d/ as rpmforge.repo, and the contents are:
# Name: RPMforge RPM Repository for Red Hat Enterprise 5 - dag
# URL: http://rpmforge.net/
[rpmforge]
name = Red Hat Enterprise $releasever - RPMforge.net - dag
#baseurl = http://apt.sw.be/redhat/el5/en/$basearch/dag
mirrorlist = http://apt.sw.be/redhat/el5/en/mirrors-rpmforge
#mirrorlist = file:///etc/yum.repos.d/mirrors-rpmforge
enabled = 1
protect = 0
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rpmforge-dag
gpgcheck = 1
priority=20
A: I have that exact line in /etc/apt/sources.list.d/rpmforge.list.
When I run check-update, I get:
Loading "priorities" plugin
Loading "fastestmirror" plugin
Loading mirror speeds from cached hostfile
* epel: mirror.unl.edu
* rpmforge: fr2.rpmfind.net
* base: mirrors.portafixe.com
* updates: mirrors.portafixe.com
* addons: mirrors.portafixe.com
* extras: mirrors.portafixe.com
2202 packages excluded due to repository priority protections
bzip2.i386 1.0.3-4.el5_2 updates
bzip2-devel.i386 1.0.3-4.el5_2 updates
bzip2-libs.i386 1.0.3-4.el5_2 updates
libxml2.i386 2.6.26-2.1.2.6 updates
libxml2-devel.i386 2.6.26-2.1.2.6 updates
libxml2-python.i386 2.6.26-2.1.2.6 updates
perl.i386 4:5.8.8-15.el5_2.1 updates
sos.noarch 1.7-9.2.el5_2.2 updates
tzdata.noarch 2008e-1.el5 updates
I'm not overly concerned about the other outdated packages at the moment, but as you can see there is no Subversion update available.
A: All you need to do is get this script; it worked perfectly for me on CentOS 5.3:
http://wandisco.com/subversion/os/downloads
No, I don't work there or have any affiliation whatsoever... just found it and figured I would let you guys know.
Good luck.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96597",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
}
|
Q: Git - is it pull or rebase when working on branches with other people So if I'm using branches that are remote (tracked) branches, and I want to get the latest, I'm still unclear on whether I should be doing git pull or git rebase. I thought I had read that doing git rebase on a branch shared with other users can screw them up when they pull or rebase. Is that true? Should we all be using git pull?
A: git pull does a merge if you've got commits that aren't in the remote branch. git rebase rewrites any existing commits you have to be relative to the tip of the remote branch. They're similar in that they can both cause conflicts, but I think using git rebase if you can allows for smoother collaboration. During the rebase operation you can refine your commits so they look like they were newly applied to the latest revision of the remote branch. A merge is perhaps more appropriate for longer development cycles on a branch that have more history.
Like most other things in git, there is a lot of overlapping functionality to accommodate different styles of working.
A: Git pull is a combination of 2 commands
*
*git fetch (syncs your local repo with the newest stuff on the remote)
*git merge (merges the changes from the distant branch, if any, into your local tracking branch)
git rebase is only a rough equivalent to git merge. It doesn't fetch anything remotely. In fact, it doesn't do a proper merge either; it replays the commits of the branch you're standing on after the new commits from a second branch.
Its purpose is mainly to let you have a cleaner history. It doesn't take many merges by many people before the past history in gitk gets terribly spaghetti-like.
The best graphical explanation can be seen in the first 2 graphics here. But let me explain here with an example.
I have 2 branches: master and mybranch. When standing on mybranch I can run
git rebase master
and I'll get anything new in master inserted before my most recent commits in mybranch. This is perfect, because if I now merge or rebase the stuff from mybranch in master, my new commits are added linearly right after the most recent commits.
The problem you refer to happens if I rebase in the "wrong" direction. If I just got the most recent master (with new changes) and from master I rebase like this (before syncing my branch):
git rebase mybranch
Now what I just did is that I inserted my new changes somewhere in master's past. The main line of commits has changed. And due to the way git works with commit ids, all the commits (from master) that were just replayed over my new changes have new ids.
Well, it's a bit hard to explain just in words... Hope this makes a bit of sense :-)
Anyway, my own workflow is this:
*
*'git pull' new changes from remote
*switch to mybranch
*'git rebase master' to bring master's new changes in my commit history
*switch back to master
*'git merge mybranch', which only fast-forwards when everything in master is also in mybranch (thus avoiding the commit reordering problem on a public branch)
*'git push'
One last word. I strongly recommend using rebase when the differences are trivial (e.g. people working on different files or at least different lines). It has the gotcha I tried to explain just up there, but it makes for a much cleaner history.
As soon as there may be significant conflicts (e.g. a coworker has renamed something in a bunch of files), I strongly recommend merge. In this case, you'll be asked to resolve the conflict and then commit the resolution. On the plus side, a merge is much easier to resolve when there are conflicts. The down side is that your history may become hard to follow if a lot of people do merges all the time :-)
Good luck!
A: Check out the excellent Gitcasts on Branching and merging as well as rebasing.
A: Git rebase is a re-write of history. You should never do this on branches that are "public" (i.e., branches that you share with others). If someone clones your branch and then you rebase that branch -- then they can no longer pull/merge changes from your branch -- they'll have to throw their old one away and re-pull.
This article on packaging software with git is a very worthwhile read. It's more about managing software distributions but it's quite technical and talks about how branches can be used/managed/shared. They talk about when to rebase and when to pull and what the various consequences of each are.
In short, they both have their place but you need to really grok the difference.
A: If you want to pull source without affecting remote branches and without any changes in your local copy, it's best to use git pull.
I believe if you have a working branch that you have made changes to, use git rebase to change the base of that branch to be latest remote master, you will keep all of your branch changes, however the branch will now be branching from the master location, rather than where it was previously branched from.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
}
|
Q: How to scroll only the right side of a table, listview, or datagrid? Let's say I have data structures that're something like this:
Public Class AttendenceRecord
Public CourseDate As Date
Public StudentsInAttendence As Integer
End Class
Public Class Course
Public Name As String
Public CourseID As String
Public Attendance As List(Of AttendenceRecord)
End Class
And I want a table that looks something like this:
| Course Name | Course ID | [Attendence(0).CourseDate] | [Attendence(1).CourseDate]| ...
| Intro to CS | CS-1000 | 23 | 24 | ...
| Data Struct | CS-2103 | 15 | 14 | ...
How would I, in the general case, get everything to the right of Course ID to be horizontally scrollable, while holding Course Name and Course ID in place? Ideally using a table, listview, or datagrid inside ASP.NET and/or WinForms.
A: In pure .NET I don't know of anything. There are CSS solutions for a fixed header. But a fixed left column, in my experience, requires some JavaScript finagling.
Took me a minute to find the old example. Host has since gone down. http://web.archive.org/web/20080215013647/http://www.litotes.demon.co.uk/example_scripts/tableScroll.html
This is the mechanism I used to get it to work: Take a normal table, and separate it out into 4 other tables. Get the column widths and row heights to match up using business constraints, and then link the onscroll event to scroll the other tables.
A: You can get this functionality from the System.Windows.Forms.DataGridView control. When you create columns you can set them to be frozen which will then only scroll those columns to the right of the frozen column(s).
A: Here's an example using just HTML and CSS to achieve what I think you're looking for:
http://www.shrutigupta.com/index.php/2005/12/12/how-to-create-table-with-first-column-frozen/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96618",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to prevent others from using my .Net assembly? I have an assembly which should not be used by any application other than the designated executable. Please give me some instructions to do so.
A: 100% completely impossible without jumping through some hoops.
One of the perks of using .NET is the ability to use reflection, that is, load up an assembly and inspect it, dynamically call methods, etc. This is what makes interop between VB.NET and F# possible.
However, since your code is in a managed assembly, that means that anybody can add a reference to your code and invoke its public methods, or load it using reflection and call private methods. Even if you 'obfuscate' your code, people will still be able to use reflection and invoke your code. However, since all the names will be masked, doing anything is prohibitively difficult.
If you must ship your .NET code in a fashion that prevents other people from executing it, you might be able to NGEN your binary (compile it to x86) and ship those binaries.
I don't know the specifics of your situation, but obfuscation should be good enough.
A: You could also look at using the Netz executable packer and compressor.
This takes your assemblies and your .exe file and packs them into a single executable so they're not visible to the outside world without a bit of digging around.
My guess is that this is sufficient to prevent access for most .net programmers.
A big benefit of the .netz approach is that it does not require you to change your code. Another benefit is that it really simplifies your installation process.
A: You should be able to make everything internally scoped, and then use the InternalsVisibleTo Attribute to grant only that one assembly access to the internal methods.
A: The Code Access Security attribute that @Charles Graham mentions is StrongNameIdentityPermissionAttribute
A: As some people have mentioned, use the InternalsVisibleTo attribute and mark everything as internal. This of course won't guard against reflection.
One thing that hasn't been mentioned is to ILMerge your assemblies into your main .exe/.dll/whatever. This will up the barrier for entry a bit (people won't be able to see your assembly sitting on its own asking to be referenced), but it won't stop the reflection route.
UPDATE: Also, IIRC, ILMerge has a feature where it can automatically internalise the merged assemblies, which would mean you don't need to use InternalsVisibleTo at all
A: I'm not sure if this is an available avenue for you, but perhaps you can host the assembly using WCF or ASP.NET web services and use some sort of authentication scheme (LDAP, public/rpivate key pairs, etc.) to ensure only allowed clients connect. This would keep your assembly physically out of anyone else's hands and you can control who connects to it. Just a thought.
A: You can sign the assembly and the executable with the same key and then put a check in the constructor of the classes you want to protect:
public class NotForAnyoneElse {
    public NotForAnyoneElse() {
        // Public key tokens are byte arrays, so they must be compared
        // element-wise (SequenceEqual requires System.Linq); the != operator
        // would only compare references and would always be true here.
        byte[] mine = typeof(NotForAnyoneElse).Assembly.GetName().GetPublicKeyToken();
        byte[] caller = Assembly.GetEntryAssembly().GetName().GetPublicKeyToken();
        if (!mine.SequenceEqual(caller)) {
            throw new SomeException(...);
        }
    }
}
A: In .Net 2.0 or better, make everything internal, and then use Friend Assemblies
http://msdn.microsoft.com/en-us/library/0tke9fxk.aspx
This will not stop reflection. I want to incorporate some of the information from below. If you absolutely need to stop anyone from calling, probably the best solution is:
*
*ILMerge the .exe and .dll
*obfuscate the final .exe
You could also walk up the call stack and get the assembly for each caller and make sure that they are all signed with the same key as the assembly.
A: You might be able to set this in the Code Access Security policies on the assembly.
A: You can use obfuscation.
That will turn:
int MySecretPrimeDetectionAlgorithm(int lastPrimeNumber);
Into something unreadable like:
int Asdfasdfasdfasdfasdfasdfasdf(int qwerqwerqwerqwerqwerqwer);
Others will still be able to use your assembly, but it will be difficult to make any sense of it.
A: It sounds like you are looking for a protection or obfuscation tool. While there isn't a silver bullet, the protection tool I recommend is smartassembly. Some alternatives are Salamander Obfuscator, dotfuscator, and Xenocode.
Unfortunately, if you give your bytes to someone to be read... if they have enough time and effort, they can find a way to load and call your code. To preemptively answer a comment I see you ask frequently: Salamander will prevent your code from being loaded directly into the Reflector tool, but I've had better (ie: more reliable) experiences with smartassembly.
Hope this helps. :)
A: If the assembly was a web service for example, you could ensure the designated executable passes a secret value in the SOAP message.
A: Just require a pass code to be sent in via a function call; if it hasn't been authorized, then nothing works. For example, .setAuthorizeCode('123456'), and then in every single place that can be used, have it check whether authorizeCode != 123456 and throw an error or just exit out... It doesn't sound like a good answer for re-usability, but that is exactly the point.
The only time it could be used is by you, when you hard-code the authorize code into the program.
Just a thought; it could be what you are looking for, or could inspire you to something better.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Problems with Visual Studio Express installation I just installed 'Visual C# 2008 Express Edition' and 'Visual Web Developer 2008 Express Edition' on my Vista machine. Previously I had been running these on Win XP. When launching the software, starting a new project, and trying to build it, I get warnings like "The referenced component 'System' could not be found."; one row for each namespace used. I have .NET Framework 3.5 installed and am able to browse through the tabs in 'Add reference', but I cannot make it work. (A re-install did not help.) Is there an easy fix?
A: Problem solved. I used the most radical solution I could come up with - a clean Vista install. Somehow reinstalling Visual Studio does not include all essential steps. First time the software is launched it configures itself. Something must have gone wrong the first time and when the procedure was done again it tried to use to "broken" configuration. Well, now it is fixed.
A: Are you launching via "Run As Administrator"? It's possible the permissions on your .Net framework directories aren't what they should be.
A: Do you have any patches or updates that there may be for that software? The other thought would be to copy the System.dll into the bin folder of the project which is what I used to do in previous projects to get things working.
A: I really didn't want to follow the accepted answer and do a clean install. A better solution I got from the MSDN forums:
Go to Control Panel > Programs and Features > Turn Windows Features on or off...
Then enable everything under Microsoft .Net Framework 3.0
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Displaying the correct size in Windows' Add/Remove Programs I have a need to manually setup the registry settings for an entry in Window's Add/Remove Programs (for XP and Vista). Everything works except for the displayed size.
According to this 2004 post by Raymond Chen it should be possible by setting the EstimatedSize registry value but it doesn't work. This more recent MSDN page says the EstimatedSize value is "Determined and set by the Windows Installer." Does anyhow know how I can manually set the size value outside the Windows Installer?
(Suggestions to use a single large MSI are appreciated, but we have done that in the past and it's proven difficult and inflexible. Our current approach is a custom application to manage hundreds of smaller MSI packages, but this means the application itself has to write out the registry settings for Add/Remove Programs.)
A: You could try building the sub-projects into MSMs (merge modules) and then linking the lot into a single MSI - you get the benefits of having individual modules, and a single MSI, that way.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How can I make MS Access Query Parameters Optional? I have a query that I would like to filter in different ways at different times. The way I have done this right now is by placing parameters in the criteria field of the relevant query fields; however, there are many cases in which I do not want to filter on a given field but only on the other fields. Is there any way in which a wildcard of some sort can be passed to the criteria parameter so that I can bypass the filtering for that particular call of the query?
A: Note the * wildcard with the LIKE keyword will only have the desired effect in ANSI-89 Query Mode.
Many people mistakenly assume the wildcard character in Access/Jet is always *. Not so. Jet has two wildcards: % in ANSI-92 Query Mode and * in ANSI-89 Query Mode.
ADO is always ANSI-92 and DAO is always ANSI-89 but the Access interface can be either.
When using the LIKE keyword in a database object (i.e. something that will be persisted in the mdb file), you should to think to yourself: what would happen if someone used this database using a Query Mode other than the one I usually use myself? Say you wanted to restrict a text field to numeric characters only and you'd written your Validation Rule like this:
NOT LIKE "*[!0-9]*"
If someone unwittingly (or otherwise) connected to your .mdb via ADO then the validation rule above would allow them to add data with non-numeric characters and your data integrity would be shot. Not good.
Better IMO to always code for both ANSI Query Modes. Perhaps this is best achieved by explicitly coding for both Modes e.g.
NOT LIKE "*[!0-9]*" AND NOT LIKE "%[!0-9]%"
But with more involved Jet SQL DML/DDL, this can become very hard to achieve concisely. That is why I recommend using the ALIKE keyword, which uses the ANSI-92 Query Mode wildcard character regardless of Query Mode e.g.
NOT ALIKE "%[!0-9]%"
Note ALIKE is undocumented (and I assume this is why my original post got marked down). I've tested this in Jet 3.51 (Access97), Jet 4.0 (Access2000 to 2003) and ACE (Access2007) and it works fine. I've previously posted this in the newsgroups and had the approval of Access MVPs. Normally I would steer clear of undocumented features myself but make an exception in this case because Jet has been deprecated for nearly a decade and the Access team who keep it alive don't seem interested in making deep changes to the engines (or bug fixes!), which has the effect of making the Jet engine a very stable product.
For more details on Jet's ANSI Query modes, see About ANSI SQL query mode.
A: If you construct your query like so:
PARAMETERS ParamA Text ( 255 );
SELECT t.id, t.topic_id
FROM SomeTable t
WHERE t.id Like IIf(IsNull([ParamA]),"*",[ParamA])
All records will be selected if the parameter is not filled in.
A: Back to my previous example in your previous question. Your parameterized query is a string looking like this:
qr = "Select Tbl_Country.* From Tbl_Country WHERE id_Country = [fid_country]"
depending on the nature of fid_Country (number, text, guid, date, etc.), you'll have to replace it with a wildcard value and the appropriate delimiting characters:
qr = replace(qr,"[fid_country]","""*""")
In order to fully allow wild cards, your original query could also be:
qr = "Select Tbl_Country.* From Tbl_Country _
WHERE id_Country LIKE [fid_country]"
You can then get wild card values for fid_Country such as
qr = replace(qr,"[fid_country]","G*")
Once you're done with that, you can use the string to open a recordset
set rs = currentDb.openRecordset(qr)
A: I don't think you can. How are you running the query?
I'd say if you need a query that has that many open variables, put it in a vba module or class, and call it, letting it build the string every time.
A: I'm not sure this helps, because I suspect you want to do this with a saved query rather than in VBA; however, the easiest thing you can do is build up a query line by line in VBA, and then creating a recordset from it.
A quite hackish way would be to re-write the saved query on the fly and then access that; however, if you have multiple people using the same DB you might run into conflicts, and you'll confuse the next developer down the line.
You could also programatically pass default value to the query (as discussed in you r previous question)
A: Well, you can return non-null values by passing * as the parameter for fields you don't wish to use in the current filter. In Access 2003 (and possibly earlier and later versions), if you are using like [paramName] as your criterion for a numeric, Text, Date, or Boolean field, an asterisk will display all records (that match the other criteria you specify). If you want to return null values as well, then you can use like [paramName] or Is Null as the criterion so that it returns all records. (This works best if you are building the query in code. If you are using an existing query, and you don't want to return null values when you do have a value for filtering, this won't work.)
If you're filtering a Memo field, you'll have to try another approach.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: What's the difference between | and || in Java?
Possible Duplicate:
Why do we usually use || not |, what is the difference?
Title says it all. It's been discussed for other languages, but I haven't seen it for Java yet.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: How do I get flash to reload the parent HTML page it is embedded in? I have a flash app (SWF) running Flash 8 embedded in an HTML page. How do I get flash to reload the parent HTML page it is embedded in? I've tried using ExternalInterface to call a JavaScript function to reload the page but that doesn't seem to work.
A: Try something like this:
getURL("javascript:location.reload(true)");
A: Simple one line solution.
ExternalInterface.call("document.location.reload", true);
A: Quick and dirty: This will work in most cases (without modifying the HTML page at all):
import flash.external.ExternalInterface;
ExternalInterface.call("history.go", 0);
A: Check out ExternalInterface in ActionScript. Using it you can call any JavaScript function in your code:
if (ExternalInterface.available)
{
var result = ExternalInterface.call("reload");
}
In the Embedding HTML code enter a JavaScript function:
function reload()
{
document.location.reload(true);
return true;
}
This has the advantage that you can also check, if the function call succeeded and act accordingly. getUrl along with a call to JavaScript should not be used today anymore. It's an old hack.
A: In Flash 10 you can do:
navigateToURL(new URLRequest("path_to_page"), "_self");
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Organizing Extension Methods How do you organize your Extension Methods? Say if I had extensions for the object class and string class I'm tempted to separate these extension methods into classes IE:
public class ObjectExtensions
{
...
}
public class StringExtensions
{
...
}
am I making this too complicated or does this make sense?
A: There are two ways that I organize the extension methods which I use,
1) If the extension is specific to the project I am working on, then I keep it in the same project/assembly, but in its own namespace.
2) If the extension is of a kind so that I may or is using it in other projects too, then I separate them in a common assembly for extensions.
The most important thing to keep in mind is, what is the scope which I will be using these in? Organizing them isn't hard if I just keep this in mind.
A: I organize extension methods using a combination of namespace and class name, and it's similar to the way you describe in the question.
Generally I have some sort of "primary assembly" in my solution that provides the majority of the shared functionality (like extension methods). We'll call this assembly "Framework" for the sake of discussion.
Within the Framework assembly, I try to mimic the namespaces of the things for which I have extension methods. For example, if I'm extending System.Web.HttpApplication, I'd have a "Framework.Web" namespace. Classes like "String" and "Object," being in the "System" namespace, translate to the root "Framework" namespace in that assembly.
Finally, naming goes along the lines you've specified in the question - the type name with "Extensions" as a suffix. This yields a class hierarchy like this:
*
*Framework (namespace)
*
*Framework.ObjectExtensions (class)
*Framework.StringExtensions (class)
*Framework.Web (namespace)
*
*Framework.Web.HttpApplicationExtensions (class)
The benefit is that, from a maintenance perspective, it's really easy later to go find the extension methods for a given type.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
}
|
Q: Any suggestions for effectively testing AJAX enabled web pages using MSVS Tester Edition Tools? It seems like MS really left a massive gaping hole in their automated testing tools in Visual Studio for web pages with AJAX components and I have been hard pressed to find any commentary or third party add-ons that remedy the problem. Anyone have any advice on automating web tests in MSVS for AJAX pages?
A: I eventually gave up trying, and just stuck with WATIR
A: I don't know if this will help, but you can try this:
https://github.com/pivotal/jsunit
EDIT:Sorry I reread your Q and realized you meant specific to VS. I don't know if you are familiar with Script#, but I had read some talk a little while back that someone was building a testing framework to use with that, and Script# can be used with MSAjax. Might be worth some investigation.
http://scriptsharp.com/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Embedding one dll inside another as an embedded resource and then calling it from my code I've got a situation where I have a DLL I'm creating that uses another third party DLL, but I would prefer to be able to build the third party DLL into my DLL instead of having to keep them both together if possible.
This with is C# and .NET 3.5.
The way I would like to do this is by storing the third party DLL as an embedded resource which I then place in the appropriate place during execution of the first DLL.
The way I originally planned to do this is by writing code to put the third party DLL in the location specified by System.Reflection.Assembly.GetExecutingAssembly().Location.ToString()
minus the last /nameOfMyAssembly.dll. I can successfully save the third party .DLL in this location (which ends up being
C:\Documents and Settings\myUserName\Local Settings\Application
Data\assembly\dl3\KXPPAX6Y.ZCY\A1MZ1499.1TR\e0115d44\91bb86eb_fe18c901
), but when I get to the part of my code requiring this DLL, it can't find it.
Does anybody have any idea as to what I need to be doing differently?
A: I've had success doing what you are describing, but because the third-party DLL is also a .NET assembly, I never write it out to disk, I just load it from memory.
I get the embedded resource assembly as a byte array like so:
Assembly resAssembly = Assembly.LoadFile(assemblyPathName);
byte[] assemblyData;
using (Stream stream = resAssembly.GetManifestResourceStream(resourceName))
{
assemblyData = ReadBytesFromStream(stream);
stream.Close();
}
Then I load the data with Assembly.Load().
Finally, I add a handler to AppDomain.CurrentDomain.AssemblyResolve to return my loaded assembly when the type loader looks for it.
See the .NET Fusion Workshop for additional details.
A: You can achieve this remarkably easily using Netz, a .net NET Executables Compressor & Packer.
A: Once you've embedded the third-party assembly as a resource, add code to subscribe to the AppDomain.AssemblyResolve event of the current domain during application start-up. This event fires whenever the Fusion sub-system of the CLR fails to locate an assembly according to the probing (policies) in effect. In the event handler for AppDomain.AssemblyResolve, load the resource using Assembly.GetManifestResourceStream and feed its content as a byte array into the corresponding Assembly.Load overload. Below is how one such implementation could look like in C#:
AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
{
var resName = args.Name + ".dll";
var thisAssembly = Assembly.GetExecutingAssembly();
using (var input = thisAssembly.GetManifestResourceStream(resName))
{
return input != null
? Assembly.Load(StreamToBytes(input))
: null;
}
};
where StreamToBytes could be defined as:
static byte[] StreamToBytes(Stream input)
{
var capacity = input.CanSeek ? (int) input.Length : 0;
using (var output = new MemoryStream(capacity))
{
int readLength;
var buffer = new byte[4096];
do
{
readLength = input.Read(buffer, 0, buffer.Length);
output.Write(buffer, 0, readLength);
}
while (readLength != 0);
return output.ToArray();
}
}
Finally, as a few have already mentioned, ILMerge may be another option to consider, albeit somewhat more involved.
A: In the end I did it almost exactly the way raboof suggested (and similar to what dgvid suggested), except with some minor changes and some omissions fixed. I chose this method because it was closest to what I was looking for in the first place and didn't require using any third party executables and such. It works great!
This is what my code ended up looking like:
EDIT: I decided to move this function to another assembly so I could reuse it in multiple files (I just pass in Assembly.GetExecutingAssembly()).
This is the updated version which allows you to pass in the assembly with the embedded dlls.
embeddedResourcePrefix is the string path to the embedded resource, it will usually be the name of the assembly followed by any folder structure containing the resource (e.g. "MyComapny.MyProduct.MyAssembly.Resources" if the dll is in a folder called Resources in the project). It also assumes that the dll has a .dll.resource extension.
public static void EnableDynamicLoadingForDlls(Assembly assemblyToLoadFrom, string embeddedResourcePrefix) {
AppDomain.CurrentDomain.AssemblyResolve += (sender, args) => { // had to add =>
try {
string resName = embeddedResourcePrefix + "." + args.Name.Split(',')[0] + ".dll.resource";
using (Stream input = assemblyToLoadFrom.GetManifestResourceStream(resName)) {
return input != null
? Assembly.Load(StreamToBytes(input))
: null;
}
} catch (Exception ex) {
_log.Error("Error dynamically loading dll: " + args.Name, ex);
return null;
}
}; // Had to add the trailing semicolon
}
private static byte[] StreamToBytes(Stream input) {
int capacity = input.CanSeek ? (int)input.Length : 0;
using (MemoryStream output = new MemoryStream(capacity)) {
int readLength;
byte[] buffer = new byte[4096];
do {
readLength = input.Read(buffer, 0, buffer.Length); // had to change to buffer.Length
output.Write(buffer, 0, readLength);
}
while (readLength != 0);
return output.ToArray();
}
}
A: Instead of writing the assembly to disk you can try to do Assembly.Load(byte[] rawAssembly) where you create rawAssembly from the embedded resource.
A: There's a tool called IlMerge that can accomplish this: http://research.microsoft.com/~mbarnett/ILMerge.aspx
Then you can just make a build event similar to the following.
Set Path="C:\Program Files\Microsoft\ILMerge"
ilmerge /out:$(ProjectDir)\Deploy\LevelEditor.exe $(ProjectDir)\bin\Release\release.exe $(ProjectDir)\bin\Release\InteractLib.dll $(ProjectDir)\bin\Release\SpriteLib.dll $(ProjectDir)\bin\Release\LevelLibrary.dll
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "60"
}
|
Q: Are iframes a terrible idea? I'm building a widget, and I've been using iframes to present content within it. At some point, I might start serving third party HTML and JS, so I thought iframes would be a good idea.
It does make the widget javascript a little more complicated, and I'm concerned that this might not be the best implementation.
Do you have any advice? It would be a huge help to hear what other people think about iframes.
A: One thing I discovered recently is that .aspx pages embedded inside iframes sometimes have problems with losing cookies, which led to lost session state in an application I was involved with.
For me, it was in a scenario where a different development shop was consuming one of my .aspx pages in their own page. This means we were on separate servers, which may or may not be salient.
Apparently this was caused by the parent page rejecting cookies for the child page... As goes the session cookie, so goes the session.
The specific mechanics of how this works are a little involved: More Details
This problem did not impact FireFox, but it did show up in IE7 and it was a real mystery for a few hours.
Also, I have to contradict the article I linked to above on one point. The article says that you don't get this if the containing page is also an .aspx... In this case, that was not true because both pages were .aspxs.
That casts some doubt on everything else the article says about this situation, but it did lead to a resolution, so that's something as well.
As the article suggested, I put in the following code, which injects a p3p (Privacy Preferences Project - I had never heard of it) header in the page's Init event:
HttpContext.Current.Response.AddHeader("p3p", "CP=""IDC DSP COR ADM DEVi TAIi PSA PSD IVAi IVDi CONi HIS OUR IND CNT""")
...And that fixed the problem.
A: I'm going to disagree with the majority and say that yes, iframes are an absolutely terrible idea. Anyone that has worked within the Web Design community for a while will agree that iframes are pure evil and should be avoided unless ABSOLUTELY essential.
My reason for believing that they are bad is that they break the navigational pattern of a web page. By using an iframe you can effectively break the back and forward buttons on browsers and confuse your users. It breaks the entire idea behind the HTTP protocol: that a URL will always lead to a unique location. If the iframe were a horse it would've been retired long ago. There are other ways to serve content dynamically, and these should be used instead.
If you're creating a widget then the immediate concerns with using iframes disappear (bad for Search Engines, bad for Bookmarking, etc), but either way content would be better served dynamically or even in a new window rather than in an iframe.
A: There is only one "really bad" thing with them that I'm aware of.
If your 3rd party does some JavaScript that attempts to modify their DOM a bit too early... IE6 and IE7 will throw the oh-so-unhelpful "Operation Aborted" error, then blank out not only the iframe, but the entire surrounding page (e.g. your site appears down).
It isn't fixed in IE8, but the crash is better handled.
A: Personally, I'd avoid it if you can without too much hassle. Using Javascript (or AJAX if you need to load them dynamically), you can quite easily just use a div and change the contents as necessary - in some cases this will give you much more flexibility and will simplify your JS, especially if there's a lot of interaction between your widget and the rest of the page.
That said, I'd investigate both options, and if the JS path seems too tricky or complicated, just go with iframes.
A: In my experience, iframes are either hacks or time-savers - make sure that if you're using them they're neccesary for those reasons. If you have control over the content (or can gain control through mirroring or scraping) you should consider using AJAX or server-side includes to pull external data onto and push it off of the page - it'll end up being more flexible, more robust, and easier to manage in the end.
A: No, nothing wrong with iframes. Iframes are probably a better idea if you're going to start serving third party content.
The upcoming HTML5 spec also plans to build more security features into iframes for situations like this, so I would consider it good practice to use them now also.
A: Depends what the widget does. Iframes have their place, but they do cause few layout headaches (not to mention making your js more complicated) so most people tend to avoid them unless absolutely necessary..
A: iframes, like frames, are just controls to use for the task at hand. As such, it is neither good nor bad in itself, but could be good or bad based on the task at hand and the client's requirements. As far as I know, all modern browsers (and non-linux users) will be able to "see and consume" iframes without a problem.
A: Before XMLHTTPRequest became widely used, people were using a combination of JavaScript and iframes to serve up content in a dynamic fashion without doing full page refreshes.
There's lots of information about developing sites this way, so you should have a relatively easy time finding workarounds for a lot of the snags that you are likely to hit.
The one thing that I have found to be a pain is cross-domain use of JavaScript in iframes. If the page you embed in the iframe is from a different domain than the "parent" page, browsers have security restrictions against letting you access one from the other. The trick is for both pages to declare
document.domain = 'somedomain.com';
There's plenty of stuff on the Web about this kind of workaround.
Good luck!
A: A good option is to use the overflow CSS property. The default value is visible but you can set it to hidden, scroll or auto. I would use auto in your case. If your content gets too big it will look like you have an iframe but it is still right on the page.
see: overflow property
A: Iframes are not evil they are just another tool like anything else and to determine their merits you have to determine the context in which they will be used. Google Image Search, and several other high profile sites, use iframes for limited purposes.
In general I find they are used for branding or to enable a user to return to a site that redirects the user off site.
Note, if you are using cross domain iframes e.g. an iframe that refers to a domain outside where the page is being served you are limited by design for security reasons and cannot access through javascript the internals of a DOM outside the domain it is associated with.
Also please note many sites prevent their site from being embeded and will stripe the iframe off (redirecting the top url to their domain).
A: Not necessarily, as long as the content within the iframe is predictable.
A: Technically there is nothing wronger with iFrame that with alternatives. But semantically, there are evil.
The Web is based on HTTP, a protocol that says a given URL will always leads to a unique ressource.
Using iFrame, you just serve several ressources melted in a web pages behind one URL for all of them. If you have concerns about how the Web should grow, it's troublesome. What's more, for the search engine robots, it's tricky.
A: Re: "the entire idea behind the HTTP protocol; that a URL will always lead to a unique location"
I serve my entire CMS from the same URL for security and scaleability (using mostly POST instead of GET parameters). I don't want secure content visible without authentication, and a dispatch system makes development easier for me as I don't have to worry about authentication for every new page.
Also, for some applications SEO is not applicable (such as for web-based ERP).
I use an iFrame for serving content from a PHP generated assembly tree. I don't want the tree (and node visibilities) refreshed whenever the user wants to view details for a part/assembly.
A: There are several usability and accessibility issues with iframes. Some browsers and screenreaders cannot display iframes, so you should provide alternative content:
<iframe src="content.html">
<p>
This content will only be displayed by browsers that do not support
iframes. You should provide a link to the content, or in your
case an alternative way to use your widget.
</p>
</iframe>
If you start serving third party content, you should watch out for the iframe grabbing focus after it has finished loading. While a minor annoyance for regular users, it can be very confusing for users browsing with screenreaders.
A: There is a significant issue with iframes that hardly gets a mention but which bugs the stuffing out of me.
Our colleague has a lifetime of work invested in a dynamically changing database, which we have loaded into Google Docs spreadsheets and then display on our site alongside a lot of supporting material.
There is absolutely nothing to stop someone grabbing the iframe code out my page source and shoving it onto their page. Now they are getting all our data, refreshed right up to a few minutes ago, served to their page for absolutely nothing at all.
If a Google iframe could be tied to a specific domain, that would stop that in its tracks.
Any ideas, bright sparks?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
}
|
Q: How do I remove the .NET Compact Framework from my Windows Mobile 5 device? I'm trying to get a number of third party applications to work on my Windows Mobile 5 smartphone.
I've installed the latest version (3.5) of the Microsoft.NET Compact Framework, but whenever I run the apps I get an error message which states: "This application [Application Name] requires a newer version of the Microsoft .NET Compact Framework than the version installed on this device."
Given I've supposedly successfully installed the latest version, this doesn't make sense, leading me to believe I need to remove the .NET Compact Framework and start again. (I've tried reinstalling it, but as far as I can tell there's no automated way of removing it on the device, or from my PC.)
Does anyone have any suggestions as to what I need to do? Thanks!
A: It's probably better to not uninstall, and if it's on the device in ROM you can't uninstall it anyway.
There are a couple options available to you.
*
*The different CF versions coexist fine, so you can install the older version and leave 3.5 on it.
*The CF can be set for compatibility mode. That means you can tell just a single app compiled against an old version to use the 3.5 runtime in compatibility mode, or you can set it device-wide so all older CF apps will run against the 3.5 EE (execution engine) in compatibility mode.
For online resources discussing configuration files and compatibility mode, see these links:
*
*MSDN Article on Configuration File Settings
*MSDN Article on Configuring Runtime Versions
*David Kline's blog on Compatibility Mode
*The CF 3.5 Power Toys (includes an app for setting configurations)
Note: I forgot to mention in the original response that using option #2 (running against CF 3.5) will very likely improve the performance of the app as well, since it will be running with the newest CLR.
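As a rough sketch of what option #2's configuration file can look like, you place an XML file named after the executable (e.g. MyApp.exe.config, a made-up name) next to the app. The version string below is illustrative only; check the exact CF 3.5 build installed on your device:
<configuration>
    <startup>
        <supportedRuntime version="v3.5.7283"/>
    </startup>
</configuration>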
A: Have you tried using Microsoft ActiveSync to uninstall it?
A: If you have installed it yourself you should be able to remove it by going to Settings -> System -> Remove Programs (could be a slightly different path on the smartphone OS, I'm used to Pocket PCs).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How do I Sort a Multidimensional Array in PHP I have CSV data loaded into a multidimensional array. In this way each "row" is a record and each "column" contains the same type of data. I am using the function below to load my CSV file.
function f_parse_csv($file, $longest, $delimiter)
{
$mdarray = array();
$file = fopen($file, "r");
while ($line = fgetcsv($file, $longest, $delimiter))
{
array_push($mdarray, $line);
}
fclose($file);
return $mdarray;
}
I need to be able to specify a column to sort so that it rearranges the rows. One of the columns contains date information in the format of Y-m-d H:i:s and I would like to be able to sort with the most recent date being the first row.
A: I know it's been 2 years since this question was asked and answered, but here's another function that sorts a two-dimensional array. It accepts a variable number of arguments, allowing you to pass in more than one key (i.e. column name) to sort by. PHP 5.3 required.
function sort_multi_array ($array, $key)
{
$keys = array();
for ($i=1;$i<func_num_args();$i++) {
$keys[$i-1] = func_get_arg($i);
}
// create a custom search function to pass to usort
$func = function ($a, $b) use ($keys) {
for ($i=0;$i<count($keys);$i++) {
if ($a[$keys[$i]] != $b[$keys[$i]]) {
return ($a[$keys[$i]] < $b[$keys[$i]]) ? -1 : 1;
}
}
return 0;
};
usort($array, $func);
return $array;
}
Try it here: http://www.exorithm.com/algorithm/view/sort_multi_array
A: You can sort an array using the usort function.
$array = array(
array('price'=>'1000.50','product'=>'product 1'),
array('price'=>'8800.50','product'=>'product 2'),
array('price'=>'200.0','product'=>'product 3')
);
function cmp($a, $b) {
    // usort expects an integer (negative, zero, positive), not a boolean
    if ($a['price'] == $b['price']) return 0;
    return ($a['price'] < $b['price']) ? -1 : 1;
}
usort($array, "cmp");
print_r($array);
Output :
Array
(
[0] => Array
(
[price] => 200.0
[product] => product 3
)
[1] => Array
(
[price] => 1000.50
[product] => product 1
)
[2] => Array
(
[price] => 8800.50
[product] => product 2
)
)
Example
A: Introducing: a very generalized solution for PHP 5.3+
I'd like to add my own solution here, since it offers features that other answers do not.
Specifically, advantages of this solution include:
*
*It's reusable: you specify the sort column as a variable instead of hardcoding it.
*It's flexible: you can specify multiple sort columns (as many as you want) -- additional columns are used as tiebreakers between items that initially compare equal.
*It's reversible: you can specify that the sort should be reversed -- individually for each column.
*It's extensible: if the data set contains columns that cannot be compared in a "dumb" manner (e.g. date strings) you can also specify how to convert these items to a value that can be directly compared (e.g. a DateTime instance).
*It's associative if you want: this code takes care of sorting items, but you select the actual sort function (usort or uasort).
*Finally, it does not use array_multisort: while array_multisort is convenient, it depends on creating a projection of all your input data before sorting. This consumes time and memory and may be simply prohibitive if your data set is large.
The code
function make_comparer() {
// Normalize criteria up front so that the comparer finds everything tidy
$criteria = func_get_args();
foreach ($criteria as $index => $criterion) {
$criteria[$index] = is_array($criterion)
? array_pad($criterion, 3, null)
: array($criterion, SORT_ASC, null);
}
return function($first, $second) use (&$criteria) {
foreach ($criteria as $criterion) {
// How will we compare this round?
list($column, $sortOrder, $projection) = $criterion;
$sortOrder = $sortOrder === SORT_DESC ? -1 : 1;
// If a projection was defined project the values now
if ($projection) {
$lhs = call_user_func($projection, $first[$column]);
$rhs = call_user_func($projection, $second[$column]);
}
else {
$lhs = $first[$column];
$rhs = $second[$column];
}
// Do the actual comparison; do not return if equal
if ($lhs < $rhs) {
return -1 * $sortOrder;
}
else if ($lhs > $rhs) {
return 1 * $sortOrder;
}
}
return 0; // tiebreakers exhausted, so $first == $second
};
}
How to use
Throughout this section I will provide links that sort this sample data set:
$data = array(
array('zz', 'name' => 'Jack', 'number' => 22, 'birthday' => '12/03/1980'),
array('xx', 'name' => 'Adam', 'number' => 16, 'birthday' => '01/12/1979'),
array('aa', 'name' => 'Paul', 'number' => 16, 'birthday' => '03/11/1987'),
array('cc', 'name' => 'Helen', 'number' => 44, 'birthday' => '24/06/1967'),
);
The basics
The function make_comparer accepts a variable number of arguments that define the desired sort and returns a function that you are supposed to use as the argument to usort or uasort.
The simplest use case is to pass in the key that you'd like to use to compare data items. For example, to sort $data by the name item you would do
usort($data, make_comparer('name'));
See it in action.
The key can also be a number if the items are numerically indexed arrays. For the example in the question, this would be
usort($data, make_comparer(0)); // 0 = first numerically indexed column
See it in action.
Multiple sort columns
You can specify multiple sort columns by passing additional parameters to make_comparer. For example, to sort by "number" and then by the zero-indexed column:
usort($data, make_comparer('number', 0));
See it in action.
Advanced features
More advanced features are available if you specify a sort column as an array instead of a simple string. This array should be numerically indexed, and must contain these items:
0 => the column name to sort on (mandatory)
1 => either SORT_ASC or SORT_DESC (optional)
2 => a projection function (optional)
Let's see how we can use these features.
Reverse sort
To sort by name descending:
usort($data, make_comparer(['name', SORT_DESC]));
See it in action.
To sort by number descending and then by name descending:
usort($data, make_comparer(['number', SORT_DESC], ['name', SORT_DESC]));
See it in action.
Custom projections
In some scenarios you may need to sort by a column whose values do not lend themselves well to sorting. The "birthday" column in the sample data set fits this description: it does not make sense to compare birthdays as strings (because e.g. "01/01/1980" comes before "10/10/1970"). In this case we want to specify how to project the actual data to a form that can be compared directly with the desired semantics.
Projections can be specified as any type of callable: as strings, arrays, or anonymous functions. A projection is assumed to accept one argument and return its projected form.
It should be noted that while projections are similar to the custom comparison functions used with usort and family, they are simpler (you only need to convert one value to another) and take advantage of all the functionality already baked into make_comparer.
Let's sort the example data set without a projection and see what happens:
usort($data, make_comparer('birthday'));
See it in action.
That was not the desired outcome. But we can use date_create as a projection:
usort($data, make_comparer(['birthday', SORT_ASC, 'date_create']));
See it in action.
This is the correct order that we wanted.
There are many more things that projections can achieve. For example, a quick way to get a case-insensitive sort is to use strtolower as a projection.
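For example, reusing the same data set, a case-insensitive sort by name would be:
usort($data, make_comparer(['name', SORT_ASC, 'strtolower']));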
That said, I should also mention that it's better to not use projections if your data set is large: in that case it would be much faster to project all your data manually up front and then sort without using a projection, although doing so will trade increased memory usage for faster sort speed.
Finally, here is an example that uses all the features: it first sorts by number descending, then by birthday ascending:
usort($data, make_comparer(
['number', SORT_DESC],
['birthday', SORT_ASC, 'date_create']
));
See it in action.
A: With usort. Here's a generic solution that you can use for different columns:
class TableSorter {
protected $column;
function __construct($column) {
$this->column = $column;
}
function sort($table) {
usort($table, array($this, 'compare'));
return $table;
}
function compare($a, $b) {
if ($a[$this->column] == $b[$this->column]) {
return 0;
}
return ($a[$this->column] < $b[$this->column]) ? -1 : 1;
}
}
To sort by first column:
$sorter = new TableSorter(0); // sort by first column
$mdarray = $sorter->sort($mdarray);
A: You can use array_multisort()
Try something like this:
foreach ($mdarray as $key => $row) {
// replace 0 with the field's index/key
$dates[$key] = $row[0];
}
array_multisort($dates, SORT_DESC, $mdarray);
For PHP >= 5.5.0 just extract the column to sort by. No need for the loop:
array_multisort(array_column($mdarray, 0), SORT_DESC, $mdarray);
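Tying this back to the question: assuming, for illustration, that the Y-m-d H:i:s timestamp sits in column 2 of each CSV row (the filename and arguments below are hypothetical), you can project the strings to timestamps first so the most recent record ends up in the first row. Strictly speaking, Y-m-d H:i:s already sorts correctly as a plain string, but timestamps are more robust:
$mdarray = f_parse_csv('data.csv', 1000, ',');
$dates = array();
foreach ($mdarray as $key => $row) {
    $dates[$key] = strtotime($row[2]); // 'Y-m-d H:i:s' parses cleanly
}
array_multisort($dates, SORT_DESC, $mdarray); // newest record is now row 0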
A: Here is a php4/php5 class that will sort one or more fields:
// a sorter class
// php4 and php5 compatible
class Sorter {
var $sort_fields;
var $backwards = false;
var $numeric = false;
function sort() {
$args = func_get_args();
$array = $args[0];
if (!$array) return array();
$this->sort_fields = array_slice($args, 1);
if (!$this->sort_fields) return $array;
if ($this->numeric) {
usort($array, array($this, 'numericCompare'));
} else {
usort($array, array($this, 'stringCompare'));
}
return $array;
}
function numericCompare($a, $b) {
foreach($this->sort_fields as $sort_field) {
if ($a[$sort_field] == $b[$sort_field]) {
continue;
}
return ($a[$sort_field] < $b[$sort_field]) ? ($this->backwards ? 1 : -1) : ($this->backwards ? -1 : 1);
}
return 0;
}
function stringCompare($a, $b) {
foreach($this->sort_fields as $sort_field) {
$cmp_result = strcasecmp($a[$sort_field], $b[$sort_field]);
if ($cmp_result == 0) continue;
return ($this->backwards ? -$cmp_result : $cmp_result);
}
return 0;
}
}
/////////////////////
// usage examples
// some starting data
$start_data = array(
array('first_name' => 'John', 'last_name' => 'Smith', 'age' => 10),
array('first_name' => 'Joe', 'last_name' => 'Smith', 'age' => 11),
array('first_name' => 'Jake', 'last_name' => 'Xample', 'age' => 9),
);
// sort by last_name, then first_name
$sorter = new Sorter();
print_r($sorter->sort($start_data, 'last_name', 'first_name'));
// sort by first_name, then last_name
$sorter = new Sorter();
print_r($sorter->sort($start_data, 'first_name', 'last_name'));
// sort by last_name, then first_name (backwards)
$sorter = new Sorter();
$sorter->backwards = true;
print_r($sorter->sort($start_data, 'last_name', 'first_name'));
// sort numerically by age
$sorter = new Sorter();
$sorter->numeric = true;
print_r($sorter->sort($start_data, 'age'));
A: Multiple row sorting using a closure
Here's another approach using uasort() and an anonymous callback function (closure). I've used that function regularly. PHP 5.3 required – no more dependencies!
/**
* Sorting array of associative arrays - multiple row sorting using a closure.
* See also: http://the-art-of-web.com/php/sortarray/
*
* @param array $data input-array
* @param string|array $field array key(s) to sort by
* @license Public Domain
* @return array
*/
function sortArray( $data, $field ) {
$field = (array) $field;
uasort( $data, function($a, $b) use($field) {
$retval = 0;
foreach( $field as $fieldname ) {
if( $retval == 0 ) $retval = strnatcmp( $a[$fieldname], $b[$fieldname] );
}
return $retval;
} );
return $data;
}
/* example */
$data = array(
array( "firstname" => "Mary", "lastname" => "Johnson", "age" => 25 ),
array( "firstname" => "Amanda", "lastname" => "Miller", "age" => 18 ),
array( "firstname" => "James", "lastname" => "Brown", "age" => 31 ),
array( "firstname" => "Patricia", "lastname" => "Williams", "age" => 7 ),
array( "firstname" => "Michael", "lastname" => "Davis", "age" => 43 ),
array( "firstname" => "Sarah", "lastname" => "Miller", "age" => 24 ),
array( "firstname" => "Patrick", "lastname" => "Miller", "age" => 27 )
);
$data = sortArray( $data, 'age' );
$data = sortArray( $data, array( 'lastname', 'firstname' ) );
A: The "Usort" function is your answer.
http://php.net/usort
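A minimal sketch applied to the question, assuming the date string is in column 2 of each row (PHP 5.3+ for the closure):
usort($mdarray, function ($a, $b) {
    // comparing $b against $a sorts descending: most recent date first
    return strtotime($b[2]) - strtotime($a[2]);
});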
A: Before I could get the TableSorter class to run, I came up with a function based on what Shinhan had provided.
function sort2d_bycolumn($array, $column, $method, $has_header)
{
if ($has_header) $header = array_shift($array);
foreach ($array as $key => $row) {
$narray[$key] = $row[$column];
}
array_multisort($narray, $method, $array);
if ($has_header) array_unshift($array, $header);
return $array;
}
*
*$array is the MD Array you want to sort.
*$column is the column you wish to sort by.
*$method is how you want the sort performed, such as SORT_DESC
*$has_header is set to true if the first row contains header values that you don't want sorted.
A: I tried several popular array_multisort() and usort() answers and none of them worked for me. The data just gets jumbled and the code is unreadable. Here's a quick and dirty solution. WARNING: Only use this if you're sure a rogue delimiter won't come back to haunt you later!
Let's say each row in your multi array looks like: name, stuff1, stuff2:
// Sort by name, pull the other stuff along for the ride
foreach ($names_stuff as $name_stuff) {
// To sort by stuff1, that would come first in the concatenation
$sorted_names[] = $name_stuff[0] .','. $name_stuff[1] .','. $name_stuff[2];
}
sort($sorted_names, SORT_STRING);
Need your stuff back in alphabetical order?
foreach ($sorted_names as $sorted_name) {
$name_stuff = explode(',',$sorted_name);
// use your $name_stuff[0]
// use your $name_stuff[1]
// ...
}
Yeah, it's dirty. But super easy, won't make your head explode.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96759",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "203"
}
|
Q: Why is Visual Studio constantly crashing? Visual Studio randomly crashes when adding/removing references and projects.
Any thoughts why? Will installing Sp1 help?
EDIT: I do not work with any addons except SourceSafe. I do most of my development in connected mode.
Developing using:
Visual Studio 2008
WinXp Terminal Service -> Win2k3 Sp2 (64bit)
VSS 8.0, 32bit
A: Try deleting your .user and .suo files - these are the user options files that VS creates. You get a .user file for each project and a .suo file for your solution. When they get corrupted, odd things happen. Deleting them will make you lose little things like which project is selected as the startup project when you start debugging, but it usually clears up odd behavior like this.
You may also want to clear out any temporary file locations, like the Temporary ASP.NET Files folders (if you're working in ASP.NET) just in case something odd is being cached somewhere.
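For example, from a command prompt in the solution folder (a sketch; close Visual Studio first, and note that .suo files are hidden, hence the /a:h switch):
del /a:h *.suo
del /s *.user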
A: Beware if you suspect a corrupt .suo file and are integrated with Source Safe.
When you restart VS after a crash, you may get the following message:
The Open from Source Control operation is still in progress but you can start working now. The rest of the projects will be retrieved asynchronously.
This basically means VS will load all projects in your opened solution from Source Safe and Overwrite any files that are checked out and contain unchecked-in changes!
After a VS crash, start Source Safe standalone and CHECK IN what you want to preserve.
Then work at fixing the corruption before starting VS again.
A: If a .suo or .ncb file has become corrupted, that can also cause Visual Studio to crash.
To solve the crashing, try the following steps:
*
*Go to the folder that contains the Visual Studio executable (devenv.exe).
*Open a command prompt at that folder and run devenv.exe /ResetSettings.
*If step two doesn't solve your problem, run devenv.exe /ResetUserData.
A: My Visual Studio 2005 started crashing and locking up recently. The way I finally fixed it was to run this from the command line:
devenv /resetuserdata
That cleared out all my customisations, but it did fix the problem. If you've customised VS a lot, you could try exporting your settings first and then see if you can safely import them afterwards. Alternatively, take snapshots of your IDE so you can remember which buttons etc. you had where.
A: Most commonly, if Visual Studio is crashing repeatedly, your .suo or .ncb file has become corrupted. Close your project, delete those files, and reopen. This may resolve your problem.
.suo is a hidden file.
A: Hopefully this helps someone. It felt like I had tried everything. I even repaired the installation, which made no difference, and removed VS entirely, and the issue was still there. The log option told me nothing significant, so I eventually deleted all the bin and obj folders in my solution as well as all the .suo and .user files, moved the solution to a completely different folder off the root of my hard drive, and rebuilt it. It magically came right!
A: Search for and delete any .ncb files associated with your solution. In past versions these (IntelliSense) files used to get corrupted, and deleting them would fix the problem (Visual Studio will regenerate them automatically).
A: I find that mine, even on SP1, crashes occasionally when adding things to a project, but mainly when switching to the ASP.NET Design View and when it auto-generates controls in the tools. I just disabled it from creating them and I don't get many crashes any more.
I know this doesn't have much to do with your issue, but the point I am making is that SP1 may not be the answer to your problem.
A: I tried all of the suggested options, and a few more found at this link. No luck.
Then I tried to add a reference from a Web Site (as opposed to Web App). The process is different: you have to right click the project and go to Property Pages, and there's an Add... button on the References tab.
It still crashed, but there was a message in the Event Viewer this time that pointed me to the full path of a DLL in a 3rd party component I have installed. The DLL couldn't be read from disk (corrupt). So, a repair of that lib and a reboot later, and I'm back to good.
UPDATE:
I came to find out the real reason: several files on my SSD had become corrupt. CHKDSK /R got me back going for a while. Eventually I had to replace the drive. Just a reminder that it may not be VS's fault.
A: changing the default solution location solved my problem.
A: I had to remove an old reference to a project that no longer existed from the solution, and that worked fine for me.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
}
|
Q: What are appropriate library naming conventions? There are two popular naming conventions:
*
*vc90/win64/debug/foo.dll
*foo-vc90-win64-debug.dll
Please discuss the problems/benefits associated with either approach.
I am also wondering if it is possible to expose meta-data (i.e. compiler, platform, build-type) in approach #1 in an easy to use, cross-platform manner.
A: #2 is good for distribution, where several variations will be packaged together in the same folder/zip file. However, you probably don't want all that information in the file name itself, as it makes it difficult to vary those via parameters to your makefile/csproj/nant script etc. It would be easier to have several files called "foo" in different folders (where you can decide the folder structure).
A: For .NET assemblies, you can store this information in the assembly itself:
http://www.codinghorror.com/blog/archives/000142.html
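For instance, the standard AssemblyInfo.cs attributes can carry compiler/platform/build-type metadata (the values below are placeholders):
using System.Reflection;

[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
[assembly: AssemblyConfiguration("Debug")]                    // build type
[assembly: AssemblyDescription("built with vc90 for win64")]  // free-form notes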
I'm not familiar enough with other assembly types to know what they provide.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96811",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Vim: How do I search for a word which is not followed by another word? Pretty basic question, I'm trying to write a regex in Vim to match any phrase starting with "abc " directly followed by anything other than "defg".
I've used "[^defg]" to match any single character other than d, e, f or g.
My first instinct was to try /abc [^\(defg\)] or /abc [^\<defg\>] but neither one of those works.
A: Here's the search string.
/abc \(defg\)\@!
The concept you're looking for is called a negative look-ahead assertion. Try this in vim for more info:
:help \@!
A: Preceded or followed by?
If it's anything starting with 'abc ' that's not (immediately) followed by 'defg', you want bmdhacks' solution.
If it's anything starting with 'abc ' that's not (immediately) preceded by 'defg', you want a negative lookbehind:
/\%(defg\)\@<!abc /
This will match any occurance of 'abc ' as long as it's not part of 'defgabc '. See :help \@<! for more details.
If you want to match 'abc ' as long as it's not part of 'defg.*abc ', then just add a .*:
/\%(defg.*\)\@<!abc /
Matching 'abc ' only on lines where 'defg' doesn't occur is similar:
/\%(defg.*\)\@<!abc \%(.*defg\)\@!/
Although, if you're just doing this for a substitution, you can make this easier by combining :v// and :s//
:%v/defg/s/abc /<whatever>/g
This will substitute '<whatever>' for 'abc ' on all lines that don't contain 'defg'. See :help :v for more.
A: Here we go, this is a hairy one:
/\%(\%(.\{-}\)\@<=XXXXXX\zs\)*
(replace XXXXXX with the search word). This will search for everything that does not contain XXXXXX. I imagine if you did:
/abc \%(\%(.\{-}\)\@<=defg\zs\)*
This may work like you want it to. Hope this helps a little!
A: I don't have enough reputation to comment on Andy's answer, but it is wrong, so I'll explain here: the expression "/abc\ [^d][^e][^f][^g]" fails to match strings like 'abc d111' and 'abc def1', even though they are not followed by "defg" and therefore should match.
The best way is the expression "/abc \(defg\)\@!".
A: /abc\ [^d][^e][^f][^g]
It's pretty cumbersome for larger words, but works like a charm.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/96826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "53"
}
|