Q: How do I access a database in C# Basically, I would like a brief explanation of how I can access a SQL database in C# code. I gather that a connection and a command is required, but what's going on? I guess what I'm asking is for someone to de-mystify the process a bit. Thanks.
For clarity, in my case I'm doing web apps, e-commerce stuff. It's all ASP.NET, C#, and SQL databases.
I'm going to go ahead and close this thread. It's a little too general, and I am going to post some more pointed and tutorial-esque questions and answers on the subject.
A: The old ADO.NET (SqlConnection, etc.) is a dinosaur with the advent of LINQ. LINQ requires .NET 3.5, but is backwards compatible with all .NET 2.0+ and Visual Studio 2005, etc.
To start with, LINQ is ridiculously easy.
*
*Add a new item to your project, a LINQ-to-SQL file; this will be placed in your App_Code folder (for this example, we'll call it example.dbml)
*From your Server Explorer, drag a table from your database into the dbml (the table will be named items in this example)
*Save the dbml file
You now have built a few classes. You built the exampleDataContext class, which is your LINQ initializer, and you built the item class, which is a class for objects in the items table. This is all done automatically and you don't need to worry about it. Now say I want to get the record with an itemID of 3; this is all I need to do:
exampleDataContext db = new exampleDataContext(); // initializes your linq-to-sql
item item_I_want = (from i in db.items where i.itemID == 3 select i).First(); // using the 'item' class your dbml made
And that's all it takes. Now you have a new item named item_I_want... now, if you want some information from the item you just call it like this:
int intID = item_I_want.itemID;
string itemName = item_I_want.name;
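If you also want to make changes, here is a minimal sketch of an update and an insert through the same generated exampleDataContext and item classes (the name column is an assumption about the items table):
exampleDataContext db = new exampleDataContext();

// update: load the object, change it, and submit
item existing = (from i in db.items where i.itemID == 3 select i).First();
existing.name = "a new name";
db.SubmitChanges(); // LINQ to SQL generates the UPDATE for you

// insert: create a new object and queue it for insertion
item brandNew = new item();
brandNew.name = "brand new item";
db.items.InsertOnSubmit(brandNew);
db.SubmitChanges(); // generates the INSERT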
Linq is very simple to use! And this is just the tip of the iceberg.
No need to learn antiquated ADO when you have a more powerful, easier tool at your disposal :)
A: Reads like a beginner question. That calls for beginner video demos.
http://www.asp.net/learn/data-videos/
They are ASP.NET focused, but pay attention to the database aspects.
A: MSDN has a pretty good writeup here:
http://msdn.microsoft.com/en-us/library/s7ee2dwt(VS.71).aspx
You should take a look at the data-reader for simple select-statements. Sample from the MSDN page:
private static void ReadOrderData(string connectionString)
{
    string queryString =
        "SELECT OrderID, CustomerID FROM dbo.Orders;";
    using (SqlConnection connection = new SqlConnection(
        connectionString))
    {
        SqlCommand command = new SqlCommand(
            queryString, connection);
        connection.Open();
        SqlDataReader reader = command.ExecuteReader();
        try
        {
            while (reader.Read())
            {
                Console.WriteLine(String.Format("{0}, {1}",
                    reader[0], reader[1]));
            }
        }
        finally
        {
            // Always call Close when done reading.
            reader.Close();
        }
    }
}
It basically first creates a SqlConnection object, and then creates the SqlCommand object that holds the actual SELECT you are going to run, along with a reference to the connection we just created. Then it opens the connection and, on the next line, executes your statement and returns a SqlDataReader object.
In the while-loop it then outputs the values from the current row of the reader. Every time reader.Read() is called the reader advances to the next row.
Then the reader is closed, and because we are exiting the "using" block, the connection is also closed.
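If the query needs a value supplied at runtime, the same pattern works with a parameterized command instead of string concatenation; here is a minimal sketch along the lines of the MSDN sample above (the WHERE clause and parameter name are illustrative):
private static void ReadOrdersForCustomer(string connectionString, string customerId)
{
    string queryString =
        "SELECT OrderID, CustomerID FROM dbo.Orders WHERE CustomerID = @CustomerID;";
    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command = new SqlCommand(queryString, connection))
    {
        // the value is sent as a parameter, never pasted into the SQL text
        command.Parameters.AddWithValue("@CustomerID", customerId);
        connection.Open();
        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                Console.WriteLine("{0}, {1}", reader[0], reader[1]);
            }
        }
    }
}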
EDIT: If you are looking for info on selecting/updating data in ASP.NET, 4GuysFromRolla has a very nice Multipart Series on ASP.NET 2.0's Data Source Controls
EDIT2: As others have pointed out, if you are using a newer version of .NET i would recommend looking into LINQ. An introduction, samples and writeup can be found on this MSDN page.
A: topics to look at:
*
*ADO.NET basics
*LINQ to SQL
*Managed database providers
A: If it is a web application here are some good resources for getting started with data access in .NET:
http://weblogs.asp.net/scottgu/archive/2007/04/14/working-with-data-in-asp-net-2-0.aspx
A: To connect/perform operations on an SQL server db:
using System.Data;
using System.Data.SqlClient;
string connString = "Data Source=...";
SqlConnection conn = new SqlConnection(connString); // you can also build this with SqlConnectionStringBuilder
conn.Open();
string sql = "..."; // your SQL query
SqlCommand command = new SqlCommand(sql, conn);
// if you're interested in reading from a database use one of the following methods
// method 1
SqlDataReader reader = command.ExecuteReader();
while (reader.Read()) {
    object someValue = reader.GetValue(0); // GetValue takes one parameter -- the column index
}
// make sure you close the reader when you're done
reader.Close();
// method 2
DataTable table = new DataTable();
SqlDataAdapter adapter = new SqlDataAdapter(command);
adapter.Fill(table);
// then work with the table as you would normally
// when you're done
conn.Close();
Most other database servers like MySQL and PostgreSQL have similar interfaces for connection and manipulation.
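If you want the same code to run against more than one of those providers, the factory classes in System.Data.Common abstract the concrete ADO.NET types away; a rough sketch (the MySQL provider invariant name and connection string are assumptions about what is installed):
using System.Data.Common;

DbProviderFactory factory = DbProviderFactories.GetFactory("MySql.Data.MySqlClient");
using (DbConnection conn = factory.CreateConnection())
{
    conn.ConnectionString = "Server=...;Database=...;Uid=...;Pwd=...";
    conn.Open();
    using (DbCommand command = conn.CreateCommand())
    {
        command.CommandText = "SELECT id, name FROM items";
        using (DbDataReader reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                // read columns exactly as with SqlDataReader above
            }
        }
    }
}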
A: If what you are looking for is an easy to follow tutorial, then you should head over to the www.ASP.net website.
Here is a link to the starter video page: http://www.asp.net/learn/videos/video-49.aspx
Here is the video if you want to download it: video download
and here is a link to the C# project from the video: download project
Good luck.
A: I would also recommend using DataSets. They are really easy to use, just a few mouse clicks, without writing any code, and good enough for small apps.
A: If you have Visual Studio 2008 I would recommend skipping ADO.NET and leaping right in to LINQ to SQL
A: @J D OConal is basically right, but you need to make sure that you dispose of your connections:
string connString = "Data Source=...";
string sql = "..."; // your SQL query
//this using block
using( SqlConnection conn = new SqlConnection(connString) )
using( SqlCommand command = new SqlCommand(sql, conn) )
{
    conn.Open();
    // if you're interested in reading from a database use one of the following methods
    // method 1
    SqlDataReader reader = command.ExecuteReader();
    while (reader.Read()) {
        object someValue = reader.GetValue(0); // GetValue takes one parameter -- the column index
    }
    // make sure you close the reader when you're done
    reader.Close();
    // method 2
    DataTable table = new DataTable();
    SqlDataAdapter adapter = new SqlDataAdapter(command);
    adapter.Fill(table);
    // then work with the table as you would normally
    // when you're done (optional here -- the using block disposes the connection anyway)
    conn.Close();
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Should I mysql_real_escape_string all the cookies I get from the user to avoid mysql injection in php? When a user goes to my site, my script checks for 2 cookies which store the user id + part of the password, to automatically log them in.
It's possible to edit the contents of cookies via a cookie editor, so I guess it's possible to add some malicious content to a written cookie?
Should I add mysql_real_escape_string (or something else) to all my cookie calls or is there some kind of built in procedure that will not allow this to happen?
A: What you really need to do is not send these cookie values that are hackable in the first place. Instead, why not hash the username and password and a (secret) salt and set that as the cookie value? i.e.:
define('COOKIE_SALT', 'secretblahblahlkdsfklj');
$cookie_value = sha1($username.$password.COOKIE_SALT);
Then you know the cookie value is always going to be a 40-character hexadecimal string, and can compare the value the user sends back with whatever's in the database to decide whether they're valid or not:
if ($user_cookie_value == sha1($username_from_db.$password_from_db.COOKIE_SALT)) {
    # valid
} else {
    # not valid
}
mysql_real_escape_string makes an additional hit to the database, BTW (a lot of people don't realize it requires a DB connection and queries MySQL).
The best way to do what you want if you can't change your app and insist on using hackable cookie values is to use prepared statements with bound parameters.
A: The point of mysql_real_escape_string isn't to protect against injection attacks, it's to ensure your data is accurately stored in the database. Thus, it should be called on ANY string going into the database, regardless of its source.
You should, however, also be using parameterized queries (via mysqli or PDO) to protect yourself from SQL injection. Otherwise you risk ending up like little Bobby Tables' school.
A: I only use mysql_real_escape_string before inserting variables into an SQL statement. You'll just get yourself confused if some of your variables are already escaped, and then you escape them again. It's a classic bug you see in newbies' blog webapps:
When someone writes an apostrophe it keeps on adding slashes ruining the blog\\\\\\\'s pages.
The value of a variable isn't dangerous by itself: it's only when you put it into a string or something similar that you start straying into dangerous waters.
Of course though, never trust anything that comes from the client-side.
A: Prepared statements and parameter binding is always a good way to go.
PEAR::MDB2 supports prepared statements, for example:
$db = MDB2::factory( $dsn );
$types = array( 'integer', 'text' );
$sth = $db->prepare( "INSERT INTO table (ID,Text) VALUES (?,?)", $types );
if( PEAR::isError( $sth ) ) die( $sth->getMessage() );
$data = array( 5, 'some text' );
$result = $sth->execute( $data );
$sth->free();
if( PEAR::isError( $result ) ) die( $result->getMessage() );
This will only allow proper data and a pre-set number of variables to get into the database.
You of course should validate data before getting this far, but preparing statements is the final validation that should be done.
A: You should mysql_real_escape_string anything that could be potentially harmful. Never trust any type of input that can be altered by the user.
A: I agree with you. It is possible to modify the cookies and send in malicious data.
I believe that it is good practice to filter the values you get from the cookies before you use them. As a rule of thumb I do filter any other input that may be tampered with.
A: mysql_real_escape_string is so passé... These days you should really use parameter binding instead.
I'll elaborate by mentioning that I was referring to prepared statements, and provide a link to an article that demonstrates that sometimes mysql_real_escape_string isn't sufficient: http://www.webappsec.org/projects/articles/091007.txt
A: Yegor, you can store the hash when a user account is created/updated, then whenever a login is initiated, you hash the data posted to the server and compare against what was stored in the database for that one username.
(Off the top of my head in loose php - treat as pseudo code):
$usernameFromPostDbsafe = LimitToAlphaNumUnderscore($usernameFromPost);
$result = Query("SELECT hash FROM userTable WHERE username='$usernameFromPostDbsafe' LIMIT 1;");
$hashFromDb = $result['hash'];
if( (sha1($usernameFromPost.$passwordFromPost.SALT)) == $hashFromDb ){
//Auth Success
}else{
//Auth Failure
}
After a successful authentication, you could store the hash in $_SESSION or in a database table of cached authenticated username/hashes. Then send the hash back to the browser (in a cookie for instance) so subsequent page loads send the hash back to the server to be compared against the hash held in your chosen session storage.
A: I would recommend using htmlentities($input, ENT_QUOTES) instead of mysql_real_escape_string as this will also prevent any accidental outputting of actual HTML code. Of course, you could use mysql_real_escape_string and htmlentities, but why would you?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: What is your session management strategy for NHibernate in desktop applications? I find it much more difficult to manage your session in a desktop application, because you cannot take advantage of a clear boundary like HttpContext.
So how do you manage your session lifetime to take advantage of lazy loading but without having one session open for the entire application?
A: I think it boils down to the design of your objects. Because lazy-loading can be enforced at the per-object level, you can take advantage of that fact when you think about session management.
For example, I have a bunch of objects which are data-rich and lazy loaded, and I have a grid/summary view, and a details view for them. In the grid-summary view, I do not use the lazy-loaded version of the object. I use a surrogate object to present that data, and that surrogate object is not lazy loaded.
On the other hand, once a user selects that record for viewing/editing, and you enter a multi-paged details view of the object, that's when we apply lazy-loading to the specific object. Data is now lazy loaded depending on which details are viewed only on demand. That way, the scope of my session being open for lazy loading only lasts as long as the details view is being used.
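A minimal sketch of that idea in code, assuming an NHibernate ISessionFactory and a hypothetical Order entity and details form (none of these names come from the thread):
using System.Windows.Forms;
using NHibernate;

public class Order { public virtual int Id { get; set; } } // stand-in entity

public class OrderDetailsForm : Form
{
    private readonly ISession _session;

    public OrderDetailsForm(ISessionFactory factory, int orderId)
    {
        _session = factory.OpenSession();                // the session opens with the details view...
        Order order = _session.Get<Order>(orderId);       // ...so lazy-loaded members can resolve on demand
        BindOrder(order);
        FormClosed += delegate { _session.Dispose(); };   // ...and closes when the view goes away
    }

    private void BindOrder(Order order)
    {
        // populate the form's controls from the entity
    }
}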
A: As you said before, you cannot use the boundary of the HttpRequest, but you can work out what the equivalent of an "HttpRequest" is in your desktop application.
Let me explain. Usually your HttpRequest will be a controller handling an action, and you will limit your session to that specific action. Now, in your desktop application the "controllers" (events) can be smaller, but as @Jon said, a window can easily represent a boundary: the things you work with there can live in your session.
A: Ayende has recently written a great article on the subject in MSDN.
A: Maybe we can think of a Command pattern setup. Each significant event will build and trigger a Command, and Execute it. The base AbstractCommand.Execute() implementation is in charge of initializing the session, wrapping the transaction, calling the concrete SomeCommand._Execute() implementation, and closing everything down.
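A minimal sketch of that wrapper, assuming NHibernate's ISessionFactory/ISession/ITransaction types; AbstractCommand and _Execute are the names used above, everything else is illustrative:
using NHibernate;

public abstract class AbstractCommand
{
    // assumed to be built once at application start-up
    public static ISessionFactory SessionFactory;

    public void Execute()
    {
        using (ISession session = SessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            _Execute(session); // the concrete command does its work inside the open session
            tx.Commit();
        } // transaction and session are closed here
    }

    protected abstract void _Execute(ISession session);
}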
Anyway, this is far from being persistence agnostic, as it should be when I have loaded my object and I (want to) deal just with plain instances (I'm especially referring to lazy-load here).
Is it otherwise possible to implement some sort of auto-open/auto-close behaviour? This should be accomplished by making persistence layer sensitive to needs for queries by higher layers, even in the implicit cases such as lazy-load triggers. As for closing the connection, the persistence layer might close after a given timeout (10 seconds?) of DB inactivity.
I know, this is not sharp. But it would really make higher layers persistence agnostic.
Thanks,
Marcello
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: Inheriting Event Handlers in C# I've kind of backed myself into a corner here.
I have a series of UserControls that inherit from a parent, which contains a couple of methods and events to simplify things so I don't have to write lines and lines of near-identical code. As you do. The parent contains no other controls.
What I want to do is just have one event handler, in the parent UserControl, which goes and does stuff that only the parent control can do (that is, conditionally calling an event, as the event's defined in the parent). I'd then hook up this event handler to all my input boxes in my child controls, and the child controls would sort out the task of parsing the input and telling the parent control whether to throw that event. Nice and clean, no repetitive, copy-paste code (which for me always results in a bug).
Here's my question. Visual Studio thinks I'm being too clever by half, and warns me that "the method 'CheckReadiness' [the event handler in the parent] cannot be the method for an event because a class this class derives from already defines the method." Yes, Visual Studio, that's the point. I want to have an event handler that only handles events thrown by child classes, and its only job is to enable me to hook up the children without having to write a single line of code. I don't need those extra handlers - all the functionality I need is naturally called as the children process the user input.
I'm not sure why Visual Studio has started complaining about this now (as it let me do it before), and I'm not sure how to make it go away. Preferably, I'd like to do it without having to define a method that just calls CheckReadiness. What's causing this warning, what's causing it to come up now when it didn't an hour ago, and how can I make it go away without resorting to making little handlers in all the child classes?
A: Declare the parent method virtual, override it in the child classes and call
base.CheckReadiness(sender, e);
(or a derivation thereof) from within the child class. This allows for future design evolution, say if you want to do some specific error checking before calling the parent event handler. You don't need to write an event handler like this for every control; you can just write one, hook all the controls up to it, and have it call the parent's event handler in turn.
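A bare-bones sketch of that virtual-override arrangement, reusing the CheckReadiness name from the question (the TextBox and the controls being wired are hypothetical):
using System;
using System.Windows.Forms;

public class BaseInputControl : UserControl
{
    // common handler: raises the shared event / runs the shared checks
    protected virtual void CheckReadiness(object sender, EventArgs e)
    {
        // common logic lives here
    }
}

public class ChildInputControl : BaseInputControl
{
    private readonly TextBox inputBox = new TextBox();

    public ChildInputControl()
    {
        Controls.Add(inputBox);
        inputBox.TextChanged += CheckReadiness; // wire the child's input to the shared handler
    }

    protected override void CheckReadiness(object sender, EventArgs e)
    {
        // child-specific parsing could happen here first
        base.CheckReadiness(sender, e);
    }
}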
One thing that I have noted is that if all this code is being placed within a dll, then you might experience a performance hit trying to call an event handler from within a dll.
A: I've just come across this one as well, I agree that it feels like you're doing everything correctly. Declaring the method virtual is a work-around at best, not a solution.
What is being done is valid - a control which only exists in the derived class, and the derived class is attaching an event handler to one of that control's events. The fact that the method which is handling the event is defined in the base class is neither here nor there, it is available at the point of binding to the event. The event isn't being attached to twice or anything silly like that, it's simply a matter of where the method which handles the event is defined.
Most definitely it is not a virtual method - I don't want the method to be overridable by a derived class. Very frustrating, and in my opinion, a bug in dev-studio.
A: I too have experienced this issue because in earlier versions of VS, you could "inherit" the event handlers. So the solution I found without having to override methods is simply to assign the event handler somewhere in the initialization phase of the form. In my case, done in the constructor (I'm sure OnLoad() would work as well):
public MyForm()
{
InitializeComponent();
btnOK.Click += Ok_Click;
}
...where the Ok_Click handler resides in the base form. Food for thought.
A: I've just run into the exact problem Merus first raised and, like others who posted responses, I'm not at all clear why VS (I'm now using Visual C# 2010 Express) objects to having the event handler defined in the base class.
The reason I'm posting a response is that while getting around the problem by making the base class code a protected method that the derived classes simply invoke in their (essentially empty) event handlers, I did a refactor rename of the base class method and noticed that the VS designer stopped complaining. That is, it renamed the event handler registration (so it no longer followed the VS designer's convention of naming event handlers ControlName_EventName), and that seemed to satisfy it.
When I then tried to register the (now renamed) base event handler against derived class controls by entering the name in the appropriate VS event, the designer created a new event handler in the derived class, which I then deleted, leaving the derived class control registered to the base class (event handler) method. Net, as you would expect, C# finds what we want to do legit. It's only the VS designer that doesn't like it when you follow the designer's event handler naming convention. I don't see the need for the designer to work that way. Anywho, time to carry on.
A: If your event is already defined in your parent class, you do not need to rewire it again in your child class. That will cause the event to fire twice.
Do verify if this is what is happening. HTH :)
A: This article on MSDN should be a good starting points: Overriding Event Handlers with Visual Basic .NET. Take a look at the How the Handles Clause Can Cause Problems in the Derived Class section.
A: Why not declare the method as virtual in the parent class and then you can override it in the derived classes to add extra functionality?
A: Forget that it's an event handler and just do proper regular method override in child class.
A: Here's what I did to get base methods called in several similar looking forms, each one of them having a few extra features to the common ones:
protected override void OnLoad(EventArgs e)
{
    try
    {
        this.SuspendLayout();
        base.OnLoad(e);
        foreach (Control ctrl in Controls)
        {
            Button btn = ctrl as Button;
            if (btn == null) continue;
            if (string.Equals(btn.Name, "btnAdd", StringComparison.Ordinal))
                btn.Click += new EventHandler(btnAdd_Click);
            else if (string.Equals(btn.Name, "btnEdit", StringComparison.Ordinal))
                btn.Click += new EventHandler(btnEdit_Click);
            else if (string.Equals(btn.Name, "btnDelete", StringComparison.Ordinal))
                btn.Click += new EventHandler(btnDelete_Click);
            else if (string.Equals(btn.Name, "btnPrint", StringComparison.Ordinal))
                btn.Click += new EventHandler(btnPrint_Click);
            else if (string.Equals(btn.Name, "btnExport", StringComparison.Ordinal))
                btn.Click += new EventHandler(btnExport_Click);
        }
    }
    finally
    {
        this.ResumeLayout(); // matches the SuspendLayout above
    }
}
The chance of omitting the right fixed button name looks the same to me as the chance of forgetting to wire the inherited handler manually.
Note that you may need to test for this.DesignMode so that you skip the code in VS Designer at all, but it works fine for me even without the check.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: Good asp.net (C#) apps? Any suggestions for good open source asp.net (C#) apps out there which meet as many of the following as possible?
*
*Designed well and multi tiered
*Clean & commented code
*Good use of several design patterns
*Web pages display properly in all common browsers
*Produces valid html and has good use of css
*Use of css themes. Prefer usage of CSS over tables
*NOT dependent on third party components (grids, menus, trees, ...etc)
*Has good unit tests
*Web pages are not simplistic and look professional
*Uses newer technologies like MVC, LINQ.. (not important)
*(Anything else that matters which I couldn't think of right now)
A: I would have to agree with BlogEngine. It implements a ton of different abilities and common needs in asp.net as well as allowing it to be fully customizable and very easy to understand. It can work with XML or SQL (your choice) and has a huge community behind it.
As for your requests (bold means yes):
*
*Designed well and multi tiered
*Clean & commented code
*Good use of several design patterns
*Web pages display properly in all common browsers
*Produces valid html and has good use of css
*Use of css themes. Prefer usage of CSS over tables
*NOT dependent on third party components (grids, menus, trees, ...etc) - kind of, still uses some custom dlls
*Has good unit tests - not sure
*Web pages are not simplistic and look professional - yes, and there are TONS of free templates out there
*Uses newer technologies like MVC, LINQ.. (not important) - not yet
*(Anything else that matters which I couldn't think of right now) - a ton more stuff like dynamic rss feeds, dynamic sitemaps, data references, etc.
There is also a bunch more great open source projects available here: http://www.asp.net/community/projects/
I know that dotNetNuke is pretty popular as well, and the Classified Program is pretty easy to use.
A: You should have a look at SharpArchitecture which uses ASP.NET MVC, and which is an open source architecture foundation for web applications.
A: BlogEngine.Net
A: dasBlog which is a blogging platform that Scott Hanselman contributes to.
A: This is pretty cool. Upcoming ASP.NET feature source is available.
A: TaskVision: a simple and sometimes very useful .NET client-server demo application:
Go to website
Complete source code is available (see bottom right corner for download)
A: Code Plex ->
*
*ASP.NET MVC - look at source
*ASP.NET Dynamic Data
*Script #
A: I learned a lot from SutekiShop (mvc, repository pattern, ddd+tdd), TechAvalanche sample app (http://www.simonsegal.net/blog/CodeDownloads/Orm.zip, several design patterns, poco with linq), CodeBetter.Award sample app for ddd+tdd, and MVC Storefront from Rob Conery.
A: It doesn't meet all the points you specified, but I'll mention it because I think it is a good piece of software: http://www.yetanotherforum.net/
A: Try to look at MojoPortal (http://www.mojoportal.com/)
A: There is MojoPortal (http://www.mojoportal.com/) :
*
*well designed
*css template & valid html => ok for all browsers
*open source
*perhaps not very modern (no MVC, no LINQ...)
*but runs on Mono
For a more up-to-date project, there is Dropthings (http://www.dropthings.com/) : an open source Web 2.0 style AJAX Portal built using ASP.NET 3.5, Workflow Foundation and LINQ.
And its author is considering making an ASP.NET MVC version using jQuery (http://weblogs.asp.net/omarzabir/archive/2008/07/15/open-source-asp-net-3-5-ajax-portal-new-and-improved.aspx)
A: You can try OXITE on codeplex.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90560",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Lightweight alternatives to NHibernate NHibernate is not really a good fit for our environment due to all the dependencies. (Castle, log4net etc.)
Is there a good lightweight alternative?
Support for simple file based databases such as Access/SQLite/VistaDB is essential.
Ideally, something contained in a single assembly that only references .NET assemblies. If it only requires .NET framework 2.0 or 3.0 that is a bonus.
A: For a lightweight ORM that performs well and only requires a single assembly, why not try out Lightspeed from Mindscape? It's not open source, however the source is available and it's reasonably priced. The risk with most ORMs that aren't well adopted is of course quality and level of support, and there are very few other open source ORMs worth bothering with in the .NET space at the moment.
Because of your dislike of NHibernate's dependencies, it sounds like you don't have a need for a logging framework or any of the Castle project facets, i.e. IoC, MonoRail, etc. Have you considered just taking the bare minimum of NHibernate's requirements (log4net and the Iesi collections I believe, plus dynamic proxy from the Castle project?) and running ILMerge over them to consolidate them into a single assembly? It might take a bit of fiddling, but it's not too hard. Alternatively, you could pull the source code for each of these projects into a custom build of NHibernate that you maintain for your organization and that trims out the features not required by your project/organization. It's not as hard/awkward as it sounds, and I've done something along these lines for one project where we wanted the benefit of an ORM but needed to reduce the size of the distributed files/installer.
Also - are you perhaps able to explain what you feel is too "heavy" about an NHibernate based solution? In my experience it's a reasonably lightweight ORM framework compared to some.
A: Adding to this list, you could also have a look at Dapper (written for and used by StackOverflow itself).
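To show how little ceremony Dapper needs, here is a minimal sketch; the Product class, table and connection string are made up for illustration:
using System.Data.SqlClient;
using Dapper;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// ... somewhere in your data access code:
using (var connection = new SqlConnection("Data Source=...;Initial Catalog=...;Integrated Security=True"))
{
    connection.Open();
    // Query<T> is a Dapper extension method; parameters come from an anonymous object
    var products = connection.Query<Product>(
        "SELECT Id, Name FROM Products WHERE Id = @Id",
        new { Id = 42 });
}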
A: Generally speaking, for your database backend to work with .NET you need an ADO.NET provider for it.
For MS Access (Jet), the provider is shipped with .NET.
For SQLite, there is a self-contained ADO.NET provider.
As for the data access layer lib, if you want some abstraction over ADO.Net:
*
*MS Data Access Application Block, a part of ms enterprise library.
*iBatis.Net (Was retired by the Apache Foundation in 2010) MyBatis.net
*ADOTemplate from the Spring Framework.net
All those work well starting with framework 2.0 and up.
Basically, you choose (and there is a lot of choices)
A: Some of the alternatives:
- ActiveRecord - it uses nhibernate.dll in the background, but configuration is done through attributes. It's like a lite version of NHibernate
- Subsonic
- CoolStorage.NET - I used it a lot with small projects. Works well with a number of DBs
A: Massive - https://github.com/robconery/massive
or
PetaPoco - https://github.com/toptensoftware/petapoco
Both are a single .cs file with no dependencies except what's in the GAC.
(full disclosure, PetaPoco is something I wrote)
A: Here's a big list of alternatives, ones I'd recommend:
*
*Coolstorage
*SOODA
*ODX
*Lightspeed (free for 8 objects or less)
Those 4 are the lightest ones. Subsonic, ActiveRecord and others are aimed at large systems. They work fine on smaller systems, but (at least for ActiveRecord) come with a huge list of dependencies and are overkill for a small system. I'd go with Lightspeed and say anything under 8 objects is a small system. Simply using NHibernate because it's widely adopted is good for scaling, but in the short term makes no sense - and having a layer between the ORM and your consumers can work around that anyway.
A: LINQ to SQL could be a good alternative to "heavy" ORM systems if you use it properly.
A: If you don't need a fully-functional ORM and just need a fast, database-independent data layer over ADO.NET, try out the open-source NI.Data library (V2). It is very lightweight (just one small assembly, no other dependencies) and provides all the standard data layer infrastructure:
*
*query abstraction and parser for its string representation called 'relex' (it looks like: "books(rating=5)[title,id]" - very good alternative to Linq-to-SQL and expressions can be composed on the fly )
*'view' concept for encapsulating complex DB-syntax dependent SQL queries
*data triggers
*data layer permissions for select/update/delete queries
*from the box supports MS SQL, SQLite, MySQL, Odbc/OleDb providers (MS Access). Support for other SQL databases could be easily added.
Its main component (DALC) is initialized with just one line of code:
var dalc = new DbDalc(new SqlClientDalcFactory(), connectionStr);
that's all.
If you need .NET 2.0 runtime support you can try to compile either latest V2 version under 2.0 runtime or use previous legacy version (NI.Data.Dalc, V1).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: How to get AD User Groups for user in Asp.Net? I need to be able to get a list of the groups a user is in, but I need to have one/some/all of the following properties visible:
*
*distinguishedname
*name
*cn
*samaccountname
What I have right now returns some sort of name, but not any of the ones above (the names seem close, but don't all match correctly). This is what I am using:
ArrayList groups = new ArrayList();
foreach (System.Security.Principal.IdentityReference group in System.Web.HttpContext.Current.Request.LogonUserIdentity.Groups)
groups.Add(group.Translate(typeof(System.Security.Principal.NTAccount)));
Like I said, the above works, but will not get me the proper names I need for my program (the ones specified above). I need this to be able to match up with the list I get while calling all of the groups in my domain:
DirectoryEntry dirEnt = new DirectoryEntry("LDAP://my_domain_controller");
DirectorySearcher srch = new DirectorySearcher(dirEnt);
srch.Filter = "(objectClass=Group)";
var results = srch.FindAll();
A: You cannot do this in one step, as groups are also separate AD entries with properties.
So in the first step you should get the names of the groups the user is in and put them into a list of some kind.
The second step is to go through all of the group names and query them one by one to get the group properties (like distinguishedname, and so on), collecting them into some kind of structure.
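A rough sketch of that two-step approach, building on the code already in the question (the domain controller path and the property names are the ones listed there; error handling is omitted):
using System.Collections.Generic;
using System.DirectoryServices;
using System.Security.Principal;
using System.Web;

// step 1: translate the user's group SIDs into DOMAIN\Name strings
List<string> groupNames = new List<string>();
foreach (IdentityReference group in HttpContext.Current.Request.LogonUserIdentity.Groups)
    groupNames.Add(group.Translate(typeof(NTAccount)).Value);

// step 2: look each group up in AD and read the properties you need
DirectoryEntry dirEnt = new DirectoryEntry("LDAP://my_domain_controller");
foreach (string name in groupNames)
{
    string samAccountName = name.Contains("\\") ? name.Split('\\')[1] : name;
    DirectorySearcher srch = new DirectorySearcher(dirEnt);
    srch.Filter = "(&(objectClass=group)(samaccountname=" + samAccountName + "))";
    SearchResult result = srch.FindOne();
    if (result == null) continue;

    DirectoryEntry groupEntry = result.GetDirectoryEntry();
    string distinguishedName = (string)groupEntry.Properties["distinguishedname"].Value;
    string cn = (string)groupEntry.Properties["cn"].Value;
    // ...and so on for name and samaccountname
}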
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Best way to really grok Java-ME for a C# guy I've recently started developing applications for the Blackberry. Consequently, I've had to jump to Java-ME and learn that and its associated tools. The syntax is easy, but I keep having issues with various gotchas and the environment.
For instance, something that surprised me and wasted a lot of time is the absence of real properties on a class object (something I assumed all OOP languages had). There are many gotchas. I've been to various places where they compare Java syntax vs C#, but there don't seem to be any sites that tell of things to look out for when moving to Java.
The environment is a whole other issue altogether. The Blackberry IDE is simply horrible. The look reminds me of Borland C++ for Windows 3.1 - it's that outdated. Some of the other issues included spotty intellisense, weak debugging, etc... Blackberry does have a beta of the Eclipse plugin, but without debugging support, it's just an editor with fancy refactoring tools.
So, any advice on how to blend in to Java-ME?
A: This guy here had to make the inverse transition. So he listed the top 10 differences of Java and C#. I'll take his topics and show how it is made in Java:
Gotcha #10 - Give me my standard output!
To print to the standard output in Java:
System.out.println("Hello");
Gotcha #9 - Namespaces == Freedom
In Java you don't have the freedom of namespaces. The folder structure of your class must match the package name. For example, a class in the package org.test must be in the folder org/test
Gotcha #8 - What happened to super?
In Java to refer to the superclass you use the reserved word super instead of base
Gotcha #7 - Chaining constructors to a base constructor
You don't have this in Java. You have to call the constructor by yourself
Gotcha #6 - Dagnabit, how do I subclass an existing class?
To subclass a class in Java do this:
public class A extends B {
}
That means class A is a subclass of class B. In C# this would be class A : B
Gotcha #5 - Why don’t constants remain constant?
To define a constant in Java use the keyword final instead of const
Gotcha #4 - Where is ArrayList, Vector or Hashtable?
The most used data structures in java are HashSet, ArrayList and HashMap. They implement Set, List and Map. Of course, there is a bunch more. Read more about collections here
Gotcha #3 - Of Accessors and Mutators (Getters and Setters)
You don't have the properties facility in Java. You have to declare the gets and sets methods for yourself. Of course, most IDEs can do that automatically.
Gotcha #2 - Can't I override!?
You don't have to declare a method virtual in Java. All methods - except those declared final - can be overridden in Java.
And the #1 gotcha…
In Java the primitive types int, float, double, char and long are not Objects like in C#. All of them have a respective object representation, like Integer, Float, Double, etc.
That's it. Don't forget to see the original link, there's a more detailed discussion.
A: Java is not significantly different from C#. On a purely syntactic level, here are some pointers that may get you through the day:
*
*In Java you have two families of exceptions: java.lang.Exception and everything that derives from it, and RuntimeException. This is meaningful because in Java exceptions are checked; this means that in order to throw any non-runtime exception you also need to add a throws annotation to your method declaration. Consequently, any method using yours will have to catch that exception or declare that it also throws the same exception. A lot of exceptions you take for granted, such as NullPointerException or IllegalArgumentException, in fact derive from RuntimeException and you therefore don't need to declare them. Checked exceptions are a point of contention between two disciplines, so I'd recommend you try them out for yourself and see if it helps or annoys you. On a personal level, I think checked exceptions improve code factoring and robustness significantly.
*Although Java has supported autoboxing for quite a while, there are still quite a few differences between the C# and Java implementations that you should be aware of. Whereas in C# you can interchangeably use int as both a value type and reference type, in Java they're literally not the same type: you get the primitive value type int and the library reference type java.lang.Integer. This manifests in two common ways: you can't use the value types as a generic type parameter (so you'll use ArrayList<Integer> instead of ArrayList<int>), and the utility methods (such as parse or toString) are statically implemented in the reference type (so it's not int a; a.toString(); but rather int a; Integer.toString( a );).
*Java has two distinct types of nested classes, C# only has one. In Java a nested class that is not declared with the static modifier is called an inner class, and has implicit access to the enclosing class's instance. This is an important point because, unlike C#, Java has no concept of delegates, and inner classes are very often used to achieve the same result with relatively little syntactic pain.
*Generics in Java are implemented in a radically different manner than C#; when generics were developed for Java it was decided that the changes will be purely syntactic with no runtime support, in order to retain backwards compatibility with older VMs. With no direct generics support in the runtime, Java implements generics using a technique called type erasure. There are quite a few disadvantages to type erasure over the C# implementation of generics, but the most important point to take from this is that parameterized generic types in Java do not have different runtime types. In other words, after compilation the types ArrayList<Integer> and ArrayList<String> are equivalent. If you work heavily with generics you'll encounter these differences a lot sooner than you'd think.
These are, in my opinion, the three hardest aspects of the language for a C# developer to grok. Other than that there's the development tools and class library.
*
*In Java, there is a direct correlation between the package (namespace), class name and file name. Under a common root directory, the classes com.example.SomeClass and org.apache.SomeOtherClass will literally be found in com/example/SomeClass.class and org/apache/SomeOtherClass.class respectively. Be wary of trying to define multiple classes in a single Java file (it's possible for private classes, but not recommended), and stick to this directory structure until you're more comfortable with the development environment.
*In Java you have the concepts of class-path and class-loader which do not easily map to C# (there are rough equivalents which are not in common use by most .NET developers). Classpath tells the Java VM where libraries and classes are to be found (both yours and the system's shared libraries!), and you can think of class loaders as the context in which your types live. Class loaders are used to load types (class files) from various locations (local disk, internet, resource files, whatnot) but also constrain access to those files. For instance, an application server such as Tomcat will have a class loader for each registered application, or context; this means that a static class in application A will not be the same as a static class in application B, even if they have the same name and even if they share the same codebase. AppDomains provide somewhat similar functionality in .NET.
*The Java class library is similar to the BCL; a lot of the differences are cosmetic, but it's enough to send you running to the documentation (and/or Google) over and over again. Unfortunately I don't think there's anything to do here — you'll just build familiarity with the libraries as you go.
Bottom line: the only way to grok Java is to use it. The learning curve isn't steep, but prepare to be surprised and frustrated quite often over the first two or three months of use.
A: The short answer is - it's going to be annoying, but not difficult.
Java and C# have all the same underlying concepts, and a lot of the libraries are very close in style, but you're going to keep bumping your head across various differences.
If you're talking about class properties, Java has those. The syntax is
public class MyClass {
public static int MY_CLASS_PROPERTY = 12;
}
I would seriously suggest you get a better IDE.
Any of Netbeans, Eclipse, IDEA, or JBuilder is going to make your transition a lot more pleasant.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
}
|
Q: How to center text over an image in a table using javascript, css, and/or html? How to center text over an image in a table cell using javascript, css, and/or html?
I have an HTML table containing images - all the same size - and I want to center a text label over each image. The text in the labels may vary in size. Horizontal centering is not difficult, but vertical centering is.
ADDENDUM: I did end up having to use javascript to center the text reliably, using a fixed-size div with absolute positioning; I just could not get it to work any other way.
A: you could try putting the images in the background.
<table>
<tr>
<td style="background: url(myImg.jpg) no-repeat; vertical-align: middle; text-align: center">
Here is my text
</td>
</tr>
</table>
You'll just need to set the height and width on the cell and that should be it.
A: There's no proper way of doing it in CSS (although there should be). But here's a method that works for me.
CSS:
#image1, #image1-text, #image1-container {
overflow: hidden;
height: 100px;
width: 100px;
}
#image1 {
top: -100px;
position: relative;
z-index: -1;
}
#image1-text {
text-align: center;
vertical-align: middle;
display: table-cell;
}
HTML:
<div id="image1-container">
<img src="image.jpeg" id="image1">
<div id="image1-text">
hello
</div>
</div>
The order of image1 and image1-text in the container doesn't matter.
It's a bit of a hack but it works anywhere, not just in a table. It doesn't properly work in IE however. It will display it at the top instead. But it works in FF, Safari and Chrome. Haven't tested in IE8.
A hack for IE7 or less, which will only show 1 line, but it will be centred is to add the following inside the <head> tag:
<!--[if lte IE 7]>
<style>
#image1-text {
line-height: 100px;
}
</style>
<![endif]-->
A: I would set the images as the cells' background via CSS, set the cells' size to the proper fixed value (again via CSS), and then insert the text label as the cell content. By default, the content of table cells is centered vertically, so I think you don't have to worry about it. Again, vertical and horizontal alignment can be easily set via CSS. This approach works because I applied it a lot of times.
Another way would be to insert both the image and text in the table cells, wrapping the text in a DIV element and playing with its CSS properties (relative position and margins), but this is a bit tricky in my opinion.
A: You can use the TD's "valign" attribute, and it can be top, bottom or middle... But as far as I know cell contents are centered vertically by default, so probably your CSS makes them show with the bottom or top option.
<TABLE><TR valign="middle">
<TD align="center" background="some image"> image label </TD>
</TR></TABLE>
A: Thanks everyone for the suggestions.
I did end up having to use javascript to center the text reliably, using a fixed-size div with absolute positioning; I just could not get it to work any other way.
I also had to generate the text divs with visibility hidden and have a javascript loop at the end of the page to make them visible and place them over the appropriate table cell.
There are some serious holes in the layout capabilities of css/html; hopefully these will be addressed in future versions ;-)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Word frequency algorithm for natural language processing Without getting a degree in information retrieval, I'd like to know if there exists any algorithms for counting the frequency that words occur in a given body of text. The goal is to get a "general feel" of what people are saying over a set of textual comments. Along the lines of Wordle.
What I'd like:
*
*ignore articles, pronouns, etc ('a', 'an', 'the', 'him', 'them' etc)
*preserve proper nouns
*ignore hyphenation, except for soft kind
Reaching for the stars, these would be peachy:
*
*handling stemming & plurals (e.g. like, likes, liked, liking match the same result)
*grouping of adjectives (adverbs, etc) with their subjects ("great service" as opposed to "great", "service")
I've attempted some basic stuff using Wordnet but I'm just tweaking things blindly and hoping it works for my specific data. Something more generic would be great.
A: You'll need not one, but several nice algorithms, along the lines of the following.
*
*ignoring pronouns is done via a stoplist.
*preserving proper nouns? You mean, detecting named entities, like Hoover Dam and saying "it's one word" or compound nouns, like programming language? I'll give you a hint: that's a tough one, but there exist libraries for both. Look for NER (named entity recognition) and lexical chunking. OpenNLP is a Java toolkit that does both.
*ignoring hyphenation? You mean, like at line breaks? Use regular expressions and verify the resulting word via dictionary lookup.
*handling plurals/stemming: you can look into the Snowball stemmer. It does the trick nicely.
*"grouping" adjectives with their nouns is generally a task of shallow parsing. But if you are looking specifically for qualitative adjectives (good, bad, shitty, amazing...) you may be interested in sentiment analysis. LingPipe does this, and a lot more.
I'm sorry, I know you said you wanted to KISS, but unfortunately, your demands aren't that easy to meet. Nevertheless, there exist tools for all of this, and you should be able to just tie them together and not have to perform any task yourself, if you don't want to. If you want to perform a task yourself, I suggest you look at stemming, it's the easiest of all.
If you go with Java, combine Lucene with the OpenNLP toolkit. You will get very good results, as Lucene already has a stemmer built in and a lot of tutorials. The OpenNLP toolkit on the other hand is poorly documented, but you won't need too much out of it. You might also be interested in NLTK, written in Python.
I would say you should drop your last requirement, as it involves shallow parsing and will definitely not improve your results.
Ah, btw, the exact term for that document-term-frequency thing you were looking for is tf-idf. It's pretty much the best way to look for document frequency of terms. In order to do it properly, you won't get around using multidimensional vector matrices.
... Yes, I know. After taking a seminar on IR, my respect for Google was even greater. After doing some stuff in IR, my respect for them fell just as quick, though.
A: Here is an example of how you might do that in Python, the concepts are similar in any language.
>>> import urllib2, string
>>> devilsdict = urllib2.urlopen('http://www.gutenberg.org/files/972/972.txt').read()
>>> workinglist = devilsdict.split()
>>> cleanlist = [item.strip(string.punctuation) for item in workinglist]
>>> results = {}
>>> skip = {'a':'', 'the':'', 'an':''}
>>> for item in cleanlist:
        if item not in skip:
            try:
                results[item] += 1
            except KeyError:
                results[item] = 1
>>> results
{'': 17, 'writings': 3, 'foul': 1, 'Sugar': 1, 'four': 8, 'Does': 1, "friend's": 1, 'hanging': 4, 'Until': 1, 'marching': 2 ...
The first line just gets libraries that help with parts of the problem, as in the second line, where urllib2 downloads a copy of Ambrose Bierce's "Devil's Dictionary". The next lines make a list of all the words in the text, without punctuation. Then you create a hash table, which in this case is like a list of unique words associated with a number. The for loop goes over each word in the Bierce book; if there is already a record of that word in the table, each new occurrence adds one to the value associated with that word, and if the word hasn't appeared yet, it gets added to the table with a value of 1 (meaning one occurrence). For the cases you are talking about, you would want to pay much more attention to detail, for example using capitalization to help identify proper nouns only in the middle of sentences, etc. This is very rough but expresses the concept.
To get into the stemming and pluralization stuff, experiment, then look into 3rd party work, I have enjoyed parts of the NLTK, which is an academic open source project, also in python.
A: I wrote a full program to do just this a while back. I can upload a demo later when I get home.
Here is the code (asp.net/c#): http://naspinski.net/post/Findingcounting-Keywords-out-of-a-Text-Document.aspx
A: The first part of your question doesn't sound so bad. All you basically need to do is read each word from the file (or stream, whatever) and place it into a prefix tree, and each time you happen upon a word that already exists you increment the value associated with it. Of course you would have an ignore list of everything you'd like left out of your calculations as well.
If you use a prefix tree you ensure that finding any word is going to be O(N), where N is the maximum length of a word in your data set. The advantage of a prefix tree in this situation is that if you want to look for plurals and stemming you can check in O(M+1) whether that's even possible for the word, where M is the length of the word without stem or plurality (is that a word? hehe). Once you've built your prefix tree I would reanalyze it for the stems and such and condense it down so that the root word is what holds the results.
Upon searching you could have some simple rules in place to have the match return positive in case of the root or stem or what have you.
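As a concrete illustration of the counting part, here is a bare-bones prefix tree (trie) with per-word counts, written in C# only because the question didn't name a language; stop-word filtering and stemming would be layered on top:
using System.Collections.Generic;

public class TrieNode
{
    public readonly Dictionary<char, TrieNode> Children = new Dictionary<char, TrieNode>();
    public int Count; // how many times a word ending at this node has been seen
}

public class WordTrie
{
    private readonly TrieNode root = new TrieNode();

    public void Add(string word)
    {
        TrieNode node = root;
        foreach (char c in word)
        {
            TrieNode next;
            if (!node.Children.TryGetValue(c, out next))
            {
                next = new TrieNode();
                node.Children[c] = next;
            }
            node = next;
        }
        node.Count++; // one more occurrence of this word
    }

    public int CountOf(string word)
    {
        TrieNode node = root;
        foreach (char c in word)
        {
            if (!node.Children.TryGetValue(c, out node))
                return 0; // word never seen
        }
        return node.Count;
    }
}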
The second part seems extremely challenging. My naive inclination would be to hold separate results for adjective-subject groupings. Use the same principles as above but just keep it separate.
Another option for the semantic analysis could be modeling each sentence as a tree of subject, verb, etc relationships (Sentence has a subject and verb, subject has a noun and adjective, etc). Once you've broken all of your text up in this way it seems like it might be fairly easy to run through and get a quick count of the different appropriate pairings that occurred.
Just some ramblings, I'm sure there are better ideas, but I love thinking about this stuff.
A: Welcome to the world of NLP ^_^
All you need is a little basic knowledge and some tools.
There are already tools that will tell you if a word in a sentence is a noun, adjective or verb. They are called part-of-speech taggers. Typically, they take plaintext English as input, and output the word, its base form, and the part-of-speech. Here is the output of a popular UNIX part-of-speech tagger on the first sentence of your post:
$ echo "Without getting a degree in information retrieval, I'd like to know if there exists any algorithms for counting the frequency that words occur in a given body of text." | tree-tagger-english
# Word POS surface form
Without IN without
getting VVG get
a DT a
degree NN degree
in IN in
information NN information
retrieval NN retrieval
, , ,
I PP I
'd MD will
like VV like
to TO to
know VV know
if IN if
there EX there
exists VVZ exist
any DT any
algorithms NNS algorithm
for IN for
counting VVG count
the DT the
frequency NN frequency
that IN/that that
words NNS word
occur VVP occur
in IN in
a DT a
given VVN give
body NN body
of IN of
text NN text
. SENT .
As you can see, it identified "algorithms" as being the plural form (NNS) of "algorithm" and "exists" as being a conjugation (VBZ) of "exist." It also identified "a" and "the" as "determiners (DT)" -- another word for article. As you can see, the POS tagger also tokenized the punctuation.
To do everything but the last point on your list, you just need to run the text through a POS tagger, filter out the categories that don't interest you (determiners, pronouns, etc.) and count the frequencies of the base forms of the words.
Here are some popular POS taggers:
TreeTagger (binary only: Linux, Solaris, OS-X)
GENIA Tagger (C++: compile your self)
Stanford POS Tagger (Java)
To do the last thing on your list, you need more than just word-level information. An easy way to start is by counting sequences of words rather than just words themselves. These are called n-grams. A good place to start is UNIX for Poets. If you are willing to invest in a book on NLP, I would recommend Foundations of Statistical Natural Language Processing.
A: The algorithm? You just described it. A program that does it out of the box with a big button saying "Do it"... I don't know.
But let me be constructive. I recommend you this book Programming Collective Intelligence. Chapters 3 and 4 contain very pragmatic examples (really, no complex theories, just examples).
A: You can use the WordNet dictionary to get the basic information about the question keyword, like its part of speech, and to extract synonyms; you can also do the same for your document to create the index for it.
Then you can easily match the keyword against the index file and rank the document, then summarize it.
A: Everything what you have listed is handled well by spacy.
*
*Ignore some words - use stop words
*Extract subject - use part of speech tagging to identify it (works out of the box). After a sentence is parsed, find "ROOT" - the main verb of the sentence. By navigating the parse tree you will find a noun that relates to this verb. It will be the subject.
*Ignore hyphenation - their tokenizer handles hyphens in most cases. It can be easily extended to handle more special cases.
If the list of topics is pre-determined and not huge, you may even go further: build a classification model that will predict the topic.
Let's say you have 10 subjects. You collect sample sentences or texts. You load them into another product: Prodigy. Using its great interface you quickly assign subjects to the samples. And finally, using the categorized samples, you train the spacy model to predict the subject of the texts or sentences.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90580",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
}
|
Q: Lightweight Search Indexing API/Lbrary I'm looking for an open source search indexing library. It will be used for embedded web application so it should have a small code size. Preferably, written in C, C++ or PHP and does not require any database to be installed for storing indexes. Indexes should be stored on a file instead (e.g., xml, txt). I tried to look on some famous search libraries such as xapian and clucene, they're good but have a relatively large code size for an embedded system.
This will be run on a Linux platform and will be used to index HTML files.
Any thoughts on what would be a good search library/API to use?
Thanks.
A: Hyper Estraier.
A: Oh, man. There's a few. In order of descending obscurity...
*
*FTSearch
*Zettair
*Sphinx
*Ferret
*Solr (lucene based though, may be too heavy)
I'm sure there's a ton more out there, but these are the ones I have off the top of my head. Good luck :)
A: First: you have to store indexes somewhere. So a data file will be needed unless you want memory only indexes.
To index generic items, I can recommend you sqlite: http://www.sqlite.org/. I even use it in memory only mode when I have a bunch of data and I need to handle it with multiple indexes.
A: It depends on your requirements. A full distribution of Lucene (Java) is up to 3MB JAR file, but in practice can be stripped down to well under 1MB. CLucene is probably considerably smaller in practice. How low do you need to go?...
A: Swish-E is written in C and might do what you want. Does not require a database, uses its own binary index file format.
I've also used ht://Dig but it looks like it's been a long time since that software was maintained.
Both will compile on Linux and index HTML just fine.
A third option is SINO used by AustLII. Contact the team there to make sure you get the latest version. It should compile on Linux without too much trouble. It's not really designed for embedded systems (SINO stands for Size Is No Object), but it had a decent API last I looked and is relatively small (so, it's not designed for that, but might work just as well). Targeted at HTML. Pretty fast indexing. Worth a look I think. (Disclosure: worked there a long time ago)
Finally, we use Solr which is based on Lucene. Solr uses a simple API based on POSTing XML documents to a server. Pretty simple to interface with no matter what your language.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Create Web Client Solution dialog closes on click or tab out of default field without messages I installed Web Client Software Factory February 2008 release on Visual Studio Team System 2008 Development Edition (without SP1). When I first installed that, I tried out the software factory as I've never used one before and it worked fine. That was two months ago.
Recently I needed to learn how to use the software factory and when I tried to create a WCSF solution, I get a problem.
The WCSF shows up as a Visual Studio installed template under the Guidance Package project type. There are four templates (which I believe is as it should be) which are:
*
*Web Client Solution (C#, Web Site)
*Web Client Solution (C#, WAP)
*Web Client Solution (Visual Basic, Web Site)
*Web Client Solution (Visual Basic, WAP)
Once I select any of them and proceed to create the solution, Visual Studio displays the 'Create Web Client Solution' dialog. Here's the weird part: as soon as I click on any clickable control (e.g. a TextBox or Button) or press the Tab key to move the cursor, the dialog just closes, no solution is created, and the status bar displays "Creating project 'C:\MyWebApp' ... project creation failed." If I click on any other part of the dialog, nothing happens.
I've tried uninstalling everything (including Visual Studio) and reinstalling, and it still won't work. I tried using Microsoft's 'Windows Install Clean Up' tool to ensure any potentially corrupt MSI entries were removed before reinstalling. Nothing works.
Hopefully someone else has faced this before and found an answer.
Cheers.
~ hg
A: I believe this might be a VS-related authorization problem, not a WCSF problem in particular.
Try this fix:
http://developerspoint.wordpress.com/2008/06/25/how-to-deal-with-project-creation-failed-problem-of-visual-studio-2008/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to implement a web page that scales when the browser window is resized? How to implement a web page that scales when the browser window is resized?
I can lay out the elements of the page using either a table or CSS float sections, but I want the display to rescale when the browser window is resized.
I have a working solution using AJAX PRO and DIVs with overflow:auto and an onwindowresize hook, but it is cumbersome. Is there a better way?
*
*thanks everyone for the answers so far; I intend to try them all (or at least most of them) and then choose the best solution as the answer to this thread
*using CSS and percentages seems to work best, which is what I did in the original solution; using a visibility:hidden div set to 100% by 100% gives a way to measure the client area of the window [difficult in IE otherwise], and an onwindowresize JavaScript function lets the AJAX PRO methods kick in when the window is resized to redraw the layout-cell contents at the new resolution
EDIT: my apologies for not being completely clear; I needed a 'liquid layout' where the major elements ('panes') would scale as the browser window was resized. I found that I had to use an AJAX call to re-display the 'pane' contents after resizing, and keep overflow:auto turned on to avoid scrolling
A: Unless you have some specific requirement, I'm not sure why JS is needed here. Tabular layouts are the easy (and archaic) way to make fluid layouts in HTML, but div layouts with CSS allow for fluid layouts as well; see http://www.glish.com/css/2.asp
A: Yep, sounds like you want to look at a fluid CSS layout.
For resources on this, just Google "fluid CSS layout"; that should give you a whole lot of things to check.
Also have a look at this previous question for some good pointers.
A: It really depends on the web page you are implementing. As a general rule you're going to want 100% CSS. When sizing elements that will contain text remember to gravitate towards text oriented sizes such as em, ex, and not px.
Floats are dangerous if you're new to CSS. I'm not new and they are still somewhat baffling to me. Avoid them where possible. Normally, you just need to modify the display property of the div or element you're working on anyway.
If you do all of this and scour the web where you have additional difficulties you'll not only have pages that resize when the browser does so, but also pages that can be zoomed in and out by resizing text. Basically, do it right and the design is unbreakable. I've seen it done on complex layouts but it is a lot of work, as much effort as programming the web page in certain instances.
I'm not sure who you're doing this site for (fun, profit, both) but I'd recommend you think long and hard about how you balance out the CSS purity with a few hacks here and there to help increase your efficiency. Does your web site have a business need to be 100% accessible? No? Then screw it. Do what you need to do to make your money first, then go hog wild with your passion and do anything extra you have time for.
A: Something else to consider is that JavaScript won't update continuously while the window is being resized, so there will be a noticeable delay/choppiness. With a fluid CSS layout, screen elements will update almost instantly for a seamless transition.
A: The best way that I have seen to do this is to use the YUI CSS Tools and then use percentages for everything. YUI Grids allow for various fixed width or fluid layouts with column sizes specified as fractions of the available space. There is a YUI Grids Builder to help lay things out. YUI Fonts gives you good font size controls. There are some nice cheat sheets available that show you how to lay things out and useful things like what percentage to specify for a font size of so many pixels.
This gets you scaling of the positioning but scaling of the entire site, including font sizes, when the browser window resizes is a bit trickier. I'm thinking that you are going to have to write some sort of browser plugin for this which means that your solution will be non portable. If you are on an intranet this isn't too bad as you can control the browser on each client but if you are wanting a site that is available on the internet then you may need to rethink your UI.
A: Instead of saying "width: 200px" in CSS, use something like "width: 50%".
This makes it use 50% of whatever it's in, so in the case of:
<body>
<div style="width:50%">
<!--some stuff-->
</div>
</body>
The div will now always take up half the window horizontally.
A: After trying a solution by the book, I got stuck with incompatibilities in either Firefox or IE. So I did some tinkering and came up with this CSS. As you can see, the margins are half of the desired size and negative.
<head><title>Centered</title>
<style type="text/css">
body {
background-position: center center;
border: thin solid #000000;
height: 300px;
width: 600px;
position: absolute;
left: 50%;
top: 50%;
margin-top: -150px;
margin-right: auto;
margin-bottom: auto;
margin-left: -300px;
}
</style></head>
Hope that helps
A: Use percentages! Say you have a "main pane" on which all your page's content lies. You want it to be centered in the window, always, and 80% of the width of the window.
Simply do this:
#centerpane{
margin: auto;
width: 80%;
}
Tada!
A: Thanks for all of the suggestions! It looks like the ugly stuff I had to do was necessary. The following works (on my machine, anyway) in IE and Firefox. I may make an article out of this for CodeProject.com later ;-)
This javascript goes in the <head> section:
<script type="text/javascript">
var tmout = null;
var mustReload = false;
function Resizing()
{
if (tmout != null)
{
clearTimeout(tmout);
}
tmout = setTimeout(RefreshAll,300);
}
function Reload()
{
document.location.href = document.location.href;
}
//IE fires the window's onresize event when the client area
//expands or contracts, which causes an infinite loop.
//the way around this is a hidden div set to 100% of
//height and width, with a guard around the resize event
//handler to see if the _window_ size really changed
var windowHeight;
var windowWidth;
window.onresize = null;
window.onresize = function()
{
var backdropDiv = document.getElementById("divBackdrop");
if (windowHeight != backdropDiv.offsetHeight ||
windowWidth != backdropDiv.offsetWidth)
{
//if the screen is shrinking, we must reload to get correct sizes
if (windowHeight > backdropDiv.offsetHeight ||
windowWidth > backdropDiv.offsetWidth)
{
mustReload = true;
}
windowHeight = backdropDiv.offsetHeight;
windowWidth = backdropDiv.offsetWidth;
Resizing();
}
}
</script>
the <body> starts off like this:
<body onload="RefreshAll();">
<div id="divBackdrop"
style="width:100%; clear:both; height: 100%; margin: 0;
padding: 0; position:absolute; top:0px; left:0px;
visibility:hidden; z-index:0;">
</div>
the DIVs float left for the layout. I had to set the height and width to percentages just shy of the full amount (e.g., 99.99%, 59.99%, 39.99%) to keep the floats from wrapping, probably due to the borders on the DIVs.
Finally, after the content section, another javascript block to manage the refreshing:
var isWorking = false;
var currentEntity = <%=currentEntityId %>;
//try to detect a bad back-button usage;
//if the current entity id does not match the querystring
//parameter entityid=###
if (location.search != null && location.search.indexOf("&entityid=") > 0)
{
var urlId = location.search.substring(
location.search.indexOf("&entityid=")+10);
if (urlId.indexOf("&") > 0)
{
urlId = urlId.substring(0,urlId.indexOf("&"));
}
if (currentEntity != urlId)
{
mustReload = true;
}
}
//a friendly please wait... hidden div
var pleaseWaitDiv = document.getElementById("divPleaseWait");
//an example content div being refreshed via AJAX PRO
var contentDiv = document.getElementById("contentDiv");
//synchronous refresh of content
function RefreshAll()
{
if (isWorking) { return; } //no infinite recursion please!
isWorking = true;
pleaseWaitDiv.style.visibility = "visible";
if (mustReload)
{
Reload();
}
else
{
contentDiv.innerHTML = NAMESPACE.REFRESH_METHOD(
currentEntity, contentDiv.offsetWidth,
contentDiv.offsetHeight).value;
}
pleaseWaitDiv.style.visibility = "hidden";
isWorking = false;
if (tmout != null)
{
clearTimeout(tmout);
}
}
var tmout2 = null;
var refreshInterval = 60000;
//periodic synchronous refresh of all content
function Refreshing()
{
RefreshAll();
if (tmout2 != null)
{
clearTimeout(tmout2);
tmout2 = setTimeout(Refreshing,refreshInterval);
}
}
//start periodic refresh of content
tmout2 = setTimeout(Refreshing,refreshInterval);
//clean up
window.onunload = function()
{
isWorking = true;
if (tmout != null)
{
clearTimeout(tmout);
tmout = null;
}
if (tmout2 != null)
{
clearTimeout(tmout2);
tmout2 = null;
}
}
Ugly, but it works - which I guess is what really matters ;-)
A: Use ems. jontangerine.com and simplebits.com are both amazing examples. Further reading: The Incredible Em & Elastic Layouts with CSS by Jon Tan.
A: < body onresize="resizeWindow()" onload="resizeWindow()" >
PAGE
< /body >
/**** Page Rescaling Function ****/
function resizeWindow()
{
var windowHeight = getWindowHeight();
var windowWidth = getWindowWidth();
document.getElementById("content").style.height = (windowHeight - 4) + "px";
}
function getWindowHeight()
{
var windowHeight=0;
if (typeof(window.innerHeight)=='number')
{
windowHeight = window.innerHeight;
}
else {
if (document.documentElement && document.documentElement.clientHeight)
{
windowHeight = document.documentElement.clientHeight;
}
else
{
if (document.body && document.body.clientHeight)
{
windowHeight = document.body.clientHeight;
}
}
}
return windowHeight;
}
The solution I'm currently working on needs a few changes for width; height works fine so far ^^
-Ozaki
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Can I get more than 1000 records from a DirectorySearcher? I just noticed that the return list for results is limited to 1000. I have more than 1000 groups in my domain (HUGE domain). How can I get more than 1000 records? Can I start at a later record? Can I cut it up into multiple searches?
Here is my query:
DirectoryEntry dirEnt = new DirectoryEntry("LDAP://dhuba1kwtn004");
string[] loadProps = new string[] { "cn", "samaccountname", "name", "distinguishedname" };
DirectorySearcher srch = new DirectorySearcher(dirEnt, "(objectClass=Group)", loadProps);
var results = srch.FindAll();
I have tried to set srch.SizeLimit = 2000;, but that doesn't seem to work. Any ideas?
A: You need to set DirectorySearcher.PageSize to a non-zero value to get all results.
BTW you should also dispose DirectorySearcher when you're finished with it
using(var srch = new DirectorySearcher(dirEnt, "(objectClass=Group)", loadProps))
{
srch.PageSize = 1000;
var results = srch.FindAll();
}
The API documentation isn't very clear, but essentially:
*
*when you do a paged search, the SizeLimit is ignored, and all matching results are returned as you iterate through the results returned by FindAll. Results will be retrieved from the server a page at a time. I chose the value of 1000 above, but you can use a smaller value if preferred. The tradeoff is: using a small PageSize will return each page of results faster, but will require more frequent calls to the server when iterating over a large number of results.
*by default the search isn't paged (PageSize = 0). In this case up to SizeLimit results are returned.
As Biri pointed out, it's important to dispose the SearchResultCollection returned by FindAll, otherwise you may have a memory leak as described in the Remarks section of the MSDN documentation for DirectorySearcher.FindAll.
One way to help avoid this in .NET 2.0 or later is to write a wrapper method that automatically disposes the SearchResultCollection. This might look something like the following (or could be an extension method in .NET 3.5):
public IEnumerable<SearchResult> SafeFindAll(DirectorySearcher searcher)
{
using(SearchResultCollection results = searcher.FindAll())
{
foreach (SearchResult result in results)
{
yield return result;
}
} // SearchResultCollection will be disposed here
}
You could then use this as follows:
using(var srch = new DirectorySearcher(dirEnt, "(objectClass=Group)", loadProps))
{
srch.PageSize = 1000;
var results = SafeFindAll(srch);
}
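As a small illustrative follow-up (not part of the original answer), here is one way the enumerable above might be consumed; it reuses dirEnt, loadProps and SafeFindAll from the snippets above, and reading samaccountname to the console is just an example:
using (var srch = new DirectorySearcher(dirEnt, "(objectClass=Group)", loadProps))
{
    srch.PageSize = 1000;
    // SafeFindAll enumerates lazily and disposes the SearchResultCollection when done.
    foreach (SearchResult result in SafeFindAll(srch))
    {
        Console.WriteLine(result.Properties["samaccountname"][0]);
    }
}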
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "75"
}
|
Q: Mocking method results I'm trying to find a way to fake the result of a method called from within another method.
I have a "LoadData" method which calls a separate helper to get some data and then it will transform it (I'm interested in testing the transformed result).
So I have code like this:
public class MyClass(){
public void LoadData(){
SomeProperty = Helper.GetSomeData();
}
public object SomeProperty {get;set;}
}
I want to have a known result from the Helper.GetSomeData() method. Can I use a mocking framework (I've got fairly limited experience with Rhino Mocks but am open to anything) to force an expected result? If so, how?
*Edit - yeah as expected I couldn't achieve the hack I wanted, I'll have to work out a better way to set up the data.
A: I would recommend converting what you have into something like this:
public class MyClass()
{
private IHelper _helper;
public MyClass()
{
//Default constructor normal code would use.
this._helper = new Helper();
}
public MyClass(IHelper helper)
{
if(helper == null)
{
throw new ArgumentNullException("helper"); //guard against a null dependency
}
this._helper = helper;
}
public void LoadData()
{
SomeProperty = this._helper.GetSomeData();
}
public object SomeProperty {get;set;}
}
Now your class supports what is known as dependency injection. This allows you to inject the implementation of the helper class, and it ensures that your class need only depend on the interface. When you mock this, you just create a mock that implements the IHelper interface and pass it in to the constructor, and your class will use it as though it were the real Helper class.
Now if you're stuck using the Helper class as a static class then I would suggest that you use a proxy/adapter pattern and wrap the static class with another class that supports the IHelper interface (that you will also need to create).
If at some point you want to take this a step further, you could completely remove the default Helper implementation from the revised class and use IoC (Inversion of Control) containers. If this is new to you, though, I would recommend focusing first on the fundamentals of why all of this extra hassle is worthwhile (it is, IMHO).
Your unit tests will look something like this pseudo-code:
public Amazing_Mocking_Test()
{
//Mock object setup
MockObject mockery = new MockObject();
IHelper myMock = (IHelper)mockery.createMockObject<IHelper>();
mockery.On(myMock).Expect("GetSomeData").WithNoArguments().Return(Anything);
//The actual test
MyClass testClass = new MyClass(myMock);
testClass.LoadData();
//Ensure the mock had all of its expectations met.
mockery.VerifyExpectations();
}
Feel free to comment if you have any questions. (By the way I have no clue if this code all works I just typed it in my browser, I'm mainly illustrating the concepts).
A: As far as I know, you should create an interface or a base abstract class for the Helper object. With Rhino Mocks you can then return the value you want.
Alternatively, you can add an overload for LoadData that accepts as parameters the data that you normally retrieve from the Helper object. This might even be easier.
A: You might want to look into Typemock Isolator, which can "fake" method calls without forcing you to refactor your code.
I am a dev in that company, but the solution is viable if you would want to choose not to change your design (or forced not to change it for testability)
it's at www.Typemock.com
Roy
blog: ISerializable.com
A: You have a problem there. I don't know if that's a simplified scenario of your code, but if the Helper class is used that way, then your code is not testable. First, the Helper class is used directly, so you can't replace it with a mock. Second, you're calling a static method. I don't know about C#, but in Java you can't override static methods.
You'll have to do some refactoring to be able to inject a mock object with a dummy GetSomeData() method.
In this simplified version of your code is difficult to give you a straight answer. You have some options:
*
*Create an interface for the Helper class and provide a way for the client to inject the Helper implementation to the MyClass class. But if Helper is just really a utility class it doesn't make much sense.
*Create a protected method in MyClass called getSomeData and make it only call Helper.GetSomeData. Then replace the call to Helper.GetSomeData in LoadData with a call to getSomeData. Now you can mock the getSomeData method to return the dummy value.
Beware of simply creating an interface to the Helper class and injecting it via a method. This can expose implementation details. Why should a client provide an implementation of a utility class just to call a simple operation? This will increase the complexity of MyClass clients.
A: Yes, a mocking framework is exactly what you're looking for. You can record / arrange how you want certain mocked out / stubbed classes to return.
Rhino Mocks, Typemock, and Moq are all good options for doing this.
Steven Walther's post on using Rhino Mocks helped me a lot when I first started playing with Rhino Mocks.
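For a concrete flavour, here is a minimal sketch using Moq (my own assumption of framework; the question does not mandate it) against the refactored MyClass(IHelper) constructor shown above. The "known data" value and the Debug.Assert check are placeholders for whatever assertion style your test framework uses:
using Moq;

public class MyClassTests
{
    public void LoadData_UsesHelperResult()
    {
        // Arrange: stub the helper to return a known value.
        var helper = new Mock<IHelper>();
        helper.Setup(h => h.GetSomeData()).Returns("known data");

        // Act: inject the mock and run the method under test.
        var myClass = new MyClass(helper.Object);
        myClass.LoadData();

        // Assert: the stored result came from the stubbed call, made exactly once.
        System.Diagnostics.Debug.Assert((string)myClass.SomeProperty == "known data");
        helper.Verify(h => h.GetSomeData(), Times.Once());
    }
}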
A: I would try something like this:
public class MyClass(){
    public void LoadData(IHelper helper){
        SomeProperty = helper.GetSomeData();
    }
    public object SomeProperty {get;set;}
}
This way you can mock the helper class using, for example, Moq.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Compiling on multiple hosts Say that you're developing code which needs to compile and run on multiple hosts (say Linux and Windows), how would you go about doing that in the most efficient manner given that:
*
*You have full access to hardware for each host you're compiling for (in my case a Linux host and a Windows host standing on my desk)
*Building over a network drive is too expensive
*No commits to a central repository should be required -- assume that there is a CI engine which tries to build as soon as anything is checked in
"Efficient" means keeping the compile-edit-run cycle as short and simple as possible.
A: The best thing I can recommend is an awesome cross platform project called 'BuildBot'.
BuildBot can automatically cause a build to occur on every platform you support, every time you check a new revision into your source control system. Have it build on OSX, Linux (ubuntu), Linux (debian), Linux (Redhat), Vista, Windows XP, etc, and have emails sent or whatever you prefer when a build fails.
As part of the build process, you can publish binaries if the tests pass. Useful for 'nightly' or 'bleeding edge' builds.
Here are some URLs:
*
*Buildbot.net home page
*Python.org's buildbot
A: We find that Hudson is a great CI server that can perform builds from source control as needed. As it is written in Java it can run on your target platform of choice and as the interface is web based you can control it from anywhere. There are plugins to do most things you want to do and best of all it is free!
A: Most of the build servers mentioned in the other answers check out your changes from a version control system. Given your "No commits to a central repository should be required" requirement, I'd suggest that you try Jetbrains TeamCity CI server.
It has plugins for Visual Studio and Eclipse and allows you to request a "private build", sending your changes straight to the build server. For each project you can define a number of build configurations with different requirements (OS is one of the possible requirements). If the builds succeed, the plugin will prompt you to commit your changes.
The free version supports 3 agents and you can buy more if needed.
It looks like Pulse also has the same feature, but I have no first hand experience with it.
A: Pick one machine as your development box.
Set up the other one to automatically update from your source control on a regular basis (hourly/daily/whatever). Any build/test failures should send you some sort of warning message (email, IM, whatever). Your non-dev box is still building locally since it has its own copy of the tree.
Before doing a real release, you still want to do human testing of course. But this keeps life sane the rest of the time.
A: Building a simple setup for such a task is straightforward.
I suggest using Cygwin on the Windows platform. This way you can write completely portable software/scripts for both Linux and Windows. It's not clear from your post which stage of the project you are at, but assuming you are only starting, I suggest using make to build your software. You can use cron to schedule the frequency of your check-out/build cycle. You can even send an email with the build log if the build is broken.
There are a number of ready-made daily build tools, both commercial and open source; you can Google for them, or maybe somebody will add suggestions here.
We are using a home-grown tool for that task, so I cannot suggest anything ready-made.
OK, I missed the point that you don't want to use a source control system (which is strange, but you are the boss :) ); in that case just replace the check-out with rsync and everything else stays the same.
A: Use http://ccache.samba.org to speed up compiles where only a few files have changed in a larger project,
and when large changes have been made, leverage http://en.opensuse.org/Icecream at the same time for shared distributed compiling.
That should probably quicken your compile-edit-run-cycle significantly.
A: Since you are using CI I assume you have already set up a build process properly. What we are doing is that we are using windows boxes as dev machines and CI is running on Solaris. This assures that the code compiles well on multiple platforms. The code is in Java and we don't use any native libraries so it is quite guaranteed that the code will work.
We are using Bamboo at work - it is great but not free:-)
For my private projects I have been using Continuum, but Hudson looks neat (I'll give it a try) - thanks Peter.
A: One option would be Cascade, which allows you to test your changes on all your platforms before, rather than after, commit, by "checkpointing" them on the server.
A: One word: Cruise (not Cruise Control) is very nice.
You can get two agents for free and run one agent per platform. It takes literally minutes to set up on Mac and PC, and isn't too bad on Linux from what I hear.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to prevent the ObjectDisposedException in C# when drawing and application exits I'm a CompSci student, fairly new to C#, and I was doing a "Josephus Problem" program for a class. I created an Exit button that calls Application.Exit() to exit at any time, but if C# is still painting when the button is pressed, it throws an ObjectDisposedException for the Graphics object. Is there any way to prevent this? I was thinking of a try{}catch or changing a boolean to tell the painting process to stop before exiting, but I want to know if there's another solution.
A: You should be calling the Close() method of the Form that contains the button in order to close down the form in an orderly manner. Closing the main form will cause the application to exit for you anyway.
A: It shouldn't be possible for this to happen. If the button is created on the same thread as the window, they share a message pump and the Paint handler cannot be interrupted to handle the exit button. The message that the button has been clicked will be queued up on the thread's message queue until the Paint handler returns.
Generally, you should defer painting to the Paint handler (or override OnPaint) and everywhere else that you need to update the screen, call the control's Invalidate method. That tells Windows that an area needs repainting and, once all other messages have been dealt with, it will generate a WM_PAINT message which ultimately will call OnPaint, which in turn will fire the Paint event.
If animating, use a System.Windows.Forms.Timer to trigger each frame, rather than using a thread. System.Threading.Timer callbacks execute on the threadpool, so they're always on the wrong thread for manipulating the UI.
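A minimal sketch of that pattern (my own illustration, not code from the question): all drawing lives in OnPaint, a WinForms Timer drives the animation, and repaints are requested with Invalidate(). The ellipse drawing is just a placeholder for the Josephus visualization:
using System;
using System.Drawing;
using System.Windows.Forms;

public class JosephusForm : Form
{
    private readonly Timer animationTimer = new Timer();
    private int frame;

    public JosephusForm()
    {
        // Reduce flicker and repaint automatically when resized.
        DoubleBuffered = true;
        ResizeRedraw = true;

        // Drive the animation from a UI-thread timer, not a background thread.
        animationTimer.Interval = 50;
        animationTimer.Tick += (s, e) => { frame++; Invalidate(); };
        animationTimer.Start();
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        // All drawing happens here, using the Graphics supplied by the event args.
        e.Graphics.DrawEllipse(Pens.Black, 10 + frame % 50, 10, 40, 40);
    }
}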
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How does a program ask for administrator privileges? I am developing an application using VB.NET. For performing some tasks the application needs administrator privileges on the machine. How do I ask for the privileges during the execution of the program?
What is the general method of switching user accounts for executing an application? In other words, is there some way for an application to run under an arbitrary user account?
A: You can edit the UAC Settings (in VB 2008) which is located in the Project Settings. Look for the line that says
<requestedExecutionLevel level="asInvoker" uiAccess="false" />
Change level="asInvoker" to
*
*level="asInvoker" (same access token as the parent process)
*level="requireAdministrator (require full administrator)
*level="highestAvailable" (highest privileges available to the current user)
A: There are several articles on the Internet about developing elevated processes in Vista, but essentially elevation requests involve decorating .NET assemblies and WIN32 executables with elevation status in the application manifest file (may be embedded or side-by-side).
There is an excellent blog post about your question which provides the code you'll probably need:
.NET Wrapper for COM Elevation
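As an additional, purely illustrative sketch (not from either answer above): a common lightweight alternative is to detect elevation with WindowsPrincipal and, if needed, relaunch the same executable with the "runas" verb so UAC prompts the user. The C# below translates readily to VB.NET; the exePath argument is whatever executable you want elevated:
using System;
using System.Diagnostics;
using System.Security.Principal;

static class Elevation
{
    public static bool IsAdministrator()
    {
        // Check whether the current process token is in the Administrators role.
        var identity = WindowsIdentity.GetCurrent();
        var principal = new WindowsPrincipal(identity);
        return principal.IsInRole(WindowsBuiltInRole.Administrator);
    }

    public static void RelaunchElevated(string exePath)
    {
        // The "runas" verb triggers the UAC consent prompt for the new process.
        var psi = new ProcessStartInfo(exePath)
        {
            UseShellExecute = true,
            Verb = "runas"
        };
        Process.Start(psi);
        Environment.Exit(0);
    }
}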
A: I have not done this yet but I believe you go to (in VS 2008) Project Settings -> Application Tab and click on the "View UAC Settings" button. This opens up your app.manifest file. In there is a tag which I think holds the options you're looking for. Mine has some options commented out which should get you started:
A: In VS 2015: Go to Project -> (name of project) Properties... -> Application -> View Windows Settings, find the requestedExecutionLevel element in app.manifest (around line 19), and change asInvoker to:
*
*"asInvoker" (same access token as the parent process)
*"requireAdministrator (require full administrator)
*"highestAvailable" (highest privileges available to the current user)
A: Project>Properties>Application>View Windows Settings: Replace <requestedExecutionLevel level="asInvoker" uiAccess="false" /> With <requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: ADODB interop issue We have a project PrjDb.dll in VB 6.0 that has a reference to ADO 2.5. The project is built on machine A. Now when we generate an interop for PrjDb.dll on another machine B, we end up with a new ADODB.dll with 2.5 in the version field instead of it being linked against the Primary Interop Assembly adodb.dll (found under 'Program Files\Microsoft.NET\Primary Interop Assemblies'). The problem is that when I deploy my application, it now asks for this newly generated adodb.dll, and I don't want to ship it.
Even if I provide the adodb.dll path on the command line, it still generates the new interop for ADODB. I tried using the /strict switch, but then it says it cannot resolve references using the AdoDB.dll that I want it to use.
This doesn't happen if we generate the interop on the same machine where we built PrjDb.dll; rather, on any machine other than machine B, it automatically picks up the PIA for ADODB.
Any idea whats going on machine B when we generate the interop for PrjDb.dll?
A: Can you not use ADO.Net instead since you are already on .Net? That's one solution to various ADODB interop errors I found. Feel free to clarify so we can help you get a "real answer".
(http://bytes.com/forum/thread470736.html)
(from google search: adodb interop .net)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do I create a thread dump of a Java Web Start application Is it possible to get a thread dump of a Java Web Start application? And if so, how?
It would be nice if there were a simple solution, which would enable a non-developer (customer) to create a thread dump. Alternatively, is it possible to create a thread dump programmatically?
In the Java Web Start Console I can get a list of threads by pressing 't' but stacktraces are not included.
If answers require certain java versions, please say so.
A: In the console, press V rather than T:
t: dump thread list
v: dump thread stack
This works under JDK6. Don't know about others.
Alternatively, under JDK5 (and possibly earlier) you can send a full stack trace of all threads to standard out:
Under Windows: type ctrl-break in the Java console.
Under Unix: kill -3 <java_process_id>
(e.g. kill -3 5555). This will NOT kill your app.
One other thing: As others say, you can get the stacks programatically via the Thread class but watch out for Thread.getAllStackTraces() prior to JDK6 as there's a memory leak.
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6434648
Regards,
scotty
A: Try
StackTraceElement[] stack = Thread.currentThread().getStackTrace();
Then you can iterate over the collection to show the top x stack elements you're interested in.
A: Recent JDKs (sadly not JREs) include tools like jstack which does such things. JVMs from version 5 include JMX extensions to get thread dumps, memory statistics, and much more. All java applications, including web start applications, have this functionality available.
You would either need to have the JDK installed or to write a JMX client that does the same thing. Take a look at http://java.sun.com/javase/6/docs/technotes/guides/management/ to get more information.
A: Since 1.5 you can use Thread.getAllStackTraces() to get a Map to iterate over.
The ideal output would be that produced from Ctrl-\ (or Ctrl-Break or similar), but there doesn't seem to be a documented way of producing this. If you are willing to limit yourself to sun's JVM (or use reflection I suppose) you could have a dig around the sun.* packages and see if anything interesting shows up.
A: Since Java 5 you have the getStackTrace() method of Thread class. For prior versions you can do:
Thread.currentThread().dumpStack();
This will print the stack trace to System.out
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Problem with TVN_SELCHANGED on CTreeCtrl object I have a tree control object created using the CTreeCtrl MFC class. The tree control needs to support renaming.
When I left-click on any item in the tree, the TVN_SELCHANGED event is fired, from which I can get the selected item of the tree as below:
HTREEITEM h = m_moveListTree.GetSelectedItem();
CString s = m_moveListTree.GetItemText(h);
However, when I right-click on any item in the tree I do not get a TVN_SELCHANGED event, and hence the selected item still remains the one from the left-click. This causes the following problem:
1) User left-clicks on item A
2) User right-clicks on item B and chooses rename
3) Since the selected item is still A, the rename is applied to item A.
Please help in solving this problem.
-Praveen
A: This behaviour is by design -- right-clicking doesn't move the selection.
For what you want, turn on the TVS_EDITLABELS style on the tree view. Then you handle the TVN_BEGINLABELEDIT and TVN_ENDLABELEDIT notifications.
A: I created my own MFC-like, home-grown C++ GUI library on top of the Win32 API; looking at my code, this is how it handles that situation:
LRESULT xTreeCtrl::onRightClick(NMHDR *)
{
xPoint pt;
//-- get the cursor at the time the message was posted
DWORD dwPos = ::GetMessagePos();
pt.x = GET_X_LPARAM(dwPos);
pt.y = GET_Y_LPARAM (dwPos);
//-- now convert to window co-ordinates
pt.toWindow(this);
//-- check for a hit
HTREEITEM hItem = this->hitTest(pt);
//-- select any item that was hit
if ((int)hItem != -1) this->select(hItem);
//-- leave the rest to default processing
return 0;
}
I suspect if you do something similar in the MFC right click or right button down events that will fix the problem.
NOTE: The onRightClick code above is nothing more than the handler for the WM_NOTIFY, NM_RCLICK message.
A: Not sure how you popup the context menu, but you can use HitTest() to get from a point to a tree item. So you might use this in your right click handler.
Don't forget that the context menu can also be activated by a key on reasonable modern keyboards. Then you probably want to use the selected item as target.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to create and use resources in .NET How do I create a resource that I can reference and use in various parts of my program easily?
My specific problem is that I have a NotifyIcon that I want to change the icon of depending on the state of the program. A common problem, but one I've been struggling with for a long time.
A: The above method works well.
Another method (I am assuming web here) is to create your page. Add controls to the page. Then while in design mode go to: Tools > Generate Local Resource. A resource file will automatically appear in the solution with all the controls in the page mapped in the resource file.
To create resources for other languages, append the culture code (e.g. en-US) to the end of the file name, before the extension (Account.aspx.en-US.resx, Account.aspx.es-ES.resx...etc).
To retrieve specific entries in the code-behind, simply call this method: GetLocalResourceObject([resource entry key/name]).
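For instance, a minimal illustrative call from a page's code-behind might look like the sketch below; the page name and the "PageTitle" resource key are hypothetical examples, not part of the original answer:
using System;
using System.Web.UI;

public partial class Account : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Retrieves the matching entry for the current UI culture from
        // Account.aspx.resx (or Account.aspx.es-ES.resx, etc.).
        string title = (string)GetLocalResourceObject("PageTitle");
        Title = title;
    }
}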
A: Well, after searching around and cobbling together various points from around StackOverflow (gee, I love this place already), most of the problems were already past this stage. I did manage to work out an answer to my problem though.
How to create a resource:
In my case, I want to create an icon. It's a similar process, no matter what type of data you want to add as a resource though.
*
*Right click the project you want to add a resource to. Do this in the Solution Explorer. Select the "Properties" option from the list.
*Click the "Resources" tab.
*The first button along the top of the bar will let you select the type of resource you want to add. It should start on string. We want to add an icon, so click on it and select "Icons" from the list of options.
*Next, move to the second button, "Add Resource". You can either add a new resource, or if you already have an icon already made, you can add that too. Follow the prompts for whichever option you choose.
*At this point, you can double click the newly added resource to edit it. Note, resources also show up in the Solution Explorer, and double clicking there is just as effective.
How to use a resource:
Great, so we have our new resource and we're itching to have those lovely changing icons... How do we do that? Well, lucky us, C# makes this exceedingly easy.
There is a static class called Properties.Resources that gives you access to all your resources, so my code ended up being as simple as:
paused = !paused;
if (paused)
notifyIcon.Icon = Properties.Resources.RedIcon;
else
notifyIcon.Icon = Properties.Resources.GreenIcon;
Done! Finished! Everything is simple when you know how, isn't it?
A: Code posted by Matthew Scharley has a memory leak:
paused = !paused;
if (paused)
notifyIcon.Icon = Properties.Resources.RedIcon;
else
notifyIcon.Icon = Properties.Resources.GreenIcon;
You should Dispose() notifyIcon.Icon before replacing it, because Properties.Resources.SOME_ICON creates a new Icon each time it is used.
This can be observed in the log, with this code:
Console.WriteLine(Properties.Resources.RedIcon.GetHashCode());
Console.WriteLine(Properties.Resources.RedIcon.GetHashCode());
Console.WriteLine(Properties.Resources.RedIcon.GetHashCode());
You will see 3 different Hash Codes in the log. This means these are different Objects.
So, the simple fix will be:
paused = !paused;
notifyIcon.Icon?.Dispose();
notifyIcon.Icon = paused
    ? Properties.Resources.RedIcon
    : Properties.Resources.GreenIcon;
A: The above didn't actually work for me as I had expected with Visual Studio 2010. It wouldn't let me access Properties.Resources, said it was inaccessible due to permission issues. I ultimately had to change the Persistence settings in the properties of the resource and then I found how to access it via the Resources.Designer.cs file, where it had an automatic getter that let me access the icon, via MyNamespace.Properties.Resources.NameFromAddingTheResource. That returns an object of type Icon, ready to just use.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90697",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "219"
}
|
Q: How does a program ask for administrator privileges? I am working on a small application in VB.NET. The program needs administrator privileges for doing some tasks. Is there a way to ask for administrator privileges during the execution of the program?
What is the general way of changing the user account under which the application is running?
A: You can specify this in your application's manifest file.
Check out this link and this link and this link too.
A: There are a number of methods depending on your needs. Some details are given in the application developer requirements for UAC.
*
*Include a UAC manifest that causes your program to require administrator privileges at startup.
*Use one of the suggested methods for invoking an elevation to run out of process. One of the nicest is to use the COM elevation moniker and CoCreateInstanceAsAdmin to call methods on a COM object running as an administrator. This is possibly tricky to get working in VB.Net. I got it working ok in C++ though
*Another OK method is to isolate the parts of your code that need admin privileges into an application that uses a UAC manifest to require admin privileges. Your main app does not need to run as an admin in that case. When you require admin privileges, you would invoke the external application.
A: Try
Dim procInfo As New ProcessStartInfo()
procInfo.UseShellExecute = True
procInfo.FileName = 'Filename here
procInfo.WorkingDirectory = ""
procInfo.Verb = "runas"
Process.Start(procInfo)
Catch ex As Exception
MsgBox(ex.Message.ToString(), vbCritical)
End Try
A: The easiest way to do this is to click the Project tab -> Add New Item -> XML file -> name it (program name).manifest -> paste the code from this link into it (thanks JDOConal) -> then right-click your project name in the Solution Explorer and hit Properties -> on the first tab, under Manifest, select the .manifest file you created -> build = done!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Keyboard scancodes? GNU/Linux text console, X11 not involved, indeed not even installed. Keyboard is US layout, keymap US default. Kernel version 2.20.x or later.
An application written in C is getting keyboard input in translation mode, i.e. XLATE or UNICODE. When a key is pressed, the application receives the corresponding keystring. As an example, you press F1, the application reads "\033[[A".
Before the kernel sends the keystring to the application, it must know which key is pressed, i.e. it must know its scancode. In the F1 example above, the scancode for the key pressed is 59 or 0x3b.
That's to say even when the keyboard is in translation mode, the scancodes are held somewhere in memory. How can the application access them without switching the keyboard to RAW or MEDIUMRAW mode? A code snippet would help.
A: Chances are that you are issuing the ioctl commands on the wrong file descriptor, check for error codes coming back from ioctl and tcsetattr.
You should be opening the console device, and then issuing your keyboard translation commands on that device. You would have to basically mimic what the X server is doing.
This is a link to the source code on codesearch.google.com.
A: Sure, the code you want to look at is in kbd-1.12.tar.bz2, which is the source bundle for the 'kbd' package. The 'kbd' package provides tools such as 'dumpkeys', 'showkeys' and 'loadkeys', which are useful for looking at the current keyboard mapping, checking what keys emit what scancodes, and loading a new mapping.
You will have to communicate with the kernel via ioctls, and it's quite complicated, so I recommend reading the source of that package to see how it's done.
Here's a link to the tarball: kbd-1.12.tar.bz2 (618K).
A: At a terminal I entered
dumpkeys -f > test.txt
and there was a great deal of detailed information, including:
keycode 29 = Control
...
string F1 = "\033[[A"
string F2 = "\033[[B"
string F3 = "\033[[C"
string F4 = "\033[[D"
string F5 = "\033[[E"
string F6 = "\033[17~"
string F7 = "\033[18~"
string F8 = "\033[19~"
...
string Prior = "\033[5~"
string Next = "\033[6~"
string Macro = "\033[M"
string Pause = "\033[P"
dumpkeys was included by default with my distribution, but you should be able to find it in what jerub posted. I would start by looking at kbd-1.12/src/loadkeys.y.
It looks like the kernel is responsible for holding that data, and can report to those who know how to ask.
A: You maybe want to look at kbdev or evdev (look at your Documentation/input/input.txt file in your kernel source directory for starters.) That would work for console access.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: re-rendering of combobox store in Gwt-Ext I've created a FormPanel, and I'm rendering a couple of ComboBoxes in the panel with a store which is populated via a response handler.
The problem is that if I render the panel again, it renders the combo boxes without the store, even though I'm re-constructing the panel.
I tried to debug to figure out the cause, and surprisingly, even though the store for the ComboBox is null, on calling comboBox.setStore(store) it checks the isRendered property, finds it to be true, and hence doesn't add the new store but just keeps the existing store, which is still null.
I've seen this problem in another scenario where I had created a collapsible field set containing the ComboBox. On minimize and maximize of the field set, the store vanishes for the same reasons.
Can someone please help me here? I'm completely stuck; I tried various options but nothing works.
A: Thanks for your comments. I actually tried the plugin approach but couldn't completely understand how I would get a handle to the store, which is not an exposed element of the component.
Anyway, I tried something else. While debugging I found that although I'm creating the component again on click of the show button, the ID passed is the same (which is desired), but somehow for the given ID there is already a previous reference available in Ext.Components.
Hence an easy solution is the following:
Component comp = Ext.getCmp(id);
if ( comp != null )
comp.destroy( );
This actually worked, as the reference which was causing the isRendered() property of the ComboBox to return true is no longer available, and hence I can see the store again properly.
I hope this helps others who are facing a similar issue.
Thanks anyway for replying.
A: Have you tried the doLayout() method of FormPanel?
A: ComboBox.view.setStore() should help.
If view is a private variable, just try to mention it among the ComboBox config params when creating it. If that doesn't help, you can use a plugin like this:
view_plugin = {
init: function(o) {
o.setNewStore = function(newStore) {
this.view.setStore(newStore);
};
}
};
and add a line of
plugins: view_plugin,
to Combobox config.
Then you can call combobox.setNewStore(newStore) later in the code.
A: You need to write:
field = new ComboBox({plugins: view_plugin});
in your case, and define the view_plugin code from above somewhere before it. Or you can even incorporate it inline:
field = new ComboBox({plugins: { /* code of plugin */ }});
Inside the plugin, all private properties and methods are accessible/changeable.
You can also change the store using field.setNewStore(store) at any time later on.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Float/double precision in debug/release modes Do C#/.NET floating point operations differ in precision between debug mode and release mode?
A: They can indeed be different. According to the CLR ECMA specification:
Storage locations for floating-point numbers (statics, array elements, and fields of classes) are of fixed size. The supported storage sizes are float32 and float64. Everywhere else (on the evaluation stack, as arguments, as return types, and as local variables) floating-point numbers are represented using an internal floating-point type. In each such instance, the nominal type of the variable or expression is either R4 or R8, but its value can be represented internally with additional range and/or precision. The size of the internal floating-point representation is implementation-dependent, can vary, and shall have precision at least as great as that of the variable or expression being represented. An implicit widening conversion to the internal representation from float32 or float64 is performed when those types are loaded from storage. The internal representation is typically the native size for the hardware, or as required for efficient implementation of an operation.
What this basically means is that the following comparison may or may not be equal:
class Foo
{
double _v = ...;
void Bar()
{
double v = _v;
if( v == _v )
{
// Code may or may not execute here.
// _v is 64-bit.
// v could be either 64-bit (debug) or 80-bit (release) or something else (future?).
}
}
}
Take-home message: never check floating values for equality.
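As a quick illustration of the usual alternative (my own sketch, not part of the original answer), a tolerance-based comparison is commonly used instead; the 1e-9 tolerance below is an arbitrary assumption that should be tuned to the magnitudes involved:
using System;

static class FloatCompare
{
    // Returns true when a and b differ by less than a tolerance that scales
    // with their magnitude (absolute near zero, relative otherwise).
    public static bool NearlyEqual(double a, double b, double tolerance = 1e-9)
    {
        double diff = Math.Abs(a - b);
        double scale = Math.Max(Math.Abs(a), Math.Abs(b));
        return diff <= tolerance * Math.Max(scale, 1.0);
    }
}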
A: In fact, they may differ if debug mode uses the x87 FPU and release mode uses SSE for float-ops.
A: Here's a simple example where results not only differ between debug and release mode, but the way by which they do so depend on whether one uses x86 or x84 as a platform:
Single f1 = 0.00000000002f;
Single f2 = 1 / f1;
Double d = f2;
Console.WriteLine(d);
This writes the following results:
Debug Release
x86 49999998976 50000000199,7901
x64 49999998976 49999998976
A quick look at the disassembly (Debug -> Windows -> Disassembly in Visual Studio) gives some hints about what's going on here. For the x86 case:
Debug Release
mov dword ptr [ebp-40h],2DAFEBFFh | mov dword ptr [ebp-4],2DAFEBFFh
fld dword ptr [ebp-40h] | fld dword ptr [ebp-4]
fld1 | fld1
fdivrp st(1),st | fdivrp st(1),st
fstp dword ptr [ebp-44h] |
fld dword ptr [ebp-44h] |
fstp qword ptr [ebp-4Ch] |
fld qword ptr [ebp-4Ch] |
sub esp,8 | sub esp,8
fstp qword ptr [esp] | fstp qword ptr [esp]
call 6B9783BC | call 6B9783BC
In particular, we see that a bunch of seemingly redundant "store the value from the floating point register in memory, then immediately load it back from memory into the floating point register" have been optimized away in release mode. However, the two instructions
fstp dword ptr [ebp-44h]
fld dword ptr [ebp-44h]
are enough to change the value in the x87 register from +5.0000000199790138e+0010 to +4.9999998976000000e+0010 as one may verify by stepping through the disassembly and investigating the values of the relevant registers (Debug -> Windows -> Registers, then right click and check "Floating point").
The story for x64 is wildly different. We still see the same optimization removing a few instructions, but this time around, everything relies on SSE with its 128-bit registers and dedicated instruction set:
Debug Release
vmovss xmm0,dword ptr [7FF7D0E104F8h] | vmovss xmm0,dword ptr [7FF7D0E304C8h]
vmovss dword ptr [rbp+34h],xmm0 | vmovss dword ptr [rbp-4],xmm0
vmovss xmm0,dword ptr [7FF7D0E104FCh] | vmovss xmm0,dword ptr [7FF7D0E304CCh]
vdivss xmm0,xmm0,dword ptr [rbp+34h] | vdivss xmm0,xmm0,dword ptr [rbp-4]
vmovss dword ptr [rbp+30h],xmm0 |
vcvtss2sd xmm0,xmm0,dword ptr [rbp+30h] | vcvtss2sd xmm0,xmm0,xmm0
vmovsd qword ptr [rbp+28h],xmm0 |
vmovsd xmm0,qword ptr [rbp+28h] |
call 00007FF81C9343F0 | call 00007FF81C9343F0
Here, because the SSE unit avoids using higher precision than single precision internally (while the x87 unit does), we end up with the "single precision-ish" result of the x86 case regardless of optimizations. Indeed, one finds (after enabling the SSE registers in the Visual Studio Registers overview) that after vdivss, XMM0 contains 0000000000000000-00000000513A43B7 which is exactly the 49999998976 from before.
Both of the discrepancies bit me in practice. Besides illustrating that one should never compare equality of floating points, the example also shows that there's still room for assembly debugging in a high-level language such as C#, the moment floating points show up.
A: This is an interesting question, so I did a bit of experimentation. I used this code:
static void Main (string [] args)
{
float
a = float.MaxValue / 3.0f,
b = a * a;
if (a * a < b)
{
Console.WriteLine ("Less");
}
else
{
Console.WriteLine ("GreaterEqual");
}
}
using DevStudio 2005 and .Net 2. I compiled as both debug and release and examined the output of the compiler:
Release Debug
static void Main (string [] args) static void Main (string [] args)
{ {
00000000 push ebp
00000001 mov ebp,esp
00000003 push edi
00000004 push esi
00000005 push ebx
00000006 sub esp,3Ch
00000009 xor eax,eax
0000000b mov dword ptr [ebp-10h],eax
0000000e xor eax,eax
00000010 mov dword ptr [ebp-1Ch],eax
00000013 mov dword ptr [ebp-3Ch],ecx
00000016 cmp dword ptr ds:[00A2853Ch],0
0000001d je 00000024
0000001f call 793B716F
00000024 fldz
00000026 fstp dword ptr [ebp-40h]
00000029 fldz
0000002b fstp dword ptr [ebp-44h]
0000002e xor esi,esi
00000030 nop
float float
a = float.MaxValue / 3.0f, a = float.MaxValue / 3.0f,
00000000 sub esp,0Ch 00000031 mov dword ptr [ebp-40h],7EAAAAAAh
00000003 mov dword ptr [esp],ecx
00000006 cmp dword ptr ds:[00A2853Ch],0
0000000d je 00000014
0000000f call 793B716F
00000014 fldz
00000016 fstp dword ptr [esp+4]
0000001a fldz
0000001c fstp dword ptr [esp+8]
00000020 mov dword ptr [esp+4],7EAAAAAAh
b = a * a; b = a * a;
00000028 fld dword ptr [esp+4] 00000038 fld dword ptr [ebp-40h]
0000002c fmul st,st(0) 0000003b fmul st,st(0)
0000002e fstp dword ptr [esp+8] 0000003d fstp dword ptr [ebp-44h]
if (a * a < b) if (a * a < b)
00000032 fld dword ptr [esp+4] 00000040 fld dword ptr [ebp-40h]
00000036 fmul st,st(0) 00000043 fmul st,st(0)
00000038 fld dword ptr [esp+8] 00000045 fld dword ptr [ebp-44h]
0000003c fcomip st,st(1) 00000048 fcomip st,st(1)
0000003e fstp st(0) 0000004a fstp st(0)
00000040 jp 00000054 0000004c jp 00000052
00000042 jbe 00000054 0000004e ja 00000056
00000050 jmp 00000052
00000052 xor eax,eax
00000054 jmp 0000005B
00000056 mov eax,1
0000005b test eax,eax
0000005d sete al
00000060 movzx eax,al
00000063 mov esi,eax
00000065 test esi,esi
00000067 jne 0000007A
{ {
Console.WriteLine ("Less"); 00000069 nop
00000044 mov ecx,dword ptr ds:[0239307Ch] Console.WriteLine ("Less");
0000004a call 78678B7C 0000006a mov ecx,dword ptr ds:[0239307Ch]
0000004f nop 00000070 call 78678B7C
00000050 add esp,0Ch 00000075 nop
00000053 ret }
} 00000076 nop
else 00000077 nop
{ 00000078 jmp 00000088
Console.WriteLine ("GreaterEqual"); else
00000054 mov ecx,dword ptr ds:[02393080h] {
0000005a call 78678B7C 0000007a nop
} Console.WriteLine ("GreaterEqual");
} 0000007b mov ecx,dword ptr ds:[02393080h]
00000081 call 78678B7C
00000086 nop
}
What the above shows is that the floating point code is the same for both debug and release; the compiler is choosing consistency over optimisation. Although the program produces the wrong result (a * a is not less than b), it is the same regardless of the debug/release mode.
Now, the Intel IA32 FPU has eight floating point registers; you would think that the compiler would use the registers to store values when optimising rather than writing to memory, thus improving the performance, something along the lines of:
fld dword ptr [a] ; precomputed value stored in ram == float.MaxValue / 3.0f
fmul st,st(0) ; b = a * a
; no store to ram, keep b in FPU
fld dword ptr [a]
fmul st,st(0)
fcomi st,st(0) ; a*a compared to b
but this would execute differently to the debug version (in this case, display the correct result). However, changing the behaviour of the program depending on the build options is a very bad thing.
FPU code is one area where hand crafting the code can significantly out-perform the compiler, but you do need to get your head around the way the FPU works.
A: In response to Frank Krueger's request above (in comments) for a demonstration of a difference:
Compile this code in gcc with no optimizations and -mfpmath=387 (I have no reason to think it wouldn't work on other compilers, but I haven't tried it.)
Then compile it with no optimizations and -msse -mfpmath=sse.
The output will differ.
#include <stdio.h>
#include <math.h>
int main()
{
float e = 0.000000001;
float f[3] = {33810340466158.90625,276553805316035.1875,10413022032824338432.0};
f[0] = pow(f[0],2-e); f[1] = pow(f[1],2+e); f[2] = pow(f[2],-2-e);
printf("%s\n",f);
return 0;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: How do I get a list of the active IP-addresses, MAC-addresses and NetBIOS names on the LAN? How do I get a list of the active IP-addresses, MAC-addresses and NetBIOS names on the LAN?
I'd like to get NetBIOS name, IP and MAC addresses for every host on the LAN, preferably not having to walk to every single PC and take note of the stuff myself.
How to do that with Windows Script Host/PowerShell/whatever?
A: arp -a
That gets everything the current machine knows about on the network.
(I'm putting this up there as a second option, since nmap isn't universally installed).
A: If you're using DHCP then the server will give you a list of all that information.
This website has a good tutorial on using powershell to get networking information http://www.powershellpro.com/powershell-tutorial-introduction/powershell-scripting-with-wmi/
If you need to get a quick list of computer names you can use "net view". Also have a look at nbmac, although I'm unsure of its working status under XP. Another option could be to use nbtstat -a (once you've used net view to list workstations).
A: As Daren Thomas said, use nmap.
nmap -sP 192.168.1.1/24
to scan the network 192.168.1.*
nmap -O 192.168.1.1/24
to get the operating system of the user. For more information, read the manpage
man nmap
regards
A: In PowerShell you can do something like:
$computers = "server1","server2","server3"
Get-WmiObject Win32_NetworkAdapterConfiguration -computer $computers -filter "IPEnabled ='true'" | select __Server,IPAddress,MACAddress
A: In PowerShell:
function Explore-Net($subnet, [int[]]$range){
$range | % { test-connection "$subnet.$_" -count 1 -erroraction silentlycontinue} | select -Property address | % {[net.dns]::gethostbyaddress($_.address)}
}
Example:
Explore-Net 192.168.2 @(3..10)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Fastest way to determine image resolution and file type in PHP or Unix command line? I'm currently using ImageMagick to determine the size of images uploaded to the website. By calling ImageMagick's "identify" on the command line it takes about 0.42 seconds to determine a 1MB JPEG's dimensions along with the fact that it's a JPEG. I find that a bit slow.
Using the Imagick PHP library is even slower, as it attempts to load the whole 1MB in memory before doing any treatment to the image (in this case, simply determining its size and type).
Are there any solutions to speed up this process of determining which file type and which dimensions an arbitrary image file has? I can live with it only supporting JPEG and PNG. It's important to me that the file type is determined by looking at the file's headers and not simply the extension.
Edit: The solution can be a command-line tool UNIX called by PHP, much like the way I'm using ImageMagick at the moment
A: If you're using PHP with GD support, you can try getimagesize().
A: Have you tried
identify -ping filename.png
?
A: Sorry I can't add this as a comment to a previous answer but I don't have the rep. Doing some quick and dirty testing I also found that exec("identify -ping... is about 20 times faster than without the -ping. But getimagesize() appears to be about 200 times faster still.
So I would say getimagesize() is the faster method. I only tested on jpg and not on png.
the test is just
$files = array('2819547919_db7466149b_o_d.jpg', 'GP1-green2.jpg', 'aegeri-lake-switzerland.JPG');
foreach($files as $file){
$size2 = array();
$size3 = array();
$time1 = microtime();
$size = getimagesize($file);
$time1 = microtime() - $time1;
print "$time1 \n";
$time2 = microtime();
exec("identify -ping $file", $size2);
$time2 = microtime() - $time2;
print $time2/$time1 . "\n";
$time2 = microtime();
exec("identify $file", $size3);
$time2 = microtime() - $time2;
print $time2/$time1 . "\n";
print_r($size);
print_r($size2);
print_r($size3);
}
A:
It's important to me that the file type is determined by looking at the file's headers and not simply the extension.
For that you can use the 'file' Unix command (or some PHP function that implements the same functionality).
/tmp$ file stackoverflow-logo-250.png
stackoverflow-logo-250.png: PNG image data, 250 x 70, 8-bit colormap, non-interlaced
A: Actually, to use getimagesize(), you do NOT need to have GD compiled in.
You can also use mime_content_type() to get the MIME type.
A: exif_imagetype() is faster than getimagesize().
$filename = "somefile";
$data = exif_imagetype($filename);
echo "<PRE>";
print_r($data);
echo "</PRE>";
output:
Array (
[FileName] => somefile
[FileDateTime] => 1234895396
[FileSize] => 15427
[FileType] => 2
[MimeType] => image/jpeg
[SectionsFound] =>
[COMPUTED] => Array
(
[html] => width="229" height="300"
[Height] => 300
[Width] => 229
[IsColor] => 1
)
)
A: If you're using PHP I'd suggest using the Imagick library rather than calling exec(). The feature you're looking for is Imagick::pingImage().
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: SharePoint Event when Permissions of ListItems have been changed? I need to fire an event (or start a workflow) when the permissions of a List-Element (ListItem) have been changed. "ItemUpdating" / "ItemUpdated" won't fire (since the ListItem itself is not updated, I suppose), so how can it be done?
A: I'm afraid that is not possible.
Maybe you can take another approach: build an alternate way for users to change the permissions of an item. When the user applies the permissions (using the UI you've built), you can trigger an event, or start a workflow.
Going further, you could replace the default "Manage permissions" option in the ECB with a link to your custom permissions management UI. More information: http://www.helloitsliam.com/archive/2007/08/10/moss2007-%E2%80%93-item-level-menus-investigation.aspx
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How do you load an embedded icon from an exe file with PyWin32? I have an exe file generated with py2exe. In the setup.py I specify an icon to be embedded in the exe:
windows=[{'script': 'my_script.py','icon_resources': [(0, 'my_icon.ico')], ...
I tried loading the icon using:
hinst = win32api.GetModuleHandle(None)
hicon = win32gui.LoadImage(hinst, 0, win32con.IMAGE_ICON, 0, 0, win32con.LR_DEFAULTSIZE)
But this produces an (very unspecific) error:
pywintypes.error: (0, 'LoadImage', 'No error message is available')
If I try specifying 0 as a string
hicon = win32gui.LoadImage(hinst, '0', win32con.IMAGE_ICON, 0, 0, win32con.LR_DEFAULTSIZE)
then I get the error:
pywintypes.error: (1813, 'LoadImage', 'The specified resource type cannot be found in the image file.')
So, what's the correct method/syntax to load the icon?
Also please notice that I don't use any GUI toolkit - just the Windows API via PyWin32.
A: @efotinis: You're right.
Here is a workaround, until py2exe gets fixed, if you don't want to include the same icon twice:
hicon = win32gui.CreateIconFromResource(win32api.LoadResource(None, win32con.RT_ICON, 1), True)
Be aware that 1 is not the ID you gave the icon in setup.py (which is the icon group ID), but the resource ID automatically assigned by py2exe to each icon in each icon group. At least that's how I understand it.
If you want to create an icon with a specified size (as CreateIconFromResource uses the system default icon size), you need to use CreateIconFromResourceEx, which isn't available via PyWin32:
icon_res = win32api.LoadResource(None, win32con.RT_ICON, 1)
hicon = ctypes.windll.user32.CreateIconFromResourceEx(icon_res, len(icon_res), True,
0x00030000, 16, 16, win32con.LR_DEFAULTCOLOR)
A: If you're using wxPython, you can use the following simple code:
wx.Icon(sys.argv[0], wx.BITMAP_TYPE_ICO)
I usually have code that checks whether it's running from an EXE or not, and acts accordingly:
def get_app_icon():
if hasattr(sys, "frozen") and getattr(sys, "frozen") == "windows_exe":
return wx.Icon(sys.argv[0], wx.BITMAP_TYPE_ICO)
else:
return wx.Icon("gfx/myapp.ico", wx.BITMAP_TYPE_ICO)
A: Well, well... I installed py2exe and I think it's a bug. In py2exe_util.c they should init rt_icon_id to 1 instead of 0. The way it is now, it's impossible to load the first format of the first icon using LoadIcon/LoadImage.
I'll notify the developers about this if it's not already a known issue.
A workaround, in the meantime, would be to include the same icon twice in your setup.py:
'icon_resources': [(1, 'my_icon.ico'), (2, 'my_icon.ico')]
You can load the second one, while Windows will use the first one as the shell icon. Remember to use non-zero IDs though. :)
A: You should set the icon ID to something other than 0:
'icon_resources': [(42, 'my_icon.ico')]
Windows resource IDs must be between 1 and 32767.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: How do you place EXIF tags into a JPG, having the raw jpeg buffer in C++? I am having a bit of a problem.
I get a RAW char* buffer from a camera and I need to add these tags before I can save it to disk. Writing the file to disk and reading it back again is not an option, as this will happen thousands of times.
The buffer data I receive from the camera does not contain any EXIF information, apart from the Width, Height and Pixels per Inch.
Any ideas? (C++)
A: Look at this PDF; on page 20 you have a diagram showing you where to place or modify your EXIF information. What is the difference with a file on disk?
Does the JPEG buffer of your camera contain an EXIF section already ?
A: What's the difference? Why would doing it to a file on the disk be any different from doing it in memory?
Just do whatever it is you do after you read the file from the disk..
A: As far as I know, EXIF data in a JPEG is a contiguous subpart of the file.
So
*
*prepare EXIF data in memory
*write part of JPEG file upto EXIF
*write prepared EXIF
*write rest of JPEG file
A: You might want to take a look into Exiv2 library. I know it can work on files but I suppose it also has functions to work on memory buffers.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Simple, free PHP blog engine easy to redesign? I am looking for a PHP blog engine which needs to be easy to redesign (CSS, HTML). It also needs to be free and have a simple user interface so that the client doesn't struggle to add posts. Any suggestions?
A: I kinda like b2evo we used it on our site and modded it to great effect.
A: I hear Chyrp is nice. Textpattern gets some praise too.
A: Wordpress - I keep trying other blogs and I keep going back to wordpress. It's definitely the easiest I've used for customizing templates, and the admin UI is very nice.
A: I have been using FlatPress for over a year and I am not going to change it for anything.
Flat text files, a simple admin panel, a lot of useful plugins, templates, widgets, static pages, RSS2/Atom feeds, categories, and an upload mechanism.
It's easy and super simple. And if you want backups, make a tar. If you want to transfer it, just copy the tar.
http://flatpress.org
A: I have been very impressed with WordPress since I started using it.
I have had a look at the CSS that sits behind and it has a good structure in my view. There are lots of templates and good information on building your own.
I have recently started looking at NetTuts mainly for the Ruby on Rails tutorial but there is lot of good tutorials on extented WordPress at http://nettuts.com/category/working-with-cmss/
A: Well, it's hard not to suggest Wordpress. Redesigning it isn't too terribly difficult, a monkey could use it, the admin interface is simple and easy on the eyes, and it has great community support. I'd also recommend using the Automatic Upgrade plugin with it, so that your customer can always stay up to date (for security reasons).
A: It is not exactly a blog engine but you may find Typolight interesting. It is very easy to use and fairly extensible.
A: Wordpress is definitely the answer here. It's got a large community that can assist, with a lot of available free themes you can use and customize to build your own template.
It is also easy to extend with a wide range of plugins.
There are a lot of Linux hosted servers that come with Wordpress preinstalled already to make it even easier, but the installation of it is simple and straight forward.
A: Only one answer, Wordpress. I have used it only a few times to customise but simply found that it can be done by editing the header and footer files along with the stylesheet.
What can be simpler.
I suggest you just give it a go before you look at others as you could deliberate for ages just to come back to it :)
A: In the blog specific package area I have used: Textpattern, Typolight, Nucleus, Serendipity and Wordpress. Hands down, Wordpress is the easiest for end-users to manage and, frankly, it is one of the easiest to template. The userbase for Wordpress is so large that you can easily find resources to help you out when you get stuck on something.
My only practical complaint about it is the need to set up caching so that it doesn't get bogged down by a Digg/Reddit/etc. overload. However, if you set the caching up, you are good to go and can handle significant traffic.
A: Simple PHP Blog. Very, VERY simple. Very lightweight. Completely customizable. You don't have to worry about using a database! I find it great! http://sourceforge.net/projects/sphpblog/
Go get it!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90802",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: Convert C# 2.0 System.Data.SqlTypes.SqlXml object into a System.Xml.XmlNode I seem to always have problems with converting data to and from XML in C#. It always wants you to create a full XMLDocument object even when you think you shouldn't have to. In this case I have a SQLXML column in a MS SQL 2005 server that I am trying to pull out and push into a function that requires an XMLNode as a parameter. You would think this would be easy, but outside of converting it to a string and creating a new XMLNode object I cannot figure out the right way to do it.
I can use an SqlDataReader, the sqlComm.ExecuteReader() to load the reader, and sqlReader.GetSqlXml(0) to get the SQLXML object,but then how do I convert it to an XmlNode?
Conversely I can use the sqlComm.ExecuteXmlReader() to get an XmlReader, but how do I extract a XmlNode from the reader? http://bytes.com/forum/thread177004.html says it cannot be done with a XmlTextReader, should I use a XmlNodeReader?
Help please!
A: I ended up not having to use it, but I found what I think is the best answer. Basically you load an XmlReader, create an XmlDocument from the reader, then select a list of nodes from the document into an XmlNodeList, which you can use in a foreach statement. Here is some sample code:
System.Xml.XmlReader sqlXMLReader = sqlComm.ExecuteXmlReader();
System.Xml.XmlDocument xmlDoc = new System.Xml.XmlDocument();
xmlDoc.Load(sqlXMLReader);
System.Xml.XmlNodeList xnlJobs = xmlDoc.SelectNodes("/job");
Still convoluted as hell, but at least there are no xml to string to xml conversions.
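For reference, System.Data.SqlTypes.SqlXml also exposes a CreateReader() method, so a sketch along these lines (assuming sqlReader is a SqlDataReader positioned on a row with the XML in column 0) should avoid the string round-trip as well:
System.Data.SqlTypes.SqlXml sqlXml = sqlReader.GetSqlXml(0);
System.Xml.XmlDocument xmlDoc = new System.Xml.XmlDocument();
xmlDoc.Load(sqlXml.CreateReader());               // no XML-to-string-to-XML conversion
System.Xml.XmlNode root = xmlDoc.DocumentElement; // pass this wherever an XmlNode is required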
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Querying Active Directory with "SQL"? I just wonder if anyone knows or made a wrapper around Active Directory to be able to easily query it in .net? Kind of like "LINQ-to-ActiveDirectory" or some SQL Dialect, i.e. to be able to do "SELECT DISTINCT(DEPARTMENT) FROM /Users/SomeOU/AnotherOU" or "SELECT user FROM domain" or whatever.
As far as I know, it is possible to query WMI and IIS in a "SQLesque" way; I just wonder if something similar is possible for Active Directory as well, without having to learn yet another query language (LDAP)?
A:
LINQ to Active Directory implements a
custom LINQ query provider that allows
querying objects in Active Directory.
Internally, queries are translated
into LDAP filters which are sent to
the server using the
System.DirectoryServices .NET
Framework library.
http://www.codeplex.com/LINQtoAD
Sample (from the site):
// NOTE: Entity type definition "User" omitted in sample - see samples in release.
var users = new DirectorySource<User>(ROOT, SearchScope.Subtree);
users.Log = Console.Out;
var res = from usr in users
where usr.FirstName.StartsWith("B") && usr.Office == "2525"
select new { Name = usr.FirstName + " " + usr.LastName, usr.Office, usr.LogonCount };
foreach (var u in res)
{
Console.WriteLine(u);
u.Office = "5252";
u.SetPassword(pwd);
}
users.Update();
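For comparison, the same kind of query can be written directly against System.DirectoryServices without the LINQ provider; a rough sketch (the LDAP path and filter below are made up for illustration):
using (var root = new System.DirectoryServices.DirectoryEntry("LDAP://OU=SomeOU,DC=example,DC=com"))
using (var searcher = new System.DirectoryServices.DirectorySearcher(root, "(&(objectCategory=person)(objectClass=user))"))
{
    searcher.PropertiesToLoad.Add("department");
    foreach (System.DirectoryServices.SearchResult result in searcher.FindAll())
    {
        if (result.Properties.Contains("department"))
            Console.WriteLine(result.Properties["department"][0]);
    }
}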
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90812",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Best Practices & Principles for GUI design What is your best practical user-friendly user-interface design or principle?
Please submit those practices that you find actually makes things really useful - no matter what - if it works for your users, share it!
Summary/Collation
Principles
*
*KISS.
*Be clear and specific in what an option will achieve: for example, use verbs that indicate the action that will follow on a choice (see: Impl. 1).
*Use obvious default actions appropriate to what the user needs/wants to achieve.
*Fit the appearance and behavior of the UI to the environment/process/audience: stand-alone application, web-page, portable, scientific analysis, flash-game, professionals/children, ...
*Reduce the learning curve of a new user.
*Rather than disabling or hiding options, consider giving a helpful message where the user can have alternatives, but only where those alternatives exist. If no alternatives are available, it's better to disable the option - which visually then states that the option is not available. Do not hide the unavailable options; rather explain in a mouse-over popup why they are disabled.
*Stay consistent and conform to practices, and placement of controls, as is implemented in widely-used successful applications.
*Lead the expectations of the user and let your program behave according to those expectations.
*Stick to the vocabulary and knowledge of the user and do not use programmer/implementation terminology.
*Follow basic design principles: contrast (obviousness), repetition (consistency), alignment (appearance), and proximity (grouping).
Implementation
*
*(See answer by paiNie) "Try to use verbs in your dialog boxes."
*Allow/implement undo and redo.
References
*
*Windows Vista User Experience Guidelines [http://msdn.microsoft.com/en-us/library/aa511258.aspx]
*Dutch websites - "Drempelvrij" guidelines [http://www.drempelvrij.nl/richtlijnen]
*Web Content Accessibility Guidelines (WCAG 1.0) [http://www.w3.org/TR/WCAG10/]
*Consistence [http://www.amazon.com/Design-Everyday-Things-Donald-Norman/dp/0385267746]
*Don't make me Think [http://www.amazon.com/Dont-Make-Me-Think-Usability/dp/0321344758/ref=pdbbssr_1?ie=UTF8&s=books&qid=1221726383&sr=8-1]
*Be powerful and simple [http://msdn.microsoft.com/en-us/library/aa511332.aspx]
*Gestalt design laws [http://www.squidoo.com/gestaltlaws]
A: If you're doing anything for the web, or any front-facing software application for that matter, you really owe it to yourself to read...
Don't make me think - Steve Krug
A: Breadcrumbs in webapps:
Tell -> The -> User -> Where -> She -> Is in the system
This is pretty hard to do in "dynamic" systems with multiple paths to the same data, but it often helps navigate the system.
A: I try to adapt to the environment.
When developing a Windows application, I use the Windows Vista User Experience Guidelines, but when I'm developing a web application I use the appropriate guidelines; because I develop Dutch websites I use the "Drempelvrij" guidelines, which are based on the Web Content Accessibility Guidelines (WCAG 1.0) by the World Wide Web Consortium (W3C).
The reason I do this is to reduce the learning curve of a new user.
A: I would recommend getting a good solid understanding of GUI design by reading the book The Design of Everyday Things. Although the main principle is a comment from Joel Spolsky: when the behavior of the application differs from what the user expects to happen, then you have a problem with your graphical user interface.
The best example is, when somebody swaps around the OK and Cancel button on some web sites. The user expects the OK button to be on the left, and the Cancel button to be on the right. So in short, when the application behavior differs to what the user expects what to happen then you have a user interface design problem.
Although, the best advice, in no matter what design or design pattern you follow, is to keep the design and conventions consistent throughout the application.
A: I test my GUI against my grandma.
A: Try to use verbs in your dialog boxes.
It means using buttons labelled with the action they perform (for example "Save" / "Don't Save") instead of a generic "OK" / "Cancel" pair.
A: Avoid asking the user to make choices whenever you can (i.e. don't create a fork with a configuration dialog!)
For every option and every message box, ask yourself: can I instead come up with some reasonable default behavior that
*
*makes sense?
*does not get in the user's way?
*is easy enough to learn that it costs little to the user that I impose this on him?
I can use my Palm handheld as an example: the settings are really minimalistic, and I'm quite happy with that. The basic applications are well designed enough that I can simply use them without feeling the need for tweaking. Ok, there are some things I can't do, and in fact I sort of had to adapt myself to the tool (instead of the opposite), but in the end this really makes my life easier.
This website is another example: you can't configure anything, and yet I find it really nice to use.
Reasonable defaults can be hard to figure out, and simple usability tests can provide a lot of clues to help you with that.
A: Show the interface to a sample of users. Ask them to perform a typical task. Watch for their mistakes. Listen to their comments. Make changes and repeat.
A: Follow basic design principles
*
*Contrast - Make things that are different look different
*Repetition - Repeat the same style in a screen and for other screens
*Alignment - Line screen elements up! Yes, that includes text, images, controls and labels.
*Proximity - Group related elements together. A set of input fields to enter an address should be grouped together and be distinct from the group of input fields to enter credit card info. This is basic Gestalt Design Laws.
A: Never ask "Are you sure?". Just allow unlimited, reliable undo/redo.
A: The Design of Everyday Things - Donald Norman
A canon of design lore and the basis of many HCI courses at universities around the world. You won't design a great GUI in five minutes with a few comments from a web forum, but some principles will get your thinking pointed the right way.
--
MC
A: When constructing error messages make the error message be
the answers to these 3 questions (in that order):
*
*What happened?
*Why did it happen?
*What can be done about it?
This is from "Human Interface Guidelines: The Apple Desktop
Interface" (1987, ISBN 0-201-17753-6), but it can be used
for any error message anywhere.
There is an updated version for Mac OS X.
The Microsoft page
User Interface Messages
says the same thing: "... in the case of an error message,
you should include the issue, the cause, and the user action
to correct the problem."
Also include any information that is known by the program,
not just some fixed string. E.g. for the "Why did it happen" part of the error message use "Raw spectrum file
L:\refDataForMascotParser\TripleEncoding\Q1LCMS190203_01DoubleArg.wiff does not exist" instead of just "File does
not exist".
Contrast this with the infamous error message: "An error
happend.".
A: Try to think about what your user wants to achieve instead of what the requirements are.
The user will enter your system and use it to achieve a goal. When you open up calc you need to make a simple fast calculation 90% of the time so that's why by default it is set to simple mode.
So don't think about what the application must do, but think about the user who will be doing it, probably bored, and try to design based on what his intentions are; try to make his life easier.
A: (Stolen from Joel :o) )
Don't disable/remove options - rather give a helpful message when the user click/select it.
A: As my data structures professor pointed out today: give instructions on how your program works to the average user. We programmers often think we're pretty logical with our programs, but the average user probably won't know what to do.
A: *
*Use discreet/simple animated features to create seamless transitions from one section to the other. This helps the user to create a mental map of navigation/structure.
*Use short (one word if possible) titles on the buttons that describe clearly the essence of the action.
*Use semantic zooming where possible (a good example is how zooming works on Google/Bing maps, where more information is visible when you focus on an area).
*Create at least two ways to navigate: Vertical and horizontal. Vertical when you navigate between different sections and horizontal when you navigate within the contents of the section or subsection.
*Always keep the main options nodes of your structure visible (where the size of the screen and the type of device allows it).
*When you go deep into the structure always keep a visible hint (i.e. such as in the form of a path) indicating where you are.
*Hide elements when you want the user to focus on data (such as reading an article or viewing a project). - however beware of point #5 and #4.
A: Be Powerful and Simple
Oh, and hire a designer / learn design skills. :)
A: With GUIs, standards are kind of platform specific. E.g. While developing GUI in Eclipse this link provides decent guideline.
A: I've read most of the above and one thing that I'm not seeing mentioned:
If users are meant to use the interface ONCE, showing only what they need to use if possible is great.
If the user interface is going to be used repeatedly by the same user, but maybe not very often, disabling controls is better than hiding them: the user interface changing and hidden features not being obvious (or remembered) by an occasional user is frustrating to the user.
If the user interface is going to be used VERY REGULARLY by the same user (and there is not a lot of turnover in the job i.e. not a lot of new users coming online all the time) disabling controls is absolutely helpful and the user will become accustomed to the reasons why things happen but preventing them from using controls accidentally in improper contexts appreciated and prevents errors.
Just my opinion, but it all goes back to understanding your user profile, not just what a single user session might entail.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90813",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "63"
}
|
Q: How to check PDF-A 1b compliance with open source tools? How can i check (and additionally create) PDF-A 1b compliant PDF documents using open source tools? Does anybody know an open source tool?
Thanks in advance...
A: Try with http://www.lowagie.com/iText/. It can recognize the version of the loaded PDF and can create PDFs as well. It is open source, but I'm not sure whether they support exactly the PDF version you mention.
A: Or try www.validatepdfa.com - not open source but 100% free online validator.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How can I detect the encoding/codepage of a text file? In our application, we receive text files (.txt, .csv, etc.) from diverse sources. When reading, these files sometimes contain garbage, because the files where created in a different/unknown codepage.
Is there a way to (automatically) detect the codepage of a text file?
The detectEncodingFromByteOrderMarks, on the StreamReader constructor, works for UTF8 and other unicode marked files, but I'm looking for a way to detect code pages, like ibm850, windows1252.
Thanks for your answers, this is what I've done.
The files we receive are from end-users, they do not have a clue about codepages. The receivers are also end-users, by now this is what they know about codepages: Codepages exist, and are annoying.
Solution:
*
*Open the received file in Notepad, look at a garbled piece of text. If somebody is called François or something, with your human intelligence you can guess this.
*I've created a small app that the user can use to open the file with, and enter a text that user knows it will appear in the file, when the correct codepage is used.
*Loop through all codepages, and display the ones that give a solution with the user provided text.
*If more than one codepage pops up, ask the user to specify more text.
A: If someone is looking for a 93.9% solution. This works for me:
public static class StreamExtension
{
/// <summary>
/// Convert the content to a string.
/// </summary>
/// <param name="stream">The stream.</param>
/// <returns></returns>
public static string ReadAsString(this Stream stream)
{
var startPosition = stream.Position;
try
{
// 1. Check for a BOM
// 2. or try with UTF-8. The most (86.3%) used encoding. Visit: http://w3techs.com/technologies/overview/character_encoding/all/
var streamReader = new StreamReader(stream, new UTF8Encoding(encoderShouldEmitUTF8Identifier: false, throwOnInvalidBytes: true), detectEncodingFromByteOrderMarks: true);
return streamReader.ReadToEnd();
}
catch (DecoderFallbackException ex)
{
stream.Position = startPosition;
// 3. The second most used encoding (6.7%) is ISO-8859-1. So use Windows-1252 (0.9%, also known as ANSI), which is a superset of ISO-8859-1.
var streamReader = new StreamReader(stream, Encoding.GetEncoding(1252));
return streamReader.ReadToEnd();
}
}
}
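Usage would look something like this (the file path here is just a placeholder):
using (var stream = File.OpenRead(@"C:\temp\input.txt"))
{
    string text = stream.ReadAsString();   // UTF-8 first, Windows-1252 as the fallback
    Console.WriteLine(text);
}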
A: Notepad++ has this feature out-of-the-box. It also supports changing it.
A: Looking for a different solution, I found that
https://code.google.com/p/ude/
is kinda heavy.
I needed some basic encoding detection, based on the first 4 bytes and probably XML charset detection - so I took some sample source code from the internet and added a slightly modified version of
http://lists.w3.org/Archives/Public/www-validator/2002Aug/0084.html
(written for Java).
public static Encoding DetectEncoding(byte[] fileContent)
{
if (fileContent == null)
throw new ArgumentNullException();
if (fileContent.Length < 2)
return Encoding.ASCII; // Default fallback
if (fileContent[0] == 0xff
&& fileContent[1] == 0xfe
&& (fileContent.Length < 4
|| fileContent[2] != 0
|| fileContent[3] != 0
)
)
return Encoding.Unicode;
if (fileContent[0] == 0xfe
&& fileContent[1] == 0xff
)
return Encoding.BigEndianUnicode;
if (fileContent.Length < 3)
return null;
if (fileContent[0] == 0xef && fileContent[1] == 0xbb && fileContent[2] == 0xbf)
return Encoding.UTF8;
if (fileContent[0] == 0x2b && fileContent[1] == 0x2f && fileContent[2] == 0x76)
return Encoding.UTF7;
if (fileContent.Length < 4)
return null;
if (fileContent[0] == 0xff && fileContent[1] == 0xfe && fileContent[2] == 0 && fileContent[3] == 0)
return Encoding.UTF32;
if (fileContent[0] == 0 && fileContent[1] == 0 && fileContent[2] == 0xfe && fileContent[3] == 0xff)
return Encoding.GetEncoding(12001);
String probe;
int len = fileContent.Length;
if( fileContent.Length >= 128 ) len = 128;
probe = Encoding.ASCII.GetString(fileContent, 0, len);
MatchCollection mc = Regex.Matches(probe, "^<\\?xml[^<>]*encoding[ \\t\\n\\r]?=[\\t\\n\\r]?['\"]([A-Za-z]([A-Za-z0-9._]|-)*)", RegexOptions.Singleline);
// Add '[0].Groups[1].Value' to the end to test regex
if( mc.Count == 1 && mc[0].Groups.Count >= 2 )
{
// Typically picks up 'UTF-8' string
Encoding enc = null;
try {
enc = Encoding.GetEncoding( mc[0].Groups[1].Value );
}catch (Exception ) { }
if( enc != null )
return enc;
}
return Encoding.ASCII; // Default fallback
}
It's probably enough to read the first 1024 bytes from the file, but I'm loading the whole file.
A: I've done something similar in Python. Basically, you need lots of sample data from various encodings, which are broken down by a sliding two-byte window and stored in a dictionary (hash), keyed on byte-pairs providing values of lists of encodings.
Given that dictionary (hash), you take your input text and:
*
*if it starts with any BOM character ('\xfe\xff' for UTF-16-BE, '\xff\xfe' for UTF-16-LE, '\xef\xbb\xbf' for UTF-8 etc), I treat it as suggested
*if not, then take a large enough sample of the text, take all byte-pairs of the sample and choose the encoding that is the least common suggested from the dictionary.
If you've also sampled UTF encoded texts that do not start with any BOM, the second step will cover those that slipped from the first step.
So far, it works for me (the sample data and subsequent input data are subtitles in various languages) with diminishing error rates.
A: The tool "uchardet" does this well using character frequency distribution models for each charset. Larger files and more "typical" files have more confidence (obviously).
On ubuntu, you just apt-get install uchardet.
On other systems, get the source, usage & docs here: https://github.com/BYVoid/uchardet
A: If you're looking to detect non-UTF encodings (i.e. no BOM), you're basically down to heuristics and statistical analysis of the text. You might want to take a look at the Mozilla paper on universal charset detection (same link, with better formatting via Wayback Machine).
A: Have you tried C# port for Mozilla Universal Charset Detector
Example from http://code.google.com/p/ude/
public static void Main(String[] args)
{
string filename = args[0];
using (FileStream fs = File.OpenRead(filename)) {
Ude.CharsetDetector cdet = new Ude.CharsetDetector();
cdet.Feed(fs);
cdet.DataEnd();
if (cdet.Charset != null) {
Console.WriteLine("Charset: {0}, confidence: {1}",
cdet.Charset, cdet.Confidence);
} else {
Console.WriteLine("Detection failed.");
}
}
}
A: The StreamReader class's constructor takes a 'detect encoding' parameter.
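For example (note this only detects Unicode byte order marks, not legacy code pages like ibm850 or windows-1252; the path is a placeholder):
using (var reader = new StreamReader(@"C:\temp\input.txt", Encoding.Default, detectEncodingFromByteOrderMarks: true))
{
    string text = reader.ReadToEnd();
    Console.WriteLine(reader.CurrentEncoding);   // stays at Encoding.Default if no BOM is found
}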
A: You can't detect the codepage, you need to be told it. You can analyse the bytes and guess it, but that can give some bizarre (sometimes amusing) results. I can't find it now, but I'm sure Notepad can be tricked into displaying English text in Chinese.
Anyway, this is what you need to read:
The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!).
Specifically Joel says:
The Single Most Important Fact About Encodings
If you completely forget everything I just explained, please remember one extremely important fact. It does not make sense to have a string without knowing what encoding it uses. You can no longer stick your head in the sand and pretend that "plain" text is ASCII.
There Ain't No Such Thing As Plain Text.
If you have a string, in memory, in a file, or in an email message, you have to know what encoding it is in or you cannot interpret it or display it to users correctly.
A:
You can't detect the codepage
This is clearly false. Every web browser has some kind of universal charset detector to deal with pages which have no indication whatsoever of an encoding. Firefox has one. You can download the code and see how it does it. See some documentation here. Basically, it is a heuristic, but one that works really well.
Given a reasonable amount of text, it is even possible to detect the language.
A: I know it's very late for this question and this solution won't appeal to some (because of its english-centric bias and its lack of statistical/empirical testing), but it's worked very well for me, especially for processing uploaded CSV data:
http://www.architectshack.com/TextFileEncodingDetector.ashx
Advantages:
*
*BOM detection built-in
*Default/fallback encoding customizable
*pretty reliable (in my experience) for western-european-based files containing some exotic data (eg french names) with a mixture of UTF-8 and Latin-1-style files - basically the bulk of US and western european environments.
Note: I'm the one who wrote this class, so obviously take it with a grain of salt! :)
A: I've got the same problem but haven't found a good solution yet for detecting it automatically.
Now I'm using PSPad (www.pspad.com) for that ;) Works fine.
A: If you can link to a C library, you can use libenca. See http://cihar.com/software/enca/. From the man page:
Enca reads given text files, or standard input when none are given,
and uses knowledge about their language (must be supported by you) and
a mixture of parsing, statistical analysis, guessing and black magic
to determine their encodings.
It's GPL v2.
A: Open file in AkelPad(or just copy/paste a garbled text), go to Edit -> Selection -> Recode... -> check "Autodetect".
A: Since it basically comes down to heuristics, it may help to use the encoding of previously received files from the same source as a first hint.
Most people (or applications) do stuff in pretty much the same order every time, often on the same machine, so its quite likely that when Bob creates a .csv file and sends it to Mary it'll always be using Windows-1252 or whatever his machine defaults to.
Where possible a bit of customer training never hurts either :-)
A: I was actually looking for a generic, non-programming way of detecting the file encoding, but I didn't find that yet.
What I did find by testing with different encodings was that my text was UTF-7.
So where I first was doing:
StreamReader file = File.OpenText(fullfilename);
I had to change it to:
StreamReader file = new StreamReader(fullfilename, System.Text.Encoding.UTF7);
OpenText assumes it's UTF-8.
You can also create the StreamReader like this:
new StreamReader(fullfilename, true), the second parameter meaning that it should try to detect the encoding from the byte order mark of the file, but that didn't work in my case.
A: As an add-on to ITmeze's post, I've used this function to convert the output of the C# port of Mozilla Universal Charset Detector:
private Encoding GetEncodingFromString(string codePageName)
{
try
{
return Encoding.GetEncoding(codePageName);
}
catch
{
return Encoding.ASCII;
}
}
MSDN
A: Thanks @Erik Aronesty for mentioning uchardet.
Meanwhile the (same?) tool exists for linux: chardet.
Or, on cygwin you may want to use: chardetect.
See: chardet man page: https://www.commandlinux.com/man-page/man1/chardetect.1.html
This will heuristically detect (guess) the character encoding for each given file and will report the name and confidence level for each file's detected character encoding.
A: Try installing the Perl module Text::Unaccent::PurePerl by typing cpanm Text::Unaccent. This generates a build.log file that displays as Chinese in some applications and as English in others ("cpanm" is the initial text). A plausible approach, should you be lucky enough to have spaces in the language, is to compare the distribution frequency of words via a statistical test.
A: I use this code to detect Unicode and windows default ansi codepage when reading a file. For other codings a check of content is necessary, manually or by programming. This can de used to save the text with the same encoding as when it was opened. (I use VB.NET)
'Works for Default and unicode (auto detect)
Dim mystreamreader As New StreamReader(LocalFileName, Encoding.Default)
MyEditTextBox.Text = mystreamreader.ReadToEnd()
Debug.Print(mystreamreader.CurrentEncoding.CodePage) 'Autodetected encoding
mystreamreader.Close()
A: 10 years (!) have passed since this was asked, and still I see no mention of MS's good, non-GPL'ed solution: the IMultiLanguage2 API.
Most libraries already mentioned are based on Mozilla's UDE - and it seems reasonable that browsers have already tackled similar problems. I don't know what Chrome's solution is, but since IE 5.0 MS have released theirs, and it is:
*
*Free of GPL-and-the-like licensing issues,
*Backed and maintained probably forever,
*Gives rich output - all valid candidates for encoding/codepages along with confidence scores,
*Surprisingly easy to use (it is a single function call).
It is a native COM call, but here's some very nice work by Carsten Zeumer, that handles the interop mess for .net usage. There are some others around, but by and large this library doesn't get the attention it deserves.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "308"
}
|
Q: How can I tell whether I am on x64 or x86 using .NET? I'd like to offer my users correct links to an upgraded version of my program based on what platform they're running on, so I need to know whether I'm currently running on an x86 OS or an x64 OS.
The best I've found is using Environment.GetEnvironmentVariable("PROCESSOR_ARCHITECTURE"), but I would think there would be some built-in facility for this?
A: Environment.Is64BitOperatingSystem and Environment.Is64BitProcess are being introduced in .NET 4. For .NET 2 you'll need to try out some of the other answers.
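On .NET 4 that boils down to something like:
if (Environment.Is64BitOperatingSystem)
    Console.WriteLine(Environment.Is64BitProcess
        ? "64-bit process on a 64-bit OS"
        : "32-bit process on a 64-bit OS (WOW64)");
else
    Console.WriteLine("32-bit OS");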
A: Call IsWow64Process to find out if your 32-bit process is running in WOW64 on a 64-bit operating system. You can call GetNativeSystemInfo to find out exactly what it is: the wProcessorArchitecture member of SYSTEM_INFO will be PROCESSOR_ARCHITECTURE_INTEL for 32-bit, PROCESSOR_ARCHITECTURE_AMD64 for x64 and PROCESSOR_ARCHITECTURE_IA64 for Intel's Itanium.
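A rough P/Invoke sketch of the IsWow64Process approach for a 32-bit .NET 2.0 process (error handling omitted; the function does not exist on very old versions of Windows, in which case the call will throw):
using System;
using System.Runtime.InteropServices;

static class PlatformInfo
{
    [DllImport("kernel32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    static extern bool IsWow64Process(IntPtr hProcess, [MarshalAs(UnmanagedType.Bool)] out bool wow64Process);

    // True when the current 32-bit process is running under WOW64 on a 64-bit OS.
    public static bool IsWow64()
    {
        bool isWow64;
        return IsWow64Process(System.Diagnostics.Process.GetCurrentProcess().Handle, out isWow64) && isWow64;
    }
}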
A: Check the size of IntPtr with Marshal.SizeOf. 32 bit = 4 bytes, 64 bit = 8 bytes.
Edit: I am not sure this is what you are looking for after reading the question again.
A: You can determine a lot via environment variables as used in C# - How to get Program Files (x86) on Windows 64 bit [And this happened to suit me better than Mike's answer which I +1'd as I happen to be interested in finding the Program Files directory name]
A: Check just IntPtr.Size. You need to have the target platform set to AnyCPU.
from here
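Bear in mind that IntPtr.Size only tells you about the current process, not the OS - a 32-bit process on a 64-bit OS still reports 4:
bool is64BitProcess = IntPtr.Size == 8;   // 4 on x86, 8 on x64 (process bitness only)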
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Convert Excel 4 macros to VBA I have an old Excel 4 macro that I use to run monthly invoices. It is about 3000 lines and has many Excel 5 Dialog Box sheets (for dialog boxes). I would like to know what the easiest way would be to change it into VBA and if it is worth it. Also, if once I have converted it to VBA, how to create a standalone application out of it?
A: I have attempted this before and in the end you do need to rewrite it as Biri has said.
I did quite a bit of this work when our company was upgrading from Windows NT to Windows XP. I often found that it is easier not to look at the old code at all and start again from scratch. You can spend so much time trying to work out what the Excel 4 macro did, especially around the "strange" dialog box notation. In the end, if you know what the inputs and outputs are, then it is often more time-effective and cleaner to rewrite.
Whether to use VBA or not is in some ways another question but VBA is rather powerful and extensible so although I would rather use other tools like .NET in many circumstances it works well and is easy to deploy.
In terms of whether it is worth it: if you could say that you were never ever going to need to change your Excel 4 macro again, then maybe not. But in business there is always something that changes, e.g. tax rates, especially end-of-year things. Given how hard it is to find someone to support Excel 4, and even to find documentation on it, I would say it is risky not to move to VBA, but that is something to balance up.
A: (Disclaimer: I develop the Excel-DNA library)
Instead of moving the macro to VBA, which uses a COM automation object model that differs from the Excel macro language, rather move it to VB.NET (or C#) running with the Excel-DNA library.
Excel-DNA allows you to use the C API, which mirrors the macro language very closely. For every macro command, like your dialogs, there is a C API function that takes the same parameters. For example, the equivalent of DIALOG.BOX would be xlfDialogBox - there is some discussion and example in this thread: http://groups.google.com/group/exceldna/browse_thread/thread/53a8253269fdf0a5.
One big advantage of this move is that you can then gradually change parts of your code to use the COM automation interface - Excel-DNA allows you to mix and match the COM interfaces and C API.
A: AFAIK there is no way to somehow convert it. You basically have to rewrite it in VBA.
Once you have converted it to VBA, you cannot run it as a standalone application. As VBA stands for Visual Basic for Applications, it lives inside an application (Word, Excel, Scala, whatever).
You have to learn a standard language (not a macro language) to create standalone applications. But you have to learn much more than the language itself. You have to learn different techniques, for example database handling instead of Excel sheet handling, printing instead of Excel printing, and so on. So basically you will lose a lot of functionality that comes for free if you use Excel.
A: Here is a good article about this topic: http://msdn.microsoft.com/en-us/library/aa192490.aspx
You can download VB2008-Express for free at: http://www.microsoft.com/express/default.aspx
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Debug vs. release in .NET Continuing from my previous question, is there a comprehensive document that lists all available differences between debug and release modes in a C# application, and particularly in a web application?
What differences are there?
A: One major performance area if you are using any of the ASP.NET AJAX controls: debug information is removed from the JavaScript library when running in release, and I have seen major performance improvements on complicated pages. Other web-based resources may be either cached or not cached based on this setting.
Also, remember that Debug / Release in a web application is dictated by the web.config file, not your settings within Visual Studio.
<system.web>
<compilation debug="true">
More information:
*
*Don’t run production ASP.NET Applications with debug=”true” enabled
A: You can also manage some part of code that you want to run only in debug or only in release with preprocessor markups:
#if DEBUG
// Some code running only in debug
#endif
or
#if !DEBUG
// Some code running only in release
#endif
A: "Debug" and "Release" are just names for predefined project configurations defined by Visual Studio.
To see the differences, look at the Build Tab in Project Properties in Visual Studio.
The differences in VS2005 include:
*
*DEBUG constant defined in Debug configuration
*Optimize code enabled in Release configuration
as well as other differences you can see by clicking on the "Advanced" button
But you can:
*
*Change the build settings for Debug and Release configurations in Project Properties / Build
*Create your own custom configurations by right-clicking on the solution in Solution Explorer and selecting Configuration Manager
I think the behaviour of the DEBUG constant is fairly clear (it can be referenced in the #if preprocessor directive or in the ConditionalAttribute), but I'm not aware of any comprehensive documentation on exactly which optimizations are enabled - in fact I suspect Microsoft would want to be free to enhance their optimizer without notice.
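For example, a method marked with ConditionalAttribute is still compiled, but calls to it are removed entirely when the named constant is not defined - a minimal sketch:
static class Logging
{
    [System.Diagnostics.Conditional("DEBUG")]
    public static void DebugLog(string message)
    {
        Console.WriteLine("DEBUG: " + message);
    }
}

// In a Release build (no DEBUG constant) the compiler strips this call site completely:
Logging.DebugLog("Loading configuration...");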
A: Drawing with GDI+ is considerably slower in Debug mode.
A: Release version:
*
*is considerable faster (most important), optimized
*can't be debugged (step by step)
*and code written in "debug" directive is not included
See What's the difference between a Debug vs Release Build?.
A: I'm not aware of one concise document, but:
*
*Debug.Write calls are stripped out in Release
*In Release, your CallStack may look a bit "strange" due to optimizations, as outlined by Scott Hanselman
A: There isn't one document that lists the differences. In addition to some of the differences already listed, compiling in Debug mode turns off most of the JIT compiler optimizations that are performed at runtime and also emits more complete debug information in to the symbol database file (.pdb).
Another big difference is that the GC behavior is somewhat different in that the JIT compiler will insert calls to GC.KeepAlive() as appropriate/needed in order to support debugging sessions.
A: Debug and Release are just labels for different solution configurations. You can add others if you want. If you wish, you can add more configurations from the Configuration Manager:
http://msdn.microsoft.com/en-us/library/kwybya3w.aspx
Major differences –
*
*In a debug DLL several extra instructions are added to enable you to set a breakpoint on every source code line in Visual Studio. Also, the code will not be optimized, again to enable you to debug the code.
In the release version, these extra instructions are removed.
*PDB file is created in only Debug mode and not in release mode.
*In release mode, code is optimized by the optimizer that's built into the JIT compiler. It makes the following optimizations:
• Method inlining - A method call is replaced by the injecting the code of the method.
• CPU register allocation - Local variables and method arguments can stay stored in a CPU register without ever (or less frequently) being stored back to the stack frame
• Array index checking elimination - An important optimization when working with arrays (all .NET collection classes use an array internally). When the JIT compiler can verify that a loop never indexes an array out of bounds then it will eliminate the index check.
• Loop unrolling - Short loops (up to 4) with small bodies are eliminated by repeating the code in the loop body.
• Dead code elimination - A statement like if (false) { /.../ } gets completely eliminated.
• Code hoisting- Code inside a loop that is not affected by the loop can be moved out of the loop.
• Common sub-expression elimination. x = y + 4; z = y + 4; becomes z = x
A: I got an error message when I distributed an executable file to another machine, indicating that the system was missing MSVCP110D.dll.
The solution to this issue is stated in the Stack Overflow question Visual Studio MSVCP110D.dll is missing.
In XXXXD.dll, the D means that the DLL is a debug version of the DLL. But the MS Visual C++ Redistributable packages include only the release versions of the DLLs.
That means if you need to distribute a program developed with Visual C++, you need to build it in Release mode, and you also need to install the MS Visual C++ Redistributable (correct version) on the target machine.
So I think this is one of the key differences between debug and release mode.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "64"
}
|
Q: How to dynamically generate combination of ASP.NET user controls? I have several user controls, let's say A, B, C and D. Based on some random input, I need to generate a combination of these. For e.g. if input is 2a3d1a2c I need to show two of the A's, 3 D's after that, an A again, etc.
I will also need to stabilize clientid's in order for them to work correctly. Because each of these controls use their own ClientID property to gather the inputs on themselves. For e.g. user control A internally generates an input named this.ClientID + "$input1", and gathers its input from request like Request[this.ClientID + "$input1"]. Since there can be more than one A, each A needs to have the same (unique) ClientID after postback in order to get correct inputs from request.
A: To add controls dynamically, you can use a panel as a place holder, say
<asp:Panel ID="ControlPlaceholder" runat="server" />
Then, on the server side, you can add objects to it like so:
int controlCount = 0;
...
TextBox newTextBox = new TextBox();
newTextBox.ID = "ctl_" + controlCount++;
ControlPlaceholder.Controls.Add(newTextBox);
If you add the controls to it during your Page_Load event and use a consistent method of generating the controls' IDs (such as a the simple count above) then any viewstate or event bindings will be re-bound to the right object on a postback.
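Applying the same idea to the user controls in the question, a rough sketch (the .ascx paths are hypothetical; the key points are that the controls are re-created in Page_Load on every request and that their IDs are derived deterministically from the input string, so the ClientIDs stay stable across postbacks):
// "2a3d1a2c" expanded elsewhere into a flat sequence of control keys
string[] keys = { "A", "A", "D", "D", "D", "A", "C", "C" };
for (int i = 0; i < keys.Length; i++)
{
    Control ctl = LoadControl("~/Controls/" + keys[i] + ".ascx");   // hypothetical paths
    ctl.ID = "ctl_" + i;   // deterministic ID => same ClientID after postback
    ControlPlaceholder.Controls.Add(ctl);
}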
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Compound keys in JPA I want to make an entity that has an autogenerated primary key, but also a unique compound key made up of two other fields. How do I do this in JPA?
I want to do this because the primary key should be used as foreign key in another table and making it compound would not be good.
In the following snippet, I need the command and model to be unique. pk is of course the primary key.
@Entity
@Table(name = "dm_action_plan")
public class ActionPlan {
@Id
private int pk;
@Column(name = "command", nullable = false)
private String command;
@Column(name = "model", nullable = false)
String model;
}
A: You can use @UniqueConstraint something like this :
@Entity
@Table(name = "dm_action_plan",
uniqueConstraints = { @UniqueConstraint(columnNames = {"command", "model"}) })
public class ActionPlan {
@Id
private int pk;
@Column(name = "command", nullable = false)
private String command;
@Column(name = "model", nullable = false)
String model;
}
This will allow your JPA implementation to generate the DDL for the unique constraint.
A: Use @GeneratedValue to indicate that the key will be generated, and @UniqueConstraint to express the uniqueness:
@Entity
@Table(name = "dm_action_plan",
uniqueConstraints = @UniqueConstraint(columnNames = {"command", "model"})
)
public class ActionPlan {
@Id
@GeneratedValue
private int pk;
@Column(name = "command", nullable = false)
private String command;
@Column(name = "model", nullable = false)
String model;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: WCF REST Caching - Client Side & Server Side I have written a RESTful WCF service, incorporating ETags and Expires headers.
The caching works great when using it from a browser. However how does the caching work when calling it from a WCF Channel Factory or .NET Web Request Objects?
So in the scenario where my website calls the WCF RESTful service and a 304 Not Modified response is returned to me, how do I handle this? The browser detects this fine and returns the unmodified version from its cache.
However, when the client is not the browser, do I need to write my own version of the cache, similar to the way the browser caches?
Any help or insight would be much appreciated.
A: Yes, you're going to have to handle that yourself, just as you're responsible for sending the date/time in the request so the server can determine whether there was a change. I would look at the RSS Bandit source for a sample implementation.
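A rough sketch of what that can look like with HttpWebRequest (the URL and the cached* variables are placeholders; note that GetResponse() surfaces a 304 as a WebException):
var request = (HttpWebRequest)WebRequest.Create("http://example.com/service/resource");
if (cachedETag != null)
    request.Headers[HttpRequestHeader.IfNoneMatch] = cachedETag;
try
{
    using (var response = (HttpWebResponse)request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        cachedETag = response.Headers[HttpResponseHeader.ETag];
        cachedBody = reader.ReadToEnd();   // 200 OK: refresh the local copy
    }
}
catch (WebException ex)
{
    var response = ex.Response as HttpWebResponse;
    if (response == null || response.StatusCode != HttpStatusCode.NotModified)
        throw;
    // 304 Not Modified: keep serving cachedBody
}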
A: We have a sample that illustrates how to do this (using .NET 4) http://code.msdn.microsoft.com/cannonicalRESTEntity
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Control Fujitsu Softune debugger Is there a way to control the Fujitsu Softune debugger with another application (e.g. Eclipse)? I'm thinking about sending the commands mentioned in the documentation of Softune and parsing the output, but other approaches are also welcome.
A: There is a plugin for Eclipse; the file name is "FujitsuF2MC16_1.0.1.jar", look for it on this page:
http://www.mikrocontroller.net/topic/70413
It lets you compile and debug in Eclipse.
Hope this helps.
A: What do you mean by controlling the Fujitsu Softune debugger?
If what you want to do is to start a debugging session with your freshly-compiled .abs file, you can do the following.
In the Eclipse environment, add a button or shortcut to call the make utility with the debug target. Your makefile would have an entry like:
debug: $(make_vars)
# start debugger
make -f$(make_vars) -f$(make_dir)/$(cfg) cfg="$(cfg)" debug_session
In the make entry for debug_session you put something like:
echo UPDATING SOFTUNE-3 PROJECT FOR DEBUGGING;\
$(subst \,/,$(DIR_SOFTUNE_WORKBENCH))/bin/Fs907s.exe softune/E7x_proj.wsp 2>/dev/null;
I hope this was useful.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: .NET: Get all Outlook calendar items How can I get all items from a specific calendar (for a specific date).
Let's say, for instance, that I have a calendar with a recurring item every Monday evening. When I request all items like this:
CalendarItems = CalendarFolder.Items;
CalendarItems.IncludeRecurrences = true;
I only get 1 item...
Is there an easy way to get all items (main item + derived items) from a calendar?
In my specific situation it can be possible to set a date limit but it would be cool just to get all items (my recurring items are time limited themselves).
I'm using the Microsoft Outlook 12 Object library (Microsoft.Office.Interop.Outlook).
A: I wrote similar code, but then found the export functionality:
Application outlook;
NameSpace OutlookNS;
outlook = new ApplicationClass();
OutlookNS = outlook.GetNamespace("MAPI");
MAPIFolder f = OutlookNS.GetDefaultFolder(OlDefaultFolders.olFolderCalendar);
CalendarSharing cs = f.GetCalendarExporter();
cs.CalendarDetail = OlCalendarDetail.olFullDetails;
cs.StartDate = new DateTime(2011, 11, 1);
cs.EndDate = new DateTime(2011, 12, 31);
cs.SaveAsICal("c:\\temp\\cal.ics");
A: A LINQPad snippet that works for me:
//using Microsoft.Office.Interop.Outlook
Application a = new Application();
Items i = a.Session.GetDefaultFolder(OlDefaultFolders.olFolderCalendar).Items;
i.IncludeRecurrences = true;
i.Sort("[Start]");
i = i.Restrict(
"[Start] >= '10/1/2013 12:00 AM' AND [End] < '10/3/2013 12:00 AM'");
var r =
from ai in i.Cast<AppointmentItem>()
select new {
ai.Categories,
ai.Start,
ai.Duration
};
r.Dump();
A: I've studied the docs and this is my result:
I've put a time limit of one month hard-coded, but this is just an example.
public void GetAllCalendarItems()
{
Microsoft.Office.Interop.Outlook.Application oApp = null;
Microsoft.Office.Interop.Outlook.NameSpace mapiNamespace = null;
Microsoft.Office.Interop.Outlook.MAPIFolder CalendarFolder = null;
Microsoft.Office.Interop.Outlook.Items outlookCalendarItems = null;
oApp = new Microsoft.Office.Interop.Outlook.Application();
mapiNamespace = oApp.GetNamespace("MAPI");
CalendarFolder = mapiNamespace.GetDefaultFolder(Microsoft.Office.Interop.Outlook.OlDefaultFolders.olFolderCalendar);
outlookCalendarItems = CalendarFolder.Items;
outlookCalendarItems.IncludeRecurrences = true;
foreach (Microsoft.Office.Interop.Outlook.AppointmentItem item in outlookCalendarItems)
{
if (item.IsRecurring)
{
Microsoft.Office.Interop.Outlook.RecurrencePattern rp = item.GetRecurrencePattern();
DateTime first = new DateTime(2008, 8, 31, item.Start.Hour, item.Start.Minute, 0);
DateTime last = new DateTime(2008, 10, 1);
Microsoft.Office.Interop.Outlook.AppointmentItem recur = null;
for (DateTime cur = first; cur <= last; cur = cur.AddDays(1))
{
try
{
recur = rp.GetOccurrence(cur);
MessageBox.Show(recur.Subject + " -> " + cur.ToLongDateString());
}
catch
{ }
}
}
else
{
MessageBox.Show(item.Subject + " -> " + item.Start.ToLongDateString());
}
}
}
A: If you want to access a shared calendar from a friend, you can set your friend as the recipient. Requirement: their calendar must be shared first.
// Set recipient
Outlook.Recipient oRecip = (Outlook.Recipient)oNS.CreateRecipient("abc@yourmail.com");
// Get calendar folder
Outlook.MAPIFolder oCalendar = oNS.GetSharedDefaultFolder(oRecip, Outlook.OlDefaultFolders.olFolderCalendar);
A: There is no need to expand recurring items manually. Just ensure you sort the items before using IncludeRecurrences.
Here is VBA example:
tdystart = VBA.Format(#8/1/2012#, "Short Date")
tdyend = VBA.Format(#8/31/2012#, "Short Date")
Dim folder As MAPIFolder
Set folder = Application.Session.GetDefaultFolder(olFolderCalendar) ' point at the calendar folder first
Set appointments = folder.Items
appointments.Sort "[Start]" ' <-- !!! Sort is a MUST
appointments.IncludeRecurrences = True ' <-- This will expand recurring items
Set app = appointments.Find("[Start] >= """ & tdystart & """ and [Start] <= """ & tdyend & """")
While TypeName(app) <> "Nothing"
MsgBox app.Start & " " & app.Subject
Set app = appointments.FindNext
Wend
A: public void GetAllCalendarItems()
{
DataTable sample = new DataTable(); //Sample Data
sample.Columns.Add("Subject", typeof(string));
sample.Columns.Add("Location", typeof(string));
sample.Columns.Add("StartTime", typeof(DateTime));
sample.Columns.Add("EndTime", typeof(DateTime));
sample.Columns.Add("StartDate", typeof(DateTime));
sample.Columns.Add("EndDate", typeof(DateTime));
sample.Columns.Add("AllDayEvent", typeof(bool));
sample.Columns.Add("Body", typeof(string));
listViewContacts.Items.Clear();
oApp = new Outlook.Application();
oNS = oApp.GetNamespace("MAPI");
oCalenderFolder = oNS.GetDefaultFolder(Microsoft.Office.Interop.Outlook.OlDefaultFolders.olFolderCalendar);
outlookCalendarItems = oCalenderFolder.Items;
outlookCalendarItems.IncludeRecurrences = true;
// DataTable sample = new DataTable();
foreach (Microsoft.Office.Interop.Outlook.AppointmentItem item in outlookCalendarItems)
{
DataRow row = sample.NewRow();
row["Subject"] = item.Subject;
row["Location"] = item.Location;
row["StartTime"] = item.Start.TimeOfDay.ToString();
row["EndTime"] = item.End.TimeOfDay.ToString();
row["StartDate"] = item.Start.Date;
row["EndDate"] = item.End.Date;
row["AllDayEvent"] = item.AllDayEvent;
row["Body"] = item.Body;
sample.Rows.Add(row);
}
sample.AcceptChanges();
foreach (DataRow dr in sample.Rows)
{
ListViewItem lvi = new ListViewItem(dr["Subject"].ToString());
lvi.SubItems.Add(dr["Location"].ToString());
lvi.SubItems.Add(dr["StartTime"].ToString());
lvi.SubItems.Add(dr["EndTime"].ToString());
lvi.SubItems.Add(dr["StartDate"].ToString());
lvi.SubItems.Add(dr["EndDate"].ToString());
lvi.SubItems.Add(dr["AllDayEvent"].ToString());
lvi.SubItems.Add(dr["Body"].ToString());
this.listViewContacts.Items.Add(lvi);
}
oApp = null;
oNS = null;
}
A: I believe that you must Restrict or Find in order to get recurring appointments, otherwise Outlook won't expand them. Also, you must Sort by Start before setting IncludeRecurrences.
A: I found this article very useful: https://learn.microsoft.com/en-us/office/client-developer/outlook/pia/how-to-search-and-obtain-appointments-in-a-time-range
It demonstrates how to get calendar entries in a specified time range. It worked for me. Here is the source code from the article for your convenience :)
using Outlook = Microsoft.Office.Interop.Outlook;
private void DemoAppointmentsInRange()
{
Outlook.Folder calFolder = Application.Session.GetDefaultFolder(Outlook.OlDefaultFolders.olFolderCalendar)
as Outlook.Folder;
DateTime start = DateTime.Now;
DateTime end = start.AddDays(5);
Outlook.Items rangeAppts = GetAppointmentsInRange(calFolder, start, end);
if (rangeAppts != null)
{
foreach (Outlook.AppointmentItem appt in rangeAppts)
{
Debug.WriteLine("Subject: " + appt.Subject
+ " Start: " + appt.Start.ToString("g"));
}
}
}
/// <summary>
/// Get recurring appointments in date range.
/// </summary>
/// <param name="folder"></param>
/// <param name="startTime"></param>
/// <param name="endTime"></param>
/// <returns>Outlook.Items</returns>
private Outlook.Items GetAppointmentsInRange(
Outlook.Folder folder, DateTime startTime, DateTime endTime)
{
string filter = "[Start] >= '"
+ startTime.ToString("g")
+ "' AND [End] <= '"
+ endTime.ToString("g") + "'";
Debug.WriteLine(filter);
try
{
Outlook.Items calItems = folder.Items;
calItems.IncludeRecurrences = true;
calItems.Sort("[Start]", Type.Missing);
Outlook.Items restrictItems = calItems.Restrict(filter);
if (restrictItems.Count > 0)
{
return restrictItems;
}
else
{
return null;
}
}
catch { return null; }
}
A: Try this:
public List<AdxCalendarItem> GetAllCalendarItems()
{
Outlook.Application OutlookApp = new Outlook.Application();
List<AdxCalendarItem> result = new List<AdxCalendarItem>();
Outlook._NameSpace session = OutlookApp.Session;
if (session != null)
try
{
object stores = session.GetType().InvokeMember("Stores", BindingFlags.GetProperty, null, session, null);
if (stores != null)
try
{
int count = (int)stores.GetType().InvokeMember("Count", BindingFlags.GetProperty, null, stores, null);
for (int i = 1; i <= count; i++)
{
object store = stores.GetType().InvokeMember("Item", BindingFlags.GetProperty, null, stores, new object[] { i });
if (store != null)
try
{
Outlook.MAPIFolder calendar = null;
try
{
calendar = (Outlook.MAPIFolder)store.GetType().InvokeMember("GetDefaultFolder", BindingFlags.GetProperty, null, store, new object[] { Outlook.OlDefaultFolders.olFolderCalendar });
}
catch
{
continue;
}
if (calendar != null)
try
{
Outlook.Folders folders = calendar.Folders;
try
{
Outlook.MAPIFolder subfolder = null;
for (int j = 1; j < folders.Count + 1; j++)
{
subfolder = folders[j];
try
{
// add subfolder items
result.AddRange(GetAppointmentItems(subfolder));
}
finally
{ if (subfolder != null) Marshal.ReleaseComObject(subfolder); }
}
}
finally
{ if (folders != null) Marshal.ReleaseComObject(folders); }
// add root items
result.AddRange(GetAppointmentItems(calendar));
}
finally { Marshal.ReleaseComObject(calendar); }
}
finally { Marshal.ReleaseComObject(store); }
}
}
finally { Marshal.ReleaseComObject(stores); }
}
finally { Marshal.ReleaseComObject(session); }
return result;
}
List<AdxCalendarItem> GetAppointmentItems(Outlook.MAPIFolder calendarFolder)
{
List<AdxCalendarItem> result = new List<AdxCalendarItem>();
Outlook.Items calendarItems = calendarFolder.Items;
try
{
calendarItems.IncludeRecurrences = true;
Outlook.AppointmentItem appointment = null;
for (int j = 1; j < calendarItems.Count + 1; j++)
{
appointment = calendarItems[j] as Outlook.AppointmentItem;
try
{
AdxCalendarItem item = new AdxCalendarItem(
calendarFolder.Name,
appointment.Subject,
appointment.Location,
appointment.Start,
appointment.End,
appointment.Start.Date,
appointment.End.Date,
appointment.AllDayEvent,
appointment.Body);
result.Add(item);
}
finally
{
{ Marshal.ReleaseComObject(appointment); }
}
}
}
finally { Marshal.ReleaseComObject(calendarItems); }
return result;
}
}
public class AdxCalendarItem
{
public string CalendarName;
public string Subject;
public string Location;
public DateTime StartTime;
public DateTime EndTime;
public DateTime StartDate;
public DateTime EndDate;
public bool AllDayEvent;
public string Body;
public AdxCalendarItem(string CalendarName, string Subject, string Location, DateTime StartTime, DateTime EndTime,
DateTime StartDate, DateTime EndDate, bool AllDayEvent, string Body)
{
this.CalendarName = CalendarName;
this.Subject = Subject;
this.Location = Location;
this.StartTime = StartTime;
this.EndTime = EndTime;
this.StartDate = StartDate;
this.EndDate = EndDate;
this.AllDayEvent = AllDayEvent;
this.Body = Body;
}
}
A: Microsoft.Office.Interop.Outlook.Application oApp = null;
Microsoft.Office.Interop.Outlook.NameSpace mapiNamespace = null;
Microsoft.Office.Interop.Outlook.MAPIFolder CalendarFolder = null;
Microsoft.Office.Interop.Outlook.Items outlookCalendarItems = null;
oApp = new Microsoft.Office.Interop.Outlook.Application();
mapiNamespace = oApp.GetNamespace("MAPI");
CalendarFolder = mapiNamespace.GetDefaultFolder(Microsoft.Office.Interop.Outlook.OlDefaultFolders.olFolderCalendar);
outlookCalendarItems = CalendarFolder.Items;
outlookCalendarItems.IncludeRecurrences = true;
foreach (Microsoft.Office.Interop.Outlook.AppointmentItem item in outlookCalendarItems)
{
if (item.IsRecurring)
{
Microsoft.Office.Interop.Outlook.RecurrencePattern rp = item.GetRecurrencePattern();
// expand occurrences over a bounded window (placeholder range - adjust to the dates you need)
DateTime first = new DateTime(DateTime.Today.Year, DateTime.Today.Month, 1, item.Start.Hour, item.Start.Minute, 0);
DateTime last = first.AddMonths(1);
Microsoft.Office.Interop.Outlook.AppointmentItem recur = null;
for (DateTime cur = first; cur <= last; cur = cur.AddDays(1))
{
try
{
recur = rp.GetOccurrence(cur);
MessageBox.Show(recur.Subject + " -> " + cur.ToLongDateString());
}
catch
{ }
}
}
else
{
MessageBox.Show(item.Subject + " -> " + item.Start.ToLongDateString());
}
}
}
It works; I tried it, but you need to add a reference to the Microsoft Outlook interop library to your project.
A: Here's a combination of a few answers to get entries from the past 30 days. Will output to console but you can take the console log output and save to a file or whatever you want from there. Thanks to everyone for posting their code here, was very helpful!
using Microsoft.Office.Interop.Outlook;
void GetAllCalendarItems()
{
Microsoft.Office.Interop.Outlook.Application oApp = null;
Microsoft.Office.Interop.Outlook.NameSpace mapiNamespace = null;
Microsoft.Office.Interop.Outlook.MAPIFolder CalendarFolder = null;
Microsoft.Office.Interop.Outlook.Items outlookCalendarItems = null;
oApp = new Microsoft.Office.Interop.Outlook.Application();
mapiNamespace = oApp.GetNamespace("MAPI");
CalendarFolder = mapiNamespace.GetDefaultFolder(Microsoft.Office.Interop.Outlook.OlDefaultFolders.olFolderCalendar);
outlookCalendarItems = CalendarFolder.Items;
outlookCalendarItems.IncludeRecurrences = false;
Console.WriteLine("Showing Calendar Items From the last 30 days");
//Set your dates here...
DateTime startTime = DateTime.Now.AddDays(-31);
DateTime endTime = DateTime.Now;
string filter = "[Start] >= '"
+ startTime.ToString("g")
+ "' AND [End] <= '"
+ endTime.ToString("g") + "'";
try
{
outlookCalendarItems.Sort("[Start]", Type.Missing);
foreach (Microsoft.Office.Interop.Outlook.AppointmentItem item in outlookCalendarItems.Restrict(filter))
{
Console.WriteLine(item.Subject + " -> " + item.Start.ToLongDateString());
}
}
catch { }
Console.WriteLine("Finished");
}
GetAllCalendarItems();
A: calendarFolder =
mapiNamespace.GetDefaultFolder(
Microsoft.Office.Interop.Outlook.OlDefaultFolders.olFolderCalendar);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
}
|
Q: Unit testing a Java Servlet I would like to know what would be the best way to do unit testing of a servlet.
Testing internal methods is not a problem as long as they don't refer to the servlet context, but what about testing the doGet/doPost methods as well as the internal method that refer to the context or make use of session parameters?
Is there a way to do this simply using classical tools such as JUnit, or preferably TestNG? Do I need to embed a Tomcat server or something like that?
A: Are you calling the doPost and doGet methods manually in the unit tests? If so you can override the HttpServletRequest methods to provide mock objects.
myServlet.doGet(new HttpServletRequestWrapper() {
public HttpSession getSession() {
return mockSession;
}
...
}
HttpServletRequestWrapper is a convenience Java class. I suggest you create a utility method in your unit tests for building the mock HTTP requests:
public void testSomething() {
myServlet.doGet(createMockRequest(), createMockResponse());
}
protected HttpServletRequest createMockRequest() {
    HttpServletRequest request = new HttpServletRequestWrapper() {
        // overridden methods
    };
    return request;
}
It's even better to put the mock creation methods in a base test superclass and have all servlet unit tests extend it.
A: Mockrunner (http://mockrunner.sourceforge.net/index.html) can do this. It provides a mock J2EE container that can be used to test Servlets. It can also be used to unit test other server-side code like EJBs, JDBC, JMS, Struts. I've only used the JDBC and EJB capabilities myself.
A: Most of the time I test Servlets and JSP's via 'Integration Tests' rather than pure Unit Tests. There are a large number of add-ons for JUnit/TestNG available including:
*
*HttpUnit (the oldest and best known, very low level which can be good or bad depending on your needs)
*HtmlUnit (higher level than HttpUnit, which is better for many projects)
*JWebUnit (sits on top of other testing tools and tries to simplify them - the one I prefer)
*WatiJ and Selenium (use your browser to do the testing, which is more heavyweight but realistic)
This is a JWebUnit test for a simple Order Processing Servlet which processes input from the form 'orderEntry.html'. It expects a customer id, a customer name and one or more order items:
public class OrdersPageTest {
private static final String WEBSITE_URL = "http://localhost:8080/demo1";
@Before
public void start() {
webTester = new WebTester();
webTester.setTestingEngineKey(TestingEngineRegistry.TESTING_ENGINE_HTMLUNIT);
webTester.getTestContext().setBaseUrl(WEBSITE_URL);
}
@Test
public void sanity() throws Exception {
webTester.beginAt("/orderEntry.html");
webTester.assertTitleEquals("Order Entry Form");
}
@Test
public void idIsRequired() throws Exception {
webTester.beginAt("/orderEntry.html");
webTester.submit();
webTester.assertTextPresent("ID Missing!");
}
@Test
public void nameIsRequired() throws Exception {
webTester.beginAt("/orderEntry.html");
webTester.setTextField("id","AB12");
webTester.submit();
webTester.assertTextPresent("Name Missing!");
}
@Test
public void validOrderSucceeds() throws Exception {
webTester.beginAt("/orderEntry.html");
webTester.setTextField("id","AB12");
webTester.setTextField("name","Joe Bloggs");
//fill in order line one
webTester.setTextField("lineOneItemNumber", "AA");
webTester.setTextField("lineOneQuantity", "12");
webTester.setTextField("lineOneUnitPrice", "3.4");
//fill in order line two
webTester.setTextField("lineTwoItemNumber", "BB");
webTester.setTextField("lineTwoQuantity", "14");
webTester.setTextField("lineTwoUnitPrice", "5.6");
webTester.submit();
webTester.assertTextPresent("Total: 119.20");
}
private WebTester webTester;
}
A: This implementation of a JUnit test for servlet doPost() method relies only on the Mockito library for mocking up instances of HttpRequest, HttpResponse, HttpSession, ServletResponse and RequestDispatcher. Replace parameter keys and JavaBean instance with those that correspond to values referenced in the associated JSP file from which doPost() is called.
Mockito Maven dependency:
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-all</artifactId>
<version>1.9.5</version>
</dependency>
JUnit test:
import javax.servlet.RequestDispatcher;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;
import java.io.IOException;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.*;
/**
* Unit tests for the {@code StockSearchServlet} class.
* @author Bob Basmaji
*/
public class StockSearchServletTest extends HttpServlet {
// private fields of this class
private static HttpServletRequest request;
private static HttpServletResponse response;
private static StockSearchServlet servlet;
private static final String SYMBOL_PARAMETER_KEY = "symbol";
private static final String STARTRANGE_PARAMETER_KEY = "startRange";
private static final String ENDRANGE_PARAMETER_KEY = "endRange";
private static final String INTERVAL_PARAMETER_KEY = "interval";
private static final String SERVICETYPE_PARAMETER_KEY = "serviceType";
/**
* Sets up the logic common to each test in this class
*/
@Before
public final void setUp() {
request = mock(HttpServletRequest.class);
response = mock(HttpServletResponse.class);
when(request.getParameter("symbol"))
.thenReturn("AAPL");
when(request.getParameter("startRange"))
.thenReturn("2016-04-23 00:00:00");
when(request.getParameter("endRange"))
.thenReturn("2016-07-23 00:00:00");
when(request.getParameter("interval"))
.thenReturn("DAY");
when(request.getParameter("serviceType"))
.thenReturn("WEB");
String symbol = request.getParameter(SYMBOL_PARAMETER_KEY);
String startRange = request.getParameter(STARTRANGE_PARAMETER_KEY);
String endRange = request.getParameter(ENDRANGE_PARAMETER_KEY);
String interval = request.getParameter(INTERVAL_PARAMETER_KEY);
String serviceType = request.getParameter(SERVICETYPE_PARAMETER_KEY);
HttpSession session = mock(HttpSession.class);
when(request.getSession()).thenReturn(session);
final ServletContext servletContext = mock(ServletContext.class);
RequestDispatcher dispatcher = mock(RequestDispatcher.class);
when(servletContext.getRequestDispatcher("/stocksearchResults.jsp")).thenReturn(dispatcher);
servlet = new StockSearchServlet() {
public ServletContext getServletContext() {
return servletContext; // return the mock
}
};
StockSearchBean search = new StockSearchBean(symbol, startRange, endRange, interval);
try {
switch (serviceType) {
case ("BASIC"):
search.processData(ServiceType.BASIC);
break;
case ("DATABASE"):
search.processData(ServiceType.DATABASE);
break;
case ("WEB"):
search.processData(ServiceType.WEB);
break;
default:
search.processData(ServiceType.WEB);
}
} catch (StockServiceException e) {
throw new RuntimeException(e.getMessage());
}
session.setAttribute("search", search);
}
/**
* Verifies that the doPost method throws an exception when passed null arguments
* @throws ServletException
* @throws IOException
*/
@Test(expected = NullPointerException.class)
public final void testDoPostPositive() throws ServletException, IOException {
servlet.doPost(null, null);
}
/**
* Verifies that the doPost method runs without exception
* @throws ServletException
* @throws IOException
*/
@Test
public final void testDoPostNegative() throws ServletException, IOException {
boolean throwsException = false;
try {
servlet.doPost(request, response);
} catch (Exception e) {
throwsException = true;
}
assertFalse("doPost throws an exception", throwsException);
}
}
A: Try HttpUnit, although you are likely to end up writing automated tests that are more 'integration tests' (of a module) than 'unit tests' (of a single class).
A: I looked at the posted answers and thought that I would post a more complete solution that actually demonstrates how to do the testing using embedded GlassFish and its Apache Maven plugin.
I wrote the complete process up on my blog Using GlassFish 3.1.1 Embedded with JUnit 4.x and HtmlUnit 2.x and placed the complete project for download on Bitbucket here: image-servlet
I was looking at another post on an image servlet for JSP/JSF tags just before I saw this question. So I combined the solution I used from the other post with a complete unit tested version for this post.
How to Test
Apache Maven has a well defined lifecycle that includes test. I will use this along with another lifecycle called integration-test to implement my solution.
*
*Disable standard lifecycle unit testing in the surefire plugin.
*Add integration-test as part of the executions of the surefire-plugin
*Add the GlassFish Maven plugin to the POM.
*Configure GlassFish to execute during the integration-test lifecycle.
*Run unit tests (integration tests).
GlassFish Plugin
Add this plugin as part of the <build>.
<plugin>
<groupId>org.glassfish</groupId>
<artifactId>maven-embedded-glassfish-plugin</artifactId>
<version>3.1.1</version>
<configuration>
<!-- This sets the path to use the war file we have built in the target directory -->
<app>target/${project.build.finalName}</app>
<port>8080</port>
<!-- This sets the context root, e.g. http://localhost:8080/test/ -->
<contextRoot>test</contextRoot>
<!-- This deletes the temporary files during GlassFish shutdown. -->
<autoDelete>true</autoDelete>
</configuration>
<executions>
<execution>
<id>start</id>
<!-- We implement the integration testing by setting up our GlassFish instance to start and deploy our application. -->
<phase>pre-integration-test</phase>
<goals>
<goal>start</goal>
<goal>deploy</goal>
</goals>
</execution>
<execution>
<id>stop</id>
<!-- After integration testing we undeploy the application and shutdown GlassFish gracefully. -->
<phase>post-integration-test</phase>
<goals>
<goal>undeploy</goal>
<goal>stop</goal>
</goals>
</execution>
</executions>
</plugin>
Surefire Plugin
Add/modify the plugin as part of the <build>.
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.12.4</version>
<!-- We are skipping the default test lifecycle and will test later during integration-test -->
<configuration>
<skip>true</skip>
</configuration>
<executions>
<execution>
<phase>integration-test</phase>
<goals>
<!-- During the integration test we will execute surefire:test -->
<goal>test</goal>
</goals>
<configuration>
<!-- This enables the tests which were disabled previously. -->
<skip>false</skip>
</configuration>
</execution>
</executions>
</plugin>
HTMLUnit
Add integration tests like the example below.
@Test
public void badRequest() throws IOException {
webClient.getOptions().setThrowExceptionOnFailingStatusCode(false);
webClient.getOptions().setPrintContentOnFailingStatusCode(false);
final HtmlPage page = webClient.getPage("http://localhost:8080/test/images/");
final WebResponse response = page.getWebResponse();
assertEquals(400, response.getStatusCode());
assertEquals("An image name is required.", response.getStatusMessage());
webClient.getOptions().setThrowExceptionOnFailingStatusCode(true);
webClient.getOptions().setPrintContentOnFailingStatusCode(true);
webClient.closeAllWindows();
}
If you have any questions, please leave a comment. I think that this is one complete example for you to use as the basis of any testing you are planning for servlets.
A: Updated Feb 2018: OpenBrace Limited has closed down, and its ObMimic product is no longer supported.
Another solution is to use my ObMimic library, which is specifically designed for unit testing of servlets. It provides complete plain-Java implementations of all the Servlet API classes, and you can configure and inspect these as necessary for your tests.
You can indeed use it to directly call doGet/doPost methods from JUnit or TestNG tests, and to test any internal methods even if they refer to the ServletContext or use session parameters (or any other Servlet API features).
This doesn't need an external or embedded container, doesn't limit you to broader HTTP-based "integration" tests, and unlike general-purpose mocks it has the full Servlet API behaviour "baked in", so your tests can be "state"-based rather than "interaction"-based (e.g. your tests don't have to rely on the precise sequence of Servlet API calls made by your code, nor on your own expectations of how the Servlet API will respond to each call).
There's a simple example in my answer to How to test my servlet using JUnit. For full details and a free download see the ObMimic website.
A: This Question has a solution proposing Mockito How to test my servlet using JUnit
This limits the task to simple unit testing, without setting up any server-like environment.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "54"
}
|
Q: Reliable and performant cheap (ish) hosting for ASP.NET 3.5 and mysql I'm looking for someone reasonably cheap but better than the majority of budget hosts out there. I'm currently with brinkster.net and I've become increasingly annoyed at their immense unreliability and low available resources.
Fasthosts' business plan is close, but it has no MySQL, only ASP.NET 2.0, and is maybe slightly more expensive than I was hoping for.
A: I have had several sites hosted on http://discountasp.net and have had very good results. They are on year 4 of being voted best ASP.NET host in the asp.netPRO reader's choice survey.
A: I have had great luck with Viux.com - their customer service is top-notch and they were quick to implement asp.net 3.5. I moved all my sites (5) to Viux now and couldn't be happier. Very reliable and I can't say enough about their super fast and friendly service! MySQL comes free with all of their plans and MSSQL is $2/month.
I have tried quite a few hosts, and these guys are my favorite. If you decide on another, just make sure it is not M6.net, their customer service was just horrendous!
A: GoDaddy supports .NET 3.5 and mysql on their basic hosting packages.
A: We've used GoDaddy at my primary employment (day job :-) for several years and have had a positive experience with them (I also recently switched my home business from Yahoo! Small Business to GoDaddy).
Regarding reliability, I haven't had any problems with downtime. As a result, I have no first hand tech support experience with GoDaddy, but from what I read on the boards their tech support is pretty good (comparable to any other tech support I guess). They offer LINUX and Windows hosting (if that matters to you), MySQL and MSSQL database support, and .NET 3.5/AJAX.
And the price was reasonable, as far as I'm concerned.
A: I've been with these guys for quite a bit,
http://www.webhost4life.com/
Cheap and cheerful.
A: Something that might interest you -- ScottGu's latest blog post mentions Amazon's EC2 is going to support ASP.NET.
http://aws.typepad.com/aws/2008/10/coming-soon-ama.html
Depending on what you plan on doing, that could be of interest. It's usually pretty cheap, as well.
A: Dreamhost supports stackoverflow's podcast
http://www.dreamhost.com/
Edit: It looks like they don't support ASP.NET though, that was unexpected.
A: Take a look at ReliableSite.Net
It is cheap and good. They even throw in a free MS SQL 2005 database (1 GB; an extra DB costs $1), where other places charge $10/month and give you less than 500 MB of space.
So you can upgrade to MSSQL 2005 (not sure if you were just using MySQL because it is cheaper).
If you don't want to bother changing to MSSQL 2005, you can save it for another project (you can host unlimited domains on ReliableSite) and use the MySQL database that they also throw in for free.
I find ReliableSite does not nickel-and-dime you for every single thing; they have reasonable prices and great coupons.
Like this coupon for 15% off for life: "aspforum"
A: Planet Small Business http://www.planetsmb.com/ are pretty cheap, and have excellent customer service.
The only hassle I've had with them is over hosting WCF services. I wasn't able to host one as a native ASP.NET service; you have to do a bit of extra plumbing to manually add a service host, but nothing impossible, and their customer support was ready and waiting.
Highly recommended.
A: I use SmarterASP.NET to host multiple sites; they have a good control panel and their prices start from $2.95/month. You can also get a 60-day free trial so you can decide whether it's suitable for you.
http://www.SmarterASP.NET/index?r=100819197
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Advantages of VS 2008 over VS 2005 Could somebody please name a few. I could given time, but this is for somebody else, and I'd also like some community input.
A: Personally, I would say one of the biggest advantages I have found is that the product responds more quickly, i.e. it opens faster and compiles and runs projects faster. In my mind, why wouldn't you upgrade for this benefit alone?
Because of the environment I work in I am restricted to .NET 2.0, so I have not been able to take advantage of many of the other features such as multi-targeting.
However, for the ASP.NET work I have done, the split view and CSS support are great. Certainly that is the area where I have noticed the biggest functional improvement from VS2005 to VS2008. With CSS you still need to put in the effort to understand it, but why not get as much help as possible from the IDE as well?
Overall I have found it to be a very easy transition so I can't think of a reason not to upgrade.
A: Well, it supports .NET 3.5, which offers a lot of new features - it depends on whether you need them.
Other than that, they improved (also with SP1) the refactoring tool, compile speed, IntelliSense now works great with C# too, and you get the new C# compiler even when writing .NET 2.0 code. Also, ASP.NET designer performance has improved a lot.
In my opinion, even writing mostly .NET 2.0 code, I find it slightly better than 2005.
A: One big feature is it lets you target different versions of the .NET runtime, depending on the project.
A: If you're using the Microsoft unit testing framework, it's far better in 2008, i.e. it's actually usable.
A: Besides the usual support for the latest version of the framework and integrated unit testing, I personally find VS 2008 to be more stable, with better refactoring support, and a more mature (read: stable) product than VS 2005.
I used VS 2005 from when it appeared on the market until the first release of VS 2008, so I can tell the difference.
A: Depends on your programming language.
*
*On .net, the built-in support for .net 3.5 is obvious, although this is mainly project templates. However, SP1 adds .net Client Framework support, which is not possible within VS2005 to my knowledge.
*This also means support for WPF with a XAML Designer, although most people still prefer Expression Blend for WPF Interfaces.
*Apparently, there is now a JavaScript debugger, even though it seems a bit broken (not sure if SP1 fixes this)
In short: For .net 3.5, it's almost a must-have if you are a professional developer, but that is just my opinion.
A: The CSS property builder is much better IMHO.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Benefits of SQL Server 2005 over 2000 Could somebody please name a few. I could given time, but this is for somebody else, and I'd also like some community input.
A: Some differences:
*
*CLR (.NET) stored procedures
*SSIS instead of DTS
*Management Studio instead of Enterprise Manager, with more functions (2008 version is even better)
*VS integration
*better replication
*SMO and AMO (extensions to handle the server from applications)
*table and index partitioning
*XML as data type
*XQuery to handle XML data type
*Service Broker
*Notification Services
*Analysis Services
*Reporting Service
I have now these ones in mind. There are a lot of other small nice stuff, but I cannot name more.
A: Also, Common Table Expressions and exception management in TSQL. Very useful.
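As a sketch of the CTE side (the Employees table and its columns are invented for the example), a recursive CTE can walk a hierarchy that would have needed temp tables or repeated self-joins in SQL Server 2000:
WITH Reports (EmployeeID, ManagerID, Depth) AS
(
    -- anchor: the top of the hierarchy
    SELECT EmployeeID, ManagerID, 0
    FROM Employees
    WHERE ManagerID IS NULL
    UNION ALL
    -- recursive step: direct reports of the previous level
    SELECT e.EmployeeID, e.ManagerID, r.Depth + 1
    FROM Employees e
    INNER JOIN Reports r ON e.ManagerID = r.EmployeeID
)
SELECT EmployeeID, ManagerID, Depth
FROM Reports;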
A: Two things make it much better for me:
1 - Great XML support.
2 - Partitioned Tables. No more multiple-tables and views - just define your partition schema and you can easily manage HUGE tables with far improved performance.
A: Snapshot Isolation
Also known as readers don't block writers.
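A quick sketch of switching it on; the database name and the table in the sample transaction are placeholders:
-- allow SNAPSHOT transactions in the database
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
-- optionally make ordinary READ COMMITTED use row versioning as well
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;
-- then, in a session that should read without blocking (or being blocked by) writers:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT * FROM dbo.Orders;  -- sees a consistent snapshot
COMMIT;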
A: The Data Type varchar(MAX)
In SQL Server 2000 and SQL Server 7, a row cannot exceed 8000 bytes in size ... To solve this problem, Microsoft introduced the VARCHAR(MAX), NVARCHAR(MAX), and VARBINARY(MAX) data types in SQL Server 2005. These data types can hold the same amount of data BLOBs can hold (2 GB), and they are stored in the same type of data pages used for other data types. When data in a MAX data type exceeds 8 KB, an overflow page is used.
From http://www.teratrax.com/articles/varchar_max.html
A: CLR stored procedure support
A: Better clustering/replication facilities
A: Schemas - Okay, 2000 has owners, but they can be a real pain to get permissions right on.
A: Something very important is the TRY CATCH statement - SQL2005 supports such statement while SQL2000 does not.
sample:
BEGIN TRY
-- Generate divide-by-zero error.
SELECT 1/0;
END TRY
BEGIN CATCH
-- Execute custom error retrieval routine.
END CATCH;
A: It depends if you're talking about just the DB engine or the product as a whole. SQL Server 2000 didn't have anything like Analysis services and Reporting services. As for the DB engine - if you use it for a simple application, I don't think you should bother. Also, I think the SQL Express edition introduced in 2005 became valuable for one-man-show companies and investigation/testing small projects
A: SQL Express has larger database capacity than the previous MSDE product (4Gb vs 2Gb), so it's great as Slavo said for testing and small apps. SQL Server 2005 is updated via Microsoft Update, which is good or bad depending on your point of view.
A: The PIVOT/UNPIVOT operators have been a big win for me.
PIVOT rotates row-based output into columns, which is a huge help in a lot of our reporting needs. We had to roll our own functions for PIVOT prior to SQL 2005.
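A hedged example of the 2005 PIVOT syntax; the SalesByYear table and its columns are invented for illustration:
-- one row per region, one column per year
SELECT Region, [2004] AS Sales2004, [2005] AS Sales2005
FROM (
    SELECT Region, OrderYear, Amount
    FROM SalesByYear
) AS src
PIVOT (
    SUM(Amount) FOR OrderYear IN ([2004], [2005])
) AS p;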
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How can I find out how much of address space the application is consuming and report this to user? I'm writing the memory manager for an application, as part of a team of twenty-odd coders. We're running out of memory quota and we need to be able to see what's going on, since we only appear to be using about 700Mb. I need to be able to report where it's all going - fragmentation etc. Any ideas?
A: You can use existing memory debugging tools for this; I found Memory Validator quite useful. It is able to track both API-level (heap, new...) and OS-level (virtual memory) allocations and show virtual memory maps.
The other option, which I also found very useful, is to dump a map of the whole virtual space based on the VirtualQuery function. My code for this looks like this:
void PrintVMMap()
{
size_t start = 0;
// TODO: make portable - not compatible with /3GB, 64b OS or 64b app
size_t end = 1U<<31; // map 32b user space only - kernel space not accessible
SYSTEM_INFO si;
GetSystemInfo(&si);
size_t pageSize = si.dwPageSize;
size_t longestFreeApp = 0;
int index=0;
for (size_t addr = start; addr<end; )
{
MEMORY_BASIC_INFORMATION buffer;
SIZE_T retSize = VirtualQuery((void *)addr,&buffer,sizeof(buffer));
if (retSize==sizeof(buffer) && buffer.RegionSize>0)
{
// dump information about this region
printf(.... some buffer information here ....);
// track longest free region - useful fragmentation indicator
if (buffer.State&MEM_FREE)
{
if (buffer.RegionSize>longestFreeApp) longestFreeApp = buffer.RegionSize;
}
addr += buffer.RegionSize;
index+= buffer.RegionSize/pageSize;
}
else
{
// always proceed
addr += pageSize;
index++;
}
}
printf("Longest free VM region: %d",longestFreeApp);
}
A: You can also find out information about the heaps in a process with Heap32ListFirst/Heap32ListNext, and about loaded modules with Module32First/Module32Next, from the Tool Help API.
'Tool Help' originated on Windows 9x. The original process information API on Windows NT was PSAPI, which offers functions which partially (but not completely) overlap with Tool Help.
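As a sketch of the Tool Help route (minimal error handling, and note that the heap-walking functions can be slow on large heaps), this walks the current process's heaps and totals the busy blocks in each:
#include <windows.h>
#include <tlhelp32.h>
#include <cstdio>
void DumpHeapUsage()
{
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPHEAPLIST, GetCurrentProcessId());
    if (snap == INVALID_HANDLE_VALUE) return;
    HEAPLIST32 hl = { sizeof(hl) };
    if (Heap32ListFirst(snap, &hl))
    {
        do
        {
            SIZE_T busy = 0;
            HEAPENTRY32 he = { sizeof(he) };
            if (Heap32First(&he, GetCurrentProcessId(), hl.th32HeapID))
            {
                do
                {
                    if (he.dwFlags != LF32_FREE)
                        busy += he.dwBlockSize;   // bytes in allocated blocks
                    he.dwSize = sizeof(he);
                } while (Heap32Next(&he));
            }
            printf("Heap %p: %lu bytes in busy blocks\n",
                   (void*)hl.th32HeapID, (unsigned long)busy);
            hl.dwSize = sizeof(hl);
        } while (Heap32ListNext(snap, &hl));
    }
    CloseHandle(snap);
}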
A: Our (huge) application (a Win32 game) started throwing "Not enough quota" exceptions recently, and I was charged with finding out where all the memory was going. It is not a trivial job - this question and this one were my first attempts at finding out. Heap behaviour is unexpected, and accurately tracking how much quota you've used and how much is available has so far proved impossible. In fact, it's not particularly useful information anyway - "quota" and "somewhere to put things" are subtly and annoyingly different concepts. The accepted answer is as good as it gets, although enumerating heaps and modules is also handy. I used DebugDiag from MS to view the true horror of the situation, and understand how hard it is to actually thoroughly track everything.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How do I use the TranslateBehavior in CakePHP? There is no documentation on cakephp.org and I am unable to find one on google. Please link me some documentation or supply one!
A: The translate behavior is another of CakePHP's very useful but poorly documented features. I've implemented it a couple of times with reasonable success in multi-lingual websites along the following lines.
Firstly, the translate behavior will only internationalize the database content of your site. If you've any more static content, you'll want to look at Cake's __('string') wrapper function and gettext (there's some useful information about this here)
Assuming there's Contents that we want to translate with the following db table:
CREATE TABLE `contents` (
`id` int(11) unsigned NOT NULL auto_increment,
`title` varchar(255) default NULL,
`body` text,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
The content.php model then has:
var $actsAs = array('Translate' => array('title' => 'titleTranslation',
'body' => 'bodyTranslation'
));
in its definition. You then need to add the i18n table to the database thusly:
CREATE TABLE `i18n` (
`id` int(10) NOT NULL auto_increment,
`locale` varchar(6) NOT NULL,
`model` varchar(255) NOT NULL,
`foreign_key` int(10) NOT NULL,
`field` varchar(255) NOT NULL,
`content` mediumtext,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Then when you're saving the data to the database in your controller, set the locale to the language you want (this example would be for Polish):
$this->Content->locale = 'pol';
$result = $this->Content->save($this->data);
This will create entries in the i18n table for the title and body fields for the pol locale. Finds will find based on the current locale set in the user's browser, returning an array like:
[Content]
[id]
[titleTranslation]
[bodyTranslation]
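As a sketch of the read side (the $id is hypothetical), you can also force a locale on the model before a find instead of relying on the browser's locale:
$this->Content->locale = 'pol';
$content = $this->Content->find('first', array(
    'conditions' => array('Content.id' => $id)
));
// $content['Content']['titleTranslation'] and ['bodyTranslation'] now hold the 'pol' text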
We use the excellent p28n component to implement a language switching solution that works pretty well with the gettext and translate behaviours.
It's not a perfect system - as it creates HABTM relationships on the fly, it can cause some issues with other relationships you may have created manually, but if you're careful, it can work well.
A: For anyone searching for the same thing, CakePHP has updated their documentation; see the section on the Translate behavior.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How do I use INotifyPropertyChanged to update an array binding? Let's say I have a class:
class Foo
{
public string Bar
{
get { ... }
}
public string this[int index]
{
get { ... }
}
}
I can bind to these two properties using "{Binding Path=Bar}" and "{Binding Path=[x]}". Fine.
Now let's say I want to implement INotifyPropertyChanged:
class Foo : INotifyPropertyChanged
{
public string Bar
{
get { ... }
set
{
...
if( PropertyChanged != null )
{
PropertyChanged( this, new PropertyChangedEventArgs( "Bar" ) );
}
}
}
public string this[int index]
{
get { ... }
set
{
...
if( PropertyChanged != null )
{
PropertyChanged( this, new PropertyChangedEventArgs( "????" ) );
}
}
}
public event PropertyChangedEventHandler PropertyChanged;
}
What goes in the part marked ????? (I've tried string.Format("[{0}]", index) and it doesn't work). Is this a bug in WPF, is there an alternative syntax, or is it simply that INotifyPropertyChanged isn't as powerful as normal binding?
A: Avoiding strings in your code, you can use the constant Binding.IndexerName, which is actually "Item[]"
new PropertyChangedEventArgs(Binding.IndexerName)
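Dropped into the question's Foo class, a minimal sketch of the indexer setter looks like this (the string[] backing store is invented for the example):
using System.ComponentModel;
using System.Windows.Data;
class Foo : INotifyPropertyChanged
{
    private readonly string[] values = new string[10]; // placeholder backing store
    public string this[int index]
    {
        get { return values[index]; }
        set
        {
            values[index] = value;
            PropertyChangedEventHandler handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs(Binding.IndexerName)); // "Item[]"
        }
    }
    public event PropertyChangedEventHandler PropertyChanged;
}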
A: PropertyChanged( this, new PropertyChangedEventArgs( "Item[]" ) )
for all indexes and
PropertyChanged( this, new PropertyChangedEventArgs( "Item[" + index + "]" ) )
for a single item
greetings, jerod
A: Don't know for sure if this'll work, but reflector shows that the get and set methods for an indexed property are called get_Item and set_Item. Perhaps you could try Item and see if that works.
A: Thanks to Cameron's suggestion, I've found the correct syntax, which is:
Item[]
Which updates everything (all index values) bound to that indexed property.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: Can you register an existing instance of a type in the Windsor Container? In the Windsor IOC container is it possible to register a type that I've already got an instance for, instead of having the container create it?
A: There is an AddComponentInstance method on the container's Kernel property.
From the Unit Tests:
[Test]
public void AddComponentInstance()
{
CustomerImpl customer = new CustomerImpl();
kernel.AddComponentInstance("key", typeof(ICustomer), customer);
Assert.IsTrue(kernel.HasComponent("key"));
CustomerImpl customer2 = kernel["key"] as CustomerImpl;
Assert.AreSame(customer, customer2);
customer2 = kernel[typeof(ICustomer)] as CustomerImpl;
Assert.AreSame(customer, customer2);
}
[Test]
public void AddComponentInstance_ByService()
{
CustomerImpl customer = new CustomerImpl();
kernel.AddComponentInstance<ICustomer>(customer);
Assert.AreSame(kernel[typeof(ICustomer)],customer);
}
[Test]
public void AddComponentInstance2()
{
CustomerImpl customer = new CustomerImpl();
kernel.AddComponentInstance("key", customer);
Assert.IsTrue(kernel.HasComponent("key"));
CustomerImpl customer2 = kernel["key"] as CustomerImpl;
Assert.AreSame(customer, customer2);
customer2 = kernel[typeof(CustomerImpl)] as CustomerImpl;
Assert.AreSame(customer, customer2);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: replace-char in Emacs Lisp ? Emacs Lisp has replace-string but has no replace-char. I want to replace "typographic" curly quotes (Emacs code for this character is hexadecimal 53979) with regular ASCII quotes, and I can do so with:
(replace-string (make-string 1 ?\x53979) "'")
I think it would be better with replace-char.
What is the best way to do this?
A: This is the way I replace characters in elisp:
(subst-char-in-string ?' ?’ "John's")
gives:
"John’s"
Note that this function doesn't accept characters as string. The first and second argument must be a literal character (either using the ? notation or string-to-char).
Also note that this function can be destructive if the optional inplace argument is non-nil.
A: Why not just use
(replace-string "\x53979" "'")
or
(while (search-forward "\x53979" nil t)
(replace-match "'" nil t))
as recommended in the documentation for replace-string?
A:
which would certainly be better with replace-char. Any way to improve my code?
Is it actually slow to the point where it matters? My elisp is usually ridiculously inefficient and I never notice. (I only use it for editor tools though, YMMV if you're building the next MS live search with it.)
Also, reading the docs:
This function is usually the wrong thing to use in a Lisp program.
What you probably want is a loop like this:
(while (search-forward "’" nil t)
(replace-match "'" nil t))
This answer is probably GPL licensed now.
A: What about this
(defun my-replace-smart-quotes (beg end)
"replaces ’ (the curly typographical quote, unicode hexa 2019) to ' (ordinary ascii quote)."
(interactive "r")
(save-excursion
(format-replace-strings '(("\x2019" . "'")) nil beg end)))
Once you have that in your dotemacs, you can paste elisp example codes (from blogs and etc) to your scratch buffer and then immediately press C-M-\ (to indent it properly) and then M-x my-replace-smart-quotes (to fix smart quotes) and finally C-x C-e (to run it).
I find that the curly quote is always hexa 2019, are you sure it's 53979 in your case? You can check characters in buffer with C-u C-x =.
I think you can write "’" in place of "\x2019" in the definition of my-replace-smart-quotes and be fine. It's just to be on the safe side.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Multiple Inheritance in PHP I'm looking for a good, clean way to go around the fact that PHP5 still doesn't support multiple inheritance. Here's the class hierarchy:
Message
-- TextMessage
-------- InvitationTextMessage
-- EmailMessage
-------- InvitationEmailMessage
The two types of Invitation* classes have a lot in common; i'd love to have a common parent class, Invitation, that they both would inherit from. Unfortunately, they also have a lot in common with their current ancestors... TextMessage and EmailMessage. Classical desire for multiple inheritance here.
What's the most light-weight approach to solve the issue?
Thanks!
A: The Symfony framework has a mixin plugin for this, you might want to check it out -- even just for ideas, if not to use it.
The "design pattern" answer is to abstract the shared functionality into a separate component, and compose at runtime. Think about a way to abstract out the Invitation functionality out as a class that gets associated with your Message classes in some way other than inheritance.
A: I'm using traits in PHP 5.4 as the way of solving this.
http://php.net/manual/en/language.oop5.traits.php
This allows for classic inheritance with extends, but also gives the possible of placing common functionality and properties into a 'trait'. As the manual says:
Traits is a mechanism for code reuse in single inheritance languages such as PHP. A Trait is intended to reduce some limitations of single inheritance by enabling a developer to reuse sets of methods freely in several independent classes living in different class hierarchies.
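A minimal sketch using the class names from the question; the trait body itself is invented just to show the mechanism:
trait InvitationBehaviour
{
    protected $rsvpDeadline;
    public function setRsvpDeadline($date)
    {
        $this->rsvpDeadline = $date;
    }
}
class Message { /* ... */ }
class TextMessage extends Message { /* ... */ }
class EmailMessage extends Message { /* ... */ }
class InvitationTextMessage extends TextMessage
{
    use InvitationBehaviour; // shared invitation code, no second parent needed
}
class InvitationEmailMessage extends EmailMessage
{
    use InvitationBehaviour;
}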
A: It sounds like the decorator pattern may be suitable, but hard to tell without more details.
A: This is both a question and a solution....
What about the magical _call(),_get(), __set() methods? I have not yet tested this solution but what if you make a multiInherit class. A protected variable in a child class could contain an array of classes to inherit. The constructor in the multi-interface class could create instances of each of the classes that are being inherited and link them to a private property, say _ext. The __call() method could use the method_exists() function on each of the classes in the _ext array to locate the correct method to call. __get() and __set could be used to locate internal properties, or if your an expert with references you could make the properties of the child class and the inherited classes be references to the same data. The multiple inheritance of your object would be transparent to code using those objects. Also, internal objects could access the inherited objects directly if needed as long as the _ext array is indexed by class name. I have envisioned creating this super-class and have not yet implemented it as I feel that if it works than it could lead to developing some vary bad programming habits.
A: Maybe you can replace an 'is-a' relation with a 'has-a' relation? An Invitation might have a Message, but it does not necessarily need to be a Message. An Invitation, for example, might be confirmed, which does not fit well with the Message model.
Search for 'composition vs. inheritance' if you need to know more about that.
A: Alex, most of the time, needing multiple inheritance is a signal that your object structure is somewhat incorrect. In the situation you outlined, the class responsibility is simply too broad. If Message is part of the application's business model, it should not take care of rendering output. Instead, you could split the responsibility and use a MessageDispatcher that sends the Message it is passed using a text or HTML backend. I don't know your code, but let me simulate it this way:
$m = new Message();
$m->type = 'text/html';
$m->from = 'John Doe <jdoe@yahoo.com>';
$m->to = 'Random Hacker <rh@gmail.com>';
$m->subject = 'Invitation email';
$m->importBody('invitation.html');
$d = new MessageDispatcher();
$d->dispatch($m);
This way you can add some specialisation to Message class:
$htmlIM = new InvitationHTMLMessage(); // html type, subject and body configuration in constructor
$textIM = new InvitationTextMessage(); // text type, subject and body configuration in constructor
$d = new MessageDispatcher();
$d->dispatch($htmlIM);
$d->dispatch($textIM);
Note that MessageDispatcher would make a decision whether to send as HTML or plain text depending on type property in Message object passed.
// in MessageDispatcher class
public function dispatch(Message $m) {
if ($m->type == 'text/plain') {
$this->sendAsText($m);
} elseif ($m->type == 'text/html') {
$this->sendAsHTML($m);
} else {
throw new Exception("MIME type {$m->type} not supported");
}
}
To sum it up, responsibility is split between two classes. Message configuration is done in InvitationHTMLMessage/InvitationTextMessage class, and sending algorithm is delegated to dispatcher. This is called Strategy Pattern, you can read more on it here.
A: If I can quote Phil in this thread...
PHP, like Java, does not support multiple inheritance.
Coming in PHP 5.4 will be traits which attempt to provide a solution
to this problem.
In the meantime, you would be best to re-think your class design. You
can implement multiple interfaces if you're after an extended API to
your classes.
And Chris....
PHP doesn't really support multiple inheritance, but there are some
(somewhat messy) ways to implement it. Check out this URL for some
examples:
http://www.jasny.net/articles/how-i-php-multiple-inheritance/
Thought they both had useful links. Can't wait to try out traits or maybe some mixins...
A: I have a couple of questions to ask to clarify what you are doing:
1) Does your message object just contain a message e.g. body, recipient, schedule time?
2) What do you intend to do with your Invitation object? Does it need to be treated specially compared to an EmailMessage?
3) If so WHAT is so special about it?
4) If that is then the case, why do the message types need handling differently for an invitation?
5) What if you want to send a welcome message or an OK message? Are they new objects too?
It does sound like you are trying to combine too much functionality into a set of objects that should only be concerned with holding a message's contents - and not with how it should be handled. To me, you see, there is no difference between an invitation and a standard message. If the invitation requires special handling, then that is application logic and not a message type.
For example: a system I built had a shared base message object that was extended into SMS, Email, and other message types. However: these were not extended further - an invitation message was simply pre-defined text to be sent via a message of type Email. A specific Invitation application would be concerned with validation and other requirements for an invite. After all, all you want to do is send message X to recipient Y which should be a discrete system in its own right.
A: It's the same problem as in Java. Try using interfaces with abstract functions to solve that problem.
A: PHP does support interfaces. This could be a good bet, depending on your use-cases.
A: How about an Invitation class right below the Message class?
so the hierarchy goes:
Message
--- Invitation
------ TextMessage
------ EmailMessage
And in Invitation class, add the functionality that was in InvitationTextMessage and InvitationEmailMessage.
I know that Invitation isn't really a type of Message, it's more a functionality of Message. So I'm not sure if this is good OO design or not.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "99"
}
|
Q: Eclipse 3.3.2/MyEclipse doesn't recognize xml as ant files Using Eclipse 3.3.2 with MyEclipse installed. For some reason, if a file isn't called build.xml then it isn't recognised as an Ant file. The file association for *.xml includes Ant and says "locked by 'Ant Buildfile' content type".
The Run As menu is broken. Even if the editor association works, Run As doesn't.
The Ant buildfiles in question are correctly formatted. They work fine if you call them build.xml or if you use them anywhere else. Eclipse just won't recognise them and thus won't allow you to run them.
A: The environment inspects the file contents to determine if it is an Ant file (if it isn't called "build.xml"). Add the following to the XML file:
<?xml version="1.0" encoding="UTF-8"?>
<project name="myproject" default="t1">
<target name="t1"></target>
</project>
You should now see the "Ant Editor" in the "Open With >" menu when you right-click on the file.
A: I was having a similar problem and found that the Ant Tools weren't included in the Eclipse binary I downloaded. You can try installing the Eclipse Java Development Tools. These can be found under Java Development > Eclipse Java Development Tools in Help > Software Updates > Available Software.
A: If you open the "File Associations" page (Window -> Preferences -> General -> Editors -> File Associations) you should see a list of all file types which Eclipse recognises. Scroll down to the "*.xml" entry, highlight "Ant Editor" in the "Associated Editors" pane and hit the "Default" button on the right-hand side. Eclipse should now open any XML files with the ant editor.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How do I iterate a .Net IList collection in the reverse order? I have an IList that contains items (parent first). They need to be added to a diagram document in reverse order so that the parent is added last and drawn on top, making it the first thing the user selects.
What's the best way to do it? Something better/more elegant than what I am doing currently, which I post below.
A: If you have .NET 3.5 you could use LINQ's Reverse?
foreach(var item in obEvtArgs.NewItems.Reverse())
{
...
}
(Assuming you're talking about the generic IList)
A: Based on the comments to Davy's answer, and Gishu's original answer, you could cast your weakly-typed System.Collections.IList to a generic collection using the System.Linq.Enumerable.Cast extension method:
var reversedCollection = obEvtArgs.NewItems
.Cast<IMySpecificObject>( )
.Reverse( );
This removes the noise of both the reverse for loop, and the as cast to get a strongly-typed object from the original collection.
A: NewItems is my List here... This is a bit clunky though.
for(int iLooper = obEvtArgs.NewItems.Count-1; iLooper >= 0; iLooper--)
{
GoViewBoy.Document.Add(CreateNodeFor(obEvtArgs.NewItems[iLooper] as IMySpecificObject, obNextPos));
}
A: You don't need LINQ:
var reversed = new List<T>(original); // assuming original has type IList<T>
reversed.Reverse();
foreach (T e in reversed) {
...
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/90996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How to create a path to a temporary file on Windows XP/Vista What is the best way of doing this?
tmpnam() returns a path to a file in the root of the drive, which requires administrator privileges on Windows Vista, so this is not an option.
A: GetTempPath and GetTempFileName should work.
A: The environment variable %TEMP% on Windows points to the user's temp directory.
In managed C++ you can call Path::GetTempFileName(), which will give you a temporary file in the user's temp directory (which can be found using Path::GetTempPath()). GetTempFileName() basically just gives you a path to a uniquely named file in the %TEMP% path. You then use that path to do what you want with the file. You could do similar logic in any language that has access to the current process's environment variables.
Hope that helps,
Martin.
A: Have you tried with the environment variables TEMP and TMP set to a directory writable by all?
To change environment variables in XP (not familiar with Vista), you go to System Properties, [Advanced] tab, [Environment Variables] button.
A: Perhaps you could use the Win32 function GetTempPath() in kernel32.dll. This is wrapped in .NET by System.IO.Path.GetTempPath(), and System.IO.Path.GetTempFileName() builds on it to create the file for you.
On XP this returns a path in C:\Documents and Settings\username\Local Settings\Temp\, so you should not require admin privileges.
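For illustration, here is a small C# sketch of those managed calls (as I understand them, Path.GetTempPath() returns the per-user temp directory and Path.GetTempFileName() creates a uniquely named, zero-byte file there and returns its full path):
using System;
using System.IO;

class TempFileDemo
{
    static void Main()
    {
        string tempDir = Path.GetTempPath();      // e.g. ...\Local Settings\Temp\ on XP
        string tempFile = Path.GetTempFileName(); // the file is created for you

        Console.WriteLine(tempDir);
        Console.WriteLine(tempFile);

        File.WriteAllText(tempFile, "scratch data");
        File.Delete(tempFile); // clean up when done
    }
}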
A: If you care about interoperability, the man page for tmpnam suggests:
tmpnam man page
BUGS
Never use this function. Use mkstemp(3) instead.
mkstemp man page
SYNOPSIS
#include <stdlib.h>
int mkstemp(char *template);
DESCRIPTION
The mkstemp() function generates a unique temporary file name from template. The last six characters of template must be
XXXXXX and these are replaced with a string that makes the filename unique. The file is then created with mode read/write
but all of this assumes that you have prepared your template, prefixed with the contents of the TMP environment variable.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How to rewrite a URL on a JBoss server? I would like to redirect/rewrite these two kinds of URLs:
*
*mydomain.com -> newdomain.com
*mydomain.com/specificPage -> newdomain.com/newSpecificPage
*mydomain.com/anyOtherPage -> mydomain.com/anyOtherPage (no redirect here)
So I just want to redirect the root domain to a new domain, and some pages from my domain to some pages on a new domain...
How can I do that on a JBoss server ?
A: You might take a look at this http://code.google.com/p/urlrewritefilter/
A: Have you looked into http://www.jboss.org/jbossweb/modules/rewrite.html? It looks like what you're looking for, and it's pretty similar to Mod_rewrite for Apache.
A: Sounds like you want to send an HTTP 301 Moved Permanently response.
RewriteCond %{REQUEST_URI} ^URI_TO_REDIRECT
RewriteRule ^ NEW_SITE [R=301,L]
or similar. The R=301 flag issues the permanent redirect, and [L] tells mod_rewrite to stop processing further rules.
A: If you are routing through apache at all it is possible to use mod_rewrite; you just need to be careful as to where you declare the rewrite rules. Directory configs and .htaccess files won't work; you need it as a global configuration for the entire host. Similar thread on serverfault.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: Java EE App Server Hello World I am fairly comfortable with standalone Java app development, but will soon be working on a project using a Java EE application server.
Does anyone know of a straightforward how-to tutorial for getting a hello-world type application working in an application server? I'm (perhaps naively) assuming that the overall approach is similar between different frameworks, so I'm more interested in finding out the approach rather than getting bogged down in differences between the different frameworks.
If you are not aware of a good guide, then could you post bullet-point type steps to getting a hello-world running?, i.e.
*
*Download XX
*Write some code to do YY
*Change file ZZ
*Other steps...
Note: Since I have a Windows machine at home, I would prefer something that can be run on Windows, but in the interest of a better answer, Linux/Mac based implementations are welcome.
A: I would choose JBoss AS or GlassFish for a start. However I'm not sure what you mean by Java EE "Hello World". If you just want to deploy some JSP you could use this tutorial (for JBoss):
http://www.centerkey.com/jboss/
If you want to get further and do the EJB stack and/or deploy an ear-file, you could read the very good JBoss documentation:
Installation Guide
Getting started
Configuration Guide
In general you could also just do the basic installation and change or try the pre-installed example applications.
I currently have JBoss installed (on Windows). I develop with Eclipse and use the Java EE server integration to hot deploy or debug my code. After you get your first code running you really should have a look at the IDE integration, since it makes development/deploy roundtrips so much faster.
A: The Java EE (they dropped the 2) space is pretty big. A good tutorial to start with is the one from Sun. For a simple hello world application, the web container alone would suffice. A well-known servlet/JSP container is Tomcat. See here for installation instructions. Try installing it with Eclipse and creating a web project. This will generate some files for you that you can look at and edit. Also, starting and stopping the application server is simpler.
A: Another option is to get Oracle JDeveloper (free to download and use - it's a full featured IDE that includes some neat extras like the SQL workbench and BPEL designer).
As a learning tool, it is quite good, not only for the tutorials available from Oracle, but it includes a range of "cue-card" lessons in the tool itself to teach many common techniques.
cue card view http://tardate.heroku.com/images/jdev-cuecards.jpg
A: If you haven't gone near NetBeans in a while, it's catching up with Eclipse very fast and is worth a look, especially when starting Java EE.
Version 6.x installs Tomcat and/or Glassfish for you and then provides wizards to create/deploy/redeploy applications.
The initial tutorial on Web Applications is here and a more complex example here.
A: As JeroenWyseur puts it, Java EE is a fairly big space. In addition to what he said, you should try to get more details of what exactly you'll be doing: servlets & co, EJB (entity, session, message beans?) and try to get familiar with that.
It should be clear to you that your code runs in a managed environment, which imposes a lot of constraints. In order to make sure you understand what happens, you should get familiar with the concept of deployment. Then, if you do EJBs, transaction management is important too. If you don't understand exactly what happens when a bean or a servlet is deployed, how transactions are managed, and how beans are invoked, you're going to have a hard time.
A book that helped me a lot back in the day is Mastering EJB, by Ed Roman.
Also, getting familiar with RMI will help you understand EJBs.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Emacs, switch to previous window In Emacs, C-x o takes me to the next window.
What keyboard macro takes me to the previous window in Emacs?
A: That'd be C-- C-x o
In other words, C-x o with an argument of -1. You can specify how many windows to move by inserting a numeric argument between C-u and the command, as in C-u 2 C-x o. (C-- is a shortcut for C-u -1)
A: Based on the idea from @Nate, but slightly modified to support backwards cycling between windows
;; Windows Cycling
(defun windmove-up-cycle()
(interactive)
(condition-case nil (windmove-up)
(error (condition-case nil (windmove-down)
(error (condition-case nil (windmove-right) (error (condition-case nil (windmove-left) (error (windmove-up))))))))))
(defun windmove-down-cycle()
(interactive)
(condition-case nil (windmove-down)
(error (condition-case nil (windmove-up)
(error (condition-case nil (windmove-left) (error (condition-case nil (windmove-right) (error (windmove-down))))))))))
(defun windmove-right-cycle()
(interactive)
(condition-case nil (windmove-right)
(error (condition-case nil (windmove-left)
(error (condition-case nil (windmove-up) (error (condition-case nil (windmove-down) (error (windmove-right))))))))))
(defun windmove-left-cycle()
(interactive)
(condition-case nil (windmove-left)
(error (condition-case nil (windmove-right)
(error (condition-case nil (windmove-down) (error (condition-case nil (windmove-up) (error (windmove-left))))))))))
(global-set-key (kbd "C-x <up>") 'windmove-up-cycle)
(global-set-key (kbd "C-x <down>") 'windmove-down-cycle)
(global-set-key (kbd "C-x <right>") 'windmove-right-cycle)
(global-set-key (kbd "C-x <left>") 'windmove-left-cycle)
A: Just to add to @Nate, @aspirin and @Troydm's answer I find this to be a very helpful addition if you decide to bind the windmove commands to whatever key combination you choose:
(setq windmove-wrap-around t)
With the default configuration you will get an error when you attempt to move to a window that doesn't exist, which becomes kind of annoying after a while. However, when windmove-wrap-around is set, attempting to move off the bottom of the frame (for example) will instead select the topmost window in the frame. This may be a more intuitive behaviour for you.
A: M-n and M-p makes the most sense to me, since they are analogous to C-n (next-line) and C-p (previous-line):
(define-key global-map (kbd "M-p") 'previous-multiframe-window)
(define-key global-map (kbd "M-n") 'other-window)
(inspired by to this and that)
A: Personally I prefer to use window-number.el
To select a different window, use Ctrl-x, Ctrl-j n
Where n is the number of the window; the modeline of each window shows its number, as shown in the screenshot.
Just download window-number.el, place it in your emacs load-path and use the following in your .emacs
(autoload 'window-number-mode "window-number"
"A global minor mode that enables selection of windows according to
numbers with the C-x C-j prefix. Another mode,
`window-number-meta-mode' enables the use of the M- prefix."
t)
There's another similar mode called switch-window.el which gives you big numbers in the windows... (pressing the number switches the window and reverts the display.)
A: In reference to Nate's answer, I replaced the arrow keys with the traditional p for going up, n for going down, f for going right and b for going left. I also replaced Ctrl with the Super key, as C-p, C-n, C-f and C-b are the default movement keys, and their combination with M lets you jump words and lines instead of going through them one by one after each keystroke. Thus the Super key felt like the best choice to keep it an easy key binding. Also, now you don't have to take your hand off the home row any more!
(global-set-key (kbd "s-p") `windmove-up)
(global-set-key (kbd "s-n") `windmove-down)
(global-set-key (kbd "s-f") `windmove-right)
(global-set-key (kbd "s-b") `windmove-left)
Hope it helps!
A: (global-unset-key (kbd "M-j"))
(global-unset-key (kbd "M-k"))
(global-set-key (kbd "M-j") (lambda () (interactive) (other-window 1)))
(global-set-key (kbd "M-k") (lambda () (interactive) (other-window -1)))
Alt-j and Alt-k will cycle through your visible windows, forwards and backwards, to be exact.
A: If you work with multiple Emacs windows (>3) a lot and want to save some keystrokes, add this to your init file and you'll be better off:
(defun frame-bck()
(interactive)
(other-window-or-frame -1)
)
(define-key (current-global-map) (kbd "M-o") 'other-window-or-frame)
(define-key (current-global-map) (kbd "M-O") 'frame-bck)
Now just cycle quickly thru the windows with M-o
A: There are some very good and complete answers here, but to answer the question in a minimalist fashion:
(defun prev-window ()
(interactive)
(other-window -1))
(define-key global-map (kbd "C-x p") 'prev-window)
A: You might also want to try using windmove which lets you navigate to the window of your choice based on geometry. I have the following in my .emacs file to change windows using C-x arrow-key.
(global-set-key (kbd "C-x <up>") 'windmove-up)
(global-set-key (kbd "C-x <down>") 'windmove-down)
(global-set-key (kbd "C-x <right>") 'windmove-right)
(global-set-key (kbd "C-x <left>") 'windmove-left)
A: There is already a package that lets you switch windows by using M-<arrow keys>. Check this website. Add this to your init file:
(require 'windmove)
(windmove-default-keybindings 'meta) ;; or use 'super to use the Windows key instead of Alt
A: (global-set-key (kbd "C-x a") 'ace-swap-window)
(global-set-key (kbd "C-x q") 'ace-select-window)
Download ace-window from the MELPA repo. If you don't know how to do that, put this in your .emacs file (if you don't have one, create it):
(package-initialize)
(require 'package)
(add-to-list 'package-archives '("melpa" , "http://melpa.org/packages/"))
(package-initialize)
then "m-x list-packages"
A: The fastest method I have found for switching to the previous window is to mash a couple keys together as a "key-chord". The following lets you use your left pinky+ring fingers together to go to previous window:
(key-chord-define-global "qw" 'prev-window)
(key-chord-define-global "'y" 'other-window) ; bonus for my colemak, adjust otherwise
(key-chord-define-global ";'" 'other-window) ; probably normal
(This is possible because Emacs key chords are order independent, meaning that qw is the same as wq.)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "97"
}
|
Q: Why does EasyPHP stop all Apache processes on several PCs? On some Windows PCs, when I have both EasyPHP and a standalone Apache service configured on another network port and with a specific service name, I have a problem: when I stop EasyPHP, the other Apache is stopped too.
The problem does not occur on all the PCs I have, which seems very strange to me.
Any idea?
More information after the answer from Radar: my process has a special name (ecapache), but EasyPHP does not seem to use services; rather, it launches the servers directly.
Thanks
Cédric
A: Maybe the way it's killing it is that it's finding all processes called "apache" or similar and just killing them, regardless of whether it 'owns' them or not.
A: I think both are integrated together; it does not occur on all PCs because of the port number.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do I get my C# program to sleep for 50 milliseconds? How do I get my C# program to sleep (pause execution) for 50 milliseconds?
A: For readability:
using System.Threading;
Thread.Sleep(TimeSpan.FromMilliseconds(50));
A: You can't specify an exact sleep time in Windows. You need a real-time OS for that. The best you can do is specify a minimum sleep time. Then it's up to the scheduler to wake up your thread after that. And never call .Sleep() on the GUI thread.
A: Since now you have async/await feature, the best way to sleep for 50ms is by using Task.Delay:
async void foo()
{
// something
await Task.Delay(50);
}
Or if you are targeting .NET 4 (with Async CTP 3 for VS2010 or Microsoft.Bcl.Async), you must use:
async void foo()
{
// something
await TaskEx.Delay(50);
}
This way you won't block UI thread.
A: System.Threading.Thread.Sleep(50);
Remember though, that doing this in the main GUI thread will block your GUI from updating (it will feel "sluggish")
Just remove the ; to make it work for VB.net as well.
A: Use this code
using System.Threading;
// ...
Thread.Sleep(50);
A: Starting with .NET Framework 4.5, you can use:
using System.Threading.Tasks;
Task.Delay(50).Wait(); // wait 50ms
A: There are basically 3 choices for waiting in (almost) any programming language:
*
*Loose waiting
*
*Executing thread blocks for given time (= does not consume processing power)
*No processing is possible on blocked/waiting thread
*Not so precise
*Tight waiting (also called tight loop)
*
*processor is VERY busy for the entire waiting interval (in fact, it usually consumes 100% of one core's processing time)
*Some actions can be performed while waiting
*Very precise
*Combination of previous 2
*
*It usually combines processing efficiency of 1. and preciseness + ability to do something of 2.
for 1. - Loose waiting in C#:
Thread.Sleep(numberOfMilliseconds);
However, the Windows thread scheduler limits the accuracy of Sleep() to around 15ms (so Sleep can easily wait for 20ms, even if scheduled to wait for just 1ms).
for 2. - Tight waiting in C# is:
Stopwatch stopwatch = Stopwatch.StartNew();
while (true)
{
//some other processing to do possible
if (stopwatch.ElapsedMilliseconds >= millisecondsToWait)
{
break;
}
}
We could also use DateTime.Now or other means of time measurement, but Stopwatch is much faster (and this would really become visible in tight loop).
for 3. - Combination:
Stopwatch stopwatch = Stopwatch.StartNew();
while (true)
{
//some other processing to do STILL POSSIBLE
if (stopwatch.ElapsedMilliseconds >= millisecondsToWait)
{
break;
}
Thread.Sleep(1); //so processor can rest for a while
}
This code regularly blocks thread for 1ms (or slightly more, depending on OS thread scheduling), so processor is not busy for that time of blocking and code does not consume 100% of processor's power. Other processing can still be performed in-between blocking (such as: updating of UI, handling of events or doing interaction/communication stuff).
A: Thread.Sleep(50);
The thread will not be scheduled for execution by the operating system for the amount of time specified. This method changes the state of the thread to include WaitSleepJoin.
This method does not perform standard COM and SendMessage pumping.
If you need to sleep on a thread that has STAThreadAttribute, but you want to perform standard COM and SendMessage pumping, consider using one of the overloads of the Join method that specifies a timeout interval.
Thread.Join
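As a rough sketch of that suggestion (the class name here is just illustrative), joining the current thread with a timeout behaves like a sleep while message pumping continues on an STA thread:
using System;
using System.Threading;

class PumpingSleepDemo
{
    [STAThread]
    static void Main()
    {
        // "Sleep" for 50 ms; unlike Thread.Sleep, this overload of Join
        // keeps pumping COM and SendMessage messages on this STA thread.
        Thread.CurrentThread.Join(TimeSpan.FromMilliseconds(50));
    }
}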
A: Best of both worlds:
using System.Runtime.InteropServices;
[DllImport("winmm.dll", EntryPoint = "timeBeginPeriod", SetLastError = true)]
private static extern uint TimeBeginPeriod(uint uMilliseconds);
[DllImport("winmm.dll", EntryPoint = "timeEndPeriod", SetLastError = true)]
private static extern uint TimeEndPeriod(uint uMilliseconds);
/**
* Extremely accurate sleep is needed here to maintain performance so system resolution time is increased
*/
private void accurateSleep(int milliseconds)
{
//Increase timer resolution from 20 milliseconds to 1 millisecond
TimeBeginPeriod(1);
Stopwatch stopwatch = new Stopwatch();//Makes use of QueryPerformanceCounter WIN32 API
stopwatch.Start();
while (stopwatch.ElapsedMilliseconds < milliseconds)
{
//So we don't burn cpu cycles
if ((milliseconds - stopwatch.ElapsedMilliseconds) > 20)
{
Thread.Sleep(5);
}
else
{
Thread.Sleep(1);
}
}
stopwatch.Stop();
//Set it back to normal.
TimeEndPeriod(1);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "323"
}
|
Q: How to match a single quote in sed How to match a single quote in sed if the expression is enclosed in single quotes:
sed -e '...'
For example need to match this text:
'foo'
A: You can either use:
"texta'textb" (APOSTROPHE inside QUOTATION MARKs)
or
'texta'\''textb' (APOSTROPHE text APOSTROPHE, then REVERSE SOLIDUS, APOSTROPHE, then APOSTROPHE more text APOSTROPHE)
I used unicode character names. REVERSE SOLIDUS is more commonly known as backslash.
In the latter case, you close your apostrophe, then shell-quote your apostrophe with a backslash, then open another apostrophe for the rest of the text.
A: As noted in the comments to the question, it's not really about sed, but how to include a quote in a quoted string in a shell (e.g. bash).
To clarify a previous answer, you need to escape the quote with a backslash, but you can't do that within a single-quoted expression. From the bash man page:
Enclosing characters in single quotes
preserves the literal value of each
character within the quotes. A single
quote may not occur between single
quotes, even when preceded by a
backslash.
Therefore, you need to terminate the quoted expression, insert the escaped quote, and start a new quoted expression. The shell's quote removal does not add any extra spaces, so in effect you get string concatenation.
So, to answer the original question of how to single quote the expression 'foo', you would do something like this:
sed -e '...'\''foo'\''...'
(where '...' is the rest of the sed expression).
Overall, for the sake of readability, you'd be much better off changing the surrounding quotes to double quotes if at all possible:
sed -e "...'foo'..."
[As an example of the potential maintenance nightmare of the first (single quote) approach, note how StackOverflow's syntax highlighting colours the quotes, backslashes and other text -- it's definitely not correct.]
A: For sed, a very simple solution is to change the single quotation format to a double quote.
For a given variable that contains single quotes
var="I'm a string with a single quote"
If double quotes are used for sed, this will match the single quote.
echo $var | sed "s/'//g"
Im a string with a single quote
Rather than single quotes, which will hang
echo $var | sed 's/'//g'
A: You can also use ['] to match a literal single quote without needing to do any shell quoting tricks.
myvar="stupid computers can't reason about life"
echo "$myvar" | sed -e "s/[']t//"
Outputs:
stupid computers can reason about life
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44"
}
|
Q: SQL query to calculate coordinate proximity I'm using this formula to calculate the distance between entries in my (My)SQL database which have latitude and longitude fields in decimal format:
6371 * ACOS(SIN(RADIANS( %lat1% )) * SIN(RADIANS( %lat2% )) +
COS(RADIANS( %lat1% )) * COS(RADIANS( %lat2% )) * COS(RADIANS( %lon2% ) -
RADIANS( %lon1% )))
Substituting %lat1% and %lat2% appropriately it can be used in the WHERE clause to find entries within a certain radius of another entry, using it in the ORDER BY clause together with LIMIT will find the nearest x entries etc.
I'm writing this mostly as a note for myself, but improvements are always welcome. :)
Note: As mentioned by Valerion below, this calculates in kilometers. Substitute 6371 by an appropriate alternative number to use meters, miles etc.
A: For databases (such as SQLite) that don't support trigonometric functions you can use the Pythagorean theorem.
This is a faster method, even if your database does support trigonometric functions, with the following caveats:
*
*you need to store coords in x,y grid instead of (or as well as) lat,lng;
*the calculation assumes 'flat earth', but this is fine for relatively local searches.
Here's an example from a Rails project I'm working on (the important bit is the SQL in the middle):
class User < ActiveRecord::Base
...
# has integer x & y coordinates
...
# Returns array of {:user => <User>, :distance => <distance>}, sorted by distance (in metres).
# Distance is rounded to nearest integer.
# point is a Geo::LatLng.
# radius is in metres.
# limit specifies the maximum number of records to return (default 100).
def self.find_within_radius(point, radius, limit = 100)
sql = <<-SQL
select id, lat, lng, (#{point.x} - x) * (#{point.x} - x) + (#{point.y} - y) * (#{point.y} - y) d
from users where #{(radius ** 2)} >= d
order by d limit #{limit}
SQL
users = User.find_by_sql(sql)
users.each {|user| user.d = Math.sqrt(user.d.to_f).round}
return users
end
A: Am I right in thinking this is the Haversine formula?
A: I use the exact same method on a vehicle-tracking application and have done for years. It works perfectly well. A quick check of some old code shows that I multiply the result by 6378137 which if memory serves converts to meters, but I haven't touched it for a very long time.
I believe SQL 2008 has a new spatial datatype that I imagine allows these kinds of comparisons without knowing this formula, and also allows spatial indexes which might be interesting, but I've not looked into it.
A: I have been using this, forget where I got it though.
SELECT n, SQRT(POW((69.1 * (n.field_geofield_lat - :lat)) , 2 ) + POW((53 * (n.field_geofield_lon - :lon)), 2)) AS distance FROM field_revision_field_geofield n ORDER BY distance ASC
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Infrastructure for a software project I'll be leading a new project soon, and I've been pondering what the basic infrastructure for a software project should be. These are the things that I think every project should have:
-Coding style conventions
-Naming conventions
-Standard project directory structure(eg maven standard dir layout, etc)
-Project management and issue tracking(eg trac, redmine, etc)
-Continuous Integration server(eg, hudson, cruise control, etc)
I'm not sure if I missed out anything. Would anyone like to add?
A: As a preliminary answer, check out the Joel test:
http://www.joelonsoftware.com/articles/fog0000000043.html
Just an appetizer:
*
*Do you use source control?
*Can you make a build in one step?
*Do you make daily builds?
*Do you have a bug database?
*Do you fix bugs before writing new code?
*Do you have an up-to-date schedule?
*Do you have a spec?
*Do programmers have quiet working conditions?
*Do you use the best tools money can buy?
*Do you have testers?
*Do new candidates write code during their interview?
*Do you do hallway usability testing?
A: *
*revision control system (eg. subversion, cvs, git)
A: In addition to yours I will put:
*
*Unit Test Strategy
*Integration Test Strategy
*Defined Process
*Release (delivery) strategy (like milestones, working packages and so on)
*Source control branching strategy
A: *
*What about documentation - how (comments in code, high-level specs), when, amount, who
*How you will test - unit/acceptance/user testing
*code versioning, some SVN/Git (or is it included in trac?)
*team roles and responsibilities - need to be defined in the context of your project
A: Knowledge management is crucial. As you already plan to use a wiki (like Trac or Redmine), you could use it for KM as well.
A: Functional testing is a mandatory part of any project. Unit testing is great and works well for Agile projects, but functional testing is still necessary. You need at least a basic Test Plan. If you plan to have multiple projects or sub-projects, a Test Strategy document or wiki page would be good.
Test Cases, Acceptance Test Cases etc could be driven by your User Stories or their equivalents but they still have to exist in some form.
A: I would throw a file sharing server into the mix too. I thought version control was so basic that I didn't even bother to put it in the list. But it's a good point about version control.
A: Configuration Management Plan. You need to have a documented approach to your development workstreams, how you will be merging between then, etc.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How do I best convert a string representation into a DbType? Suppose I have a string 'nvarchar(50)', which is for example the T-SQL string segment used in creating a table of that type. How do I best convert that to an enum representation of System.Data.DbType?
Could it handle the many different possible ways of writing the type in T-SQL, such as:
[nvarchar](50)
nvarchar 50
@Jorge Table: Yes, that's handy, but isn't there a prebaked converter? Otherwise good answer.
A: Hope this mapping table do the job.
http://www.carlprothman.net/Default.aspx?tabid=97
A: My first attempt would involve using a regex to parse the two parts of the declaration (where the second part is only used for variably sized types.) Make sure that you convert the type-name to lower case when you've parsed it.
You could make an enum with all the various types in it (lower-cased), then use Enum.Parse to get an instance of the enum value, and then use a switch-case to get the appropriate System.Data.DbType for each enum value.
Kind of gross, I admit.
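As a rough sketch of that regex-plus-mapping idea (here I switch on the parsed type name directly rather than going through an intermediate enum, and only a handful of SQL types are mapped by way of example):
using System;
using System.Data;
using System.Text.RegularExpressions;

public static class SqlTypeParser
{
    // Accepts forms like "nvarchar(50)", "[nvarchar](50)" and "nvarchar 50".
    private static readonly Regex typeRegex =
        new Regex(@"^\[?(?<name>[a-z]+)\]?[\s(]*(?<size>\d+|max)?\)?\s*$", RegexOptions.IgnoreCase);

    public static DbType Parse(string sqlType)
    {
        Match m = typeRegex.Match(sqlType.Trim().ToLowerInvariant());
        if (!m.Success)
            throw new ArgumentException("Unrecognised type declaration: " + sqlType);

        // m.Groups["size"].Value would give "50" here, e.g. for a DbParameter.Size.
        switch (m.Groups["name"].Value)
        {
            case "nvarchar": return DbType.String;
            case "nchar": return DbType.StringFixedLength;
            case "varchar": return DbType.AnsiString;
            case "int": return DbType.Int32;
            case "bigint": return DbType.Int64;
            case "bit": return DbType.Boolean;
            case "datetime": return DbType.DateTime;
            default:
                throw new ArgumentException("No DbType mapping for: " + m.Groups["name"].Value);
        }
    }
}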
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Best way to validate drag/drop operations for a TreeView in C# I want to verify a drag & drop operation is allowed. A valid item can come from another one of our "controls", or internally from within the custom treeview. Currently I have this:
bool CanDrop(DragEventArgs e)
{
bool allow = false;
Point point = tree.PointToClient(new Point(e.X, e.Y));
TreeNode target = tree.GetNodeAt(point);
if (target != null)
{
if (CanWrite(target)) //user permissions
{
if (e.Data.GetData(typeof(DataInfoObject)) != null) //from internal application
{
DataInfoObject info = (DataInfoObject)e.Data.GetData(typeof(DataInfoObject));
DragDataCollection data = info.GetData(typeof(DragDataCollection)) as DragDataCollection;
if (data != null)
{
allow = true;
}
}
else if (tree.SelectedNode.Tag.GetType() != typeof(TreeRow)) //node belongs to this & not a root node
{
if (TargetExistsInNode(tree.SelectedNode, target) == false)
{
if (e.Effect == DragDropEffects.Copy)
{
allow = true;
}
else if (e.Effect == DragDropEffects.Move)
{
allow = true;
}
}
}
}
}
return allow;
}
I've moved all the checking code to this method to try to improve things, but to me this is still horrible!
So much logic, and so much of it to do things that I'd expect the treeview would do itself (eg. "TargetExistsInNode" checks whether the dragged node is being dragged to one of its children).
What is the best way to validate input to a control?
A: I use the TreeNode.Tag property to store small "controller" objects that make up the logic. E.g.:
class TreeNodeController {
Entity data;
virtual bool IsReadOnly { get; }
virtual bool CanDrop(TreeNodeController source, DragDropEffects effect);
virtual bool CanDrop(DataInfoObject info, DragDropEffects effect);
virtual bool CanRename();
}
class ParentNodeController : TreeNodeController {
override bool IsReadOnly { get { return data.IsReadOnly; } }
override bool CanDrop(TreeNodeController source, DragDropEffects effect) {
return !IsReadOnly && !data.IsChildOf(source.data) && effect == DragDropEffects.Move;
}
override bool CanDrop(DataInfoObject info, DragDropEffects effect) {
return info.DragDataCollection != null;
}
override bool CanRename() {
return !data.IsReadOnly && data.HasName;
}
}
class LeafNodeController : TreeNodeController {
override bool CanDrop(TreeNodeController source, DragDropEffects effect) {
return false;
}
}
Then my CanDrop would be something like:
bool CanDrop(DragEventArgs args) {
Point point = tree.PointToClient(new Point(args.X, args.Y));
TreeNode target = tree.GetNodeAt(point);
TreeNodeController targetController = target.Tag as TreeNodeController;
DataInfoObject info = args.Data.GetData(typeof(DataInfoObject)) as DataInfoObject;
TreeNodeController sourceController = args.Data.GetData(typeof(TreeNodeController)) as TreeNodeController;
if (info != null) return targetController.CanDrop(info, args.Effect);
if (sourceController != null) return targetController.CanDrop(sourceController, args.Effect);
return false;
}
Now for each class of objects that I add to the tree I can specialize the behaviour by choosing which TreeNodeController to put in the Tag object.
A: Not strictly answering your question, but I've spotted a bug in your code.
DragDropEffects has the flags attribute set so you could get e.Effect to be a bitwise combination of copy and move. In which case your code would incorrectly return false.
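For example, a flags-safe version of that check against the CanDrop code in the question might look like this (just a sketch, not a drop-in patch):
// Treat e.Effect as a flags value: allow the drop if Copy or Move is present,
// even when the two are combined.
if ((e.Effect & (DragDropEffects.Copy | DragDropEffects.Move)) != DragDropEffects.None)
{
    allow = true;
}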
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: OO Design, open/closed principle question I've been thinking about this object oriented design question for a while now and have unable to come up with a satisfactory solution, so thought I'd throw it open to the crowds here for some opinions.
I have a Game class that represents a turn based board game, we can assume it's similar to Monopoly for the purposes of this question.
In my design I have a Player class containing a method TakeTurn.
The Game loops through all Players and calls the TakeTurn method to do all the necessary things to complete the turn.
I want to be able to have n number of players, and be able to set an arbitrary number of them to be computer players.
So, my thought was to have a HumanPlayer class and a ComputerPlayer class, both of which derive from Player.
The Game knows only the Player class and simply calls the TakeTurn method on each Player in turn.
My problem comes in the fact that ComputerPlayer objects can be completely automated, i.e. keeping with the Monopoly example, can decide to buy a property using some piece of logic.
Now, the HumanPlayer object needs to get input from the actual user to be able to buy a property, for instance, which seems to imply a different interface and potentially means they shouldn't derive from the same base class.
I haven't been able to come up with a good solution to the problem without having the Game class know the actual implementations of the various Player classes explicitly.
I could always make the assumption in the Game class that there will only ever be human and computer players and effectively close it for extension, but it doesn't seem like good OO programming.
Any opinions on this would be appreciated.
A: I think you should not let the Game class handle I/O.
This way, the (blocking) TakeTurn method hides its means of implementation from the game board; it can use other objects to communicate with the user.
All the Game class should concern itself with is the state of the board and the turn. The players should all implement a single Player interface, and hide all implementation from the Game.
A: If the Game is managing the game state and doing I/O, the Game is doing too much.
You want Game to be tightly focused on just rules and turns and state changes.
Game doesn't know what a player is; it only knows that it has players.
You want Players to examine the Game state and execute the legal actions during their turns.
Human Players and the Game as a whole both share a common I/O package that shows game state and prompts humans for their input.
You can make good use of the Java Observable by making the I/O package an Observer of the Game. That way, Game state changes are reported to the I/O for display or logging or both.
A: I would probably not have two HumanPlayer and ComputerPlayer classes, but a single Player class which is configured at creation time with the proper input strategy.
The way the player obtains information to decide its move in the next turn of the game is the only thing that varies (from the original problem description, at least), so just encapsulate that in a separate abstraction.
Whatever high-level class that sets up the game should also create the two sets of players (one human, another computer-simulated), with the proper input strategy for each, and then simply give these player objects to the game object. The Game class will then only call the TakeTurn method on the given list of players, for each new turn.
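A minimal C# sketch of that idea follows; IDecisionSource, ConsoleDecisionSource and AiDecisionSource are hypothetical names, not part of the original design:
public interface IDecisionSource
{
    bool WantsToBuy(string propertyName);
}

// Human player: ask the actual user for input.
public class ConsoleDecisionSource : IDecisionSource
{
    public bool WantsToBuy(string propertyName)
    {
        System.Console.Write("Buy " + propertyName + "? (y/n) ");
        return System.Console.ReadLine() == "y";
    }
}

// Computer player: decide automatically (stand-in for real AI logic).
public class AiDecisionSource : IDecisionSource
{
    public bool WantsToBuy(string propertyName)
    {
        return true;
    }
}

public class Player
{
    private readonly IDecisionSource decisions;

    public Player(IDecisionSource decisions)
    {
        this.decisions = decisions;
    }

    // The Game only ever calls this; how the decision is made stays hidden.
    public void TakeTurn(string propertyName)
    {
        if (decisions.WantsToBuy(propertyName))
        {
            // buy the property...
        }
    }
}
The setup code then just creates each Player with either a ConsoleDecisionSource or an AiDecisionSource, and the Game treats them identically.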
A: Instead of telling the game class there is only ever one human, why not let it get that input during the menu/initialization of the game? If there are more players, that can be decided via some form of input (select players in the menu), prior to the game class initialization.
A: The interface that Player presents to Game is orthogonal to the behaviour of derived Player classes.
The fact that the implementation of TakeTurn varies depending on the concrete type of the Player object should not be a cause for concern.
A: I think the Game Class should not concern about any implementations of the Player classes, and also ignore the User Interface.
Any user input needs to be handled by the HumanPlayer class.
A: I'm not sure if this is what you want
public abstract class Player
{
int position;
DecisionMaker decisionDependency;
...
public void TakeTurn()
{
position += RollDice();
GameOption option = GetOptions(position);
MakeDecision(option);
}
protected int RollDice()
{
//do something to get the movement
}
protected abstract void MakeDecision(GameOption option);
}
public class ComputerPlayer : Player
{
public ComputerPlayer()
{
decisionDependency = new AIDecisionMaker();
}
protected override void MakeDecision(GameOption option)
{
decisionDependency.MakeDecision(option);
//do stuff, probably delegate to an AI-based dependency
}
}
public class HumanPlayer : Player
{
public HumanPlayer()
{
decisionDependency = new UIDecisionMaker();
}
protected override void MakeDecision(GameOption option)
{
decisionDependency.MakeDecision(option);
//do stuff, probably interacting with the UI or delegating to a dependency
}
}
A: I'd say, the Game class shouldn't care if this is a computer player or a human player. It should always call TakeTurn on the next player class. If this is a human player, it is the responsibility of the Player class, to communicate with the user and ask the user what to do. That means it will block till the user made up his mind. As usually UI interaction takes place in the main thread of an application, it is only important that a blocking TakeTurn won't block the application as a whole, otherwise user input cannot be processed while Game waits for TakeTurn.
A: Instead of the Game class calling TakeTurn on all the players the players should call TakeTurn on the Game class and the Game class should validate if the right player is taking his turn.
This should help solve the User and Computer player problem.
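A tiny sketch of that inversion (the Move type and method names are hypothetical, and some Player type as in the answers above is assumed): the players call into the Game, and the Game rejects out-of-turn calls.
using System;
using System.Collections.Generic;

public class Move { /* whatever a turn submits: dice roll, purchase decision, ... */ }

public class Game
{
    private readonly List<Player> players;
    private int currentIndex;

    public Game(List<Player> players)
    {
        this.players = players;
    }

    // Players call this; the Game validates that it is really their turn.
    public void TakeTurn(Player player, Move move)
    {
        if (!ReferenceEquals(player, players[currentIndex]))
            throw new InvalidOperationException("It is not this player's turn.");

        Apply(move); // rules and board-state changes live here
        currentIndex = (currentIndex + 1) % players.Count;
    }

    private void Apply(Move move)
    {
        // update the board state according to the rules
    }
}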
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91137",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Hosting a WCF endpoint with programmatic settings in IIS I need to host a WCF service in IIS that exposes a wsHttpBinding. That part is working nicely using the settings of system.serviceModel in my web.config.
What i need now is to setup the configuration (like maxReceivedMessageSize and other options) through a configuration assembly that is also used by the client(s).
How is this possible? I see no hooks in my .svc file, like in my client, to override the binding configuration. I suspect this is because it is automatically handled by IIS when the application starts up, in contrast to a Windows service where you have to manually declare the client/channel.
Am I right about this? And would the solution to this problem (if I still want hosting inside IIS) be to remove all configuration and instead create an HttpHandler that takes care of the hosting on startup?
If I'm right, I guess I just wasted a whole lot of space writing this, but I can't help thinking I'm missing something.
A: You're missing something :)
Create a custom ServiceHost and use that in the .svc file; in the custom service host do all your configuration
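A minimal sketch of that approach: derive from ServiceHostFactory, do the binding configuration in code, and point the .svc file at the factory. IMyService, the namespace and the message size here are placeholders, not the questioner's actual contract.
// In the .svc file: <%@ ServiceHost Service="MyApp.MyService" Factory="MyApp.MyServiceHostFactory" %>
using System;
using System.ServiceModel;
using System.ServiceModel.Activation;

namespace MyApp
{
    [ServiceContract]
    public interface IMyService
    {
        [OperationContract]
        string Echo(string text);
    }

    public class MyServiceHostFactory : ServiceHostFactory
    {
        protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
        {
            ServiceHost host = new ServiceHost(serviceType, baseAddresses);

            // Programmatic equivalent of the <wsHttpBinding> settings from web.config;
            // the values could just as well come from your shared configuration assembly.
            WSHttpBinding binding = new WSHttpBinding();
            binding.MaxReceivedMessageSize = 1024 * 1024; // example value

            host.AddServiceEndpoint(typeof(IMyService), binding, string.Empty);
            return host;
        }
    }
}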
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91143",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Optional Parameters in Mysql stored procedures How do I create an optional parameter in a mysql stored procedure?
A: Optional parameters are not supported in mySQL stored procedures, nor are there any current plans to add this functionality at this time. I'd recommend passing null for optional parameters.
A: According to this bug, there is currently no way to create optional parameters.
A: The (somewhat clunky) workaround I used was to define a number of parameters that I thought I might use and then test to see if the optional parameter's value is NOT NULL before using it:
CREATE PROCEDURE add_product(product_name VARCHAR(100), product_price FLOAT,
cat1 INT, cat2 INT, cat3 INT)
-- The cat? parameters are optional; provide a NULL value if not required
BEGIN
...
-- Add product to relevant categories
IF cat1 IS NOT NULL THEN
INSERT INTO products_to_categories (products_id, categories_id) VALUES (product_id, cat1);
END IF;
IF cat2 IS NOT NULL THEN
INSERT INTO products_to_categories (products_id, categories_id) VALUES (product_id, cat2);
END IF;
IF cat3 IS NOT NULL THEN
INSERT INTO products_to_categories (products_id, categories_id) VALUES (product_id, cat3);
END IF;
END
If I don't want to use the parameter when calling the stored, I simply pass a NULL value. Here's an example of the above stored procedure being called:
CALL add_product("New product title", 25, 66, 68, NULL);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: XMLSockets in Flash Lite? Are XMLSockets available in Flash Lite, and if yes in which versions, and are there differences between the regular and the lite objects?
A: I don't know enough to tell you the exact difference(s) between XML sockets in Flash and Flash Lite, but they are definitely supported in Flash Lite versions 2.1 and later. See the Flash Mobile Blog for an example.
A: We're using them to send and receive text strings in both Flash 9 (PC) and Flashlite3.1 (ARM) with no code changes. Unless there are differences in the XML parsing, you're probably good to go.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do I best convert a DbType to System.Type? How do I best convert a System.Data.DbType enumeration value to the corresponding (or at least one of the possible corresponding) System.Type values?
For example:
DbType.StringFixedLength -> System.String
DbType.String -> System.String
DbType.Int32 -> System.Int32
I've only seen very "dirty" solutions but nothing really clean.
(yes, it's a follow up to a different question of mine, but it made more sense as two seperate questions)
A: AFAIK there is no built-in converter in .NET for converting a SqlDbType to a System.Type. But knowing the mapping you can easily roll your own converter ranging from a simple dictionary to more advanced (XML based for extensability) solutions.
The mapping can be found here:
http://www.carlprothman.net/Default.aspx?tabid=97
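For example, a hand-rolled dictionary converter could be as small as this sketch (only a handful of mappings shown; extend the table, perhaps from the page linked above, to cover the DbType values you actually use):
using System;
using System.Collections.Generic;
using System.Data;

public static class DbTypeMapper
{
    private static readonly Dictionary<DbType, Type> typeMap = new Dictionary<DbType, Type>
    {
        { DbType.String, typeof(string) },
        { DbType.StringFixedLength, typeof(string) },
        { DbType.AnsiString, typeof(string) },
        { DbType.Int32, typeof(int) },
        { DbType.Int64, typeof(long) },
        { DbType.Boolean, typeof(bool) },
        { DbType.DateTime, typeof(DateTime) },
        { DbType.Decimal, typeof(decimal) },
        { DbType.Guid, typeof(Guid) }
    };

    public static Type ToClrType(DbType dbType)
    {
        Type clrType;
        if (typeMap.TryGetValue(dbType, out clrType))
            return clrType;
        throw new ArgumentOutOfRangeException("dbType", "No CLR mapping defined for " + dbType);
    }
}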
A: System.Data.SqlClient objects use the MetaType component to translate DbType and SqlDbType to .NET CLR Types. Using reflection, you could leverage this ability if needed:
var dbType = DbType.Currency;
Type metaClrType = Type.GetType(
"System.Data.SqlClient.MetaType, System.Data, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089",
true,
true
);
object metaType = metaClrType.InvokeMember(
"GetMetaTypeFromDbType",
BindingFlags.InvokeMethod | BindingFlags.Static | BindingFlags.NonPublic,
null,
null,
new object[] { dbType }
);
var classType = (Type)metaClrType.InvokeMember(
"ClassType",
BindingFlags.GetField | BindingFlags.Instance | BindingFlags.NonPublic,
null,
metaType,
null
);
string cSharpDataType = classType.FullName;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: What process is listening on a certain port on Solaris? So I log into a Solaris box, try to start Apache, and find that there is already a process listening on port 80, and it's not Apache. Our boxes don't have lsof installed, so I can't query with that. I guess I could do:
pfiles `ls /proc` | less
and look for "port: 80", but if anyone has a better solution, I'm all ears! Even better if I can look for the listening process without being root. I'm open to both shell and C solutions; I wouldn't mind having a little custom executable to carry with me for the next time this comes up.
Updated: I'm talking about generic installs of Solaris for which I am not the administrator (although I do have superuser access), so installing things from the freeware disk isn't an option. Obviously neither is using Linux-specific extensions to fuser, netstat, or other tools. So far running pfiles on all processes seems to be the best solution, unfortunately. If that remains the case, I'll probably post an answer with some slightly more efficient code than the clip above.
A: Mavroprovato's answer reports more than only the listening ports. Listening ports are sockets without a peer. The following Perl program reports only the listening ports. It works for me on SunOS 5.10.
#! /usr/bin/env perl
##
## Search the processes which are listening on the given port.
##
## For SunOS 5.10.
##
use strict;
use warnings;
die "Port missing" unless $#ARGV >= 0;
my $port = int($ARGV[0]);
die "Invalid port" unless $port > 0;
my @pids;
map { push @pids, $_ if $_ > 0; } map { int($_) } `ls /proc`;
foreach my $pid (@pids) {
open (PF, "pfiles $pid 2>/dev/null |")
|| warn "Can not read pfiles $pid";
$_ = <PF>;
my $fd;
my $type;
my $sockname;
my $peername;
my $report = sub {
if (defined $fd) {
if (defined $sockname && ! defined $peername) {
print "$pid $type $sockname\n"; } } };
while (<PF>) {
if (/^\s*(\d+):.*$/) {
&$report();
$fd = int ($1);
undef $type;
undef $sockname;
undef $peername; }
elsif (/(SOCK_DGRAM|SOCK_STREAM)/) { $type = $1; }
elsif (/sockname: AF_INET[6]? (.*) port: $port/) {
$sockname = $1; }
elsif (/peername: AF_INET/) { $peername = 1; } }
&$report();
close (PF); }
A: #!/usr/bin/bash
# This is a little script based on the "pfiles" solution that prints the PID and PORT.
pfiles `ls /proc` 2>/dev/null | awk "/^[^ \\t]/{smatch=\$0;next}/port:[ \\t]*${1}/{print smatch, \$0}{next}"
A: From Solaris 11.2 onwards you can indeed do this with the netstat command. Have a look here. The -u switch is what you are looking for.
If you are on a lower version of Solaris then - as others have pointed out - the Solaris way of doing this is some kind of script wrapper around the pfiles command. Beware though that the pfiles command halts the process for a split second in order to inspect it. For 99.9% of processes this is unimportant. Unfortunately we have a process that will give a core dump if it is hit with a pfiles command, so we are a bit cautious about using the command. Your situation may be totally different if you are in the 99.9%, meaning you can safely use the pfiles command.
A: I found this script somewhere. I don't remember where, but it works for me:
#!/bin/ksh
line='---------------------------------------------'
pids=$(/usr/bin/ps -ef | sed 1d | awk '{print $2}')
if [ $# -eq 0 ]; then
read ans?"Enter port you would like to know pid for: "
else
ans=$1
fi
for f in $pids
do
/usr/proc/bin/pfiles $f 2>/dev/null | /usr/xpg4/bin/grep -q "port: $ans"
if [ $? -eq 0 ]; then
echo $line
echo "Port: $ans is being used by PID:\c"
/usr/bin/ps -ef -o pid -o args | egrep -v "grep|pfiles" | grep $f
fi
done
exit 0
Edit: Here is the original source:
[Solaris] Which process is bound to a given port ?
A: netstat on Solaris will not tell you this, nor will older versions of lsof, but if you download and build/install a newer version of lsof, this can tell you that.
$ lsof -v
lsof version information:
revision: 4.85
latest revision: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/
latest FAQ: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/FAQ
latest man page: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/lsof_man
configuration info: 64 bit kernel
constructed: Fri Mar 7 10:32:54 GMT 2014
constructed by and on: user@hostname
compiler: gcc
compiler version: 3.4.3 (csl-sol210-3_4-branch+sol_rpath)
8<- - - - ***SNIP*** - - -
With this you can use the -i option:
$ lsof -i:22
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 521 root 3u IPv6 0xffffffff89c67580 0t0 TCP *:ssh (LISTEN)
sshd 5090 root 3u IPv6 0xffffffffa8668580 0t322598 TCP host.domain.com:ssh->21.43.65.87:52364 (ESTABLISHED)
sshd 5091 johngh 4u IPv6 0xffffffffa8668580 0t322598 TCP host.domain.com:ssh->21.43.65.87:52364 (ESTABLISHED)
Which shows you exactly what you're asking for.
I had a problem yesterday with a crashed Jetty (Java) process, which only left 2 files in its /proc/[PID] directory (psinfo & usage).
pfiles failed to find the process (because the data it needed was not there)
lsof found it for me.
A: Here's a one-liner:
ps -ef| awk '{print $2}'| xargs -I '{}' sh -c 'echo examining process {}; pfiles {}| grep 80'
'echo examining process PID' will be printed before each search, so once you see an output referencing port 80, you'll know which process is holding the handle.
Alternatively use:
ps -ef| grep $USER|awk '{print $2}'| xargs -I '{}' sh -c 'echo examining process {}; pfiles {}| grep 80'
Since 'pfiles' might not like that you're trying to access other user's processes, unless you're root of course.
A: You might not want to, but your best bet is to grab the sunfreeware CD and install lsof.
Other than that, yes you can grovel around in /proc with a shell script.
A: I think the first answer is the best.
I wrote my own shell script developing this idea:
#!/bin/sh
if [ $# -ne 1 ]
then
echo "Sintaxis:\n\t"
echo " $0 {port to search in process }"
exit
else
MYPORT=$1
for i in `ls /proc`
do
pfiles $i | grep port | grep "port: $MYPORT" > /dev/null
if [ $? -eq 0 ]
then
echo " Port $MYPORT founded in $i proccess !!!\n\n"
echo "Details\n\t"
pfiles $i | grep port | grep "port: $MYPORT"
echo "\n\t"
echo "Process detail: \n\t"
ps -ef | grep $i | grep -v grep
fi
done
fi
A: Most probably Sun's administrative server.
It's usually bundled along with Sun's directory server and a few other webmin-ish tools that are in the default installation.
A: This is sort of an indirect approach, but you could see if a website loads on your web browser of choice from whatever is running on port 80. Or you could telnet to port 80 and see if you get a response that gives you a clue as to what is running on that port and you can go shut it down. Since port 80 is the default port for http traffic chances are there is some sort of http server running there by default, but there's no guarantee.
A: If you have access to netstat, that can do precisely that.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
}
|
Q: Make Web Application Accessible What things have to be done before I can honestly tell myself my web application is accessible by anyone? Or even better, convince Joe Clark. I don't have any video or audio to worry about, so I know I won't need transcripts. What else do I have to check?
A: You should also check out the WAI-ARIA stuff:
*
*http://www.w3.org/WAI/intro/aria
*http://alistapart.com/articles/waiaria
*http://juicystudio.com/article/wai-aria-live-regions.php
And, for a perspective on the challenges actually faced by users with various disabilities when they try to use the web, have a look at some of the presentations and videos from the Scripting Enabled conference in London on Friday: http://scriptingenabled.org/ (I don't think all of them are uploaded yet)
A: Your question is very vague, but in short, you need to ensure that your site meets one of the three levels (A, AA, or AAA) of the Web Content Accessibility Guidelines.
FWIW, in my experience, if you are providing anything other than a purely static HTML site, aim for AA. Trying to follow the WCAG guidelines stringently to triple-A standard for a dynamic website is the road to hair loss IMO. This may change with WCAG 2.0.
Good luck!
EDIT: @Blowdart suggests running your site through online checkers. This is fine so long as you realise that many of the WCAG guidelines (especially towards the higher end) are so arbitrary, they can only be validated with a human eye.
Do not trust the output of these online checkers and blindly stick a AAA badge on your site. If you are called on it, you may be in trouble.
+1 Blowdart for suggesting HTML and CSS validation, and Chris Pederick's add-on is great!
A: If the site is usable with JavaScript and CSS disabled, then you're doing pretty well. The Web Developer toolbar in Firefox lets you easily disable both of these. Another way to check is to use Lynx, a command-line, text-only browser. Beyond that, your best bet is to check the site heuristically against whatever guidelines apply (in the U.S., that's usually Section 508). No site will be perfectly accessible, but doing these things will ensure that yours is very accessible.
A: Run it through a bunch of checkers. Check the HTML is valid. Try to use it with script disabled. Try to use it with CSS disabled. You can do all of this with the web developer addon for Firefox
A: Online accessibility checkers can help, following the WCAG can help, and trying your site with demo versions of screen readers can help, but the only way to "insure" a site is accessible is to have an expert check it for you, or to become an expert yourself. Fortunately, if you do follow WCAG 2.0, test with a screen reader, and run the online tests, 98% of the time you will be just fine. But if you want a guarantee...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Excel report framework Is there any Excel report framework available? We need to export some of our reports into Excel format. Our application is a Java application, hence anything supporting Java would be great. I have tried the Apache POI API, however that is not good enough. Any framework based on the Windows API would be better.
A: SQL Server Reporting Services has options to export to Excel.
A: FYI, JasperReports uses POI for Excel-conversion.
A: Can you elaborate on what you don't like about Apache POI? I've been using POI for years now and haven't found anything that it couldn't do with a tweak here or there or by taking a creative approach. IMHO, it's the best open-source (and free) Excel generation/reporting framework out there.
If you are willing to pay money, then Actuate has probably the best solution. With Actuate's e.Spreadsheet Engine and its Excel API, you can read, write, modify and generate entire spreadsheets or parts of spreadsheets. I've used it, and their API is richer and simpler than POI's. POI, while powerful, feels like an API that's grown up over time with many developers involved in creating functionality and patches.
A: Try xlsgen; it supports Java (but can only run under Windows).
A: Why is poi not good enough for you?
An alternative might be jasper reports. I've used this instead of poi a couple of times and the experience was pleasant.
A: You can also try jxl, but honestly its API is more confusing than POI's.
A: Xylophone is LGPL Java library and command line utility that uses Apache POI, but mitigates most of its drawbacks.
It consumes data in XML format, spreadsheet templates in XLS(X) format and makes producing of complex Excel reports more fun. You can read about it in this post. Because of licensing and security issues this must be better choice for Java backend than Windows API.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91178",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Automated tests for Java Swing GUIs What options are there for building automated tests for GUIs written in Java Swing?
I'd like to test some GUIs which have been written using the NetBeans Swing GUI Builder, so something that works without requiring special tampering of the code under test would be ideal.
A: We are considering jemmy to automate some of the GUI testing. Looks promising.
A: I use java.awt.Robot. It is not nice and it is not easy, but it works every time (a minimal sketch follows the pros/cons below).
Pros:
*
*You are in control
*Very fast
*Build your own FWK
*Portable
*No external dependencies
Cons:
*
*No nice GUI to build tests
*You have to leave the GUI alone while you test
*Build your own FWK
*Difficult to change test code and create your first harness
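As promised above, a minimal java.awt.Robot sketch; the screen coordinates and the Enter confirmation are stand-ins for whatever your own GUI expects:
import java.awt.Robot;
import java.awt.event.InputEvent;
import java.awt.event.KeyEvent;

public class RobotSmokeTest {
    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        robot.setAutoDelay(50);                    // small pause between events so the UI keeps up
        robot.mouseMove(200, 150);                 // assumed location of a button on screen
        robot.mousePress(InputEvent.BUTTON1_MASK);
        robot.mouseRelease(InputEvent.BUTTON1_MASK);
        robot.keyPress(KeyEvent.VK_ENTER);         // confirm an assumed dialog
        robot.keyRelease(KeyEvent.VK_ENTER);
    }
}
From here the "build your own FWK" part is helpers that locate components, grab screenshots with robot.createScreenCapture() and assert on the results.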
Now if you have the budget I would go for LoadRunner. Best in class.
(Disclosure: relationship to the company that owns LR, but I worked with LR before the relationship)
A: We're using QF-Test and are quite satisfied.
A: I haven't used it personally, but SwingUnit looks quite good. You can use it with jUnit, and it isn't based on "location of components" (i.e. x and y co-ordinates).
The only thing you may have to do with the NetBeans GUI Builder is set unique names for your components.
A: You can use Marathon :
"Marathon Integrated Testing Environment, MarathonITE, is an affordable, easy-to-use and cross-platform Java/Swing™ GUI Test automation framework. You can use MarathonITE‘s inbuilt script recorder to create clean, readable test scripts either in Python or Ruby. Advanced features like extract-method refactoring, create-datadriven-tests and objectmap editing allows you to create maintainable, resilient test suites."
A: Recently I came across FEST which seemed promising, except that the developer announced in 2012 that development would not continue.
AssertJ is a fork of FEST that is working very well for me. It is actively maintained (at time of writing), supports Java 8, has assertions for a few popular libraries such as Guava and Joda Time, and is very well documented. It is also free and open.
A: Sikuli: a GUI-tester using screenshots
http://sikuli.org/
A: You could try ReTest, which is a novel tool that implements an innovative approach to functional regression testing and combines it with AI-based monkey testing. It is about to become open source as well...
Disclaimer: I am one of the founders of the company behind ReTest.
A: I'm currently using FEST. It works with JUnit and will also take screenshots of failed tests.
It has default component hunting methods which look for the name of the component being tested (which need to be set manually), but you can also generate the testers for a given component by passing it the component.
A: For those with an adventurous mind, there is gooey https://github.com/robertoaflores/Gooey a (very basic and under-development) programmatic testing tool for swing applications.
A: You can try to use Cucumber and Swinger for writing functional acceptance tests in plain english for Swing GUI applications. Swinger uses Netbeans' Jemmy library under the hood to drive the app.
Cucumber allows you to write tests like this:
Scenario: Dialog manipulation
Given the frame "SwingSet" is visible
And the frame "SwingSet" is the container
When I click the menu "File/About"
Then I should see the dialog "About Swing!"
Given the dialog "About Swing!" is the container
When I click the button "OK"
Then I should not see the dialog "About Swing!"
Take a look at this Swinger video demo to see it in action.
A: Just did some quick scans. Squish was the most promising. Not for free though.
A: You can use Sikuli or Automa for testing your GUI; these are well-documented and tested tools.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "75"
}
|
Q: Cruise Control and Obfuscation, How? this is my first question to stackoverflow so here it goes...
I use CruiseControl for my continuous integration scheme, and I want to use obfuscation in order to add another protection layer to my assemblies. The thing is that I don't know how to go about it, since I couldn't find articles describing this. Suggestions that include other CI tools such as NAnt are also accepted.
Commercial tools are also an option, so don't hesitate to include them in your answer. The applications that I am building and want to obfuscate are mostly written for Compact Framework 2.0 and .NET 2.0-3.5.
At the moment CruiseControl checks for changes in the repository, then based on the script for the specific solution downloads and builds the project using devenv; after the setup project has been run it copies the setup file into a different folder, and that's more or less it. So I need to obfuscate somewhere in this process...
A: @konstantinos.konstantinidis.myopenid.com: your problem seems to be with the setup project not the continuous integration server.
You have the setup project picking up the primary outputs for the project.
Change it to use file references to the outputs. It's a little more configuration in the setup project but I think you can still achieve all the same functionality.
That way you can pick up your obfuscated assemblies from whatever staging directory your obfuscation tool is dumping them into.
A: Nice to hear someone is using CruiseControl.NET. That is a nice tool. You can use a command line tool for obfuscation, such as RemoteSoft .NET Obfuscator, and integrate a call to the tool in the build script. See the example below, taken from the CruiseControl.NET wiki, of how to execute an external program.
http://www.remotesoft.com/salamander/obfuscator.html
<exec>
<executable>make</executable>
<baseDirectory>D:\dev\MyProject</baseDirectory>
<buildArgs>all</buildArgs>
<buildTimeoutSeconds>10</buildTimeoutSeconds>
<successExitCodes>0,1,3,5</successExitCodes>
<environment>
<variable>
<name>MyVar1</name>
<value>Var1Value</value>
</variable>
<variable name="MyVar2" value="Var2Value"/>
...
</environment>
</exec>
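Adapted to the asker's pipeline, the obfuscation step could be another <exec> task that runs after the devenv build and before the setup file is copied. The executable, arguments and paths below are hypothetical placeholders for whichever obfuscator you choose:
<exec>
  <executable>C:\Tools\Obfuscator\obfuscate.exe</executable>
  <baseDirectory>D:\builds\MySolution\bin\Release</baseDirectory>
  <buildArgs>MyAssembly.dll /out:MyAssembly.dll</buildArgs>
  <buildTimeoutSeconds>300</buildTimeoutSeconds>
</exec>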
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: I need to write code in python for comparing text of two documents using fingerprint techniques I need to write code in Python for comparing the text of documents using fingerprint techniques. I do not know how to take or generate the fingerprint of a document. I'm asking if anyone knows the method or has source code for generating fingerprints of documents, stored in bit form.
A: If you want message digests (cryptographic hashes), use the hashlib library. Here's an example (IPython session):
In [1]: import hashlib
In [2]: md = hashlib.sha256(open('/tmp/Calendar.xls', 'rb').read())
In [3]: md.hexdigest()
Out[3]: '8517f1eae176f1a20de78d879f81f23de503cfd6b8e4be1d798fb2342934b187'
A: You might try the following papers to get started with the concept of fingerprinting (a toy k-gram/winnowing sketch follows the list below):
*
*Winnowing: Local Algorithms for Document Fingerprinting
*Scalable Document Fingerprinting
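As promised, a toy Python sketch of the k-gram + winnowing idea from those papers; the k and window sizes are arbitrary choices and MD5 is just a convenient hash:
import hashlib

def kgram_hashes(text, k=5):
    # hash every k-character substring of the crudely normalized text
    text = "".join(text.lower().split())
    return [int(hashlib.md5(text[i:i + k].encode()).hexdigest(), 16) % (1 << 32)
            for i in range(len(text) - k + 1)]

def winnow(hashes, window=4):
    # keep the minimum hash of each sliding window; the resulting set is the fingerprint
    return {min(hashes[i:i + window]) for i in range(len(hashes) - window + 1)}

def similarity(doc_a, doc_b):
    fa, fb = winnow(kgram_hashes(doc_a)), winnow(kgram_hashes(doc_b))
    return len(fa & fb) / float(len(fa | fb) or 1)   # Jaccard overlap of the fingerprints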
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Delete DataGridView line How can I delete a line from a bound DataGridView when pressing the Delete key?
A: If you're on framework 3.5, it looks like there's a method on the DataGridView to process the delete key.
http://msdn.microsoft.com/en-us/library/system.windows.forms.datagridview.processdeletekey.aspx
Otherwise, I would suggest capturing the form keypress event and working backwards to get the active/selected DataGridView row to know which one to process.
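A minimal sketch of that approach, wiring the grid's own KeyDown event and assuming a grid named dataGridView1 whose bound list supports removal (e.g. a BindingList):
private void dataGridView1_KeyDown(object sender, KeyEventArgs e)
{
    if (e.KeyCode == Keys.Delete)
    {
        foreach (DataGridViewRow row in dataGridView1.SelectedRows)
        {
            if (!row.IsNewRow)
                dataGridView1.Rows.Remove(row);   // also removes the item from the bound data source
        }
        e.Handled = true;
    }
}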
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How can you tell programmatically if a Flex App is running in debug mode? Is it possible to write code in a Flex application that will only be run in a debug build, or when running through the debugger?
Does Flex provide a way to actually remove code entirely from release builds, like C-style #defines?
The app is not necessarily running in a web page.
A: You can do conditional compilation like this:
CONFIG::debugging {
// this will be removed if CONFIG::debugging resolves to false at compile time
}
And then add this to the compiler flags:
-define+=CONFIG::debugging,true
for debug builds, and
-define+=CONFIG::debugging,false
for release builds. CONFIG and debugging can be anything, like MY_AWESOME_NAMESPACE and fooBar, it doesn't matter.
Read more here: Using conditional compilation.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91198",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: How to write an emacs mode for a new language? I would like to write an Emacs major mode for a 4GL.
Can someone show me a tutorial?
As far as I googled, I was able to find only this broken link: http://two-wugs.net/emacs/mode-tutorial.html
A: If you're lazy, one easy way is to extend generic-mode to know about your new language:
http://emacswiki.org/emacs/GenericMode
I do this a lot for config files for applications that I work with a lot to get decent syntax highlighting. Here's one I did for the asterisk PBX a long time ago as an example.
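Roughly what the examples on that wiki page look like, here for a hypothetical 4GL whose files end in .foo and whose comments start with #:
;; drop this in your .emacs / init file
(define-generic-mode 'foo-generic-mode
  '("#")                                    ; comment start
  '("if" "then" "else" "end" "proc")        ; keywords to highlight
  '(("[0-9]+" . 'font-lock-constant-face))  ; extra font-lock rules
  '("\\.foo\\'")                            ; file patterns that activate the mode
  nil                                       ; extra setup functions
  "A quick generic mode for a hypothetical foo 4GL.")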
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "61"
}
|
Q: Is there an efficient\easy way to draw a concave polygon in Direct3d I'm trying to draw a polygon using c# and directx
All I get is an ordered list of points from a file and I need to draw the flat polygon in a 3d world.
I can load the points and draw a convex shape using a trianglefan and drawuserprimitives.
This obviously leads to incorrect results when the polygon is very concave (which it may be).
I can't imagine I'm the only person to grapple with this problem (tho I'm a gfx/directx neophyte - my background is in gui\windows application development).
Can anyone point me towards a simple to follow resource\tutorial\algorithm which may assist me?
A: Direct3D can only draw triangles (well, it can draw lines and points as well, but that's besides the point). So if you want to draw any shape that is more complex than a triangle, you have to draw a bunch of touching triangles that equal to that shape.
In your case, it's a concave polygon triangulation problem. Given a bunch of vertices, you can keep them as is, you just need to compute the "index buffer" (in simplest case, three indices per triangle that say which vertices the triangle uses). Then draw that by putting into vertex/index buffers or using DrawUserPrimitives.
Some algorithms for triangulating simple (convex or concave, but without self-intersections or holes) polygons are at VTerrain site.
I have used Ratcliff's code in the past; very simple and works well. VTerrain has a dead link to it; the code can be found here. It's C++, but porting that over to C# should be straightforward.
Oh, and don't use triangle fans. They are of very limited use, inefficient and are going away soon (e.g. Direct3D 10 does not support them anymore). Just use triangle lists.
A: If you are able to use the stencil buffer, it should not be hard to do. Here's a general algorithm:
Clear the stencil buffer to 1.
Pick an arbitrary vertex v0, probably somewhere near the polygon to reduce floating-point errors.
For each vertex v[i] of the polygon in clockwise order:
let s be the segment v[i]->v[i+1] (where i+1 will wrap to 0 when the last vertex is reached)
if v0 is to the "right" of s:
draw a triangle defined by v0, v[i], v[i+1] that adds 1 to the stencil buffer
else
draw a triangle defined by v0, v[i], v[i+1] that subtracts 1 from the stencil buffer
end for
fill the screen with the desired color/texture, testing for stencil buffer values >= 2.
By "right of s" I mean from the perspective of someone standing on v[i] and facing v[i+1]. This can be tested by using a cross product:
cross(v0 - v[i], v[i+1] - v[i]) > 0
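Spelled out in C#, that test is only a couple of lines; a sketch assuming a simple Vector2-style struct with X and Y fields:
// implements cross(v0 - v[i], v[i+1] - v[i]); a positive result corresponds to the "right of s" case above
static float Cross(Vector2 vi, Vector2 viPlus1, Vector2 v0)
{
    float ax = v0.X - vi.X,      ay = v0.Y - vi.Y;        // v0 - v[i]
    float bx = viPlus1.X - vi.X, by = viPlus1.Y - vi.Y;   // v[i+1] - v[i]
    return ax * by - ay * bx;                             // z component of the 2D cross product
}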
A: Triangulation is the obvious answer, but it's hard to write a solid triangulator. Unless you have two months of time to waste, don't even try it.
There are a couple of codes that may help you:
The GPC Library. Very easy to use, but you may not like its license:
http://www.cs.man.ac.uk/~toby/alan/software/gpc.html
There is also triangle:
http://www.cs.cmu.edu/~quake/triangle.html
And FIST:
http://www.cosy.sbg.ac.at/~held/projects/triang/triang.html
Another (and my preferred) option would be to use the GLU tessellator. You can load and use the GLU library from DirectX programs just fine. It does not need an OpenGL context and it's pre-installed on all Windows machines. If you want source you can lift the triangulation code from the SGI reference implementation. I did that once and it took me just a couple of hours.
So far for triangulation. There is a different way as well: You can use stencil tricks.
The general algorithm goes like this:
*
*Disable color and depth writes. Enable stencil writes and set up your stencil buffer so that it will invert the current stencil value. One bit of stencil is sufficient. Oh - your stencil buffer should be cleared as well.
*Pick a random point on the screen. Any will do. Call this point your Anchor.
*For each edge of your polygon build a triangle from the two vertices that build the edge and your anchor. Draw that triangle.
*Once you've drawn all these triangles, turn off stencil write, turn on stencil test and color-write and draw a fullscreen quad in your color of choice. This will fill just the pixels inside your polygon.
It's a good idea to place the anchor into the middle of the polygon and just draw a rectangle as large as the boundary box of your polygon. That saves a bit of fillrate.
Btw - the stencil technique works for self-intersecting polygons as well.
Hope it helps,
Nils
A: I just had to do this for a project. The simplest algorithm I found is called "Ear Clipping". A great paper on it is here: TriangulationByEarClipping.pdf
It took me about 250 lines of C++ code and 4 hours to implement the brute force version of it. Other algorithms have better performance, but this was simple to implement and understand.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91202",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Will everything in the standard library treat strings as unicode in Python 3.0? I'm a little confused about how the standard library will behave now that Python (from 3.0) is unicode-based. Will modules such as CGI and urllib use unicode strings or will they use the new 'bytes' type and just provide encoded data?
A: One of the great things about this question (and Python in general) is that you can just mess around in the interpreter! Python 3.0 rc1 is currently available for download.
>>> import urllib.request
>>> fh = urllib.request.urlopen('http://www.python.org/')
>>> print(type(fh.read(100)))
<class 'bytes'>
A: Logically a lot of things like MIME-encoded mail messages, URLs, XML documents, and so on should be returned as bytes not strings. This could cause some consternation as the libraries start to be nailed down for Python 3 and people discover that they have to be more aware of the bytes/string conversions than they were for str/unicode ...
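In practice that awareness usually boils down to one explicit decode; a small sketch building on the urllib example above, where the utf-8 choice is an assumption:
import urllib.request

raw = urllib.request.urlopen('http://www.python.org/').read()   # bytes
text = raw.decode('utf-8')                                       # bytes -> str, encoding assumed
print(type(raw), type(text))                                     # <class 'bytes'> <class 'str'>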
A: There will be a two-step dance here. See Python 3000 and You.
Step 1 is to get running under 3.0.
Step 2 is to rethink your API's to, perhaps, do something more sensible.
The most likely course is that the libraries will switch to unicode strings to remain as compatible as possible with how they used to work.
Then, perhaps, some will switch to bytes to more properly implement the RFC standards for the various protocols.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: Beginning Web Development on Plan 9 I've been wanting to program for the Plan 9 operating system for a while. I'd really like to play around with a web app there. Of course, the only language I know for Plan 9 is C, and that doesn't seem ideal for web development. I also understand that it doesn't run apache or mysql either.
What is the best way to start coding web apps on Plan 9?
A: Check out Kenji Arisawa's Pegasus (paper) webserver for Plan 9.
Plan 9 may have a reputation for being C-only, but several languages, including Scheme, Ruby, Python, and Perl have been ported. Check out the Contrib Index for the code.
Finally, start reading the Plan 9 white papers so that you can understand its philosophy. If you want to do net-related things, the file protocol 9p is essential.
A: Werc is a web framework designed to run on Plan 9 (and Plan 9 from User Space). It is built using the rc shell and following the classic Bell Labs 'tool philosophy'.
Instead of a database, in keeping with the Unix tradition it uses plain text files stored in a file system.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Serialize Java objects into Java code Does somebody know a Java library which serializes a Java object hierarchy into Java code which generates this object hierarchy? Like Object/XML serialization, only that the output format is not binary/XML but Java code.
A: Serialised data represents the internal data of objects. There isn't enough information to work out what methods you would need to call on the objects to reproduce the internal state.
There are two obvious approaches:
*
*Encode the serialised data in a literal String and deserialise that.
*Use java.beans XML persistence, which should be easy enough to process with your favourite XML->Java source technique (a short sketch of the XML persistence side follows this list).
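For reference, the java.beans side of that suggestion looks roughly like this; the JButton is just a stand-in for whatever bean graph you actually want to persist:
import java.beans.XMLEncoder;
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;

public class BeanToXml {
    public static void main(String[] args) throws Exception {
        Object bean = new javax.swing.JButton("Hello");   // any JavaBean-style object graph
        XMLEncoder enc = new XMLEncoder(
                new BufferedOutputStream(new FileOutputStream("bean.xml")));
        enc.writeObject(bean);
        enc.close();                                      // flushes the XML to disk
    }
}
XMLDecoder reads the file back; generating Java source instead of XML would mean walking the same property information yourself.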
A: I am not aware of any libraries that will do this out of the box but you should be able to take one of the many object to XML serialisation libraries and customise the backend code to generate Java. Would probably not be much code.
For example a quick google turned up XStream. I've never used it but it seems to support multiple backends other than XML - e.g. JSON. You can implement your own writer and just write out the Java code needed to recreate the hierarchy.
I'm sure you could do the same with other libraries, in particular if you can hook into a SAX event stream.
See:
HierarchicalStreamWriter
A: Great question. I was thinking about serializing objects into java code to make testing easier. The use case would be to load some data into a db, then generate the code creating an object and later use this code in test methods to initialize data without the need to access the DB.
It is somewhat true that the object state doesn't contain enough info to know how it's been created and transformed; however, for simple Java beans there is no reason why this shouldn't be possible.
Do you feel like writing a small library for this purpose? I'll start coding soon!
A: XStream is a serialization library I used for serialization to XML. It should be possible and rather easy to extend it so that it writes Java code.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: What is the difference between mysql_real_escape_string and addslashes? mysql_real_escape_string and addslashes are both used to escape data before the database query, so what's the difference? (This question is not about parametrized queries/PDO/mysqli)
A: mysql_real_escape_string() has the added benefit of escaping text input correctly with respect to the character set of a database through the optional link_identifier parameter.
Character set awareness is a critical distinction. addslashes() will add a slash before every eight bit binary representation of each character to be escaped.
If you're using some form of multibyte character set it's possible, although probably only through poor design of the character set, that one or both halves of a sixteen or thirty-two bit character representation is identical to the eight bits of a character addslashes() would add a slash to.
In such cases you might get a slash added before a character that should not be escaped or, worse still, you might get a slash in the middle of a sixteen (or thirty-two) bit character which would corrupt the data.
If you need to escape content in database queries you should always use mysql_real_escape_string() where possible. addslashes() is fine if you're sure the database or table is using 7 or 8 bit ASCII encoding only.
A: string mysql_real_escape_string ( string $unescaped_string [, resource $link_identifier ] )
mysql_real_escape_string() calls MySQL's library function mysql_real_escape_string, which prepends backslashes to the following characters: \x00, \n, \r, \, ', " and \x1a.
string addslashes ( string $str )
Returns a string with backslashes before characters that need to be quoted in database queries etc. These characters are single quote ('), double quote ("), backslash (\) and NUL (the NULL byte).
They affect different characters. mysql_real_escape_string is specific to MySQL. Addslashes is just a general function which may apply to other things as well as MySQL.
A: case 1:
$str = "input's data";
print mysql_real_escape_string($str); // output: input\'s data
print addslashes($str);               // output: input\'s data
case 2:
$str = "input\'s data";
print mysql_real_escape_string($str); // output: input\'s data
print addslashes($str);               // output: input\\'s data
A: It seems that mysql_real_escape_string is binary-safe - the documentation states:
If binary data is to be inserted, this function must be used.
I think it's probably safer to always use mysql_real_escape_string than addslashes.
A: mysql_real_escape_string should be used when you are receiving binary data, addslashes is for text input.
You can see the differences here: mysql-real-escape-string and addslashes
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91216",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Checking if another web server is listening from asp I'm using Microsoft.XMLHTTP to get some information from another server from an old ASP/VBScript site. But that other server is restarted fairly often, so I want to check that it's up and running before trying to pull information from it (or avoid my page from giving an HTTP 500 by detecting the problem some other way).
How can I do this with ASP?
A: You could try making a ping to the server and check the response.
Take a look at this article.
A: All you need to do is have the code continue on error, then post to the other server and read the status from the post. Something like this:
PostURL = homelink & "CustID.aspx?SearchFlag=PO"
set xmlhttp = CreateObject("MSXML2.ServerXMLHTTP.3.0")
on error resume next
xmlhttp.open "POST", PostURL, false
xmlhttp.send ""
status = xmlhttp.status
if err.number <> 0 or status <> 200 then
if status = 404 then
Response.Write "ERROR: Page does not exist (404).<BR><BR>"
elseif status >= 401 and status < 402 then
Response.Write "ERROR: Access denied (401).<BR><BR>"
elseif status >= 500 and status <= 600 then
Response.Write "ERROR: 500 Internal Server Error on remote site.<BR><BR>"
else
Response.write "ERROR: Server is down or does not exist.<BR><BR>"
end if
else
'Response.Write "Server is up and URL is available.<BR><BR>"
getcustomXML = xmlhttp.responseText
end if
set xmlhttp = nothing
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91223",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: List the names of all the classes within a VS2008 project Is there a quick & dirty way of obtaining a list of all the classes within a Visual Studio 2008 (C#) project? There are quite a lot of them and I'm just lazy enough not to want to do it manually.
A: If you open the "Class View" dialogue (View -> Class View or Ctrl+W, C) you can get a list of all of the classes in your project which you can then select and copy to the clipboard. The copy will send the fully qualified (i.e. with complete namespace) names of all classes that you have selected.
A: I've had success using doxygen to generate documentation from the XML comments in my projects - a byproduct of this is a nice, hyperlinked list of classes.
A: Maybe you can write an XSL/XSLT to display only the class names from the XML generated by the XML documentation, if you have any.
A: I would probably build the assembly, and then use reflection to iterate over all the exported types... If you want some sample code you can find inspiration in http://www.timvw.be/presenting-assemblytypepicker/.
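A minimal sketch of that reflection route; the assembly path is a placeholder for your own project's build output:
using System;
using System.Reflection;

class ListClasses
{
    static void Main()
    {
        Assembly asm = Assembly.LoadFrom(@"bin\Debug\MyProject.dll");  // hypothetical path
        foreach (Type t in asm.GetExportedTypes())
        {
            if (t.IsClass)
                Console.WriteLine(t.FullName);   // namespace-qualified class name
        }
    }
}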
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Multiple keyboards and low-level hooks I have a system where I have multiple keyboards and really need to know which keyboard the key stroke is coming from. To explain the set up:
*
*I have a normal PC and USB keyboard
*I have an external VGA screen with some hard-keys
*The hard keys are mapped as a standard USB keyboard, sending a limited number of key-codes (F1, F2, Return, + and -)
I have a low-level hook (in C# but actually calling upon Win32 functionality) which is able to deal with the input even when my application is not focused.
The problem is that when using the normal keyboard, some of the mapped key-codes are picked up by the application being driven on the external screen. One of the key-presses sent by the external screen and used for confirmation is VK_RETURN. Unless I can identify the "device" and filter upon it, the user could be performing actions and confirming them on a screen they're not even looking at.
How do I know which keyboard was responsible for the key-press?
A: Yes I stand corrected, my bad, learning something new every day.
Here's my attempt at making up for it :) :
*
*Register the devices you want to use for raw input (the two keyboards) with ::RegisterRawInputDevices(); see the registration sketch after this list.
*You can get these devices from GetRawInputDeviceList()
*After you've registered your devices, you will start getting WM_INPUT messages.
*The lParam of the WM_INPUT message contains a RAWKEYBOARD structure that you can use to determine the keyboard where the input came from, plus the virtual keycode and the type of message (WM_KEYDOWN, WM_KEYUP, ...)
*So you can set a flag of where the last message came from and then dispatch it to the regular keyboard input handlers.
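A minimal C# sketch of the registration step; the constants and struct layout follow the Win32 headers, and the WM_INPUT/GetRawInputData handling is left out for brevity:
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct RAWINPUTDEVICE
{
    public ushort UsagePage;
    public ushort Usage;
    public uint Flags;
    public IntPtr Target;
}

static class RawInput
{
    const ushort HID_USAGE_PAGE_GENERIC = 0x01;
    const ushort HID_USAGE_GENERIC_KEYBOARD = 0x06;
    const uint RIDEV_INPUTSINK = 0x00000100;   // deliver input even when the window is not focused

    [DllImport("user32.dll", SetLastError = true)]
    static extern bool RegisterRawInputDevices(RAWINPUTDEVICE[] devices, uint numDevices, uint size);

    public static void RegisterKeyboards(IntPtr hwnd)
    {
        RAWINPUTDEVICE rid = new RAWINPUTDEVICE();
        rid.UsagePage = HID_USAGE_PAGE_GENERIC;
        rid.Usage = HID_USAGE_GENERIC_KEYBOARD;
        rid.Flags = RIDEV_INPUTSINK;
        rid.Target = hwnd;
        if (!RegisterRawInputDevices(new RAWINPUTDEVICE[] { rid }, 1,
                (uint)Marshal.SizeOf(typeof(RAWINPUTDEVICE))))
            throw new System.ComponentModel.Win32Exception();
    }
}
In your WndProc, handle WM_INPUT (0x00FF), call GetRawInputData to fill a RAWINPUT structure and compare its header.hDevice against the handles from GetRawInputDeviceList to tell the keyboards apart.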
A: No way to do this. Windows abstracts this for you. As mentioned, you need to write/modify a device driver.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: VS: Attribute for ignoring missing XML comments when building I have a VS2008 solution using xml documentation, and we have warnings as errors turned on for release mode (a nice feature IMHO); this results, however, in long lists of 'missing xml comment' errors for such things as every element of a (self describing) enum.
Does anyone know of an attribute or similar which switches off the requirement for XML comments? Ideally for some delimited area, not just one line (otherwise I could just put empty tags before every item, kind of defeating the purpose...)
Thanks!
A: Use #pragma warning disable.
More info here: http://msdn.microsoft.com/en-us/library/441722ys(VS.80).aspx
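The warning in question is CS1591 ("Missing XML comment for publicly visible type or member"), so a delimited area looks like this; the enum is just an example:
#pragma warning disable 1591   // CS1591: missing XML comment for publicly visible member
public enum Status
{
    Open,
    Closed,
    Pending
}
#pragma warning restore 1591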
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Scrum Burndown issues We have been using Scrum for around 9 months and it has largely been successful. However our burndown charts rarely look like the 'model' charts, instead resembling more of a terrifying rollercoaster ride with some vomit inducing climbs and drops.
To try and combat this we are spending more time before the sprint prototyping and designing, but we still seem to discover much more work during the sprint than initially thought. Note: by this I mean the work required to meet the backlog is more involved than first thought, rather than that we have identified new items for the backlog.
Is this a common problem with Scrum and does anyone have any tips to help smooth the ride?
I should point out that most of our development work is not greenfield, so we are maintaining functionality in an existing large and complex application. Is scrum less suited to this type of development simply because you don't know what problems the existing code is going to throw up?
Just how much time should we be spending before the sprint starts working out the detail of the development?
UPDATE: We are having more success and a smoother ride now. This is largely because we have taken a more pessimistic view when estimating, which is giving us more breathing space to deal with things when they don't go to plan. You could say it's allowing us to be more 'agile'. We are also trying to change the perception that the burn down chart is some kind of schedule rather than an indication of scope vs. resources.
A: I am happy to hear that scrum has been largely successful for you - that is more important than having the sprint burndown chart look ideal. The sprint burndown is just a tool for the team to help it know if it is on track for the sprint goals. if the team has been meeting the sprint goals, I would not worry too much that the chart looks like a roller coaster. A few suggestions
*
*During the sprint retrospective ask the team where the additional work is coming from
*Extra work can come from not having good acceptance tests early in the sprint
*Extra work can come from not having a well groomed backlog. A good rule of thumb is to spend at least 5% of the team's time thinking about the next sprint's stories ahead of time.
*Monitor work in progress - is the team doing too much in parallel?
*During sprint planning - how does the team feel about the breakdown of tasks that make up the stories?
If you have not been meeting sprint goals - use the established team velocity to take on less during the next sprint. You have to get good at walking before you can run.
A: In my experience, Scrum is definitely geared more towards new development than it is towards maintenance. New development is much more predictable than maintaining an old, large code base.
With that said, one possible problem is that you're not breaking up the tasks into small enough chunks. A common problem people have with software planning in general is that they think "oh, this task should take me 2 days" without really thinking about what goes into doing that task. Often, you'll find that if you sit down and think about it that task consists of doing A, B, C, and D and winds up being more than 2 days of work.
A: Some tips on smoothing things out.
1) As others have said - try and break down the tasks into smaller chunks. The more obvious way of doing this is to try and break down the technical tasks in greater detail. Where possible I'd encourage you to talk to the product owner and see if you can reduce scope or "thin" the story instead. I find the latter more effective. Juggling priorities and estimates is easier if both team and product owner understand what's being discussed.
My general rule of thumb is any estimate bigger than half an ideal day is probably wrong :-)
2) Try doing shorter sprints. If you're doing one month sprints - try two weeks. If you're doing two weeks - try one.
*
*It acts a limiter on story size - encouraging the product owner and the team to work on smaller stories that are easier to estimate accurately
*You get feedback more often about your estimates - and it's easier to see the connections between the decisions you made at the start of the sprint and what actually happened
*Everything gets better with practice :-)
3) Use the stand ups and retrospectives to look a bit more at the reasons for the ups and downs. Is it when you spend time with particular areas of the code base? Is it caused by folk misunderstanding the product owner? Random emergencies that take development time away from the team? Once you have more of an understanding where ups and downs are coming from you can often address those problems specifically. Again - shorter sprints can help make this more obvious.
4) Believe your history. You probably know this one... but I'll say it anyway :-) If fiddling with that ghastly legacy Foo package took 3x longer than you thought it would last sprint - then it will also take 3x as long as you think in the next sprint. No matter how much more effective you think you'll be this time ;-) Trust the history and use things like Yesterday's Weather to guide your estimates in the next sprint.
Hope this helps!
A: As others have said, I would expect a burndown to be up and down. Stuff happens! You should use the "up and down" bits as fodder for your retrospectives.
Make sure everyone is clear on what "being done" means, and use that joint understanding to help drive your planning sessions. Often having a list of what constitutes done available will (a) help you remember things you might forget and (b) will likely trigger more ideas for tasks that would otherwise surface later on.
One other point to think about - if you are working month on month with an unpredictable codebase, I would still expect your velocity to normalise out to a reasonably steady rate. Just track your success against your planned work and only use completed items as a maximum when planning. Then focus on your unplanned tasks and see if there are any patterns that suggest there are things you can do differently to include those things in the planned work.
A: I have had similar issues. My previous team (on it for over a year) was large and we maintained a very large, rapidly changing codebase for series of initial product launches. Our burndowns were shameful looking, but it was the best we could ever do.
One thing that may help (make your graph look better) is to keep the number of hours/points committed constant. If you have underestimated a task and have to double its hours, pull something out of the sprint. If you pull in a new task, it's obviously of higher priority than something your team committed to, so pull that other thing out.
We tried the breaking up the task into many tasks in and before planning, and that never seemed to help. In fact, it just gave us more damn tickets to keep track of during the sprint. Requirements started migrating to the tickets and (not surprisingly) got lost in all the shuffle.
On my new team we took a pretty radical approach and started creating big tickets (some over a week long) that say things like "implement v1.2 features in ProjectX." The requirements/feature lists for ProjectX (version 1.2 included) are kept on a wiki so the ticket is very clean and only tracks the work performed. This has helped us a lot - we have way fewer tickets to keep track of, and have been able to finish all our sprints even though we keep getting pulled off our sprint tasks to help other teams or put out fires.
We continue to push items out of the sprint if (and only if) we are forced (by the man) to bring in new items.
Another simple tip that helped us: add "total hours in sprint" to your burndown. This should be the sum of all estimates. Working on keeping this line flat may help, and increases visibility of the problems your team may be facing (assuming that won't get you demoted...)
-ab
A: I had similar problems in my burndown as well. I "fixed" it by refining what was included in the burndown.
SiKeep commented:
Its progress against the backlog selected for that sprint, which may or may not end up as a release.
Since you selected certain things for the sprint and that's what is on the burndown, I don't know that all the "new work" should appear in the burndown. I would see it going onto the backlog (not affecting the burndown), unless it's important enough to move into your current sprint (which would then show up as an upward trend in the burndown).
That said, minor up's and down's are normal, if the trendline basically follows your expected velocity. I would be concerned about the roller-coaster trend you're mentioning. However, the idea of isolating the burndown by only adding high priority items to the current sprint may help dampen these up and downs on your burndown.
As others have said, the planning before the sprint starts should be short...(no more than 4 hours).
A: We are using a 'time-boxed' task for unplanned tasks. Whenever high-priority work is coming, or sudden bugs pop up, we can use time of the time-box (but, we can never go under zero).
This method has the additional advantage that we can easily track which tasks were unforeseen, and keep those things into account during our next sprint planning.
A: You can integrate the new work at the sprint's start date, to have a great looking Burndown chart.
You can tag with a specific marker the additional work and evaluate at the sprint's end why you haven't be able to identify those tasks before.
A: We are now using a burn UP chart. Instead of just charting the amount of work left we chart two things: the amount of work completed and the total amount of work (ie. completed + outstanding).
This gives you two lines on the graph that should meet when all the work is done. It also has a big advantage in that it clearly shows when progress is slow because more work has been added.
If you like, the PO 'owns' one line (the total work) and the developers/testers 'own' the other line (work done).
The PO's line will go up and down as they add/remove work.
The dev/tester line will only go up as they complete work.
A: The article "Is it your burn down chart?" explains what a given status in a burndown chart means. It also provides suggestions on what to do about it.
Some examples are described in the article.
A: This is as it should be. If your burndown chart looks like the model chart, you're in trouble. The chart will help to see if you will be able to make you commitment and finish all the stories.
Discovering stories during the sprint will always happen. Ideally you would be able to design and find out the tasks up front, but if that worked, why wouldn't a big upfront design work?
To answer your last question, the sprint planning should take at most four hours.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
}
|
Q: Pause GNU Make in a Windows console if an error occurs Part of the install for an app I am responsible for compiles some C code libraries. This is done in a console using GNU Make.
So, as part of the install, a console window pops open, you see the make file output wiz by as it compiles and links, when finished the console window closes and the installer continues.
All good, unless there is a compilation error. Then the make file bugs out and the console window closes before you have a chance to figure out what is happening.
So, what I'd like to happen is have the console window pause with a 'press a key to continue' type functionality, if there is an error from the makefile so that the console stays open. Otherwise, just exit as normal and close the console.
I can't work out how to do this in a GNU Makefile or from a batch file that could run the Make.
A: this should do the trick:
if ERRORLEVEL 1 pause
type help if in DOS for more info on errorlevel usage.
A: This is what you're looking for:
if ERRORLEVEL 1 pause
If you type
HELP IF
you get this info: ERRORLEVEL number - Specifies a true condition if the last program run returned an exit code equal to or greater than the number specified.
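Putting it together for the installer, a small batch wrapper keeps the window open only when make fails; the makefile name is a placeholder:
@echo off
make -f Makefile
if ERRORLEVEL 1 (
    echo Build failed - see the output above.
    pause
)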
A: Using this simple C program to manipulate the exit code:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    if (argc == 2) {
        // return the integer value of argument 1 as the exit code
        return (int) strtol(argv[1], NULL, 10);
    }
    return 0;
}
We can test the exit code in a batch file like so:
test.exe 0
IF ERRORLEVEL 0 PAUSE
Condition: 0 => 0 == TRUE
When ERRORLEVEL = 0, the pause will occur because the logic is >= (greater-than-or-equal). This is important, as it's not immediately clear that the condition is not a == comparison.
Notice that substituting 1 => 0 will also be true, and thus the pause will occur as well. This is true for any positive number.
We can trigger the opposite effect only by going below 0:
test.exe -1
IF ERRORLEVEL 0 PAUSE
Condition: -1 => 0 == FALSE
Since an ERRORLEVEL of 1 typically means there is an error, and 0 no error, we can just increase the minimum in the comparison condition to get what we want like so:
test.exe 0
IF ERRORLEVEL 1 PAUSE
Condition: -1 => 1 == FALSE
Condition: 0 => 1 == FALSE
Condition: 1 => 1 == TRUE
In this example, the script will pause when ERRORLEVEL is 1 or higher.
Notice that this allows -1 exit codes the same as 0. What if one only wants 0 to not pause? We can use a separate syntax:
test.exe 0
IF NOT %ERRORLEVEL% EQU 0 PAUSE
Condition: -1 != 0 == TRUE
Condition: 0 != 0 == FALSE
Condition: 1 != 0 == TRUE
In this example, the script pauses if %ERRORLEVEL% is not 0. We can do this by using the EQU operator to first check if %ERRORLEVEL% EQU 0, then the NOT operator to get the opposite effect, equivalent to the != operator. However, I believe this only works on NT machines, not plain DOS.
References:
http://chrisoldwood.blogspot.ca/2013/11/if-errorlevel-1-vs-if-errorlevel-neq-0.html
http://ss64.com/nt/errorlevel.html
A: Have you tried the 'pause' command?
@echo off
echo hello world
pause
*
*more info on 'pause' : http://technet.microsoft.com/en-us/library/bb490965.aspx
*DOS Command Line reference A-Z : http://technet.microsoft.com/en-us/library/bb490890.aspx
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Any free alternative to Robohelp? Any free alternative to Robohelp? Prefer open source
Need some sort of online help authoring tool for an open source project.
A: Check out this list of free help authoring tools; there's bound to be something useful there.
A: Just remembered: depending on what you want to do, you can use doxygen (www.doxygen.org) or the free help tool from http://www.vizacc.com/.
A: I like Sphinx.
You write your documentation using a plain text markup format, reStructuredText.
Sphinx can create HTML, Windows HTML help and PDF (via Latex).
I have created end-user documentation for two projects using Sphinx, and have also used it to document a couple of Python packages (Sphinx has a lot of features around extracting documentation from Python modules, it was originally created to write the Python documentation).
It is very easy to get started, and you get professional-looking documentation with a minimal effort.
If you are used to the WYSIWYG way of text editing, using a plain-text markup format might take some getting used to. But I believe it will be worth the effort.
I also have some experience with DocBook (one of the end-user documentation projects mentioned was a DocBook-to-Sphinx migration). I prefer Sphinx over DocBook: plain text is more pleasant to work with than XML, and Sphinx has a simpler toolchain.
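To give a feel for how quick the setup is, the usual bootstrap is just a couple of commands; the docs directory name is only a convention:
pip install sphinx
sphinx-quickstart docs                                # answer the prompts; creates conf.py and index.rst
sphinx-build -b html docs docs/_build/html            # standalone HTML
sphinx-build -b htmlhelp docs docs/_build/htmlhelp    # input for the HTML Help (CHM) compiler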
See also:
*
*List of projects using Sphinx.
*PDF example
A: Are you trying to make just CHM or other output formats too?
Take a look at DocBook. You can make (from one source file) pdf, html and chm - and some others, too. I've used it in the past but it's not very easy or convenient to use. If you only want to output chm (and need to use a free solution), see if you can get away with using the htmlhelp workshop (http://www.microsoft.com/downloads/details.aspx?familyid=00535334-c8a6-452f-9aa0-d597d16580cc&displaylang=en).
A: If you are trying to build .chm based help, try chm-build
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
}
|
Q: Statistically removing erroneous values We have an application where users enter prices all day. These prices are recorded in a table with a timestamp and then used for producing charts of how the price has moved... Every now and then the user enters a price wrongly (e.g. puts in one zero too many or too few) which somewhat ruins the chart (you get big spikes). We've even put in an extra confirmation dialogue if the price moves by more than 20% but this doesn't stop them entering wrong values...
What statistical method can I use to analyse the values before I chart them to exclude any values that are way different from the rest?
EDIT: To add some meat to the bone. Say the prices are share prices (they are not but they behave in the same way). You could see prices moving significantly up or down during the day. On an average day we record about 150 prices and sometimes one or two are way wrong. Other times they are all good...
A: Calculate and track the standard deviation for a while. After you have a decent backlog, you can disregard the outliers by seeing how many standard deviations away they are from the mean. Even better, if you've got the time, you could use the info to do some naive Bayesian classification.
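A minimal C# sketch of that idea; the window of recent prices and the three-sigma threshold are arbitrary choices you would tune to your own data:
using System;
using System.Collections.Generic;
using System.Linq;

static class PriceFilter
{
    // true if newPrice is more than `sigmas` standard deviations from the mean of the recent prices
    public static bool IsOutlier(IList<double> recentPrices, double newPrice, double sigmas)
    {
        double mean = recentPrices.Average();
        double variance = recentPrices.Average(p => (p - mean) * (p - mean));
        double stdDev = Math.Sqrt(variance);
        if (stdDev == 0) return false;               // all recent prices identical - nothing to judge by
        return Math.Abs(newPrice - mean) > sigmas * stdDev;
    }
}

// e.g. skip (or flag) a point when IsOutlier(last150Prices, enteredPrice, 3.0) returns true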
A: That's a great question but may lead to quite a bit of discussion as the answers could be very varied. It depends on
*
*how much effort are you willing to put into this?
*could some answers genuinely differ by +/-20% or whatever test you invent? so will there always be need for some human intervention?
*and to invent a relevant test I'd need to know far more about the subject matter.
That being said the following are possible alternatives.
*
*A simple test against the previous value (or mean/mode of the previous 10 or 20 values) would be straightforward to implement
*The next level of complexity would involve some statistical measurement of all values (or the previous x values, or values from the last 3 months); a normal or Gaussian distribution would enable you to give each value a degree of certainty as to it being a mistake vs. accurate. This degree of certainty would typically be expressed as a percentage.
See http://en.wikipedia.org/wiki/Normal_distribution and http://en.wikipedia.org/wiki/Gaussian_function there are adequate links from these pages to help in programming these, also depending on the language you're using there are likely to be functions and/or plugins available to help with this
*
*A more advanced method could be to have some sort of learning algorithm that could take other parameters into account (on top of the last x values). A learning algorithm could take the product type or manufacturer into account, for instance, or even monitor the time of day or the user that has entered the figure. This option seems way over the top for what you need, however; it would require a lot of work to code it and also to train the learning algorithm.
I think the second option is the correct one for you. Using standard deviation (a lot of languages contain a function for this) may be a simpler alternative; this is simply a measure of how far the value has deviated from the mean of x previous values. I'd put the standard deviation option somewhere between options 1 and 2.
A: You could measure the standard deviation in your existing population and exclude those that are greater than 1 or 2 standard deviations from the mean?
It's going to depend on what your data looks like to give a more precise answer...
A: Or graph a moving average of prices instead of the actual prices.
A: Quoting from here:
Statisticians have devised several methods for detecting outliers. All the methods first quantify how far the outlier is from the other values. This can be the difference between the outlier and the mean of all points, the difference between the outlier and the mean of the remaining values, or the difference between the outlier and the next closest value. Next, standardize this value by dividing by some measure of scatter, such as the SD of all values, the SD of the remaining values, or the range of the data. Finally, compute a P value answering this question: If all the values were really sampled from a Gaussian population, what is the chance of randomly obtaining an outlier so far from the other values? If the P value is small, you conclude that the deviation of the outlier from the other values is statistically significant.
Google is your friend, you know. ;)
A: For your specific question of plotting, and your specific scenario of an average of 1-2 errors per day out of 150, the simplest thing might be to plot trimmed means, or the range of the middle 95% of values, or something like that. It really depends on what value you want out of the plot.
If you are really concerned with the true max and true min of a day's prices, then you have to deal with the outliers as outliers, and properly exclude them, probably using one of the outlier tests previously proposed (data point is x% more than the next point, or the last n points, or more than 5 standard deviations away from the daily mean). Another approach is to view what happens after the outlier. If it is an outlier, then it will have a sharp upturn followed by a sharp downturn.
If however you care about overall trend, plotting daily trimmed mean, median, 5% and 95% percentiles will portray history well.
Choose your display methods and how much outlier detection you need to do based on the analysis question. If you care about medians or percentiles, they're probably irrelevant.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Is WebRequest The Right C# Tool For Interacting With Websites? I'm writing a small tool in C# which will need to send and receive data to/from a website using POST and json formatting. I've never done anything like this before in C# (or any language really) so I'm struggling to find some useful information to get me started.
I've found some information on the WebRequest class in C# (specifically from here) but before I start diving into it, I wondered if this was the right tool for the job.
I've found plenty of tools to convert data into the json format but not much else, so any information would be really helpful here in case I end up down a dead end.
A: WebRequest and more specifically the HttpWebRequest class is a good starting point for what you want to achieve. To create the request you will use the WebRequest.Create and cast the created request to an HttpWebRequest to actually use it. You will then create your post data and send it to the stream like:
HttpWebRequest req = (HttpWebRequest)
WebRequest.Create("http://mysite.com/index.php");
req.Method = "POST";
req.ContentType = "application/x-www-form-urlencoded";
string postData = "var=value1&var2=value2";
req.ContentLength = postData.Length;
StreamWriter stOut = new
StreamWriter(req.GetRequestStream(),
System.Text.Encoding.ASCII);
stOut.Write(postData);
stOut.Close();
Similarly you can read the response back by using the GetResponse method which will allow you to read the resultant response stream and do whatever else you need to do. You can find more info on the class at:
http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.aspx
A: WebClient is sometimes easier to use than WebRequest. You may want to take a look at it.
For JSON deserialization you are going to want to look at the JavaScriptSerializer class.
WebClient example:
using (WebClient client = new WebClient ())
{
//manipulate request headers (optional)
client.Headers.Add (HttpRequestHeader.UserAgent, "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705;)");
//execute request and read response as string to console
using (StreamReader reader = new StreamReader(client.OpenRead(targetUri)))
{
string s = reader.ReadToEnd ();
Console.WriteLine (s);
}
}
Marked as wiki in case someone wants to update the code
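To round out the JavaScriptSerializer mention above, a minimal sketch of a JSON round-trip; the anonymous type and its field names are placeholders:
using System;
using System.Collections.Generic;
using System.Web.Script.Serialization;   // System.Web.Extensions assembly

class JsonRoundTrip
{
    static void Main()
    {
        JavaScriptSerializer serializer = new JavaScriptSerializer();
        string json = serializer.Serialize(new { var1 = "value1", var2 = 42 });   // object -> JSON string
        Dictionary<string, object> parsed =
            serializer.Deserialize<Dictionary<string, object>>(json);             // JSON string -> dictionary
        Console.WriteLine(json + " / " + parsed["var1"]);
    }
}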
A: When it comes to POSTing data to a web site, System.Net.HttpWebRequest (the HTTP-specific implementation of WebRequest) is a perfectly decent solution. It supports SSL, async requests and a bunch of other goodies, and is well-documented on MSDN.
The payload can be anything: data in JSON format or whatever -- as long as you set the ContentType property to something the server expects and understands (most likely application/json, text/json or text/x-json), all will be fine.
One potential issue when using HttpWebRequest from a system service: since it uses the IE proxy and credential information, default behavior may be a bit strange when running as the LOCALSYSTEM user (or basically any account that doesn't log on interactively on a regular basis). Setting the Proxy and Authentication properties to Nothing (or, as you C# folks prefer to call it, null, I guess) should avoid that.
A: I have used WebRequest for interacting with websites. It is the right 'tool'
I can't comment on the JSON aspect of your question.
A: To convert from instance object to json formatted string and vice-versa, try out Json.NET:
http://json.codeplex.com/
I am currently using it for a project and it's easy to learn and work with and offers some flexibility in terms of serializing and custom type converters. It also supports a LINQ syntax for querying json input.
A: The currently highest rated answer is helpful, but it doesn't send or receive JSON.
Here is an example that uses JSON for both sending and receiving:
How to post json object in web service
And here is the StackOverflow question that helped me most to solve this problem:
Problems sending and receiving JSON between ASP.net web service and ASP.Net web client
And here is another related question:
json call with C#
A: In 3.5 there is a built-in JSON serializer. WebRequest is the right class you're looking for.
A few examples:
*
*Link
*http://dev.aol.com/blog/markdeveloper/ShareFileWithNETFramework
*Link
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35"
}
|
Q: Best solution for using EJBs from Excel We would like to give access to some of our EJBs from Excel. The goal is to give an API usable from VBA.
Our EJBs are mostly Stateless Session Beans that do simple CRUD operations with POJOs.
Some possible solutions:
*
*Exposing the EJBs as WebServices and create a VB/C# dll wrapping them,
*Using Corba to access the EJBs from C#,
*Creating a COM Library that uses Java to access the EJBs,
Pointers to frameworks for these solution or other ideas are welcome.
A: You could take a look at IIOP.NET, which addresses this issue.
A: If you have a fairly recent EJB container, the cheapest and easiest option should be to expose your beans as web services and call them from VB/C#. This doesn't require any extra tool or library.
A: I work on an open source project called XLLoop - this framework allows you to expose POJO functions as Excel functions.
It consists of:
*
*An Excel add-in (XLL), which communicates over TCP to:
*A Java server/library, which invokes java methods.
You could embed this java function server in an EJB and have it deployed as part of your app server.
A: Back in the VB6/COM/DCOM times we used the suite J-Integra to accomplish this task. I have no experience with the .NET version though.
A: I highly recommend IKVM. It is a java byte code to .NET assembly compiler (i.e. JAR --> DLL) and I have used it to create live JMX links and listeners in an Excel automation server. It should not be difficult for you to create a .NET assembly of your EJB client stubs and supporting libraries.
//Nicholas
A: You could try Obba (I work on this project):
Obba is a Java object handler for spreadsheet applications.
It provides a bridge between spreadsheets and Java classes, such that spreadsheets can be used as a graphical user interface for Java libraries. Accessing your Java library from the spreadsheet requires no glue code (no VBA needed, no special Java code needed). Objects are instantiated by their original constructor. Constructors and methods are invoked using a "by name" reflection. A spreadsheet-specific factory method is not necessary. Obba provides the functions to handle objects in spreadsheets.
The Java virtual machine providing the add-in may run on the same computer or a remote computer - without any change to the spreadsheet, i.e., objects referenced in the spreadsheet can reside on a remote Java virtual machine.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to Programmatically Build a TemplateColumn How does one go about programmatically building a TemplateColumn object and adding it to a DataGrid? I know how to add it, but not how to build the contents of the TemplateColumn. There are no useful-looking methods on the ITemplate interface that the column class exposes.
A: You can use the CompiledTemplateBuilder class.
Here is an example:
http://iridescence.no/post/Using-Templated-Controls-Programmatically.aspx
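A minimal sketch of that technique (the grid, column and field names are made up): CompiledTemplateBuilder wraps a delegate that builds the template's controls, and you assign it to the TemplateColumn's ItemTemplate:
using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class GridPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        TemplateColumn column = new TemplateColumn();
        column.HeaderText = "Name";

        // CompiledTemplateBuilder implements ITemplate around a delegate that is
        // called for each row to instantiate the template's controls.
        column.ItemTemplate = new CompiledTemplateBuilder(delegate(Control container)
        {
            Label label = new Label();
            label.DataBinding += delegate(object s, EventArgs args)
            {
                Label target = (Label)s;
                DataGridItem row = (DataGridItem)target.NamingContainer;
                target.Text = DataBinder.Eval(row.DataItem, "Name").ToString();
            };
            container.Controls.Add(label);
        });

        // MyDataGrid is assumed to be a DataGrid declared on the page.
        MyDataGrid.Columns.Add(column);
    }
}
Note that programmatically added columns are not kept in view state, so they need to be re-added on every request.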
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Ruby on Rails Migration - Create New Database Schema I have a migration that runs an SQL script to create a new Postgres schema. When you create a new database, Postgres by default creates a schema called 'public', which is the main schema we use. The migration to create the new schema seems to be working fine; however, the problem occurs after the migration has run: when Rails tries to update the 'schema_info' table that it relies on, it says that the table does not exist, as if it is looking for it in the new schema rather than in the default 'public' schema where the table actually is.
Does anybody know how I can tell rails to look at the 'public' schema for this table?
Example of SQL being executed:
CREATE SCHEMA new_schema;
COMMENT ON SCHEMA new_schema IS 'this is the new Postgres database schema to sit along side the "public" schema';
-- various tables, triggers and functions created in new_schema
Error being thrown:
RuntimeError: ERROR C42P01 Mrelation "schema_info" does not exist
L221 RRangeVarGetRelid: UPDATE schema_info SET version = ??
Thanks for your help
Chris Knight
A: Well, that depends on what your migration looks like, what your database.yml looks like, and what exactly you are trying to attempt. More information is needed: change the names if you have to, and post an example database.yml and the migration. Does the migration change the search_path for the adapter, for example?
But know that, in general, Rails and PostgreSQL schemas don't work well together (yet?).
There are a few places which have problems. Try to build an app that uses only one PostgreSQL database with two non-default schemas, one for dev and one for test, and tell me about it. (From the following I can already tell you that you will get burned.)
Maybe it was fixed since the last time I played with it, but when I see http://rails.lighthouseapp.com/projects/8994/tickets/390-postgres-adapter-quotes-table-name-breaks-when-non-default-schema-is-used or http://rails.lighthouseapp.com/projects/8994/tickets/918-postgresql-tables-not-generating-correct-schema-list or this in postgresql_adapter.rb:
# Drops a PostgreSQL database
#
# Example:
# drop_database 'matt_development'
def drop_database(name) #:nodoc:
execute "DROP DATABASE IF EXISTS #{name}"
end
(Yes, this is wrong if you use the same database with different schemas for both dev and test; it would drop the database, taking both environments with it, each time you run the unit tests!)
I actually started writing patches. The first one was for the indexes method in the adapter, which didn't care about the search_path and ended up with duplicated indexes in some conditions. Then I started getting hurt by the rest and ended up abandoning the idea of using schemas: I wanted to get my app done and I didn't have the extra time needed to fix the problems I had using schemas.
A: I'm not sure I understand what you're asking exactly, but rake will be expecting to write the version of the Rails schema into the schema_info table. Check your database.yml config file; this is where rake will be looking to find the table to update.
Is it a possibility that you are migrating to a new Postgres schema and rake is still pointing to the old one? I'm not sure then that a standard Rails migration is what you need. It might be best to create your own rake task instead.
Edit: If you're referencing two different databases or Postgres schemas, Rails doesn't support this in standard migrations. Rails assumes one database, so migrating from one database to another is usually not possible. When you run "rake db:migrate" it actually looks at the RAILS_ENV environment variable to find the correct entry in database.yml. If rake starts the migration looking at the "development" environment and database config from database.yml, it will expect to update that environment's schema_info at the end of the migration.
So, you'll probably need to do this from outside the Rails stack as you can't reference two databases at the same time within Rails. There are attempts at plugins to allow this, but they're majorly hacky and don't work properly.
A: You can use pg_power. It provides an additional DSL for migrations to create PostgreSQL schemas, among other things.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91289",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: .NET's SslStream is always negotiating to the least secure cipher I have. How can I change this? SslStream is supposed to negotiate the cipher type, key length, hash algorithm, etc. with its peer SSL stack. When using it in my code, I find that the negotiation always defaults to RC4 & MD5. I would like to use 3DES or AES for some added security.
Looking around the web I find only a few references to this problem and no solutions; one poster claims this actually makes sense, since the lowest common denominator between the two stacks is secure while having the added benefit of being faster and using fewer CPU resources. While this may be technically correct, my particular trade-off between complexity and cost lies elsewhere (I prefer to use AES with a long key).
If anyone can help I'd appreciate it.
A: SslStream uses the Schannel provider supplied with the operating system.
The suites are listed in the default order in which they are chosen by the Microsoft Schannel Provider for:
Windows Vista:
RSA WITH AES_128 CBC SHA
RSA WITH AES_256 CBC SHA
RSA WITH RC4_128 SHA
...
Windows XP:
RSA WITH RC4 128 MD5
RSA WITH RC4 128 SHA
RSA WITH 3DES CBC SHA
....
You can also modify the list of cipher suites by configuring the SSL Cipher Suite Order group policy setting using the Group Policy Object snap-in in Microsoft Management Console (Windows Vista).
But the issue is that Windows XP doesn't include AES in the list of ciphers available for SslStream.
However, it's possible to change a registry setting in Windows XP:
Set HKLM\System\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy to 1 to get the 3DES cipher.
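One way to verify what Schannel actually negotiated, before or after any registry changes, is to inspect the SslStream after the handshake; a small sketch (the host name is just a placeholder):
using System;
using System.Net.Security;
using System.Net.Sockets;

class CipherCheck
{
    static void Main()
    {
        using (TcpClient client = new TcpClient("example.com", 443))
        using (SslStream ssl = new SslStream(client.GetStream()))
        {
            ssl.AuthenticateAsClient("example.com");

            // These properties report the suite Schannel settled on.
            Console.WriteLine("Cipher:       {0} ({1} bit)", ssl.CipherAlgorithm, ssl.CipherStrength);
            Console.WriteLine("Hash:         {0}", ssl.HashAlgorithm);
            Console.WriteLine("Key exchange: {0}", ssl.KeyExchangeAlgorithm);
        }
    }
}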
A: You can select which protocols and ciphers are available by making some simple registry changes. We remove the ability to select RC4, for example. You only need to make the change at one end of the connection (e.g. the server) because the client and server negotiate to find a commonly supported algorithm.
http://msdn.microsoft.com/en-us/library/ms925716.aspx
Best wishes
James
A: It should be using the most secure set of algorithms that were in both lists. I find it hard to believe that it isn't, because SslStream is wrapping the SChannel SSPI, and if that were broken then Internet Explorer, IIS and everything else on Windows would be broken too.
It could be that you have an outdated version of SChannel.dll/Secur32.dll. What OS and Internet Explorer version do you have installed?
It is possible to disable protocols in SCHANNEL. Could you check that this hasn't been done?
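If you would rather check programmatically than with regedit, a rough sketch is to look at the per-cipher 'Enabled' values under the usual SCHANNEL key (the key layout below is the standard one, but verify it against your OS version; a missing value means the default applies):
using System;
using Microsoft.Win32;

class SchannelCheck
{
    static void Main()
    {
        const string ciphersPath = @"SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers";

        using (RegistryKey ciphers = Registry.LocalMachine.OpenSubKey(ciphersPath))
        {
            if (ciphers == null)
            {
                Console.WriteLine("No per-cipher overrides present; Schannel defaults apply.");
                return;
            }

            foreach (string name in ciphers.GetSubKeyNames())
            {
                using (RegistryKey cipher = ciphers.OpenSubKey(name))
                {
                    // An Enabled value of 0 means the cipher has been disabled.
                    object enabled = cipher.GetValue("Enabled");
                    Console.WriteLine("{0}: Enabled = {1}", name, enabled ?? "(not set)");
                }
            }
        }
    }
}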
A: I'm using XP SP3 and IE7 with all updates. The registry seems configured with everything enabled.
A: In Java you can order the various algorithms/ciphers according to your needs and preferences. Maybe there is a similar API in .NET...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/91304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|