Q: Different values of GetHashCode for inproc and stateserver session variables I've recently inherited an application that makes very heavy use of session, including storing a lot of custom data objects in session. One of my first points of business with this application was to at least move the session data away from InProc, and off load it to either a stateserver or SQL Server.
After I made all of the appropriate data objects serializable, and changed the web.config to use a state service, everything appeared to work fine.
However, I found that this application does a lot of object comparisons using GetHashCode(). Methods that worked fine when the session was InProc no longer work, because the hash codes no longer match when they are supposed to. This appears to be the case when trying to find a specific child object from a parent when you know the child object's original hash code.
If I simply change the web.config back to using inproc, it works again.
Anyone have any thoughts on where to begin with this?
EDIT:
qbeuek: thanks for the quick reply. In regards to:
The default implementation of GetHashCode in the Object class returns a hash value based on the object's address in memory or something similar. If some other identity comparison is required, you have to override both Equals and GetHashCode.
I should have given more information on how they are using this. Basically, they have one parent data object, and there are several arrays of child objects. They happen to know the hash code for a particular object they need, so they are looping through a specific array of child objects looking for a hash code that matches. Once a match is found, they then use that object for other work.
A: When you write
does a lot of object comparisons using GetHashCode()
I sense there is something horribly wrong with this code. The GetHashCode method does not guarantee that the returned hash values are in any way unique given two different objects. As far as GetHashCode is concerned, it can return 0 for all objects and still be considered correct.
When two objects are the same (the Equals method returns true), they MUST have the same value returned from GetHashCode. When two objects have the same hash value, they can be the same object (Equals returns true) or be different objects (Equals returns false).
There are no other guarantees on the result of GetHashCode.
The default implementation of GetHashCode in the Object class returns a hash value based on the object's address in memory or something similar. If some other identity comparison is required, you have to override both Equals and GetHashCode.
A: Override the GetHashCode method in the classes this method is called on, and calculate the hash code based on unique object properties (like an ID, or all of the object's fields).
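For illustration, a minimal sketch of such an override pair, keyed on a stable identifier (the ChildData class and its Id property are hypothetical stand-ins for the real data objects):
public class ChildData
{
    // Hypothetical stable identity field; substitute whatever uniquely identifies the object.
    public int Id { get; private set; }

    public ChildData(int id) { Id = id; }

    public override bool Equals(object obj)
    {
        ChildData other = obj as ChildData;
        return other != null && other.Id == Id;
    }

    // Objects that compare Equal must return the same hash code; unlike the
    // default Object.GetHashCode, this value survives serialization round-trips.
    public override int GetHashCode()
    {
        return Id.GetHashCode();
    }
}
With overrides like these, the existing "loop until the hash code matches" code keeps working across session state modes, although matching on an ID directly would still be cleaner.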
A: Solution 1: Create a unique ID for all the child objects and use that instead of hash code.
Solution 2: Replace if (a.GetHashCode() == b.GetHashCode()) with if (a.Equals(b)).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: What is the inversion of the Shunting Yard algorithm? Dijkstra's Shunting Yard algorithm is used to parse an infix notation and generate RPN output.
I am looking for the opposite, a way to turn RPN into highschool-math-class style infix notation, in order to represent RPN expressions from a database to lay users in an understandable way.
Please save your time and don't cook up the algorithm yourselves, just point me to textbook examples that I can't seem to find. Working backwards from the Shunting Yard algorithm and using my knowledge about the notations I'll probably be able to work up a solution. I'm just looking for a quick shortcut, so I don't have to reinvent the wheel.
Oh, and please don't tag this as "homework", I swear I'm out of school already! ;-)
A: Since RPN is also known as postfix notation, I tried googling "convert postfix to infix" and got quite a few results. The first several have code examples, but I found the RubyQuiz entry particularly enlightening.
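For reference, the core of the conversion is a single stack pass: push operands, and on each operator pop two operands and push the parenthesized combination. A minimal C# sketch of my own (assuming binary operators and space-separated tokens; it parenthesizes everything rather than consulting precedence):
using System;
using System.Collections.Generic;

static class RpnToInfix
{
    static readonly HashSet<string> Operators =
        new HashSet<string> { "+", "-", "*", "/", "^" };

    public static string Convert(string rpn)
    {
        var stack = new Stack<string>();
        foreach (string token in rpn.Split(' '))
        {
            if (Operators.Contains(token))
            {
                // Pop order matters: the right operand was pushed last.
                string right = stack.Pop();
                string left = stack.Pop();
                stack.Push("(" + left + " " + token + " " + right + ")");
            }
            else
            {
                stack.Push(token); // operand
            }
        }
        return stack.Pop();
    }

    static void Main()
    {
        Console.WriteLine(Convert("3 4 2 * +")); // prints (3 + (4 * 2))
    }
}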
A: If you're not worried about removing redundant parentheses, then the following Lisp code will work:
(defun rpn-to-inf (pre)
(if (atom pre)
pre
(cond ((eq (car (last pre)) 'setf)
(list (rpn-to-inf (first pre)) '= (rpn-to-inf (second pre))))
((eq (car (last pre)) 'expt)
(list (rpn-to-inf (first pre)) '^ (rpn-to-inf (second pre))))
(t (list (rpn-to-inf (first pre))
(car (last pre))
(rpn-to-inf (second pre)))))))
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: TimeStamp in Control File I have a script that takes a table name and generates a control file by querying all the columns/rows of the table. This works fine for numeric and character data but fails on timestamp data, so I need to adjust the script to output the timestamp data into the control file in such a way that it can be read in properly.
So essentially, my question is how to format timestamp data in a control file so that it can be loaded into a TIMESTAMP column.
A: You need to use to_date (or to_timestamp, if you need to preserve fractional seconds) in your column listing. Something like:
LOAD DATA
INFILE *
INTO TABLE some_table
FIELDS TERMINATED BY ","
( col1,
col2 "to_date(:col2, 'YYYY-MM-DD HH24:MI:SS')"
)
BEGINDATA
foo,2008-09-17 13:00:00
bar,2008-09-17 13:30:05
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Debugging LINQ to SQL SubmitChanges() I am having a really hard time attempting to debug LINQ to SQL and submitting changes.
I have been using http://weblogs.asp.net/scottgu/archive/2007/07/31/linq-to-sql-debug-visualizer.aspx, which works great for debugging simple queries.
I'm working in the DataContext Class for my project with the following snippet from my application:
JobMaster newJobToCreate = new JobMaster();
newJobToCreate.JobID = 9999;
newJobToCreate.ProjectID = "New Project";
this.UpdateJobMaster(newJobToCreate);
this.SubmitChanges();
I will catch some very odd exceptions when I run this.SubmitChanges():
Index was outside the bounds of the array.
The stack trace goes places I cannot step into:
at System.Data.Linq.IdentityManager.StandardIdentityManager.MultiKeyManager`3.TryCreateKeyFromValues(Object[] values, MultiKey`2& k)
at System.Data.Linq.IdentityManager.StandardIdentityManager.IdentityCache`2.Find(Object[] keyValues)
at System.Data.Linq.IdentityManager.StandardIdentityManager.Find(MetaType type, Object[] keyValues)
at System.Data.Linq.CommonDataServices.GetCachedObject(MetaType type, Object[] keyValues)
at System.Data.Linq.ChangeProcessor.GetOtherItem(MetaAssociation assoc, Object instance)
at System.Data.Linq.ChangeProcessor.BuildEdgeMaps()
at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode)
at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
at System.Data.Linq.DataContext.SubmitChanges()
at JobTrakDataContext.CreateNewJob(NewJob job, String userName) in D:\JobTrakDataContext.cs:line 1119
Does anyone have any tools or techniques they use? Am I missing something simple?
EDIT:
I've set up .NET debugging using Slace's suggestion; however, the .NET 3.5 code is not yet available: http://referencesource.microsoft.com/netframework.aspx
EDIT2:
I've changed to InsertOnSubmit as per sirrocco's suggestion, still getting the same error.
EDIT3:
I've implemented Sam's suggestions, trying to log the generated SQL and to catch the ChangeConflictException. These suggestions do not shed any more light: I never actually get as far as generating SQL before my exception is thrown.
EDIT4:
I found an answer that works for me below. It's just a theory, but it has fixed my current issue.
A: First, thanks everyone for the help, I finally found it.
The solution was to drop the .dbml file from the project, add a blank .dbml file and repopulate it with the tables needed for my project from the 'Server Explorer'.
I noticed a couple of things while I was doing this:
*
*There are a few tables in the system named with two words and a space in between the words, i.e. 'Job Master'. When I pulled that table back into the .dbml file it would create a table called 'Job_Master', replacing the space with an underscore.
*In the original .dbml file one of my developers had gone through and removed all of the underscores, so 'Job_Master' would become 'JobMaster' in the .dbml file. In code we could then refer to the table in a more, for us, standard naming convention.
*My theory is that somewhere the translation from 'JobMaster' to 'Job Master' was lost while doing the projection, and I kept coming up with the array out of bounds error.
It is only a theory. If someone can better explain it I would love to have a concrete answer here.
A: My first debugging action would be to look at the generated SQL:
JobMaster newJobToCreate = new JobMaster();
newJobToCreate.JobID = 9999;
newJobToCreate.ProjectID = "New Project";
this.UpdateJobMaster(newJobToCreate);
this.Log = Console.Out; // prints the SQL to the debug console
this.SubmitChanges();
The second would be to capture the ChangeConflictException and have a look at the details of failure.
try
{
    db.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException e)
{
    Console.WriteLine("Optimistic concurrency error.");
    Console.WriteLine(e.Message);
    Console.ReadLine();
    foreach (ObjectChangeConflict occ in db.ChangeConflicts)
    {
        MetaTable metatable = db.Mapping.GetTable(occ.Object.GetType());
        Customer entityInConflict = (Customer)occ.Object;
        Console.WriteLine("Table name: {0}", metatable.TableName);
        Console.Write("Customer ID: ");
        Console.WriteLine(entityInConflict.CustomerID);
        foreach (MemberChangeConflict mcc in occ.MemberConflicts)
        {
            object currVal = mcc.CurrentValue;
            object origVal = mcc.OriginalValue;
            object databaseVal = mcc.DatabaseValue;
            MemberInfo mi = mcc.Member;
            Console.WriteLine("Member: {0}", mi.Name);
            Console.WriteLine("current value: {0}", currVal);
            Console.WriteLine("original value: {0}", origVal);
            Console.WriteLine("database value: {0}", databaseVal);
        }
    }
}
A: You can create a partial class for your DataContext and use the OnCreated partial method to set up the log to Console.Out, wrapped in an #if DEBUG. This will help you see the queries executed while debugging any instance of the DataContext you are using.
I have found this useful while debugging LINQ to SQL exceptions.
partial void OnCreated()
{
#if DEBUG
this.Log = Console.Out;
#endif
}
A: I always found it useful to know exactly what changes are being sent to the DataContext in the SubmitChanges() method.
I use the DataContext.GetChangeSet() method; it returns a ChangeSet instance that holds three read-only ILists of the objects which have either been added, modified, or removed.
You can place a breakpoint just before the SubmitChanges method call, and add a Watch (or Quick Watch) containing:
ctx.GetChangeSet();
Where ctx is the current instance of your DataContext, and then you'll be able to track all the changes that will be effective on the SubmitChanges call.
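For illustration, a sketch of what that watch (or a quick debug print) amounts to, using the Inserts/Updates/Deletes lists that ChangeSet exposes (ctx is the current DataContext instance, as above):
ChangeSet changeSet = ctx.GetChangeSet();
Console.WriteLine("Inserts: {0}, Updates: {1}, Deletes: {2}",
    changeSet.Inserts.Count,
    changeSet.Updates.Count,
    changeSet.Deletes.Count);
// Each list holds the actual entity instances that will be affected.
foreach (object entity in changeSet.Updates)
{
    Console.WriteLine("Will update: {0}", entity);
}
ctx.SubmitChanges();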
A: The error you are referring to above is usually caused by associations pointing in the wrong direction. This happens very easily when manually adding associations to the designer since the association arrows in the L2S designer point backwards when compared to data modelling tools.
It would be nice if they threw a more descriptive exception, and maybe they will in a future version. (Damien / Matt...?)
A: This is what I did
...
var builder = new StringBuilder();
try
{
context.Log = new StringWriter(builder);
context.MY_TABLE.InsertAllOnSubmit(someData);
context.SubmitChanges();
}
finally
{
Log.InfoFormat("Some meaningful message here... ={0}", builder);
}
A: A simple solution could be to run a trace on your database and inspect the queries run against it, filtered of course to sort out other applications etc. accessing the database.
That of course only helps once you get past the exceptions...
A: VS 2008 has the ability to debug though the .NET framework (http://blogs.msdn.com/sburke/archive/2008/01/16/configuring-visual-studio-to-debug-net-framework-source-code.aspx)
This is probably your best bet; you can see what's happening and what all the properties are at the exact point in time.
A: Why do you do UpdateJobMaster on a new instance ? Shouldn't it be InsertOnSubmit ?
JobMaster newJobToCreate = new JobMaster();
newJobToCreate.JobID = 9999;
newJobToCreate.ProjectID = "New Project";
this.InsertOnSubmit(newJobToCreate);
this.SubmitChanges();
A: This almost certainly won't be everyone's root cause, but I encountered this exact same exception in my project - and found that the root cause was that an exception was being thrown during construction of an entity class. Oddly, the true exception is "lost" and instead manifests as an ArgumentOutOfRange exception originating at the iterator of the Linq statement that retrieves the object/s.
If you are receiving this error and you have introduced OnCreated or OnLoaded methods on your POCOs, try stepping through those methods.
A: Hrm.
Taking a WAG (Wild Ass Guess), it looks to me like LINQ to SQL is trying to find an object with an id that doesn't exist, based somehow on the creation of the JobMaster class. Are there foreign keys related to that table such that LINQ to SQL would attempt to fetch an instance of a class which may not exist? You seem to be setting the ProjectID of the new object to a string - do you really have an id that's a string? If you're trying to set it to a new project, you'll need to create a new project and get its id.
Lastly, what does UpdateJobMaster do? Could it be doing something such that the above would apply?
A: We have actually stopped using the Linq to SQL designer for our large projects and this problem is one of the main reasons. We also change a lot of the default values for names, data types and relationships and every once in a while the designer would lose those changes. I never did find an exact reason, and I can't reliably reproduce it.
That, along with the other limitations caused us to drop the designer and design the classes by hand. After we got used to the patterns, it is actually easier than using the designer.
A: I posted a similar question earlier today here: Strange LINQ Exception (Index out of bounds).
It's a different use case - where this bug happens during a SubmitChanges(), mine happens during a simple query, but it is also an Index out of range error.
Cross posting in this question in case the combination of data in the questions helps a good Samaritan answer either :)
A: Check that all the "primary key" columns in your dbml actually relate to the primary keys on the database tables. I just had a situation where the designer decided to put an extra PK column in the dbml, which meant LINQ to SQL couldn't find both sides of a foreign key when saving.
A: I recently encountered the same issue: what I did was
Proce proces = unit.Proces.Single(u => u.ProcesTypeId == (from pt in context.ProcesTypes
where pt.Name == "Fix-O"
select pt).Single().ProcesTypeId &&
u.UnitId == UnitId);
Instead of:
Proce proces = context.Proces.Single(u => u.ProcesTypeId == (from pt in context.ProcesTypes
where pt.Name == "Fix-O"
select pt).Single().ProcesTypeId &&
u.UnitId == UnitId);
Where context was obviously the DataContext object and "unit" an instance of Unit object, a Data Class from a dbml file.
Next, I used the "proce" object to set a property in an instance of another Data Class object. Probably the LINQ engine could not check whether the property I was setting from the "proce" object was allowed in the INSERT command it was going to have to create to add the other Data Class object to the database.
A: I had the same unhelpful error.
I had a foreign key relation to a column of a table that was not the primary key of the table, but a unique column.
When I changed the unique column to be the primary key of the table the problem went away.
Hope this helps anyone!
A: Posted my experiences with this exception in an answer to SO# 237415
A: I ended up on this question when trying to debug my LINQ ChangeConflictException. In the end I realized the problem was that I manually added a property to a table in my DBML file, but I forgot to set the properties like Nullable (should have been true in my case) and Server Data Type
Hope this helps someone.
A: This is a long time ago, but I had the same problem and the error was because of a trigger with a select statement. Something like
CREATE TRIGGER NAME ON TABLE1 AFTER UPDATE AS SELECT table1.key from table1
inner join inserted on table1.key = inserted.key
When LINQ to SQL runs the update command, it also runs a select statement in the same query to receive the auto-generated values, expecting the first record set to contain the columns asked for; but in this case the first row held the columns from the select statement in the trigger. So LINQ to SQL was expecting two auto-generated columns but only received one column (with the wrong data), and that was causing this exception.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
}
|
Q: Membership bulk email software We have a Microsoft web stack web site, we have a members database.
We want to start doing mass-emails to our (opted in) membership.
I don't particularly want to re-invent the wheel with a system for having a web form submit and then mass send emails looping through thousands of records without timing out and overloading the server...
I guess I'm looking for something like Mailman that can run on a Windows (ASP/ASP.NET + SQL Server) backend and do the work for me, with suitable APIs to manage the subscriber list and send emails to that list.
Suggestions please?
A: I agree with acrosman, third parties that host email lists are a good way to go. A very reliable site I've found for mass emailing is http://mailing-list-services.com/. They do a good job to make sure their servers are never blacklisted or marked as spam. I've used them a few times; their website design blows, but their service is awesome. The Lyris ListManager software they use has a pretty extensive API.
A: Advanced Intellect has some great tools, like aspNetEmail and ListNanny.
A: MaxBulkMailer might be a solution for you? The organisation I work for uses it to connect to www.authsmtp.com which gives us credits for a certain number of e-mails that we can send per month. You can import a spreadsheet of your mailing list or tap straight into a SQL server and pull the names and addresses. Available for Mac and Windows.
A: (not a sales pitch)
my company offers mail manager, but it's a hosted service. It has a full API though.
A: You can also check out how DotNetNuke does this
A: Unless you're running a business that specializes in email, I'd suggest you find a hosted solution. There are hundreds of little issues that come up when you run your own service over time. A hosted solution can save you lots of time and effort (and therefore money).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86697",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Does DataGrid on CE 5.0 Compact Framework .NET support editing? I am trying to get a DataGrid under CE 5.0 / .NET CF 2.0 that a user can edit. The document at http://msdn.microsoft.com/en-us/library/ms838165.aspx indicates that some environments do not support editing -
As there is no native support for editing in the DataGrid control, this needs to be implemented manually
Do I need to implement this ugly example - which doesn't work very well as shown?
The documentation is not clear about which .NET features are available on which platform.
A: No, it is not directly editable. MSDN has samples for using the DataGrid, including suggestions for data editing, for both Pocket PC and Smartphone devices. Either one would be a reasonable start for a generic CE device, but the general strategy is to determine which cell is active and place a textbox over it for editing capability.
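A rough, untested sketch of that overlay strategy (dataGrid1 and editBox are placeholder names; it assumes the CF DataGrid exposes CurrentCell, GetCellBounds and the row/column indexer the way the desktop control and the MSDN samples do):
private TextBox editBox;

private void dataGrid1_Click(object sender, EventArgs e)
{
    DataGridCell cell = dataGrid1.CurrentCell;
    Rectangle bounds = dataGrid1.GetCellBounds(cell.RowNumber, cell.ColumnNumber);

    if (editBox == null)
    {
        editBox = new TextBox();
        editBox.KeyDown += editBox_KeyDown;
        dataGrid1.Controls.Add(editBox);
    }

    // Position the textbox over the active cell and load the cell's value.
    editBox.Bounds = bounds;
    editBox.Text = Convert.ToString(dataGrid1[cell.RowNumber, cell.ColumnNumber]);
    editBox.Visible = true;
    editBox.Focus();
}

private void editBox_KeyDown(object sender, KeyEventArgs e)
{
    if (e.KeyCode == Keys.Enter)
    {
        // Commit the edit back to the underlying cell and hide the overlay.
        DataGridCell cell = dataGrid1.CurrentCell;
        dataGrid1[cell.RowNumber, cell.ColumnNumber] = editBox.Text;
        editBox.Visible = false;
    }
}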
You might also look at Eric Hartwell's extensions for the DataGrid as well as the CF datagrid on CodeProject.
Of course the usual suspects like Resco and ComponentOne also have commercial offerings.
A: Unfortunately the DataGrid does not support editing. What I do, is use it for displaying read only data and then provide a separate form for editing the selected record.
If you must support editing in a grid layout, then I would look for a third party component.
A: http://msdn.microsoft.com/en-us/library/ms838165.aspx
Guys, it can be done. Just follow it. It's so simple once it's done ;P
A: The link http://msdn.microsoft.com/en-us/library/ms838165.aspx was really helpful but I think that there is a bug.
Instead of
DataTable dataTable = (DataTable)grdOrders.DataSource;
DataView dataView = dataTable.DefaultView;
It should be
DataView dataView = (DataView)grdOrders.DataSource;
DataTable dataTable = dataView.Table;
A: Aye, in my opinion it's a joke. God knows why they've left it like this since 1.0.
On my first ever CF project I assumed it was available and spent an age implementing a read-write one, because I'd put it in the agreed spec.... :(
I learnt a very important lesson that day: don't expect anything from the full framework to be in the CF (or rather, check before quoting!).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Some sort of creational pattern needed in C# I have the following type :
// incomplete class definition
public class Person
{
private string name;
public string Name
{
get { return this.name; }
}
}
I want this type to be created and updated with some sort of dedicated controller/builder, but I want it to remain read-only for other types.
This object also needs to fire an event every time it is updated by its controller/builder.
To summarize, according to the previous type definition skeleton:
*
*The Person could only be instantiated by a specific controller
*This controller could update the state of the Person (name field) at any time
*The Person needs to send a notification to the rest of the world when it occurs
*All other types should only be able to read Person attributes
How should I implement this ? I'm talking about a controller/builder here, but all others solutions are welcome.
Note: I could rely on the internal modifier, but ideally all my stuff should stay in the same assembly (where internal offers no protection).
A: Create an interface IReadOnlyPerson which exposes only get accessors. Have Person implement IReadOnlyPerson. Store the reference to Person in your controller. Give other clients only the read only version.
This will protect against mistakes, but not fraud, as with most OO features. Clients can runtime cast to Person if they happen to know (or suspect) IReadOnlyPerson is implemented by Person.
Update, per the comment:
The Read Only interface may also expose an event delegate, just like any other object. The idiom generally used in C# doesn't prevent clients from messing with the list of listeners, but convention is only to add listeners, so that should be adequate. Inside any set accessor or function with state-changing side effects, just call the event delegate with a guard for the null (no listeners) case.
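A minimal sketch of that shape (names are illustrative):
public interface IReadOnlyPerson
{
    string Name { get; }
    event EventHandler NameChanged;
}

public class Person : IReadOnlyPerson
{
    public string Name { get; private set; }
    public event EventHandler NameChanged;

    // Only the controller holds a concrete Person reference, so only it can reach this.
    public void SetName(string name)
    {
        Name = name;
        EventHandler handler = NameChanged;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}
Clients are handed IReadOnlyPerson, the controller keeps the Person, and the event is visible to both.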
A: I like to have a read-only interface. Then the builder/controller/whatever can reference the object directly, but when you expose this object to the outside you show only the interface.
A: Use an interface IPerson and a nested class:
public class Creator
{
    private class Person : IPerson
    {
        public string Name { get; set; }
    }

    public IPerson Create(...) ...

    public void Modify(IPerson person, ...)
    {
        Person dude = person as Person;
        if (dude == null)
        {
            // wasn't created by this class.
        }
        else
        {
            // update the data.
        }
    }
}
A: I think internal is the least complex and best approach (this of course involves multiple assemblies). Short of doing some overhead intensive stack walking to determine the caller in the property setter you could try:
interface IPerson
{
    string Name { get; set; }
}
and implement this interface explicitly:
class Person : IPerson
{
    public string Name { get; private set; }
    string IPerson.Name { get { return Name; } set { Name = value; } }
}
then perform explicit interface casts in your builder for setting properties. This still doesn't protect your implementation and isn't a good solution, though it does go some way toward emphasizing your intention.
In your property setters you'll have to implement an event notification. Approaching this problem myself I would not create separate events and event handlers for each property but instead create a single PropertyChanged event and fire it in each property when a change occurs (where the event arguments would include the property name, old value, and new value).
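A sketch of that single-event idea (the event-args type is invented here for illustration; it is deliberately not the BCL PropertyChangedEventArgs, which carries no old/new values):
public class ValueChangedEventArgs : EventArgs
{
    public string PropertyName { get; private set; }
    public object OldValue { get; private set; }
    public object NewValue { get; private set; }

    public ValueChangedEventArgs(string propertyName, object oldValue, object newValue)
    {
        PropertyName = propertyName;
        OldValue = oldValue;
        NewValue = newValue;
    }
}

public class Person
{
    private string name;
    public event EventHandler<ValueChangedEventArgs> PropertyChanged;

    public string Name
    {
        get { return name; }
        set
        {
            // Capture old and new values before mutating, then notify.
            ValueChangedEventArgs args = new ValueChangedEventArgs("Name", name, value);
            name = value;
            EventHandler<ValueChangedEventArgs> handler = PropertyChanged;
            if (handler != null) handler(this, args);
        }
    }
}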
A: Seems odd that, though I cannot change the name of the Person object, I can simply grab its controller and change it there. That's not a good way to secure your object's data.
But, notwithstanding, here's a way to do it:
/// <summary>
/// A controlled person. Not production worthy code.
/// </summary>
public class Person
{
private string _name;
public string Name
{
get { return _name; }
private set
{
_name = value;
OnNameChanged();
}
}
/// <summary>
/// This person's controller
/// </summary>
public PersonController Controller
{
get { return _controller ?? (_controller = new PersonController(this)); }
}
private PersonController _controller;
/// <summary>
/// Fires when <seealso cref="Name"/> changes. Go get the new name yourself.
/// </summary>
public event EventHandler NameChanged;
private void OnNameChanged()
{
if (NameChanged != null)
NameChanged(this, EventArgs.Empty);
}
/// <summary>
/// A Person controller.
/// </summary>
public class PersonController
{
Person _slave;
public PersonController(Person slave)
{
_slave = slave;
}
/// <summary>
/// Sets the name on the controlled person.
/// </summary>
/// <param name="name">The name to set.</param>
public void SetName(string name) { _slave.Name = name; }
}
}
A: Maybe something like this?
public class Person
{
public class Editor
{
private readonly Person person;
public Editor(Person p)
{
person = p;
}
public void SetName(string name)
{
person.name = name;
}
public static Person Create(string name)
{
return new Person(name);
}
}
protected string name;
public string Name
{
get { return this.name; }
}
protected Person(string name)
{
this.name = name;
}
}
Person p = Person.Editor.Create("John");
Person.Editor e = new Person.Editor(p);
e.SetName("Jane");
Not pretty, but I think it works. Alternatively you can use properties instead of SetX methods on the editor.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How best to alleviate scenarios that trigger non-incremental linking (MSVS) While incremental linking addresses much of the time spent linking, even for very large projects, I find the incremental linker in MSVS to be pretty haphazard. (I'm using 2003 at the moment; I'd love to hear if 2005/8 addressed any of this.) My list of known triggers includes:
*
*changing anything external to the main .exe project will trigger a full link
*adding static variables had a 50% chance of triggering a full link
and this list is certainly not exhaustive. What can I do to avoid full links?
So far, the only diagnostic tool I've found is
*
*/test in the linker command line options
and it's terrible. What solutions are out there for diagnosing triggers for full re-links?
A: Minimizing the number of projects in your solution makes the problem a little better. And of course all the normal build speed-ups will help, like reducing includes and shrinking .obj file sizes.
A: I'm using 2008, and while I have only used it for small-to-medium sized projects, so far I haven't experienced any unexpected full links.
I haven't used 03, but in my opinion 08 seems to be far better than 05.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Change .xla File with MSBuild I'm trying to create a build script for my current project, which includes an Excel Add-in. The Add-in contains a VBProject with a file modGlobal with a variable version_Number. This number needs to be changed for every build. The exact steps:
*
*Open XLA document with Excel.
*Switch to VBEditor mode. (Alt+F11)
*Open VBProject, entering a password.
*Open modGlobal file.
*Change variable's default value to the current date.
*Close & save the project.
I'm at a loss for how to automate the process. The best I can come up with is an Excel macro or AutoIt script. I could also write a custom MSBuild task, but that might get... tricky. Does anyone else have any other suggestions?
A: An alternative way of handling versioning of an XLA file is to use a custom property in Document Properties. You can access and manipulate using COM as described here: http://support.microsoft.com/?kbid=224351.
Advantages of this are:
*
*You can examine the version number without opening the XLA file
*You don't need Excel on your build machine - only the DsoFile.dll component
Another alternative would be to store the version number (possibly other configuration data too) on a worksheet in the XLA file. The worksheet would not be visible to users of the XLA. One technique I have used in the past is to store the add-in as an XLS file in source control, then as part of the build process (e.g. in a Post-Build event) run the script below to convert it to an XLA in the output directory. This script could be easily extended to update a version number in a worksheet before saving. In my case I did this because my Excel Add-in used VSTO, and Visual Studio doesn't support XLA files directly.
'
' ConvertToXla.vbs
'
' VBScript to convert an Excel spreadsheet (.xls) into an Excel Add-In (.xla)
'
' The script takes two arguments:
'
' - the name of the input XLS file.
'
' - the name of the output XLA file.
'
Option Explicit
Dim nResult
On Error Resume Next
nResult = DoAction
If Err.Number <> 0 Then
Wscript.Echo Err.Description
Wscript.Quit 1
End If
Wscript.Quit nResult
Private Function DoAction()
Dim sInputFile, sOutputFile
Dim argNum, argCount: argCount = Wscript.Arguments.Count
If argCount < 2 Then
Err.Raise 1, "ConvertToXla.vbs", "Missing argument"
End If
sInputFile = WScript.Arguments(0)
sOutputFile = WScript.Arguments(1)
Dim xlApplication
Set xlApplication = WScript.CreateObject("Excel.Application")
On Error Resume Next
ConvertFileToXla xlApplication, sInputFile, sOutputFile
If Err.Number <> 0 Then
Dim nErrNumber
Dim sErrSource
Dim sErrDescription
nErrNumber = Err.Number
sErrSource = Err.Source
sErrDescription = Err.Description
xlApplication.Quit
Err.Raise nErrNumber, sErrSource, sErrDescription
Else
xlApplication.Quit
End If
End Function
Public Sub ConvertFileToXla(xlApplication, sInputFile, sOutputFile)
Dim xlAddIn
xlAddIn = 18 ' XlFileFormat.xlAddIn
Dim w
Set w = xlApplication.Workbooks.Open(sInputFile,,,,,,,,,True)
w.IsAddIn = True
w.SaveAs sOutputFile, xlAddIn
w.Close False
End Sub
A: I'm not 100% sure how to do exactly what you have requested, but guessing at the goal you have in mind, there are a few possibilities.
1) Make part (or all) of your globals a separate text file that is distributed with the .XLA. I would use this for external references such as the version of the rest of your app. Write it at build time and distribute it, and read it when the XLA loads.
2) I'm guessing you're writing the version of the main component (i.e. the non-XLA part) of your application. If this is true, why store it in your XLA? Why not have the main part of the app accept certain versions of the XLA? Version 1.1 of the main app could accept calls from versions 7.1-8.9 of the XLA.
3) If you are just looking to update the XLA so it gets included in your version control system or similar (I'm guessing here), maybe just touch the file so it looks like it changed.
If it's the version of the rest of the app that you are controlling, I'd just stick it in a text file and distribute that along with the XLA.
A: You can modify the code in the XLA programmatically from within Excel. You will need a reference to the 'Microsoft Visual Basic for Applications Extensibility' component.
The examples on Chip Pearson's excellent site should get you started.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How to properly handle exceptions when performing file io Often I find myself interacting with files in some way but after writing the code I'm always uncertain how robust it actually is. The problem is that I'm not entirely sure how file related operations can fail and, therefore, the best way to handle exceptions.
The simple solution would seem to be just to catch any IOExceptions thrown by the code and give the user an "Inaccessible file" error message, but is it possible to get a bit more fine-grained error messages? Is there a way to determine the difference between such errors as a file being locked by another program and the data being unreadable due to a hardware error?
Given the following C# code, how would you handle errors in a user friendly (as informative as possible) way?
public class IO
{
public List<string> ReadFile(string path)
{
FileInfo file = new FileInfo(path);
if (!file.Exists)
{
throw new FileNotFoundException();
}
StreamReader reader = file.OpenText();
List<string> text = new List<string>();
while (!reader.EndOfStream)
{
text.Add(reader.ReadLine());
}
reader.Close();
reader.Dispose();
return text;
}
public void WriteFile(List<string> text, string path)
{
FileInfo file = new FileInfo(path);
if (!file.Exists)
{
throw new FileNotFoundException();
}
StreamWriter writer = file.CreateText();
foreach(string line in text)
{
writer.WriteLine(line);
}
writer.Flush();
writer.Close();
writer.Dispose();
}
}
A: The first thing you should change are your calls to StreamWriter and StreamReader to wrap them in a using statement, like this:
using (StreamReader reader = file.OpenText())
{
List<string> text = new List<string>();
while (!reader.EndOfStream)
{
text.Add(reader.ReadLine());
}
}
This will take care of calling Close and Dispose for you and will actually wrap it in a try/finally block so the actual compiled code looks like this:
StreamReader reader = file.OpenText();
try
{
List<string> text = new List<string>();
while (!reader.EndOfStream)
{
text.Add(reader.ReadLine());
}
}
finally
{
if (reader != null)
((IDisposable)reader).Dispose();
}
The benefit here is that you ensure the stream gets closed even if an exception occurs.
As far as any more explicit exception handling, it really depends on what you want to happen. In your example you explicitly test if the file exists and throw a FileNotFoundException which may be enough for your users but it may not.
A:
...but is it possible to get a bit more fine-grained error messages.
Yes. Go ahead and catch IOException, and use the Exception.ToString() method to get a relatively relevant error message to display. Note that the exceptions generated by the .NET Framework will supply these useful strings, but if you are going to throw your own exception, you must remember to plug in that string into the Exception's constructor, like:
throw new FileNotFoundException("File not found");
Also, absolutely, as per Scott Dorman, use that using statement. The thing to notice, though, is that the using statement doesn't actually catch anything, which is the way it ought to be. Your test to see if the file exists, for instance, will introduce a race condition that may be rather vexing. It doesn't really do you any good to have it in there. So, now, for the reader we have:
try {
using (StreamReader reader = file.OpenText()) {
// Your processing code here
}
} catch (IOException e) {
UI.AlertUserSomehow(e.ToString());
}
In short, for basic file operations:
1. Use using.
2. Wrap the using statement or function in a try/catch that catches IOException.
3. Use Exception.ToString() in your catch to get a useful error message.
4. Don't try to detect exceptional file issues yourself. Let .NET do the throwing for you.
A: *
*Skip the File.Exists(); either handle it elsewhere or let CreateText()/OpenText() raise it.
*The end-user usually only cares whether it succeeds or not. If it fails, just say so; they don't want details.
I haven't found a built-in way to get details about what and why something failed in .NET, but if you go native with CreateFile you have thousands of error-codes that can tell you what went wrong.
A: I don't see the point in checking for existence of a file and throwing a FileNotFoundException with no message. The framework will throw the FileNotFoundException itself, with a message.
Another problem with your example is that you should be using the try/finally pattern or the using statement to ensure your disposable classes are properly disposed even when there is an exception.
I would do something like the following: catch any exception outside the method, and display the exception's message:
public IList<string> ReadFile(string path)
{
List<string> text = new List<string>();
using(StreamReader reader = new StreamReader(path))
{
while (!reader.EndOfStream)
{
text.Add(reader.ReadLine());
}
}
return text;
}
A: I would use the using statement to simplify closing the file. See MSDN the C# using statement
From MSDN:
using (TextWriter w = File.CreateText("log.txt")) {
w.WriteLine("This is line one");
w.WriteLine("This is line two");
}
using (TextReader r = File.OpenText("log.txt")) {
string s;
while ((s = r.ReadLine()) != null) {
Console.WriteLine(s);
}
}
A: Perhaps this is not what you are looking for, but reconsider the kind of exception handling you are using. Above all, exception handling should not be treated as "user-friendly", at least as long as you think of a programmer as the user.
A brief summary can be found in the following article: http://goit-postal.blogspot.com/2007/03/brief-introduction-to-exception.html .
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
}
|
Q: How to check if a String contains another String in a case insensitive manner in Java? Say I have two strings,
String s1 = "AbBaCca";
String s2 = "bac";
I want to perform a check returning that s2 is contained within s1. I can do this with:
return s1.contains(s2);
I am pretty sure that contains() is case-sensitive; however, I can't determine this for sure from reading the documentation. If it is, then I suppose my best method would be something like:
return s1.toLowerCase().contains(s2.toLowerCase());
All this aside, is there another (possibly better) way to accomplish this without caring about case-sensitivity?
A: You can use regular expressions, and it works:
boolean found = s1.matches("(?i).*" + s2+ ".*");
A: Here are some Unicode-friendly ones you can make if you pull in ICU4J. I guess "ignore case" is questionable for the method names because, although primary-strength comparisons do ignore case, the specifics are described as locale-dependent. But it's hopefully locale-dependent in a way the user would expect.
// Uses com.ibm.icu.text.Collator and com.ibm.icu.text.StringSearch from ICU4J.
public static boolean containsIgnoreCase(String haystack, String needle) {
return indexOfIgnoreCase(haystack, needle) >= 0;
}
public static int indexOfIgnoreCase(String haystack, String needle) {
StringSearch stringSearch = new StringSearch(needle, haystack);
stringSearch.getCollator().setStrength(Collator.PRIMARY);
return stringSearch.first();
}
A: A simpler way of doing this (without worrying about pattern matching) would be converting both Strings to lowercase:
String foobar = "fooBar";
String bar = "FOO";
if (foobar.toLowerCase().contains(bar.toLowerCase())) {
System.out.println("It's a match!");
}
A: Yes, contains is case sensitive. You can use java.util.regex.Pattern with the CASE_INSENSITIVE flag for case insensitive matching:
Pattern.compile(Pattern.quote(wantedStr), Pattern.CASE_INSENSITIVE).matcher(source).find();
EDIT: If s2 contains regex special characters (of which there are many) it's important to quote it first. I've corrected my answer since it is the first one people will see, but vote up Matt Quail's since he pointed this out.
A: I did a test finding a case-insensitive match of a string. I have a Vector of 150,000 objects all with a String as one field and wanted to find the subset which matched a string. I tried three methods:
*
*Convert all to lower case
for (SongInformation song: songs) {
    if (song.artist.toLowerCase().indexOf(pattern.toLowerCase()) > -1) {
        ...
    }
}
*Use the String matches() method
for (SongInformation song: songs) {
if (song.artist.matches("(?i).*" + pattern + ".*")) {
...
}
}
*Use regular expressions
Pattern p = Pattern.compile(pattern, Pattern.CASE_INSENSITIVE);
Matcher m = p.matcher("");
for (SongInformation song: songs) {
m.reset(song.artist);
if (m.find()) {
...
}
}
Timing results are:
*
*No attempted match: 20 msecs
*To lower match: 182 msecs
*String matches: 278 msecs
*Regular expression: 65 msecs
The regular expression looks to be the fastest for this use case.
A: There is a simple, concise way, using the regex case-insensitivity flag (?i):
String s1 = "hello abc efg";
String s2 = "ABC";
s1.matches(".*(?i)"+s2+".*");
/*
* .* denotes every character except line break
* (?i) denotes case insensitivity flag enabled for s2 (String)
* */
A: One problem with the answer by Dave L. is when s2 contains regex markup such as \d, etc.
You want to call Pattern.quote() on s2:
Pattern.compile(Pattern.quote(s2), Pattern.CASE_INSENSITIVE).matcher(s1).find();
A: You can use
org.apache.commons.lang3.StringUtils.containsIgnoreCase("AbBaCca", "bac");
The Apache Commons library is very useful for this sort of thing. And this particular one may be better than regular expressions as regex is always expensive in terms of performance.
A: "AbCd".toLowerCase().contains("abcD".toLowerCase())
A: Yes, this is achievable:
String s1 = "abBaCca";
String s2 = "bac";
String s1Lower = s1;
// s1Lower is the exact same string; now convert it to lowercase (s1 left intact for printing if needed)
s1Lower = s1Lower.toLowerCase();
String trueStatement = "FALSE!";
if (s1Lower.contains(s2)) {
    // THIS statement will be TRUE
    trueStatement = "TRUE!";
}
return trueStatement;
This code will return the String "TRUE!" as it found that your characters were contained.
A: A Faster Implementation: Utilizing String.regionMatches()
Using regexp can be relatively slow. It (being slow) doesn't matter if you just want to check in one case. But if you have an array or a collection of thousands or hundreds of thousands of strings, things can get pretty slow.
The presented solution below doesn't use regular expressions nor toLowerCase() (which is also slow because it creates another strings and just throws them away after the check).
The solution builds on the String.regionMatches() method which seems to be unknown. It checks if 2 String regions match, but what's important is that it also has an overload with a handy ignoreCase parameter.
public static boolean containsIgnoreCase(String src, String what) {
final int length = what.length();
if (length == 0)
return true; // Empty string is contained
final char firstLo = Character.toLowerCase(what.charAt(0));
final char firstUp = Character.toUpperCase(what.charAt(0));
for (int i = src.length() - length; i >= 0; i--) {
// Quick check before calling the more expensive regionMatches() method:
final char ch = src.charAt(i);
if (ch != firstLo && ch != firstUp)
continue;
if (src.regionMatches(true, i, what, 0, length))
return true;
}
return false;
}
Speed Analysis
This speed analysis is not meant to be rocket science, just a rough picture of how fast the different methods are.
I compare 5 methods.
*
*Our containsIgnoreCase() method.
*By converting both strings to lower-case and calling String.contains().
*By converting the source string to lower-case and calling String.contains() with a pre-cached, lower-cased substring. This solution is already not as flexible because it tests a predefined substring.
*Using a regular expression (the accepted answer's Pattern.compile().matcher().find()...).
*Using a regular expression but with a pre-created and cached Pattern. This solution is already not as flexible because it tests a predefined substring.
Results (by calling the method 10 million times):
*
*Our method: 670 ms
*2x toLowerCase() and contains(): 2829 ms
*1x toLowerCase() and contains() with cached substring: 2446 ms
*Regexp: 7180 ms
*Regexp with cached Pattern: 1845 ms
Results in a table:
                                              RELATIVE SPEED   1/RELATIVE SPEED
   METHOD                          EXEC TIME  TO SLOWEST       TO FASTEST (#1)
   -----------------------------------------------------------------------------
   1. Using regionMatches()          670 ms      10.7x              1.0x
   2. 2x lowercase+contains         2829 ms       2.5x              4.2x
   3. 1x lowercase+contains cache   2446 ms       2.9x              3.7x
   4. Regexp                        7180 ms       1.0x             10.7x
   5. Regexp+cached pattern         1845 ms       3.9x              2.8x
Our method is 4x faster compared to lowercasing and using contains(), 10x faster compared to using regular expressions and also 3x faster even if the Pattern is pre-cached (and losing flexibility of checking for an arbitrary substring).
Analysis Test Code
If you're interested how the analysis was performed, here is the complete runnable application:
import java.util.regex.Pattern;
public class ContainsAnalysis {
// Case 1 utilizing String.regionMatches()
public static boolean containsIgnoreCase(String src, String what) {
final int length = what.length();
if (length == 0)
return true; // Empty string is contained
final char firstLo = Character.toLowerCase(what.charAt(0));
final char firstUp = Character.toUpperCase(what.charAt(0));
for (int i = src.length() - length; i >= 0; i--) {
// Quick check before calling the more expensive regionMatches()
// method:
final char ch = src.charAt(i);
if (ch != firstLo && ch != firstUp)
continue;
if (src.regionMatches(true, i, what, 0, length))
return true;
}
return false;
}
// Case 2 with 2x toLowerCase() and contains()
public static boolean containsConverting(String src, String what) {
return src.toLowerCase().contains(what.toLowerCase());
}
// The cached substring for case 3
private static final String S = "i am".toLowerCase();
// Case 3 with pre-cached substring and 1x toLowerCase() and contains()
public static boolean containsConverting(String src) {
return src.toLowerCase().contains(S);
}
// Case 4 with regexp
public static boolean containsIgnoreCaseRegexp(String src, String what) {
return Pattern.compile(Pattern.quote(what), Pattern.CASE_INSENSITIVE)
.matcher(src).find();
}
// The cached pattern for case 5
private static final Pattern P = Pattern.compile(
Pattern.quote("i am"), Pattern.CASE_INSENSITIVE);
// Case 5 with pre-cached Pattern
public static boolean containsIgnoreCaseRegexp(String src) {
return P.matcher(src).find();
}
// Main method: performs speed analysis on different contains methods
// (case ignored)
public static void main(String[] args) throws Exception {
final String src = "Hi, I am Adam";
final String what = "i am";
long start, end;
final int N = 10_000_000;
start = System.nanoTime();
for (int i = 0; i < N; i++)
containsIgnoreCase(src, what);
end = System.nanoTime();
System.out.println("Case 1 took " + ((end - start) / 1000000) + "ms");
start = System.nanoTime();
for (int i = 0; i < N; i++)
containsConverting(src, what);
end = System.nanoTime();
System.out.println("Case 2 took " + ((end - start) / 1000000) + "ms");
start = System.nanoTime();
for (int i = 0; i < N; i++)
containsConverting(src);
end = System.nanoTime();
System.out.println("Case 3 took " + ((end - start) / 1000000) + "ms");
start = System.nanoTime();
for (int i = 0; i < N; i++)
containsIgnoreCaseRegexp(src, what);
end = System.nanoTime();
System.out.println("Case 4 took " + ((end - start) / 1000000) + "ms");
start = System.nanoTime();
for (int i = 0; i < N; i++)
containsIgnoreCaseRegexp(src);
end = System.nanoTime();
System.out.println("Case 5 took " + ((end - start) / 1000000) + "ms");
}
}
A: I'm not sure what your main question is here, but yes, .contains is case sensitive.
A: String container = " Case SeNsitive ";
String sub = "sen";
if (rcontains(container, sub)) {
System.out.println("no case");
}
public static Boolean rcontains(String container, String sub) {
Boolean b = false;
for (int a = 0; a < container.length() - sub.length() + 1; a++) {
//System.out.println(sub + " to " + container.substring(a, a+sub.length()));
if (sub.equalsIgnoreCase(container.substring(a, a + sub.length()))) {
b = true;
}
}
return b;
}
Basically, it is a method that takes two strings. It is supposed to be a case-insensitive version of contains(). When using the contains method, you want to see if one string is contained in the other.
This method takes the "sub" string and checks whether it is equal to any substring of the container string of the same length as "sub". If you look at the for loop, you will see that it iterates over the container string in substrings the length of "sub".
Each iteration checks to see if the substring of the container string is equalsIgnoreCase to the sub.
A: If you have to search an ASCII string within another ASCII string, such as a URL, you will find my solution to be better. I've tested icza's method and mine for speed, and here are the results:
*
*Case 1 took 2788 ms - regionMatches
*Case 2 took 1520 ms - mine
The code:
public static String lowerCaseAscii(String s) {
if (s == null)
return null;
int len = s.length();
char[] buf = new char[len];
s.getChars(0, len, buf, 0);
for (int i=0; i<len; i++) {
if (buf[i] >= 'A' && buf[i] <= 'Z')
buf[i] += 0x20;
}
return new String(buf);
}
public static boolean containsIgnoreCaseAscii(String str, String searchStr) {
return StringUtils.contains(lowerCaseAscii(str), lowerCaseAscii(searchStr));
}
A: import java.text.Normalizer;
import org.apache.commons.lang3.StringUtils;
public class ContainsIgnoreCase {
public static void main(String[] args) {
String in = " Annulée ";
String key = "annulee";
// 100% java
if (Normalizer.normalize(in, Normalizer.Form.NFD).replaceAll("[\\p{InCombiningDiacriticalMarks}]", "").toLowerCase().contains(key)) {
System.out.println("OK");
} else {
System.out.println("KO");
}
// use commons.lang lib
if (StringUtils.containsIgnoreCase(Normalizer.normalize(in, Normalizer.Form.NFD).replaceAll("[\\p{InCombiningDiacriticalMarks}]", ""), key)) {
System.out.println("OK");
} else {
System.out.println("KO");
}
}
}
A: We can use a Java 8 stream with anyMatch and contains:
public class Test2 {
public static void main(String[] args) {
String a = "Gina Gini Protijayi Soudipta";
String b = "Gini";
System.out.println(WordPresentOrNot(a, b));
}// main
private static boolean WordPresentOrNot(String a, String b) {
// contains is case-sensitive, so convert both to lower (or upper) case before checking.
// Here we are using a stream with anyMatch.
boolean match = Arrays.stream(a.toLowerCase().split(" ")).anyMatch(b.toLowerCase()::contains);
return match;
}
}
A: Or you can use a simple approach: just convert the string's case to the substring's case and then use the contains method.
A: You could simply do something like this:
String s1 = "AbBaCca";
String s2 = "bac";
String toLower = s1.toLowerCase();
return toLower.contains(s2);
A: String x="abCd";
System.out.println(Pattern.compile("c",Pattern.CASE_INSENSITIVE).matcher(x).find());
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "460"
}
|
Q: Is there a specific name for the node that coresponds to a subtree? I'm designing a web site navigation hierarchy. It's a tree of nodes. Nodes represent web pages.
Some nodes on the tree are special. I need a name for them.
There are multiple such nodes. Each is the "root" of a sub-tree with pages that have a distinct logo, style sheet, or layout. Think of different departments.
site map with color-coded sub-trees http://img518.imageshack.us/img518/153/subtreesfe1.gif
What should I name this type of node?
A: How about Root (a node with children but no parent), Node (a node with children and a parent), and Leaf (a node with no children and a parent)?
You can then distinguish by name and position within the tree structure (e.g. DepartmentRoot, DepartmentNode, DepartmentLeaf) if need be.
Update Following Comment from OP
Looking at your question, you said that "some" are special, and in your diagram you have nodes looking different at different levels. The nodes may differ in their design, and you can build a tree structure many ways. For example, a single abstract class that can have child nodes: if it has no children it's a leaf, if it has no parent it's a root, but this can change during its lifetime. Or, a fixed class structure in which leaves are a specific class type that cannot have children added to them in any way.
If your design does not need you to distinguish nodes differently depending on their position (relative to the root), it suggests that you have an abstract class used for them all.
In which case, it raises the question, how is it different?
If it is simply the same as the standard node everywhere else, but with a bit of styling, how about StyledNode? Do you even need it to be separate (no style == no big deal, it doesn't render)?
Since I don't know the mechanics of how the tree is architected, there could possibly be several factors to consider when naming.
A: The word you are looking for is "Section". It's part of a whole and has the same stuff inside.
So, you have Nodes, which have children and a parent, and you have SectionNodes which are the roots of these special subtrees.
A: How about PageTemplate to embody the fact that its children have their own layout, CSS etc?
A: AreaNode
A: So, it sounds like you are gathering categories. The nodes are the entry points of these categories. How about "TopCategoryNode", then "CategoryEntry" for something that is below them? Or, if you want to divide more, something like "CategoryCSS", "CategoryLayout" etc.?
This is kind of generic, but you make clear that there are "categories", and that these do consist of more than one subnode, or subtheme.
A: Branch?
Keeps the tree analogy and also hints, in this case, at departments etc.
Thinking about class hierarchies, Root is probably a special case of Branch, which is a special case of Node, which is a special case of Leaf. The Branch/Node distinction is one you get to make for your special situation.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86790",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to avoid thousands of needless ListView.SelectedIndexChanged events? If a user select all items in a .NET 2.0 ListView, the ListView will fire a SelectedIndexChanged event for every item, rather than firing an event to indicate that the selection has changed.
If the user then clicks to select just one item in the list, the ListView will fire a SelectedIndexChanged event for every item that is getting unselected, and then an SelectedIndexChanged event for the single newly selected item, rather than firing an event to indicate that the selection has changed.
If you have code in the SelectedIndexChanged event handler, the program will become pretty unresponsive when you begin to have a few hundred/thousand items in the list.
I've thought about dwell timers, etc.
But does anyone have a good solution to avoid thousands of needless ListView.SelectedIndexChange events, when really one event will do?
A: This is the dwell timer solution i'm using for now (dwell just means "wait for a little bit"). This code might suffer from a race condition, and perhaps a null reference exception.
Timer changeDelayTimer = null;
private void lvResults_SelectedIndexChanged(object sender, EventArgs e)
{
if (this.changeDelayTimer == null)
{
this.changeDelayTimer = new Timer();
this.changeDelayTimer.Tick += ChangeDelayTimerTick;
this.changeDelayTimer.Interval = 200; //200ms is what Explorer uses
}
this.changeDelayTimer.Enabled = false;
this.changeDelayTimer.Enabled = true;
}
private void ChangeDelayTimerTick(object sender, EventArgs e)
{
this.changeDelayTimer.Enabled = false;
this.changeDelayTimer.Dispose();
this.changeDelayTimer = null;
//Add original SelectedIndexChanged event handler code here
//todo
}
A: Old question I know, but this still seems to be an issue.
Here is my solution not using timers.
It waits for the MouseUp or KeyUp event before firing the SelectionChanged event.
If you are changing the selection programmatically, then this will not work and the event won't fire, but you could easily add a FinishedChanging event or something to trigger the event.
(It also has some stuff to stop flickering which isn't relevant to this question).
public class ListViewNF : ListView
{
bool SelectedIndexChanging = false;
public ListViewNF()
{
this.SetStyle(ControlStyles.OptimizedDoubleBuffer | ControlStyles.AllPaintingInWmPaint, true);
this.SetStyle(ControlStyles.EnableNotifyMessage, true);
}
protected override void OnNotifyMessage(Message m)
{
if(m.Msg != 0x14) // 0x14 is WM_ERASEBKGND; skipping it reduces flicker
base.OnNotifyMessage(m);
}
protected override void OnSelectedIndexChanged(EventArgs e)
{
SelectedIndexChanging = true;
//base.OnSelectedIndexChanged(e);
}
protected override void OnMouseUp(MouseEventArgs e)
{
if (SelectedIndexChanging)
{
base.OnSelectedIndexChanged(EventArgs.Empty);
SelectedIndexChanging = false;
}
base.OnMouseUp(e);
}
protected override void OnKeyUp(KeyEventArgs e)
{
if (SelectedIndexChanging)
{
base.OnSelectedIndexChanged(EventArgs.Empty);
SelectedIndexChanging = false;
}
base.OnKeyUp(e);
}
}
A: Good solution from Ian. I took that and made it into a reusable class, making sure to dispose of the timer properly. I also reduced the interval to get a more responsive app. This control also double-buffers to reduce flicker.
public class DoublebufferedListView : System.Windows.Forms.ListView
{
private Timer m_changeDelayTimer = null;
public DoublebufferedListView()
: base()
{
// Set common properties for our listviews
if (!SystemInformation.TerminalServerSession)
{
DoubleBuffered = true;
SetStyle(ControlStyles.ResizeRedraw, true);
}
}
/// <summary>
/// Make sure to properly dispose of the timer
/// </summary>
/// <param name="disposing"></param>
protected override void Dispose(bool disposing)
{
if (disposing && m_changeDelayTimer != null)
{
m_changeDelayTimer.Tick -= ChangeDelayTimerTick;
m_changeDelayTimer.Dispose();
}
base.Dispose(disposing);
}
/// <summary>
/// Hack to avoid lots of unnecessary change events by marshaling with a timer:
/// http://stackoverflow.com/questions/86793/how-to-avoid-thousands-of-needless-listview-selectedindexchanged-events
/// </summary>
/// <param name="e"></param>
protected override void OnSelectedIndexChanged(EventArgs e)
{
if (m_changeDelayTimer == null)
{
m_changeDelayTimer = new Timer();
m_changeDelayTimer.Tick += ChangeDelayTimerTick;
m_changeDelayTimer.Interval = 40;
}
// When a new SelectedIndexChanged event arrives, disable, then enable the
// timer, effectively resetting it, so that after the last one in a batch
// arrives, there is at least 40 ms before we react, plenty of time
// to wait any other selection events in the same batch.
m_changeDelayTimer.Enabled = false;
m_changeDelayTimer.Enabled = true;
}
private void ChangeDelayTimerTick(object sender, EventArgs e)
{
m_changeDelayTimer.Enabled = false;
base.OnSelectedIndexChanged(new EventArgs());
}
}
Do let me know if this can be improved.
A: The timer is the best overall solution.
A problem with Jens's suggestion is that once the list has a lot of selected items (thousands or more), getting the list of selected items starts to take a long time.
Instead of creating a timer object every time a SelectedIndexChanged event occurs, it's simpler to just put a permanent one on the form with the designer, and have it check a boolean variable in the class to see whether or not it should call the updating function.
For example:
bool timer_event_should_call_update_controls = false;
private void lvwMyListView_SelectedIndexChanged(object sender, EventArgs e) {
timer_event_should_call_update_controls = true;
}
private void UpdateControlsTimer_Tick(object sender, EventArgs e) {
if (timer_event_should_call_update_controls) {
timer_event_should_call_update_controls = false;
update_controls();
}
}
This works fine if you're using the information simply for display purposes, such as updating a status bar to say "X out of Y selected".
A: A flag works for the OnLoad event of the windows form / web form / mobile form.
In a single-select ListView (not multi-select), the following code is simple to implement and prevents multiple firings of the event.
As the ListView de-selects the first item, the second item is what you need, and the collection should only ever contain one item.
The code below was used in a mobile application, so some of the collection names might be different as it is using the Compact Framework; however, the same principles apply.
Note: Make sure that on load, after populating the ListView, you set the first item to be selected.
// ################ CODE STARTS HERE ################
//Flag to create at the form level
System.Boolean lsvLoadFlag = true;
//Make sure to set the flag to true at the beginning of the form load, and back to false afterwards
private void frmMain_Load(object sender, EventArgs e)
{
//Prevent the listview from firing like crazy in a single-select (NOT multi-select) environment
lsvLoadFlag = true;
//DO SOME CODE....
//Enable the listview to process events
lsvLoadFlag = false;
}
//Populate First then this line of code
lsvMain.Items[0].Selected = true;
//SelectedIndexChanged Event
private void lsvMain_SelectedIndexChanged(object sender, EventArgs e)
{
ListViewItem lvi = null;
if (!lsvLoadFlag)
{
if (this.lsvMain.SelectedIndices != null)
{
if (this.lsvMain.SelectedIndices.Count == 1)
{
lvi = this.lsvMain.Items[this.lsvMain.SelectedIndices[0]];
}
}
}
}
// ################ CODE END HERE ################
Ideally, this code should be put into a UserControl for easy re-use and distribution in a single-select ListView. This code would not be much use in a multi-select, as the event works as it should for that behavior.
I hope that helps.
Kind regards,
Anthony N. Urwin
http://www.manatix.com
A: You can use async & await:
private bool waitForUpdateControls = false;
private async void listView_SelectedIndexChanged(object sender, EventArgs e)
{
// To avoid thousands of needless ListView.SelectedIndexChanged events.
if (waitForUpdateControls)
{
return;
}
waitForUpdateControls = true;
await Task.Delay(100);
waitForUpdateControls = false;
UpdateControls();
return;
}
A: I would either tie the postback to a button to allow the user to submit their changes, or unhook the event handler.
A: I was just trying to tackle this very problem yesterday. I don't know exactly what you mean by "dwell" timers, but I tried implementing my very own version of waiting until all changes are done. Unfortunately, the only way I could think of to do this was in a separate thread, and it turns out that when you create a separate thread, your UI elements are inaccessible in that thread. .NET throws an exception stating that the UI elements can only be accessed in the thread where they were created! So, I found a way to optimize my response to the SelectedIndexChanged event and make it fast enough to be bearable - it's not a scalable solution though. Let's hope someone has a clever idea to tackle this problem in a single thread.
A: Maybe this can help you to accomplish what you need without using timers:
http://www.dotjem.com/archive/2009/06/19/20.aspx
I don't like the use of timers etc., as I also state in the post...
Hope it helps...
Ohh, I forgot to say: it's .NET 3.5, and I am using some of the features in LINQ to accomplish "Selection Changes Evaluation", if you can call it that o.O...
Anyway, if you are on an older version, this evaluation has to be done with a bit more code... >.<...
A: I recommend virtualizing your list view if it has a few hundred or thousand items.
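For instance, a minimal virtual-mode sketch (listView1 and myData are placeholder names, not from the original post); note that in virtual mode a range selection raises a single VirtualItemsSelectionRangeChanged event instead of one SelectedIndexChanged per item:
listView1.VirtualMode = true;
listView1.VirtualListSize = myData.Count;
listView1.RetrieveVirtualItem += new RetrieveVirtualItemEventHandler(OnRetrieveVirtualItem);
private void OnRetrieveVirtualItem(object sender, RetrieveVirtualItemEventArgs e)
{
    // Build each row on demand from your own backing store;
    // the control never materializes all the items at once.
    e.Item = new ListViewItem(myData[e.ItemIndex]);
}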
A: Maylon >>>
The aim was never to work with lists above a few hundred items, but...
I have tested the overall user experience with 10,000 items, and selections of 1000-5000 items at one time (and changes of 1000-3000 items in both Selected and Deselected)...
The overall duration of the calculation never exceeded 0.1 sec; some of the highest measurements were 0.04 sec. I found that perfectly acceptable with that many items.
And at 10,000 items, just initializing the list takes over 10 seconds, so at this point I would have thought other things come into play, such as virtualization, as Joe Chung points out.
That said, it should be clear that the code is not an optimal solution in how it calculates the difference in the selection; if needed this can be improved a lot and in various ways. I focused on the understanding of the concept with the code rather than the performance.
However, if you're experiencing degraded performance, I am very interested in some of the following:
*
*How many items in the list?
*How many selected/deselected elements at a time?
*How long does it roughly take for the event to raise?
*Hardware platform?
*More about the use case?
*Other relevant information you can think of?
Otherwise it ain't easy to help improving the solution.
A: Leave the ListView and all the old controls.
Make DataGridView your friend, and all will be well :)
A: Raymond Chen has a blog post that (probably) explains why there are thousands of change events, rather than just one:
Why is there an LVN_ODSTATECHANGED notification when there's already a perfectly good LVN_ITEMCHANGED notification?
...
The LVN_ODSTATECHANGED notification tells you that the state of all items in the specified range has changed. It's a shorthand for sending an individual LVN_ITEMCHANGED for all items in the range [iFrom..iTo]. If you have an ownerdata list view with 500,000 items and somebody does a select-all, you'll be glad that you get a single LVN_ODSTATECHANGED notification with iFrom=0 and iTo=499999 instead of a half million individual little LVN_ITEMCHANGED notifications.
I say probably explains why, because there's no guarantee that the .NET list view is a wrapper around the ListView common control - that's an implementation detail that is free to change at any time (although it almost certainly never will).
The hinted solution is to use the .NET list view in virtual mode, which makes the control an order of magnitude more difficult to use.
A: I may have a better solution.
My situation:
*
*Single select list view (rather than multi-select)
*I want to avoid processing the event when it fires for deselection of the previously selected item.
My solution:
*
*Record what item the user clicked on MouseDown
*Ignore the SelectedIndexChanged event if this item is not null and SelectedIndexes.Count == 0
Code:
ListViewItem ItemOnMouseDown = null;
private void lvTransactions_MouseDown(object sender, MouseEventArgs e)
{
ItemOnMouseDown = lvTransactions.GetItemAt(e.X, e.Y);
}
private void lvTransactions_SelectedIndexChanged(object sender, EventArgs e)
{
if (ItemOnMouseDown != null && lvTransactions.SelectedIndices.Count == 0)
return;
SelectedIndexDidReallyChange();
}
A: What worked for me was just using the OnClick event.
I just needed to get a single value and get out, and the first choice was fine, whether it was the same original value or a new one.
The click seems to occur after all of the selection changes are done, like the timer would do.
Click ensures a real click occurred rather than just a mouse up, though in practice it probably makes no difference unless they slid into the dropdown with the mouse down and released.
This worked for me because Click seems to fire only in the list-item-bearing client area, and I had no headers to click on.
I just had a plain single control popup dropdown. And I didn't have to worry about key movements selecting items. Any key movements on a property grid dropdown cancel the dropdown.
Trying to close in the middle of SelectedIndexChanged would often cause a crash as well, but closing during Click is fine.
The crashing thing was what caused me to look for alternatives and find this post.
void OnClick(object sender, EventArgs e)
{
if (this.isInitialize) // kind of pedantic
return;
if (this.SelectedIndices.Count > 0)
{
string value = this.SelectedItems[0].Tag as string; // Tag is object, so cast
if (value != null)
{
this.OutValue = value;
}
}
//NOTE: if this close is done in SelectedIndexChanged, will crash
// with corrupted memory error if an item was already selected
// Tell property grid to close the wrapper Form
var editorService = provider.GetService(typeof(IWindowsFormsEditorService)) as IWindowsFormsEditorService;
if ((object)editorService != null)
{
editorService.CloseDropDown();
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
}
|
Q: Connecting delegate classes in Objective-C I've got two controls in my Interface Builder file, and each of those controls I've created a separate delegate class for in code (Control1Delegate and Control2Delegate). I created two "Objects" in interface builder, made them of that type, and connected the controls to them as delegates. The delegates work just fine. My problem is, I need to share information from one delegate to the other delegate, and I'm not sure how.
What is the best way to do this? Combine the two delegates into one class, or somehow access a third class that they can both read? Since I'm not actually initializing the class anywhere in my code, I'm not sure how to get a reference to the actual instance of it (if there is an actual instance of it), or even access the "main" class that the project came with.
A: You can add outlets from either delegate to the other delegate. There are two ways to add an outlet to an object in IB (assuming you're using Xcode/IB version 3.0 or later:
*
*If you have not generated the code for your delegate classes yet, select the desired delegate, then open the "Object Identity" tab in the IB inspector. Add a "Class outlet" of type NSObject. You should then be able to set this new outlet to the other delegate. Of course you will have to generate the code for your delegate class and add the generated source files to your Xcode project before you can load the nib.
*If you've already generated the code for the delegate class (or added an NSObject to your NIB and set its Class to an existing class in your Xcode project), add an instance variable to the delegate class:
IBOutlet id outletToOtherDelegate;
As long as your Xcode project is open (as indicated by the green bubble in the lower-left of your NIB window), IB will automatically detect the new outlet and allow you to assign it to the other delegate object in your NIB.
Cocoa automatically connects these outlets at NIB load time. Once awakeFromNib is called on instances of your delegate objects, you may assume that all the other objects in the NIB have been instantiated and all outlets have been connected. You should not assume an order on calls to awakeFromNib, however.
A: I think you can create outlets on each one and cross-bind them so that they each have the same data all the time. If there's one model object they need to share, that's pretty tidy. I don't actually know how to do this; I think I saw it in an iPhone tutorial one time!
A: I don't have my Mac in front of me currently since I'm at work, but would it be possible to bind an instance of one delegate to a member of the other delegate? This would be similar to binding an NSArrayController to a member of another controller class, for example.
However, depending on what the delegate classes are doing, if the tasks are similar I would probably just combine them into once class. That would eliminate the problem altogether.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Is the syntax for the Wordpress style.css template element available anywhere? I've recently embarked upon the grand voyage of Wordpress theming and I've been reading through the Wordpress documentation for how to write a theme. One thing I came across here was that the style.css file must contain a specific header in order to be used by the Wordpress engine. They give a brief example but I haven't been able to turn up any formal description of what must be in the style.css header portion. Does this exist on the Wordpress site? If it doesn't could we perhaps describe it here?
A: Based on http://codex.wordpress.org/Theme_Development:
The following is an example of the first few lines of the stylesheet, called the style sheet header, for the Theme "Rose":
/*
Theme Name: Rose
Theme URI: the-theme's-homepage
Description: a-brief-description
Author: your-name
Author URI: your-URI
Template: use-this-to-define-a-parent-theme--optional
Version: a-number--optional
Tags: a-comma-delimited-list--optional
.
General comments/License Statement if any.
.
*/
The simplest Theme includes only a style.css file, plus images if any. To create such a Theme, you must specify a set of templates to inherit for use with the Theme by editing the Template: line in the style.css header comments. For example, if you wanted the Theme "Rose" to inherit the templates from another Theme called "test", you would include Template: test in the comments at the beginning of Rose's style.css. Now "test" is the parent Theme for "Rose", which still consists only of a style.css file and the concomitant images, all located in the directory wp-content/themes/Rose. (Note that specifying a parent Theme will inherit all of the template files from that Theme — meaning that any template files in the child Theme's directory will be ignored.)
The comment header lines in style.css are required for WordPress to be able to identify a Theme and display it in the Administration Panel under Design > Themes as an available Theme option along with any other installed Themes.
The Theme Name, Version, Author, and Author URI fields are parsed by WordPress and used to display that data in the Current Theme area on the top line of the current theme information, where the Author's Name is hyperlinked to the Author URI. The Description and Tag fields are parsed and displayed in the body of the theme's information, and if the theme has a parent theme, that information is placed in the information body as well. In the Available Themes section, only the Theme Name, Description, and Tags fields are used.
None of these fields have any restrictions - all are parsed as strings. In addition, none of them are required in the code, though in practice the fields not marked as optional in the list above are all used to provide contextual information to the WordPress administrator and should be included for all themes.
A: You are probably thinking about this:
/*
THEME NAME: Parallax
THEME URI: http://parallaxdenigrate.net
VERSION: .1
AUTHOR: Martin Jacobsen
AUTHOR URI: http://martinjacobsen.no
*/
If I'm not way off, Wordpress uses this info to display in the "Activate Design" dialog in the admin backend.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86800",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Why would a "java.net.ConnectException: Connection timed out" exception occur when URL is up? I'm getting a ConnectException: Connection timed out with some frequency from my code. The URL I am trying to hit is up. The same code works for some users, but not others. It seems like once one user starts to get this exception they continue to get the exception.
Here is the stack trace:
java.net.ConnectException: Connection timed out
Caused by: java.net.ConnectException: Connection timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
at java.net.Socket.connect(Socket.java:516)
at java.net.Socket.connect(Socket.java:466)
at sun.net.NetworkClient.doConnect(NetworkClient.java:157)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:365)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:477)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:214)
at sun.net.www.http.HttpClient.New(HttpClient.java:287)
at sun.net.www.http.HttpClient.New(HttpClient.java:299)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:796)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:748)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:673)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:840)
Here is a snippet from my code:
URLConnection urlConnection = null;
OutputStream outputStream = null;
OutputStreamWriter outputStreamWriter = null;
InputStream inputStream = null;
try {
URL url = new URL(urlBase);
urlConnection = url.openConnection();
urlConnection.setDoOutput(true);
outputStream = urlConnection.getOutputStream(); // exception occurs on this line
outputStreamWriter = new OutputStreamWriter(outputStream);
outputStreamWriter.write(urlString);
outputStreamWriter.flush();
inputStream = urlConnection.getInputStream();
String response = IOUtils.toString(inputStream);
return processResponse(urlString, urlBase, response);
} catch (IOException e) {
throw new Exception("Error querying url: " + urlString, e);
} finally {
IoUtil.close(inputStream);
IoUtil.close(outputStreamWriter);
IoUtil.close(outputStream);
}
A: Connection timeouts (assuming a local network and several client machines) typically result from
a) some kind of firewall on the way that simply eats the packets without telling the sender things like "No Route to host"
b) packet loss due to wrong network configuration or line overload
c) too many requests overloading the server
d) a small number of simultaneously available threads/processes on the server which leads to all of them being taken. This happens especially with requests that take a long time to run and may combine with c).
A: The error message says it all: your connection timed out. This means your request did not get a response within some (default) timeframe. The reason that no response was received is likely one of:
a) The IP/domain or port is incorrect
b) The IP/domain or port (i.e service) is down
c) The IP/domain is taking longer than your default timeout to respond
d) You have a firewall that is blocking requests or responses on whatever port you are using
e) You have a firewall that is blocking requests to that particular host
f) Your internet access is down
g) Your live server is down (e.g., in the case of a REST API call).
Note that firewalls and port or IP blocking may be in place by your ISP
A: I'd recommend raising the connection timeout time before getting the output stream, like so:
urlConnection.setConnectTimeout(1000);
Where 1000 is in milliseconds (1000 milliseconds = 1 second).
A: *
*Try telnet to the host and port to check for any firewall issue
*Perform tracert/traceroute to find the number of hops
A: If the URL works fine in the web browser on the same machine, it might be that the Java code isn't using the HTTP proxy the browser is using for connecting to the URL.
A: I solved my problem with:
System.setProperty("https.proxyHost", "myProxy");
System.setProperty("https.proxyPort", "80");
or http.proxyHost...
A:
Why would a “java.net.ConnectException: Connection timed out”
exception occur when URL is up?
Because the URLConnection (HttpURLConnection/HttpsURLConnection) is erratic. You can read about this here and here.
Our solution was two things:
a) set the ContentLength via setFixedLengthStreamingMode
b) catch any TimeoutException and retry if it failed (see the sketch below).
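A hedged sketch combining both points (urlBase, urlString and IOUtils are taken from the question; the timeout values and retry count are arbitrary assumptions, and SocketTimeoutException is what the explicit timeouts raise):
private String postWithRetry(String urlBase, String urlString) throws IOException {
    byte[] payload = urlString.getBytes("UTF-8");
    IOException lastFailure = null;
    for (int attempt = 0; attempt < 3; attempt++) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(urlBase).openConnection();
            conn.setConnectTimeout(5000);  // fail fast instead of waiting on the OS default
            conn.setReadTimeout(5000);
            conn.setDoOutput(true);
            conn.setFixedLengthStreamingMode(payload.length);  // a) fixed Content-Length
            OutputStream out = conn.getOutputStream();
            out.write(payload);
            out.close();
            return IOUtils.toString(conn.getInputStream());
        } catch (SocketTimeoutException e) {
            lastFailure = e;  // b) timed out - loop and retry
        }
    }
    throw lastFailure;
}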
A: This can be an IPv6 problem (the host publishes an IPv6 AAAA address and the user's host thinks it is configured for IPv6 but is not actually correctly connected). It can also be a network MTU problem, a firewall block, or the target host might publish different IP addresses (randomly or based on the originator's country) which are not all reachable. Or similar network problems.
You can't do much besides setting a timeout and adding good error messages (especially printing out the host's resolved address). If you want to make it more robust, add retries, try all addresses in parallel, and also look into name resolution caching (positive and negative) on the Java platform.
A: There is a possibility that your IP/host are blocked by the remote host, especially if it thinks you are hitting it too hard.
A: The reason why this happened to me was that a remote server was allowing only certain IP addresses but not its own, and I was trying to render the images from the server's URLs... so everything would simply halt, displaying the timeout error that you had...
Make sure that either the server is allowing its own IP, or that you are rendering things from some remote URL that actually exists.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "93"
}
|
Q: Javascript array reference If I have the following:
{"hdrs": ["Make","Model","Year"],
"data" : [
{"Make":"Honda","Model":"Accord","Year":"2008"}
{"Make":"Toyota","Model":"Corolla","Year":"2008"}
{"Make":"Honda","Model":"Pilot","Year":"2008"}]
}
And I have a "hdrs" name (i.e. "Make"), how can I reference the data array instances?
seems like data["Make"][0] should work...but unable to get the right reference
EDIT
Sorry for the ambiguity... I can loop through hdrs to get each hdr name, but I need to use each instance value of hdrs to find all the data elements in data (not sure that is any better of an explanation), and I will have it in a variable t since it is JSON (appreciate the re-tagging). I would like to be able to reference it with something like this: t.data[hdrs[i]][j]
A: I had to alter your code a little:
var x = {"hdrs": ["Make","Model","Year"],
"data" : [
{"Make":"Honda","Model":"Accord","Year":"2008"},
{"Make":"Toyota","Model":"Corolla","Year":"2008"},
{"Make":"Honda","Model":"Pilot","Year":"2008"}]
};
alert( x.data[0].Make );
EDIT: in response to your edit
var x = {"hdrs": ["Make","Model","Year"],
"data" : [
{"Make":"Honda","Model":"Accord","Year":"2008"},
{"Make":"Toyota","Model":"Corolla","Year":"2008"},
{"Make":"Honda","Model":"Pilot","Year":"2008"}]
};
var Header = 0; // Make
for( var i = 0; i <= x.data.length - 1; i++ )
{
alert( x.data[i][x.hdrs[Header]] );
}
A: First, you forgot your trailing commas in your data array items.
Try the following:
var obj_hash = {
"hdrs": ["Make", "Model", "Year"],
"data": [
{"Make": "Honda", "Model": "Accord", "Year": "2008"},
{"Make": "Toyota", "Model": "Corolla", "Year": "2008"},
{"Make": "Honda", "Model": "Pilot", "Year": "2008"},
]
};
var ref_data = obj_hash.data;
alert(ref_data[0].Make);
@Kent Fredric: note that the last comma is not strictly needed, but allows you to more easily move lines around (i.e., if you move or add after the last line, and it didn't have a comma, you'd have to specifically remember to add one). I think it's best to always have trailing commas.
A: So, like this?
var theMap = /* the stuff you posted */;
var someHdr = "Make";
var whichIndex = 0;
var correspondingData = theMap["data"][whichIndex][someHdr];
That should work, if I'm understanding you correctly...
A: var x = {"hdrs": ["Make","Model","Year"],
"data" : [
{"Make":"Honda","Model":"Accord","Year":"2008"}
{"Make":"Toyota","Model":"Corolla","Year":"2008"}
{"Make":"Honda","Model":"Pilot","Year":"2008"}]
};
x.data[0].Make == "Honda"
x['data'][0]['Make'] == "Honda"
You have your array/hash lookup backwards :)
A: I'm not sure I understand your question, but...
Assuming the above JSON is the var obj, you want:
obj.data[0]["Make"] // == "Honda"
If you just want to refer to the field referenced by the first header, it would be something like:
obj.data[0][obj.hdrs[0]] // == "Honda"
A: perhaps try data[0].Make
A: Close, you'd use
var x = data[0].Make;
var z = data[0].Model;
var y = data[0].Year;
A: Your code as displayed is not syntactically correct; it needs some commas. I got this to work:
$foo = {"hdrs": ["Make","Model","Year"],
"data" : [
{"Make":"Honda","Model":"Accord","Year":"2008"},
{"Make":"Toyota","Model":"Corolla","Year":"2008"},
{"Make":"Honda","Model":"Pilot","Year":"2008"}]
};
and then I can access data as:
$foo["data"][0]["make"]
A: With the help of the answers (and after getting the inside and outside loops correct) I got this to work:
var t = eval( "(" + request + ")" ) ;
for (var i = 0; i < t.data.length; i++) {
myTable += "<tr>";
for (var j = 0; j < t.hdrs.length; j++) {
myTable += "<td>" ;
if (t.data[i][t.hdrs[j]] == "") {myTable += " " ; }
else { myTable += t.data[i][t.hdrs[j]] ; }
myTable += "</td>";
}
myTable += "</tr>";
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86849",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: DB2 Transport Component is not registered correctly I'm trying to test the DB2 adapter for BizTalk 2006 (not R2).
While trying to configure an instance in an application, I get an error stating:
DB2 Transport Component is not registered correctly
The enivronment is 2 BizTalk servers sharing a messagebox.
The DB2 adapter works fine on the first server. It is the second server I am having problems with.
I've exported the .msi files from the first server, then installed them onto the second server and imported them into BizTalk. All of the other adapters that I'm using work fine on both servers.
*
*Google searches don't bring up a whole lot regarding troubleshooting the BizTalk DB2 adapter.
*Further troubleshooting has shown that MS BizTalk Adapters for Host Systems is installed on both machines. However, it was only configured on the machine that is giving me the issue.
*I've unconfigured it, but that still has not helped.
*I've double checked that the version numbers of the .dll's for the DB2 adapter are the same on both servers, and made sure that they are installed in the GAC.
*None of this has helped.
Has anyone run into an issue like this before, or can anyone point me in the direction of where to look for BizTalk DB2 adapter troubleshooting guidance?
A: When the "registered" word appears, I think about the registration of COM components, not the installation of .NET assemblies. The underlying driver the DB2 adapter uses is the Microsoft ODBC Driver for DB2. You may want to check if your ODBC DSN control panel shows up that particular driver for you to configure a DSN.
I'd recommend a reinstallation of the adapter pack for Host Systems.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Why is the root logger collecting all log types regardless of the configuration? I am having a problem: even though I specify the level as ERROR in the root tag, the specified appender logs all levels (debug, info, warn) to the file regardless of the settings. I am not a Log4j expert, so any help is appreciated.
I have checked the classpath for a log4j.properties file (there is none); only the log4j.xml is present.
Here is the log4j.xml file:
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j='http://jakarta.apache.org/log4j/'>
<!-- ============================== -->
<!-- Append messages to the console -->
<!-- ============================== -->
<appender name="console" class="org.apache.log4j.ConsoleAppender">
<param name="Target" value="System.out" />
<layout class="org.apache.log4j.PatternLayout">
<!-- The default pattern: Date Priority [Category] Message\n -->
<param name="ConversionPattern" value="[AC - %5p] [%d{ISO8601}] [%t] [%c{1} - %L] %m%n" />
</layout>
</appender>
<appender name="logfile" class="org.apache.log4j.RollingFileAppender">
<param name="File" value="./logs/server.log" />
<param name="MaxFileSize" value="1000KB" />
<param name="MaxBackupIndex" value="2" />
<layout class="org.apache.log4j.PatternLayout">
<param name="ConversionPattern" value="[AC - %-5p] {%d{dd.MM.yyyy - HH.mm.ss}} %m%n" />
</layout>
</appender>
<appender name="payloadAppender" class="org.apache.log4j.RollingFileAppender">
<param name="File" value="./logs/payload.log" />
<param name="MaxFileSize" value="1000KB" />
<param name="MaxBackupIndex" value="10" />
<layout class="org.apache.log4j.PatternLayout">
<param name="ConversionPattern" value="[AC - %-5p] {%d{dd.MM.yyyy - HH.mm.ss}} %m%n" />
</layout>
</appender>
<appender name="errorLog" class="org.apache.log4j.RollingFileAppender">
<param name="File" value="./logs/error.log" />
<param name="MaxFileSize" value="1000KB" />
<param name="MaxBackupIndex" value="10" />
<layout class="org.apache.log4j.PatternLayout">
<param name="ConversionPattern" value="[AC - %-5p] {%d{dd.MM.yyyy - HH.mm.ss}} %m%n" />
</layout>
</appender>
<appender name="traceLog"
class="org.apache.log4j.RollingFileAppender">
<param name="File" value="./logs/trace.log" />
<param name="MaxFileSize" value="1000KB" />
<param name="MaxBackupIndex" value="20" />
<layout class="org.apache.log4j.PatternLayout">
<param name="ConversionPattern"
value="[AccessControl - %-5p] {%t: %d{dd.MM.yyyy - HH.mm.ss,SSS}} %m%n" />
</layout>
</appender>
<appender name="traceSocketAppender" class="org.apache.log4j.net.SocketAppender">
<param name="remoteHost" value="localhost" />
<param name="port" value="4445" />
<param name="locationInfo" value="true" />
</appender>
<logger name="TraceLogger">
<level value="trace" /> <!-- Set level to trace to activate tracing -->
<appender-ref ref="traceLog" />
</logger>
<logger name="org.springframework.ws.server.endpoint.interceptor">
<level value="DEBUG" />
<appender-ref ref="payloadAppender" />
</logger>
<root>
<level value="error" />
<appender-ref ref="errorLog" />
</root>
</log4j:configuration>
If I replace the root with another logger, then nothing gets logged at all to the specified appender.
<logger name="com.mydomain.logic">
<level value="error" />
<appender-ref ref="errorLog" />
</logger>
A: Run your program with -Dlog4j.debug so that standard out gets info about how log4j is configured -- I suspect that it isn't configured the way that you think it is.
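For example, with a plain java launch (the jar name is just a placeholder), log4j will print to standard out which configuration file it actually picked up:
java -Dlog4j.debug -jar yourapp.jar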
A: The root logger resides at the top of the logger hierarchy. It is exceptional in three ways:
*
*it always exists,
*its level cannot be set to null
*it cannot be retrieved by name.
The rootLogger is the ancestor of all loggers. Each enabled logging request for a given logger will be forwarded to all the appenders in that logger as well as the appenders higher in the hierarchy (including those of the rootLogger).
For example, if the console appender is added to the root logger, then all enabled logging requests will at least print on the console. If in addition a file appender is added to a logger, say L, then enabled logging requests for L and L's children will print on a file and on the console. It is possible to override this default behavior so that appender accumulation is no longer additive by setting the additivity flag to false.
From the log4j manual
To sum up:
If you do not want to propagate a logging event to the parent loggers (say, the rootLogger), then set the additivity flag to false on those loggers. In your case:
<logger name="org.springframework.ws.server.endpoint.interceptor"
additivity="false">
<level value="DEBUG" />
<appender-ref ref="payloadAppender" />
</logger>
In standard log4j config style (which I prefer to XML):
log4j.logger.org.springframework.ws.server.endpoint.interceptor = DEBUG, payloadAppender
log4j.additivity.org.springframework.ws.server.endpoint.interceptor = false
Hope this helps.
A: To add on to what James A. N. Stauffer and cynicalman said - I would bet that there is another log4j.xml / log4j.properties on your classpath other than the one you wish to be used that is causing log4j to configure itself the way it is.
-Dlog4j.debug is an absolute killer way to troubleshoot any log4j issues.
A: Two things: Check additivity and decide whether you want log events captured by more detailed levels of logging to propagate to the root logger.
Secondly, check the level for the root logger. In addition you can also add filtering on the appender itself, but this should normally not be necessary.
A: If you are using a log4j.properties file, this file is typically expected to be in the root of your classpath, so make sure it's there.
A: This is correct behavior. The root logger is like the default behavior: if you don't specify any logger, the root logger's level is used as the default, but this does not mean that the root logger's level applies to all your logs.
Any of your code which logs using the 'TraceLogger' logger or the 'org.springframework.ws.server.endpoint.interceptor' logger will log messages at TRACE and DEBUG level respectively; any other code will use the root logger to log messages at its level, which in your case is ERROR.
So if you use a logger other than root, the root log level will be overridden by that logger's log level. To get the desired output, change the other two log levels to ERROR.
I hope this is helpful.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41"
}
|
Q: Creating a fluid panel in GWT to fill the page? I would like a panel in GWT to fill the page without actually having to set the size. Is there a way to do this? Currently I have the following:
public class Main implements EntryPoint
{
public void onModuleLoad()
{
HorizontalSplitPanel split = new HorizontalSplitPanel();
//split.setSize("250px", "500px");
split.setSplitPosition("30%");
DecoratorPanel dp = new DecoratorPanel();
dp.setWidget(split);
RootPanel.get().add(dp);
}
}
With the previous code snippet, nothing shows up. Is there a method call I am missing?
Thanks.
UPDATE Sep 17 '08 at 20:15
I put some buttons (explicitly set their size) on each side and that still doesn't work. I'm really surprised there isn't like a FillLayout class or a setFillLayout method or setDockStyle(DockStyle.Fill) or something like that. Maybe it's not possible? But for as popular as GWT is, I would think it would be possible.
UPDATE Sep 18 '08 at 14:38
I have tried setting the RootPanel width and height to 100% and that still didn't work. Thanks for the suggestion though, that seemed like it maybe was going to work. Any other suggestions??
A: Google has answered the main part of your question in one of their FAQs:
http://code.google.com/webtoolkit/doc/1.6/FAQ_UI.html#How_do_I_create_an_app_that_fills_the_page_vertically_when_the_b
The primary point is that you can't set height to 100%, you must do something like this:
final VerticalPanel vp = new VerticalPanel();
vp.add(mainPanel);
vp.setWidth("100%");
vp.setHeight(Window.getClientHeight() + "px");
Window.addResizeHandler(new ResizeHandler() {
public void onResize(ResizeEvent event) {
int height = event.getHeight();
vp.setHeight(height + "px");
}
});
RootPanel.get().add(vp);
A: Ben's answer is very good, but is also out of date. A resize handler was necessary in GWT 1.6, but we are at 2.4 now. You can use a DockLayoutPanel to have your content fill the page vertically. See the sample mail app, which uses this panel.
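A minimal sketch of that approach (widget contents and sizes here are placeholders, and it assumes a GWT 2.x project whose host page runs in standards mode, which the layout panels require):
DockLayoutPanel panel = new DockLayoutPanel(Unit.EM);
panel.addWest(new HTML("navigation"), 12); // fixed-size west region
panel.add(new HTML("content")); // the center widget takes all remaining space
RootLayoutPanel.get().add(panel); // RootLayoutPanel resizes with the browser window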
A: I think you'll need to put something on the left and/or right of the split (split.setLeftWidget(widget), split.setRightWidget(widget) OR split.add(widget), which will add first to the left, then to the right) in order for anything to show up.
A: Try setting the width and/or height of the rootpanel to 100% before adding your widget.
A: Panels automatically resize to the smallest visible width. So you must resize every panel you add to the RootPanel to 100% including your SplitPanel. The RootPanel itself does not need resizing. So try:
split.setWidth("100%");
split.setHeight("100%");
A: The documentation for DecoratorPanel says:
If you set the width or height of the DecoratorPanel, you need to set the height and width of the middleCenter cell to 100% so that the middleCenter cell takes up all of the available space. If you do not set the width and height of the DecoratorPanel, it will wrap its contents tightly.
A: The problem is that the DecoratorPanel's middleCenter cell will fit tightly around your contents by default, and SplitPanel doesn't allow a "100%" style size setting.
To do what you want, set the table's style appropriately, on your CSS:
.gwt-DecoratorPanel .middleCenter {
height: 100%;
width: 100%;
}
A: For me it does not work if I just set the width like
panel.setWidth("100%");
The panel is a VerticalPanel and it just gets a random size that I don't know the origin of. But what @Ben Bederson commented worked very well for me after I added the line:
panel.setWidth(Window.getClientWidth() + "px");
Just this, nothing else. Thanks
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86901",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
}
|
Q: Opening Javascript based popup ads on the same page I own an image hosting site and would like to generate one popup per visitor per day. The easiest way for me to do this was to write a php script that called subdomains, like ads1.sitename.com
ads2.sitename.com
Unfortunately, most of my advertisers want to give me a block of JavaScript code to use rather than a direct link, so I can't just make the individual subdomains header redirects. I'd rather use the subdomains; that way I can manage multiple advertisers without changing any code on the page, just code in my PHP admin page. Any ideas on how I can stick this JavaScript into the page so I don't need to worry about a blank ads1.sitename.com as well as the popup coming up?
A: I doubt you'll find much sympathy for help with pop-up ads.
A: How about appending a simple window.close() after the advertising code? That way their popup is displayed and your window closes neatly.
I'm not sure that I've ever had a browser complain that the window is being closed. This method has always worked for me. (IE, Firefox, etc.)
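For instance, the subdomain page could be little more than this (a sketch; the advertiser snippet is a placeholder, and note that browsers generally only honor window.close() on windows that were opened by script):
<html>
  <body>
    <!-- paste the advertiser's JavaScript block here -->
    <script type="text/javascript">
      // once the ad code has run, close this helper window
      window.close();
    </script>
  </body>
</html>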
A: At the risk of helping someone who wants to deploy popup ads (which is bound to fail due to most popup blockers anyway), why can't you just have the subdomains load pages that load the block of Javascript the advertisers give you?
A: Hey, cut the guy some slack. Popups might not be very nice, but at least he's trying to reduce the number of them. And popup blockers are going to fix most of it anyway. In any case, someone else might find this question with more altruistic goals (not sure how they'd fit that with popups, but hey-ho).
I don't quite follow your question, but here are some ideas:
*
*Look into Server Side Includes (SSI) to easily add a block of javascript to each page (though you could also do it with a PHP include instead)
*Do your advertiser choosing in your PHP script rather than calling the subdomains
*Decipher the javascript to work out what it's doing and put a modified version in the subdomain page so it doesn't need an additional popup. Shouldn't be too hard.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Suggestions on how build an HTML Diff tool? In this post I asked if there were any tools that compare the structure (not actual content) of 2 HTML pages. I ask because I receive HTML templates from our designers, and frequently miss minor formatting changes in my implementation. I then waste a few hours of designer time sifting through my pages to find my mistakes.
The thread offered some good suggestions, but there was nothing that fit the bill. "Fine, then", thought I, "I'll just crank one out myself. I'm a halfway-decent developer, right?".
Well, once I started to think about it, I couldn't quite figure out how to go about it. I can crank out a data-driven website easily enough, or do a CMS implementation, or throw documents in and out of BizTalk all day. Can't begin to figure out how to compare HTML docs.
Well, sure, I have to read the DOM, and iterate through the nodes. I have to map the structure to some data structure (how??), and then compare them (how??). It's a development task like none I've ever attempted.
So now that I've identified a weakness in my knowledge, I'm even more challenged to figure this out. Any suggestions on how to get started?
clarification: the actual content isn't what I want to compare -- the creative guys fill their pages with lorem ipsum, and I use real content. Instead, I want to compare structure:
<div class="foo">lorem ipsum<div>
is different that
<div class="foo"><p>lorem ipsum<p><div>
A: The DOM is a data structure - it's a tree.
A: Run both files through the following Perl script, then use diff -iw to do a case-insensitive, whitespace-ignoring diff.
#! /usr/bin/perl -w
use strict;
undef $/;
my $html = <STDIN>;
while ($html =~ /\S/) {
if ($html =~ s/^\s*<//) {
$html =~ s/^(.*?)>// or die "malformed HTML";
print "<$1>\n";
} else {
$html =~ s/^([^<]+)//;
print "(text)\n";
}
}
A: @Mike - that would compare everything, including the content of the page, which isn't want the original poster wanted.
Assuming that you have access to the browser's DOM (by writing a Firefox/IE plugin or whatever), I would probably put all of the HTML elements into a tree, then compare the two trees. If the tag name is different, then the node is different. You might want to stop enumerating at a certain point (you probably don't care about span, bold, italic, etc. - maybe only worry about divs?), since some tags are really the content, rather than the structure, of the page.
A: If I were to tackle this issue, I would do this:
*
*Plan for some kind of DOM for HTML pages. Start lightweight and then add more as needed. I would use the composite pattern for the data structure, i.e. every element has a children collection of the base class type.
*Create a parser to parse HTML pages.
*Using the parser, load the HTML elements into the DOM.
*After the pages have been loaded into the DOM, you have a hierarchical snapshot of your HTML pages' structure.
*Keep iterating through every element on both sides till the end of the DOM. You'll find the diff in the structure when you hit a mismatched element type.
In your example you would have only a div element object loaded on one side; on the other side you would have a div element object loaded with one child element of paragraph type. Fire up your iterator: on the first pass you'll match up the div elements; on the second you'll match the paragraph with nothing. You've got your structural difference.
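A minimal sketch of that tag-by-tag idea, written in Python with only the standard library purely as an illustration - it flattens each page to a sequence of start/end tags plus class attributes (ignoring text and ids) and diffs the two sequences:
import difflib
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []
    def handle_starttag(self, tag, attrs):
        # keep the tag name and class only; ignore ids and text content
        self.tags.append("<%s class=%r>" % (tag, dict(attrs).get("class", "")))
    def handle_endtag(self, tag):
        self.tags.append("</%s>" % tag)

def structure(html):
    collector = TagCollector()
    collector.feed(html)
    return collector.tags

def structural_diff(html_a, html_b):
    return "\n".join(difflib.unified_diff(structure(html_a), structure(html_b), lineterm=""))

print(structural_diff('<div class="foo">lorem ipsum</div>',
                      '<div class="foo"><p>lorem ipsum</p></div>'))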
A: I think some of the suggestions above don't take into account that there are other tags in the HTML between two pages which would be textually different, but the resulting HTML markup is functionally equivalent. Danimal lists control IDs as an example.
The following two markups are functionally identical, but would show up as different if you simply compared tags:
<div id="ctl00_TopNavHome_DivHeader" class="header4">foo</div>
<div class="header4">foo</div>
I was going to suggest Danimal write an HTML translation which looks for the HTML tags and converts both docs into a simplified version that omits ID tags and any other tags you designate as irrelevant. This’d likely have to be a work in progress, as you ignore certain attributes/tags and then run into new ones which you also want to ignore.
However, I like the idea of using the XmlSchemaInterface to boil it down to the XML schema, then use a diff tool which understands XML rules.
A: See http://www.semdesigns.com/Products/SmartDifferencer/index.html for a tool that is parameterized by language grammar, and produces deltas in terms of language elements (identifiers, expressions, statements, blocks, methods, ...) inserted, deleted, moved, replaced, or with identifiers substituted across it consistently. This tool ignores whitespace reformatting (e.g., different line breaks or layouts) and semantically indistinguishable values (e.g., it knows that 0x0F and 15 are the same value).
This can be applied to HTML using an HTML parser.
EDIT: 9/12/2009. We've built an experimental SmartDiff tool using an HTML editor.
A: http://www.mugo.ca/Products/Dom-Diff
Works with FF 3.5. I haven't tested FF 3.6 yet.
A: I don't know any tool but I know there is a simple way to do this:
*
*First, use a regular expression tool to strip off all the text in your HTML file. You can use this regular expression to search for the text (?<=^|>)[^><]+?(?=<|$) and replace them with an empty string (""), i.e. delete all the text. After this step, you will have all HTML markup tags. There are a lot of free regular expression tools out there.
*Then, you repeat the first step for the original HTML file.
*Last, you use a diff tool to compare the two sets of HTML markups. This will show what is missing between one set and the other.
A: This has been an excellent start. A few more clarifications/comments:
*
*I probably don't care about IDs, since .net will mangle them
*some of the structure will be in a repeater or other such control, so I might end up having more or fewer repeating elements
further thought:
I think a good start would be to assume the html is XHTML compliant. I could then infer the schema (using the new .net XmlSchemaInference methods), then diff the schemata. I can then look at the differences and consider whether or not they're significant.
A: My suggestion is just the basic way of doing it... Of course, to tackle the issue you mentioned, additional rules must be applied here... which in your case means: once we get a matching div element, we then apply attribute/property matching rules and so on...
To be honest, there are many complicated rules that need to be applied for the comparison, and it's not just simple matching of one element to another. For example, what happens if you have duplicates?
e.g. one div element on one side, and two div elements on the other side. How are you going to match up which div elements belong together?
There are a lot of other complicated issues that you will find in the comparison world. I'm speaking from experience (part of my job is to maintain my company's text comparison engine).
A: Take a look at beyond compare. It has an XML comparison feature that can help you out.
A: You may also have to consider that the 'content' itself could contain additional mark-up so it's probably worth stripping out everything within certain elements (like <div>s with certain IDs or classes) before you do your comparison. For example:
<div id="mainContent">
<p>lorem ipsum etc..</p>
</div>
and
<div id="mainContent">
<p>Here is some real content<img class="someImage" src="someImage.jpg" /></p>
<ul>
<li>and</li>
<li>some</li>
<li>more..</li>
</ul>
</div>
A: I would use (or contribute to) html5lib and its SAX output. Just zip through the 2 SAX streams looking for mismatches and highlight the whole corresponding subtree.
A: Pretty Diff can do this. It will compare the code structure only regardless of differences to white space, comments, or even content. Just be sure to check the option "Normalize Content and String Literals".
http://prettydiff.com/
A: Open each page in the browser and save them as .htm files. Compare the two using windiff.
A: If I were to do this, first I would learn HTML. (^-^) Then I would build a tool that strips out all of the actual content and then saves that as a file so it can be piped through WinDiff (or another merge tool).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: How do I fix "501 Syntactically invalid HELO argument(s)"? I'm using exim on both the sending and relay hosts, the sending host seems to offer:
HELO foo_bar.example.com
Response:
501 Syntactically invalid HELO argument(s)
A: Possibly a problem with underscores in the hostname?
http://www.exim.org/lurker/message/20041124.113314.c44c83b2.en.html
A: Underscores aren't actually valid in internet host names, despite some people using them anyway. A sane DNS server should not allow you to have records for them.
Change your system's host name so it's valid, hopefully this will fix it.
A: After spending so many hours trying to fix this problem, which in my case came up out of nowhere, I ended up with a solution. In my case, only the systems deployed on Suse hosts suddenly stopped sending emails, but not those (the same systems) running on Ubuntu. After exhausting and eliminating all the suggested causes of this problem, and even considering changing the OS of those machines, I found out that somehow the mail-sending service is sensitive to the hostname of the host machine. On the Ubuntu machines the file /etc/hosts has only the following line:
127.0.0.1 localhost
and so did the Suse machines, which stopped sending the emails. After editing /etc/hosts on the Suse machines to
127.0.0.1 localhost proplad
where proplad is the hostname of the machine, the errors vanished. It seems that some security policy (maybe from the SMTP service) uses the hostname information carried through the API, which was being ignored in the case of the Ubuntu machines, but not in the case of the Suse machines. Hope this helps others and saves them massive hours of research over the internet.
A: Diago's answer helped me solve the problem I have been trying to figure out.
Our Suse OS also stopped working out of nowhere. I tried every suggestion that I found here and on Google. Nothing worked. I tried adding our domain to /etc/hosts but that did not help.
I got the hostname of the server with the hostname command, and added that hostname to the /etc/hosts file just like Digao suggested.
127.0.0.1 localhost susetest
I saved the changes, then ran postfix stop, postfix start. And works like a charm now.
A: The argument to HELO should be a hostname or an IP address. foo_bar.example.com is neither an IP address nor a hostname (underscores are illegal in hostnames), so the error message is correct and there is nothing to fix.
A: Using qmail I came across this problem. I realised this was because of a previously unfinished installation.
1) When sending email, qmail announces itself to other SMTP servers with "HELO ..." followed by what is in the file at /var/qmail/control/me
(sometimes the file is located at /var/qmail/control/helohost)
2) This file should contain a hostname with a valid DNS entry.
Mine did not; it had (none) in it, which is why mails were failing to be sent.
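So the fix is essentially a one-liner along these lines (mail.example.com is a placeholder - use a hostname with a valid DNS entry - and qmail should be restarted so it rereads its control files):
echo mail.example.com > /var/qmail/control/me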
A: I found that my local dev server suddenly stopped sending emails (using external SMTP) and on the server logs I found:
rejected EHLO from cpc96762-*******.net [..**.68]: syntactically invalid argument(s): 127.0.0.1:8888/app_dev.php
127.0.0.1:8888/app_dev.php is my local URL; I am using Docker, Symfony and Swift Mailer.
The only solution that helped in my case was adding the parameter:
local_domain = "localhost"
to my Swift Mailer configuration. That solved all the problems.
See the docs for the Swift Mailer local_domain option: https://symfony.com/doc/current/reference/configuration/swiftmailer.html#local-domain
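In Symfony's YAML configuration that looks something like this (a sketch; the exact file location depends on your project layout):
# app/config/config.yml
swiftmailer:
    local_domain: 'localhost'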
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Converting SQL Result Sets to XML I am looking for a tool that can serialize and/or transform SQL Result Sets into XML. Getting dumbed down XML generation from SQL result sets is simple and trivial, but that's not what I need.
The solution has to be database neutral, and accepts only regular SQL query results (no db xml support used). A particular challenge of this tool is to provide nested XML matching any schema from row based results. Intermediate steps are too slow and wasteful - this needs to happen in one single step; no RS->object->XML, preferably no RS->XML->XSLT->XML. It must support streaming due to large result sets, big XML.
Anything out there for this?
A: Not that I know of. I would just roll my own. It's not that hard to do, maybe something like this:
#!/usr/bin/env jruby
import java.sql.DriverManager
# TODO some magic to load the driver
conn = DriverManager.getConnection(ARGV[0], ARGV[1], ARGV[2])
stmt = conn.createStatement
res = stmt.executeQuery(ARGV[3])
puts "<result>"
meta = res.meta_data
while res.next
puts "<row>"
for n in 1..meta.column_count
column = meta.getColumnName n
puts "<#{column}>#{res.getString(n)}</#{column}"
end
puts "</row>"
end
puts "</result>"
Disclaimer: I just made all of that up, I'm not even bothering to pretend that it works. :-)
A: In .NET you can fill a dataset from any source and then it can write that out to disk for you as XML with or without the schema. I can't say what performance for large sets would be like. Simple :)
A: With SQL Server you really should consider using the FOR XML construct in the query.
If you're using .Net, just use a DataAdapter to fill a dataset. Once it's in a dataset, just use its .WriteXML() method. That breaks your DB->object->XML rule, but it's really how things are done. You might be able to work something out with a datareader, but I doubt it.
A: Another option, depending on how many schemas you need to output, and/or how dynamic this solution is supposed to be, would be to actually write the XML directly from the SQL statement, as in the following simple example...
SELECT
'<Record>' ||
'<name>' || name || '</name>' ||
'<address>' || address || '</address>' ||
'</Record>'
FROM
contacts
You would have to prepend and append the document element, but I think this example is easy enough to understand.
A: dbunit (www.dbunit.org) does go from sql to xml and vice versa; you might be able to modify it more for your needs.
A: Technically, converting a result set to an XML file is straightforward and doesn't need any tool unless you have a requirement to convert the data structure to fit a specific export schema. In general the result set becomes the top-level element of the XML file, and then you produce a number of record elements containing attributes, which effectively are the fields of a record.
When it comes to Java, for example, you just need an appropriate JDBC driver for interfacing with the DBMS of your choice, addressing the database-independence requirement (usually provided by a DBMS vendor), and a few lines of code to read a result set and print out an XML string per record, per field. Not a difficult task for an average Java developer in my opinion.
Anyway, the more concrete purpose you state the more concrete answer you get.
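A minimal sketch of that Java approach (connection details come from the command line; XML escaping of column names and values is omitted for brevity):
import java.sql.*;

public class ResultSetToXml {
    public static void main(String[] args) throws Exception {
        // args: jdbcUrl user password query
        try (Connection conn = DriverManager.getConnection(args[0], args[1], args[2]);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(args[3])) {
            ResultSetMetaData meta = rs.getMetaData();
            System.out.println("<result>");
            while (rs.next()) {
                System.out.println("  <row>");
                for (int i = 1; i <= meta.getColumnCount(); i++) {
                    String col = meta.getColumnName(i);
                    System.out.printf("    <%s>%s</%s>%n", col, rs.getString(i), col);
                }
                System.out.println("  </row>");
            }
            System.out.println("</result>");
        }
    }
}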
A: In Java, you may just fill an object with the xml data (like an entity bean) and then use XMLEncoder to get it to xml. From there you may use XSLT for further conversion or XMLDecoder to bring it back to an object.
Greetz, GHad
PS: See http://ghads.wordpress.com/2008/09/16/java-to-xml-to-java/ for an example for the Object to XML part... From DB to Object multiple more way are possible: JDBC, Groovy DataSets or GORM. Apache Common Beans may help to fill up JavaBeans via Reflection-like methods.
A: I created a solution to this problem by using the equivalent of a mail merge using the resultset as the source, and a template through which it was merged to produce the desired XML.
The template was standard XML, with a Header element, a Footer element and a Body element. Using a CDATA block in the Body element allowed me to include a complete XML structure that acted as the template for each row. In order to include fields from the resultset in the template, I used markers that looked like this <[FieldName]>. The template was then pre-parsed to isolate the markers such that, in operation, the template requests each of the fields from the resultset as the Body is being produced.
The Header and Footer elements are output only once at the beginning and end of the output set. The body could be any XML or text structure desired. In your case, it sounds like you might have several templates, one for each of your desired schemas.
All of the above was encapsulated in a Template class, such that after loading the Template, I merely called merge() on the template passing the resultset in as a parameter.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: What's a good way to store raster data? I have a variety of time-series data stored on a more-or-less georeferenced grid, e.g. one value per 0.2 degrees of latitude and longitude. Currently the data are stored in text files, so at day-of-year 251 you might see:
251
12.76 12.55 12.55 12.34 [etc., 200 more values...]
13.02 12.95 12.70 12.40 [etc., 200 more values...]
[etc., 250 more lines]
252
[etc., etc.]
I'd like to raise the level of abstraction, improve performance, and reduce fragility (for example, the current code can't insert a day between two existing ones!). We'd messed around with BLOB-y RDBMS hacks and even replicating each line of the text file format as a row in a table (one row per timestamp/latitude pair, one column per longitude increment -- yecch!).
We could go to a "real" geodatabase, but the overhead of tagging each individual value with a lat and long seems prohibitive. The size and resolution of the data haven't changed in ten years and are unlikely to do so.
I've been noodling around with putting everything in NetCDF files, but think we need to get past the file mindset entirely -- I hate that all my software has to figure out filenames from dates, deal with multiple files for multiple years, etc.. The alternative, putting all ten years' (and counting) data into a single file, doesn't seem workable either.
Any bright ideas or products?
A: I've assembled your comments here:
*
*I'd like to do all this "w/o writing my own file I/O code"
*I need access from "Java Ruby MATLAB" and "FORTRAN routines"
When you add these up, you definitely don't want a new file format. Stick with the one you've got.
If we can get you to relax your first requirement - ie, if you'd be willing to write your own file I/O code, then there are some interesting options for you. I'd write C++ classes, and I'd use something like SWIG to make your new classes available to the multiple languages you need. (But I'm not sure you'd be able to use SWIG to give you access from Java, Ruby, MATLAB and FORTRAN. You might need something else. Not really sure how to do it, myself.)
You also said, "Actually, if I have to have files, I prefer text because then I can just go in and hand-edit when necessary."
My belief is that this is a misguided statement. If you'd be willing to make your own file I/O routines then there are very clever things you could do... And as an ultimate fallback, you could give yourself a tool that converts from the new file format to the same old text format you're used to... And another tool that converts back. I'll come back to this at the end of my post...
You said something that I want to address:
"leverage 40 yrs of DB optimization"
Databases are meant for relational data, not raster data. You will not leverage anyone's DB optimizations with this kind of data. You might be able to cram your data into a DB, but that's hardly the same thing.
Here's the most useful thing I can tell you, based on everything you've told us. You said this:
"I am more interested in optimizing my time than the CPU's, though exec speed is good!"
This is frankly going to require TOOLS. Stop thinking of it as a text file. Start thinking of the common tasks you do, and write small tools - in WHATEVER LANGUAGE(S) - to make those things TRIVIAL to do.
And if your tools turn out to have lousy performance? Guess what - it's because your flat text file is a cruddy format. But that's just my opinion. :)
A: I'd definitely change from text to binary but keep each day in a separate file still. You could name them in such a way that insertions in between don't cause any strangeness with indices, such as by including the date and possible time in the filename. You could also consider the file structure if you have several fields per location for example. Is it common to look for a small tile from a large number of timesteps? In that case you might want to store them as tiles containing data from several days. You didn't mention how the data is accessed which plays a big role in how to organise it efficiently.
A: Clarifications:
I'm surprised you added "database" as one of the tags, and considered it as an option. Why did you do this?
Essentially, you have a 2D, single component floating point image at every time step. Would you agree with this way of viewing your data?
You also mentioned the desire to insert a day between two existing ones - which seems to be a very odd thing to do. Why would you need to do that? Is there a new day between May 4 and May 5 that I don't know about?
Is "compression" one of the things you care about, or are you just sick of flat files?
Would a float or a double be sufficient to store your data, or do you feel you need more arbitrary precision?
Also, what programming language(s) do you want to access this data with?
A: Your answer on how to store the data depends entirely on what you're going to do with the data. For example, if you only ever need to retrieve by specifying the date or a date range, then storing in a database as a BLOB makes some sense. But if you need to find records that have certain values, you'll need to do something different.
Please describe how you need to be able to access the data.
A: Matt, thanks very much, and likewise longneck and jirv.
This post was partly an experiment, testing the quality of stackoverflow discourse. If you guys/gals/alien lifeforms are representative, I'm sold.
And on point, you've clarified my thinking considerably. Mind, I still might not necessarily implement your advice, but know that I will be thinking about it very seriously. >;-)
I may very well leave the file format the same, add to the extant C and/or Ruby routines to tack on the few low-level features I lack (e.g. inserting missing timesteps), and hang an HTTP front end on the whole thing so that the data can be consumed by whatever box needs it, in whatever language is currently hoopy. While it's mostly unchanging legacy software that construct these data, we're always coming up with new consumers for it, so the multi-language/multi-computer requirement (gee, did I forget that one?) applies to the reading side, not the writing side. That also obviates a whole slew of security issues.
Thanks again, folks.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Unpivot xml doc based on attributes I have a simple xml document that looks like the following snippet. I need to write a XSLT transform that basically 'unpivots' this document based on some of the attributes.
<?xml version="1.0" encoding="utf-8" ?>
<root xmlns:z="foo">
<z:row A="1" X="2" Y="n1" Z="500"/>
<z:row A="2" X="5" Y="n2" Z="1500"/>
</root>
This is what I expect the output to be -
<?xml version="1.0" encoding="utf-8" ?>
<root xmlns:z="foo">
<z:row A="1" X="2" />
<z:row A="1" Y="n1" />
<z:row A="1" Z="500"/>
<z:row A="2" X="5" />
<z:row A="2" Y="n2"/>
<z:row A="2" Z="1500"/>
</root>
Appreciate your help.
A: Here's the full stylesheet you need (since the namespaces are important):
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:z="foo">
<xsl:template match="root">
<root>
<xsl:apply-templates />
</root>
</xsl:template>
<xsl:template match="z:row">
<xsl:variable name="A" select="@A" />
<xsl:for-each select="@*[local-name() != 'A']">
<z:row A="{$A}">
<xsl:attribute name="{local-name()}">
<xsl:value-of select="." />
</xsl:attribute>
</z:row>
</xsl:for-each>
</xsl:template>
</xsl:stylesheet>
I much prefer using literal result elements (eg <z:row>) rather than <xsl:element> and attribute value templates (those {}s in attribute values) rather than <xsl:attribute> where possible as it makes the code shorter and makes it easier to see the structure of the result document that you're generating. Others prefer <xsl:element> and <xsl:attribute> because then everything is an XSLT instruction.
If you're using XSLT 2.0, there are a couple of syntactic niceties that help, namely the except operator in XPath and the ability to use a select attribute directly on <xsl:attribute>:
<xsl:stylesheet version="2.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
exclude-result-prefixes="xs"
xmlns:z="foo">
<xsl:template match="root">
<root>
<xsl:apply-templates />
</root>
</xsl:template>
<xsl:template match="z:row">
<xsl:variable name="A" as="xs:string" select="@A" />
<xsl:for-each select="@* except @A">
<z:row A="{$A}">
<xsl:attribute name="{local-name()}" select="." />
</z:row>
</xsl:for-each>
</xsl:template>
</xsl:stylesheet>
A: <xsl:template match="z:row">
<row A="{@A}" X="{@X}" />
<row A="{@A}" Y="{@Y}" />
<row A="{@A}" Z="{@Z}" />
</xsl:template>
Plus obvious boilerplate.
A: This is more complex but also more generic:
<xsl:template match="z:row">
<xsl:variable name="attr" select="@A"/>
<xsl:for-each select="@*[(local-name() != 'A')]">
<xsl:element name="z:row">
<xsl:copy-of select="$attr"/>
<xsl:attribute name="{name()}"><xsl:value-of select="."/></xsl:attribute>
</xsl:element>
</xsl:for-each>
</xsl:template>
A: Here is a bit of a brute force way:
<xsl:template match="z:row">
<xsl:element name="z:row">
<xsl:attribute name="A">
<xsl:value-of select="@A"/>
</xsl:attribute>
<xsl:attribute name="X">
<xsl:value-of select="@X"/>
</xsl:attribute>
</xsl:element>
<xsl:element name="z:row">
<xsl:attribute name="A">
<xsl:value-of select="@A"/>
</xsl:attribute>
<xsl:attribute name="Y">
<xsl:value-of select="@Y"/>
</xsl:attribute>
</xsl:element>
<xsl:element name="z:row">
<xsl:attribute name="A">
<xsl:value-of select="@A"/>
</xsl:attribute>
<xsl:attribute name="Z">
<xsl:value-of select="@Z"/>
</xsl:attribute>
</xsl:element>
</xsl:template>
<xsl:template match="@* | node()">
<xsl:copy>
<xsl:apply-templates select="@* | node()"/>
</xsl:copy>
</xsl:template>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Best way to handle null when writing equals operator
Possible Duplicate:
How do I check for nulls in an '==' operator overload without infinite recursion?
When I overload the == operator for objects I typically write something like this:
public static bool operator ==(MyObject uq1, MyObject uq2) {
if (((object)uq1 == null) || ((object)uq2 == null)) return false;
return uq1.Field1 == uq2.Field1 && uq1.Field2 == uq2.Field2;
}
If you don't down-cast to object the function recurses into itself but I have to wonder if there isn't a better way?
A: As Microsoft says,
A common error in overloads of operator == is to use (a == b), (a == null), or (b == null) to check for reference equality. This instead results in a call to the overloaded operator ==, causing an infinite loop. Use ReferenceEquals or cast the type to Object, to avoid the loop.
So ReferenceEquals(a, null) || ReferenceEquals(b, null) is one possibility, but casting to object is just as good (it is actually equivalent, I believe).
So yes, it seems there should be a better way, but the method you use is the one recommended.
However, as has been pointed out, you really SHOULD override Equals as well when overriding ==. With LINQ providers being written in different languages and doing expression resolution at runtime, who knows when you'll be bit by not doing it even if you own all the code yourself.
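Putting that together, one null-safe way to write the operator from the question (a sketch reusing the question's MyObject fields) is:
public static bool operator ==(MyObject a, MyObject b)
{
    // ReferenceEquals never dispatches to the overloaded operator,
    // so there is no risk of infinite recursion here.
    if (ReferenceEquals(a, b)) return true; // same instance, or both null
    if (ReferenceEquals(a, null) || ReferenceEquals(b, null)) return false;
    return a.Field1 == b.Field1 && a.Field2 == b.Field2;
}

public static bool operator !=(MyObject a, MyObject b)
{
    return !(a == b);
}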
A: ReferenceEquals(object obj1, object obj2)
A: @neouser99: That's the right solution, however the part that is missed is that when overriding the equality operator (the operator ==) you should also override the Equals function and simply make the operator call the function. Not all .NET languages support operator overloading, hence the reason for overriding the Equals function.
A: if ((object)uq1 == null)
return ((object)uq2 == null)
else if ((object)uq2 == null)
return false;
else
//return normal comparison
This compares them as equal when both are null.
A: Just use Resharper to create you Equals & GetHashCode methods. It creates the most comprehensive code for this purpose.
Update
I didn't post it on purpose - I prefer people to use Resharper's function instead of copy-pasting, because the code changes from class to class. As for developing C# without Resharper - I don't understand how you live, man.
Anyway, here is the code for a simple class (Generated by Resharper 3.0, the older version - I have 4.0 at work, I don't currently remember if it creates identical code)
public class Foo : IEquatable<Foo>
{
public static bool operator !=(Foo foo1, Foo foo2)
{
return !Equals(foo1, foo2);
}
public static bool operator ==(Foo foo1, Foo foo2)
{
return Equals(foo1, foo2);
}
public bool Equals(Foo foo)
{
if (foo == null) return false;
return y == foo.y && x == foo.x;
}
public override bool Equals(object obj)
{
if (ReferenceEquals(this, obj)) return true;
return Equals(obj as Foo);
}
public override int GetHashCode()
{
return y + 29*x;
}
private int y;
private int x;
}
A: Follow the DB treatment:
null == <anything> is always false
A: But why don't you create an object member function? It can certainly not be called on a Null reference, so you're sure the first argument is not Null.
Indeed, you lose the symmetry of a binary operator, but still...
(note on Purfideas' answer: Null might equal Null if needed as a sentinel value of an array)
Also think of the semantics of your == function: sometimes you really want to be able to choose whether you test for
*
*Identity (points to same object)
*Value Equality
*Equivalence ( e.g. 1.000001 is equivalent to .9999999 )
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Network Socket command Unix.....>>netstat -al | grep 8787 (will see packets on port 8787)
A: What is the nature of the question here? Are you trying to see packets on port 8787? Are you looking for services listening on port 8787? Most importantly, how is this a programming-related question?
A: If you want to see the actual packets then you need to use tcpdump.
Use the -s option to specify how much of the packet you want to see (0 means the whole packet) and the -X option to get a Hex and ASCII dump.
A: Use the command
ifconfig -a
to determine the interface you want to listen on. Then use
tcpdump -npi eth0 port 8787
to listen on the port where eth0 is the interface you want to listen on that you identified from the ifconfig command.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Same source code on two machines yield different executable behavior Here's the scenario:
A C# Windows Application project stored in SVN is used to create an executable. Normally, a build server handles the build process and creates builds at regular intervals which are used by testing. In this particular instance I was asked to modify a specific build and create the executable.
I'm not entirely sure if the build server modifies the project files, but I know it creates a tag in SVN of the source code it used to compile the executables. Using that tag I've checked out the code on a second machine, which is a development machine. I then compiled the source on the development machine.
When executed, the application that was compiled on the development machine does not function exactly like the one compiled by the build server. For example, on the testing machines a DateTime Parse exception is detected by the application. However, the build machine's executable does not throw any exceptions. If I run the executable on the development machine no exceptions are thrown.
So in summary, both machines are theoretically using the same source code and projects.
The development machine's executable only works on the dev machine. The Build machine's executable works on every machine, including the dev machine.
Are the machine's Regional Settings or Time Zone stored in the compiled executable? Any idea what might cause this behaviour or how to check the executables to find the possible differences and correct them?
Unfortunately, I cannot take a testing machine and attach a debugger to it. As soon as I can I will.
A: The app uses the Regional Settings of the machine it's running on, and it looks like it is your problem. You can force a thread to use a specific culture by setting System.Threading.Thread.CurrentThread.CurrentCulture and System.Threading.Thread.CurrentThread.CurrentUICulture to a specific value.
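For example, a minimal sketch that pins the culture at startup (en-US is just an illustrative choice):
using System;
using System.Globalization;
using System.Threading;

// Pin the culture so DateTime.Parse and number formatting behave
// identically regardless of the machine's Regional Settings.
Thread.CurrentThread.CurrentCulture = new CultureInfo("en-US");
Thread.CurrentThread.CurrentUICulture = new CultureInfo("en-US");

DateTime d = DateTime.Parse("09/17/2008"); // now parses the same on every machine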
A: It's possible that the two machines have different versions of an underlying dll that isn't part of your build process. I've seen this happen when distributing services across our internal server farm.
A: Can you run the program on the build machine under a debugger?
If so, then debug the problem - there's no need to guess.
Have the debugger on the dev machine catch the exception, set a break point at the same place on the build machine. See what's different between the two.
A: I've seen different "Regional and Language Options" on XP cause this sort of behavior. Do these match on both machines? Start | Settings | Control Panel | Regional and Language Options...
A: I have a couple questions - do both machines have identical regional settings and where are your error logs? I would hope ;-) you have exceptions being handled and written to disk, event logs .. something to help with problems like this.
Where does the date come from that is being parsed? If it is in your db maybe you have bad data too.
A: I had a similar problem once (except in C++) When I compared the sizes of the compiled executables, they were way off. Unfortunately, after days of searching, the best solution I found was to uninstall VS05 and re-install it.
A: Why are you using a build server anyways, for C# code, if I may ask?
The build times for C# when I was using it were hardly noticeable (<2s). Is the app really that big?
A: The build system probably makes a release version, while the manual build on the dev PC makes a debug version. The debug version has more error checking in it. See if you can manually build a release version and see if there are still differences.
A: The same source code rarely if ever builds the same program on different computers. You should always assume the programs are different; never expect them to be the same. In an environment like Linux, with a good package manager and periodic or random updates, never expect the same source code to build the same program on the same computer either. The higher the language, the worse it gets. Building a program for the debugger is drastically different from building for release. The debugger version, even without the debugger, hides bugs that you won't find until you go to the release build. You basically get to debug the program twice if you rely too much on a debugger environment.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do I get a warning before killing a temporary buffer in Emacs? More than once I've lost work by accidentally killing a temporary buffer in Emacs. Can I set up Emacs to give me a warning when I kill a buffer not associated with a file?
A: (defun maybe-kill-buffer ()
(if (and (not buffer-file-name)
(buffer-modified-p))
;; buffer is not visiting a file
(y-or-n-p (format "Buffer %s has been edited. Kill it anyway? "
(buffer-name)))
t))
(add-to-list 'kill-buffer-query-functions 'maybe-kill-buffer)
A: Make a function that will ask you whether you're sure when the buffer has been edited and is not associated with a file. Then add that function to the list kill-buffer-query-functions.
Looking at the documentation for Buffer File Name you understand:
*
*a buffer is not visiting a file if and only if the variable buffer-file-name is nil
Use that insight to write the function:
(defun maybe-kill-buffer ()
(if (and (not buffer-file-name)
(buffer-modified-p))
;; buffer is not visiting a file
(y-or-n-p "This buffer is not visiting a file but has been edited. Kill it anyway? ")
t))
And then add the function to the hook like so:
(add-to-list 'kill-buffer-query-functions 'maybe-kill-buffer)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Best way to update multi-gigabyte program (DVD fulfillment? Updater software?) Two years ago, we shipped a multi-gigabyte Windows application, with lots of video files. Now we're looking to release a significant update, with approximately 1 gigabyte of new and changed data.
We're currently looking at DVD fulfillment houses (like these folks, for example), which claim to be able to ship DVDs to our customers for $5 and up. Does anyone have any experience with these companies?
We've also looked at a bunch of network-based "updater" software. Unfortunately, most of these tools are intended for much smaller programs. Are there any libraries or products which handle gigabyte-sized updates well?
Thank you for your advice!
A: BITS is a library from Microsoft for downloading files piece by piece using unused bandwidth. You can basically have your clients trickle-download the new video files. The problem, however, is that you'll have to update your program to utilize BITS first.
A: Depending on who the end user is you have a few options:
*
*Shipping DVD's
This option tends to be rather expensive, and may not be the best way - what if you are shipping it to someone who no longer has the software installed?
*HTTP hosting (using Akamai, or any other CDN)
This works rather well for other companies, for example Apple and I believe Microsoft as well.
*Bittorrent
It is not just used for illegal content; it will allow you to offload some of the workload of sending the file, and at the same time it is a fast protocol. If you make sure that the machine seeding has the correct file, the bittorrent protocol will make sure the end user gets the same file with the exact same hash.
A: You can use the rsync algorithm: http://samba.anu.edu.au/rsync/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Avoiding double-thunking with C++/CLI properties I've read (in Nish Sivakumar's book C++/CLI In Action among other places) that you should use the __clrcall decorator on function calls to avoid double-thunking, in cases where you know that the method will never be called from unmanaged code. Nish also says that if the method signature contains any CLR types, then the JIT compiler will automatically add the __clrcall. What is not clear to me is if I need to include __clrcall when I create C++/CLI properties. In one sense, properties are only accessible from .NET languages, on the other hand the C++/CLI compiler (I think) just generates methods (e.g. ***_get() ) that are callable from both managed and unmanaged code. So do I need to use the __clrcall modifier on my properties, and if so, where does it go? On the get/set functions themselves?
A: @Mike B - Thanks for the tip on ildasm - I didn't know about that tool.
It appears that I misread/misunderstood Nish - the __clrcall modifier and the double-thunking problem it eliminates only apply to methods of NATIVE classes. All methods of Managed classes are __clrcall by default - which seems obvious in retrospect.
Evidently Marcus Heege's book Expert C++/CLI is available as a free download, and it has a nice table on page 215 that summarizes the calling conventions.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Oracle connection problem on Mac OSX: "Status : Failure -Test failed: Io exception: The Network Adapter could not establish the connection" I'm trying this with Oracle SQL Developer and am on an Intel MacBook Pro. But I believe this same error happens with other clients. I can ping the server hosting the database fine so it appears not to be an actual network problem.
Also, I believe I'm filling in the connection info correctly. It's something like this:
host = foo1.com
port = 1530
server = DEDICATED
service_name = FOO
type = session
method = basic
A: That's the message you get when you don't have the right connection parameters. The SID, in particular, tends to trip up newcomers.
A: If you want to connect to database on other host then you need to know
*
*hostname
*port number (by default 1521)
*SID name
if you get the connection error that you mentioned in your question then you have not specified the hostname or port number correctly. Try
telnet hostname portnumber
from Terminal to verify that you can connect to portnumber (by default 1521) - if not, then the port number is probably incorrect.
A: My problem turned out to be some kind of ACL problem. I needed to SSH tunnel in through a "blessed host". I put the following in my .ssh/config
Host=blessedhost
HostName=blessedhost.whatever.com
User=alice
Compression=yes
Protocol=2
LocalForward=2202 oraclemachine.whatever.com:1521
Host=foo
HostName=localhost
Port=2202
User=alice
Compression=yes
Protocol=2
(I don't think the second block is really necessary.) Then I change the host and port in the oracle connection info to localhost:2202.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How do you manage "pick lists" in a database I have an application with multiple "pick list" entities, such as those used to populate choices of dropdown selection boxes. These entities need to be stored in the database. How does one persist these entities in the database?
Should I create a new table for each pick list? Is there a better solution?
A: In the past I've created a table that has the Name of the list and the acceptable values, then queried it to display the list. I also include an underlying value, so you can return a display value for the list, and a bound value that may be much uglier (a small int for normalized data, for instance)
CREATE TABLE PickList(
ListName varchar(15),
Value varchar(15),
Display varchar(15),
Primary Key (ListName, Display)
)
You could also add a sortOrder field if you want to manually define the order to display them in.
A: It depends on various things:
*
*if they are immutable and non relational (think "names of US States") an argument could be made that they should not be in the database at all: after all they are simply formatting of something simpler (like the two character code assigned). This has the added advantage that you don't need a round trip to the db to fetch something that never changes in order to populate the combo box.
You can then use an Enum in code and a constraint in the DB. In case of localized display, so you need a different formatting for each culture, then you can use XML files or other resources to store the literals.
*if they are relational (think "states - capitals") I am not very convinced either way... but lately I've been using XML files, database constraints and javascript to populate. It works quite well and it's easy on the DB.
*if they are not read-only but rarely change (i.e. typically cannot be changed by the end user but only by some editor or daily batch), then I would still consider the opportunity of not storing them in the DB... it would depend on the particular case.
*in other cases, storing in the DB is the way (think of the tags of StackOverflow... they are "lookup" but can also be changed by the end user) -- possibly with some caching if needed. It requires some careful locking, but it would work well enough.
A: Well, you could do something like this:
PickListContent
IdList IdPick Text
1 1 Apples
1 2 Oranges
1 3 Pears
2 1 Dogs
2 2 Cats
and optionally..
PickList
Id Description
1 Fruit
2 Pets
A: I've found that creating individual tables is the best idea.
I've been down the road of trying to create one master table of all pick lists and then filtering out based on type. While it works, it has invariably created headaches down the line. For example, you may find that something you presumed to be a simple pick list is not so simple and requires an extra field; do you now split this data into an additional table or extend your master list?
From a database perspective, having individual tables makes it much easier to manage your relational integrity and it makes it easier to interpret the data in the database when you're not using the application
A: We have followed the pattern of a new table for each pick list. For example:
Table FRUIT has columns ID, NAME, and DESCRIPTION.
Values might include:
15000, Apple, Red fruit
15001, Banana, yellow and yummy
...
If you have a need to reference FRUIT in another table, you would call the column FRUIT_ID and reference the ID value of the row in the FRUIT table.
A: Create one table for lists and one table for list_options.
# Put in the name of the list
insert into lists (id, name) values (1, "Country in North America");
# Put in the values of the list
insert into list_options (id, list_id, value_text) values
(1, 1, "Canada"),
(2, 1, "United States of America"),
(3, 1, "Mexico");
A: Two tables. If you try to cram everything into one table then you break normalization (if you care about that). Here are examples:
LIST
---------------
LIST_ID (PK)
NAME
DESCR
LIST_OPTION
----------------------------
LIST_OPTION_ID (PK)
LIST_ID (FK)
OPTION_NAME
OPTION_VALUE
MANUAL_SORT
The list table simply describes a pick list. The list_ option table describes each option in a given list. So your queries will always start with knowing which pick list you'd like to populate (either by name or ID) which you join to the list_ option table to pull all the options. The manual_sort column is there just in case you want to enforce a particular order other than by name or value. (BTW, whenever I try to post the words "list" and "option" connected with an underscore, the preview window goes a little wacky. That's why I put a space there.)
The query would look something like:
select
b.option_name,
b.option_value
from
list a,
list_option b
where
a.name="States"
and
a.list_id = b.list_id
order by
b.manual_sort asc
You'll also want to create an index on list.name if you think you'll ever use it in a where clause. The pk and fk columns will typically automatically be indexed.
And please don't create a new table for each pick list unless you're putting in "relationally relevant" data that will be used elsewhere by the app. You'd be circumventing exactly the relational functionality that a database provides. You'd be better off statically defining pick lists as constants somewhere in a base class or a properties file (your choice on how to model the name-value pair).
A: To answer the second question first: yes, I would create a separate table for each pick list in most cases. Especially if they are for completely different types of values (e.g. states and cities). The general table format I use is as follows:
id - identity or UUID field (I actually call the field xxx_id where xxx is the name of the table).
name - display name of the item
display_order - small int of order to display. Default this value to something greater than 1
If you want you could add a separate 'value' field but I just usually use the id field as the select box value.
I generally use a select that orders first by display order, then by name, so you can order something alphabetically while still adding your own exceptions. For example, let's say you have a list of countries that you want in alpha order but have the US first and Canada second you could say "SELECT id, name FROM theTable ORDER BY display_order, name" and set the display_order value for the US as 1, Canada as 2 and all other countries as 9.
You can get fancier, such as having an 'active' flag so you can activate or deactivate options, or setting a 'x_type' field so you can group options, description column for use in tooltips, etc. But the basic table works well for most circumstances.
A: Depending on your needs, you can just have an options table that has a list identifier and a list value as the primary key.
select optionDesc from Options where 'MyList' = optionList
You can then extend it with an order column, etc. If you have an ID field, that is how you can reference your answers back... of if it is often changing, you can just copy the answer value to the answer table.
A: If you don't mind using strings for the actual values, you can simply give each list a different list_id in value and populate a single table with :
item_id: int
list_id: int
text: varchar(50)
Seems easiest unless you need multiple things per list item
A: We actually created entities to handle simple pick lists. We created a Lookup table, that holds all the available pick lists, and a LookupValue table that contains all the name/value records for the Lookup.
Works great for us when we need it to be simple.
A: I've done this in two different ways:
1) unique tables per list
2) a master table for the list, with views to give specific ones
I tend to prefer the initial option as it makes updating lists easier (at least in my opinion).
A: Try turning the question around. Why do you need to pull it from the database? Isn't the data part of your model that you really want to persist in the database? You could use an OR mapper like linq2sql or nhibernate (assuming you're in the .net world) or, depending on the data, you could store each list manually in its own table - there are situations where it would make good sense to put it all in the same table, but consider this only if you feel it makes really good sense. Normally putting different data in different tables makes it a lot easier to (later) understand what is going on.
A: There are several approaches here.
1) Create one table per pick list. Each of the tables would have the ID and Name columns; the value that was picked by the user would be stored based on the ID of the item that was selected.
2) Create a single table with all pick lists. Columns: ID; list ID (or list type); Name. When you need to populate a list, do a query "select all items where list ID = ...". Advantage of this approach: really easy to add pick lists; disadvantage: a little more difficult to write group-by style queries (for example, give me the number of records that picked value X).
I personally prefer option 1, it seems "cleaner" to me.
A: You can use either a separate table for each (my preferred), or a common picklist table that has a type column you can use to filter on from your application. I'm not sure that one has a great benefit over the other generally speaking.
If you have more than 25 or so, organizationally it might be easier to use the single table solution so you don't have several picklist tables cluttering up your database.
Performance might be a hair better using separate tables for each if your lists are very long, but this is probably negligible provided your indexes and such are set up properly.
I like using separate tables so that if something changes in a picklist - it needs and additional attribute for instance - you can change just that picklist table with little effect on the rest of your schema. In the single table solution, you will either have to denormalize your picklist data, pull that picklist out into a separate table, etc. Constraints are also easier to enforce in the separate table solution.
A: This has served us well:
SQL> desc aux_values;
Name Type
----------------------------------------- ------------
VARIABLE_ID VARCHAR2(20)
VALUE_SEQ NUMBER
DESCRIPTION VARCHAR2(80)
INTEGER_VALUE NUMBER
CHAR_VALUE VARCHAR2(40)
FLOAT_VALUE FLOAT(126)
ACTIVE_FLAG VARCHAR2(1)
The "Variable ID" indicates the kind of data, like "Customer Status" or "Defect Code" or whatever you need. Then you have several entries, each one with the appropriate data type column filled in. So for a status, you'd have several entries with the "CHAR_VALUE" filled in.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/86992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Screen capture doesn't work on MFC application in Vista We've got some in-house applications built in MFC, with OpenGL drawing routines. They all use the same code to draw on the screen and either print the screen or save it to a JPEG file. Everything's been working fine in Windows XP, and I need to find a way to make them work on Vista.
In three of our applications, everything works. In the remaining one, I can get the window border, title bar, menus, and task bar, but the interior never shows up. As I said, these applications use the exact same code to write to the screen and capture the window image, and the only difference I see that looks like it might be relevant is that the problem application uses the MFC multiple document interface, while the ones that work use the single document interface.
Either the answer isn't on the net, or I'm worse at Googling than I thought. I asked on the MSDN forums, and the only practical suggestion I got was to use GDI+ rather than GDI, and that did nothing different. I have tried different things with every part of the code that captures and prints or save, given a pointer to the window, so apparently it's a matter of the window itself. I haven't rebuilt the offending application using SDI yet, and I really don't have any other ideas.
Has anybody seen anything like this?
What I've got is four applications. They use a lot of common code, and share the actual .h and .cpp files, so I know the drawing and screen capture code is identical.
There is a WindowtoDIB() routine that takes a *pWnd, and a source rectangle and destination size. It looks like very slightly adapted Microsoft code, and I've found other functions in this file on the Microsoft website. Of my four applications, three handle this just fine, but one doesn't. The most obvious difference is that the problem one is MDI.
It looks to me like the *pWnd is the problem. I'm not an MFC guru by a long shot, and it seems to me that the problem may be that we've got one window setup in the SDIs, and more than one in the MDI. I may be passing the wrong *pWnd to the function.
In the meantime, it has started working properly on the 64-bit Vista test machine, although it still doesn't work on the 32-bit Vista machine. I have no idea why. I haven't changed anything since the last tests, and I didn't think anybody else had. (On the 32-bit version, the Print Screen key works as expected, but it does not save the screen as a JPEG.)
A: Your question title mentions screen capture but your actual question doesn't. Please elaborate more clearly. Is the problem that you can do screen capture of three of your applications, but not the fourth one? You can use different screen capture software that can capture OpenGL/DirectX windows. Those surfaces are handled directly by the Window Manager and won't show up with a simple 'PrtScn'.
Switching to GDI+ won't solve it, nor will switching to SDI.
A: If it's the content of the CView that you want, then yes, that should be the right one. If it's the content of the whole screen (at least the content, without the toolbar(s) and status bar), then you should pass it the CMainFrame (that's the default name, which may have been changed; the one that is derived from CMDIFrameWnd).
Can you post the code of WindowToDIB()? I've just tried it and It Works For Me (TM), but without OpenGL code in the view. Try passing the following windows to your WindowToDIB() function:
CMainFrame* mainfrm = static_cast<CMainFrame*>(::AfxGetMainWnd());
- mainfrm
- mainfrm->MDIGetActive()
- mainfrm->MDIGetActive()->GetActiveView()
and see what you get.
A: The contents of each window are DirectX surfaces and are only assembled by the window manager in the graphics card. You'd not be able to capture this unless you switch off the new interface (DWM) or code specifically for screen capture from the DWM.
Wikipedia has a good description of the Desktop Window Manager (DWM)
A: Sorry, I still don't understand. You're trying to get the Print Screen key to work on all four applications? Or you're trying to get the WindowtoDIB() function to work, which takes a 'screenshot' (from within your own application) of the application itself, so that it can be saved as an image file?
Also, what do you mean with 'the Print Screen key works as expected, but it does not save the screen as a JPEG'? Print Screen only copies to the clipboard; what happens when you paste in Paint?
If your WindowtoDIB() function only 'captures' the window you pass to it, then yes, your MDI child windows are not going to show up.
A: We eventually solved this by creating a different OpenGL context, and drawing everything to that. We gave up on the screen capture.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: What are the best practices for using HTML with XML based languages like SVG? From browsing on this site and elsewhere, I've learned that serving websites as XHTML at present is considered harmful.
Delivering XHTML served as application/xhtml+xml isn't supported by the majority of browsers at present; delivering XHTML as text/html is at best a placebo, and at worst a recipe for breaking sites, usually when you least need it to happen.
So we end up back at html 4.01. If I instead serve my pages as html 4.01, is it possible to use SVG or any other XML-based language on the page?
If so, how?
A: In HTML you won't be able to insert SVG directly.
You can embed SVG files with <object>/<embed> and in cutting-edge browsers (Opera, Safari) also <img> and CSS background-image.
You can put SVG in data: URI to avoid using external files.
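For instance, the <object> route in plain HTML 4.01 looks like this (file names are placeholders; the nested <img> is a fallback for browsers without SVG support):
<object data="figure.svg" type="image/svg+xml" width="400" height="300">
  <img src="figure.png" alt="figure" width="400" height="300">
</object>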
Simple mathematical expressions can be written with the help of Unicode and basic HTML/CSS (Opera 9.5 supports a large chunk of MathML via CSS). For anything more complex you'll need to use images, like Wikipedia does.
HTML misinterprets namespace prefixes, so you won't be able to (properly) use other XML markup with HTML DOM. HTML5 has data-* attributes for application-specific markup additions. For metadata consider Microformats.
However if you want to embed XML only for non-browsers (robots), then you could use HTML-compatible XHTML subset and HTTP content negotiation to send proper XML with proper type to clients that understand it (if you thoroughly test page in both XML and HTML modes, then it won't be harmful).
A: You may (read: I haven't tried this myself) be able to use an embedded object and type it accordingly.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Ruby code for quick-and-dirty XML serialization? Given a moderately complex XML structure (dozens of elements, hundreds of attributes) with no XSD and a desire to create an object model, what's an elegant way to avoid writing boilerplate from_xml() and to_xml() methods?
For instance, given:
<Foo bar="1"><Bat baz="blah"/></Foo>
How do I avoid writing endless sequences of:
class Foo
attr_reader :bar, :bat
def from_xml(el)
@bar = el.attributes['bar']
@bat = Bat.new()
@bat.from_xml(XPath.first(el, "./bat"))
end
etc...
I don't mind creating the object structure explicitly; it's the serialization that I'm just sure can be taken care of with some higher-level programming...
I am not trying to save a line or two per class (by moving from_xml behavior into initializer or class method, etc.). I am looking for the "meta" solution that duplicates my mental process:
"I know that every element is going to become a class name. I know that every XML attribute is going to be a field name. I know that the code to assign is just @#{attribute_name} = el.[#{attribute_name}] and then recurse into sub-elements. And reverse on to_xml."
I agree with suggestion that a "builder" class plus XmlSimple seems the right path. XML -> Hash -> ? -> Object model (and Profit!)
Update 2008-09-18 AM: Excellent suggestions from @Roman, @fatgeekuk, and @ScottKoon seem to have broken the problem open. I downloaded HPricot source to see how it solved the problem; key methods are clearly instance_variable_set and class_eval. irb work is very encouraging; am now moving towards implementation... Very excited
A: You could use Builder instead of creating your to_xml method, and you could use XMLSimple to pull your xml file into a Hash instead of using the from _xml method. Unfortunately, I'm not sure you'll really gain all that much from using these techniques.
A: I suggest using XmlSimple for a start. After you run XmlSimple#xml_in on your input file, you get a hash. Then you can recurse into it (obj.instance_variables) and turn all internal hashes (element.is_a?(Hash)) into objects of the same name, for example:
obj.instance_variables.select { |v| obj.send(v.gsub(/^@/, '').to_sym).is_a?(Hash) }.each do |h|
  klass = eval(h.sub(/^@(.)/) { $1.upcase })
Perhaps a cleaner way can be found to do this.
Afterwards, if you want to make an xml from this new object, you'll probably need to change the XmlSimple#xml_out to accept another option, which distinguishes your object from the usual hash it's used to receive as an argument, and then you'll have to write your version of XmlSimple#value_to_xml method, so it'll call the accessor method instead of trying to access a hash structure. Another option, is having all your classes support the [] operator by returning the wanted instance variable.
A: Could you define a method missing that allows you to do:
@bar = el.bar? That would get rid of some boilerplate. If Bat is always going to be defined that way, you could push the XPath into the initialize method,
class Bar
def initialize(el)
self.from_xml(XPath.first(el, "./bat"))
end
end
Hpricot or REXML might help too.
A: Could you try parsing the XML with hpricot and using the output to build a plain old Ruby object? [DISCLAIMER] I haven't tried this.
A: I would override attr_accessor to build your to_xml and from_xml for you.
Something like this (note, this is not fully functional, only an outline)
class XmlFoo
def self.attr_accessor attributes = {}
# need to add code here to maintain a list of the fields for the subclass, to be used in to_xml and from_xml
attributes.each do |name, value|
super name
end
end
def to_xml options={}
# need to use the hash of elements, and determine how to handle them by whether they are .kind_of?(XmlFoo)
end
def from_xml el
end
end
you could then use it like....
class Second < XmlFoo
attr_accessor :first_attr => String, :second_attr => Float
end
class First < XmlFoo
attr_accessor :normal_attribute => String, :sub_element => Second
end
Hope this gives a general idea.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Query Web Service for list of Messages? Is there a straightforward way to query a web service to see which messages it supports? The C# .NET application I'm working on needs to be able to handle an older version of the web service, which does not implement the message I'm trying to send. The web service does not expose a version number, so Plan B is to see if the message is defined.
I'm assuming I can just make an HTTP request for the WSDL and parse it, but before I go down that path, I want to make sure there's not a simpler approach.
Update:
I've decided to get the WSDL and get messages directly. Here's the rough draft for getting all the messages:
HttpWebRequest webRequest = (HttpWebRequest) WebRequest.Create( "http://your/web/service/here.asmx?WSDL" );
webRequest.PreAuthenticate = // details elided
webRequest.Credentials = // details elided
webRequest.Timeout = // details elided
HttpWebResponse webResponse = (HttpWebResponse) webRequest.GetResponse();
XPathDocument xpathDocument = new XPathDocument( webResponse.GetResponseStream() );
XPathNavigator xpathNavigator = xpathDocument.CreateNavigator();
XmlNamespaceManager xmlNamespaceManager = new XmlNamespaceManager( new NameTable() );
xmlNamespaceManager.AddNamespace( "wsdl", "http://schemas.xmlsoap.org/wsdl/" );
foreach( XPathNavigator node in xpathNavigator.Select( "//wsdl:message/@name", xmlNamespaceManager ) )
{
string messageName = node.Value;
}
A: Parsing the WSDL is probably the simplest way to do this. Using WCF, it's also possible to download the WSDL at runtime, essentially run svcutil on it through code, and end up with a dynamically generated proxy that you can check the structure of. See https://learn.microsoft.com/en-us/archive/blogs/vipulmodi/dynamic-programming-with-wcf for an example of a runtime-generated proxy.
A: I'm pretty sure WSDL is the way to do this.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Where to find Java 6 JSSE/JCE Source Code? Where can I download the JSSE and JCE source code for the latest release of Java? The source build available at https://jdk6.dev.java.net/ does not include the javax.crypto (JCE) packages nor the com.sun.net.ssl.internal (JSSE) packages.
Not being able to debug these classes makes solving SSL issues incredibly difficult.
A: I downloaded the src jar from: http://download.java.net/jdk6/source/
NOTE:
This is a self extracting jar, so just linking to it won't work.
... and jar -xvf <filename> won't work either.
You need to: java -jar <filename>
cheers,
jer
A: if you just want read the source code:
http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/sun/security/ssl/SSLSocketImpl.java
A: there: openjdk javax.net in the security group
src/share/classes/javax/net
src/share/classes/com/sun/net/ssl
src/share/classes/sun/security/ssl
src/share/classes/sun/net/www/protocol/https
also on this page:
src/share/classes/javax/crypto
src/share/classes/com/sun/crypto/provider
src/share/classes/sun/security/pkcs11
src/share/classes/sun/security/mscapi
These directories contain the core cryptography framework and three providers (SunJCE, SunPKCS11, SunMSCAPI). SunJCE contains Java implementations of many popular algorithms, and the latter two libraries allow calls made through the standard Java cryptography APIs to be routed into their respective native libraries.
A: While this doesn't directly answer your question, using the javax.net.debug system property has helped me sort through SSL issues. -Djavax.net.debug=all pretty much gives you everything in gory detail. Documentation on this is at JSSE Debugging Utilities.
One note: I've seen that on Java 1.4 and maybe 1.5 levels, the output with option "all" is not as complete as it is using the same option on the Java 1.6 level. E.g., 1.6 shows the actual contents of network (socket) reads and writes. Maybe some levels of 1.4 and 1.5 do as well, but 1.6 was more consistent.
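For reference, the property can be set on the command line or from code before any SSL classes are initialized ("all" is the most verbose setting):
java -Djavax.net.debug=all MyApp
or:
System.setProperty("javax.net.debug", "all"); // must run before the first SSL connection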
A: For some unknown reason Oracle hasn't released a source.jar and javadoc jar for the JSSE.
I found only one place where you can find them http://jdk7src.sourceforge.net/ but it's outdated and unofficial.
The only one way is to clone OpenJDK repository
A: Put Jad on your system path. Install JadClipse plugin for Eclipse. Use the force, read the decompiled source. :-)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
}
|
Q: Color reduction (in Java) I would like to find a way to take JPEG (or GIF/PNG) images and reduce the number of colors to e.g. 20. Could someone recommend some library or other reference? Source code in other languages is also welcome.
A: Take a look at the Java Advanced Imaging API. There are a number of algorithms implemented in that API for doing color reduction.
A: JAI (Java Advanced Imaging API) would do the work but it has some drawbacks.
The API is far from being easy to use, especially if you care about memory footprint...
IMHO Java is not the best platform for imaging tasks.
You might try ImageMagick, a wonderful command line tool, used by popular sites such as Flickr. You can integrate ImageMagick in your java application using the command line (Runtime.exec()) or Jmagick which a java bridge to ImageMagick
A: This seems like a simple implementation in java, based on ImageMagick:
http://gurge.com/amd/java/quantize/index.html
A: Look for algorithms on color quantization, especially median cut. You'll find many examples with those keywords. Libraries to do this for you include ImageMagick which has bindings for many languages. JMagick is the Java flavor.
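If an approximate result is acceptable without implementing median cut yourself, the JDK's built-in indexed image type gives a quick-and-dirty reduction (a sketch; file names are placeholders, and TYPE_BYTE_INDEXED uses a default palette - pass your own 20-entry IndexColorModel to the BufferedImage constructor for an exact color count):
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class ReduceColors {
    public static void main(String[] args) throws Exception {
        BufferedImage src = ImageIO.read(new File(args[0]));
        BufferedImage dst = new BufferedImage(
                src.getWidth(), src.getHeight(), BufferedImage.TYPE_BYTE_INDEXED);
        Graphics2D g = dst.createGraphics();
        g.drawImage(src, 0, 0, null); // pixels are remapped to the indexed palette here
        g.dispose();
        ImageIO.write(dst, "png", new File(args[1]));
    }
}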
A: Take a look at the image filters at http://www.jhlabs.com/ip/filters/index.html. The QuantizeFilter seems to do what you want.
A: The JAI API is the way to go. With today's JVM, performance is very close to assembler code. I know - I've done it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Attempting to update a user's "connect to:" home directory path in AD using C# I have a small application I am working on that at one point needs to update a user's home directory path in AD under the profile tab where it allows you to map a drive letter to a particular path. The code I have put together so far sets the Home Folder Local path portion OK, but I'm trying to figure out the name for the "connect" portion, as well as how to select the drive letter. Go easy on me, I'm new to C#. Thanks!!
Here's my code that updates the Local path section.
DirectoryEntry deUser = new
DirectoryEntry(findMeinAD(tbPNUID.Text));
deUser.InvokeSet("HomeDirectory", tbPFolderVerification.Text);
deUser.CommitChanges();
Where findMeinAD is a method that looks up a user's info in AD and tbPFolderVerification.Text is a text box in the form that contains the path I'd like to set a particular drive to map to.
A: You may need to set the HomeDrive property as well:
DirectoryEntry deUser = new DirectoryEntry(findMeinAD(tbPNUID.Text));
deUser.InvokeSet("HomeDirectory", tbPFolderVerification.Text);
deUser.InvokeSet("HomeDrive", "Z:");
deUser.CommitChanges();
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: In Rails, after using find with :select, my objects don't save Running something like:
@users = User.find(:first, :select => "name, lastname, salary")
for @user in @users do
@user.salary = 100000
@user.save
end
After this, looking in the MySQL table, the users aren't updated.
A: John Ruby is correct that you need to include the id of the object in the select.
Also, you rarely need to use the :select option. You can with joins, or if there is a real performance issue with selecting the entire row, but this doesn't come up much.
And you really don't need to be setting all those variables to instance vars (@). @user in the loop can be a local var. If you do need all the users as @users you can do:
@users = User.find(:all, :select => "id, name, lastname, salary")
@users.each do |user|
user.salary = 10000
user.save
end
You might also want to look at ActiveRecord's update_all for simple changes. But note this doesn't call any save callbacks.
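For instance, the whole loop above collapses into a single UPDATE statement:
User.update_all("salary = 100000")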
A: ActiveRecord doesn't know the object's id, so it can't save the data.
So include the id field in :select, like the example below:
@users = User.find(:first, :select => "id, name, lastname, salary")
A: Try using .update or .update_attributes instead: those are designed for edits (as opposed to .save, which is for creating new rows)
A: Using .save! will also throw an error if it fails to save.
A: I realize this is a pretty old question at this point, but the correct answer, I believe, is that any objects returned by :select are read-only.
See the official Rails guide to Active Record querying, section 3.2:
http://guides.rubyonrails.org/active_record_querying.html
Curiously, I don't see this documented in the API.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87090",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: STL Alternative I really hate using STL containers because they make the debug version of my code run really slowly. What do other people use instead of STL that has reasonable performance for debug builds?
I'm a game programmer and this has been a problem on many of the projects I've worked on. It's pretty hard to get 60 fps when you use STL container for everything.
I use MSVC for most of my work.
A: For big, performance critical applications, building your own containers specifically tailored to your needs may be worth the time investment.
I´m talking about real game development here.
A: I'll bet your STL uses a checked implementation for debug. This is probably a good thing, as it will catch iterator overruns and such. If it's that much of a problem for you, there may be a compiler switch to turn it off. Check your docs.
A: If you're using Visual C++, then you should have a look at this:
http://channel9.msdn.com/shows/Going+Deep/STL-Iterator-Debugging-and-Secure-SCL/
and the links from that page, which cover the various costs and options of all the debug-mode checking which the MS/Dinkware STL does.
If you're going to ask such a platform dependent question, it would be a good idea to mention your platform, too...
A: Check out EASTL.
A: MSVC uses a very heavyweight implementation of checked iterators in debug builds, which others have already discussed, so I won't repeat it (but start there)
One other thing that might be of interest to you is that your "debug build" and "release build" probably involves changing (at least) 4 settings which are only loosely related.
*
*Generating a .pdb file (cl /Zi and link /DEBUG), which allows symbolic debugging. You may want to add /OPT:ref to the linker options; the linker drops unreferenced functions when not making a .pdb file, but with /DEBUG mode it keeps them all (since the debug symbols reference them) unless you add this explicitly.
*Using a debug version of the C runtime library (probably MSVCR*D.dll, but it depends on what runtime you're using). This boils down to /MT or /MTd (or something else if not using the dll runtime)
*Turning off the compiler optimizations (/Od)
*setting the preprocessor #defines DEBUG or NDEBUG
These can be switched independently. The first costs nothing in runtime performance, though it adds size. The second makes a number of functions more expensive, but has a huge impact on malloc and free; the debug runtime versions are careful to "poison" the memory they touch with values to make uninitialized data bugs clear. I believe with the MSVCP* STL implementations it also eliminates all the allocation pooling that is usually done, so that leaks show exactly the block you'd think and not some larger chunk of memory that it's been sub-allocating; that means it makes more calls to malloc on top of them being much slower. The third; well, that one does lots of things (this question has some good discussion of the subject). Unfortunately, it's needed if you want single-stepping to work smoothly. The fourth affects lots of libraries in various ways, but most notable it compiles in or eliminates assert() and friends.
So you might consider making a build with some lesser combination of these selections. I make a lot of use of builds that have symbols (/Zi and link /DEBUG) and asserts (/DDEBUG), but are still optimized (/O1 or /O2 or whatever flags you use), with stack frame pointers kept for clear backtraces (/Oy-) and using the normal runtime library (/MT). This performs close to my release build and is semi-debuggable (backtraces are fine, single-stepping is a bit wacky at the source level; assembly level works fine of course). You can have however many configurations you want; just clone your release one and turn on whatever parts of the debugging seem useful.
A: Sorry, I can't leave a comment, so here's an answer: EASTL is now available at github: https://github.com/paulhodge/EASTL
A: EASTL is a possibility, but still not perfect. Paul Pedriana of Electronic Arts did an investigation of various STL implementations with respect to performance in game applications the summary of which is found here:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2271.html
Some of these adjustments are being reviewed for inclusion in the C++ standard.
And note, even EASTL doesn't optimize for the non-optimized case. I had an excel file w/ some timing a while back but I think I've lost it, but for access it was something like:
           debug   release
STL          100        10
EASTL         10         3
array[i]       3         1
The most success I've had was rolling my own containers. You can get those down to near array[x] performance.
A: My experience is that well designed STL code runs slowly in debug builds because the optimizer is turned off. STL containers emit a lot of calls to constructors and operator= which (if they are light weight) gets inlined/removed in release builds.
Also, Visual C++ 2005 and up has checking enabled for STL in both release and debug builds. It is a huge performance hog for STL-heavy software. It can be disabled by defining _SECURE_SCL=0 for all your compilation units. Please note that having different _SECURE_SCL status in different compilation units will almost certainly lead to disaster.
You could create a third build configuration with checking turned off and use that to debug with performance. I recommend you to keep a debug configuration with checking on though, since it's very helpful to catch erroneous array indices and stuff like that.
A: If you're running Visual Studio, you may want to consider the following:
#define _SECURE_SCL 0
#define _HAS_ITERATOR_DEBUGGING 0
That's just for iterators. What type of STL operations are you performing? You may want to look at optimizing your memory operations; i.e., using resize() to insert several elements at once instead of using pop/push to insert elements one at a time.
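For example, a minimal sketch of the idea - reserving capacity up front turns many reallocate-and-copy cycles into one allocation:
#include <vector>
void fill(std::vector<int>& v, int n)
{
    v.reserve(n);                  // one allocation instead of repeated growth
    for (int i = 0; i < n; ++i)
        v.push_back(i);
    // or, when default-constructed values suffice, size it in one call:
    // v.resize(n);
}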
A: Ultimate++ has its own set of containers - not sure if you can use them separately from the rest of the library: http://www.ultimatepp.org/
A: What about the ACE library? It's an open-source object-oriented framework for concurrent communication software, but it also has some container classes.
A: Checkout Data Structures and Algorithms with Object-Oriented Design Patterns in C++
By Bruno Preiss
http://www.brpreiss.com/
A: Qt has reimplemented most c++ standard library stuff with different interfaces. It looks pretty good, but it can be expensive for the commercially licensed version.
Edit: Qt has since been released under the LGPL, which usually makes it possible to use it in a commercial product without buying the commercial version (which also still exists).
A: STL containers should not run "really slowly" in debug or anywhere else. Perhaps you're misusing them. You're not running against something like ElectricFence or Valgrind in debug are you? They slow anything down that does lots of allocations.
All the containers can use custom allocators, which some people use to improve performance - but I've never needed to use them myself.
A: There is also the ETL, https://www.etlcpp.com/. This library is aimed especially at time-critical (deterministic) applications.
From the webpage:
The ETL is not designed to completely replace the STL, but complement it. Its design objective covers four main areas.
*
*Create a set of containers where the size or maximum size is determined at compile time. These containers should be largely equivalent to those supplied in the STL, with a compatible API.
*Be compatible with C++03 but implement as many of the C++11 additions as possible.
*Have deterministic behaviour.
*Add other useful components that are not present in the standard library.
The embedded template library has been designed for lower-resource embedded applications. It defines a set of containers, algorithms and utilities, some of which emulate parts of the STL. There is no dynamic memory allocation. The library makes no use of the heap. All of the containers (apart from intrusive types) have a fixed capacity, allowing all memory allocation to be determined at compile time. The library is intended for any compiler that supports C++03.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28"
}
|
Q: How to SelectAll / SelectNone in .NET 2.0 ListView? What is a good way to select all or select no items in a listview without using:
foreach (ListViewItem item in listView1.Items)
{
item.Selected = true;
}
or
foreach (ListViewItem item in listView1.Items)
{
item.Selected = false;
}
I know the underlying Win32 listview common control supports LVM_SETITEMSTATE message which you can use to set the selected state, and by passing -1 as the index it will apply to all items. I'd rather not be PInvoking messages to the control that happens to be behind the .NET Listview control (I don't want to be a bad developer and rely on undocumented behavior - for when they change it to a fully managed ListView class)
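For reference, the P/Invoke route I'm trying to avoid would look roughly like this (constants taken from CommCtrl.h; treat this as an illustrative sketch of the undocumented approach, not supported API usage):
// requires: using System.Runtime.InteropServices;
[DllImport("user32.dll")]
static extern IntPtr SendMessage(IntPtr hWnd, int msg, IntPtr wParam, ref LVITEM lParam);
const int LVM_FIRST = 0x1000;
const int LVM_SETITEMSTATE = LVM_FIRST + 43; // 0x102B
const int LVIS_SELECTED = 0x0002;
[StructLayout(LayoutKind.Sequential)]
struct LVITEM
{
    public int mask;
    public int iItem;
    public int iSubItem;
    public int state;
    public int stateMask;
    public IntPtr pszText;
    public int cchTextMax;
    public int iImage;
    public IntPtr lParam;
}
void SelectAll(ListView listView)
{
    LVITEM item = new LVITEM();
    item.state = LVIS_SELECTED;
    item.stateMask = LVIS_SELECTED;
    // an index (wParam) of -1 applies the state change to all items
    SendMessage(listView.Handle, LVM_SETITEMSTATE, (IntPtr)(-1), ref item);
}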
Bump
Pseudo Masochist has the SelectNone case:
ListView1.SelectedItems.Clear();
Now just need the SelectAll code
A: Either
ListView1.SelectedItems.Clear();
or
ListView1.SelectedIndices.Clear();
should do the trick for select none, anyway.
A: Wow this is old... :D
SELECT ALL
listView1.BeginUpdate();
foreach (ListViewItem i in listView1.Items)
{
i.Selected = true;
}
listView1.EndUpdate();
SELECT INVERSE
listView1.BeginUpdate();
foreach (ListViewItem i in listView1.Items)
{
i.Selected = !i.Selected;
}
listView1.EndUpdate();
BeginUpdate and EndUpdate are used to disable/enable the control redrawing while its content is being updated... I figure it would select all quicker, since it would refresh only once, and not listView.Items.Count times.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How do I fix 404.17 error on Win Server 2k8 and IIS7 I've setup a new .net 2.0 website on IIS 7 under Win Server 2k8 and when browsing to a page it gives me a 404.17 error, claiming that the file (default.aspx in this case) appears to be a script but is being handled by the static file handler. It SOUNDS like the module mappings for ASP.Net got messed up, but they look fine in the configurations. Does anyone have a suggestion for correcting this error?
A: For me it worked by doing the following
Install ASP.NET
cd %windir%\Microsoft.NET\Framework64\v4.0.30319
aspnet_regiis.exe -i
*
*Next, go to IIS Manager and click on the server (root) node.
*In features view, IIS section, open "ISAPI & CGI Restrictions".
*Right-click the ASP.NET 4 restriction and select Allow.
Hope it works for you..
A: For me, my problem came because of a setting in my project's web.config file (and also the solution, once I understood the problem).
In my web.config file, we had these two lines in the system.webServer > handlers area:
<remove name="WebServiceHandlerFactory-ISAPI-2.0" />
<add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
Notice the alternative handler has the attribute 'preCondition="integratedMode"'. So, I had to change my AppPool to use Integrated instead of Classic for my pipeline mode setting (which is the opposite of what the solutions above told me to do).
A: Always try "Revert to Parent" in Handler Mappings first.
I was getting 404.17 when trying to run ASP.NET 4.0 in IIS 7.5. I tried all of the above and eventually got the correct Handler Mappings manually set up and the error went away.
Then, on yet another website with the same error, I tried "Revert to Parent" in Handler Mappings and it added 6 *.aspx mappings and everything worked perfectly.
Obviously, you'd have to have the parent configured properly (from the install or otherwise), but this is definitely the first step everyone should take since it is so easy.
A: For me the solution was to click "revert from inherited" from the handler mappings section under the virtual application.
A: I had this problem on IIS6 one time when somehow the ASP.NET ISAPI stuff was broke.
Running
%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i
to recreate the settings took care of it.
A: This solution worked for me... (I've had aspnet_regiis.exe -i do some damage)
http://forums.iis.net/t/1157725.aspx
1. Locate your App Pool and Right Click
2. Select Basic Settings
3. Select your current .Net Framework Version
4. Restart the App Pool
A: So far, none of these solutions have worked for me.
I have found a few other possible solutions (which did not work for me):
*
*http://first-reboot.blogspot.com/2009/12/error-40417-opening-asmx-page.html
*http://forums.asp.net/p/1432329/3219236.aspx
A: Only one way solved this problem for me...
First install Windows 7, then install IIS 7 with all features, and then install Visual Studio 2008 / 2010.
I work with Visual Studio 2008 and 2010 and had never seen this error before.
I also tried this on my friend's PC, and it solved the error there as well.
A: For me, this got resolved by setting "Enable 32-Bit Applications" to true.
A: None of the above worked for me.
Our server is 64-bit, so setting the application pool to allow 32-bit applications worked for us:
*
*Go to Web Server\Application Pools
*Right click the application pool used by your website.
*Click on Advanced Settings...
*Set "Enable 32-Bit Applications" to True.
I think this was because the web application was compiled for 32 bit only.
A: %windir%\Microsoft.NET\Framework64\v2.0.50727\aspnet_regiis.exe -i
worked for me after getting
"An attempt was made to load a program with an incorrect format ..."
with the 32-bit framework.
Maybe I'll save you one more second of googling.
A: For me, this worked. It installs machine configuration sections, handlers, assemblies, modules, protocols and lots of other things needed for everything to work properly.
A: For me, it was that HTTP Activation was not checked in the server features.
A: We needed to install ASP.NET 3.5 and 4.5, ISAPI Extensions, ISAPI Filters and Server Side Includes, in the Windows Features menu under IIS Development Features.
Alternatively, do it with the DISM command line:
Dism /online /enable-feature /featurename:NetFx3 /All /Source:WindowsInstallers\Win8\sxs /LimitAccess
Dism /online /enable-feature /featurename:NetFx4 /All /Source:WindowsInstallers\Win8\sxs /LimitAccess
Dism /online /enable-feature /featurename:IIS-ISAPIExtensions /All /Source:WindowsInstallers\Win8\sxs /LimitAccess
Dism /online /enable-feature /featurename:IIS-ISAPIFilter /All /Source:WindowsInstallers\Win8\sxs /LimitAccess
Dism /online /enable-feature /featurename:IIS-ServerSideIncludes /All /Source:WindowsInstallers\Win8\sxs /LimitAccess
A: Enabling HTTP Activation under WCF Services in "Turn Windows features on or off" resolved the issue.
A: In my case, none of the above answers resolved the issue, and the reason was that the CGI module wasn't installed.
To resolve this I followed these instructions.
https://learn.microsoft.com/en-us/iis/configuration/system.webserver/cgi
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
}
|
Q: GUID Behind the Scenes I am wondering, what goes into the creation of a GUID. I don't mean what is used to create a GUID in a specific language (NewID() in SQL Server, Guid.NewGuid() in C#), I mean when you call those methods/functions, what do they do to make the GUID?
A: Also, RFC 4122 (which is referenced in the Wikipedia article) describes how GUIDs should be built.
A: The details of GUIDs, including the algorithm used to generate them is described on wikipedia.
A: In short, it's not complicated at all. GUID (or UUID) Version 4 (current) is a partially random number, plain and simple (122 out of 128 bits are random, the rest are used for storing version and revision). The trick is that the possible values of this number are so many that the probability of a hit is for most practical purposes, zero.
A: Hash function. It's complicated.
http://en.wikipedia.org/wiki/GUID#Algorithm knows more than I do.
A: A word of caution that a very great deal of what you read on the Internet about GUID creation may well be wrong, or at least out of date for your specific platform.
I once single-stepped through a heap of Windows code to settle an argument about GUID creation on WinXP. Unfortunately, it turned out that I was wrong (i.e. I lost the argument), but so was Larry Osterman, so I felt slightly better about it.
A: There are five official ways of generating GUID's (and certainly many more unofficial ones).
*
*Version 1 is a time-based GUID, usually using the MAC address of the primary network card of the machine used to compute the GUID. This is normally not used due to privacy issues, but I do believe that Microsoft SQL Servers from 2005 and onwards use a modified version of this (claiming to be version 14) to create sequential GUIDs useful for ids in a database, to avoid fragmentation of data blocks (NewSequentialId()).
*Version 2 is DCE Security version. I have never found this kind of GUID, but I have not worked a lot with POSIX either and there seems to be a connection between version 2 GUID's and POSIX.
*Version 3 is a "name based" version, meaning you can take a text and create a GUID representation of that, given a namespace. Version 3 uses a MD5 hashing algorithm. See also version 5.
*Version 4 is basically a random-number type GUID. The random number is security-grade, though, not just your average random number generator. This is the version usually used in the world today. The C# Guid.NewGuid() uses this version, according to Microsoft documentation. Also the normal function for generating a uniqueidentifier in MS SQL Server (NewId()) generates a version 4 GUID.
*Version 5 is just like version 3, but uses a SHA-1 hashing algorithm instead. The extended guid C# project uses the version 5 algorithm.
For one implementation of GUID making I'd recommend looking at the extended guid project. As many have pointed out, RFC 4122 gives a detailed description of how all five algorithms work. However, there are no guarantees all implementations are correct.
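As a quick illustration of version 4 in practice, you can inspect the version nibble of a freshly generated GUID - in the 32-digit "N" format, hex digit 12 carries the version:
using System;
class GuidVersionDemo
{
    static void Main()
    {
        Guid g = Guid.NewGuid();
        string hex = g.ToString("N");  // 32 hex digits, no dashes
        Console.WriteLine(g);          // third dashed group starts with the version digit
        Console.WriteLine(hex[12]);    // prints '4' for a version 4 (random) GUID
    }
}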
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: C# ListView mouse wheel scroll without focus I'm making a WinForms app with a ListView set to detail so that several columns can be displayed.
I'd like for this list to scroll when the mouse is over the control and the user uses the mouse scroll wheel. Right now, scrolling only happens when the ListView has focus.
How can I make the ListView scroll even when it doesn't have focus?
A: "Simple" and working solution:
public class FormContainingListView : Form, IMessageFilter
{
public FormContainingListView()
{
// ...
Application.AddMessageFilter(this);
}
#region mouse wheel without focus
// P/Invoke declarations
[DllImport("user32.dll")]
private static extern IntPtr WindowFromPoint(Point pt);
[DllImport("user32.dll")]
private static extern IntPtr SendMessage(IntPtr hWnd, int msg, IntPtr wp, IntPtr lp);
public bool PreFilterMessage(ref Message m)
{
if (m.Msg == 0x20a)
{
// WM_MOUSEWHEEL, find the control at screen position m.LParam
Point pos = new Point(m.LParam.ToInt32() & 0xffff, m.LParam.ToInt32() >> 16);
IntPtr hWnd = WindowFromPoint(pos);
if (hWnd != IntPtr.Zero && hWnd != m.HWnd && System.Windows.Forms.Control.FromHandle(hWnd) != null)
{
SendMessage(hWnd, m.Msg, m.WParam, m.LParam);
return true;
}
}
return false;
}
#endregion
}
A: You'll normally only get mouse/keyboard events to a window or control when it has focus. If you want to see them without focus then you're going to have to put in place a lower-level hook.
Here is an example low level mouse hook
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87134",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
}
|
Q: Sequence Diagram Reverse Engineering I'm looking for a tool that will reverse engineer Java into a sequence diagram BUT also provides the ability to filter out calls to certain libraries.
For example, the Netbeans IDE does a fantastic job of this but it includes all calls to String or Integer which clutter up the diagram to the point it is unusable.
Any help is greatly appreciated!
A: JTracert is now discontinued. In its place, they recommend http://www.jsonde.com/
A: I have a tool that meets your requirements exactly. Check it out
http://sourceforge.net/projects/javacalltracer/
In addition to being a reverse engineering tool for Java, it is also very lightweight. You can control what you want to record from your Java program.
A: I think jtracert is what you are looking for. It generates a sequence diagram from a running Java program. Also, because its output is a text description of the diagram (in the formats of several popular SD tools), you can use grep to filter for only the classes you are interested in.
A: Try MaintainJ. MaintainJ generates sequence diagrams at runtime for a use case. It provides multiple ways to filter out unwanted calls. Yes, filtering out unwanted calls is the most important feature needed in sequence diagram generating tools. Besides, MaintainJ provides a neat interface to explore the diagram and search for calls in one use case or across use cases.
Check the demo video to get a quick overview.
I am the author of MaintainJ, by the way.
A: I believe the perfect tool to solve your problem is Diver: Dynamic Interactive Views For Reverse Engineering. It provides both static and dynamic sequence diagrams and looks to solve all your requirements from your question.
It is a plugin for Eclipse and lets you:
*
*Easily trace your Java programs
*Visualize your program’s runtime functionality
*Filter your traces to make them more compact
*Filter your IDE based on what occurs at runtime
*See what code ran in your source code editors
It's on Github and there is also a project web site
Full Disclosure: I am the current project lead for Diver
A: Enterprise Architect from Sparx claims to be able to reverse engineer Java code, including generating sequence diagrams - see this section of the user guide.
It looks like it can record a debugging session and then you generate the sequence diagram from that
I've not tried it (though have used EA as a modelling tool) so ymmv!
There is a free 30day evaluation download available
A: Take a look at http://www.maintainj.com
I don't know whether it can filter library calls, but it has a reasonable graphical front end and seems to trace even very large applications.
A: MaintainJ is a really wonderful tool. I recently started using MaintainJ with my application, and it has made it much easier to understand my system through its sequence and UML diagrams.
For the question above, I'm sure MaintainJ will give you a good picture of what's possible.
Thanks,
Krishna MM
A: I have just started using the sequence diagram recording feature in Sparx Systems Enterprise Architect. It works very well for C#. You can create filters by class and by method. I'm actually trying to find out if it's possible to filter out an entire package. There is a checkbox for automatically excluding external modules (like the .NET Framework) which aids in declutter. YMMV for Java, but I think their support (and documentation) for Java is generally better (more examples) than for .NET.
A: Heatlamp (http://www.jmolly.com/heatlamp/) was designed for exactly this purpose.
It generates interactive (and printable) diagrams from running Java code. You can specify filters to describe which classes, packages, and methods to trace. You can also search, filter, and collapse invocations after the diagram is rendered to further reduce the sequence diagram.
Disclaimer: I'm the author of Heatlamp.
A: Here's an add-on to Asgeir's answer. Here's the link that I found.
http://www.java2s.com/Code/Jar/s/sequence.htm
Run from the command line ... "java -jar sequence.jar" ... this is a Java application with a GUI.
The help section says:
SEQUENCE is a program for producing UML Sequence Diagrams. In contrast to most similar programs you don't actually draw the diagram. Instead you write a textual description of the method calls you want to diagram and the layout is calculated and drawn automatically.
So this tool doesn't reverse engineer anything, but I can see how it might be helpful if you wanted to quickly diagram things from scratch. Looks like it was built in 2002 and I think there are probably better tools out there now.
Here's another similar tool here:
http://sdedit.sourceforge.net/example/index.html
A: This looks like a really nice tool:
http://www.architexa.com/learn-more/sequence-diagrams
But it looks like it's only free for a year; then it's $250 a year. Bummer.
I found the ModelGoon plugin to be helpful. It's a bit limited because you choose a single method as the starting point for the sequence diagram, and it only shows the calls that method makes (so to go a level deeper you need to generate another diagram.)
http://www.modelgoon.org/?page_id=53
A: JIVE (www.cse.buffalo.edu/jive) will construct a sequence diagram from the execution of a Java program. It has an Exclusion Filter capability that will allow you to exclude objects belonging to designated classes or packages. JIVE can draw sequence diagrams for multi-threaded Java program execution. It also has the ability to compact large diagrams in both the horizontal and vertical dimension, under user guidance.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87137",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
}
|
Q: WCSF Random assembly manifest definition does not match assembly ref in .NET 2.0 I'm running WCSF Feb 2008 along with Enterprise Library 3.1 and noticed that randomly I get the "fun"
Could not load file or assembly Microsoft.Practices.EnterpriseLibrary.Common, Version=3.1.0.0, Culture=neutral, Public ... The located assembly's manifest definition does not match the assembly reference.
Usually this wouldn't be worth mentioning on stackoverflow, but the strange thing is that the first time I fire this up it breaks, but if I close it down and simply hit F11 again - it works .... strange. Does anyone know why this might break sometimes, but not others?
A: The problem was related to my version of the data access DLL I was adding. I found that if I went to the following:
C:\Program Files\Microsoft Web Client Software Factory February 2008\Microsoft Practices Library
and imported this specific data access DLL instead of the one I compiled myself from the Enterprise Library 3.1 installer, everything worked great.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87168",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do I preserve line feeds, tabs, and spaces in data while still wrapping text? I had data in XML that had line feeds, spaces, and tabs that I wanted to preserve in the output HTML (so I couldn't use <p>) but I also wanted the lines to wrap when the side of the screen was reached (so I couldn't use <pre>).
A: Another way of putting this is that you want to turn all pairs of spaces into two non-breaking spaces, tabs into four non-breaking spaces and all line breaks into <br> elements. In XSLT 1.0, I'd do:
<xsl:template name="replace-spaces">
  <xsl:param name="text" />
  <xsl:choose>
    <xsl:when test="contains($text, '  ')">
      <xsl:call-template name="replace-spaces">
        <xsl:with-param name="text" select="substring-before($text, '  ')"/>
      </xsl:call-template>
      <xsl:text>&#160;&#160;</xsl:text>
      <xsl:call-template name="replace-spaces">
        <xsl:with-param name="text" select="substring-after($text, '  ')" />
      </xsl:call-template>
    </xsl:when>
    <xsl:when test="contains($text, '&#9;')">
      <xsl:call-template name="replace-spaces">
        <xsl:with-param name="text" select="substring-before($text, '&#9;')"/>
      </xsl:call-template>
      <xsl:text>&#160;&#160;&#160;&#160;</xsl:text>
      <xsl:call-template name="replace-spaces">
        <xsl:with-param name="text" select="substring-after($text, '&#9;')" />
      </xsl:call-template>
    </xsl:when>
    <xsl:when test="contains($text, '&#10;')">
      <xsl:call-template name="replace-spaces">
        <xsl:with-param name="text" select="substring-before($text, '&#10;')" />
      </xsl:call-template>
      <br />
      <xsl:call-template name="replace-spaces">
        <xsl:with-param name="text" select="substring-after($text, '&#10;')" />
      </xsl:call-template>
    </xsl:when>
    <xsl:otherwise>
      <xsl:value-of select="$text" />
    </xsl:otherwise>
  </xsl:choose>
</xsl:template>
Not being able to use tail recursion is a bit of a pain, but it shouldn't be a real problem unless the text is very long.
An XSLT 2.0 solution would use <xsl:analyze-string>.
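For completeness, here is a sketch of that XSLT 2.0 approach, mapping the same three patterns via regex groups:
<xsl:template name="replace-spaces">
  <xsl:param name="text"/>
  <xsl:analyze-string select="$text" regex="(  )|(\t)|(\n)">
    <xsl:matching-substring>
      <xsl:choose>
        <xsl:when test="regex-group(1)"><xsl:text>&#160;&#160;</xsl:text></xsl:when>
        <xsl:when test="regex-group(2)"><xsl:text>&#160;&#160;&#160;&#160;</xsl:text></xsl:when>
        <xsl:otherwise><br/></xsl:otherwise>
      </xsl:choose>
    </xsl:matching-substring>
    <xsl:non-matching-substring>
      <xsl:value-of select="."/>
    </xsl:non-matching-substring>
  </xsl:analyze-string>
</xsl:template>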
A: I and a co-worker (Patricia Eromosele) came up with the following solution: (Is there a better solution?)
<p>
  <xsl:call-template name="prewrap">
    <xsl:with-param name="text" select="text"/>
  </xsl:call-template>
</p>
<xsl:template name="prewrap">
  <xsl:param name="text" select="."/>
  <xsl:variable name="spaceIndex" select="string-length(substring-before($text, '  '))"/>
  <xsl:variable name="tabIndex" select="string-length(substring-before($text, '&#9;'))"/>
  <xsl:variable name="lineFeedIndex" select="string-length(substring-before($text, '&#10;'))"/>
  <xsl:choose>
    <xsl:when test="$spaceIndex = 0 and $tabIndex = 0 and $lineFeedIndex = 0"><!-- no special characters left -->
      <xsl:value-of select="$text"/>
    </xsl:when>
    <xsl:when test="$spaceIndex > $tabIndex and $lineFeedIndex > $tabIndex"><!-- tab -->
      <xsl:value-of select="substring-before($text, '&#9;')"/>
      <xsl:text disable-output-escaping="yes">&amp;nbsp;</xsl:text>
      <xsl:text disable-output-escaping="yes">&amp;nbsp;</xsl:text>
      <xsl:text disable-output-escaping="yes">&amp;nbsp;</xsl:text>
      <xsl:text disable-output-escaping="yes">&amp;nbsp;</xsl:text>
      <xsl:call-template name="prewrap">
        <xsl:with-param name="text" select="substring-after($text, '&#9;')"/>
      </xsl:call-template>
    </xsl:when>
    <xsl:when test="$spaceIndex > $lineFeedIndex and $tabIndex > $lineFeedIndex"><!-- line feed -->
      <xsl:value-of select="substring-before($text, '&#10;')"/>
      <br/>
      <xsl:call-template name="prewrap">
        <xsl:with-param name="text" select="substring-after($text, '&#10;')"/>
      </xsl:call-template>
    </xsl:when>
    <xsl:when test="$lineFeedIndex > $spaceIndex and $tabIndex > $spaceIndex"><!-- two spaces -->
      <xsl:value-of select="substring-before($text, '  ')"/>
      <xsl:text disable-output-escaping="yes">&amp;nbsp;</xsl:text>
      <xsl:text disable-output-escaping="yes">&amp;nbsp;</xsl:text>
      <xsl:call-template name="prewrap">
        <xsl:with-param name="text" select="substring-after($text, '  ')"/>
      </xsl:call-template>
    </xsl:when>
    <xsl:otherwise><!-- should never happen -->
      <xsl:value-of select="$text"/>
    </xsl:otherwise>
  </xsl:choose>
</xsl:template>
Source: http://jamesjava.blogspot.com/2008/06/xsl-preserving-line-feeds-tabs-and.html
A: Really, I'd choose an editor which supports this correctly, rather than wrangling it through more XML.
A: Not sure if this is relevant, but isn't there an xml:space="preserve" attribute and whatnot for XML?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: What are some good Module Development Solution/Environments/Best Practices for Dot Net Nuke Modules I've been developing modules for DNN since version 2 and back then I was easily able to run my module as I developed it in my environment and still easily deploy my module as a DLL.
When version 4 came out and used the web site solution (rather than the Web Application solution). It seems like there was something lost. I can continue to develop in my test environment and immediately see changes as I make them, but releasing for me has become a headache.
I mostly do my development for one site in particular have just been using FTP deployment of the modules to the main site after I was done making changes.
I'd like to set up a good environment for multiple developers to be able to work on the module(s).
When adding stuff to source control, are people generally putting all of DNN into source control so they can bring the whole solution down to work on, or just their module and each person needs to set up their own dev DNN environment?
I'd like to start getting my modules projects organized so more people could work on them and I feel a bit lost for some best practices both in doing this and deploying those changes to a live site.
A: I have a few detailed blog postings about this on my blog site, mitchelsellers.com.
I personally use the WAP development model and I do NOT check the DNN solution, or any core files, into source control, as I do NOT modify the core for any of my clients. When working with multiple people we create a similar environment for each person, and still can work with each of our individual projects; at times we will have completely isolated dev environments with individual databases and code, at other times I have worked with a shared dev database to resolve dev module installation issues.
With the WAP model I use a method to dynamically create my installation packages on project build using a post-build event and then I have a test installation that I use to validate that the packages occur. Debugging is then done via Attach to Process.
A: I would suggest Mitchel's book if you are needing some reference material - Professional DotNetNuke Module Programming (Wrox) by Mitchel Sellers.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: What is the best way to localize a WPF application, sans LocBAML? There seems to be no good way to localize a WPF application. MSDN seems to think that littering my XAML with x:Uid's, generating CSV files, and then generating new assemblies (using their sample code!) is the answer. Worse, this process doesn't address how to localize images, binary blobs (say, PDF files), or strings that are embedded in code.
So, how might you localize an application that:
*
*Contains several assemblies
*Contains images and other binary blobs (eg: PDF docs) that need to be localized
*Has string data that isn't in XAML (eg: MessageBox.Show("Hello World");)
A: Not an expert here, but "littering" your xaml with x:Uids is not worse than "littering" your Windows Forms code with all the string table nonsense you have to do for localizing them.
As far as I understand, WPF apps still support "all the Framework CLR resources including string tables, images, and so forth." which means you can have localized resources.
Of course, it would be much simpler if you created a markup extension that handled much of this nonsense for you. You can find an example of someone doing this here. And there was another, similar solution at http://blog.taggersoft.com/2008/07/wpf-application-localization-pattern_29.html, but that link no longer works.
A: You should have a look at the article and code available here. It describes different ways of localizing WPF apps, using LocBaml, custom markup extensions, or attached properties. IMHO the best solution is to use the markup extensions and Resx resources. The code contains a localization framework for doing that.
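The markup-extension idea boils down to something like this minimal sketch - it assumes a Strings.resx with its designer-generated Strings wrapper class, so the names are illustrative:
using System;
using System.Windows.Markup;
// XAML usage: <TextBlock Text="{loc:Loc HelloWorld}" />
public class LocExtension : MarkupExtension
{
    public LocExtension(string key)
    {
        Key = key;
    }
    [ConstructorArgument("key")]
    public string Key { get; set; }
    public override object ProvideValue(IServiceProvider serviceProvider)
    {
        // Strings is the designer-generated class for Strings.resx (assumed);
        // ResourceManager resolves against the satellite assembly for the current UI culture.
        return Strings.ResourceManager.GetString(Key) ?? "[" + Key + "]";
    }
}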
A: You can use the old "ResX" files which support all of your mentioned scenarios. How this can be accomplished in a WPF application is explained here:
WPF Application Framework (WAF) => See Localization Sample
A: Try the GNU gettext utilities and supporting applications. They generate C# classes based around ResourceManager and ResourceSets, and of course you can reuse the translations for other parts of your application - e.g. web pages, native code, or iPhone, etc.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
}
|
Q: .NET library for processing HTML e-mails & stripping previous responses Does anyone know of a .NET library that will process HTML e-mails and can be used to trim out the reply-chain? It needs to be able to accept HTML -or- text mails and then trim out everything but the actual response, removing the trail of messages that are not original content. I don't expect it to be able to handle responses when they're interleaved into the previous mail ("responses in-line") - that case can fail.
We have a home-built one based on SgmlReader and a series of XSL transforms, but it requires constant maintenance to deal with new e-mail clients. I'd like to find one I can buy... :)
Thanks,
Steve
A: This does not answer much of your question, but the W3C's Converting HTML to Other Formats has a section on converting HTML to text. I hope it helps someone develop a full answer to your question!
A: One free and very useful library we've used for dealing with HTML, including malformed HTML, is the HtmlAgilityPack.
There is no StripOutPreviousResponses() function, but it may help you with your home-made one.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: When would you need to use late static binding? After reading this description of late static binding (LSB) I see pretty clearly what is going on. Now, under which sorts of circumstances might that be most useful or needed?
A: I needed LSB this for the following scenario:
*
*Imagine you're building a "mail processor" daemon that downloads the message from an email server, classifies it, parses it, saves it, and then does something, depending on the type of the message.
*Class hierarchy: you have a base Message class, with children "BouncedMessage" and "AcceptedMessage".
*Each of the message types has its own way to persist itself on disk. For example, messages of type BouncedMessage try to save themselves as BouncedMessage-id.xml. AcceptedMessage, on the other hand, needs to save itself differently - as AcceptedMessage-timestamp.xml. The important thing here is that the logic for determining the filename pattern is different for different subclasses, but shared for all items within the subclass. That's why it makes sense for it to be in a static method.
*Base Message class has an abstract static method (yes, abstract AND static) "save". BouncedMessage implements this method with a concrete static method. Then, inside the class that actually retrieves the message, you can call "::save()"
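A minimal sketch of that shape (PHP 5.3 syntax; the file-naming patterns are illustrative):
<?php
class Message
{
    public static function save($xml)
    {
        // static:: binds to the subclass the call was made on (LSB),
        // so each subclass supplies its own filename pattern.
        file_put_contents(static::filename(), $xml);
    }
    protected static function filename()
    {
        return 'Message-' . uniqid() . '.xml';
    }
}
class BouncedMessage extends Message
{
    protected static function filename()
    {
        return 'BouncedMessage-' . uniqid() . '.xml';
    }
}
class AcceptedMessage extends Message
{
    protected static function filename()
    {
        return 'AcceptedMessage-' . time() . '.xml';
    }
}
BouncedMessage::save('<xml/>'); // writes BouncedMessage-<id>.xml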
If you want to learn more about the subject:
*
*http://www.qcodo.com/forums/topic.php/2356
*http://community.livejournal.com/php/585907.html
*http://bugs.php.net/bug.php?id=42681
A: One primary need I have for late static binding is for a set of static instance-creation methods.
This DateAndTime class is part of a chronology library that I ported to PHP from Smalltalk/Squeak. Using static instance-creation methods enables creation of instances with a variety of argument types, while keeping parameter checking in the static method so that the consumer of the library is unable to obtain an instance that is not fully valid.
Late static binding is useful in this case so that the implementations of these static instance-creation methods can determine what class was originally targeted by the call. Here is an example of usage:
With LSB:
class DateAndTime {
public static function now() {
$class = static::myClass();
$obj = new $class;
$obj->setSeconds(time());
return $obj;
}
public static function yesterday() {
$class = static::myClass();
$obj = new $class;
$obj->setSeconds(time() - 86400);
return $obj;
}
protected static function myClass () {
return 'DateAndTime';
}
}
class Timestamp extends DateAndTime {
protected static function myClass () {
return 'Timestamp';
}
}
// Usage:
$date = DateAndTime::now();
$timestamp = Timestamp::now();
$date2 = DateAndTime::yesterday();
$timestamp2 = Timestamp::yesterday();
Without late static binding, [as in my current implementation] each class must implement every instance creation method as in this example:
Without LSB:
class DateAndTime {
public static function now($class = 'DateAndTime') {
$obj = new $class;
$obj->setSeconds(time());
return $obj;
}
public static function yesterday($class = 'DateAndTime') {
$obj = new $class;
$obj->setSeconds(time() - 86400);
return $obj;
}
}
class Timestamp extends DateAndTime {
public static function now($class = 'Timestamp') {
return self::now($class);
}
public static function yesterday($class = 'Timestamp') {
return self::yesterday($class);
}
}
As the number of instance-creation methods and class-hierarchy increases the duplication of methods becomes a real pain in the butt. LSB reduces this duplication and allows for much cleaner and more straight-forward implementations.
A: It's useful when:
*
*You have functionality that varies over the class hierarchy,
*The functionality has the same signature over the hierarchy, and
*(crucially) You don't have an instance to hang the functionality off of.
If only #1 and #2 obtained, you would use an ordinary instance method. So Alex's problem (see his answer to this question) does not require LSB.
A typical case is object creation, where subclasses create themselves in different ways, but using the same parameters. Obviously you have no instance to call, so the creation method (also known as a factory method) must be static. Yet you want its behavior to vary depending on the subclass, so an ordinary static method is not right. See Adam Franco's answer for an example.
A: If you need to access an overloaded static property/method within a method that hasn't been overloaded in a subclass, you need late static binding. A quick example: paste2.org
The classic example is the ActiveRecord class from Rails. If you try to implement something similar in PHP, which would look like this: class User extends ActiveRecord, and then try to call User::find(1), the method that gets called is actually ActiveRecord::find() because you haven't overloaded find() in User - but without late static binding the find() method in ActiveRecord has no way of knowing which class it got called from (self within it will always point to ActiveRecord), and thus it can't fetch your User object for you.
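With PHP 5.3's static:: that problem goes away. A minimal sketch (the loadRow() plumbing is a stand-in for real database access):
<?php
class ActiveRecord
{
    public $id;
    public $table;
    public static function find($id)
    {
        // "new static" instantiates the class the call was made on,
        // so User::find(1) really builds a User.
        $record = new static();
        $record->loadRow(static::tableName(), $id);
        return $record;
    }
    public static function tableName()
    {
        return 'records';
    }
    protected function loadRow($table, $id)
    {
        // stand-in for the real query
        $this->table = $table;
        $this->id = $id;
    }
}
class User extends ActiveRecord
{
    public static function tableName()
    {
        return 'users';
    }
}
$user = User::find(1); // queries "users" and returns a User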
A: Suppose you have classes representing tables (row instances) in a simplified object-relational mapper.
You would have a class "User" and a class "Company" whose instances represent rows of the respective tables.
User and Company would inherit from some base abstract class, let's say "BaseObject" that will have some common methods like save(), delete(), validate() etc ...
If you want to store data about the validation and the table definition, the best place would be in a static variable in each derived class - since the validation and table definition is the same for each instance of User.
Without LSB the mentioned validate() method in BaseObject would have no reference to the static variables defined in User and Company, even though you are calling it through an instance of User. It will look for the same static variable in the BaseObject class, and it will raise an error.
This is my experience with PHP 5.2.8 - LSB is going to be introduced in 5.3
A: I have a class with a static method that handles some formatting. I have another class that than needs all the functionality of the original one except for how it handles formatting.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
}
|
Q: Mocking WebResponse's from a WebRequest I have finally started messing around with creating some apps that work with RESTful web interfaces; however, I am concerned that I am hammering their servers every time I hit F5 to run a series of tests.
Basically, I need to get a series of web responses so I can test that I am parsing the varying responses correctly. Rather than hit their servers every time, I thought I could do this once, save the XML, and then work locally.
However, I don't see how I can "mock" a WebResponse, since (AFAIK) they can only be instantiated by WebRequest.GetResponse
How do you guys go about mocking this sort of thing? Do you? I just really don't like the fact I am hammering their servers :S I don't want to change the code too much, but I expect there is an elegant way of doing this..
Update Following Accept
Will's answer was the slap in the face I needed, I knew I was missing a fundamental point!
*
*Create an Interface that will return a proxy object which represents the XML.
*Implement the interface twice, one that uses WebRequest, the other that returns static "responses".
*The interface implementation then either instantiates the return type based on the response, or the static XML.
*You can then pass the required class when testing or at production to the service layer.
Once I have the code knocked up, I'll paste some samples.
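A rough sketch of the shape I'm aiming for (names are just placeholders):
using System.IO;
using System.Net;
public interface IXmlFetcher
{
    string FetchXml(string url);
}
// Production implementation: really hits the service.
public class WebXmlFetcher : IXmlFetcher
{
    public string FetchXml(string url)
    {
        WebRequest request = WebRequest.Create(url);
        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }
}
// Test implementation: replays XML captured once from the live service.
public class CannedXmlFetcher : IXmlFetcher
{
    public string FetchXml(string url)
    {
        return File.ReadAllText("canned-response.xml");
    }
}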
A: I found this question while looking to do exactly the same thing. Couldn't find an answer anywhere, but after a bit more digging found that the .Net Framework has built in support for this.
You can register a factory object with WebRequest.RegisterPrefix which WebRequest.Create will call when using that prefix (or url). The factory object must implement IWebRequestCreate which has a single method Create which returns a WebRequest. Here you can return your mock WebRequest.
I've put some sample code up at
http://blog.salamandersoft.co.uk/index.php/2009/10/how-to-mock-httpwebrequest-when-unit-testing/
A: You can't. Best thing to do is wrap it in a proxy object, and then mock that. Alternatively, you'd have to use a mock framework that can intercept types that can't be mocked, like TypeMock. But you're talking about bucks, there. Better to do a little wrapping.
Apparently you can with a little extra work. Check the highest voted answer here.
A: I found the following blog earlier which explains quite a nice approach using Microsoft Moles.
http://maraboustork.co.uk/index.php/2011/03/mocking-httpwebresponse-with-moles/
In short the solution suggests the following:
[TestMethod]
[HostType("Moles")]
[Description("Tests that the default scraper returns the correct result")]
public void Scrape_KnownUrl_ReturnsExpectedValue()
{
var mockedWebResponse = new MHttpWebResponse();
MHttpWebRequest.AllInstances.GetResponse = (x) =>
{
return mockedWebResponse;
};
mockedWebResponse.StatusCodeGet = () => { return HttpStatusCode.OK; };
mockedWebResponse.ResponseUriGet = () => { return new Uri("http://www.google.co.uk/someRedirect.aspx"); };
mockedWebResponse.ContentTypeGet = () => { return "testHttpResponse"; };
var mockedResponse = "<html> \r\n" +
" <head></head> \r\n" +
" <body> \r\n" +
" <h1>Hello World</h1> \r\n" +
" </body> \r\n" +
"</html>";
var s = new MemoryStream();
var sw = new StreamWriter(s);
sw.Write(mockedResponse);
sw.Flush();
s.Seek(0, SeekOrigin.Begin);
mockedWebResponse.GetResponseStream = () => s;
var scraper = new DefaultScraper();
var retVal = scraper.Scrape("http://www.google.co.uk");
Assert.AreEqual(mockedResponse, retVal.Content, "Should have returned the test html response");
Assert.AreEqual("http://www.google.co.uk/someRedirect.aspx", retVal.FinalUrl, "The finalUrl does not correctly represent the redirection that took place.");
}
A: Here is a solution that doesn't require mocking. You implement all three components of the WebRequest: IWebRequestCreate, WebRequest and WebResponse. See below. My example generates failing requests (by throwing WebException), but you should be able to adapt it to send "real" responses:
class WebRequestFailedCreate : IWebRequestCreate {
HttpStatusCode status;
String statusDescription;
public WebRequestFailedCreate(HttpStatusCode hsc, String sd) {
status = hsc;
statusDescription = sd;
}
#region IWebRequestCreate Members
public WebRequest Create(Uri uri) {
return new WebRequestFailed(uri, status, statusDescription);
}
#endregion
}
class WebRequestFailed : WebRequest {
HttpStatusCode status;
String statusDescription;
Uri itemUri;
public WebRequestFailed(Uri uri, HttpStatusCode status, String statusDescription) {
this.itemUri = uri;
this.status = status;
this.statusDescription = statusDescription;
}
WebException GetException() {
SerializationInfo si = new SerializationInfo(typeof(HttpWebResponse), new System.Runtime.Serialization.FormatterConverter());
StreamingContext sc = new StreamingContext();
WebHeaderCollection headers = new WebHeaderCollection();
si.AddValue("m_HttpResponseHeaders", headers);
si.AddValue("m_Uri", itemUri);
si.AddValue("m_Certificate", null);
si.AddValue("m_Version", HttpVersion.Version11);
si.AddValue("m_StatusCode", status);
si.AddValue("m_ContentLength", 0);
si.AddValue("m_Verb", "GET");
si.AddValue("m_StatusDescription", statusDescription);
si.AddValue("m_MediaType", null);
WebResponseFailed wr = new WebResponseFailed(si, sc);
Exception inner = new Exception(statusDescription);
return new WebException("This request failed", inner, WebExceptionStatus.ProtocolError, wr);
}
public override WebResponse GetResponse() {
throw GetException();
}
public override IAsyncResult BeginGetResponse(AsyncCallback callback, object state) {
Task<WebResponse> f = Task<WebResponse>.Factory.StartNew (
_ =>
{
throw GetException();
},
state
);
if (callback != null) f.ContinueWith((res) => callback(f));
return f;
}
public override WebResponse EndGetResponse(IAsyncResult asyncResult) {
return ((Task<WebResponse>)asyncResult).Result;
}
}
class WebResponseFailed : HttpWebResponse {
public WebResponseFailed(SerializationInfo serializationInfo, StreamingContext streamingContext)
: base(serializationInfo, streamingContext) {
}
}
You must create a HttpWebResponse subclass, because you cannot otherwise create one.
The tricky part (in the GetException() method) is feeding in the values you cannot override, e.g. StatusCode, and this is where our bestest buddy SerializationInfo comes in! This is where you supply the values you cannot override. Obviously, override the parts (of HttpWebResponse) you are able to, to get the rest of the way there.
How did I obtain the "names" in all those AddValue() calls? From the exception messages! It was nice enough to tell me every one in turn, until I made it happy.
Now, the compiler will complain about "obsolete" but this nevertheless works, including .NET Framework version 4.
Here is a (passing) test case for reference:
[TestMethod, ExpectedException(typeof(WebException))]
public void WebRequestFailedThrowsWebException() {
string TestURIProtocol = TestContext.TestName;
var ResourcesBaseURL = TestURIProtocol + "://resources/";
var ContainerBaseURL = ResourcesBaseURL + "container" + "/";
WebRequest.RegisterPrefix(TestURIProtocol, new WebRequestFailedCreate(HttpStatusCode.InternalServerError, "This request failed on purpose."));
WebRequest wr = WebRequest.Create(ContainerBaseURL);
try {
WebResponse wrsp = wr.GetResponse();
using (wrsp) {
Assert.Fail("WebRequest.GetResponse() Should not have succeeded.");
}
}
catch (WebException we) {
Assert.IsInstanceOfType(we.Response, typeof(HttpWebResponse));
Assert.AreEqual(HttpStatusCode.InternalServerError, (we.Response as HttpWebResponse).StatusCode, "Status Code failed");
throw we;
}
}
A: This is not a perfect solution, yet it worked for me before and deserves a mention for its simplicity:
HTTPSimulator
Also, a TypeMock example documented in the TypeMock forums:
using System;
using System.IO;
using System.Net;
using NUnit.Framework;
using TypeMock;
namespace MockHttpWebRequest
{
public class LibraryClass
{
public string GetGoogleHomePage()
{
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.google.com");
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
return reader.ReadToEnd();
}
}
}
[TestFixture]
[VerifyMocks]
public class UnitTests
{
private Stream responseStream = null;
private const string ExpectedResponseContent = "Content from mocked response.";
[SetUp]
public void SetUp()
{
System.Text.UTF8Encoding encoding = new System.Text.UTF8Encoding();
byte[] contentAsBytes = encoding.GetBytes(ExpectedResponseContent);
this.responseStream = new MemoryStream();
this.responseStream.Write(contentAsBytes, 0, contentAsBytes.Length);
this.responseStream.Position = 0;
}
[TearDown]
public void TearDown()
{
if (responseStream != null)
{
responseStream.Dispose();
responseStream = null;
}
}
[Test(Description = "Mocks a web request using natural mocks.")]
public void NaturalMocks()
{
HttpWebRequest mockRequest = RecorderManager.CreateMockedObject<HttpWebRequest>(Constructor.Mocked);
HttpWebResponse mockResponse = RecorderManager.CreateMockedObject<HttpWebResponse>(Constructor.Mocked);
using (RecordExpectations recorder = RecorderManager.StartRecording())
{
WebRequest.Create("http://www.google.com");
recorder.CheckArguments();
recorder.Return(mockRequest);
mockRequest.GetResponse();
recorder.Return(mockResponse);
mockResponse.GetResponseStream();
recorder.Return(this.responseStream);
}
LibraryClass testObject = new LibraryClass();
string result = testObject.GetGoogleHomePage();
Assert.AreEqual(ExpectedResponseContent, result);
}
[Test(Description = "Mocks a web request using reflective mocks.")]
public void ReflectiveMocks()
{
Mock<HttpWebRequest> mockRequest = MockManager.Mock<HttpWebRequest>(Constructor.Mocked);
MockObject<HttpWebResponse> mockResponse = MockManager.MockObject<HttpWebResponse>(Constructor.Mocked);
mockResponse.ExpectAndReturn("GetResponseStream", this.responseStream);
mockRequest.ExpectAndReturn("GetResponse", mockResponse.Object);
LibraryClass testObject = new LibraryClass();
string result = testObject.GetGoogleHomePage();
Assert.AreEqual(ExpectedResponseContent, result);
}
}
}
A: You can use NSubstitute, e.g.
var httpWebResponse = Substitute.For<HttpWebResponse>();
httpWebResponse.StatusCode.Returns(HttpStatusCode.NotFound);
httpWebResponse.StatusDescription.Returns("Not Found");
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36"
}
|
Q: How do I choose a CMS/Portal solution for a small website(s)? I currently maintain 3 websites, all revolving around the same concept. Two of them are for WinForms applications, where the website gives a few basic details and download links. The third is a web application to query data. I also have a forum (SMF/TinyPortal) that has been serving as a tech support/news hub for the three sites. The download traffic is decent, but I don't get a lot of hits on the support forums.
I want to consolidate these three entities so that I don't have to duplicate announcements, upload data library updates to multiple locations, and also provide a unified look to the sites.
Fortunately my hosting account has both .NET and PHP support, so I've been looking into Drupal, Graffiti, DotNetNuke, Joomla, Community Server, and more. However, it has been hard for me to discern which features are included, supported, or just not part of each framework whatsoever.
Does anybody have a good evaluation of these projects (and others too) and can evaluate them for features/expandability/customization/etc.? I'm not necessarily looking for a "what's your favorite" but more of a feature set / target end user type evaluation.
A: If you want to quickly compare features on CMS's, then take a look at CMS Matrix - has practically every cms known to man on there.
Edit
To be a little more precise, from the site
CMSMatrix is the number one content management system comparison site on the Internet. It allows users to evaluate over 950 content management systems in 135+ different categories.
A: Go with N2 if you want to get up and running in no time with a couple of nice features packed in. Also, it is really targeted at extensibility and clean code.
http://www.n2cms.com
A: "Open Source cms" has tons of them, and running demos with admin logins
A: DotNetNuke:
*
*very flexible
*lots of community around it
*community tends to be fairly technical, and it can be hard to find useful end-user support
*can be difficult to upgrade and to keep current versions available
*fairly easy to program basic modules for
*100s of available modules (free and pay)
*documentation can be difficult to find and sparse in detail
*easy to skin so your sites can have a unified look
*1000s of pre made skins available.
hopefully this is along the lines of what you are looking for.
A: I've found that CMS Matrix (refer:iAn) can sometimes be a bit out of date but it is definitely a good starting place. Open Source CMS is a good resource (refer:mrinject). I'd lean towards something you can tinker with - closed source could back you into a corner.
If you're looking into .NET then MojoPortal is another option, as is Umbraco etc. Search here on DNN and these others. I've found Drupal to be more intimidating to approach. Also, its forums are pretty basic. Joomla tends to want money for add-ins, as does DNN, although there are freebies for both. Apparently the freebies for Joomla can vary in quality - I never looked into it too closely.
I think the pick of the PHP crowd is Drupal - if you can invest the headspace for learning it. Drupal tends to be more developer-friendly than end-user friendly, so if you're not a developer it is harder to grasp than something like Joomla. Apparently its codebase is better than Joomla's.
Have a browse through the communities - you'll spend some time there so make sure they are to your liking.
If the site is quite simple then perhaps WordPress will suffice, as it has a plethora of plugins and there are lots of templates available for free or for purchase.
I've been meandering down this path for a while now. My advice is to set up some test installs, roughly configure them to something that has what you want, and then try using them and - important - try to break them. Installing them together on the same server is a good way to test the relative speed differences too.
Test drive them - it's the only way to tell which one works for you.
A: DotNetNuke out of the box contains a lot of features, content management, link management, documents list modules, forum modules, and items of that nature. There is also a very good third-party module and skin market out there for getting the enhancements needed to really get a full solution implemented.
With a little bit of time DNN can serve as a great foundation for a collection of websites. It also supports a multi-portal system that allows you to host more than one site off of the same code base which is very helpful.
The best part of all is that it is Free!
A: As you mentioned, there are plenty of options available, and most of them have all the basic features. If you are looking for a simple setup, most may even be overkill for what you are trying to achieve. Which CMS you choose may best depend on your preference for the programming language the CMS is using.
For some websites I maintain, I have used Typo3 (http://www.typo3.com/). The reason for my choice was the flexibility of Typo3, with its many (many!) plugins for all sorts of features, and for the ability to develop plugins yourself.
HTH,
J.
A: Assuming you're going open source, strong considerations are:
An active and knowledgeable community. <-- You don't want to be the only person able to support this CMS in 10 years time.
regular and simple updating techniques.
Your skill sets.
A: As a vendor, I find CMS Matrix to be daunting. It's basically a list of every CMS under the sun, with a few generic ratings and reviews. Before selecting a CMS, I'd commit to a model first, then I'd investigate the various options that are available.
*
*Open Source...has lots of user generated support, but often requires the assistance of outside developers for software maintenance and add-on installation.
*Private installed solutions...can be easier to work with, but lock you in to one vendor for maintenance.
*SaaS Model...still locks in to one vendor, but all updates are included and initial costs are minimal.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87210",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How does gcc implement stack unrolling for C++ exceptions on linux? How does gcc implement stack unrolling for C++ exceptions on linux? In particular, how does it know which destructors to call when unrolling a frame (i.e., what kind of information is stored and where is it stored)?
A: There isn't much documentation currently available, however the basic system is that GCC translates try/catch blocks to function calls and then links in a library with the needed runtime support (documentation about the tree building code includes the statement "throwing an exception is not directly represented in GIMPLE, since it is implemented by calling a function").
Unfortunately I'm not familiar with these functions and can't tell you what to look at (other than the source for libgcc -- which includes the exception handling runtime).
There is an "Exception Handling for Newbies" document available.
A: Although this looks to be for Itanium, presumably the implementation is similar for x86: exception handling ABI
A: See section 6.2 of the x86_64 ABI. This details the interface but not a lot of the underlying data. This is also independent of C++ and could conceivably be used for other purposes as well.
There are primarily two sections of the ELF binary as emitted by gcc which are of interest for exception handling. They are .eh_frame and .gcc_except_table.
.eh_frame follows the DWARF format (the debugging format that primarily comes into play when you're using gdb). It has exactly the same format as the .debug_frame section emitted when compiling with -g. Essentially, it contains the information necessary to pop back to the state of the machine registers and the stack at any point higher up the call stack. See the Dwarf Standard at dwarfstd.org for more information on this.
.gcc_except_table contains information about the exception handling "landing pads", that is, the locations of the handlers. This is necessary so as to know when to stop unwinding. Unfortunately this section is not well documented. The only snippets of information I have been able to glean come from the gcc mailing list. See particularly this post
The remaining piece of information is then what actual code interprets the information found in these data sections. The relevant code lives in libstdc++ and libgcc. I cannot remember at the moment which pieces live in which. The interpreter for the DWARF call frame information can be found in the gcc source code in the file gcc/unwind-dw2.c
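If you want to poke at these sections in your own binaries, standard GNU binutils can dump them; a quick sketch (the file name is illustrative):
readelf -wf ./a.out                        # decode the DWARF CFI records in .eh_frame
objdump -s -j .gcc_except_table ./a.out    # raw hex dump of the handler/landing-pad data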
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
}
|
Q: ASP.NET UrlRewriting and Constructing Page Links So this post talked about how to actually implement url rewriting in an ASP.NET application to get "friendly urls". That works perfect and it is great for sending a user to a specific page, but does anyone know of a good solution for creating "Friendly" URLs inside your code when using one of the tools referenced?
For example listing a link inside of an asp.net control as ~/mypage.aspx?product=12 when a rewrite rule exists would be an issue as then you are linking to content in two different ways.
I'm familiar with using DotNetNuke and FriendlyUrl's where there is a "NavigateUrl" method that will get the friendly Url code from the re-writer but I'm not finding examples of how to do this with UrlRewriting.net or the other solutions out there.
Ideally I'd like to be able to get something like this.
string friendlyUrl = GetFriendlyUrl("~/MyUnfriendlyPage.aspx?myid=13");
EDIT: I am looking for a generic solution, not something that I have to implement for every page in my site, but potentially something that can match against the rules in the opposite direction.
A: See System.Web.Routing
Routing is a different from rewriting. Implementing this technique does require minor changes to your pages (namely, any code accessing querystring parameters will need to be modified), but it allows you to generate links based on the routes you define. It's used by ASP.NET MVC, but can be employed in any ASP.NET application.
Routing is part of .Net 3.5 SP1
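As a rough sketch of what that looks like in .NET 3.5 SP1 (the route pattern, handler class, and page path below are hypothetical, not from the original question; a built-in PageRouteHandler for WebForms only appeared in later framework versions):
using System.Web;
using System.Web.Compilation;
using System.Web.Routing;
using System.Web.UI;

public class ProductRouteHandler : IRouteHandler
{
    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        // Hand the route value to the page via HttpContext.Items, since the
        // page can no longer read it from the query string.
        requestContext.HttpContext.Items["id"] = requestContext.RouteData.Values["id"];
        return (IHttpHandler)BuildManager.CreateInstanceFromVirtualPath("~/product.aspx", typeof(Page));
    }
}

// In Global.asax, Application_Start:
// RouteTable.Routes.Add(new Route("product/{id}", new ProductRouteHandler()));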
A: Create a UrlBuilder class with methods for each page like so:
public class UrlBuilder
{
public static string BuildProductUrl(int id)
{
if (true) // replace with logic to determine if URL rewriting is enabled
{
return string.Format("~/Product/{0}", id);
}
else
{
return string.Format("~/product.aspx?id={0}", id);
}
}
}
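With that in place, page code builds links through the class instead of hard-coding either form, e.g. (the HyperLink name is made up for illustration):
hlProduct.NavigateUrl = UrlBuilder.BuildProductUrl(12); // "~/Product/12" or "~/product.aspx?id=12"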
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: representing CRLF using Hex in C# How do I represent CRLF using Hex in C#?
A: Since no one has actually given the answer requested, here it is:
"\x0d\x0a"
A: End-of-Line characters.
For DOS/Windows it's
x0d x0a (\r\n)
and for *NIX it's
x0a (\n)
*
*dos2unix - Convert x0d x0a to x0a to make it *NIX compatible.
*unix2dos - Convert x0a to x0d x0a to make it DOS/Win compatible.
A: Not sure why, but it's 0x0d, 0x0a, aka "\r\n".
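To make the equivalences concrete, a small C# sketch:
byte[] crlfBytes = { 0x0D, 0x0A };   // CR, LF as raw byte values
string viaHexEscapes = "\x0d\x0a";   // hex escape sequences in a string
string viaClassicEscapes = "\r\n";   // the usual escape sequence
System.Diagnostics.Debug.Assert(viaHexEscapes == viaClassicEscapes);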
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
}
|
Q: Make git-svn work on Slackware 12.1 It is obviously missing some Perl extensions. Perl version is 5.8.8.
I found Error.pm, but now I'm looking for Core.pm.
While we're at it: how do you guys search for those modules? I tried Google, but that didn't help much. Thanks.
And finally, after I built everything, running:
./Build install
gives me:
Running make install-lib
/bin/ginstall -c -d /usr/lib/perl5/site_perl/5.8.8/i486-linux-thread-multi/Alien/SVN --prefix=/usr
/bin/ginstall: unrecognized option `--prefix=/usr'
Try `/bin/ginstall --help' for more information.
make: *** [install-fsmod-lib] Error 1
installing libs failed at inc/My/SVN/Builder.pm line 165.
Looks like Slackware's 'ginstall' really does not have that option. I think I'm going to Google a little bit now, to see how to get around this.
A: Base class package "Module::Build" is empty.
(Perhaps you need to 'use' the module which defines that package first.)
at inc/My/SVN/Builder.pm line 5
BEGIN failed--compilation aborted at inc/My/SVN/Builder.pm line 5.
Compilation failed in require at Build.PL line 6.
BEGIN failed--compilation aborted at Build.PL line 6.
is a (rather poor) way of asking you to install Module::Build.
Once you do that, it's
perl Build.PL
./Build
./Build test
./Build install
A:
how do you guys search for those modules
http://search.cpan.org/
A:
now I'm looking for Core.pm
That’s SVN::Core, which is a bit of a problem. Try installing Alien::SVN from CPAN. That worked for me on my freshly installed Slackware 12.0 on my laptop, but I have yet to get it to install on my workstation.
A: It should be compatible. The CPAN Tester's matrix shows no failures for Perl 5.8.8 on any platform.
Per the README, you can install it by doing:
perl Makefile.PL
make
make test
make install
A: https://metacpan.org/ is your first port of call for Perl modules.
A: I'm guessing you're running on Slackware, so the cpan command is what you want to be using to install any Perl modules. It will pull in all dependencies for you. If you're running it for the first time it will have to do some configuration, but newer versions of cpan will ask if you want it to automatically configure it.
$ sudo cpan
cpan> install Alien::SVN
Additionally, if there's a package management application for Slackware, you should try that first to install new Perl modules.
A: What do you mean by "does not seem to be compatible"? Can you post the error message?
If the latest version does not work, you can select an older version in the "other releases" drop down and download that.
Edit: to those reading this, the author updated the question, so my answer seems a bit out of left field :)
A: The place to search is http://search.cpan.org.
I have my browser (Firefox) set up so that I can type "cpan foo" in the address bar and it will search CPAN for modules matching "foo." You can do this with either a keyword bookmark or by assigning a keyword to a search plugin.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to edit sessions parameters on Oracle 10g XE? The default value is 49.
How do I increase it?
A: You will need to issue the following command (connected as a user that has alter system privileges, sys will do it)
alter system set sessions=numberofsessions scope=spfile;
Have you been getting an ORA-12516 or ORA-12520 error?
If so it's probably a good idea to increase the number of processes too
alter system set processes=numberofprocesses scope=spfile;
IIRC you'll need to bounce the database after issuing these commands.
This link http://www.oracle.com/technology/tech/php/pdf/underground-php-oracle-manual.pdf has some good information about configuring XE.
I consulted it when I ran into similar issues using XE.
A: You can check connection limits in order to fine tune the session/process limits:
http://zhefeng.wordpress.com/2008/09/24/ora-12516-error-tnslistener-could-not-find-available-handler-with-matching-protocol-stack/?
Step1: take a look at the processes limitation.
select * from gv$resource_limit;
Step2: increase the parameter from 150 (default) to 300 (or any other desired number)
sql>alter system set processes=300 scope=spfile;
Step3: reboot the database to let parameter taking effect.
ps: check link for more info.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: In ASP.net Webforms how do you detect which Textbox someone pressed enter? In ASP.net Webforms how do you detect which Textbox someone pressed enter?
Please no Javascript answers.
I need to handle it all in the code behind using VB.NET.
A: I suspect it cannot be done without javascript - when you hit enter, the browser submits the form - it doesn't submit what field had the focus. So unless you use JS to add that information to the form being submitted, you're out of luck.
A: Why do you need to determine which TextBox the Enter key was pressed in? Are you looking to see which TextBox had focus so that you can trigger the proper button click event?
If you are looking to do something like this, one trick I've done was to "group" the appropriate form elements within their own panel and then set the "DefaultButton" property accordingly.
Doing this allows me to have a "Search by Name", "Search by Department", "Search by Id", etc. Textbox/Button combination on a single form and still allow the user to type their query parameter, hit Enter, and have the proper search method get invoked in the code behind.
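A minimal sketch of that idea in code-behind (control names are made up for illustration; C# shown, the VB.NET equivalent is direct):
protected void Page_Load(object sender, EventArgs e)
{
    // Pressing Enter inside a Panel raises the Click event of its DefaultButton.
    pnlSearchByName.DefaultButton = "btnSearchByName";
    pnlSearchById.DefaultButton = "btnSearchById";
}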
A: Without using Javascript, you just can't. That information is not conveyed from the client browser to the server.
A: As far as I know there is no possible way for a server side script to detect that. It simply does not get sent to the server. It must be done client-side (i.e. with Javascript) and then sent to the server.
A: I solved this for one site's search by looking at the Request.Form object, server side to see if the search box had a value. I did it in a base class that all my pages (or a base class for the masterpage) inherit from. If it has a value, the odds are pretty good somebody typed something in and hit enter and so I handled the search.
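Something along these lines (a sketch with made-up control and method names; C# shown for brevity, inside the page base class):
protected override void OnLoad(EventArgs e)
{
    base.OnLoad(e);
    // If the search box came back with a value, the user most likely
    // typed a query and pressed Enter.
    string query = Request.Form["txtSearch"];
    if (IsPostBack && !string.IsNullOrEmpty(query))
    {
        HandleSearch(query); // hypothetical search routine
    }
}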
A: In the event handler, the "source" object (the first parameter of the event handler) is the object raising the event. Cast it to Button and get the name, or use reflection to get information out of the non-typed object.
In addition, if the control is a child of a web control that does not raise its own events, then you can use OnBubbleEvent to determine what's going on. OnBubbleEvent also has a "source" parameter you can type, or use reflection on.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Mathematical analysis of a sound sample (as an array of numbers) I need to find the frequency of a sample, stored (in vb) as an array of byte. The sample is a sine wave of known frequency (so I can check), but the numbers are a bit odd, and my maths-foo is weak.
Full range of values 0-255. 99% of numbers are in range 235 to 245, but there are some outliers down to 0 and 1, and up to 255 in the remaining 1%.
How do I normalise this to remove outliers (calculating the 235-245 interval, as it may change with different samples), and how do I then calculate zero-crossings to get the frequency?
Apologies if this description is rubbish!
A: The FFT is probably the best answer, but if you really want to do it by your method, try this:
To normalize, first make a histogram to count how many occurrances of each value from 0 to 255. Then throw out X percent of the values from each end with something like:
for (i = lower = 0; i < N*X/100; lower++)
    i += count[lower];
//repeat in other direction for upper
Now normalize with
A[i] = 255*(A[i]-lower)/(upper-lower)-128
Throw away results outside the -128..127 range.
Now you can count zero crossings. To make sure you are not fooled by noise, you might want to keep track of the slope over the last several points, and only count crossings when the average slope is going the right way.
A: The standard method to attack this problem is to consider one block of data, hopefully at least twice the actual frequency (taking more data isn't bad, so it's good to overestimate a bit), then take the FFT and guess that the frequency corresponds to the largest number in the resulting FFT spectrum.
By the way, very similar problems have been asked here before - you could search for those answers as well.
A: Use the Fourier transform, it's much more noise insensitive than counting zero crossings
Edit: @WaveyDavey
I found an F# library to do an FFT: From here
As it turns out, the best free
implementation that I've found for F#
users so far is still the fantastic
FFTW library. Their site has a
precompiled Windows DLL. I've written
minimal bindings that allow
thread-safe access to FFTW from F#,
with both guru and simple interfaces.
Performance is excellent, 32-bit
Windows XP Pro is only up to 35%
slower than 64-bit Linux.
Now I'm sure you can call an F# lib from VB.NET, C#, etc.; that should be in their docs.
A: If I understood well from your description, what you have is a signal which is a combination of a sine plus a constant plus some random glitches. Say, like
x[n] = A*sin(f*n + phi) + B + N[n]
where N[n] is the "glitch" noise you want to get rid of.
If the glitches are one sample long, you can remove them using a median filter whose window is bigger than the glitch length on both sides of the glitch. For glitches of length 1, a median over 3 samples is enough.
y[n] = median3(x[n])
The median is computed like this: take the samples of x you want to filter (x[n-1], x[n], x[n+1]), sort them, and your output is the middle one.
Now that the noise signal is away, get rid of the constant signal. I understand the buffer is of a limited and known length, so you can just compute the mean of the whole buffer. Subtract it.
Now you have your single sine signal. You can now compute the fundamental frequency by counting zero crossings. Count the amount of samples above 0 in which the former sample was below 0. The period is the total amount of samples of your buffer divided by this, and the frequency is the inverse (1/x) of the period.
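A minimal C# sketch of those last two steps, assuming 8-bit unsigned samples and a known sample rate (the median filter is left out):
// Estimate the fundamental frequency by counting upward zero crossings
// after removing the DC offset (the mean of the buffer).
static double EstimateFrequency(byte[] samples, double sampleRate)
{
    double mean = 0;
    foreach (byte s in samples) mean += s;
    mean /= samples.Length;

    int crossings = 0;
    for (int i = 1; i < samples.Length; i++)
    {
        // A crossing: the previous sample was below the mean, this one is at or above it.
        if (samples[i - 1] < mean && samples[i] >= mean)
            crossings++;
    }
    return crossings * sampleRate / samples.Length; // cycles per second
}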
A: Although I would go with the majority and say that it seems like what you want is an fft solution (fft algorithm is pretty quick), if fft is not the answer for whatever reason you may want to try fitting a sine curve to the data using a fitting program and reading off the fitted frequency.
Using Fityk, you can load the data, and fit to a*sin(b*x-c), where b/(2*pi) will give you the frequency after fitting (2*pi/b is the period).
Fityk can be used from a gui, from a command-line for scripting and has a C++ API so could be included in your programs directly.
A: I googled for "basic fft". Visual Basic FFT Your question screams FFT, but be careful, using FFT without understanding even a little bit about DSP can lead results that you don't understand or don't know where they come from.
A: get the Frequency Analyzer at http://www.relisoft.com/Freeware/index.htm and run it and look at the code.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: How to embed audio/video on HTML page that plays on iPhone browser over GPRS Although I don't have an iPhone to test this out, my colleague told me that embedded media files, such as the one in the snippet below, only work when the iPhone is connected over the WLAN connection or 3G, and do not work when connecting via GPRS.
<html><body>
<object data="http://joliclic.free.fr/html/object-tag/en/data/test.mp3" type="audio/mpeg">
<p>alternate text</p>
</object>
</body></html>
Is there an example URL with a media file, that will play in an iPhone browser
when the iphone connects using GPRS (not 3G)?
A: The iPhone YouTube application automatically downloads lower quality video when connected via EDGE than when connected via Wi-Fi, because the network is much slower. That fact leads me to believe Apple would make the design decision to not bother downloading an MP3 over EDGE. The browser has no way to know in advance if the bit rate is low enough, and chances are, it won't be. So rather than frustrate the users with a sound file that takes too long to play (and prevents thems from receiving a call while downloading), it's better to spare them the grief and encourage them to find a Wi-Fi hotspot.
A: Try something like this, it works on a web page. This is actually a 320kbps mp3, but it is only 30 seconds long. You can use a program called LAME to convert mp3's to a bitrate that will work for you.
<div class="music">
<p>Pachelbel's Canon</p>
<!--[if !IE]>-->
<object id="Cannon" type="audio/mpeg" data="http://calgarydj.ca/sound%20files/Pachebels%20Cannon.mp3" width="250" height="16">
<param name="autoplay" value="false" />
<param name="src" value="http://calgarydj.ca/sound%20files/Pachebels%20Cannon.mp3" />
<!--<![endif]-->
<object id="Cannon" classid="CLSID:6BF52A52-394A-11d3-B153-00C04F79FAA6" width="250" height="60">
<param name="autostart" value="false" />
<param name="url" value="http://calgarydj.ca/sound%20files/Pachebels%20Cannon.mp3" />
<param name="showcontrols" value="true" />
<param name="volume" value="100" />
<!--[if !IE]>--></object><!--<![endif]-->
</object>
</div><!-- end of control -->
A: I wasn't aware of that limitation. Although it does make sense to disable potentially data-hefty OBJECT or EMBED tags when on the cellular data service for which your provider may be charging by the byte, if that were the reason it wouldn't make sense that it would still work on 3G and only not on GPRS.
Perhaps the problem is one of basic data throughput? Not having an iPhone yourself (or myself) makes it difficult to test your colleague's statement.
Remember that GPRS is much slower than Wi-Fi or 3G. According to Wikipedia, GPRS will provide between 56 and 114 kbps of total duplex throughput, not all of which is in the download direction. You can already see that's not fast enough to instantly stream a typical 128 kbps mp3, even if you were getting the optimal throughput and getting it all as download speed.
Looking at this forum discussion as an example that came up on Google, the GPRS customers (the ones not using Telestra, which is an EDGE provider in that area) are getting around 40 kbps. So if as the question implies, you're stuck in EDGEland, NOT 3Gland or anything inbetween, it's going to take about 20 seconds of buffering to play a 30 second mp3. And when you use a behaviour-ambiguous tag like OBJECT or EMBED, there's no guarantee in how the browser will interpret it and whether it's going to try to intelligently stream the file rather than having to download the whole thing before starting it.
So, it's quite possible your colleague just didn't wait long enough to see if whatever embedded media he chose as a test started to play (assuming he wasn't using your 17KB test mp3 there). It's also possible that the iPhone does indeed have this limitation, though I'd think Google would be more forthcoming with it than my quick search uncovered, since people have been vocal enough with other things they don't like about iPhone. Another possibility would be that it's a limitation in the build of Safari that currently ships with the iPhone which might be changed in future versions or in another browser.
Ultimately though, the question is, what kind of user experience do you really want? Embedded audio on GPRS is going to take a long time to load, and users aren't going to enjoy the experience, or potentially even experience it at all if it's supposed to start playing on page visit and it doesn't load before they navigate away. It might not be a goal worth striving towards in that case.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How do I limit the number of simultaneous downloads in Asp.net and/or IIS? I have a website with a lot of large files. However, I don't want users to start downloading like 10 files at a time. I noticed there are website out there where they only allow 2 simultaneous downloads.
My website is programmed using ASP.net running on IIS. Does anyone know how I can limit simultaneous downloads?
A: The Dynamic IP Restrictions module from Microsoft (currently in beta) will do this.
For details and a download: http://www.iis.net/download/DynamicIPRestrictions
A: I think the only problem with max concurrent in IIS is it might block page requests rather than just download requests.
I'd say write an HTTP Handler which actually does the download and can then decide (based on IP or Cookie) if a download is allowed to be sent back to the browser. Pretty straight forward code I'd think.
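A rough sketch of such a handler (the class name, the limit, and the 503 response are illustrative; a real version needs entry eviction, logging, and path validation):
using System.Collections.Generic;
using System.Web;

public class ThrottledDownloadHandler : IHttpHandler
{
    private const int MaxConcurrent = 2;
    private static readonly Dictionary<string, int> Active = new Dictionary<string, int>();
    private static readonly object Gate = new object();

    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        string ip = context.Request.UserHostAddress;
        lock (Gate)
        {
            int current;
            Active.TryGetValue(ip, out current);
            if (current >= MaxConcurrent)
            {
                context.Response.StatusCode = 503; // refuse the extra simultaneous download
                return;
            }
            Active[ip] = current + 1;
        }
        try
        {
            // Stream the requested file from a downloads folder.
            string fileName = VirtualPathUtility.GetFileName(context.Request.Path);
            context.Response.ContentType = "application/octet-stream";
            context.Response.TransmitFile(context.Server.MapPath("~/files/" + fileName));
        }
        finally
        {
            lock (Gate) { Active[ip] = Active[ip] - 1; }
        }
    }
}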
A: Do you want to do it programmatically? Otherwise I believe there is a setting for max concurrent connections from an IP address in IIS.
A:
I think the only problem with max concurrent in IIS is it might block page requests rather than just download requests.
I'm no IIS expert, but if this setting is per domain / virtual host, you are set. If you can serve your downloads from a sub-domain that isn't used for anything else, the setup will not interfere with browsers that fetch several page elements at once.
{
"language": "en",
"url": "https://stackoverflow.com/questions/87294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: What is the most convincing command in Vim I want to ditch my current editor. I feel I need something else that does not expose my hands to the risk of RSI. I need to see why I should change editor. And it would be nice to believe that I will be coding when I'm 80 years old.
All the big guys out there are using Vim. The only Emacs guy I know is RMS. Paul Graham is a Vi dude.
A: the numbers.
in command mode
type a number ( any number of digits )
type a command.
that command will be executed $number times
ie:
99dd
erases the next 99 lines.
A: *
*The fast startup time.
*The sharp distinction between editing and viewing. (you know when you edit)
*The only way you ever find what you are looking for is with search "/", and that is good, since it is much faster than your eyes.
But the best command(s) are:
/ - search string
ZZ - quit
. - repeat the last change
%! - filter the buffer through a unix command
A: . (dot) - repeats the last editing action. Really handy when you need to perform a few similar edits.
A: Handling multi-line regexps in search strings with "\_.". While checking text files of over 4GB in various formats, it has saved my life several times.
A: :help usr_12.txt
That'll bring up a section in the help system that discusses "Clever Tricks". If those don't get you excited I don't know what will!
A: Recording macros
A: The asterisk.
*
Its effect: Immediately search for the next instance of the word under the cursor.
A: The best thing is the efficiency with which you can edit code (which is done a lot in programming). The commands such as
*
*cw to change a word
*dw to delete a word
*ct, to change all text until the next comma
*ci( to change the contents of the parentheses you're currently in
*xp to correct spelling mistakes ("spleling" -> cursor on l -> xp -> "spelling")
*o to insert a new line below and start editing
*O to insert a new line above
Then there is the possibility to work with named registers very quickly. To move a block, just select it, press d, then move to its new location and press p. Much faster than Ctrl-C and Ctrl-V. Use "ud to delete text and move it to register u (I use this one for the commenting template).
And also Vim has all the scripting support you need (either using it's native scripting language or using Python, Ruby, ...)
A: Why are you looking to be convinced to start using a different editor? If you're happy with what you have now, stick. If not, perhaps ask about editors with features that you lack.
A: Even if you are using Visual Studio there is the wonderful vsvim.
A: The lovely built in regular expression evaluator.
A: I love the speed of Vim but I find it lacks the features of a modern IDE for C++ development. Eclipse CDT with the viPlugin is a good compromise.
You get the power and source overview provided by Eclipse CDT with the speed and flexibility of Vim for coding.
A: Maybe reading "Come home to vim" by Steve Losh article is a good start, or
a series of videos about interesting plugins. And be sure to see some articles on the site vimcasts.org
A: You should map Caps Lock to Esc. It will make getting out of insert mode feel natural as opposed to the awkward move you must make to press the ESC key. Besides, who uses Caps Lock anyway?
A: To be truly inspired, you must see a vim guru in action. If you do not have a local guru, here is a video to inspire you.
http://www.youtube.com/watch?v=jDWBJOXs_iI&feature=related
If you don't already know vim, the speed at which code is navigated, sliced, and diced will be indistinguishable from magic. After a few months of studying vim, the same editing speed will seem commonplace.
A: \v
Make your regular expressions mostly Perl compatible.
See very magic section here for more information.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: How can I create a human-readable script for every DTS package on a SQL server? I know I can edit each individual DTS package and save it as a Visual Basic script, but with hundreds of packages on the server, that will take forever. How can I script them all at once? I'd like to be able to create one file per package so that I can check them into source control, search them to see which one references a specific table, or compare the packages on our development server to the packages on our production server.
A: I ended up digging through the SQL 2000 documentation (Building SQL Server Applications / DTS Programming / Programming DTS Applications / DTS Object Model) and creating a VBS script to read the packages and write XML files. It's not complete, and it could be improved in several ways, but it's a big start:
GetPackages.vbs
Option Explicit
Sub GetProperties (strPackageName, dtsProperties, xmlDocument, xmlProperties)
Dim dtsProperty
If Not dtsProperties Is Nothing Then
For Each dtsProperty in dtsProperties
If dtsProperty.Set Then
Dim xmlProperty
Set xmlProperty = xmlProperties.insertBefore ( _
xmlDocument.createElement ("Property"), _
xmlProperties.selectSingleNode ("Property[@Name > '" & dtsProperty.Name & "']"))
'properties
'xmlProperty.setAttribute "Get", dtsProperty.Get
'xmlProperty.setAttribute "Set", dtsProperty.Set
xmlProperty.setAttribute "Type", dtsProperty.Type
xmlProperty.setAttribute "Name", dtsProperty.Name
If not isnull(dtsProperty.Value) Then
xmlProperty.setAttribute "Value", dtsProperty.Value
End If
'collections
'getting properties of properties causes a stack overflow
'GetProperties strPackageName, dtsProperty.Properties, xmlDocument, xmlProperty.appendChild (xmlDocument.createElement ("Properties"))
End If
Next
End If
End Sub
Sub GetOLEDBProperties (strPackageName, dtsOLEDBProperties, xmlDocument, xmlOLEDBProperties)
Dim dtsOLEDBProperty
For Each dtsOLEDBProperty in dtsOLEDBProperties
If dtsOLEDBProperty.IsDefaultValue = 0 Then
Dim xmlOLEDBProperty
Set xmlOLEDBProperty = xmlOLEDBProperties.insertBefore ( _
xmlDocument.createElement ("OLEDBProperty"), _
xmlOLEDBProperties.selectSingleNode ("OLEDBProperty[@Name > '" & dtsOLEDBProperty.Name & "']"))
'properties
xmlOLEDBProperty.setAttribute "Name", dtsOLEDBProperty.Name
'xmlOLEDBProperty.setAttribute "PropertyID", dtsOLEDBProperty.PropertyID
'xmlOLEDBProperty.setAttribute "PropertySet", dtsOLEDBProperty.PropertySet
xmlOLEDBProperty.setAttribute "Value", dtsOLEDBProperty.Value
'xmlOLEDBProperty.setAttribute "IsDefaultValue", dtsOLEDBProperty.IsDefaultValue
'collections
'these properties are the same as the ones directly above
'GetProperties strPackageName, dtsOLEDBProperty.Properties, xmlDocument, xmlOLEDBProperty.appendChild (xmlDocument.createElement ("Properties"))
End If
Next
End Sub
Sub GetConnections (strPackageName, dtsConnections, xmlDocument, xmlConnections)
Dim dtsConnection2
For Each dtsConnection2 in dtsConnections
Dim xmlConnection2
Set xmlConnection2 = xmlConnections.insertBefore ( _
xmlDocument.createElement ("Connection2"), _
xmlConnections.selectSingleNode ("Connection2[@Name > '" & dtsConnection2.Name & "']"))
'properties
xmlConnection2.setAttribute "ID", dtsConnection2.ID
xmlConnection2.setAttribute "Name", dtsConnection2.Name
xmlConnection2.setAttribute "ProviderID", dtsConnection2.ProviderID
'collections
GetProperties strPackageName, dtsConnection2.Properties, xmlDocument, xmlConnection2.appendChild (xmlDocument.createElement ("Properties"))
Dim dtsOLEDBProperties
On Error Resume Next
Set dtsOLEDBProperties = dtsConnection2.ConnectionProperties
If Err.Number = 0 Then
On Error Goto 0
GetOLEDBProperties strPackageName, dtsOLEDBProperties, xmlDocument, xmlConnection2.appendChild (xmlDocument.createElement ("ConnectionProperties"))
Else
MsgBox Err.Description & vbCrLf & "ProviderID: " & dtsConnection2.ProviderID & vbCrLf & "Connection Name: " & dtsConnection2.Name, , strPackageName
On Error Goto 0
End If
Next
End Sub
Sub GetGlobalVariables (strPackageName, dtsGlobalVariables, xmlDocument, xmlGlobalVariables)
Dim dtsGlobalVariable2
For Each dtsGlobalVariable2 in dtsGlobalVariables
Dim xmlGlobalVariable2
Set xmlGlobalVariable2 = xmlGlobalVariables.insertBefore ( _
xmlDocument.createElement ("GlobalVariable2"), _
xmlGlobalVariables.selectSingleNode ("GlobalVariable2[@Name > '" & dtsGlobalVariable2.Name & "']"))
'properties
xmlGlobalVariable2.setAttribute "Name", dtsGlobalVariable2.Name
If Not Isnull(dtsGlobalVariable2.Value) Then
xmlGlobalVariable2.setAttribute "Value", dtsGlobalVariable2.Value
End If
'no extended properties
'collections
'GetProperties strPackageName, dtsGlobalVariable2.Properties, xmlDocument, xmlGlobalVariable2.appendChild (xmlDocument.createElement ("Properties"))
Next
End Sub
Sub GetSavedPackageInfos (strPackageName, dtsSavedPackageInfos, xmlDocument, xmlSavedPackageInfos)
Dim dtsSavedPackageInfo
For Each dtsSavedPackageInfo in dtsSavedPackageInfos
Dim xmlSavedPackageInfo
Set xmlSavedPackageInfo = xmlSavedPackageInfos.appendChild (xmlDocument.createElement ("SavedPackageInfo"))
'properties
xmlSavedPackageInfo.setAttribute "Description", dtsSavedPackageInfo.Description
xmlSavedPackageInfo.setAttribute "IsVersionEncrypted", dtsSavedPackageInfo.IsVersionEncrypted
xmlSavedPackageInfo.setAttribute "PackageCreationDate", dtsSavedPackageInfo.PackageCreationDate
xmlSavedPackageInfo.setAttribute "PackageID", dtsSavedPackageInfo.PackageID
xmlSavedPackageInfo.setAttribute "PackageName", dtsSavedPackageInfo.PackageName
xmlSavedPackageInfo.setAttribute "VersionID", dtsSavedPackageInfo.VersionID
xmlSavedPackageInfo.setAttribute "VersionSaveDate", dtsSavedPackageInfo.VersionSaveDate
Next
End Sub
Sub GetPrecedenceConstraints (strPackageName, dtsPrecedenceConstraints, xmlDocument, xmlPrecedenceConstraints)
Dim dtsPrecedenceConstraint
For Each dtsPrecedenceConstraint in dtsPrecedenceConstraints
Dim xmlPrecedenceConstraint
Set xmlPrecedenceConstraint = xmlPrecedenceConstraints.insertBefore ( _
xmlDocument.createElement ("PrecedenceConstraint"), _
xmlPrecedenceConstraints.selectSingleNode ("PrecedenceConstraint[@StepName > '" & dtsPrecedenceConstraint.StepName & "']"))
'properties
xmlPrecedenceConstraint.setAttribute "StepName", dtsPrecedenceConstraint.StepName
'collections
GetProperties strPackageName, dtsPrecedenceConstraint.Properties, xmlDocument, xmlPrecedenceConstraint.appendChild (xmlDocument.createElement ("Properties"))
Next
End Sub
Sub GetSteps (strPackageName, dtsSteps, xmlDocument, xmlSteps)
Dim dtsStep2
For Each dtsStep2 in dtsSteps
Dim xmlStep2
Set xmlStep2 = xmlSteps.insertBefore ( _
xmlDocument.createElement ("Step2"), _
xmlSteps.selectSingleNode ("Step2[@Name > '" & dtsStep2.Name & "']"))
'properties
xmlStep2.setAttribute "Name", dtsStep2.Name
xmlStep2.setAttribute "Description", dtsStep2.Description
'collections
GetProperties strPackageName, dtsStep2.Properties, xmlDocument, xmlStep2.appendChild (xmlDocument.createElement ("Properties"))
GetPrecedenceConstraints strPackageName, dtsStep2.PrecedenceConstraints, xmlDocument, xmlStep2.appendChild (xmlDocument.createElement ("PrecedenceConstraints"))
Next
End Sub
Sub GetColumns (strPackageName, dtsColumns, xmlDocument, xmlColumns)
Dim dtsColumn
For Each dtsColumn in dtsColumns
Dim xmlColumn
Set xmlColumn = xmlColumns.appendChild (xmlDocument.createElement ("Column"))
GetProperties strPackageName, dtsColumn.Properties, xmlDocument, xmlColumn.appendChild (xmlDocument.createElement ("Properties"))
Next
End Sub
Sub GetLookups (strPackageName, dtsLookups, xmlDocument, xmlLookups)
Dim dtsLookup
For Each dtsLookup in dtsLookups
Dim xmlLookup
Set xmlLookup = xmlLookups.appendChild (xmlDocument.createElement ("Lookup"))
GetProperties strPackageName, dtsLookup.Properties, xmlDocument, xmlLookup.appendChild (xmlDocument.createElement ("Properties"))
Next
End Sub
Sub GetTransformations (strPackageName, dtsTransformations, xmlDocument, xmlTransformations)
Dim dtsTransformation
For Each dtsTransformation in dtsTransformations
Dim xmlTransformation
Set xmlTransformation = xmlTransformations.appendChild (xmlDocument.createElement ("Transformation"))
GetProperties strPackageName, dtsTransformation.Properties, xmlDocument, xmlTransformation.appendChild (xmlDocument.createElement ("Properties"))
Next
End Sub
Sub GetTasks (strPackageName, dtsTasks, xmlDocument, xmlTasks)
Dim dtsTask
For each dtsTask in dtsTasks
Dim xmlTask
Set xmlTask = xmlTasks.insertBefore ( _
xmlDocument.createElement ("Task"), _
xmlTasks.selectSingleNode ("Task[@Name > '" & dtsTask.Name & "']"))
' The task can be of any task type, and each type of task has different properties.
'properties
xmlTask.setAttribute "CustomTaskID", dtsTask.CustomTaskID
xmlTask.setAttribute "Name", dtsTask.Name
xmlTask.setAttribute "Description", dtsTask.Description
'collections
GetProperties strPackageName, dtsTask.Properties, xmlDocument, xmlTask.appendChild (xmlDocument.createElement ("Properties"))
If dtsTask.CustomTaskID = "DTSDataPumpTask" Then
GetOLEDBProperties strPackageName, dtsTask.CustomTask.SourceCommandProperties, xmlDocument, xmlTask.appendChild (xmlDocument.createElement ("SourceCommandProperties"))
GetOLEDBProperties strPackageName, dtsTask.CustomTask.DestinationCommandProperties, xmlDocument, xmlTask.appendChild (xmlDocument.createElement ("DestinationCommandProperties"))
GetColumns strPackageName, dtsTask.CustomTask.DestinationColumnDefinitions, xmlDocument, xmlTask.appendChild (xmlDocument.createElement ("DestinationColumnDefinitions"))
GetLookups strPackageName, dtsTask.CustomTask.Lookups, xmlDocument, xmlTask.appendChild (xmlDocument.createElement ("Lookups"))
GetTransformations strPackageName, dtsTask.CustomTask.Transformations, xmlDocument, xmlTask.appendChild (xmlDocument.createElement ("Transformations"))
End If
Next
End Sub
Sub FormatXML (xmlDocument, xmlElement, intIndent)
Dim xmlSubElement
For Each xmlSubElement in xmlElement.selectNodes ("*")
xmlElement.insertBefore xmlDocument.createTextNode (vbCrLf & String (intIndent + 1, vbTab)), xmlSubElement
FormatXML xmlDocument, xmlSubElement, intIndent + 1
Next
If xmlElement.selectNodes ("*").length > 0 Then
xmlElement.appendChild xmlDocument.createTextNode (vbCrLf & String (intIndent, vbTab))
End If
End Sub
Sub GetPackage (strServerName, strPackageName)
Dim dtsPackage2
Set dtsPackage2 = CreateObject ("DTS.Package2")
Dim DTSSQLStgFlag_Default
Dim DTSSQLStgFlag_UseTrustedConnection
DTSSQLStgFlag_Default = 0
DTSSQLStgFlag_UseTrustedConnection = 256
On Error Resume Next
dtsPackage2.LoadFromSQLServer strServerName, , , DTSSQLStgFlag_UseTrustedConnection, , , , strPackageName
If Err.Number = 0 Then
On Error Goto 0
'fsoTextStream.WriteLine dtsPackage2.Name
Dim xmlDocument
Set xmlDocument = CreateObject ("Msxml2.DOMDocument.3.0")
Dim xmlPackage2
Set xmlPackage2 = xmlDocument.appendChild (xmlDocument.createElement ("Package2"))
'properties
xmlPackage2.setAttribute "Name", dtsPackage2.Name
'collections
GetProperties strPackageName, dtsPackage2.Properties, xmlDocument, xmlPackage2.appendChild (xmlDocument.createElement("Properties"))
GetConnections strPackageName, dtsPackage2.Connections, xmlDocument, xmlPackage2.appendChild (xmlDocument.createElement ("Connections"))
GetGlobalVariables strPackageName, dtsPackage2.GlobalVariables, xmlDocument, xmlPackage2.appendChild (xmlDocument.createElement ("GlobalVariables"))
'SavedPackageInfos only apply to DTS packages saved in structured storage files
'GetSavedPackageInfos strPackageName, dtsPackage2.SavedPackageInfos, xmlDocument, xmlPackage2.appendChild (xmlDocument.createElement ("SavedPackageInfos"))
GetSteps strPackageName, dtsPackage2.Steps, xmlDocument, xmlPackage2.appendChild (xmlDocument.createElement ("Steps"))
GetTasks strPackageName, dtsPackage2.Tasks, xmlDocument, xmlPackage2.appendChild (xmlDocument.createElement ("Tasks"))
FormatXML xmlDocument, xmlPackage2, 0
xmlDocument.save strPackageName + ".xml"
Else
MsgBox Err.Description, , strPackageName
On Error Goto 0
End If
End Sub
Sub Main
Dim strServerName
strServerName = Trim (InputBox ("Server:"))
If strServerName <> "" Then
Dim cnSQLServer
Set cnSQLServer = CreateObject ("ADODB.Connection")
cnSQLServer.Open "Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=msdb;Data Source=" & strServerName
Dim rsDTSPackages
Set rsDTSPackages = cnSQLServer.Execute ("SELECT DISTINCT name FROM sysdtspackages ORDER BY name")
Dim strPackageNames
Do While Not rsDTSPackages.EOF
GetPackage strServerName, rsDTSPackages ("name")
rsDTSPackages.MoveNext
Loop
rsDTSPackages.Close
set rsDTSPackages = Nothing
cnSQLServer.Close
Set cnSQLServer = Nothing
Dim strCustomTaskIDs
Dim strCustomTaskID
MsgBox "Finished", , "GetPackages.vbs"
End If
End Sub
Main
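To run it, invoke the script with the Windows Script Host; it prompts for a server name and writes one .xml file per package into the current directory:
cscript //nologo GetPackages.vbs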
A: You might try working with the system table sysdtspackages as demonstrated on sqldts.com in Transferring DTS Packages.
Also, there used to be many tools available for MS SQL 2000 before the new versions proliferated. I found one, called DTS Package Compare, as a free download at Red Gate Labs.
A: For completeness, I started another VBS script to read an XML file generated by GetPackages.vbs and save it as a DTS package on another SQL Server. This is even less complete, but I hope it will eventually be useful.
PushPackages.vbs
Option Explicit
Sub SetProperties (dtsProperties, xmlProperties)
dim xmlProperty
For Each xmlProperty in xmlProperties.selectNodes ("Property[@Set='-1']")
dtsProperties.Item (xmlProperty.getAttribute ("Name")).Value = xmlProperty.getAttribute ("Value")
Next
End Sub
Sub SetOLEDBProperties (dtsOLEDBProperties, xmlOLEDBProperties)
dim xmlOLEDBProperty
For Each xmlOLEDBProperty in xmlOLEDBProperties.selectNodes ("OLEDBProperty")
dtsOLEDBProperties.Item (xmlOLEDBProperty.getAttribute ("Name")).Value = xmlOLEDBProperty.getAttribute ("Value")
Next
End Sub
Sub SetConnections (dtsConnections, xmlConnections)
dim dtsConnection2
dim xmlConnection2
For each xmlConnection2 in xmlConnections.selectNodes ("Connection2")
set dtsConnection2 = dtsConnections.New (xmlConnection2.getAttribute ("ProviderID"))
SetProperties dtsConnection2.Properties, xmlConnection2.selectSingleNode ("Properties")
SetOLEDBProperties dtsConnection2.ConnectionProperties, xmlConnection2.selectSingleNode ("ConnectionProperties")
dtsConnections.Add dtsConnection2
Next
End Sub
Sub SetGlobalVariables (dtsGlobalVariables, xmlGlobalVariables)
dim xmlGlobalVariable2
For Each xmlGlobalVariable2 in xmlGlobalVariables.selectNodes ("GlobalVariable2")
dtsGlobalVariables.AddGlobalVariable xmlGlobalVariable2.getAttribute ("Name"), xmlGlobalVariable2.getAttribute ("Value")
Next
End Sub
Sub SetPrecedenceConstraints (dtsPrecedenceConstraints, xmlPrecedenceConstraints)
dim xmlPrecedenceConstraint
dim dtsPrecedenceConstraint
For Each xmlPrecedenceConstraint in xmlPrecedenceConstraints.selectNodes ("PrecedenceConstraint")
set dtsPrecedenceConstraint = dtsPrecedenceConstraints.New (xmlPrecedenceConstraint.getAttribute ("StepName"))
SetProperties dtsPrecedenceConstraint.Properties, xmlPrecedenceConstraint.selectSingleNode ("Properties")
dtsPrecedenceConstraints.Add dtsPrecedenceConstraint
Next
End Sub
Sub SetSteps (dtsSteps, xmlSteps)
dim xmlStep2
dim dtsStep2
For Each xmlStep2 in xmlSteps.selectNodes ("Step2")
set dtsStep2 = dtsSteps.New
SetProperties dtsStep2.Properties, xmlStep2.selectSingleNode ("Properties")
dtsSteps.Add dtsStep2
Next
For Each xmlStep2 in xmlSteps.selectNodes ("Step2")
set dtsStep2 = dtsSteps.Item (xmlStep2.getAttribute ("Name"))
SetPrecedenceConstraints dtsStep2.PrecedenceConstraints, xmlStep2.selectSingleNode ("PrecedenceConstraints")
Next
End Sub
Sub SetTasks (dtsTasks, xmlTasks)
dim xmlTask
dim dtsTask
For Each xmlTask in xmlTasks.selectNodes ("Task")
set dtsTask = dtsTasks.New (xmlTask.getAttribute ("CustomTaskID"))
SetProperties dtsTask.Properties, xmlTask.selectSingleNode ("Properties")
dtsTasks.Add dtsTask
Next
End Sub
Sub CreatePackage (strServerName, strFileName)
Dim fsoFileSystem
set fsoFileSystem = CreateObject ("Scripting.FileSystemObject")
Dim dtsPackage2
Set dtsPackage2 = CreateObject ("DTS.Package2")
Dim DTSSQLStgFlag_Default
Dim DTSSQLStgFlag_UseTrustedConnection
DTSSQLStgFlag_Default = 0
DTSSQLStgFlag_UseTrustedConnection = 256
Dim xmlDocument
Set xmlDocument = CreateObject ("Msxml2.DOMDocument.3.0")
xmlDocument.load strFileName
Dim xmlPackage2
set xmlPackage2 = xmlDocument.selectSingleNode ("Package2")
'properties
SetProperties dtsPackage2.Properties, xmlPackage2.selectSingleNode ("Properties")
'collections
SetConnections dtsPackage2.Connections, xmlPackage2.selectSingleNode ("Connections")
SetGlobalVariables dtsPackage2.GlobalVariables, xmlPackage2.selectSingleNode ("GlobalVariables")
SetSteps dtsPackage2.Steps, xmlPackage2.selectSingleNode ("Steps")
SetTasks dtsPackage2.Tasks, xmlPackage2.selectSingleNode ("Tasks")
On Error Resume Next
dtsPackage2.SaveToSQLServer strServerName, , , DTSSQLStgFlag_UseTrustedConnection
If Err.Number Then
MsgBox Err.Description
End If
End Sub
Sub Main
Dim strServerName
Dim strFileName
If WScript.Arguments.Count <> 2 Then
MsgBox "Usage: PushPackages servername filename"
Else
strServerName = WScript.Arguments (0)
strFileName = WScript.Arguments (1)
CreatePackage strServerName, strFileName
End If
End Sub
Main
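Usage follows the message in Main, e.g.:
cscript //nologo PushPackages.vbs MYSERVER MyPackage.xml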
A: This tool (DTSDoc) does a good job on documenting DTS Packages. It can be run from the command-line which is great to keep documentation up-to-date.
It has some positive reviews:
Review by ASP Alliance
Review by Mike Gunderloy (LARKWARE)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Calculating frames per second in a game What's a good algorithm for calculating frames per second in a game? I want to show it as a number in the corner of the screen. If I just look at how long it took to render the last frame the number changes too fast.
Bonus points if your answer updates each frame and doesn't converge differently when the frame rate is increasing vs decreasing.
A: This is what I have used in many games.
#define MAXSAMPLES 100
int tickindex=0;
int ticksum=0;
int ticklist[MAXSAMPLES];
/* need to zero out the ticklist array before starting */
/* average will ramp up until the buffer is full */
/* returns average ticks per frame over the MAXSAMPLES last frames */
double CalcAverageTick(int newtick)
{
ticksum-=ticklist[tickindex]; /* subtract value falling off */
ticksum+=newtick; /* add new value */
ticklist[tickindex]=newtick; /* save new value so it can be subtracted later */
if(++tickindex==MAXSAMPLES) /* inc buffer index */
tickindex=0;
/* return average */
return((double)ticksum/MAXSAMPLES);
}
A: This might be overkill for most people, that's why I hadn't posted it when I implemented it. But it's very robust and flexible.
It stores a Queue with the last frame times, so it can accurately calculate an average FPS value much better than just taking the last frame into consideration.
It also allows you to ignore one frame, if you are doing something that you know is going to artificially screw up that frame's time.
It also allows you to change the number of frames to store in the Queue as it runs, so you can test it out on the fly what is the best value for you.
// Number of past frames to use for FPS smooth calculation - because
// Unity's smoothedDeltaTime, well - it kinda sucks
private int frameTimesSize = 60;
// A Queue is the perfect data structure for the smoothed FPS task;
// new values in, old values out
private Queue<float> frameTimes = new Queue<float>();
// Not really needed, but used for faster updating than processing
// the entire queue every frame
private float __frameTimesSum = 0;
// Flag to ignore the next frame when performing a heavy one-time operation
// (like changing resolution)
private bool _fpsIgnoreNextFrame = false;
//=============================================================================
// Call this after doing a heavy operation that will screw up with FPS calculation
void FPSIgnoreNextFrame() {
this._fpsIgnoreNextFrame = true;
}
//=============================================================================
// Smoothed FPS counter updating
void Update()
{
if (this._fpsIgnoreNextFrame) {
this._fpsIgnoreNextFrame = false;
return;
}
// Using while loops here allows the frameTimesSize member to be changed dynamically
while (this.frameTimes.Count >= this.frameTimesSize) {
this.__frameTimesSum -= this.frameTimes.Dequeue();
}
while (this.frameTimes.Count < this.frameTimesSize) {
this.__frameTimesSum += Time.deltaTime;
this.frameTimes.Enqueue(Time.deltaTime);
}
}
//=============================================================================
// Public function to get smoothed FPS values
public int GetSmoothedFPS() {
return (int)(this.frameTimesSize / this.__frameTimesSum * Time.timeScale);
}
A: Well, certainly
frames / sec = 1 / (sec / frame)
But, as you point out, there's a lot of variation in the time it takes to render a single frame, and from a UI perspective updating the fps value at the frame rate is not usable at all (unless the number is very stable).
What you want is probably a moving average or some sort of binning / resetting counter.
For example, you could maintain a queue data structure which held the rendering times for each of the last 30, 60, 100, or what-have-you frames (you could even design it so the limit was adjustable at run-time). To determine a decent fps approximation you can determine the average fps from all the rendering times in the queue:
fps = # of rendering times in queue / total rendering time
When you finish rendering a new frame you enqueue a new rendering time and dequeue an old rendering time. Alternately, you could dequeue only when the total of the rendering times exceeded some preset value (e.g. 1 sec). You can maintain the "last fps value" and a last updated timestamp so you can trigger when to update the fps figure, if you so desire. Though with a moving average if you have consistent formatting, printing the "instantaneous average" fps on each frame would probably be ok.
Another method would be to have a resetting counter. Maintain a precise (millisecond) timestamp, a frame counter, and an fps value. When you finish rendering a frame, increment the counter. When the counter hits a pre-set limit (e.g. 100 frames) or when the time since the timestamp has passed some pre-set value (e.g. 1 sec), calculate the fps:
fps = # frames / (current time - start time)
Then reset the counter to 0 and set the timestamp to the current time.
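A small C# sketch of that resetting counter (names are illustrative):
using System.Diagnostics;

class FpsCounter
{
    private readonly Stopwatch clock = Stopwatch.StartNew();
    private int frames;

    public double Fps { get; private set; }

    // Call once per rendered frame; Fps refreshes roughly once per second.
    public void OnFrameRendered()
    {
        frames++;
        if (clock.ElapsedMilliseconds >= 1000)
        {
            Fps = frames * 1000.0 / clock.ElapsedMilliseconds;
            frames = 0;
            clock.Restart();
        }
    }
}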
A: Good answers here. Just how you implement it is dependent on what you need it for. I prefer the running average one myself, "time = time * 0.9 + last_frame * 0.1", by the guy above.
However, I personally like to weight my average more heavily towards newer data, because in a game it is SPIKES that are the hardest to squash and thus of most interest to me. So I would use something more like a .7/.3 split, which will make a spike show up much faster (though its effect will drop off-screen faster as well... see below).
If your focus is on RENDERING time, then the .9/.1 split works pretty nicely b/c it tends to be smoother. Though for gameplay/AI/physics, spikes are much more of a concern, as that is usually what makes your game look choppy (which is often worse than a low frame rate, assuming we're not dipping below 20 fps).
So, what I would do is also add something like this:
#define ONE_OVER_FPS (1.0f/60.0f)
static float g_SpikeGuardBreakpoint = 3.0f * ONE_OVER_FPS;
if(time > g_SpikeGuardBreakpoint)
DoInternalBreakpoint()
(fill in 3.0f with whatever magnitude you find to be an unacceptable spike)
This will let you find and thus solve FPS issues at the end of the very frame they happen in.
A: A much better system than using a large array of old framerates is to just do something like this:
fps = fps * 0.99 + this_frame_fps * 0.01
This method uses far less memory, requires far less code, and places more importance upon recent framerates than old framerates while still smoothing the effects of sudden framerate changes.
A: Increment a counter every time you render a screen and clear that counter for some time interval over which you want to measure the frame-rate.
Ie. Every 3 seconds, get counter/3 and then clear the counter.
A: There are at least two ways to do it:
The first is the one others have mentioned here before me.
I think it's the simplest and preferred way. You just need to keep track of
*
*cn: counter of how many frames you've rendered
*time_start: the time since you've started counting
*time_now: the current time
Calculating the fps in this case is as simple as evaluating this formula:
*
*FPS = cn / (time_now - time_start).
Then there is the uber cool way you might like to use some day:
Let's say you have 'i' frames to consider. I'll use this notation: f[0], f[1],..., f[i-1] to describe how long it took to render frame 0, frame 1, ..., frame (i-1) respectively.
Example where i = 3
|f[0] |f[1] |f[2] |
+----------+-------------+-------+------> time
Then, mathematical definition of fps after i frames would be
(1) fps[i] = i / (f[0] + ... + f[i-1])
And the same formula but only considering i-1 frames.
(2) fps[i-1] = (i-1) / (f[0] + ... + f[i-2])
Now the trick here is to modify the right side of formula (1) in such a way that it contains the right side of formula (2), which we can then replace with its left side, fps[i-1].
Like so (you should see it more clearly if you write it on a paper):
fps[i] = i / (f[0] + ... + f[i-1])
= i / ((f[0] + ... + f[i-2]) + f[i-1])
= (i/(i-1)) / ((f[0] + ... + f[i-2])/(i-1) + f[i-1]/(i-1))
= (i/(i-1)) / (1/fps[i-1] + f[i-1]/(i-1))
= ...
= (i*fps[i-1]) / (f[i-1] * fps[i-1] + i - 1)
So according to this formula (my math-deriving skills are a bit rusty, though), to calculate the new fps you need to know the fps from the previous frame, the time it took to render the last frame, and the number of frames you've rendered.
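For illustration, a sketch of that recurrence in Java (hypothetical names; lastFrameTime is f[i-1] in seconds and i is the total number of frames rendered so far):
// fps[i] = (i * fps[i-1]) / (f[i-1] * fps[i-1] + i - 1)
static double updateFps(double prevFps, double lastFrameTime, long i) {
    if (i <= 1) {
        return 1.0 / lastFrameTime; // first frame: no previous fps to build on
    }
    return (i * prevFps) / (lastFrameTime * prevFps + i - 1);
}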
A: You need a smoothed average, the easiest way is to take the current answer (the time to draw the last frame) and combine it with the previous answer.
// eg.
float smoothing = 0.9; // larger=more smoothing
measurement = (measurement * smoothing) + (current * (1.0-smoothing))
By adjusting the 0.9 / 0.1 ratio you can change the 'time constant' - that is how quickly the number responds to changes. A larger fraction in favour of the old answer gives a slower smoother change, a large fraction in favour of the new answer gives a quicker changing value. Obviously the two factors must add to one!
A: You could keep a counter, increment it after each frame is rendered, then reset the counter when you are on a new second (storing the previous value as the last second's # of frames rendered)
A: JavaScript:
// Set the end and start times
var start = (new Date).getTime(), end, FPS;
/* ...
* the loop/block your want to watch
* ...
*/
end = (new Date).getTime();
// the times are in milliseconds (1000ms = 1s), so the
// instantaneous frame rate is 1000 / (end - start),
// capped at MaxFPS (60 here)
FPS = Math.min(Math.round(1000 / (end - start)), 60);
A: Here's a complete example, using Python (but easily adapted to any language). It uses the smoothing equation in Martin's answer, so almost no memory overhead, and I chose values that worked for me (feel free to play around with the constants to adapt to your use case).
import time
SMOOTHING_FACTOR = 0.99
MAX_FPS = 10000
avg_fps = -1
last_tick = time.time()
while True:
# <Do your rendering work here...>
current_tick = time.time()
# Ensure we don't get crazy large frame rates, by capping to MAX_FPS
current_fps = 1.0 / max(current_tick - last_tick, 1.0/MAX_FPS)
last_tick = current_tick
if avg_fps < 0:
avg_fps = current_fps
else:
avg_fps = (avg_fps * SMOOTHING_FACTOR) + (current_fps * (1-SMOOTHING_FACTOR))
print(avg_fps)
A: Set counter to zero. Each time you draw a frame increment the counter. After each second print the counter. lather, rinse, repeat. If yo want extra credit, keep a running counter and divide by the total number of seconds for a running average.
A: In (C++-like) pseudocode, these two are what I used in industrial image-processing applications that had to process images from a set of externally triggered cameras. Variations in "frame rate" had a different source (slower or faster production on the belt), but the problem is the same. (I assume that you have a simple timer.peek() call that gives you something like the number of msec (or nsec?) since application start or since the last call.)
Solution 1: fast but not updated every frame
do while (1)
{
ProcessImage(frame)
if (frame.framenumber%poll_interval==0)
{
new_time=timer.peek()
framerate=poll_interval/(new_time - last_time)
last_time=new_time
}
}
Solution 2: updated every frame, requires more memory and CPU
do while (1)
{
ProcessImage(frame)
new_time=timer.peek()
delta=new_time - last_time
last_time = new_time
total_time += delta
delta_history.push(delta)
framerate= delta_history.length() / total_time
while (delta_history.length() > avg_interval)
{
oldest_delta = delta_history.pop()
total_time -= oldest_delta
}
}
A: qx.Class.define('FpsCounter', {
extend: qx.core.Object
,properties: {
}
,events: {
}
,construct: function(){
this.base(arguments);
this.restart();
}
,statics: {
}
,members: {
restart: function(){
this.__frames = [];
}
,addFrame: function(){
this.__frames.push(new Date());
}
,getFps: function(averageFrames){
if(!averageFrames){
averageFrames = 2;
}
var time = 0;
var l = this.__frames.length;
var i = averageFrames;
while(i > 0){
if(l - i - 1 >= 0){
time += this.__frames[l - i] - this.__frames[l - i - 1];
}
i--;
}
var fps = averageFrames / time * 1000;
return fps;
}
}
});
A: How i do it!
boolean run = false;
int ticks = 0;
long tickstart;
int fps;
public void loop()
{
if(this.ticks==0)
{
this.tickstart = System.currentTimeMillis();
}
this.ticks++;
long elapsed = System.currentTimeMillis() - this.tickstart;
// guard against division by zero on the first call; *1000 converts ticks/ms to ticks/s
if(elapsed > 0)
{
this.fps = (int)(this.ticks * 1000L / elapsed);
}
}
In words: a tick counter tracks frames. On the first tick, it records the current time in 'tickstart'. After that, 'fps' is the number of ticks divided by the elapsed time in milliseconds, multiplied by 1000 to give frames per second (skipping the calculation while the elapsed time is still zero).
Fps is an integer, hence the "(int)" cast.
A: Here's how I do it (in Java):
private static long ONE_SECOND = 1000000L * 1000L; // 1 second = 1000 ms = 1,000,000,000 ns
LinkedList<Long> frames = new LinkedList<>(); //List of frames within 1 second
public int calcFPS(){
long time = System.nanoTime(); //Current time in nano seconds
frames.add(time); //Add this frame to the list
while(true){
long f = frames.getFirst(); //Look at the first element in frames
if(time - f > ONE_SECOND){ //If it was more than 1 second ago
frames.remove(); //Remove it from the list of frames
} else break;
/*If it was within 1 second we know that all other frames in the list
* are also within 1 second
*/
}
return frames.size(); //Return the size of the list
}
A: In Typescript, I use this algorithm to calculate framerate and frametime averages:
let getTime = () => {
return new Date().getTime();
}
let frames: any[] = [];
let previousTime = getTime();
let framerate:number = 0;
let frametime:number = 0;
let updateStats = (samples:number=60) => {
samples = Math.max(samples, 1) >> 0;
if (frames.length === samples) {
let currentTime: number = getTime() - previousTime;
frametime = currentTime / samples;
framerate = 1000 * samples / currentTime;
previousTime = getTime();
frames = [];
}
frames.push(1);
}
usage:
updateStats();
// Print
stats.innerHTML = Math.round(framerate) + ' FPS ' + frametime.toFixed(2) + ' ms';
Tip: If samples is 1, the result is real-time framerate and frametime.
A: This is based on KPexEA's answer and gives the Simple Moving Average. Tidied and converted to TypeScript for easy copy and paste:
Variable declaration:
fpsObject = {
maxSamples: 100,
tickIndex: 0,
tickSum: 0,
tickList: []
}
Function:
calculateFps(currentFps: number): number {
this.fpsObject.tickSum -= this.fpsObject.tickList[this.fpsObject.tickIndex] || 0
this.fpsObject.tickSum += currentFps
this.fpsObject.tickList[this.fpsObject.tickIndex] = currentFps
if (++this.fpsObject.tickIndex === this.fpsObject.maxSamples) this.fpsObject.tickIndex = 0
const smoothedFps = this.fpsObject.tickSum / this.fpsObject.maxSamples
return Math.floor(smoothedFps)
}
Usage (may vary in your app):
this.fps = this.calculateFps(this.ticker.FPS)
A: I adapted @KPexEA's answer to Go, moved the globals into struct fields, allowed the number of samples to be configurable, and used time.Duration instead of plain integers and floats.
type FrameTimeTracker struct {
samples []time.Duration
sum time.Duration
index int
}
func NewFrameTimeTracker(n int) *FrameTimeTracker {
return &FrameTimeTracker{
samples: make([]time.Duration, n),
}
}
func (t *FrameTimeTracker) AddFrameTime(frameTime time.Duration) (average time.Duration) {
// algorithm adapted from https://stackoverflow.com/a/87732/814422
t.sum -= t.samples[t.index]
t.sum += frameTime
t.samples[t.index] = frameTime
t.index++
if t.index == len(t.samples) {
t.index = 0
}
return t.sum / time.Duration(len(t.samples))
}
The use of time.Duration, which has nanosecond precision, eliminates the need for floating-point arithmetic to compute the average frame time, but comes at the expense of needing twice as much memory for the same number of samples.
You'd use it like this:
// track the last 60 frame times
frameTimeTracker := NewFrameTimeTracker(60)
// main game loop
for frame := 0;; frame++ {
// ...
if frame > 0 {
// prevFrameTime is the duration of the last frame
avgFrameTime := frameTimeTracker.AddFrameTime(prevFrameTime)
fps := 1.0 / avgFrameTime.Seconds()
}
// ...
}
Since the context of this question is game programming, I'll add some more notes about performance and optimization. The above approach is idiomatic Go but always involves two heap allocations: one for the struct itself and one for the array backing the slice of samples. If used as indicated above, these are long-lived allocations so they won't really tax the garbage collector. Profile before optimizing, as always.
However, if performance is a major concern, some changes can be made to eliminate the allocations and indirections:
*
*Change samples from a slice of []time.Duration to an array of [N]time.Duration where N is fixed at compile time. This removes the flexibility of changing the number of samples at runtime, but in most cases that flexibility is unnecessary.
*Then, eliminate the NewFrameTimeTracker constructor function entirely and use a var frameTimeTracker FrameTimeTracker declaration (at the package level or local to main) instead. Unlike C, Go will pre-zero all relevant memory.
A: Unfortunately, most of the answers here don't provide either accurate enough or sufficiently "slow responsive" FPS measurements. Here's how I do it in Rust using a measurement queue:
use std::collections::VecDeque;
use std::time::{Duration, Instant};
pub struct FpsCounter {
sample_period: Duration,
max_samples: usize,
creation_time: Instant,
frame_count: usize,
measurements: VecDeque<FrameCountMeasurement>,
}
#[derive(Copy, Clone)]
struct FrameCountMeasurement {
time: Instant,
frame_count: usize,
}
impl FpsCounter {
pub fn new(sample_period: Duration, samples: usize) -> Self {
assert!(samples > 1);
Self {
sample_period,
max_samples: samples,
creation_time: Instant::now(),
frame_count: 0,
measurements: VecDeque::new(),
}
}
pub fn fps(&self) -> f32 {
match (self.measurements.front(), self.measurements.back()) {
(Some(start), Some(end)) => {
let period = (end.time - start.time).as_secs_f32();
if period > 0.0 {
(end.frame_count - start.frame_count) as f32 / period
} else {
0.0
}
}
_ => 0.0,
}
}
pub fn update(&mut self) {
self.frame_count += 1;
let current_measurement = self.measure();
let last_measurement = self
.measurements
.back()
.copied()
.unwrap_or(FrameCountMeasurement {
time: self.creation_time,
frame_count: 0,
});
if (current_measurement.time - last_measurement.time) >= self.sample_period {
self.measurements.push_back(current_measurement);
while self.measurements.len() > self.max_samples {
self.measurements.pop_front();
}
}
}
fn measure(&self) -> FrameCountMeasurement {
FrameCountMeasurement {
time: Instant::now(),
frame_count: self.frame_count,
}
}
}
How to use:
*
*Create the counter:
let mut fps_counter = FpsCounter::new(Duration::from_millis(100), 5);
*Call fps_counter.update() on every frame drawn.
*Call fps_counter.fps() whenever you like to display current FPS.
Now, the key is in parameters to FpsCounter::new() method: sample_period is how responsive fps() is to changes in framerate, and samples controls how quickly fps() ramps up or down to the actual framerate. So if you choose 10 ms and 100 samples, fps() would react almost instantly to any change in framerate - basically, FPS value on the screen would jitter like crazy, but since it's 100 samples, it would take 1 second to match the actual framerate.
So my choice of 100 ms and 5 samples means that displayed FPS counter doesn't make your eyes bleed by changing crazy fast, and it would match your actual framerate half a second after it changes, which is sensible enough for a game.
Since sample_period * samples is averaging time span, you don't want it to be too short if you want a reasonably accurate FPS counter.
A: Store a start time and increment your frame counter once per loop. Every few seconds you could just print framecount / (now - starttime) and then reinitialize them.
edit: oops. double-ninja'ed
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "117"
}
|
Q: Get list of XML attribute values in Python I need to get a list of attribute values from child elements in Python.
It's easiest to explain with an example.
Given some XML like this:
<elements>
<parent name="CategoryA">
<child value="a1"/>
<child value="a2"/>
<child value="a3"/>
</parent>
<parent name="CategoryB">
<child value="b1"/>
<child value="b2"/>
<child value="b3"/>
</parent>
</elements>
I want to be able to do something like:
>>> getValues("CategoryA")
['a1', 'a2', 'a3']
>>> getValues("CategoryB")
['b1', 'b2', 'b3']
It looks like a job for XPath but I'm open to all recommendations. I'd also like to hear about your favourite Python XML libraries.
A: I'm not really an old hand at Python, but here's an XPath solution using libxml2.
import libxml2
DOC = """<elements>
<parent name="CategoryA">
<child value="a1"/>
<child value="a2"/>
<child value="a3"/>
</parent>
<parent name="CategoryB">
<child value="b1"/>
<child value="b2"/>
<child value="b3"/>
</parent>
</elements>"""
doc = libxml2.parseDoc(DOC)
def getValues(cat):
return [attr.content for attr in doc.xpathEval("/elements/parent[@name='%s']/child/@value" % (cat))]
print getValues("CategoryA")
With result...
['a1', 'a2', 'a3']
A: ElementTree 1.3 (unfortunately not 1.2 which is the one included with Python) supports XPath like this:
import elementtree.ElementTree as xml
def getValues(tree, category):
parent = tree.find(".//parent[@name='%s']" % category)
return [child.get('value') for child in parent]
Then you can do
>>> tree = xml.parse('data.xml')
>>> getValues(tree, 'CategoryA')
['a1', 'a2', 'a3']
>>> getValues(tree, 'CategoryB')
['b1', 'b2', 'b3']
lxml.etree (which also provides the ElementTree interface) will also work in the same way.
A: You can do this with BeautifulSoup
>>> from BeautifulSoup import BeautifulStoneSoup
>>> soup = BeautifulStoneSoup(xml)
>>> def getValues(name):
...     return [child['value'] for child in soup.find('parent', attrs={'name': name}).findAll('child')]
If you're doing work with HTML/XML I would recommend you take a look at BeautifulSoup. It's similar to the DOM tree but contains more functionality.
A: Using a standard W3 DOM such as the stdlib's minidom, or pxdom:
def getValues(category):
for parent in document.getElementsByTagName('parent'):
if parent.getAttribute('name')==category:
return [
el.getAttribute('value')
for el in parent.getElementsByTagName('child')
]
raise ValueError('parent not found')
A: My preferred Python XML library is lxml, which wraps libxml2.
Xpath does seem the way to go here, so I'd write this as something like:
from lxml import etree
def getValues(xml, category):
return [x.attrib['value'] for x in
xml.findall('/parent[@name="%s"]/*' % category)]
xml = etree.parse(open('filename.xml'))
>>> print getValues(xml, 'CategoryA')
['a1', 'a2', 'a3']
>>> print getValues(xml, 'CategoryB')
['b1', 'b2', 'b3']
A: In Python 3.x, fetching a list of attributes is a simple task of using the member items()
Using the ElementTree, below snippet shows a way to get the list of attributes.
NOTE that this example doesn't consider namespaces, which if present, will need to be accounted for.
import xml.etree.ElementTree as ET
flName = 'test.xml'
tree = ET.parse(flName)
root = tree.getroot()
for element in root.findall('<child-node-of-root>'):
attrList = element.items()
print(len(attrList), " : [", attrList, "]" )
REFERENCE:
Element.items()
Returns the element attributes as a sequence of (name, value) pairs.
The attributes are returned in an arbitrary order.
Python manual
A: I must admit I'm a fan of xmltramp due to its ease of use.
Accessing the above becomes:
import xmltramp
values = xmltramp.parse('''...''')
def getValues( values, category ):
    cat = [ parent for parent in values['parent':] if parent('name') == category ]
    cat_values = [ child('value') for parent in cat for child in parent['child':] ]
    return cat_values
getValues( values, "CategoryA" )
getValues( values, "CategoryB" )
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: Numbering Regex Submatches Is there a canonical ordering of submatch expressions in a regular
expression?
For example: What is the order of the submatches in
"(([0-9]{3}).([0-9]{3}).([0-9]{3}).([0-9]{3}))\s+([A-Z]+)" ?
a. (([0-9]{3})\.([0-9]{3})\.([0-9]{3})\.([0-9]{3}))\s+([A-Z]+)
(([0-9]{3})\.([0-9]{3})\.([0-9]{3})\.([0-9]{3}))
([A-Z]+)
([0-9]{3})
([0-9]{3})
([0-9]{3})
([0-9]{3})
b. (([0-9]{3})\.([0-9]{3})\.([0-9]{3})\.([0-9]{3}))\s+([A-Z]+)
(([0-9]{3})\.([0-9]{3})\.([0-9]{3})\.([0-9]{3}))
([0-9]{3})
([0-9]{3})
([0-9]{3})
([0-9]{3})
([A-Z]+)
or
c. somthin' else.
A: They tend to be numbered in the order the capturing parens start, left to right. Therefore, option b.
A: In Perl 5 regular expressions, answer b is correct. Submatch groupings are stored in order of open-parentheses.
Many other regular expression engines take their cues from Perl, but you would have to look up individual implementations to be sure. I'd suggest the book Mastering Regular Expressions for a deeper understanding.
A: You count opening parentheses, left to right. So the order would be
(([0-9]{3}).([0-9]{3}).([0-9]{3}).([0-9]{3}))
([0-9]{3})
([0-9]{3})
([0-9]{3})
([0-9]{3})
([A-Z]+)
At least this is what Perl would do. Other regex engines might have different rules.
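To see the left-to-right numbering concretely, here is a small Java sketch (my own illustration; java.util.regex follows the same opening-parenthesis rule):
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SubmatchOrder {
    public static void main(String[] args) {
        Pattern p = Pattern.compile(
            "(([0-9]{3})\\.([0-9]{3})\\.([0-9]{3})\\.([0-9]{3}))\\s+([A-Z]+)");
        Matcher m = p.matcher("192.168.001.042 ABC");
        if (m.find()) {
            // group 0 is the whole match; capture groups are numbered
            // in order of their opening parentheses
            for (int i = 0; i <= m.groupCount(); i++) {
                System.out.println(i + ": " + m.group(i));
            }
        }
    }
}
This prints the whole address as group 1, the four octets as groups 2-5 and the letters as group 6 - i.e. answer b.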
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: What are good grep tools for Windows? Any recommendations on grep tools for Windows? Ideally ones that could leverage 64-bit OS.
I'm aware of Cygwin, of course, and have also found PowerGREP, but I'm wondering if there are any hidden gems out there?
A: I always use WinGREP, but I've had issues with it not letting go of files.
A: Well, besides the Windows port of GNU grep, there's also Borland's grep (very similar to the GNU one), available in the freeware Borland Free C++ Compiler (which includes the command-line tools).
A: I have successfully used GNU utilities for Win32 for quite some time and it has a good grep as well as tail and other handy GNU utilities for Win32. I avoid the packaged shell and simply use the executables right in Win32 command prompt.
The tail that is packaged is quite a good little application as well.
A: I'm the author of Aba Search and Replace. Just like PowerGREP, it supports regular expressions, saving patterns for further use, undo for replacements, preview with syntax highlight for HTML/CSS/JS/PHP, different encodings, including UTF-8 and UTF-16.
In comparison with PowerGREP, the GUI is less cluttered. Aba instantly starts searching as you are typing the pattern (incremental search), so you can experiment with regular expressions and immediately see the results.
You are welcomed to try my tool; I will be happy to answer any questions.
A: Update July 2013:
Another grep tool I now use all the time on Windows is AstroGrep:
Its ability to show me more than just the line search (i.e. the --context=NUM of a command-line grep) is invaluable.
And it is fast. Very fast, even on an old computer with non-SSD drive (I know, they used to do this hard drive with spinning disks, called platters, crazy right?)
It is free.
It is portable (simple zip archive to unzip).
Original answer October 2008
Gnu Grep is alright
You can download it for example here: (site ftp)
All the usual options are here.
That, combined with gawk and xargs (includes 'find', from GnuWin32), and you can really script like you were on Unix!
See also the options I am using to grep recursively:
grep --include "*.xxx" -nRHI "my Text to grep" *
A: I wanted a free grep tool for Windows that allowed you to right click on a folder and do a regex search of every file - without any nag screen.
The following is a quick solution based on the findstr mentioned in a previous post.
Create a text file somewhere on your hard drive where you keep long lived tools. Rename to .bat or .cmd and paste the following into it:
@echo off
set /p term="Search term> "
del %temp%\grepresult.txt 2>nul
findstr /i /S /R /n /C:"%term%" "%~1\*.*" > "%temp%\grepresult.txt"
start notepad "%temp%\grepresult.txt"
Then browse to the SendTo folder. On Windows 7 browse to %APPDATA%\Microsoft\Windows\SendTo and drag a shortcut of the batch file to that SendTo folder.
I renamed the shortcut to 1 GREP to keep it at the top of the SendTo list.
Things that I'd like to do next with this are to pipe the output of findstr through something that would generate an HTML file, so that you could click on each output line to open that file. Also, I don't think it works with shortcuts to folders. I'd have to inspect the parameter and see if it contains ".lnk".
A: FINDSTR is fairly powerful, supports regular expressions and has the advantages of being on all Windows machines already.
c:\> FindStr /?
Searches for strings in files.
FINDSTR [/B] [/E] [/L] [/R] [/S] [/I] [/X] [/V] [/N] [/M] [/O] [/P] [/F:file]
[/C:string] [/G:file] [/D:dir list] [/A:color attributes] [/OFF[LINE]]
strings [[drive:][path]filename[ ...]]
/B Matches pattern if at the beginning of a line.
/E Matches pattern if at the end of a line.
/L Uses search strings literally.
/R Uses search strings as regular expressions.
/S Searches for matching files in the current directory and all
subdirectories.
/I Specifies that the search is not to be case-sensitive.
/X Prints lines that match exactly.
/V Prints only lines that do not contain a match.
/N Prints the line number before each line that matches.
/M Prints only the filename if a file contains a match.
/O Prints character offset before each matching line.
/P Skip files with non-printable characters.
/OFF[LINE] Do not skip files with offline attribute set.
/A:attr Specifies color attribute with two hex digits. See "color /?"
/F:file Reads file list from the specified file(/ stands for console).
/C:string Uses specified string as a literal search string.
/G:file Gets search strings from the specified file(/ stands for console).
/D:dir Search a semicolon delimited list of directories
strings Text to be searched for.
[drive:][path]filename
Specifies a file or files to search.
Use spaces to separate multiple search strings unless the argument is prefixed
with /C. For example, 'FINDSTR "hello there" x.y' searches for "hello" or
"there" in file x.y. 'FINDSTR /C:"hello there" x.y' searches for
"hello there" in file x.y.
Regular expression quick reference:
. Wildcard: any character
* Repeat: zero or more occurances of previous character or class
^ Line position: beginning of line
$ Line position: end of line
[class] Character class: any one character in set
[^class] Inverse class: any one character not in set
[x-y] Range: any characters within the specified range
\x Escape: literal use of metacharacter x
\<xyz Word position: beginning of word
xyz\> Word position: end of word
Example usage: findstr text_to_find * or to search recursively findstr /s text_to_find *
A: PowerShell's Select-String cmdlet was fine in v1.0, but it is significantly better for v2.0. Having PowerShell built in to recent versions of Windows means your skills here will always be useful, without first installing something.
New parameters added to Select-String: Select-String cmdlet now supports new parameters, such as:
*
*-Context: This allows you to see lines before and after the match line
*-AllMatches: which allows you to see all matches in a line (Previously, you could see only the first match in a line)
*-NotMatch: Equivalent to grep -v
*-Encoding: to specify the character encoding
I find it expedient to create a function gcir for Get-ChildItem -Recurse ., with smarts to pass parameters correctly, and an alias ss for Select-String. So you can write:
gcir *.txt | ss foo
A: UnxUtils is the one I use, and it works perfectly for me...
A: I used Borland's grep for years, but I just found a pattern that it won't match. Eeeks. What else hasn't it found over the years? I wrote a simple text search replacement that does recursion like grep - it's FS.EXE on SourceForge.
grep fails...
C:\DEV> GREP GAAPRNTR \SOURCE\TPALIB\*.PRG
<no results>
Windows' findstr works...
C:\DEV> FINDSTR GAAPRNTR \SOURCE\TPALIB\*.PRG
\SOURCE\TPALIB\TPGAAUPD.PRG:ffSPOOL(cRPTFILE, MEM->GAAPRNTR, MEM->NETTYPE)
\SOURCE\TPALIB\TPPRINTR.PRG: AADD(mPRINTER, TPACONFG->GAAPRNTR)
\SOURCE\TPALIB\TPPRINTR.PRG: IF TRIM(TPACONFG->GAAPRNTR) <> TRIM(mPRINTER[2])
\SOURCE\TPALIB\TPPRINTR.PRG: REPLACE TPACONFG->GAAPRNTR WITH mPRINTER[2]
A: My tool of choice is the appropriately named Windows Grep:
*
*nice simple GUI
*supports search and replace
*can show the lines around the lines found
*can search within columns in CSVs and fixed-width files
A: It may not exactly fall into the 'grep' category, but I couldn't get by on Windows without a utility called AgentRansack. It's a GUI-based "find in files" utility with regex support.
It's dead simple to right-click on a folder, hit "ransack.." and find files containing what you're looking for. It is extremely fast too.
A: Based on recommendations in the comments, I've started using grepWin and it's fantastic and free.
(I'm still a fan of PowerGREP, but I don't use it anymore.)
I know you already mentioned it, but PowerGREP is awesome.
Some of my favorite features are:
*
*Right-click on a folder to run PowerGREP on it
*Use regular expressions or literal text
*Specify wildcards for files to include & exclude
*Search & replace
*Preview mode is nice because you can make sure you're replacing what you intend to.
Now I realize that the other grep tools can do all of the above. It's just that PowerGREP packages all of the functionality into a very easy-to-use GUI.
From the same wonderful folks who brought you RegexBuddy and who I have no affiliation with beyond loving their stuff. (It should be noted that RegexBuddy includes a basic version of grep (for Windows) itself and it costs a lot less than PowerGREP.)
Additional solutions
Existing Windows commands
*
*FINDSTR
*Select-String in PowerShell
Linux command implementations on Windows
*
*Cygwin
*Cash
Grep tools with a graphical interface
*
*AstroGrep
*BareGrep
*GrepWin
Additional Grep tools
*
*dnGrep
A: PowerShell's Select-String is similar. It does not have the same options and semantics, but it's still powerful.
A: Another good choice is MSYS. It gives you a bunch of other GNU utilities to allow you to be more productive.
A: Baregrep (Baretail is good too)
A: I'd recommend AstroGrep.
It's free, open source, and has a simple interface. I use it to search code all the time.
A: PowerShell has been mentioned a few times. Here is how you would actually use it in a grepish way:
Get-ChildItem -recurse -include *.txt | Select-String -CaseSensitive "SomeString"
It recursively searches all text files in the current directory tree for SomeString with case sensitivity.
Even better, run this:
function pgrep { param([string]$search, [string]$inc) Get-ChildItem -recurse -include $inc | Select-String -CaseSensitive $search }
Then do:
pgrep SomeStringToSearch *.txt
Then to really make it magical, add the function alias to your PowerShell Profile and you can almost dull the pain of not having proper command line tools.
A: Git on Windows = grep in cmd.exe
I just found out installing Git will give you some basic Linux commands: cat, grep, scp and all other good ones.
Install then add the Git bin folder to your PATH and then your cmd.exe has basic Linux functionality!
http://code.google.com/p/msysgit/downloads/list?can=3
A: ack works well on Windows (if you've got Perl). I find it better than grep for many uses.
A: Cygwin includes grep. All the GNU tools and Unix stuff works great on Windows if you install Cygwin.
A: dnGREP is an open source grep tool for Windows. It supports a number of cool features including:
*
*Undo for replace
*Ability to search by right clicking on folder in explorer
*Advanced search options such as phonetic search and XPath
*Search inside PDF files, archives, and Word documents
IMHO, it has a nice and clean interface too :)
A: GrepWin is free and open source (GPL)
I've been using grepWin which was written by one of the TortoiseSVN guys. It does the job on Windows...
A: If you want a simple-to-use Windows Grep tool, I created one called P-Grep that I have made available for free download from my website: www.adjutantit.com - home menu, downloads.
Windows Grep seemed to have problems with a large number of files, so I wrote my own - which seems more reliable. You can select a folder, right click and send it to P-Grep. The sendto folder gets updated during installation.
A: I've been using AJC Grep daily for years. The only major limitation I've found is that file paths are limited to 255 characters and it stops when it encounters one, rather than just issuing a warning. It's annoying but doesn't happen very often.
I use it on 64-bit Windows 7 Ultimate, so its 64-bit credentials are fine.
A: GREP for Windows
I've been using it forever and luckily it's still available. It's super fast and very small.
A: I have Cygwin installed on my machine and put the Cygwin bin directory in my environmental path, so the Cygwin grep works like normal in a command line which solves all my scripting needs for grep at the moment.
A: If none of the solutions is quite what you are looking for, perhaps you could write a wrapper to FindStr that does exactly what you require?
FindStr is pretty good anyway so it should just be knocking a GUI up (if you want it) and providing a few extra features (like combining it with Find to find the count of files which contain a specified string [mentioned above]).
This, of course, assumes you have the requirement, time and inclination to do this!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "288"
}
|
Q: TestDriven.Net doesn't find tests I have a test project using MbUnit and TestDriven.Net.
If I right-click on an individual test method and say "Run Tests" the test runs successfully. Same thing if I click on a file name in the solution explorer.
However, if I right click and say run tests on the project or the solution, TestDriven.Net reports "0 Passed, 0 Failed, 0 Skipped."
I have other similar projects that work just fine, and yes, the classes are labeled [TestFixture] and the methods are labeled [Test].
A: Are the classes public?
A: I once had a similar problem. The problem was that I forgot to declare my test class with the public modifier.
A: You need to add testing attributes for your favorite testing framework. TestDriven picks up these attributes by reflection in order to know what tests to run.
For example, using NUnit.Framework - each test class needs [TestFixture] and each test method needs [Test]
Here's an example
A: If you're on Windows x64, it may be an installer problem. It bit me on Server 2008 x64.
A: Just make sure TestDriven.Net was installed before Gallio, otherwise Gallio won't install its extensions for TestDriven.Net.
Gallio v3.0.4 and more recent include a 64-bit installer.
A: I've seen TestDriven.Net not finding any tests if I used newest version of NUnit, reinstalling TestDriven.Net fixed the issue.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Can I pass a JavaScript variable to another browser window? I have a page which spawns a popup browser window. I have a JavaScript variable in the parent browser window and I would like to pass it to the popped-up browser window.
Is there a way to do this? I know this can be done across frames in the same browser window but I'm not sure if it can be done across browser windows.
A: Yes, scripts can access properties of other windows in the same domain that they have a handle on (typically gained through window.open/opener and window.frames/parent). It is usually more manageable to call functions defined on the other window rather than fiddle with variables directly.
However, windows can die or move on, and browsers deal with it differently when they do. Check that a window (a) is still open (!window.closed) and (b) has the function you expect available, before you try to call it.
Simple values like strings are fine, but generally it isn't a good idea to pass complex objects such as functions, DOM elements and closures between windows. If a child window stores an object from its opener, then the opener closes, that object can become 'dead' (in some browsers such as IE), or cause a memory leak. Weird errors can ensue.
A: Passing variables between the windows (if your windows are on the same domain) can be easily done via:
*
*Cookies
*localStorage. Just make sure your browser supports localStorage, and do the variable maintenance right (add/delete/remove) to keep localStorage clean.
A: Provided the windows are from the same security domain, and you have a reference to the other window, yes.
Javascript's open() method returns a reference to the window created (or existing window if it reuses an existing one). Each window created in such a way gets a property applied to it "window.opener" pointing to the window which created it.
Either can then use the DOM (security depending) to access properties of the other one, or its documents,frames etc.
A: One can pass a message from the 'parent' window to the 'child' window:
in the 'parent window' open the child
var win = window.open(<window.location.href>, '_blank');
setTimeout(function(){
win.postMessage(SRFBfromEBNF,"*")
},1000);
win.focus();
the <window.location.href> placeholder is to be replaced according to the context
In the 'child'
window.addEventListener('message', function(event) {
if(event.srcElement.location.href==window.location.href){
/* do what you want with event.data */
}
});
The if test must be changed according to the context
A: In your parent window:
var yourValue = 'something';
window.open('/childwindow.html?yourKey=' + yourValue);
Then in childwindow.html:
var query = location.search.substring(1);
var parameters = {};
var keyValues = query.split(/&/);
for (var i = 0; i < keyValues.length; i++) {
    var keyValuePair = keyValues[i].split(/=/);
    var key = keyValuePair[0];
    var value = keyValuePair[1];
    parameters[key] = value;
}
alert(parameters['yourKey']);
There is potentially a lot of error checking you should be doing in the parsing of your key/value pairs but I'm not including it here. Maybe someone can provide a more inclusive Javascript query string parsing routine in a later answer.
A: You can pass variables, and reference to things in the parent window quite easily:
// open an empty sample window:
var win = open("");
win.document.write("<html><body><head></head><input value='Trigger handler in other window!' type='button' id='button'></input></body></html>");
// attach to button in target window, and use a handler in this one:
var button = win.document.getElementById('button');
button.onclick = function() {
alert("I'm in the first frame!");
}
A: Putting code to the matter, you can do this from the parent window:
var thisIsAnObject = {foo:'bar'};
var w = window.open("http://example.com");
w.myVariable = thisIsAnObject;
or this from the new window:
var myVariable = window.opener.thisIsAnObject;
I prefer the latter, because you will probably need to wait for the new page to load anyway, so that you can access its elements, or whatever you want.
A: Yes, it can be done as long as both windows are on the same domain. The window.open() function will return a handle to the new window. The child window can access the parent window using the DOM element "opener".
A: For me the following doesn't work
var A = {foo:'bar'};
var w = window.open("http://example.com");
w.B = A;
// in new window
var B = window.opener.B;
But this works (note the variable name)
var B = {foo:'bar'};
var w = window.open("http://example.com");
w.B = B;
// in new window
var B = window.opener.B;
Also var B should be global.
A: Alternatively, you can add it to the URL and let the scripting language (PHP, Perl, ASP, Python, Ruby, whatever) handle it on the other side. Something like:
var x = 10;
window.open('mypage.php?x='+x);
A: I have struggled to successfully pass arguments to the newly opened window.
Here is what I came up with :
function openWindow(path, callback /* , arg1 , arg2, ... */){
var args = Array.prototype.slice.call(arguments, 2); // retrieve the arguments
var w = window.open(path); // open the new window
w.addEventListener('load', afterLoadWindow.bind(w, args), false); // listen to the new window's load event
function afterLoadWindow(/* [arg1,arg2,...], loadEvent */){
callback.apply(this, arguments[0]); // execute the callbacks, passing the initial arguments (arguments[1] contains the load event)
}
}
Example call:
openWindow("/contact",function(firstname, lastname){
this.alert("Hello "+firstname+" "+lastname);
}, "John", "Doe");
Live example
http://jsfiddle.net/rj6o0jzw/1/
A: You can use window.name as a data transport between windows - and it works cross domain as well. Not officially supported, but from my understanding, actually works very well cross browser.
More info here on this Stackoverflow Post
A: The window.open() function will also allow this if you have a reference to the window created, provided it is on the same domain.
If the variable is used server side you should be using a $_SESSION variable (assuming you are using PHP).
A: Yes. Browsers clear all references to a window once it is gone, so you have to search for a class name of something on the main window, or use cookies as a homemade JavaScript reference.
I have a radio on my project page. When you turn the radio on, it starts in a popup window, and I control the links on the main page and show the playing status there. In Firefox it's easy, but in MSIE not easy at all. But it can be done.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "75"
}
|
Q: How should exceptions be planned at the architectural level? Are there any good resources for planning how exceptions will be used from an architecture perspective? (Or provide your suggestions directly here.) In projects on which I have worked I find that a few common Exceptions are used over and over again and tend to lose their meaning.
From: http://jamesjava.blogspot.com/2007/10/exception-plan.html
A: I generally find that obsessing over checked exceptions is overkill. Exceptions should be reserved for unexpected error conditions from which the program cannot reasonably recover. For these, I tend to agree with what you've observed, that special error types tend to lose their meaning.
As a general rule, throw an exception when all of the following are true:
*
*The condition can't be recovered from here.
*The caller can't reasonably be expected to recover from this condition.
*Somebody will want to debug the situation, so they might want to see the stack trace.
You'll find that if all of those are true, the type of the exception is not important, and you may as well throw java.lang.Error. If any of them are false, then an exception is probably overkill (do you really need the type, message, and stack trace?)
Instead, consider augmenting the return type of methods to indicate expected kinds of failure. For example, use an Option<A> type for methods where it may make sense to return no value in some cases. Use an Either<A,B> type for methods where you might need to return a value whose type depends on failure or success. Where you might have had specialised Exception subtypes before, you can instead just use Either<Fail,A> where Fail is just an enum over the kinds of failures that are expected.
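Java has no built-in Either, but a minimal sketch of the idea (illustrative only, not a standard API) might look like this:
// A result that is either a failure of type F or a success of type A.
public final class Either<F, A> {
    private final F failure;
    private final A success;

    private Either(F failure, A success) {
        this.failure = failure;
        this.success = success;
    }

    public static <F, A> Either<F, A> fail(F f) { return new Either<F, A>(f, null); }
    public static <F, A> Either<F, A> ok(A a)   { return new Either<F, A>(null, a); }

    public boolean isFailure() { return failure != null; }
    public F getFailure()      { return failure; }
    public A getSuccess()      { return success; }
}
A lookup method could then be declared as Either<LookupError, User> findUser(String name), with LookupError a plain enum of the expected failure kinds, instead of a family of exception subtypes.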
A: I half agree with Apocalisp's comment. Instances of Exception should be reserved for cases where a data or processing error has taken place, but can be recovered from by User or System intervention. Instances of RuntimeException should be reserved for those occasions when no intervention within the confines of your application can resolve the issue. These two types are thusly known as Checked and Unchecked exceptions.
An example of Unchecked exceptions would be if a physical resource (such as a database, or a message bus) is unavailable. A RuntimeException is good in this case because you are not planning for the resource to be unavailable, so your business logic doesn't have to constantly check for DatabaseUnavailableException, or something similar. You can handle RuntimeExceptions in a different way (maybe AOP sending an Email) to report the outage and let staff or support fix the physical issue.
Examples of Checked exceptions would be poor data input, or incomplete data access or something which actually fails business logic, but can be checked and recovered from. An example of this might be that a User you searched for is unavailable. It's a separate condition in your business logic that may be checked and handled (say by the View portion of your application which then reports the Exception's message as an error message to the User).
Many frameworks use this model and it seems to work better than I've seen otherwise.
That being said, many do replace checked exceptions with returned nulls, -1, empty String, or Number.MIN_VALUE. While this is okay, if you are not routinely expecting these values from a datasource or method, it should probably represented as an exception.
A: "Effective Java," 2nd. ed. Bloch, has some good advice in chapter 9.
"Hardcore Java," Simmons, has some good advice in chapter 5.
I'll also shamelessly mention that I've written a small tool to manage Java exception source code. Exception code is often very redundant, all that you usually care about is the type. My tool accepts a configuration file that names exception types, then generates code for those exceptions.
A: I try to ensure that the number of places using of try/finally greatly outnumbers the number of places using try/catch. catch will usually be reserved for catching exceptions at the top-level - where they can sensibly be handled, or dealing with exceptions from platform APIs.
In this case, as long as you give exceptions meaningful messages, and chain exceptions where necessary, not being able to identify an exception by its class isn't a big loss. I often throw RuntimeException rather than try to develop an exception class for every conceivable error case - but your opinion on that will depend on where you sit on the checked vs unchecked exception debate.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87362",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: What identifying information can a website capture? If the owner of a web site wants to track who their users are as much as possible, what things can they capture (and how). You might want to know about this in order to capture information on a site you create or, as a user, to prevent a site from capturing data on you.
Here is a starting list, but I'm sure I have missed some important ones:
*
*Referrer (what web page had the link you followed to get here). This is a HTTP header.
*IP Address of the machine you are browsing from. This is available with the HTTP headers.
*User Agent (what browser you are using). This is a HTTP header.
*Cookie placed on a previous visit. This is a header, available only if a cookie was placed earlier and was not deleted by the user.
*Flash Cookie placed on a previous visit. Some users turn off cookies, but very few know how to turn off Flash cookies. Works like a normal cookie although it depends on Flash.
*Web Bugs. Place something small (like a transparent single-pixel GIF) on the page that's served up from a 3rd party. Some third parties (such as DoubleClick) will have their own cookies and can correlate with other visits the user makes (for a fee!).
Those are the common ones I think of, but there have to be LOTS of unusual ones. For instance, this:
*
*Time on the user's clock. Use JavaScript to transmit it.
... which I had never heard of before reading it here.
ADDED LATER (after reading this):
Please try to put just ONE item per answer, then we can use voting up to sort out the better/more-interesting ones. The list below is probably less effective.
Ah well... NEXT time I ask a question like this I'll set it up better.
And here are some of the best answers I got:
*
*James points out that IE transmits the .NET framework version.
*AviewAnew points out that one can find what sites you have visited.
*Mecki points out that Screen Resolution can be determined.
*Mecki also points out that any auto-fill information your browser has cached can be determined, by creating a hidden field, then reading it with JavaScript.
*jjrv points out that Flash can list the fonts on the user's machine.
*Kent points out that you can find out what websites a person has visited.
*Silver Dragon points out you can determine the location of the mouse within the browsing window using Flash and AJAX.
*Jim points out that you can tell what language the user has configured in their browser from a HTTP header.
*Jim also mentions that you can detect whether people are using Greasemonkey or something similar to modify the page.
A: Modifications to your original:
*
*can be escaped ( i think its an option in some browsers )
*only avoidable with a proxy ( javascript can contravene this however with smart lookaround )
*is unreliable, easily forged.
*And assuming it was not wiped by browser closure ( session cookie ) and cookie is in the same domain/path
The real nasty ones are
*
*Using javascript to probe your network/lan
*Using javascript to access your firewall from behind the firewall and adjust its settings ( no joke )
*Using the feature of the "visited link" to determine which of a list of urls have been visited. ( deep history probing ! )
*Goodness knows what if the user has Windows/IE/ActiveX
A: *
*There's a header that can include information about a proxy server the user is using, and that can also include the user's IP address (in which case the other IP is the one of the proxy)
*Screen Resolution, Operating System, Color Depth, size of your taskbar (compare max and current resolution), if Java is enabled, Anti-Aliasing Fonts, Plugins Installed all via Javascript
*A Java applet can give you a bunch of information as well, but I don't know what.
*Sites you've visited
*Details of your local network such as active hosts, web servers. Paper Also outlines drive-by printing, drive-by router modification
And this is all assuming the attacker doesn't pull off arbitrary code execution
A: Javascript can get more information than just time. E.g. screen resolution (+ color depth) being one of them.
See Getting Screen Resolution with JS
Everything JS can capture, can be transmitted using AJAX without the user performing any interaction. Other examples are (not all will work in every browser):
*
*It can look into your browser history, e.g. what URL your browser would go if you hit back or forward.
*The language of your browser (Note: usually the HTTP request will also contain a list of preferred languages for the page you request. However this list is user editable in the prefs of many browser, while JS can actually find out what the language translation your browser is using in the interface)
*If your browser auto fills form fields (e.g. e-mail, username, etc.), JS can actually already read what your browser entered into the fields before you submitted the form (thus it can even read what your browser pre-filled there, even if you never submit the form at all).
A Java applet could also gather some information and transmit it, though there is not much information you wouldn't already get elsewhere. Since it's easy to get the IP of a visitor, it's possible to find out which online service he's using (looking up the IP at address services like IANA for USA or RIPE for Europe and so on) and there are services that translate IPs to country, so it's possible to find out where the user most likely is currently located.
A: Some additional info, that might be of interest:
*
*Using the ip address, one can resolve the hostname, net provider / organization the IP belongs to, and rough geographic location.
*Using the referer, the list of queries a specified client makes, and a reliable cookie mechanism, one can resolve the path the visitor makes (even clickthroughs to other sides, with AJAX and/or a forwarder page)
*Using flash, with a combination of AJAX, the mouse location within the browsing window can be captured
*The User Agent might contain information regarding operation system, installed .NET frameworks, and other curiosities
A: .NET framework versions are transmitted in IE, in the User Agent.
A: *
*Can the browser support JS
*Can the browser support flash
*Operating system platform
*Screen resolution
*Supports CSS
*Supports tables
A: Flash can give you a list of fonts on the user's machine among other things. Javascript can send information when the mouse stops over an ad without clicking it. You can also get the window size, whether the site is open in a frame, if popups or specific plugins have been blocked, looking for Javascript features can tell if the user agent header is correct or faked...
A: *
*You can usually determine which language the user speaks through the Accept-Language HTTP header.
*You can determine whether certain applications and browser plugins are installed by looking at the Accept HTTP header.
*Browser version/patchlevel and .NET framework version through the User-Agent HTTP header.
*Your ISP/Employer and geographical location through IP address.
*Whether or not you have visited particular URLs through CSS and/or timing load events. If a particular website has user-specific URIs, this could disclose whether you are a certain user on that site or not.
*Which fonts are available through measuring ems and/or Flash.
*Screen resolution, window size, timezone through JavaScript.
*Where you move your mouse and keystrokes through JavaScript. For instance, you can see what people type into text boxes even if they don't hit submit.
*Many UserJS/Greasemonkey scripts leak information (e.g. if you filter out certain people, the sites it is configured for may be able to find out who).
A: If you're concerned about your personal security (I'm not sure if that's what you're really getting after, so my apologies if this is misguided), you can always use a Tor network. If you use Firefox, you can use Torbutton for one click enabling. It has the benefit (drawback, to some), of disabling Flash because it's otherwise impossible to protect against Flash information leaks.
A: I need to dig up the link, but if the user is using IE, with common software titles installed, determining which ones are installed is possible.
A: As far as I know, it's possible to get clipboard data via javascript. Not sure how possible it is by default these days, but it was all the rage not long ago. I do believe IE still allows it.
People have a habit of leaving very important data in their clipboard, so this is pretty bad.
A: Late to the party here, but the website can also scan your ports to find what software you are running!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
}
|
Q: What are the major vulnerabilities of Windows 2003 + Apache? I am searching for a host for a new commercial website. Among other things, I'd like to know what the various OS - Webserver combinations have in terms of vulnerabilities. What are the vulnerabilities of Windows 2003 + Apache?
A: You could look here: http://httpd.apache.org/security/vulnerabilities_20.html
As for the windows side, it's windows. There are going to be vulnerabilities. Just stay up to date with service packs and patches, and you'll be fine.
A: As suggested here, I could check out the CERT Database.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Check if a class has a member function of a given signature I'm asking for a template trick to detect if a class has a specific member function of a given signature.
The problem is similar to the one cited here
http://www.gotw.ca/gotw/071.htm
but not the same: in that item of Sutter's book, he addressed the case where a class C MUST PROVIDE a member function with a particular signature, or else the program won't compile. In my problem I need to do something if a class has that function, and "something else" otherwise.
A similar problem was faced by boost::serialization, but I don't like the solution they adopted: a template function that by default invokes a free function (which you have to define) with a particular signature, unless you define a particular member function (in their case "serialize", taking 2 parameters of a given type) with a particular signature; otherwise a compile error happens. That is, to implement both intrusive and non-intrusive serialization.
I don't like that solution for two reasons:
*
*To be non intrusive you must override the global "serialize" function that is in boost::serialization namespace, so you have IN YOUR CLIENT CODE to open namespace boost and namespace serialization!
*The stack to resolve that
mess was 10 to 12 function invocations.
I need to define a custom behavior for classes that has not that member function, and my entities are inside different namespaces (and I don't want to override a global function defined in one namespace while I'm in another one)
Can you give me a hint to solve this puzzle?
A: You appear to want the detector idiom. The above answers are variations on this that work with C++11 or C++14.
The std::experimental library has features which do essentially this. Reworking an example from above, it might be:
#include <experimental/type_traits>
// serialized_method_t is a detector type for T.serialize(int) const
template<typename T>
using serialized_method_t = decltype(std::declval<const T&>().serialize(std::declval<int>()));
// has_serialize_t is std::true_type when T.serialize(int) exists,
// and false otherwise.
template<typename T>
using has_serialize_t = std::experimental::is_detected_t<serialized_method_t, T>;
If you can't use std::experimental, a rudimentary version can be made like this:
template <typename... Ts>
using void_t = void;
template <template <class...> class Trait, class AlwaysVoid, class... Args>
struct detector : std::false_type {};
template <template <class...> class Trait, class... Args>
struct detector<Trait, void_t<Trait<Args...>>, Args...> : std::true_type {};
// serialized_method_t is a detector type for T.serialize(int) const
template<typename T>
using serialized_method_t = decltype(std::declval<const T&>().serialize(std::declval<int>()));
// has_serialize_t is std::true_type when T.serialize(int) exists,
// and false otherwise.
template <typename T>
using has_serialize_t = typename detector<serialized_method_t, void, T>::type;
Since has_serialize_t is really either std::true_type or std::false_type, it can be used via any of the common SFINAE idioms:
template<class T>
std::enable_if_t<has_serialize_t<T>::value, std::string>
SerializeToString(const T& t) {
    // call t.serialize(...) here and build the string
}
Or by using dispatch with overload resolution:
template<class T>
std::string SerializeImpl(std::true_type, const T& t) {
// call serialize here.
}
template<class T>
std::string SerializeImpl(std::false_type, const T& t) {
// do something else here.
}
template<class T>
std::string Serialize(const T& t) {
return SerializeImpl(has_serialize_t<T>{}, t);
}
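For instance, a quick sanity check of the detector (the WithSerialize/WithoutSerialize names are mine, purely for illustration) could look like:
struct WithSerialize {
    // Matches the probed expression: serialize(int) callable on a const object.
    int serialize(int) const { return 42; }
};
struct WithoutSerialize {};
static_assert(has_serialize_t<WithSerialize>::value,
              "serialize(int) const is present, so the detector yields true_type");
static_assert(!has_serialize_t<WithoutSerialize>::value,
              "no serialize member, so the detector yields false_type");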
A: You can use std::is_member_function_pointer
#include <type_traits>
class A {
public:
    void foo() {}
};
bool test = std::is_member_function_pointer<decltype(&A::foo)>::value;
Note, however, that decltype(&A::foo) is a hard error rather than a detectable failure when the class has no member named foo at all (and &A::foo is ambiguous when foo is overloaded), so this inspects a member you already know exists; it is not a detection technique by itself.
A: I came across the same kind of problem myself, and found the proposed solutions in here very interesting... but had the requirement for a solution that:
*
*Detects inherited functions as well;
*Is compatible with non C++11 ready compilers (so no decltype)
Found another thread proposing something like this, based on a BOOST discussion.
Here is a generalisation of the proposed solution, as two macro declarations that generate traits classes, following the model of the boost::has_* classes.
#include <boost/type_traits/is_class.hpp>
#include <boost/mpl/vector.hpp>
/// Has constant function
/** \param func_ret_type Function return type
\param func_name Function name
\param ... Variadic arguments are for the function parameters
*/
#define DECLARE_TRAITS_HAS_FUNC_C(func_ret_type, func_name, ...) \
__DECLARE_TRAITS_HAS_FUNC(1, func_ret_type, func_name, ##__VA_ARGS__)
/// Has non-const function
/** \param func_ret_type Function return type
\param func_name Function name
\param ... Variadic arguments are for the function parameters
*/
#define DECLARE_TRAITS_HAS_FUNC(func_ret_type, func_name, ...) \
__DECLARE_TRAITS_HAS_FUNC(0, func_ret_type, func_name, ##__VA_ARGS__)
// Traits content
#define __DECLARE_TRAITS_HAS_FUNC(func_const, func_ret_type, func_name, ...) \
template \
< typename Type, \
bool is_class = boost::is_class<Type>::value \
> \
class has_func_ ## func_name; \
template<typename Type> \
class has_func_ ## func_name<Type,false> \
{public: \
BOOST_STATIC_CONSTANT( bool, value = false ); \
typedef boost::false_type type; \
}; \
template<typename Type> \
class has_func_ ## func_name<Type,true> \
{ struct yes { char _foo; }; \
struct no { yes _foo[2]; }; \
struct Fallback \
{ func_ret_type func_name( __VA_ARGS__ ) \
UTILITY_OPTIONAL(func_const,const) {} \
}; \
struct Derived : public Type, public Fallback {}; \
template <typename T, T t> class Helper{}; \
template <typename U> \
static no deduce(U*, Helper \
< func_ret_type (Fallback::*)( __VA_ARGS__ ) \
UTILITY_OPTIONAL(func_const,const), \
&U::func_name \
>* = 0 \
); \
static yes deduce(...); \
public: \
BOOST_STATIC_CONSTANT( \
bool, \
value = sizeof(yes) \
== sizeof( deduce( static_cast<Derived*>(0) ) ) \
); \
typedef ::boost::integral_constant<bool,value> type; \
BOOST_STATIC_CONSTANT(bool, is_const = func_const); \
typedef func_ret_type return_type; \
typedef ::boost::mpl::vector< __VA_ARGS__ > args_type; \
}
// Utility functions
#define UTILITY_OPTIONAL(condition, ...) UTILITY_INDIRECT_CALL( __UTILITY_OPTIONAL_ ## condition , ##__VA_ARGS__ )
#define UTILITY_INDIRECT_CALL(macro, ...) macro ( __VA_ARGS__ )
#define __UTILITY_OPTIONAL_0(...)
#define __UTILITY_OPTIONAL_1(...) __VA_ARGS__
These macros expand to a traits class with the following prototype:
template<class T>
class has_func_[func_name]
{
public:
/// Function definition result value
/** Tells if the tested function is defined for type T or not.
*/
static const bool value = true | false;
/// Function definition result type
/** Type representing the value attribute usable in
http://www.boost.org/doc/libs/1_53_0/libs/utility/enable_if.html
*/
typedef boost::integral_constant<bool,value> type;
/// Tested function constness indicator
/** Indicates if the tested function is const or not.
This value is not deduced, it is forced depending
on the user call to one of the traits generators.
*/
static const bool is_const = true | false;
/// Tested function return type
/** Indicates the return type of the tested function.
This value is not deduced, it is forced depending
on the user's arguments to the traits generators.
*/
typedef func_ret_type return_type;
/// Tested function arguments types
/** Indicates the arguments types of the tested function.
This value is not deduced, it is forced depending
on the user's arguments to the traits generators.
*/
typedef ::boost::mpl::vector< __VA_ARGS__ > args_type;
};
So what is the typical usage one can do out of this?
// We enclose the traits class into
// a namespace to avoid collisions
namespace ns_0 {
// Next line will declare the traits class
// to detect the member function void foo(int,int) const
DECLARE_TRAITS_HAS_FUNC_C(void, foo, int, int);
}
// we can use BOOST to help in using the traits
#include <boost/utility/enable_if.hpp>
// Here is a function that is active for types
// declaring the good member function
template<typename T> inline
typename boost::enable_if< ns_0::has_func_foo<T> >::type
foo_bar(const T &_this_, int a=0, int b=1)
{ _this_.foo(a,b);
}
// Here is a function that is active for types
// NOT declaring the good member function
template<typename T> inline
typename boost::disable_if< ns_0::has_func_foo<T> >::type
foo_bar(const T &_this_, int a=0, int b=1)
{ default_foo(_this_,a,b);
}
// Let us declare test types
struct empty
{
};
struct direct_foo
{
void foo(int,int);
};
struct direct_const_foo
{
void foo(int,int) const;
};
struct inherited_const_foo :
public direct_const_foo
{
};
// Now anywhere in your code you can seamlessly use
// the foo_bar function on any object:
void test()
{
int a;
foo_bar(a); // calls default_foo
empty b;
foo_bar(b); // calls default_foo
direct_foo c;
foo_bar(c); // calls default_foo (member function is not const)
direct_const_foo d;
foo_bar(d); // calls d.foo (member function is const)
inherited_const_foo e;
foo_bar(e); // calls e.foo (inherited member function)
}
A: To accomplish this we'll need to use:
*
*Function template overloading with differing return types according to whether the method is available
*In keeping with the meta-conditionals in the type_traits header, we'll want to return a true_type or false_type from our overloads
*Declare the true_type overload expecting an int and the false_type overload expecting Variadic Parameters to exploit: "The lowest priority of the ellipsis conversion in overload resolution"
*In defining the template specification for the true_type function we will use declval and decltype allowing us to detect the function independent of return type differences or overloads between methods
You can see a live example of this here. But I'll also explain it below:
Say I want to check for the existence of a function named test which takes a type convertible from int; then I'd need to declare these two functions:
template <typename T, typename S = decltype(declval<T>().test(declval<int>()))> static true_type hasTest(int);
template <typename T> static false_type hasTest(...);
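The test classes a, b and c live only in the linked example; a plausible reconstruction, based purely on the descriptions below, is:
struct a {
    void test();      // the zero-argument overload
    void test(int);   // the overload the probe accepts
};
struct b {
    int test(double); // accepted because int converts to double
};
struct c {};          // nothing named test that accepts something convertible from int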
*
*decltype(hasTest<a>(0))::value is true (Note there is no need to create special functionality to deal with the void a::test() overload, the void a::test(int) is accepted)
*decltype(hasTest<b>(0))::value is true (Because int is convertable to double int b::test(double) is accepted, independent of return type)
*decltype(hasTest<c>(0))::value is false (c does not have a method named test that accepts a type convertible from int, therefore this is not accepted)
This solution has 2 drawbacks:
*
*Requires a per method declaration of a pair of functions
*Creates namespace pollution particularly if we want to test for similar names, for example what would we name a function that wanted to test for a test() method?
So it's important that these functions be declared in a details namespace, or ideally if they are only to be used with a class, they should be declared privately by that class. To that end I've written a macro to help you abstract this information:
#define FOO(FUNCTION, DEFINE) template <typename T, typename S = decltype(declval<T>().FUNCTION)> static true_type __ ## DEFINE(int); \
template <typename T> static false_type __ ## DEFINE(...); \
template <typename T> using DEFINE = decltype(__ ## DEFINE<T>(0));
You could use this like:
namespace details {
FOO(test(declval<int>()), test_int)
FOO(test(), test_void)
}
Subsequently calling details::test_int<a>::value or details::test_void<a>::value would yield true or false for the purposes of inline code or meta-programming.
A: The accepted answer to this question of compiletime member-function
introspection, although it is justly popular, has a snag which can be observed
in the following program:
#include <type_traits>
#include <iostream>
#include <memory>
/* Here we apply the accepted answer's technique to probe for the
the existence of `E T::operator*() const`
*/
template<typename T, typename E>
struct has_const_reference_op
{
template<typename U, E (U::*)() const> struct SFINAE {};
template<typename U> static char Test(SFINAE<U, &U::operator*>*);
template<typename U> static int Test(...);
static const bool value = sizeof(Test<T>(0)) == sizeof(char);
};
using namespace std;
/* Here we test the `std::` smart pointer templates, including the
deprecated `auto_ptr<T>`, to determine in each case whether
T = (the template instantiated for `int`) provides
`int & T::operator*() const` - which all of them in fact do.
*/
int main(void)
{
cout << has_const_reference_op<auto_ptr<int>,int &>::value;
cout << has_const_reference_op<unique_ptr<int>,int &>::value;
cout << has_const_reference_op<shared_ptr<int>,int &>::value << endl;
return 0;
}
Built with GCC 4.6.3, the program outputs 110 - informing us that
T = std::shared_ptr<int> does not provide int & T::operator*() const.
If you are not already wise to this gotcha, then a look at the definition of
std::shared_ptr<T> in the header <memory> will shed light. In that
implementation, std::shared_ptr<T> is derived from a base class
from which it inherits operator*() const. So the template instantiation
SFINAE<U, &U::operator*> that constitutes "finding" the operator for
U = std::shared_ptr<T> will not happen, because std::shared_ptr<T> has no
operator*() in its own right and template instantiation does not
"do inheritance".
This snag does not affect the well-known SFINAE approach, using "The sizeof() Trick",
for detecting merely whether T has some member function mf (see e.g.
this answer and comments). But
establishing that T::mf exists is often (usually?) not good enough: you may
also need to establish that it has a desired signature. That is where the
illustrated technique scores. The pointerized variant of the desired signature
is inscribed in a parameter of a template type that must be satisfied by
&T::mf for the SFINAE probe to succeed. But this template instantiating
technique gives the wrong answer when T::mf is inherited.
A safe SFINAE technique for compiletime introspection of T::mf must avoid the
use of &T::mf within a template argument to instantiate a type upon which SFINAE
function template resolution depends. Instead, SFINAE template function
resolution can depend only upon exactly pertinent type declarations used
as argument types of the overloaded SFINAE probe function.
By way of an answer to the question that abides by this constraint I'll
illustrate for compiletime detection of E T::operator*() const, for
arbitrary T and E. The same pattern will apply mutatis mutandis
to probe for any other member method signature.
#include <type_traits>
/*! The template `has_const_reference_op<T,E>` exports a
boolean constant `value` that is true iff `T` provides
`E T::operator*() const`
*/
template< typename T, typename E>
struct has_const_reference_op
{
/* SFINAE operator-has-correct-sig :) */
template<typename A>
static std::true_type test(E (A::*)() const) {
return std::true_type();
}
/* SFINAE operator-exists :) */
template <typename A>
static decltype(test(&A::operator*))
test(decltype(&A::operator*),void *) {
/* Operator exists. What about sig? */
typedef decltype(test(&A::operator*)) return_type;
return return_type();
}
/* SFINAE game over :( */
template<typename A>
static std::false_type test(...) {
return std::false_type();
}
/* This will be either `std::true_type` or `std::false_type` */
typedef decltype(test<T>(0,0)) type;
static const bool value = type::value; /* Which is it? */
};
In this solution, the overloaded SFINAE probe function test() is "invoked
recursively". (Of course it isn't actually invoked at all; it merely has
the return types of hypothetical invocations resolved by the compiler.)
We need to probe for at least one and at most two points of information:
*
*Does T::operator*() exist at all? If not, we're done.
*Given that T::operator*() exists, is its signature
E T::operator*() const?
We get the answers by evaluating the return type of a single call
to test(0,0). That's done by:
typedef decltype(test<T>(0,0)) type;
This call might be resolved to the /* SFINAE operator-exists :) */ overload
of test(), or it might resolve to the /* SFINAE game over :( */ overload.
It can't resolve to the /* SFINAE operator-has-correct-sig :) */ overload,
because that one expects just one argument and we are passing two.
Why are we passing two? Simply to force the resolution to exclude
/* SFINAE operator-has-correct-sig :) */. The second argument has no other significance.
This call to test(0,0) will resolve to /* SFINAE operator-exists :) */ just
in case the first argument 0 satisfies the first parameter type of that overload,
which is decltype(&A::operator*), with A = T. 0 will satisfy that type
just in case T::operator* exists.
Let's suppose the compiler says Yes to that. Then it's going with
/* SFINAE operator-exists :) */ and it needs to determine the return type of
the function call, which in that case is decltype(test(&A::operator*)) -
the return type of yet another call to test().
This time, we're passing just one argument, &A::operator*, which we now
know exists, or we wouldn't be here. A call to test(&A::operator*) might
resolve either to /* SFINAE operator-has-correct-sig :) */ or again to
/* SFINAE game over :( */. The call will match
/* SFINAE operator-has-correct-sig :) */ just in case &A::operator* satisfies
the single parameter type of that overload, which is E (A::*)() const,
with A = T.
The compiler will say Yes here if T::operator* has that desired signature,
and then again has to evaluate the return type of the overload. No more
"recursions" now: it is std::true_type.
If the compiler does not choose /* SFINAE operator-exists :) */ for the
call test(0,0) or does not choose /* SFINAE operator-has-correct-sig :) */
for the call test(&A::operator*), then in either case it goes with
/* SFINAE game over :( */ and the final return type is std::false_type.
Here is a test program that shows the template producing the expected
answers in a varied sample of cases (GCC 4.6.3 again).
// To test
struct empty{};
// To test
struct int_ref
{
int & operator*() const {
return *_pint;
}
int & foo() const {
return *_pint;
}
int * _pint;
};
// To test
struct sub_int_ref : int_ref{};
// To test
template<typename E>
struct ee_ref
{
E & operator*() {
return *_pe;
}
E & foo() const {
return *_pe;
}
E * _pe;
};
// To test
struct sub_ee_ref : ee_ref<char>{};
using namespace std;
#include <iostream>
#include <memory>
#include <vector>
int main(void)
{
cout << "Expect Yes" << endl;
cout << has_const_reference_op<auto_ptr<int>,int &>::value;
cout << has_const_reference_op<unique_ptr<int>,int &>::value;
cout << has_const_reference_op<shared_ptr<int>,int &>::value;
cout << has_const_reference_op<std::vector<int>::iterator,int &>::value;
cout << has_const_reference_op<std::vector<int>::const_iterator,
int const &>::value;
cout << has_const_reference_op<int_ref,int &>::value;
cout << has_const_reference_op<sub_int_ref,int &>::value << endl;
cout << "Expect No" << endl;
cout << has_const_reference_op<int *,int &>::value;
cout << has_const_reference_op<unique_ptr<int>,char &>::value;
cout << has_const_reference_op<unique_ptr<int>,int const &>::value;
cout << has_const_reference_op<unique_ptr<int>,int>::value;
cout << has_const_reference_op<unique_ptr<long>,int &>::value;
cout << has_const_reference_op<int,int>::value;
cout << has_const_reference_op<std::vector<int>,int &>::value;
cout << has_const_reference_op<ee_ref<int>,int &>::value;
cout << has_const_reference_op<sub_ee_ref,int &>::value;
cout << has_const_reference_op<empty,int &>::value << endl;
return 0;
}
Are there new flaws in this idea? Can it be made more generic without once again
falling foul of the snag it avoids?
A: To be non-intrusive, you can also put serialize in the namespace of the class being serialised, or of the archive class, thanks to Koenig lookup. See Namespaces for Free Function Overrides for more details. :-)
Opening up any given namespace to implement a free function is Simply Wrong. (e.g., you're not supposed to open up namespace std to implement swap for your own types, but should use Koenig lookup instead.)
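A minimal sketch of the ADL alternative (Widget and Archive are illustrative names of mine, not from boost::serialization): the free serialize overload lives in the class's own namespace, and an unqualified call finds it via Koenig lookup.
namespace myapp {
    struct Widget { int id; };
    // Found through argument-dependent lookup when serialize(ar, w) is called unqualified.
    template <class Archive>
    void serialize(Archive& ar, Widget& w) {
        ar & w.id; // hypothetical archive interface
    }
}
template <class Archive, class T>
void save(Archive& ar, T& t) {
    serialize(ar, t); // unqualified call: ADL searches T's namespace
}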
A: Here are some usage snippets:
*The guts for all this are farther down
Check for member x in a given class. Could be var, func, class, union, or enum:
CREATE_MEMBER_CHECK(x);
bool has_x = has_member_x<class_to_check_for_x>::value;
Check for member function void x():
//Func signature MUST have T as template variable here... simpler this way :\
CREATE_MEMBER_FUNC_SIG_CHECK(x, void (T::*)(), void__x);
bool has_func_sig_void__x = has_member_func_void__x<class_to_check_for_x>::value;
Check for member variable x:
CREATE_MEMBER_VAR_CHECK(x);
bool has_var_x = has_member_var_x<class_to_check_for_x>::value;
Check for member class x:
CREATE_MEMBER_CLASS_CHECK(x);
bool has_class_x = has_member_class_x<class_to_check_for_x>::value;
Check for member union x:
CREATE_MEMBER_UNION_CHECK(x);
bool has_union_x = has_member_union_x<class_to_check_for_x>::value;
Check for member enum x:
CREATE_MEMBER_ENUM_CHECK(x);
bool has_enum_x = has_member_enum_x<class_to_check_for_x>::value;
Check for any member function x regardless of signature:
CREATE_MEMBER_CHECK(x);
CREATE_MEMBER_VAR_CHECK(x);
CREATE_MEMBER_CLASS_CHECK(x);
CREATE_MEMBER_UNION_CHECK(x);
CREATE_MEMBER_ENUM_CHECK(x);
CREATE_MEMBER_FUNC_CHECK(x);
bool has_any_func_x = has_member_func_x<class_to_check_for_x>::value;
OR
CREATE_MEMBER_CHECKS(x); //Just stamps out the same macro calls as above.
bool has_any_func_x = has_member_func_x<class_to_check_for_x>::value;
Details and core:
/*
- Multiple inheritance forces ambiguity of member names.
- SFINAE is used to make aliases to member names.
- Expression SFINAE is used in just one generic has_member that can accept
any alias we pass it.
*/
//Variadic to force ambiguity of class members. C++11 and up.
template <typename... Args> struct ambiguate : public Args... {};
//Non-variadic version of the line above.
//template <typename A, typename B> struct ambiguate : public A, public B {};
template<typename A, typename = void>
struct got_type : std::false_type {};
template<typename A>
struct got_type<A> : std::true_type {
typedef A type;
};
template<typename T, T>
struct sig_check : std::true_type {};
template<typename Alias, typename AmbiguitySeed>
struct has_member {
template<typename C> static char ((&f(decltype(&C::value))))[1];
template<typename C> static char ((&f(...)))[2];
//Make sure the member name is consistently spelled the same.
static_assert(
(sizeof(f<AmbiguitySeed>(0)) == 1)
, "Member name specified in AmbiguitySeed is different from member name specified in Alias, or wrong Alias/AmbiguitySeed has been specified."
);
static bool const value = sizeof(f<Alias>(0)) == 2;
};
Macros (El Diablo!):
CREATE_MEMBER_CHECK:
//Check for any member with given name, whether var, func, class, union, enum.
#define CREATE_MEMBER_CHECK(member) \
\
template<typename T, typename = std::true_type> \
struct Alias_##member; \
\
template<typename T> \
struct Alias_##member < \
T, std::integral_constant<bool, got_type<decltype(&T::member)>::value> \
> { static const decltype(&T::member) value; }; \
\
struct AmbiguitySeed_##member { char member; }; \
\
template<typename T> \
struct has_member_##member { \
static const bool value \
= has_member< \
Alias_##member<ambiguate<T, AmbiguitySeed_##member>> \
, Alias_##member<AmbiguitySeed_##member> \
>::value \
; \
}
CREATE_MEMBER_VAR_CHECK:
//Check for member variable with given name.
#define CREATE_MEMBER_VAR_CHECK(var_name) \
\
template<typename T, typename = std::true_type> \
struct has_member_var_##var_name : std::false_type {}; \
\
template<typename T> \
struct has_member_var_##var_name< \
T \
, std::integral_constant< \
bool \
, !std::is_member_function_pointer<decltype(&T::var_name)>::value \
> \
> : std::true_type {}
CREATE_MEMBER_FUNC_SIG_CHECK:
//Check for member function with given name AND signature.
#define CREATE_MEMBER_FUNC_SIG_CHECK(func_name, func_sig, templ_postfix) \
\
template<typename T, typename = std::true_type> \
struct has_member_func_##templ_postfix : std::false_type {}; \
\
template<typename T> \
struct has_member_func_##templ_postfix< \
T, std::integral_constant< \
bool \
, sig_check<func_sig, &T::func_name>::value \
> \
> : std::true_type {}
CREATE_MEMBER_CLASS_CHECK:
//Check for member class with given name.
#define CREATE_MEMBER_CLASS_CHECK(class_name) \
\
template<typename T, typename = std::true_type> \
struct has_member_class_##class_name : std::false_type {}; \
\
template<typename T> \
struct has_member_class_##class_name< \
T \
, std::integral_constant< \
bool \
, std::is_class< \
typename got_type<typename T::class_name>::type \
>::value \
> \
> : std::true_type {}
CREATE_MEMBER_UNION_CHECK:
//Check for member union with given name.
#define CREATE_MEMBER_UNION_CHECK(union_name) \
\
template<typename T, typename = std::true_type> \
struct has_member_union_##union_name : std::false_type {}; \
\
template<typename T> \
struct has_member_union_##union_name< \
T \
, std::integral_constant< \
bool \
, std::is_union< \
typename got_type<typename T::union_name>::type \
>::value \
> \
> : std::true_type {}
CREATE_MEMBER_ENUM_CHECK:
//Check for member enum with given name.
#define CREATE_MEMBER_ENUM_CHECK(enum_name) \
\
template<typename T, typename = std::true_type> \
struct has_member_enum_##enum_name : std::false_type {}; \
\
template<typename T> \
struct has_member_enum_##enum_name< \
T \
, std::integral_constant< \
bool \
, std::is_enum< \
typename got_type<typename T::enum_name>::type \
>::value \
> \
> : std::true_type {}
CREATE_MEMBER_FUNC_CHECK:
//Check for function with given name, any signature.
#define CREATE_MEMBER_FUNC_CHECK(func) \
template<typename T> \
struct has_member_func_##func { \
static const bool value \
= has_member_##func<T>::value \
&& !has_member_var_##func<T>::value \
&& !has_member_class_##func<T>::value \
&& !has_member_union_##func<T>::value \
&& !has_member_enum_##func<T>::value \
; \
}
CREATE_MEMBER_CHECKS:
//Create all the checks for one member. Does NOT include func sig checks.
#define CREATE_MEMBER_CHECKS(member) \
CREATE_MEMBER_CHECK(member); \
CREATE_MEMBER_VAR_CHECK(member); \
CREATE_MEMBER_CLASS_CHECK(member); \
CREATE_MEMBER_UNION_CHECK(member); \
CREATE_MEMBER_ENUM_CHECK(member); \
CREATE_MEMBER_FUNC_CHECK(member)
A: Okay. Second try. It's okay if you don't like this one either, I'm looking for more ideas.
Herb Sutter's article talks about traits. So you can have a traits class whose default instantiation has the fallback behaviour, and for each class where your member function exists, then the traits class is specialised to invoke the member function. I believe Herb's article mentions a technique to do this so that it doesn't involve lots of copying and pasting.
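A minimal sketch of that traits idea (all names here are mine, for illustration only): the primary template supplies the fallback behaviour, and each class that does provide the member gets a specialization that forwards to it.
template <class T>
struct SerializeTraits {
    static void apply(T&) { /* fallback: do "something else" */ }
};
struct Widget {
    void serialize() {}
};
// "Tagging" Widget: this specialization routes to the member function.
template <>
struct SerializeTraits<Widget> {
    static void apply(Widget& w) { w.serialize(); }
};
template <class T>
void doSerialize(T& t) { SerializeTraits<T>::apply(t); } // single entry point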
Like I said, though, perhaps you don't want the extra work involved with "tagging" classes that do implement that member. In which case, I'm looking at a third solution....
A: If you are using facebook folly, there are out of box macro to help you:
#include <folly/Traits.h>
namespace {
FOLLY_CREATE_HAS_MEMBER_FN_TRAITS(has_test_traits, test);
} // unnamed-namespace
void some_func() {
cout << "Does class Foo have a member int test() const? "
<< boolalpha << has_test_traits<Foo, int() const>::value;
}
Though the implementation details are the same as in the previous answer, using a library is simpler.
A: I had a similar need and came across this SO question. There are many interesting/powerful solutions proposed here, though it is a bit long for just a specific need: detect if a class has a member function with a precise signature. So I did some reading/testing and came up with my own version that could be of interest. It detects:
*
*static member function
*non-static member function
*non-static member function const
with a precise signature. Since I don't need to capture any signature (that'd require a more complicated solution), this one suits me. It basically uses enable_if_t.
struct Foo{ static int sum(int, const double&){return 0;} };
struct Bar{ int calc(int, const double&) {return 1;} };
struct BarConst{ int calc(int, const double&) const {return 1;} };
// Note : second typename can be void or anything, as long as it is consistent with the result of enable_if_t
template<typename T, typename = T> struct has_static_sum : std::false_type {};
template<typename T>
struct has_static_sum<T,
std::enable_if_t<std::is_same<decltype(T::sum), int(int, const double&)>::value,T>
> : std::true_type {};
template<typename T, typename = T> struct has_calc : std::false_type {};
template<typename T>
struct has_calc <T,
std::enable_if_t<std::is_same<decltype(&T::calc), int(T::*)(int, const double&)>::value,T>
> : std::true_type {};
template<typename T, typename = T> struct has_calc_const : std::false_type {};
template<typename T>
struct has_calc_const <T,
std::enable_if_t<std::is_same<decltype(&T::calc), int(T::*)(int, const double&) const>::value,T>
> : std::true_type {};
int main ()
{
constexpr bool has_sum_val = has_static_sum<Foo>::value;
constexpr bool not_has_sum_val = !has_static_sum<Bar>::value;
constexpr bool has_calc_val = has_calc<Bar>::value;
constexpr bool not_has_calc_val = !has_calc<Foo>::value;
constexpr bool has_calc_const_val = has_calc_const<BarConst>::value;
constexpr bool not_has_calc_const_val = !has_calc_const<Bar>::value;
std::cout<< " has_sum_val " << has_sum_val << std::endl
<< " not_has_sum_val " << not_has_sum_val << std::endl
<< " has_calc_val " << has_calc_val << std::endl
<< " not_has_calc_val " << not_has_calc_val << std::endl
<< " has_calc_const_val " << has_calc_const_val << std::endl
<< "not_has_calc_const_val " << not_has_calc_const_val << std::endl;
}
Output :
has_sum_val 1
not_has_sum_val 1
has_calc_val 1
not_has_calc_val 1
has_calc_const_val 1
not_has_calc_const_val 1
A: Here's a possible implementation relying on C++11 features. It correctly detects the function even if it's inherited (unlike the solution in the accepted answer, as Mike Kinghan observes in his answer).
The function this snippet tests for is called serialize:
#include <type_traits>
// Primary template with a static assertion
// for a meaningful error message
// if it ever gets instantiated.
// We could leave it undefined if we didn't care.
template<typename, typename T>
struct has_serialize {
static_assert(
std::integral_constant<T, false>::value,
"Second template parameter needs to be of function type.");
};
// specialization that does the checking
template<typename C, typename Ret, typename... Args>
struct has_serialize<C, Ret(Args...)> {
private:
template<typename T>
static constexpr auto check(T*)
-> typename
std::is_same<
decltype( std::declval<T>().serialize( std::declval<Args>()... ) ),
Ret // ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>::type; // attempt to call it and see if the return type is correct
template<typename>
static constexpr std::false_type check(...);
typedef decltype(check<C>(0)) type;
public:
static constexpr bool value = type::value;
};
Usage:
struct X {
int serialize(const std::string&) { return 42; }
};
struct Y : X {};
std::cout << has_serialize<Y, int(const std::string&)>::value; // will print 1
A: This should be sufficient if you know the name of the member function you are expecting. In this case, the function bla fails to instantiate if there is no such member function (writing one that works anyway is tough because there is no function partial specialization; you may need to use class templates). Also, the enable struct (which is similar to enable_if) could be templated on the type of function you want the class to have as a member.
template <typename T, int (T::*) ()> struct enable { typedef T type; };
template <typename T> typename enable<T, &T::i>::type bla (T&);
struct A { void i(); };
struct B { int i(); };
int main()
{
A a;
B b;
bla(b);
bla(a); // compile error: A::i does not have the required int() signature
}
A: Here is a simpler take on Mike Kinghan's answer. This will detect inherited methods. It will also check for the exact signature (unlike jrok's approach which allows argument conversions).
template <class C>
class HasGreetMethod
{
template <class T>
static std::true_type testSignature(void (T::*)(const char*) const);
template <class T>
static decltype(testSignature(&T::greet)) test(std::nullptr_t);
template <class T>
static std::false_type test(...);
public:
using type = decltype(test<C>(nullptr));
static const bool value = type::value;
};
struct A { void greet(const char* name) const; };
struct Derived : A { };
static_assert(HasGreetMethod<Derived>::value, "");
Runnable example
A: With C++20 this becomes much simpler. Say we want to test if a class T has a member function void T::resize(typename T::size_type). For example, std::vector<U> has such a member function. Then,
#include <concepts>  // std::same_as
#include <utility>   // std::declval
template<typename T>
concept has_resize_member_func = requires {
    typename T::size_type;
    { std::declval<T>().resize(std::declval<typename T::size_type>()) } -> std::same_as<void>;
};
and the usage is
static_assert(has_resize_member_func<std::string>, "");
static_assert(has_resize_member_func<int> == false, "");
A: I'm not sure if I understand you correctly, but you may exploit SFINAE to detect function presence at compile-time. Example from my code (tests if class has member function size_t used_memory() const).
template<typename T>
struct HasUsedMemoryMethod
{
template<typename U, size_t (U::*)() const> struct SFINAE {};
template<typename U> static char Test(SFINAE<U, &U::used_memory>*);
template<typename U> static int Test(...);
static const bool Has = sizeof(Test<T>(0)) == sizeof(char);
};
template<typename TMap>
void ReportMemUsage(const TMap& m, std::true_type)
{
// We may call used_memory() on m here.
}
template<typename TMap>
void ReportMemUsage(const TMap&, std::false_type)
{
}
template<typename TMap>
void ReportMemUsage(const TMap& m)
{
ReportMemUsage(m,
std::integral_constant<bool, HasUsedMemoryMethod<TMap>::Has>());
}
A: Without C++11 support (decltype) this might work:
SSCCE
#include <iostream>
using namespace std;
struct A { void foo(void); };
struct Aa: public A { };
struct B { };
struct retA { int foo(void); };
struct argA { void foo(double); };
struct constA { void foo(void) const; };
struct varA { int foo; };
template<typename T>
struct FooFinder {
typedef char true_type[1];
typedef char false_type[2];
template<int>
struct TypeSink;
template<class U>
static true_type &matchType(U);
template<class U>
static true_type &test(TypeSink<sizeof( matchType<void (U::*)(void)>( &U::foo ) )> *);
template<class U>
static false_type &test(...);
enum { value = (sizeof(test<T>(0)) == sizeof(true_type)) };
};
int main() {
cout << FooFinder<A>::value << endl;
cout << FooFinder<Aa>::value << endl;
cout << FooFinder<B>::value << endl;
cout << FooFinder<retA>::value << endl;
cout << FooFinder<argA>::value << endl;
cout << FooFinder<constA>::value << endl;
cout << FooFinder<varA>::value << endl;
}
How it hopefully works
A, Aa and B are the classes in question, Aa being the special one that inherits the member we're looking for.
In FooFinder, true_type and false_type are replacements for the corresponding C++11 classes; for the understanding of template metaprogramming, they also reveal the very basis of the SFINAE sizeof trick.
TypeSink is a template struct that is used later to sink the integral result of the sizeof operator into a template instantiation to form a type.
The matchType function is another SFINAE kind of template that is left without a generic counterpart. It can hence only be instantiated if the type of its argument matches the type it was specialized for.
The two test functions together with the enum declaration finally form the central SFINAE pattern. There is a generic one using an ellipsis that returns false_type, and a counterpart with a more specific argument to take precedence.
To be able to instantiate the test function with a template argument of T, the matchType function must be instantiated, as its return type is required to instantiate the TypeSink argument. The caveat is that &U::foo, being wrapped in a function argument, is not referred to from within a template argument specialization, so inherited member lookup still takes place.
A: Building on jrok's answer, I have avoided using nested template classes and/or functions.
#include <type_traits>
#define CHECK_NESTED_FUNC(fName) \
template <typename, typename, typename = std::void_t<>> \
struct _has_##fName \
: public std::false_type {}; \
\
template <typename Class, typename Ret, typename... Args> \
struct _has_##fName<Class, Ret(Args...), \
std::void_t<decltype(std::declval<Class>().fName(std::declval<Args>()...))>> \
: public std::is_same<decltype(std::declval<Class>().fName(std::declval<Args>()...)), Ret> \
{}; \
\
template <typename Class, typename Signature> \
using has_##fName = _has_##fName<Class, Signature>;
#define HAS_NESTED_FUNC(Class, Func, Signature) has_##Func<Class, Signature>::value
We can use the above macros as below:
class Foo
{
public:
void Bar(int, const char *) {}
};
CHECK_NESTED_FUNC(Bar); // generate required metafunctions
int main()
{
using namespace std;
cout << boolalpha
<< HAS_NESTED_FUNC(Foo, Bar, void(int, const char *)) // prints true
<< endl;
return 0;
}
Suggestions are welcome.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87372",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "170"
}
|
Q: javascript library to show photo album I visited a few web sites in the past where they had a set of photo thumbnails, and clicking on one of them created a cool effect of an expanding popup showing the full size image.
Is there any available free JavaScript library that will do this?
I'm interested mostly in the popup effect and less in the rest of the album management.
A: Lightbox is another popular one:
Lightbox Project Page
A: The thickbox plugin for jQuery will do what you want.
A: check out jQuery http://jquery.com
and then the LightBox plugin for jQuery: http://leandrovieira.com/projects/jquery/lightbox/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How can I find the location of a regex match in Perl? I need to write a function that receives a string and a regex. I need to check if there is a match and return the start and end location of a match. (The regex was already compiled by qr//.)
The function might also receive a "global" flag and then I need to return the (start,end) pairs of all the matches.
I cannot change the regex, not even add () around it as the user might use () and \1. Maybe I can use (?:).
Example: given "ababab" and the regex qr/ab/, in the global case I need to get back 3 pairs of (start, end).
A: The built-in variables @- and @+ hold the start and end positions, respectively, of the last successful match. $-[0] and $+[0] correspond to the entire pattern, while $-[N] and $+[N] correspond to the $N ($1, $2, etc.) submatches.
A: The pos function gives you the position just past the end of the match (note that pos is only set when you match with the /g modifier). If you put your regex in parentheses you can get the length (and thus the start) using length $1. Like this
sub match_positions {
    my ($regex, $string) = @_;
    return if not $string =~ /($regex)/g;  # /g so that pos() is defined
    return (pos($string) - length $1, pos($string));
}
sub all_match_positions {
my ($regex, $string) = @_;
my @ret;
while ($string =~ /($regex)/g) {
push @ret, [pos($string) - length $1, pos($string)];
}
return @ret
}
A: Forget my previous post, I've got a better idea.
sub match_positions {
my ($regex, $string) = @_;
return if not $string =~ /$regex/;
return ($-[0], $+[0]);
}
sub match_all_positions {
my ($regex, $string) = @_;
my @ret;
while ($string =~ /$regex/g) {
push @ret, [ $-[0], $+[0] ];
}
return @ret
}
This technique doesn't change the regex in any way.
Edited to add: to quote from perlvar on $1..$9. "These variables are all read-only and dynamically scoped to the current BLOCK." In other words, if you want to use $1..$9, you cannot use a subroutine to do the matching.
A: You can also use the deprecated $` variable, if you're willing to have all the REs in your program execute slower. From perlvar:
$`      The string preceding whatever was matched by the last successful pattern match (not
        counting any matches hidden within a BLOCK or eval enclosed by the current BLOCK).
        (Mnemonic: "`" often precedes a quoted string.) This variable is read-only.
The use of this variable anywhere in a program imposes a considerable performance penalty
on all regular expression matches. See "BUGS".
A: #!/usr/bin/perl
# search the postions for the CpGs in human genome
sub match_positions {
my ($regex, $string) = @_;
return if not $string =~ /($regex)/g;   # /g so that pos() is defined
return (pos($string) - length $1, pos($string));
}
sub all_match_positions {
my ($regex, $string) = @_;
my @ret;
while ($string =~ /($regex)/g) {
push @ret, [(pos($string)-length $1),pos($string)-1];
}
return @ret
}
my $regex='CG';
my $string="ACGACGCGCGCG";
my $cgap=3;
my @pos=all_match_positions($regex,$string);
my @hgcg;
foreach my $pos(@pos){
push @hgcg,@$pos[1];
}
foreach my $i(0..($#hgcg-$cgap+1)){
my $len=$hgcg[$i+$cgap-1]-$hgcg[$i]+2;
print "$len\n";
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35"
}
|
Q: Resharper and ViEmu Keybindings ( and Visual Assist ) With ViEmu you really need to unbind a lot of resharpers keybindings to make it work well.
Does anyone have what they think is a good set of keybindings that work well for resharper when using ViEmu?
What I'm doing at the moment is using the Visual Studio bindings from Resharper, toasting all the conflicting ones with ViEmu, and then just driving the rest through the menu modifiers (Alt-R keyboard shortcut for the menu item). I also do the same with Visual Assist shortcuts (for C++).
If anyone's got any tips and tricks for making ViEmu / Resharper or Visual Assist work together well, I'd most appreciate it!
A: I use both as well, but I'm using the IntelliJ keybindings instead, so I can't speak specifically to the Visual Studio bindings. J.P. Boodhoo has some changes that he has made via AutoHotKey to provide additional Vim-like functionality to Visual Studio + ReSharper + ViEmu.
I have removed a few of the scanned keys, though, because I want to keep some of the ReSharper functionality over the ViEmu functionality, though the way I use these tools change over time as I learn more shortcuts from either ViEmu or ReSharper.
A: I have noticed the following, which may be useful to know. Some of the ReSharper keyboard mappings that ViEmu hoses, will work once you have a different ReSharper dialog open. I use the IntelliJ IDEA-based shortcuts, but I assume this will work similarly for ReSharper's VS scheme.
Example: ViEmu binds to Ctrl+N which R# uses for Go To Type. However, ViEmu does not bind to Ctrl+Shift+N which R# uses for Go To File. Therefore, if you hit Ctrl+Shift+N the Go To dialog is launched. You can then take your finger off Shift and hit N again and the dialog will switch to Go To Type.
This is very useful, if like me you use Go To Type a lot and don't really want to mess with the keyboard mappings.
A: You can also create mappings in ViEmu that will call the VS and R# actions. For example, I have these lines in my _viemurc file for commenting and uncommenting a selection:
map <C-S-c> gS:vsc Edit.CommentSelection<CR>
map <C-A-c> gS:vsc Edit.UncommentSelection<CR>
The :vsc is for "visual studio command," and then you enter the exact text of the command, as it appears in the commands list when you go to Tools > Options > Keyboard
I don't use any of the R# ones in this way, but it does work, as with:
map <C-S-A-f> gS:vsc ReSharper.FindUsages<CR>
A: As @Jay noted the best way is to set up custom bindings.
Here is an example of such bindings at https://github.com/StanislawSwierc/Profile. I created my bindings based on the previous ones at https://github.com/w1ld/viemu_settings
A: I use both plugins, but I really prefer the power of the Vi input model that ViEmu gives. I don't really miss the Resharper keybindings that much...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: Headless HTML rendering, preferably open source I'm currently looking to perform some headless HTML rendering to essentially create resources off screen and persist the result as an image. The purpose is to take a subset of the HTML language and apply it to small screen devices (like PocketPCs) because our users know HTML and the transition from Photoshop to HTML markup would be acceptable.
I am also considering using WPF Imaging so if anyone can weigh in comments about its use (particularly tools you would point your users to for creating WPF layouts you can convert into images and how well it performs) it would be appreciated.
My order of preference is:
*
*open source
*high performance
*native C# or C# wrapper
*lowest complexity for implementation on Windows
I'm not very worried about how feature rich the headless rendering is since we won't make big use of JavaScript, Flash, nor other embedded objects aside from images. I'd be fine with anything that uses IE, Firefox, webkit, or even a custom rendering implementation so long as its implementation is close to standards compliant.
A: http://www.phantomjs.org/
Full web stack
PhantomJS is a headless WebKit with JavaScript API. It has fast and native support for various web standards: DOM handling, CSS selector, JSON, Canvas, and SVG.
A: You can use Gecko to do this.
A: I have found IECapt during my search which actually includes a C# implementation. Although it is by design a CLI application the source code is provided so I can likely modify it for my own needs.
A: The suitable tools are CutyCapt for WebKit (Safari, Google Chrome) and IECapt (MS IE).
A: Flying Saucer is a Java-based XHTML & CSS2.1 renderer that passes ACID2 with some error caveats. Its downside is that it has no error handling. It is not really designed to be a browser, but rather to be a component used to display HTML content (help files, etc.) within an application.
A: I'm enjoying using url2png for these types of jobs/screenshots.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
}
|
Q: Using both 1.1 and 2.0 frameworks on Windows 2003 x64 So, much to my annoyance, I discovered (after lots of research) that running the 1.1 and 2.0 .NET frameworks on a 64-bit 2003 install removes the ASP.NET tab from the IIS properties. I've tried the registry hacks, I've tried registering 32-bit versions of both frameworks, and no luck. My only workaround is running the excellent ASP.NET switcher from Dennis Bauer.
Does anyone else have any insight?
A: Also, you might try running the 32-bit version of MMC. IIRC, MMC can only load extensions that are the same bit-ness as itself, and the .Net 2.0 extension is 32-bit only.
That said, the tool you linked in your question is very useful for working around this issue as well.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Different versions of C++ libraries After compiling a simple C++ project using Visual Studio 2008 on Vista, everything runs fine on the original Vista machine and other Vista computers. However, moving it over to an XP box results in an error message: "The application failed to start because the application configuration is incorrect".
What do I have to do so my compiled EXE works on XP and Vista? I had this same problem a few months ago, and just fiddling with some settings on the project fixed it, but I don't remember which ones I changed.
A: You need to install the Visual Studios 2008 runtime on the target computer:
http://www.microsoft.com/downloads/details.aspx?FamilyID=9b2da534-3e03-4391-8a4d-074b9f2bc1bf&displaylang=en
Alternatively, you could also link the runtime statically. In the project properties window go to:
C/C++ -> Code Generation -> Runtime Library and select "Multi-threaded (/MT)"
A: You need to install the runtime redistributable files onto the machine you are trying to run the app on.
The redistributable for 2008 is here.
The redistributable for 2005 is here.
They can be installed side-by-side, in case you need both.
A: You probably need to distribute the VC runtime with your application. There are a variety of ways to do this. This article from the Microsoft Visual C++ Team best explains the different ways to distribute these dependencies if you are using Visual Studio 2005 or 2008.
As stated in the article, though you can download the Redistributable installer package and simply launch that on the client machine, that is almost always not the optimal option. There are usually better ways to include the required DLLs such as including the merge module if you are distributing via Windows Setup or App-Local copy if you just want to distribute a zipped folder.
Another option is to statically link against the runtime libraries, instead of distributing them with your application. This option is only suitable for standalone EXEs that do not load other DLLs. You also cannot do this with DLLs that are loaded by other applications.
A: It is by far the simplest to link to the runtime statically.
c++ -> Code Generation -> Runtime Library and select "multi-threaded /MT"
However, this does make your executable a couple hundred KByte larger. This might be a problem if you are installing a large number of small programs, since each will be burdened by its very own copy of the runtime. The answer is to create an installer.
New project -> "setup and deployment" -> "setup project"
Load the output from your application projects ( defined using the DLL version of the runtime ) into the installer project and build it. The dependency on the runtime DLL will be noticed, included in the installer package, and neatly and unobtrusively installed in the correct place on the target machine.
A: Visual Studio 2005 actually has two redistributables: one for the original release and one for SP1.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: get contacts from email account A lot of websites like Twitter, Facebook and others let users enter their email id and password and 'extract' their contacts based on that.
Anyone know how this is done?
A: They login to your account and scrape the contents, or use a public API. Either way, it's not a method that I would implement or use myself because I wouldn't trust anyone else with my credentials. And I think it teaches users to be careless with the secrecy of their credentials.
A: Leaving aside the ethical questions, there's a commercial library available that can do this for you: http://www.octazen.com/product_abimporter.php
The library is available for PHP, .NET, Java, Ruby & ColdFusion. It supports importing contacts from dozens of different services (including all the main ones).
It only costs about $100 for a licence, works perfectly and (using the Java version) only requires this single line of code to import contacts from any of the supported services:
List<Contact> contacts = SimpleAddressBookImporter.fetchContacts(emailAddress, password)
They have another library that can import friend lists from social networks, though I haven't tried that one.
A: This is the sort of thing OAuth was designed for. Google have started to adopt it. It doesn't have the same trust issues as the more typical scraping.
Unfortunately, for the time being, people tend to just ask for your password, log in as you, and scrape the information, which is far less secure, as it gives the website total access to your account. This isn't something you should copy, use OAuth or an equivalent wherever possible.
A: There are APis available:
Yahoo --> http://developer.yahoo.com/addressbook/
Google --> http://code.google.com/apis/contacts/
None for AOL (yet).
A: I assume they log into your email account, either by POP3, a public API, or by knowing the HTML formatting of webmail systems and reading the DOM. Then they find whoever you received from and sent emails to, and look through their own user database to find matches.
A: Yeah, I agree. Trusting a site with your email credentials isn't safe. Especially after what was found by Gmail Archiver (http://it.slashdot.org/article.pl?sid=08/03/11/1723206&from=rss)
But just from a programmatic POV I was wondering how they did this. Maybe Gmail, Hotmail and all the others do have APIs which users can use... need to look into it more, I guess.
A: The contact list Java library is easy to use and works well with Gmail, Yahoo!, yeah, Hotmail and MSN.
A: For gmail:
http://sourceforge.net/projects/gmail-api
http://johnvey.com/features/gmailapi/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Specifying Source for Debugging using Netbeans Using the debugger in Netbeans 6.1, I'd like to step into a method of the JSF library (specifically method saveSerializedView of class StateManager), but I cannot figure out how to specify through the IDE the location of the source code for the JSF library. I'm even having trouble determining which jar file or files Netbeans is using for JSF.
A: This answer applies to Netbeans 6.1 circa September 2008:
It sounds as if you need to explore the "Libraries" dialog. Select "Tools" from the menu bar and "Libraries" from the menu. If you select "JSF" on the left, you'll see an association of jar files, sources (currently none in my Netbeans 6.1) and javadoc (which shows javaee5-doc-api.zip in my Netbeans).
What you need to do is add a new zip or jar of source files under the "Sources" tag.
An example of a fully populated library is the "Swing Layout Extensions" which has a jar file, sources and javadoc.
A: I take it your jars don't include the source, since you can't "step into."
In cases like these what I do is find the appropriate source (just StateManager.java in this case, if it's available, or jars that include source), taking care that it is the version I'm using. Inside my project tree, I create the package hierarchy to that specific class, and put that source in there. Even if the class exists in a jar, I can use this source to set breakpoints, etc.
A: Example for Netbeans7 and Mojarra 2.0.3
Create a new library(Tools->Libraries), call it for example Mojarra-2.0.3.
In the classpath tab add the 2 mojarra jars:
*
*jsf-api.jar
*jsf-impl.jar
In the sources tab, add two paths:
*
*..\mojarra-2.0.3-FCS-source\jsf-api\src\main\java\
*..\mojarra-2.0.3-FCS-source\jsf-ri\src\main\java\
Add the new created library to the project
A: Download the JSF source and point to the working folder for source lookup.
The debugger will then dig into the code.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Disabling Back button on the browser I am writing an application in which, if the user hits back, the same information may be resent, messing up the flow and integrity of data. How do I disable it for users both with and without JavaScript on?
A: I strongly urge you to go to heroic lengths to avoid breaking the back button; breaking it is a surefire way to alienate your users, and it even made it to No. 1 on Jakob Nielsen's Top 10 Web Design Mistakes in 1999.
Perhaps you could consider rather asking the question: "How to avoid breaking the back button for <insert your scenario here>?"
If Scott's answer hits close to the mark, consider changing your flow to the PRG model. If it's something else, then give a bit more detail and see how we can help.
A: It's not possible, sadly. However, consider your applications navigation model. Are you using Post/Redirect/Get PRG Model? http://en.wikipedia.org/wiki/Post/Redirect/Get?
This model is more back button friendly than the Postback model.
A: I came up with a little hack that disables the back button using JavaScript. I checked it on Chrome 10, Firefox 3.6 and IE 9:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head>
<title>Untitled Page</title>
<script type = "text/javascript" >
function changeHashOnLoad() {
window.location.href += "#";
setTimeout("changeHashAgain()", "50");
}
function changeHashAgain() {
window.location.href += "1";
}
var storedHash = window.location.hash;
window.setInterval(function () {
if (window.location.hash != storedHash) {
window.location.hash = storedHash;
}
}, 50);
</script>
</head>
<body onload="changeHashOnLoad(); ">
Try to hit back!
</body>
</html>
A: The best option is not to depend on postbacks to control flow; however, if you are stuck with it (for now)
you may use something like this:
Response.Cache.SetCacheability(HttpCacheability.NoCache);
Response.Cache.SetExpires(DateTime.Now.AddSeconds(-1));
Response.Cache.SetNoStore();
Response.AppendHeader("Pragma", "no-cache");
Soon you will find that it will not work on all browsers, but then you may introduce a check in your code like:
if (Page.IsPostBack)
{
    if (pageIsExpired())
    {
        Response.Redirect("/Some_error_page.htm");
    }
    else
    {
        var now = DateTime.Now;
        Session["TimeStamp"] = now.ToString();
        ViewState["TimeStamp"] = now.ToString();
    }
}
private bool pageIsExpired()
{
    if (Session["TimeStamp"] == null || ViewState["TimeStamp"] == null)
        return false;
    if ((string)Session["TimeStamp"] == (string)ViewState["TimeStamp"])
        return true;
    return false;
}
That will solve the problem to some extent. Code not checked -- only for example purposes.
A: It is possible to disable the back button in all major browsers. This trick just uses hash values to disable the back button completely.
Just put these 5 lines of code in your page:
<script>
window.location.hash="no-back-button";
window.location.hash="Again-no-back-button";//for google chrome
window.onhashchange=function(){window.location.hash="no-back-button";}
</script>
Detailed description
A: Here's a previous post on it:
Prevent Use of the Back Button (in IE)
A: Whatever you come up with to disable the back button might not stop the back button in future browsers.
If it's late in the development cycle, I suggest you try some of the suggestions above, but when you get time you should structure your flow so that the back button does not interfere with the logic of your site; it should simply take the user back to the previous page, as they expect it to.
A: You shouldn't.
You could attach some script to the onbeforeunload event of a page and confirm with the user that's what they want to do; and you can go a bit further and try to disable it but of course that will only work for users who have javascript turned on. Instead look at rewriting the app so you don't commit transactions on each page submit, but only at the end of the process.
A: It is true that proper validation should be added to make sure duplicate data doesn't mess things up. However, in my case, I don't have full control of the data since I'm using some third-party API after my form. So I used this
history.go(+1);
This will send the user forward to the "receipt" page, which is supposed to come after the "payment" page, if they try to go back to the "payment" page (just using a payment flow as an example). Use it sparingly, though.
A: You could post the data on each form to a _NEW window. This will disable the back button on each window, but without javascript it might be difficult to force the old one closed.
A: I was able to accomplish this by using:
Response.Cache.SetExpires(DateTime.MinValue);
Response.Cache.SetNoStore();
When I used Response.Cache.SetCacheability(HttpCacheability.NoCache); it prevented me from downloading office files.
A: See the link below:
Disable browser back button functionality using JavaScript in asp.net | ASP.Net disable browser back button (using javascript)
http://www.aspdotnet-suresh.com/2011/11/disable-browser-back-button.html
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31"
}
|
Q: Is it safe to generally assume that toString() has a low cost? Do you generally assume that toString() on any given object has a low cost (i.e. for logging)? I do. Is that assumption valid? If it has a high cost should that normally be changed? What are valid reasons to make a toString() method with a high cost? The only time that I get concerned about toString costs is when I know that it is on some sort of collection with many members.
From: http://jamesjava.blogspot.com/2007/08/tostring-cost.html
Update: Another way to put it is: Do you usually look into the cost of calling toString on any given class before calling it?
A: The best way to find out is to profile your code. However, rather than worry that a particular function has a high overhead, it's (usually) better to worry about the correctness of your application and then do performance profiling on it (but be wary that real-world use and your test setup may differ radically). As it turns out, programmers generally guess wrong about what's really slow in their application and they often spend a lot of time optimizing things that don't need optimizing (eliminating that triple nested loop which only consumes .01% of your application's time is probably a waste).
Fortunately, there are plenty of open source profilers for Java.
A:
Do you generally assume that toString() on any given object has a low cost? I do.
Why would you do that? Profile your code if you're running into performance issues; it'll save you a lot of time working past incorrect assumptions.
A: No it's not. Because ToString() can be overridden by anyone, they can do whatever they like. It's a reasonable assumption that ToString() SHOULD have a low cost, but if ToString() accesses properties that do "lazy loading" of data, you might even hit a database inside your ToString().
A: Your question's title uses the contradictory words "safe" and "generally." So even though in comments you seem to be emphasizing the general case, to which the answer is probably "yes, it's generally not a problem," a lot of people are seeing "safe" and therefore are answering either "No, because there's a risk of arbitrarily poor performance," or "No, because if you want to be 'safe' with a performance question, you must profile."
A: The Java standard library seems to have been written with the intent of keeping the cost of toString calls very low. For example, Java arrays have a default toString which does not iterate over their contents; to get a good string representation of an array you must use Arrays.toString from the java.util package (the collection classes do iterate in their toString, but the cost is still just linear in their size).
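A quick demo of the difference (plain Java, nothing assumed beyond java.util):
import java.util.Arrays;

public class ToStringDemo {
    public static void main(String[] args) {
        int[] xs = {1, 2, 3};
        // The array's inherited toString is constant-time but unhelpful, e.g. "[I@1b6d3586"
        System.out.println(xs.toString());
        // Arrays.toString walks the contents: "[1, 2, 3]"
        System.out.println(Arrays.toString(xs));
    }
}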
Similarly, even objects with expensive equals methods have inexpensive toString calls. For example, the java.net.URL class has an equals method which makes use of an internet connection to determine whether two URLs are truly equal, but it still has a simple and constant-time toString method.
So yes, inexpensive toString calls are the norm, and unless you use some weird third-party package which breaks with the convention, you shouldn't worry about these taking a long time.
Of course, you shouldn't really worry about performance until you find yourself in a situation where your program is taking too long, and even then you should use a profiler to figure out what's taking so long rather than worrying about this sort of thing ahead of time.
A: Since I generally only call toString() on classes that I have written myself, where I have overridden the base method, I generally know what the cost is ahead of time. The only time I use toString() otherwise is in error handling and/or debugging, when speed is not of the same importance.
A: My pragmatic answer would be: yes, you always assume a toString() call is cheap, unless you make an enormous amount of them. On the one hand, it is extremely unlikely that a toString() method would be expensive and on the other hand, it is extremely unlikely that you run into trouble if it isn't. I generally don't worry about issues like these, because there are too many of them and you won't get any code written if you do ;).
If you do run into performance issues, everything is open, including the performance of toString() and you should, as Shog9 suggest, simply profile the code. The Java Puzzlers show that even Sun wrote some pretty nasty constructors and toString() methods in their JDK's.
A: I think the question has a flaw. I wouldn't even assume toString() will print a useful piece of data. So, if you begin with that assumption, you know you have to check it prior to calling it and can assess its 'cost' on a case-by-case basis.
A: Possibly the largest cost with naive toString() chaining is appending all those strings. If you want to generate large strings, you should use an underlying representation that supports an efficient append. If you know the append is efficient, then toString()s probably have a relatively low cost.
For example, in Java, StringBuilder will preallocate some space so that a certain amount of string appending takes linear time. It will reallocate when you run out of space.
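A small illustration; the initial capacity is just a hint and the builder grows if it is exceeded:
// Appends into a preallocated buffer are amortized linear time overall,
// unlike repeated String concatenation, which copies the whole prefix each time.
StringBuilder sb = new StringBuilder(1024);
for (int i = 0; i < 100; i++) {
    sb.append(i).append(", ");
}
String result = sb.toString();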
In general, if you want to append sequences of things and you for whatever reason don't want to do something similar, you can use difference lists. These support linear time append by turning sequence appending into function composition.
A: toString() is used to represent an object as a String. So if you need slow running code to create a representation of an object, you need to be very careful and have a very good reason to do so. Debugging would be the only one I can think of where a slow running toString is acceptable.
A: My thought is:
Yes on standard-library objects
No on non-standard objects unless you have the source code in front of you and can check it.
A: I will always override toString to put in whatever I think I will need to debug problems. It is usually up to the developer to use it by either calling the toString method itself or having another class call it for you (println, logging, etc.).
A: There's an easy answer to this one, which I first heard in a discussion about reflection: "if you have to ask, you can't afford it."
Basically, if you need ToString() of large objects in the day-to-day operation of your program, then your program is crazy. Even if you need to ToString() an integer for anything time critical, your program is crazy, because it's obviously using a string where an integer would do.
ToString() for log messages is automatically okay, because logging is already expensive. If your program is too slow, turn down the log level! It doesn't really matter how slow it is to actually generate the debug messages, as long as you can choose to not generate them. (Note: your logging infrastructure should call ToString() itself, and only when the log message is supposed to be printed. Don't ToString() it by hand on the way into the log infrastructure, or you'll pay the price even if the log level is low and you won't be printing it after all! See http://www.colijn.ca/~caffeine/?m=200708#16 for more explanation of this.)
A: Since you put "generally" in your question, I would say yes. For -most- objects, there isn't going to be a costly ToString override. There definitely can be, but generally there won't be.
A: In general I consider toString() low cost when I use it on simple objects, such as an integer or a very simple struct. When applied to complex objects, however, toString() is a bit of a crap shoot. There are two reasons for this. First, complex objects tend to contain other objects, so a single call to toString() can cascade into many calls to toString() on other objects, plus the overhead of concatenating all those results. Second, there is no "standard" for converting complex objects to strings. One toString() call may yield a single line of comma-separated values; another a much more verbose form. Only by checking it yourself can you know.
So my rule is toString() on simple objects is generally safe but on complex objects is suspect until checked.
A: I'd avoid using toString() on objects other than the basic types. toString() may not display anything useful. It may iterate over all the member variables and print them out. It may load something not yet loaded. Depending on what you plan to do with that string, you should consider not building it.
There are typically a few reasons why you use toString(): logging/debugging is probably the most common for random objects; display is common for certain objects (such as numbers). For logging I'd do something like
if(logger.isDebugEnabled()) {
logger.debug("The zig didn't take off. Response: {0}", response.getAsXML().toString());
}
This does two things: 1. Prevents constructing the string and 2. Prevents unnecessary string addition if the message won't be logged.
A: In general, I don't check every implementation. However, if I see a dependency on Apache commons, alarm bells go off and I look at the implementation more closely to make sure that they aren't using ToStringBuilder or other atrocities.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87425",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: What are the major vulnerabilities of Redhat + Apache? I am searching for a host for a new commercial website. Among other things, I'd like to know what the various OS - Webserver combinations have in terms of vulnerabilities. What are the vulnerabilities of Redhat + Apache?
A: See: http://httpd.apache.org/security/vulnerabilities_20.html
A: Poor system administration is the biggest vulnerability, in my experience.
A: The biggest risk to any web application server is vulnerabilities in the web application itself. Linux, Apache, MySQL and PHP (LAMP) is a very secure platform. RedHat's Fedora Core is very secure because it uses SELinux, which is something that does not exist for Windows. However, vulnerabilities such as SQL injection and XSS can still result in your server getting hacked.
A: It's kind of a difficult question to answer; the development life cycles are so active that you're asking for something that's likely to have been solved already (and if it's been reported so that we know of it, the likelihood it's fixed is really high).
What you need is a 0-day hack for them, and asking this list really won't get you those.
A: Any system is only as strong as its weakest link. Invariably that will not be the OS or the server software, it will be the end application you develop or install.
A: As suggested here, I could check out the CERT Database.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do you extract macros programmatically from OpenOffice.org Writer document using .NET? How do you extract the Macro code from an OpenOffice.org Writer document using the .NET API?
I got an answer to the "Office 2007" version of this question, but we are evaluating OpenOffice as an alternative -- if anyone has any experience with this, any tips or resources would be appreciated.
A: It's quite an old question, but nowadays there are several answers that could be useful:
*
*How to create an OpenOffice document in .NET
*How do I programmatically save a document in OpenOffice.org?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87437",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Virtual network interface in Mac OS X I know that you can make a virtual network interface in Windows (see here), and in Linux it is also pretty easy with ip-aliases, but does something similar exist for Mac OS X? I've been looking for loopback adapters, virtual interfaces and couldn't find a good solution.
You can create a new interface in the networking panel, based on an existing interface, but it will not act as a real fully functional interface (if the original interface is inactive, then the derived one is also inactive).
This scenario is needed when working in a completely disconnected situation. Even then, it makes sense to have networking capabilities when running servers in a VMWare installation. Those virtual machines can be reached by their IP address, but not by their DNS name, even if I run a DNS server in one of those virtual machines. By configuring an interface to use the virtual DNS server, I thought I could test some DNS scenarios. Unfortunately, no interface resolves DNS names if none of them are active...
A: The loopback adapter is always up.
ifconfig lo0 alias 172.16.123.1 will add an alias IP 172.16.123.1 to the loopback adapter
ifconfig lo0 -alias 172.16.123.1 will remove it
A: It's possible to use TUN/TAP device.
http://tuntaposx.sourceforge.net/
A: Replying in particular to:
You can create a new interface in the networking panel, based on an existing interface, but it will not act as a real fully functional interface (if the original interface is inactive, then the derived one is also inactive).
This can be achieved using a Tun/Tap device as suggested by psv141, and manipulating the /Library/Preferences/SystemConfiguration/preferences.plist file to add a NetworkService based on either a tun or tap interface. Mac OS X will not allow the creation of a NetworkService based on a virtual network interface, but one can directly manipulate the preferences.plist file to add the NetworkService by hand. Basically you would open the preferences.plist file in Xcode (or edit the XML directly, but Xcode is likely to be more fool-proof), and copy the configuration from an existing Ethernet interface. The place to create the new NetworkService is under "NetworkServices", and if your Mac has an Ethernet device the NetworkService profile will also be under this property entry. The Ethernet entry can be copied pretty much verbatim, the only fields you would actually be changing are:
*
*UUID
*UserDefinedName
*IPv4 configuration and set the interface to your tun or tap device (i.e. tun0 or tap0).
*DNS server if needed.
Then you would also manipulate the particular Location you want this NetworkService for (remember Mac OS X can configure all network interfaces dependent on your "Location"). The default location UUID can be obtained in the root of the PropertyList as the key "CurrentSet". After figuring out which location (or set) you want, expand the Set property, and add entries under Global/IPv4/ServiceOrder with the UUID of the new NetworkService. Also under the Set property you need to expand the Service property and add the UUID here as a dictionary with one String entry with key __LINK__ and value as the UUID (use the other interfaces as an example).
After you have modified your preferences.plist file, just reboot, and the NetworkService will be available under SystemPreferences->Network. Note that we have mimicked an Ethernet device so Mac OS X layer of networking will note that "a cable is unplugged" and will not let you activate the interface through the GUI. However, since the underlying device is a tun/tap device and it has an IP address, the interface will become active and the proper routing will be added at the BSD level.
As a reference this is used to do special routing magic.
In case you got this far and are having trouble, you have to create the tun/tap device by opening one of the devices under /dev/. You can use any program to do this, but I'm a fan of good-old-fashioned C myself:
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
int main()
{
    /* Opening the device node is what activates the tun0 interface;
       it only stays up while this file descriptor is held open. */
    int fd = open("/dev/tun0", O_RDONLY);
    if (fd < 0)
    {
        printf("Failed to open tun/tap device. Are you root? Are the drivers installed?\n");
        return -1;
    }
    /* Sleep forever so the descriptor (and therefore the interface) stays open. */
    while (1)
    {
        sleep(100000);
    }
    return 0;
}
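To try it (assuming the tuntaposx driver mentioned earlier is installed so /dev/tun0 exists; the file name is arbitrary):
cc -o opentun opentun.c
sudo ./opentun &
ifconfig tun0   # the interface should now show up while opentun keeps running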
A: If you are in a dev environment and want to access some service already running on localhost on the host machine, Docker for Mac gives you another option: use docker.for.mac.localhost instead of localhost in the docker container.
docker.for.mac.host.internal should be used instead of docker.for.mac.localhost from Docker Community Edition 17.12.0-ce-mac46 2018-01-09.
This allows you to connect to a service running on your Mac from within a docker container. Please refer to the links below:
understanding the docker.for.mac.localhost behavior
release notes
A: In regards to @bmasterswizzle's BRILLIANT answer - more specifically - to @DanRamos' question about how to force the new interface's link-state to "up".. I use this script, of whose origin I cannot recall, but which works fabulously (in coordination with @bmasterswizzles "Mona Lisa" of answers)...
#!/bin/zsh
[[ "$UID" -ne "0" ]] && echo "You must be root. Goodbye..." && exit 1
echo "starting"
exec 4<>/dev/tap0
ifconfig tap0 10.10.10.1 10.10.10.255
ifconfig tap0 up
ping -c1 10.10.10.1
echo "ending"
export PS1="tap interface>"
dd of=/dev/null <&4 & # continuously reads from buffer and dumps to null
I am NOT quite sure I understand the alteration to the prompt at the end, or...
dd of=/dev/null <&4 & # continuously reads from buffer and dumps to null
but WHATEVER. it works. link light: green✅. loves it.
A: A few others seemed to hint at this, but the following demonstrates using ifconfig to create a vlan and test DNS on the virtual interface (using minidns) on OS X 10.9.5:
$ sw_vers -productVersion
10.9.5
$ sudo ifconfig vlan169 create && echo vlan169 created
vlan169 created
$ sudo ifconfig vlan169 inet 169.254.169.254 netmask 255.255.255.255 && echo vlan169 configured
vlan169 configured
$ sudo ./minidns.py 169.254.169.254 &
[1] 35125
$ miniDNS :: * 60 IN A 169.254.169.254
$ dig @169.254.169.254 +short test.host
Request: test.host. -> 169.254.169.254
Request: test.host. -> 169.254.169.254
169.254.169.254
$ sudo kill 35125
$
[1]+ Exit 143 sudo ./minidns.py 169.254.169.254
$ sudo ifconfig vlan169 destroy && echo vlan169 destroyed
vlan169 destroyed
A: What do you mean by
"but it will not act as a real fully functional interface (if the original interface is inactive, then the derived one is also inactive"
?
I can make a new interface, base it on an already existing one, then disable the existing one and the new one still works. Making a second interface does however not create a real interface (when you check with ifconfig), it will just assign a second IP to the already existing one (however, this one can be DHCP while the first one is hard coded for example).
So did I understand you right, that you want to create an interface, not bound to any real interface? How would this interface then be used? E.g. if you disconnect all WLAN and pull all network cables, where would this interface send traffic to, if you send traffic to it? Maybe your question is a bit unclear, it might help a lot if rephrase it, so it's clear what you are actually trying to do with this "virtual interface" once you have it.
As you mentioned "alias IP" in your question, this would mean an alias interface. But an alias interface is always bound to a real interface. The difference is in Linux such an interface really IS an interface (e.g. an alias interface for eth0 could be eth1), while on Mac, no real interface is created, instead a virtual interface is created, that can configured and used independently, but it is still the same interface physically and thus no new named interface is generated (you just have two interfaces, that are both in fact en0, but both can be enabled/disabled and configured independently).
A: Take a look at this tutorial, it's for FreeBSD but also applies to OS X. http://people.freebsd.org/~arved/vlan/vlan_en.html
A: Go to Network Preferences.
At the bottom of the list of network adapters, click the + icon
Select the existing interface that you want to alias (say Ethernet 1), and give the Service Name that you want for the new port (say Ethernet 1.1), then press create.
Now you have the new virtual interface in the GUI and can manage IP addresses etc. for it in the normal way.
ifconfig -a will confirm that you have multiple IPs on the interface, and these will still be there when you reboot.
It's a Mac. Don't fight it, do it the easy way.
A: i have resorted to running PFSense, a BSD based router/firewall to achieve this goal….
why? because OS X Server gets so FREAKY without a Static IP…
so after wrestling with it for DAYS to make NAT and DHCP and firewall and …
I'm trying this in Parallels…
will let ya know how it goes...
A: ifconfig interfacename create will create a virtual interface.
A: Here's a good guide: https://web.archive.org/web/20160301104014/http://gerrydevstory.com/2012/08/20/how-to-create-virtual-network-interface-on-mac-os-x/
Basically you select a network adapter in the Networks pane of system preferences, then click the gear to "Duplicate Service". After the service is duplicated, you manually assign an IP in one of the private address ranges. Then ping it to make sure ;)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87442",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "56"
}
|
Q: Can you specify filenames using wildcards or regexes in the subversion mv command? I want to do this so that I can say something like, svn mv *.php php-folder/, but it does not seem to be working. Is it even possible? No mention of it is made on the relevant page in the svn book.
Example output of svn mv *.php php-folder/ :
svn: Client error in parsing arguments
Being able to move a whole file system would be a plus, so if any answers given could try to include that ability, that'd be cool.
Thanks in advance!
A: svn move only moves one file at a time. Your best bet is a shell loop. In Bash, try
for f in *.php ; do svn mv "$f" php-folder/; done
On Windows, that's
for %f in (*.php) do svn mv %f php-folder/
Edit: Starting with Subversion 1.5, svn mv accepts multiple source files, so your original command would work. The shell loop is only needed for svn 1.4.x and earlier. (Of course, the shell loop will still work with 1.5; it just isn't necessary.)
A: Not sure about svn itself, but either your shell should be able to expand that wildcard and svn can take multiple source arguments, or you can use something like
for file in *.php; do svn mv "$file" php-folder/; done
in a bash shell, for example.
A: find . -name "*.php" -exec svn mv {} php-folder \;
A: If you're in the correct checked out directory, I don't see why it wouldn't work? Your shell should expand the *.php to a list of php files, and svn move accepts multiple sources as arguments.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: Are "proxy properties" good style? I have a class with a string property that's actually several strings joined with a separator.
I'm wondering if it is good form to have a proxy property like this:
public string ActualProperty
{
get { return actualProperty; }
set { actualProperty = value; }
}
public string[] IndividualStrings
{
get { return ActualProperty.Split(.....); }
set
{
// join strings from array in propval .... ;
ActualProperty = propval;
}
}
Are there any risks I have overlooked?
A: Linking two settable properties together is bad juju in my opinion. Switch to using explicit get / set methods instead of a property if this is really what you want. Code which has non-obvious side-effects will almost always bite you later on. Keep things simple and straightforward as much as possible.
Also, if you have a property which is a formatted string containing sub-strings, it looks like what you really want is a separate struct / class for that property rather than misusing a primitive type.
A: Seems that the array is the real data, and the single-string stuff is a convenience. That's fine, but I'd say look out for things like serialization and memberwise cloning, which will get and set both writeable properties.
I think I would:
*
*keep the array as a property
*provide a GetJoinedString(string separator) method.
*provide a SetStrings(string joined, string separator) or Parse(string joined, string separator) method.
Realistically, the separator in the strings isn't really part of the class, but an ephemeral detail. Make references to it explicit, so that, say, a CSV application can pass a comma, where a tab-delimited app could pass a tab. It'll make your app easier to maintain. Also, it removes that nasty problem of having two getters and setters for the same actual data.
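A minimal sketch of that shape (all names here are illustrative, not from the question's code):
using System;
using System.Collections.Generic;

public class DelimitedValues
{
    private readonly List<string> parts = new List<string>();

    public IList<string> Parts
    {
        get { return parts; }
    }

    public string GetJoinedString(string separator)
    {
        return string.Join(separator, parts.ToArray());
    }

    public void SetStrings(string joined, string separator)
    {
        // Note: values that themselves contain the separator will not
        // round-trip cleanly -- the same risk the answers below point out.
        parts.Clear();
        parts.AddRange(joined.Split(new[] { separator }, StringSplitOptions.None));
    }
}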
A: Define "good". It shouldn't break (unless you failed to properly guarantee that the delimiter(s) passed to Split() are never allowed in the individual strings themselves), but if IndividualStrings is accessed more frequently than ActualProperty you'll end up parsing actualProperty far more often than you should. Of course, if the reverse is true, then you're doing well... and if both are called so often that any unnecessary parsing or concatenation is unacceptable, then just store both and re-parse when the value changes.
A: Properties are intended to be very simple members of a class; getting or setting the value of a property should be considered a trivial operation without significant side-effects.
If setting a property causes public values of the class other than the assigned property to change, this is more significant than a basic assignment and is probably no longer a good fit for the property.
A "complex" property is dangerous, because it breaks from the expectations of callers. Properties are interpreted as fields (with side-effects), but as fields you expect to be able to assign a value and then later retrieve that value. In this way, a caller should expect to be able to assign to multiple properties and retrieve their values again later.
In your example, I can't assign a value to both properties and retrieve them; one value will affect the other. This breaks a fundamental expectation of the property. If you create a method to assign values to both properties at the same time and make both properties read-only, it becomes much easier to understand where the values are set.
Additionally, as an aside:
It's generally considered bad practice to return a temporary array from a property. Arrays may be immutable, but their contents are not. This implies you can change a value within the array which will persist with the object.
For example:
YourClass i = new YourClass();
i.IndividualStrings[0] = "Hello temporary array!";
This code looks like it's changing a value in the IndividualStrings property, but actually the array is created by the property and is not assigned anywhere, so the array and the change will fall out of scope immediately.
public string ActualProperty { get; set; }
public string[] GetIndividualStrings()
{
return ActualProperty.Split(.....);
}
public void SetFromIndividualStrings(string[] values)
{
// join strings from array .... ;
}
A: Well, I'd say your "set" is high risk: what if somebody didn't know they had to pass an already-joined sequence of values? (Or maybe your example above was just missing that.) What if the string already contained the separator? You'd break.
I'm sure performance isn't great depending on how often this property is used.
A: I'm not sure what the benefit of this design would be. I think the split would be better served in an extension method.
At a minimum, I'd remove the setter on the IndividualStrings property, or move it into two methods: string[] SplitActualProperty() and void MergeToActualProperty(string[] parts).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87459",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How might I pass variables through to cached content in PHP? Essentially I have a PHP page that calls out some other HTML to be rendered through an object's method. It looks like this:
MY PHP PAGE:
// some content...
<?php
$GLOBALS["topOfThePage"] = true;
$this->renderSomeHTML();
?>
// some content...
<?php
$GLOBALS["topOfThePage"] = false;
$this->renderSomeHTML();
?>
The first method call is cached, but I need renderSomeHTML() to display slightly different based upon its location in the page. I tried passing through to $GLOBALS, but the value doesn't change, so I'm assuming it is getting cached.
Is this not possible without passing an argument through the method or by not caching it? Any help is appreciated. This is not my application -- it is Magento.
Edit:
This is Magento, and it looks to be using memcached. I tried to pass an argument through renderSomeHTML(), but when I use func_get_args() on the PHP include to be rendered, what comes out is not what I put into it.
Edit:
Further down the line I was able to "invalidate" the cache by calling a different method that pulled the same content and passing in an argument that turned off caching. Thanks everyone for your help.
A: Obviously, you cannot. The whole point of caching is that the 'thing' you cache is not going to change. So you either:
*
*provide a parameter
*avoid caching
*invalidate the cache when you set a different parameter
Or, you rewrite the cache mechanism yourself - to support some dynamic binding.
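In Magento 1 specifically, the "provide a parameter" route can mean folding the placement into the block's cache key via getCacheKeyInfo(), so each position gets its own cache entry. A rough sketch (the block class and data key are illustrative):
class My_Module_Block_Thing extends Mage_Core_Block_Template
{
    public function getCacheKeyInfo()
    {
        $info = parent::getCacheKeyInfo();
        // Distinguish the two placements so the cached top-of-page render
        // is never reused for the bottom-of-page call (and vice versa).
        $info[] = $this->getData('top_of_the_page') ? 'top' : 'bottom';
        return $info;
    }
}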
A: Caching is handled differently by different frameworks, so you'd have to help us out with some more information. But I also wonder if you could pass that as a parameter instead of using $GLOBALS.
$this->renderSomeHTML(true);
A: Your question seems unclear, but caching pretty much means 'stored so we don't have to calculate it again'. If you want the content to differ, you need to cache more results and pick the correct cached object to send back.
Need more info to give a better answer. What is caching the document, Smarty? And what do you mean by "its location in the page"? What is 'it'?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How many DataTable objects should I use in my C# app? I'm an experienced programmer in a legacy (yet object oriented) development tool and making the switch to C#/.Net. I'm writing a small single user app using SQL server CE 3.5. I've read the conceptual DataSet and related doc and my code works.
Now I want to make sure that I'm doing it "right", get some feedback from experienced .Net/SQL Server coders, the kind you don't get from reading the doc.
I've noticed that I have code like this in a few places:
var myTableDataTable = new MyDataSet.MyTableDataTable();
myTableTableAdapter.Fill(myTableDataTable);
... // other code
In a single user app, would you typically just do this once when the app starts, instantiate a DataTable object for each table and then store a ref to it so you ever just use that single object which is already filled with data? This way you would ever only read the data from the db once instead of potentially multiple times. Or is the overhead of this so small that it just doesn't matter (plus could be counterproductive with large tables)?
A: For CE, it's probably a non issue. If you were pushing this app to thousands of users and they were all hitting a centralized DB, you might want to spend some time on optimization. In a single-user instance DB like CE, unless you've got data that says you need to optimize, I wouldn't spend any time worrying about it. Premature optimization, etc.
A: The way to decide varies between two main things:
1. Is the data going to be accessed constantly?
2. Is there a lot of data?
If you are constantly using the data in the tables, then load them on first use.
If you only occasionally use the data, fill the table when you need it and then discard it.
For example, if you have 10 gui screens and only use myTableDataTable on 1 of them, read it in only on that screen.
A: The choice really doesn't depend on C# itself. It comes down to a balance between:
*
*How often do you use the data in your code?
*Does the data ever change (and do you care if it does)?
*What's the relative (time) cost of getting the data again, compared to everything else your code does?
*How much value do you put on performance, versus developer effort/time (for this particular application)?
As a general rule: for production applications, where the data doesn't change often, I would probably create the DataTable once and then hold onto the reference as you mention. I would also consider putting the data in a typed collection/list/dictionary, instead of the generic DataTable class, if nothing else because it's easier to let the compiler catch my typing mistakes.
For a simple utility you run for yourself that "starts, does its thing and ends", it's probably not worth the effort.
You are asking about Windows CE. In that particular case, I would most likely do the query only once and hold onto the results. Mobile OSs have extra constraints in batteries and space that desktop software doesn't have. Basically, a mobile OS makes bullet #4 much more important.
Every time you add another retrieval call from SQL, you make calls to external libraries more often, which means you are probably running longer, allocating and releasing more memory more often (which adds fragmentation), and possibly causing the database to be re-read from Flash memory. It's most likely a lot better to hold onto the data once you have it, assuming that you can (see bullet #2).
A: It's easier to figure out the answer to this question when you think about datasets as being a "session" of data. You fill the datasets; you work with them; and then you put the data back or discard it when you're done. So you need to ask questions like this:
*
*How current does the data need to be? Do you always need to have the very very latest, or will the database not change that frequently?
*What are you using the data for? If you're just using it for reports, then you can easily fill a dataset, run your report, then throw the dataset away, and next time just make a new one. That'll give you more current data anyway.
*Just how much data are we talking about? You've said you're working with a relatively small dataset, so there's not a major memory impact if you load it all in memory and hold it there forever.
Since you say it's a single-user app without a lot of data, I think you're safe loading everything in at the beginning, using it in your datasets, and then updating on close.
The main thing you need to be concerned with in this scenario is: What if the app exits abnormally, due to a crash, power outage, etc.? Will the user lose all his work? But as it happens, datasets are extremely easy to serialize, so you can fairly easily implement a "save every so often" procedure to serialize the dataset contents to disk so the user won't lose a lot of work.
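A minimal sketch of that autosave idea (path and method names are illustrative):
// Persist the working DataSet; WriteSchema keeps column types so ReadXml can restore them
private void SaveSnapshot(DataSet ds, string path)
{
    ds.WriteXml(path, XmlWriteMode.WriteSchema);
}

// On startup, reload the last snapshot if one exists
private DataSet LoadSnapshot(string path)
{
    DataSet ds = new DataSet();
    if (File.Exists(path))
        ds.ReadXml(path, XmlReadMode.ReadSchema);
    return ds;
}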
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: If it is decided that our system needs an overhaul, what is the best way to go about it? We are mainting a web application that is built on Classic ASP using VBScript as the primary language. We are in agreement that our backend (framework if you will) is out dated and doesn't provide us with the proper tools to move forward in a quick manner. We have pretty much embraced the current webMVC pattern that is all over the place, and cannot do it, in a reasonable manner, with the current technology. The big missing features are proper dispatching and templating with inheritance, amongst others.
Currently there are two paths being discussed:
*
*Port the existing application to Classic ASP using JScript, which will allow us to hopefully go from there to .NET MSJscript without too much trouble, and eventually end up on the .NET platform (preferably the MVC stuff will be done by then, ASP.NET isn't much better than were we are on now, in our opinions). This has been argued as the safer path with less risk than the next option, albeit it might take slightly longer.
*Completely rewrite the application using some other technology, right now the leader of the pack is Python WSGI with a custom framework, ORM, and a good templating solution. There is wiggle room here for even django and other pre-built solutions. This method would hopefully be the quickest solution, as we would probably run a beta beside the actual product, but it does have the potential for a big waste of time if we can't/don't get it right.
This does not mean that our logic is gone, as what we have built over the years is fairly stable, as noted just difficult to deal with. It is built on SQL Server 2005 with heavy use of stored procedures and published on IIS 6, just for a little more background.
Now, the question. Has anyone taken either of the two paths above? If so, was it successful, how could it have been better, etc. We aren't looking to deviate much from doing one of those two things, but some suggestions or other solutions would potentially be helpful.
A: Don't throw away your code!
It's the single worst mistake you can make (on a large codebase). See Things You Should Never Do, Part 1.
You've invested a lot of effort into that old code and worked out many bugs. Throwing it away is a classic developer mistake (and one I've done many times). It makes you feel "better", like a spring cleaning. But you don't need to buy a new apartment and all new furniture to outfit your house. You can work on one room at a time... and maybe some things just need a new paintjob. Hence, this is where refactoring comes in.
For new functionality in your app, write it in C# and call it from your classic ASP. You'll be forced to be modular when you rewrite this new code. When you have time, refactor parts of your old code into C# as well, and work out the bugs as you go. Eventually, you'll have replaced your app with all new code.
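One common way to wire that up is COM interop: put the new C# logic in a ComVisible class, register the assembly with regasm, and create it from the old ASP pages. A rough sketch (all names are illustrative):
using System.Runtime.InteropServices;

namespace MyLib
{
    [ComVisible(true)]
    [ClassInterface(ClassInterfaceType.AutoDual)]
    public class OrderCalculator
    {
        public double Total(double subtotal, double taxRate)
        {
            return subtotal * (1 + taxRate);
        }
    }
}
Then on the classic ASP side, after regasm /codebase MyLib.dll:
Set calc = Server.CreateObject("MyLib.OrderCalculator")
Response.Write calc.Total(100, 0.2)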
You could also write your own compiler. We wrote one for our classic ASP app a long time ago to allow us to output PHP. It's called Wasabi and I think it's the reason Jeff Atwood thought Joel Spolsky went off his rocker. Actually, maybe we should just ship it, and then you could use that.
It allowed us to switch our entire codebase to .NET for the next release while only rewriting a very small portion of our source. It also caused a bunch of people to call us crazy, but writing a compiler is not that complicated, and it gave us a lot of flexibility.
Also, if this is an internal only app, just leave it. Don't rewrite it - you are the only customer and if the requirement is you need to run it as classic asp, you can meet that requirement.
A: Use this as an opportunity to remove unused features! Definitely go with the new language. Call it 2.0. It will be a lot less work to rebuild the 80% of it that you really need.
Start by wiping your brain clean of the whole application. Sit down with a list of its overall goals, then decide which features are needed based on which ones are used. Then redesign it with those features in mind, and build.
(I love to delete code.)
A: It works out better than you'd believe.
Recently I did a large reverse-engineering job on a hideous old collection of C code. Function by function I reallocated the features that were still relevant into classes, wrote unit tests for the classes, and built up what looked like a replacement application. It had some of the original "logic flow" through the classes, and some classes were poorly designed [Mostly this was because of a subset of the global variables that was too hard to tease apart.]
It passed unit tests at the class level and at the overall application level. The legacy source was mostly used as a kind of "specification in C" to ferret out the really obscure business rules.
Last year, I wrote a project plan for replacing 30-year old COBOL. The customer was leaning toward Java. I prototyped the revised data model in Python using Django as part of the planning effort. I could demo the core transactions before I was done planning.
Note: It was quicker to build a the model and admin interface in Django than to plan the project as a whole.
Because of the "we need to use Java" mentality, the resulting project will be larger and more expensive than finishing the Django demo. With no real value to balance that cost.
Also, I did the same basic "prototype in Django" for a VB desktop application that needed to become a web application. I built the model in Django, loaded legacy data, and was up and running in a few weeks. I used that working prototype to specify the rest of the conversion effort.
Note: I had a working Django implementation (model and admin pages only) that I used to plan the rest of the effort.
The best part about doing this kind of prototyping in Django is that you can mess around with the model, unit tests and admin pages until you get it right. Once the model's right, you can spend the rest of your time fiddling around with the user interface until everyone's happy.
A: Whatever you do, see if you can manage to follow a plan where you do not have to port the application all in one big bang. It is tempting to throw it all away and start from scratch, but if you can manage to do it gradually the mistakes you do will not cost so much and cause so much panic.
A: Half a year ago I took over a large web application (fortunately already in Python) which had some major architectural deficiencies (templates and code mixed, code duplication, you name it...).
My plan is to eventually have the system respond to WSGI, but I am not there yet. I found the best way to do it is in small steps. Over the last 6 months, code reuse has gone up and progress has accelerated.
General principles which have worked for me:
*
*Throw away code which is not used or commented out
*Throw away all comments which are not useful
*Define a layer hierarchy (models, business logic, view/controller logic, display logic, etc.) for your application. This does not have to be a very clear-cut architecture, but rather should help you think about the various parts of your application and help you better categorize your code.
*If something grossly violates this hierarchy, change the offending code. Move the code around, recode it at another place, etc. At the same time adjust the rest of your application to use this code instead of the old one. Throw the old one away if not used anymore.
*Keep your APIs simple!
Progress can be painstakingly slow, but should be worth it.
A: I would not recommend JScript as that is definitely the road less traveled.
ASP.NET MVC is rapidly maturing, and I think that you could begin a migration to it, simultaneously ramping up on the ASP.NET MVC framework as its finalization comes through.
Another option would be to use something like ASP.NET w/Subsonic or NHibernate.
A: Don't try to go 2.0 (more features than currently exist or are scheduled); instead build your new platform with the intent of resolving the current issues with the code base (maintainability/speed/wtf) and go from there.
A: A good place to begin if you're considering the move to Python is to rewrite your administrator interface in Django. This will help you get some of the kinks worked out in terms of getting Python up and running with IIS (or to migrate it to Apache). Speaking of which, I recommend isapi-wsgi. It's by far the easiest way to get up and running with IIS.
A: I agree with Michael Pryor and Joel that it's almost always a better idea to continue evolving your existing code base rather than re-writing from scratch. There are typically opportunities to just re-write or re-factor certain components for performance or flexibility.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: SharePoint custom content feature with Word Quick Parts and Document Information Panel I am creating a custom content type feature for MOSS that will also include a Word 2007 document as the document template. The same Word document will also have a Document Information Panel (DIP) and Quick Parts for all the fields in the content type.
The problem is that when my feature is deployed the Word document's Quick Parts no longer seem bound to the content type's columns in the Document Library. For example, if you:
*
*Type a value into the Quick Part
*Save the Word document to the document library
*Look at the document's properties;
The value just typed is not listed. However if you use the DIP to specify the value (instead of the quick part) and then save it, it does get saved as metadata.
The "Document Information Panel Settings" screen for my content type is acting as if there is no InfoPath template. Sure enough if I re-upload (or create a new) InfoPath template, then the above problem goes away.
How do I get this to work in my feature without having to do the manual step described above?
A: It may be possible to define a custom template for the DIP and deploy that to the site, setting the content type to link to that template.
A: I found a solution in a blog, but you have to use InfoPath... Here is the link:
Using SharePoint Metadata in Word Documents – The Lookup Column
http://vspug.com/maartene/2009/03/13/using-sharepoint-metadata-in-word-documents-the-lookup-column-issue/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Excel column names What column names cannot be used when creating an Excel spreadsheet with ADO?
I have a statement that creates a page in a spreadsheet:
CREATE TABLE [TableName] (Column string, Column2 string);
I have found that using a column name of Date or Container will generate an error when the statement is executed.
Does anyone have a complete (or partial) list of words that cannot be used as column names? This is for use in a user-driven environment and it would be better to "fix" the columns than to crash.
My work-around for these is to replace any occurrences of Date or Container with Date_ and Container_ respectively.
A: Here are the reserved words for MS Query:
http://support.microsoft.com/kb/125948
Cell naming rules:
http://ezinearticles.com/?Rules-For-Naming-Cells-in-Microsoft-Excel&id=218607
A: It seems more like an issue with SQL reserved words. This is a good list
A: You can use brackets for any fieldname, e.g.:
CREATE TABLE [TableName] ([Date] string, [Container] string)
Full example:
using (OleDbConnection conn = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\\temp\\test.xls;Extended Properties='Excel 8.0;HDR=Yes'"))
{
conn.Open();
OleDbCommand cmd = new OleDbCommand("CREATE TABLE [TableName] ([Date] string, [Container] string)", conn);
cmd.ExecuteNonQuery();
}
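The brackets are needed in later statements too, not just in CREATE TABLE. For example, inserting into those reserved-word columns inside the same using block as above (values are placeholders; OLE DB parameters are positional):
OleDbCommand insert = new OleDbCommand(
    "INSERT INTO [TableName] ([Date], [Container]) VALUES (?, ?)", conn);
insert.Parameters.AddWithValue("?", "2008-09-17");
insert.Parameters.AddWithValue("?", "Box 1");
insert.ExecuteNonQuery();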
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Making WCF easier to configure I have a set of WCF web services connected to dynamically by a desktop application.
My problem is the really detailed config settings that WCF requires to work. Getting SSL to work involves custom settings. Getting MTOM or anything else to work requires more. You want compression? Here we go again...
WCF is really powerful - you can use a host of different ways to connect, but all seem to involve lots of detailed config. If host and client don't match perfectly you get hard to decipher errors.
I want to make the desktop app far easier to configure - ideally some kind of auto-discovery. The users of the desktop app should just be able to enter the URL and it do the rest.
Does anyone know a good way to do this?
I know Visual Studio can set the config up for you, but I want the desktop app to be able to do it based on a wide variety of different server set-ups.
I know that VS's tools can be used externally, but I'm looking for users of the desktop apps to not have to be WCF experts. I know MS made this intentionally over complicated.
Is there any way, mechanism, 3rd party library or anything to make auto-discovery of WCF settings possible?
A: All information about the endpoint is available in the metadata of a service; you can write a client that will explore the metadata of the service and configure the client accordingly. For a code example you can look into this excellent Mex Explorer from Juval Lowy.
A: Thanks, that was useful code (+1).
It's more than a little bit messy though, has some bugs (case sensitive checks that shouldn't be, for instance), has a load of UI functionality that I don't need and repeats a lot of code.
I've taken from it the actual discovery mechanism, re-wrote it and almost got it working (connects, but needs some finessing).
First some util functions used by the main method:
/// <summary>If the url doesn't end with a WSDL query string append it</summary>
static string AddWsdlQueryStringIfMissing( string input )
{
return input.EndsWith( "?wsdl", StringComparison.OrdinalIgnoreCase ) ?
input : input + "?wsdl";
}
/// <summary>Imports the meta data from the specified location</summary>
static ServiceEndpointCollection GetEndpoints( BindingElement bindingElement, Uri address, MetadataExchangeClientMode mode )
{
CustomBinding binding = new CustomBinding( bindingElement );
MetadataSet metadata = new MetadataExchangeClient( binding ).GetMetadata( address, mode );
return new WsdlImporter( metadata ).ImportAllEndpoints();
}
Then a method that tries different ways to connect and returns the endpoints:
public static ServiceEndpointCollection Discover( string url )
{
Uri address = new Uri( url );
ServiceEndpointCollection endpoints = null;
if ( string.Equals( address.Scheme, "http", StringComparison.OrdinalIgnoreCase ) )
{
var httpBindingElement = new HttpTransportBindingElement();
//Try the HTTP MEX Endpoint
try { endpoints = GetEndpoints( httpBindingElement, address, MetadataExchangeClientMode.MetadataExchange ); }
catch { }
//Try over HTTP-GET
if ( endpoints == null )
endpoints = GetEndpoints( httpBindingElement,
new Uri( AddWsdlQueryStringIfMissing( url ) ), MetadataExchangeClientMode.HttpGet );
}
else if ( string.Equals( address.Scheme, "https", StringComparison.OrdinalIgnoreCase ) )
{
var httpsBindingElement = new HttpsTransportBindingElement();
//Try the HTTPS MEX Endpoint
try { endpoints = GetEndpoints( httpsBindingElement, address, MetadataExchangeClientMode.MetadataExchange ); }
catch { }
//Try over HTTP-GET
if ( endpoints == null )
endpoints = GetEndpoints( httpsBindingElement,
new Uri( AddWsdlQueryStringIfMissing( url ) ), MetadataExchangeClientMode.HttpGet );
}
else if ( string.Equals( address.Scheme, "net.tcp", StringComparison.OrdinalIgnoreCase ) )
endpoints = GetEndpoints( new TcpTransportBindingElement(),
address, MetadataExchangeClientMode.MetadataExchange );
else if ( string.Equals( address.Scheme, "net.pipe", StringComparison.OrdinalIgnoreCase ) )
endpoints = GetEndpoints( new NamedPipeTransportBindingElement(),
address, MetadataExchangeClientMode.MetadataExchange );
return endpoints;
}
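Typical usage of the discovered endpoints then looks something like this (the contract interface is whatever your client already compiles against; names are illustrative):
ServiceEndpointCollection endpoints = Discover("http://example.com/MyService.svc/mex");
ServiceEndpoint endpoint = endpoints[0]; // or pick one by endpoint.Contract.Name
var factory = new ChannelFactory<IMyContractInterface>(endpoint.Binding, endpoint.Address);
IMyContractInterface client = factory.CreateChannel();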
A: There is now another way to do this that wasn't available when I asked the original question. Microsoft now supports REST for WCF services.
*
*The downside of using REST is that you lose the WSDL.
*The upside is minimal config and your WCF contract interfaces will still work!
You'll need a new reference to System.ServiceModel.Web
Mark your operations with either WebInvoke or WebGet
//get a user - note that this can be cached by IIS and proxies
[WebGet]
User GetUser(string id)

//post changes to a user
[WebInvoke]
void SaveUser(string id, User changes)
Adding these to a site is easy - add a .svc file:
<%@ServiceHost
Service="MyNamespace.MyServiceImplementationClass"
Factory="System.ServiceModel.Activation.WebServiceHostFactory" %>
The factory line tells ASP.net how to activate the endpoint - you need no server side config at all!
Then constructing your ChannelFactory is pretty much unchanged, except that you don't need to specify an endpoint any more (or auto-discover one as I have in the other answers)
var cf = new WebChannelFactory<IMyContractInterface>();
var binding = new WebHttpBinding();
cf.Endpoint.Binding = binding;
cf.Endpoint.Address = new EndpointAddress(new Uri("http://mywebsite.com/myservice.svc"));
cf.Endpoint.Behaviors.Add(new WebHttpBehavior());
IMyContractInterface wcfClient = cf.CreateChannel();
var usr = wcfClient.GetUser("demouser");
// and so on...
Note that I haven't specified or discovered the client config - there's no local config needed!
Another big upside is that you can easily switch to JSON serialisation - that allows the same WCF services to be consumed by Java, ActionScript, Javascript, Silverlight or anything else that can handle JSON and REST easily.
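For example, switching an operation over to JSON is just attribute properties on the contract (UriTemplate shown as well, since it usually goes hand in hand; names are illustrative):
[WebGet(UriTemplate = "users/{id}", ResponseFormat = WebMessageFormat.Json)]
User GetUser(string id);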
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: In Flex/AS3, how do I get a class definition of an embedded asset with getDefinitionByName I have a class with many embedded assets.
Within the class, I would like to get the class definition of an asset by name. I have tried using getDefinitionByName(), and also ApplicationDomain.currentDomain.getDefinition() but neither work.
Example:
public class MyClass
{
[Embed(source="images/image1.png")] private static var Image1Class:Class;
[Embed(source="images/image2.png")] private static var Image2Class:Class;
[Embed(source="images/image3.png")] private static var Image3Class:Class;
private var _image:Bitmap;
public function MyClass(name:String)
{
var ClassDef:Class = getDefinitionByName(name) as Class; //<<-- Fails
_image = new ClassDef() as Bitmap;
}
}
var cls:MyClass = new MyClass("Image1Class");
A: This doesn't answer your question, but it may solve your problem. I believe doing something like this should work:
public class MyClass
{
[Embed(source="images/image1.png")] private static var Image1Class:Class;
[Embed(source="images/image2.png")] private static var Image2Class:Class;
[Embed(source="images/image3.png")] private static var Image3Class:Class;
private var _image:Bitmap;
public function MyClass(name:String)
{
_image = new this[name]() as Bitmap;
}
}
var cls:MyClass = new MyClass("Image1Class");
I'm having a tough time remembering if bracket notation works on sealed classes. If it doesn't, a simple solution is to mark the class as dynamic.
A: The reason your method does not work is because "Image1Class" is a variable name, not the actual Class name.
You can get the class name like this
import flash.utils.getQualifiedClassName;
trace(getQualifiedClassName(Image1Class));
Which as you may see, means your class name (the one that should be passed into the function) is something like MyClass_Image1Class.
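So if you still want to go through getDefinitionByName, feed it the qualified name. A sketch (the exact generated name depends on where the Embed is declared):
import flash.display.Bitmap;
import flash.utils.getDefinitionByName;
import flash.utils.getQualifiedClassName;

var qualifiedName:String = getQualifiedClassName(Image1Class); // e.g. "MyClass_Image1Class"
var ClassDef:Class = getDefinitionByName(qualifiedName) as Class;
var image:Bitmap = new ClassDef() as Bitmap;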
A: You don't need to use any fancy getDefinitionByName() methods, simply refer to it dynamically. In your case, replace the 'Fails' line with:
var classDef:Class = MyClass[name] as Class;
And that should do it.
A: Thank you so much! I just spent almost 5 hours trying to get the POS getDefinitionByName to work with getQualifiedClassName, and I was ready to throw stuff!! My final working code looks like this and even gets the string name from an array.
CreatureParam is a 2-dimensional array of strings.
Type is an integer that is sent to Flash by an HTML tag, which in turn comes from a MySQL database via PHP.
Mark1_cb is a combobox that is on the stage and has an instance name. Its output is also an integer.
So this code directly below imports the class "BirdBodyColor_mc" from an external swf, "ArtLibrary.swf". BirdBodyColor_mc is a movieclip created from a png image. Note that you must double-click the movieclip in ArtLibrary.fla and insert a second key frame. Movieclips apparently need two frames or Flash tries to import them as sprites, which causes a type mismatch.
[Embed(source="ArtLibrary.swf", symbol="BirdBodyColor_mc")]
var BirdBodyColor_mc:Class;
Normally I would put an instance of this movieclip class on the stage by using this code.
var myMC:MovieClip = new BirdBodyColor_mc();
addChild(myMC);
var Definition:Class = this["BirdBodyColor_mc"] as Class;
var Mark1:MovieClip = new Definition();
But I need to do this using a string value looked up in my array. So here is the code for that.
var Definition:Class = this[CreatureParam[Type][Mark1_cb + 2]] as Class;
var Mark1:MovieClip = new Definition();
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: What is your preferred way to produce charts in a Ruby on Rails web application? I'd like to add some pie, bar, and scatter charts to my Ruby on Rails web application. I want them to be attractive, easy to add, and not introduce much overhead.
What charting solution would you recommend?
What are its drawbacks (requires Javascript, Flash, expensive, etc)?
A: It requires flash and isn't free (though inexpensive): amcharts.
I've used it successfully and like it. I evaluated a number of options a while back and chose it. At the time, however, Google Charts wasn't as mature as it seems to be now. I would consider that first if I were to re-evaluate now.
A: There's also Scruffy. I took a look at the code recently and it seemed easy to modify/extend. It produces svg and (by conversion) png.
A: Have you tried the Google Charts API? Web service APIs don't really come much simpler. It's free to use, simple to implement, and the charts don't look too shoddy.
A: Open Flash Chart II is a free option that gives very nice output. It does, as you'd expect, require Flash.
Fusion Charts is even nicer, but is $499. In researching this, I found a cut-down free version that might serve your needs.
A: I second the vote for flot. The latest version lets you do animations and interactions that I previously thought would only be possible via Flash, and the documentation is fantastic. It's simple to write by hand, but for simple cases it gets even easier with a Rails plugin called flotilla. Check out the examples page for a better idea of what it's capable of; the zooming and hover capabilities are especially impressive.
A: Google Charts is an excellent choice if you don't want to use Flash. It's pretty easy to use on its own, but for Rails, it's even easier with the gchartrb gem. An example:
GoogleChart::PieChart.new('320x200', "Things I Like To Eat", false) do |pc|
pc.data "Broccoli", 30
pc.data "Pizza", 20
pc.data "PB&J", 40
pc.data "Turnips", 10
puts pc.to_url
end
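The to_url call returns a chart.apis.google.com image URL, so the result can be dropped straight into an image tag in a view.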
A: The new Google Visualization API appears to produce charts that are more varied, better looking, and more interactive than the Google Charts image API.
http://code.google.com/apis/visualization/
A: Morris.js is nice and open source; I would choose it over Highcharts. There is a great new video tutorial from RailsCasts.
A: I've just found that ZiYa produces some really sexy charts and is Rails-specific.
The downsides are that it uses Flash, and that it costs $50 per site if you don't want your site to link to the XML/SWF page.
[I've not decided on it yet, but wanted to throw it out there in case people want to vote it up]
A: I've used Fusion Charts extensively from within a Java web application, but it should work the same way from Rails since you're just embedding a Flash via HTML or JavaScript and passing it XML data. It's a slick package and their support has always been very responsive.
A: You should take a look at Dmitry Baranovskiy's Javascript library called Raphaël.
A: Google Charts is very nice, but it's not a Rails-only solution. You simply use the programming language of your choice to dynamically produce URLs that contain the data, and Google returns a nice image with your chart.
http://code.google.com/apis/chart/
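A minimal Ruby sketch of that approach (the data is illustrative; the parameter names follow the classic image-chart API):
require 'uri'

# Build a pie-chart URL by hand: cht = chart type, chs = size,
# chd = data series (text encoding), chl = slice labels
data = { "Broccoli" => 30, "Pizza" => 20, "PB&J" => 40, "Turnips" => 10 }
params = {
  "cht" => "p",
  "chs" => "320x200",
  "chd" => "t:#{data.values.join(',')}",
  "chl" => data.keys.join("|")
}
url = "http://chart.apis.google.com/chart?" + URI.encode_www_form(params)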
A: In the old days, I decided to roll my own (using RVG/RMagick), mainly because Gruff didn't have everything I wanted. The downside was that finding and eliminating all the bugs in graphing code is a pain. These days Gruff is my choice as it's really gone forward in terms of customization and flexibility.
The standard Gruff templates/color choices suck though, so you'll need to get your hands dirty for best results.
A: Regarding amCharts, there's a "free" version with only a few restrictions; it generates Flash charts that include a 'chart by amCharts.com' attribution.
There's also a nice plugin, ambling, that provides helper methods to easily add charts to your views. Note that the amCharts.com reference documentation is still a must for tailoring the chart to your requirements.
A: GoogleCharts and Gruff charts are great, but sometimes they lack some features that I need for more scientific plotting. There is a gem for gnuplot which may be helpful for some of these situations.
http://rgplot.rubyforge.org/
A: I have started using Protovis to generate SVG charts with JavaScript. My basic approach in Rails is to have a controller that returns the data to be charted as JSON, then pick it up with a bit of JavaScript and Protovis.
The only downside is that full IE support (since it is based on SVG) is currently unavailable out of the box. However, current patches go a fair way toward providing IE support; details can be found here.
A: If you don't need images, and can settle on requiring JavaScript, you could try a client-side solution like the jQuery plugin flot.
A: I am a fan of Gruff Graphs, but Google Charts is also good if you don't mind relying on an external server.
A: I personally prefer JavaScript-based charts over Flash. If that's ok, also check out High Charts. A Rails plugin is also available.
A: The gchartrb gem is no longer maintained, it seems. The author points to these gems:
*
*googlecharts
*gchart (seems abandoned as well)
A: We do this by shelling out to gnuplot to generate the charts as PNGs server-side. It's a bit old-school and the charts aren't interactive, but it works and is cacheable.
(The other reason we do this is so we can put exactly the same chart in the PDF version of the report).
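A rough sketch of that setup (the gnuplot script, data file, and output path are illustrative, and gnuplot is assumed to be on the PATH):
require 'open3'

# Drive gnuplot over stdin and render the chart as a PNG
script = <<~GNUPLOT
  set terminal png size 640,480
  set output 'public/charts/report.png'
  plot 'tmp/report.dat' using 1:2 with lines title 'visits'
GNUPLOT

Open3.popen3("gnuplot") do |stdin, _stdout, stderr, wait_thr|
  stdin.write(script)
  stdin.close
  raise "gnuplot failed: #{stderr.read}" unless wait_thr.value.success?
end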
A: This isn't specifically RoR; however, it is a pretty slick port of Gruff to JavaScript: http://bluff.jcoglan.com/
A: ChartDirector. Ugly API, but good server-side image results. Self-contained binary.
A: FWIW, I'm not a fan of using Google Charts when fit and finish is important. I find that the variables for sizing, in particular, are unpredictable; the chart does its own thing.
I haven't yet played with Gruff/Bluff/etc., but for a higher-profile project I won't use Google Charts.
A: If you want quite sexy charts that are easy to generate, and you can count on Flash being enabled, then you should definitely have a look at the maani.us XML/SWF charts.
Put some XML builder behind it and you're ready to go.
A: FusionCharts is a very good charting product. It works well with RoR, and their support and forums are good. The free version of this product has a limited number of charts and features, but no watermark.
A: I just started using googlecharts for my Rails 3 project. It is nice and clean, and seems to be the only Google Visualization API-based gem that is still alive; the others are inactive and mostly use the old Google Charts API (released somewhere in 2007-2008).
https://github.com/mattetti/googlecharts
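A small sketch with that gem (the data, labels, and size are illustrative; the call returns a chart URL):
require 'gchart'

# Gchart assembles the Google Charts URL and handles parameter encoding
url = Gchart.pie(
  :data   => [30, 20, 40, 10],
  :labels => ["Broccoli", "Pizza", "PB&J", "Turnips"],
  :size   => "320x200"
)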
A: D3 has become my preferred way to add great-looking charts to web apps. You have to do a little more work than with some other frameworks, but the appearance and control outweigh that.
I primarily use SVG, which means no IE8, but that is becoming less of an issue.
A: Highcharts - a charting library written in pure JavaScript.
Gems like highcharts-rails and lazy_high_charts make the integration with Rails easier.
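For instance, a minimal sketch with lazy_high_charts (the series data and DOM id are illustrative):
# In the controller
@chart = LazyHighCharts::HighChart.new("graph") do |f|
  f.title(:text => "Signups per month")
  f.series(:name => "2012", :data => [3, 20, 3, 5, 4])
end

# In the view: <%= high_chart("signups_chart", @chart) %>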
A: gem 'chart' makes it easy to add ChartJS and NVD3 charts to Rails.
A: My own option, for people who need support for multiple chart types plus Rails helpers to simplify integration: https://github.com/railsjazz/rails_charts
It's based on Apache ECharts.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/87561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "76"
}
|